I built an app that displays time series data. The display is very similar to a Yahoo Finance chart (e.g. https://finance.yahoo.com/quote/AMZN/). I am using Postgres to store the data because that was the database available to me when I built the app. The issue I am having is scaling across the different time frames (1 day vs. 1 month vs. max, etc.). The raw data is saved at minute resolution in the schema below. app_id is an id the app manages, price is hopefully obvious, and time is milliseconds since epoch.
 column | type
--------+------------------
 app_id | bigint
 price  | double precision
 time   | bigint
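For reference, a table with this shape could be created with something like the following (my_table is just the name I use in the query below; column comments are my own description):

```sql
CREATE TABLE my_table (
    app_id bigint,            -- sequential id managed by the app
    price  double precision,  -- price at that minute
    time   bigint             -- milliseconds since epoch
);
```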
The problem is that when someone clicks 5 days (5D in the Yahoo chart), for example, I can't load all 7200 data points (the number of minutes in 5 days) into the front end; I need to scale the data down to, say, 60 points. That means I need every 120th data point, which I accomplish with MOD(app_id, 120). So my query looks something like:
SELECT price
FROM my_table
WHERE app_id > (SELECT count(*) FROM my_table) - 7200
  AND MOD(app_id, 120) = 0;
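To illustrate the endpoint problem I describe below, here is a small sketch (plain Python, with hypothetical sequential ids standing in for app_id) showing that with 7201 rows the MOD filter drops both the first and the last point of the window:

```python
# Simulate 7201 minute-resolution rows with sequential app_ids 1..7201.
total = 7201
window = 7200  # last 5 days of minutes
step = 120     # keep every 120th row to get ~60 points

ids = list(range(1, total + 1))
recent = [i for i in ids if i > total - window]   # mirrors the count(*) subquery
sampled = [i for i in recent if i % step == 0]    # mirrors MOD(app_id, 120) = 0

print(len(sampled))           # 60 bucketed points
print(recent[0] in sampled)   # False: first point of the window (id 2) is dropped
print(recent[-1] in sampled)  # False: last point (id 7201) is dropped too
```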
However, I am wondering if there is a more efficient or scalable way to handle this. In order to guarantee I have the first and last data points, I have to grab them separately, because if there are 7201 data points in the table, that query will not grab the first one. Also, if there are gaps in the saved data that I have to fill manually, for whatever reason, all the ids that come after the gap have to be updated. So my question boils down to:
Is there a way to do this with the timestamps themselves, or a simpler way than what I am currently doing with MOD?
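For what it's worth, the direction I had in mind is something like the sketch below, bucketing on the time column itself (120 minutes = 7,200,000 ms; 5 days = 432,000,000 ms). Note this averages the prices within each bucket rather than picking every 120th row, and I have not verified whether it is actually more efficient:

```sql
-- One row per 120-minute bucket over the last 5 days,
-- grouping on the millisecond timestamp instead of app_id.
SELECT time - MOD(time, 7200000) AS bucket,   -- start of the 120-min bucket
       avg(price)                AS price
FROM my_table
WHERE time > (extract(epoch FROM now()) * 1000)::bigint - 432000000
GROUP BY bucket
ORDER BY bucket;
```

Would something along these lines be the right approach, or is there a more idiomatic Postgres way?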