Justin Alsing’s Post


Founder at Calda AI | Physicist | Machine Learning Researcher

Large time-series models (LTMs) exhibit power-law scaling behaviour similar to that of LLMs. We just put out a paper (https://lnkd.in/givY528D) establishing power-law scaling laws for large time-series models as a function of data, compute, and model size. Analogous scaling laws for LLMs (from the landmark Kaplan et al. paper, https://lnkd.in/g9KHYN9u) have provided key guidance for allocating enormous resources toward predictable, and eventually breakthrough, performance gains. Demonstrating similarly favourable scaling behaviour for large time-series models provides both motivation and a guide in the pursuit of foundation models for time-series forecasting. Foundation models for time series are coming (with enough data and compute). Thanks to Thomas Edwards, James Alvey, Benjamin Wandelt and Nam Nguyen for the hard work!
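To make the idea concrete, here is a minimal sketch (not code from the paper) of what a power-law scaling law looks like in practice: a loss of the form L(N) = a · N^(−α) is linear in log-log space, so the exponent can be recovered with an ordinary least-squares fit. The data below are synthetic and purely illustrative.

```python
import numpy as np

def fit_power_law(n, loss):
    """Fit loss = a * n**(-alpha) by linear regression in log-log space.

    Returns (a, alpha). Illustrative only -- not the estimation
    procedure used in the paper.
    """
    slope, intercept = np.polyfit(np.log(n), np.log(loss), deg=1)
    return np.exp(intercept), -slope

# Synthetic "model size vs. loss" data following L(N) = 5 * N^(-0.3)
n = np.logspace(5, 9, num=20)   # hypothetical parameter counts, 1e5..1e9
loss = 5.0 * n ** -0.3

a, alpha = fit_power_law(n, loss)
print(round(a, 3), round(alpha, 3))  # recovers a = 5.0, alpha = 0.3
```

The same log-log fit applies whether N is model size, dataset size, or training compute; the fitted exponent is what lets you extrapolate performance to budgets you have not yet run.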


Thanks for sharing! That's interesting and promising. We also proposed work on scaling laws for time-series forecasting (https://arxiv.org/pdf/2405.15124) at around the same time 😁. We believe these two works are complementary rather than substitutive: they provide insights into scaling laws for time-series models from different perspectives.
