I have an intuitive concept of systems that grow steadily in complexity as they compute. I can think of some examples:
Nature. Given the physical laws, some initial conditions (say, planet Earth at the dawn of life) and enough time, the system eventually grows in complexity, all the way from dinosaurs fighting over territory to humans building castles.
Cellular automata. Given a set of automaton rules, some initial conditions (a grid with specific cells set) and enough time, some automata eventually evolve complex structures.
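To make that item concrete, here is a minimal sketch (my own illustration, not tied to any specific source) of an elementary cellular automaton, Rule 110, which is known to be Turing-complete and to develop persistent interacting structures from a single live cell. The function name `step` and the grid width are arbitrary choices for the example.

```python
def step(cells, rule=110):
    """Apply one update of an elementary cellular automaton.

    Each cell's next state is the bit of `rule` indexed by the
    3-cell neighborhood (left, center, right), with wrap-around
    boundaries.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge over time.
cells = [0] * 63 + [1]
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Running this prints a triangle of nested, self-similar patterns growing out of one cell, which is the kind of "complexity from simple rules" the item above refers to, although even Rule 110 is not known to grow in complexity indefinitely.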
AI applications such as genetic programming (to some extent). The issue is that most of these evolve toward a specific goal, and they often reach a tipping point and stop.
The kind of system I'm talking about is supposed to grow in complexity indefinitely. That is, one can expect that, given enough time, such a system will eventually develop complex structures, such as "lifeforms" that defend their own existence. It is also not goal-oriented: it grows in complexity because its internal structures are more stable when they are more complex. We know that nature/physical laws satisfy that criterion, but I'm not sure about cellular automata or genetic programming.

This is a very vague concept and hard to define (there is no accepted definition of complexity, after all). Still, is there any name/formalization for this kind of system? Do we know of any examples of systems/specific automata that, as far as our observations go, never reach a "tipping point" where they stop becoming more complex?