...if we have a pinhole camera model several parameters describe the specific camera (such as aspect-ratio, focal length, principal point, distortion parameters etc).
"Distortion parameters" does not sound like a typical pinhole camera model. A pinhole camera does not have a "finite apperture" or way of focusing light other than a tiny little opening which all rays of light (that will go on to produce the image) go through. You can take lens distortions into account by adding them to the pinhole model as additional operations but that then is not the standard pinhole camera model.
The pinhole camera model is a very simple transformation of a point from a global 3D coordinate system to a local 2D coordinate system. It does nothing more than that: it maps the angle at which a point appears within the field of view to a point on the focusing plane. Another useful way to think about lenses is as mappings from points in front of them to points behind them.
The pinhole camera model is the simplest of those transformations and requires:
- The position of the camera in global coordinates (where is it?)
- The orientation of the camera (where does it point?)
- The focal length
- The sensor size
- The principal point
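The parameters above can be put together in a minimal sketch (function and variable names are my own, and lens distortion is deliberately left out, as discussed above):

```python
import numpy as np

def project_pinhole(X_world, R, t, f, c):
    """Project a 3D world point to 2D pixel coordinates with the
    basic pinhole model (no distortion).

    R, t : orientation (3x3 rotation) and position of the camera (extrinsics)
    f    : focal length in pixels (square pixels assumed)
    c    : principal point (cx, cy)
    """
    # World -> camera coordinates: where is the camera, where does it point
    X_cam = R @ X_world + t
    x, y, z = X_cam
    # Perspective divide: the angle of the incoming ray fixes x/z and y/z,
    # and the focal length scales that ratio onto the sensor
    return np.array([f * x / z + c[0], f * y / z + c[1]])

# A point 2 m straight ahead of a camera at the origin, looking down +z:
p = project_pinhole(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3),
                    f=800.0, c=(320.0, 240.0))
# it lands exactly on the principal point
```

The sensor size enters implicitly: a projected point is visible only if it falls within the pixel bounds of the sensor.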
The focal length in particular is exactly the "thing" that creates this simple map: the shorter the focal length, the wider the field of view within which points get mapped to the plane behind the pinhole, and vice versa.
But notice here: rays in front of the pinhole enter it at some angle and leave directly behind the pinhole at exactly the same angle.
I was wondering, however: if I replaced the pinhole camera with a spherical camera (such as a 360 camera, for example), what parameters describe such a camera?
A pinhole 360 camera is a pinhole camera with a 360-degree field of view. It has exactly the same set of parameters, but now, in practice, rays can enter the pinhole from anywhere around it and depart at the same angle behind it. Notice that in this view the "focal length" is still valid as a ratio of lengths.
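One common way to parameterise such a camera is the equirectangular projection used by most 360 cameras: a ray direction maps to longitude/latitude, which map linearly to pixels. A minimal sketch (names are my own, and the sign convention for latitude is an assumption):

```python
import numpy as np

def project_equirect(d, width, height):
    """Map a ray direction d (3-vector in camera coordinates) to a pixel
    on an equirectangular image: longitude -> column, latitude -> row.
    Every direction around the camera lands somewhere on the image."""
    x, y, z = d / np.linalg.norm(d)
    lon = np.arctan2(x, z)        # -pi..pi, 0 straight ahead (+z)
    lat = np.arcsin(y)            # -pi/2..pi/2
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v

# A ray straight ahead lands in the middle of a 1000x500 panorama
u, v = project_equirect(np.array([0.0, 0.0, 1.0]), 1000, 500)
```

Note there is no focal length here: only the image resolution sets the angle-to-pixel scale.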
In reality, though, there are no 360-degree "pinholes". Instead, the wide field of view is acquired in some other way: a curved mirror, for example, or multiple cameras with various lenses.
In this case, it is even more useful to think in terms of rays entering and departing the lens (or optical system in general). The general model here is built around a transfer matrix that maps "entry rays" to "exit rays" and can model essentially any configuration you like.
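To make the transfer-matrix idea concrete, here is a sketch using paraxial ray transfer (ABCD) matrices, which act on a ray described by its height and angle at a reference plane; the specific numbers are illustrative:

```python
import numpy as np

# ABCD matrices act on rays (height, angle) in the paraxial approximation.
def thin_lens(f):
    """Thin lens of focal length f: bends the ray, height unchanged."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def free_space(d):
    """Propagation over distance d: height drifts, angle unchanged."""
    return np.array([[1.0, d], [0.0, 1.0]])

# A lens of focal length 5 cm followed by 5 cm of free space: rays entering
# parallel to the axis, at any height, converge on the axis (focal plane).
M = free_space(0.05) @ thin_lens(0.05)
h_out, theta_out = M @ np.array([0.01, 0.0])  # ray 1 cm off axis, parallel
# h_out is (numerically) zero: the ray crosses the axis at the focal plane
```

Chaining such matrices (mirrors have their own matrix form) is exactly how more elaborate entry-to-exit mappings are composed.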
In terms of references, Geyer and Daniilidis, "A unifying theory for central panoramic systems and practical implications", is probably still relevant. Since then, Daniilidis has also put together a volume of related work whose contents you might find useful, even if only for terminology that gets you closer to what you are dealing with.
Hope this helps.