I'm working under the assumption that you are looking at an API that you did not develop and do not have access to the source or design details. If that's incorrect, please let me know.
First off, it's usually difficult or impossible to look at an external-facing API and know the design or the decisions behind it. It's quite possible that this one is over-designed. This answer is really about the general reasons you might do this; don't read it as a specific endorsement or critique of this particular API's design.
You might assume that there is a single database that stores all this information but that's not necessarily the case. And that's one of the goals behind an API like this: to decouple the user interface from the implementation details. Even if all this data is stored in a single DB, by breaking down the API in this way, the implementation can be changed easily without disrupting clients. That's one good reason to do this.
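To make the decoupling concrete, here's a sketch of what a decomposed response might look like. The field names and URLs are invented for illustration, not taken from the actual API:

```python
# Hypothetical response shape for a decomposed API: the game resource
# points at related resources instead of embedding them.
game = {
    "id": 17,
    "title": "Example Game",
    "links": {
        "developers": "/games/17/developers",  # fetched separately, on demand
        "reviews": "/games/17/reviews",
    },
}

def developer_url(g):
    """Clients follow the link rather than assuming any storage layout,
    so the server can move developer data to another database or service
    without breaking anyone."""
    return g["links"]["developers"]

print(developer_url(game))
```

Because clients only depend on the link, the server is free to serve `/games/17/developers` from a different database, a separate service, or a cache, without any client change.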
Another reason for this kind of thing is performance. One assumption I've encountered many times in my career is that the way to make things fast and efficient is to do as much joining, filtering, and preparation of data as possible on the database and server. There is a lot to that idea, but taken to its logical extreme, it can produce really terrible performance from the client's perspective. This happens because when a client makes a request to a synchronous service, it generally has to wait until the last byte of the response is received and parsed. I say 'generally' because there are ways to get around that and start doing things sooner, but in my experience they can be tricky to put in place and come with some pretty major downsides, such as handling errors that occur mid-stream.
Typically:

1. The client sends a request such as a GET.
2. The server does its DB queries or other tasks to gather the data.
3. The server creates a response document.
4. The server transmits the document.
5. The client receives the entire response.
6. Once fully received, the response is parsed.
7. The data is displayed and/or otherwise used.

The time from request to display is the total of all 7 of these steps.
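A back-of-the-envelope way to see why this adds up. The per-step durations here are purely illustrative numbers, not measurements of any real system:

```python
# Hypothetical per-step durations (seconds) for one monolithic request.
STEPS = {
    "1_send_request": 0.01,
    "2_db_queries": 0.40,
    "3_build_response": 0.10,
    "4_transmit": 0.30,
    "5_receive": 0.30,
    "6_parse": 0.05,
    "7_display": 0.02,
}

def total_latency(steps):
    """With a synchronous request, the user waits for the sum of every step
    before seeing anything at all."""
    return sum(steps.values())

print(f"time to first visible result: {total_latency(STEPS):.2f}s")
```

Note that the user perceives the *sum*: nothing appears until the slowest chain of steps has completed end to end.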
So how does decomposing the API as in your example help with this? First off, the more data you are pulling and joining together, the longer steps 2 & 3 will take, all things being equal. More content in the response document means steps 4 & 5 will take longer for obvious reasons. Steps 6 & 7 take a little longer as well but typically aren't a major concern. While all of this is happening, the user sees no result. Maybe a spinning wheel.
If the API is decomposed, it doesn't make the total time faster (it might even be slower) but the intervals during which there's no visible progress are shorter. And what if users don't usually care about the developer information? Why should they have to wait for that to be retrieved? Why take on the cost of collecting things that many people don't look at?
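A minimal sketch of that idea, using hypothetical fetch functions with made-up timings: the cheap summary is displayed immediately, while the expensive developer details load in the background and are only awaited if the user actually asks for them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for calls to the decomposed endpoints; names and timings
# are invented for illustration.
def fetch_game_summary(game_id):
    time.sleep(0.05)  # small, fast query
    return {"id": game_id, "title": f"Game {game_id}"}

def fetch_developer_details(game_id):
    time.sleep(0.2)   # slower join that many users never look at
    return {"game_id": game_id, "developer": "Example Studio"}  # hypothetical

with ThreadPoolExecutor() as pool:
    summary = fetch_game_summary(42)      # user sees something almost at once
    print("display:", summary["title"])
    future = pool.submit(fetch_developer_details, 42)  # loads while user reads
    # ...later, only if the user opens the developer panel:
    print("details for game:", future.result()["game_id"])
```

The total work is the same (or slightly more), but the gap before the first visible result shrinks from the sum of all the queries to just the cheapest one.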
I learned this lesson the hard way early in my career. We were tasked with providing a broad and deep tree structure. Because "it was more efficient" the more senior developers on the project decided it would be pulled down as one large document. For broadband users, this meant it took about 30 seconds from the time they requested the document until the time it was displayed in the UI. We had some users that were on an island with 56K modems, though. They could wait upwards of 15 minutes for many of these documents.
One of the silly things about this was that many of the items in this content tree applied to many or all of the requests. The primary hack I used to improve performance was to cache those details locally instead of pulling them down on each request. The more general and effective fix was to decompose the API so that each node of the tree was retrieved individually and displayed as it was received. This not only spared the user from waiting on hundreds of items they were never going to look at or use, but also allowed for horizontal scaling (which could be a factor in the case you are looking at, especially if a CDN or similar is involved).
Our users were very unhappy with the first design. They loved the second approach. It looked magically instantaneous to them because the time it took them to scroll through the top layers and find what they were looking for was usually more than enough to fill in the details of those items behind the scenes.
> Wouldn't it be simpler to just have the /games endpoint return all the necessary info upfront?
Not really. What if you don't need all the info? What if you want to see other games by the same developers? Should the endpoint provide those as well? At some point you'll find yourself returning your whole database.