A bug is just an undocumented feature.
Anything we see from within the simulation is just going to be a part of the simulation. The only way to tell that what you see is a bug is by knowing what the expected behaviour of the program is, and god alone knows what that is (literally, in this case). Even seeing the Eiffel Tower do a dance would more likely be caused by the interference of whoever is running a simulation, rather than a bug in the simulation itself.
Furthermore, assuming that the universe is simulated at the level of elementary particles, bugs would be most likely to show up there. It would be hard to trace how these bugs would affect the macroscopic world, and we'd probably just see them as particularly bizarre rules. Even if your neutrons occasionally violated conservation of energy and disappeared, physicists wouldn't cry out "The world is wrong!" They'd figure out when and why this stuff happens.
That said, here are a few common bugs that simulation software written in a language like the ones we use today could have. In all cases, I assume that the simulation manages not to crash. Also, various bugs assume different things as "fundamental" -- all this probably disagrees with real physics somewhere.
The machines running the software may run out of memory. If something were to split into multiple pieces, some pieces may mysteriously vanish.
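A minimal toy model of that failure mode, assuming the simulation draws storage for new pieces from a fixed-size pool and silently swallows failed allocations (all names here are hypothetical):

```python
# Toy model of an out-of-memory bug: splitting an object needs a fresh
# allocation, and when the memory pool is full, the new piece silently
# vanishes instead of the simulation crashing.
POOL_SIZE = 3  # hypothetical capacity of the simulation's memory pool

def split(pool):
    """Try to split one object into two; the second piece needs new memory."""
    if len(pool) < POOL_SIZE:
        pool.append("fragment")  # normal case: both pieces exist afterwards
        return True
    return False                 # allocation failed: the piece is just gone

full_pool = ["particle"] * POOL_SIZE
print(split(full_pool))  # False: the pool was full, a fragment vanished
```

From inside the simulation, the only observable trace would be that things occasionally break apart into fewer pieces than they should.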
Memory may be managed incorrectly, resulting in two objects appearing to exist in the same place in memory (not space!). Influencing one would influence the other as well. Wait, that sounds familiar...
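As a toy analogue (Python name binding standing in for a buggy allocator handing out the same address twice):

```python
# Toy model of an aliasing bug: two "particles" that are secretly one
# object in memory, so changing one instantly changes the other.
particle_a = {"spin": "up"}
particle_b = particle_a  # allocator bug: the same storage handed out twice

particle_b["spin"] = "down"   # "measure" one particle...
print(particle_a["spin"])     # down -- the other changed too
print(particle_a is particle_b)  # True: same object, different names
```

To anyone observing from inside, the two would look like distinct objects that are mysteriously correlated no matter how far apart they appear to be in simulated space.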
The time counter may roll over. If the universal constants change over time, this could cause them to be reset to what they were at the Big Bang. I suspect humans wouldn't survive, though I'm not sure.
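Sketched as code, assuming the simulation keeps its tick count in a fixed-width (here 32-bit) unsigned integer:

```python
# Toy model of a time-counter rollover: fixed-width counters wrap
# around to zero once they pass their maximum representable value.
MAX_TICKS = 2**32  # hypothetical width of the simulation's tick counter

def advance(tick):
    return (tick + 1) % MAX_TICKS  # wraps back to 0 at the limit

t = MAX_TICKS - 1   # the last representable instant
t = advance(t)
print(t)            # 0: as far as the counter knows, it's the Big Bang again
```

Any quantity the simulation computes as a function of the tick count, such as slowly drifting "constants", would snap back to its initial value at that instant.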
The world may have a maximal precision. In that case we can observe a particle at point *a*, or at point *b*, but not anywhere between the two. Or maybe a particle can have energy level 1, or 2, but not 1.5...
If the system is distributed, connection problems may lead to synchronisation issues. That is, those things simulated on server A see one sequence of events unfold, while those on server B see a different sequence, and then these are somehow merged into a single timeline.
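A sketch of the simplest such merge bug, assuming each server keeps a log of timestamped events and the merge just sorts by local timestamp (events and timestamps here are made up):

```python
# Toy model of a synchronisation bug: two servers log events against
# their own clocks, and the merge naively sorts by local timestamp.
server_a = [(3, "vase falls"), (4, "vase shatters")]
server_b = [(5, "cat pushes vase")]  # this server's clock runs fast

merged = sorted(server_a + server_b)        # the "official" single timeline
timeline = [event for _, event in merged]
print(timeline)  # the effect now precedes its cause
```

With skewed clocks, the merged timeline can place an effect before its cause, which observers on either server would remember differently from what the "official" history says.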
Memory corruption can make things suddenly change value. That's not very specific, because memory corruption isn't very specific; pretty much anything could happen, though it would probably be a lot of chaotic changes.
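To show how little it takes, here is a toy model of a single bit flip in a stored "physical constant" (the constant and the corrupted byte are arbitrary choices for illustration):

```python
import struct

# Toy model of memory corruption: flip one bit in the stored bytes of a
# "constant". A real corruption bug would scribble over arbitrary memory;
# this just shows how drastic a one-bit change can be.
g = 9.81
raw = bytearray(struct.pack("<d", g))  # the 8 bytes backing the double
raw[6] ^= 0x10                         # flip the lowest exponent bit
g_corrupted = struct.unpack("<d", bytes(raw))[0]
print(g_corrupted)  # 19.62 -- flipping one exponent bit doubled the value
```

One flipped bit in the exponent doubles gravity; a flip in the mantissa would nudge it almost imperceptibly. That spread, from catastrophic to invisible, is exactly the "chaotic changes" you'd expect.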
It is unlikely that any of the above would be able to explain magic in the usual sense of the word. Most magic is highly structured, allowing you to create and direct complex systems. A bug that lets you shoot fireballs is very strange indeed: it basically means the universe "knows" what a fireball is and can keep one together for you. In a universe built up from particles, this is not going to happen reliably by mere chance.
In response to the suggestion that a different level of simulation would be more interesting: that could very well be the case. I just can't imagine how it would fit.
It is not that hard to suppose that quantum phenomena are fundamental and that they somehow add up to normality. I'm not a physicist and don't know how this happens, but I believe that's how it happens in reality, and so I'm willing to believe that simulating quantum phenomena will also simulate normality.
Going in the opposite direction is much harder. Suppose the main objects in a simulation are living beings. For some reason, lower-level phenomena are still observed. I see two ways this can play out:
The low-level phenomena may just be there for décor. They can be observed but they don't have any further effects on reality. This can be seen (at a somewhat higher level) in strategy games, when a unit constructs a building. The animation gives the impression of work being done, but it's just for the sake of the viewer. The building will go up even if the animation is changed to show something else.
In such a case, learning about how low-level things behave would give you only very tentative predictions about how the world behaves. Things like chemistry would be approximate at best.
Alternatively, the universe may be able to add arbitrarily precise details to any place that is observed, and these details have to have an actual effect on reality. The problem is that wherever these detailed effects disagree with the macroscopic approximation, the mere act of observing changes the outcome.
Effectively, you end up splitting everything into three "sizes":
- The décor: you can see, but what you see doesn't mean anything.
- The inconsistent: you can see, but your results change if you do.
- The normal: you can see, and can explain everything in terms of the smallest "normal" level.
If you put molecules at the normal level, the behaviour of humans is going to follow from the behaviour of molecules. If you put them at the inconsistent level, chemistry isn't going to work quite as well as it does. You can't have your cake and eat it too.