I was thinking about how you would effectively model real-world activity in order to create a game where you can play god and, without doing anything obviously god-like, affect the way the world works in order to create a ‘ripple effect’ - e.g. genetically changing a virus ever so slightly, sending a cosmic ray to just the wrong place to give someone a tumour, shifting tectonic plates just enough to cause a major earthquake, etc.

The idea is that you make changes in ways that people don’t notice so that they can’t suspect a ‘godlike’ influence. It’s just all natural phenomena.

So how do you model this without using ridiculous resources? Well, why not start with the universe as a single whole, then, as you need more complexity - galaxies, systems, planets etc. - simply subdivide. You can place the objects relative to some ‘universal central point’ and alter their positions according to predefined rules over time - let’s call that rule set ‘physics’.
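
Here is a minimal sketch of what that subdivision might look like, in Python. The Node class, the toy orbital rule and all the numbers are illustrative inventions of mine, not a worked-out cosmology - the point is just that every object sits somewhere relative to the universal central point and is moved by one shared rule set:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    """One subdivision of the universe: a galaxy, system, planet, etc.
    Position is stored relative to the parent, ultimately anchored to a
    single 'universal central point' at the root."""
    name: str
    # offset from the parent, plus a toy orbital rule (radius + angular speed)
    radius: float = 0.0
    angular_speed: float = 0.0
    angle: float = 0.0
    children: list = field(default_factory=list)

    def advance(self, dt: float) -> None:
        """Apply the shared rule set ('physics') to this node and its children."""
        self.angle += self.angular_speed * dt
        for child in self.children:
            child.advance(dt)

    def absolute_position(self, parent_xy=(0.0, 0.0)):
        """Resolve this node's position back to the universal central point."""
        x = parent_xy[0] + self.radius * math.cos(self.angle)
        y = parent_xy[1] + self.radius * math.sin(self.angle)
        return (x, y)

# Build a coarse hierarchy and run it forward a few ticks.
earth = Node("earth", radius=1.0, angular_speed=2 * math.pi / 365)
sun = Node("sun", radius=2.6e4, angular_speed=1e-8, children=[earth])
galaxy = Node("milky_way", children=[sun])
for _ in range(10):
    galaxy.advance(dt=1.0)
print(earth.absolute_position(sun.absolute_position()))
```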

Well, if the observer is at one end of the galaxy and not really watching the stars too closely, you don’t need much detail - you could keep the composition of those stars quite generalised.

It makes sense to have a separate server deal with all the cosmology and sync up with your local server (e.g. ‘earth’) on a regular basis, passing along generalised rules that impinge on the local system - e.g. ‘star positions right now, plus a formula for how those positions change until the next anticipated update’.
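
As a sketch of what one of those periodic updates might contain (the StarEphemeris name, the linear-drift ‘formula’ and the figures are all assumptions on my part, chosen for simplicity):

```python
from dataclasses import dataclass

@dataclass
class StarEphemeris:
    """One entry in a periodic update from the cosmology server: where a star
    is now, plus a simple rule for extrapolating it until the next sync."""
    star_id: str
    position: tuple          # (x, y, z) at sync_time
    velocity: tuple          # linear drift rate: the 'formula' in its simplest form
    sync_time: float
    next_sync: float         # local server expects a fresh update by this time

    def position_at(self, t: float) -> tuple:
        """Cheap local extrapolation; good enough when nobody looks closely."""
        dt = t - self.sync_time
        return tuple(p + v * dt for p, v in zip(self.position, self.velocity))

# The local ('earth') server just evaluates the rule between syncs.
sirius = StarEphemeris("sirius", (8.6, 0.0, 0.0), (1e-6, 0.0, 0.0),
                       sync_time=0.0, next_sync=3600.0)
print(sirius.position_at(1800.0))
```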

When there is a non-general interface between the local and universal systems - e.g. someone looking through a telescope - it would be necessary to set up a connection between the two systems so that more intensive processing can take place and additional detail can be produced.

Do you see where I’m going with this? We can take the principle further and spawn more servers each time we look at something in more detail, so that the general rules keep working with only modest processing when there is a normal level of interaction between observer and observed. When a more detailed interaction takes place (e.g. looking through a microscope, catching a cold from a virus), more processing power is required: a greater level of detail is recorded for the duration of the observation, with more complex rules applied, and when the observation or interaction dissipates, the processing can become more generalised again.
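
A toy version of that spawn/retire behaviour might look like this - the DetailManager class and its method names are my own, and a ‘detail server’ here is just a dictionary entry standing in for a real process:

```python
from collections import defaultdict

class DetailManager:
    """Tracks how closely each region is being observed and 'spawns' or
    retires detail accordingly. Here a detail server is just a dict entry;
    in a real build it would be a dedicated process or machine."""

    def __init__(self):
        self.observers = defaultdict(int)   # region -> number of close observers
        self.detail_servers = {}            # region -> detail level currently simulated

    def begin_observation(self, region: str) -> None:
        self.observers[region] += 1
        # Someone is looking through the telescope/microscope: escalate detail.
        self.detail_servers[region] = self.detail_servers.get(region, 0) + 1

    def end_observation(self, region: str) -> None:
        self.observers[region] -= 1
        if self.observers[region] <= 0:
            # No one is watching any more; fall back to generalised rules.
            self.detail_servers.pop(region, None)

mgr = DetailManager()
mgr.begin_observation("andromeda_through_telescope")
print(mgr.detail_servers)   # detail spawned
mgr.end_observation("andromeda_through_telescope")
print(mgr.detail_servers)   # generalised again
```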

When you have multiple observers, the interaction between them and their shared experience pose an interesting computational problem, as we then have co-dependent observers. Perception between observers does not have to be identical - communication is what establishes commonality of experience - so language can be generalised enough that each observer believes they have experienced the same thing. The real difficulty is that when an interaction increases in complexity for all observers at the same time, the commensurate jump in required computational power may be hard to find quickly. Two things come to the rescue here. Foresight: we can watch out for these occurrences and plan for upscaling in advance. And synchronisation: if the perception timings are synchronised, we can take as long as we like to do the computations, as long as we re-synchronise with the surroundings afterwards. The observers’ perception of time need not be compromised, even if the real computation time is greater than normal.
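
One way to sketch that time trick: the observers’ clock only ever advances by simulated ticks, so however long the real computation takes, perceived time is untouched. Everything here (PerceivedClock, the stand-in workload) is hypothetical:

```python
import time

class PerceivedClock:
    """Keeps observers' perceived time independent of how long the underlying
    computation really takes: the clock only advances by simulated ticks, so a
    slow, complex shared event just means the engine works longer per tick."""

    def __init__(self):
        self.perceived = 0.0

    def step(self, simulated_dt: float, compute) -> None:
        start = time.monotonic()
        compute()                       # may take arbitrarily long in real time
        real_cost = time.monotonic() - start
        self.perceived += simulated_dt  # observers only ever see this advance
        print(f"perceived +{simulated_dt}s, real compute {real_cost:.4f}s")

def expensive_shared_event():
    # Stand-in for a sudden spike: many co-dependent observers at high detail.
    sum(i * i for i in range(10**6))

clock = PerceivedClock()
clock.step(simulated_dt=1.0, compute=expensive_shared_event)
```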

An interesting complication here is the recording and replay of data - e.g. humans storing and retrieving information from the internet, watching movies etc. It suggests that stored information needs to be correspondingly stored in the model for accurate replay. So for a sophisticated society in the throes of the digital age, this could take a huge amount of data to model.

There are various ways around this. One is to share common data that looks the same until inspected more closely, and only then differentiate it (e.g. a book with 1 million copies in print: store the text once, and for individual copies that pick up minor differences - a broken spine, a ripped page, coffee spilt on it - store only those differences as they occur).
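
That is essentially copy-on-write storage. A tiny sketch, with the BookEdition class and its methods invented purely for illustration:

```python
class BookEdition:
    """All copies share one canonical text; each physical copy stores only the
    differences it has accumulated (broken spine, ripped page, coffee stain)."""

    def __init__(self, canonical_text: str):
        self.canonical_text = canonical_text
        self.copy_diffs = {}              # copy_id -> list of recorded differences

    def damage(self, copy_id: int, change: str) -> None:
        self.copy_diffs.setdefault(copy_id, []).append(change)

    def render(self, copy_id: int):
        # Most of the million copies hit this cheap path: nothing stored at all.
        return self.canonical_text, self.copy_diffs.get(copy_id, [])

edition = BookEdition("It was a dark and stormy night...")
edition.damage(42, "coffee stain, page 12")
print(edition.render(42))    # shared text + this copy's differences
print(edition.render(999))   # pristine copy, no per-copy storage needed
```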

Another possibility, if a local system is getting swamped with information, is to find some way of reducing complexity. Civil unrest - get everyone to burn the books. A computer virus - data is trashed. A natural disaster - lots of complex lives suddenly become very simple in that they cease to be. The system should look out for growing complexity and find ways of reducing it as much as possible to minimise load.
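
If you wanted to automate that, a crude sketch might look like the following - the thresholds and the list of pruning events are made up, and a real scheduler would obviously have to weigh the knock-on effects of each event:

```python
def maybe_reduce_complexity(load, budget, events):
    """If a local system's stored complexity exceeds its budget, pick an
    in-world event that plausibly destroys data without looking god-like."""
    if load <= budget:
        return None
    # Escalate with overload severity; events are ordered least to most drastic.
    severity = min(len(events), int(load / budget)) - 1
    return events[severity]

pruning_events = ["computer virus trashes archives",
                  "civil unrest, books are burned",
                  "natural disaster"]
print(maybe_reduce_complexity(load=2.5, budget=1.0, events=pruning_events))
```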

There are also grades of observers: intelligent observers would need the most computational power, less intelligent observers that operate largely on instinct would need a little less, and inanimate observers (like a rock) would need still less, as their observational power really only comes into play through direct interaction, obeying basic physical laws. That gives one way of predicting computational demand: according to the grades of observer taking part in the interaction.
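
A back-of-the-envelope cost model along those lines (the grade names and weightings are purely illustrative assumptions):

```python
# Rough per-interaction cost weights by observer grade. The numbers are made up:
# intelligent observers cost most, instinct-driven less, and inanimate 'observers'
# like a rock only pay for direct physical interaction.
GRADE_COST = {"intelligent": 100.0, "instinctive": 10.0, "inanimate": 1.0}

def predicted_cost(observers: dict) -> float:
    """Estimate the compute needed for an interaction from who is taking part.
    `observers` maps a grade to how many observers of that grade are involved."""
    return sum(GRADE_COST[grade] * count for grade, count in observers.items())

# A person watching two birds startle a rockslide:
print(predicted_cost({"intelligent": 1, "instinctive": 2, "inanimate": 50}))
```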

The last piece of the puzzle is self-organization. The system needs to interact with itself and other systems in such a way that it keeps itself maintained without outside influence, unless outside influence is absolutely required - or is applied deliberately for the sake of stirring things up a bit. This means intelligent algorithms that work within certain rule sets, where the rule sets themselves can be increased in complexity if more detail is required. Finding ways to make these rule sets evolve will be the most interesting part of setting up such a world.

Of course, as our own technology improves, the allowable complexity would increase, giving us much more interesting systems - maybe even a system that starts developing models of its own to try to work out how its encompassing model works. So long as the containing model is of greater complexity than the contained model, we can keep the increase going indefinitely!

On a final note, one cannot help but wonder whether we live in such a self-regulating system ourselves. Perhaps the universe is not physical but informational - it has been postulated for hundreds of years (and scientifically within the last century) that the observed may only exist when it is actually observed. If a tree falls in a forest with no one to hear it, does it make a sound? How do we know? Our experience says that whenever anyone observes it, it makes a sound, so we conclude that it must make a sound when unobserved too, since the physics of how that sound is generated holds true irrespective of the observer. In our model, that would be unnecessary computational power: there would be no sound, as the complexity is not required when no observer is present.

So, anyone volunteering to generate such a model, with the knowledge that it will never be able to achieve the complexity of the bounding system, which is our perceptual universe? Go on, give it a go!