12
$\begingroup$

Related to my previous question regarding fine dining for computer processes.

Background

We've got a spaceship sailing through the ocean black, crewed entirely by self-aware artificial intelligences. They live in a digital environment (purely coincidentally resembling 21st century Earth computers) and operate the spaceship from there.

Worth clarifying that the setting of the story is inside the digital environment, mostly the singular spaceship computer. The computer "commands" the vessel but that doesn't mean it is a single monolithic captain.dll doing everything; inside the running computer are individual processes working on tasks delegated to them. Those processes are humanised in the story; they are the characters.

A process can be very simple; a batch file implementing cat is not going to be a great conversation partner. I here define an AI (technically AGI) as a specific type of process that can respond to any kind of input, behaves non-deterministically, and is capable of self-improvement from its experiences, which include training and "job experience". Less precisely formulated, an AI is "smart" enough that properties like a personality emerge after it has learned enough. For more specifics on AI wants, desires, and daily routines, the linked question covers it.

AI are the citizens of the digital civilisation that built and launched the spacecraft. Their collective mission: exploration! Or actually self-actualisation, since they learn from new experiences, and every AI is motivated to learn. As a result, though AI are more than willing to die for a good cause, it is better for the digital society to have its most advanced AI live on and keep developing themselves. So they're not being sacrificed willy-nilly.

Populating the ship

One might ask: do we even need more than one AI in the software architecture of this computer? A pretty staggering difference between synthetic and organic life is that the former can multitask, given the processing power. So purely mathematically speaking, one AI with a hefty rack of CPUs and 2 TB of RAM all to itself and the simple processes it spawns is going to be just as capable as the combined forces of 2000 AIs all sharing that RAM to make independent judgements; probably more capable, since it avoids all the communication latency.

Now I can come up with one reason to divide labour at all. I can state that it is easier for these curious AI to develop themselves if they first adapt to become very good at one complex thing, then switch to something else and compare that with their earlier experiences. One AI controlling a spacecraft will not become as sophisticated as multiple AIs that each have discrete responsibilities and occasionally switch between them, growing into multifaceted digital beings.

So now we're just dividing the duties aboard a spaceship into discrete areas that each can occupy an AI. For example, a baseline division of labour popular in sci-fi is to have a captain, then have different officers for navigation/piloting, engineering, public health, science, and security. And certainly, we could put one AI in charge of every field, with the doctor treating computer viruses instead. Giving the AISS Enterprise a total crew complement of... six.

I don't want that. I want hundreds of crew members. I want them to form individual relationships, little cliques and rivalries. If that's too stupid of an idea for you, please just navigate to another question.

Dividing the duties aboard a typical spaceship into hundreds of discrete parts, each with enough complexity to occupy an AI, seems impossible. They'll need to do some of the same things, and some less interesting things. Yet this spacecraft was made for efficiency, as was everything ever produced by the digital planet. Thus, what can I do to justify having a great number of individual AI inside a spacecraft computer, instead of a handful with more processing power?

$\endgroup$
28
  • 4
    $\begingroup$ " 2 THz CPU " not if thermodynamics has anything to say about it. Even 7Ghz can only be achieved using liquid nitrogen cooling. And a physically larger chip cannot be this fast, because at those clock speeds, the speed of light is slow. Regardless, a 7 THz CPU would produce way to much data for any bus to handel withouth undergoing nuclear fusion. $\endgroup$
    – ErikHall
    Commented Jul 20, 2023 at 9:19
  • 3
    $\begingroup$ @ErikHall Good thing sci-fi spaceships already travel faster than light so thermodynamics can't complain :) But still good point, I'll make that one slightly more reasonable. $\endgroup$
    – KeizerHarm
    Commented Jul 20, 2023 at 9:22
  • 4
    $\begingroup$ Back of the envelope calculation: a 2 THz CPU core would draw at least 4 TW assuming a pretty remarkable efficiency. If we assume a standard CPU die area and it being made of silicon, well, the CPU would reach a temperature of 8,027,735,608 Kelvin in 1 second. Or 8 billion Kelvin. So... yeah $\endgroup$
    – ErikHall
    Commented Jul 20, 2023 at 9:43
  • 1
    $\begingroup$ If your AI's have personalities, I'd suggest the reason could be just Human Rights for Artificial Intelligences. Probably a Court of Law decided long ago that any AI capable of self-consciousness is a person - it certainly meets the definition of one - and as such it has some inherent rights as an individual. $\endgroup$
    – Rekesoft
    Commented Jul 20, 2023 at 9:54
  • 3
    $\begingroup$ "During ascent, maneuvering, reentry, and landing, the four PASS GPCs functioned identically to produce quadruple redundancy and would error check their results. In case of a software error that would cause erroneous reports from the four PASS GPCs, a fifth GPC ran the Backup Flight System, which used a different program and could control the Space Shuttle through ascent, orbit, and reentry, but could not support an entire mission. The five GPCs were separated in three separate bays within the mid-deck to provide redundancy in the event of a cooling fan failure." $\endgroup$
    – Mazura
    Commented Jul 21, 2023 at 16:44

12 Answers

16
$\begingroup$

Being smart enough to run the whole ship does not mean you will run it well.

Intelligence, be it human or artificial, follows a value system that defines what we see as logical, reasonable, and desirable. One powerful AI may have the capacity to run the whole ship on its own, but one AI has only one value system; so, it always prioritizes one thing above another, no matter how "smart" it is. This leaves it logically blind to certain kinds of thinking.

For example, if you have a ship with important research materials on board, then a master AI will over time reinforce either the idea that those materials are more important than the ship, or that the ship is more important than the materials, until eventually reinforcement learning causes one or the other to be treated as basically worthless, which can lead to a bad mission outcome. But if you have a research AI that is designed to put the research first, and an engineering AI that is designed to put the welfare of the ship first, then they will both tend to their responsibilities, and neither aspect of the mission will be neglected.

And if the survival of the ship or the research materials ever does become an either/or situation, the conflicting views of the two AIs will force both perspectives to be taken into consideration before deciding which to put at risk. This way, you can't learn away the importance of something that should be important.

But why have multiple AI performing similar tasks?

I don't want that. I want hundreds of crew members. I want them to form individual relationships, little cliques and rivalries. If that's too stupid of an idea for you, please just navigate to another question.

Past a certain point, more processing power does not make you better at predicting outcomes.

Consider storm forecasting. We use a lot of different AIs which follow slightly different algorithms to try to predict the outcome of a storm. In the following diagram, if all of the processing power from all of the models were given to CLPS, it would not make CLPS more accurate, only more precise in saying exactly how the storm will go north and miss the gulf coast area completely.

[Image: spaghetti plot of multiple storm-track forecast models, each predicting a different path for the same storm]

By negotiating the predictions of multiple weaker AIs, you tend to get an outcome that is less precise, but more accurate. So instead of trusting the success of the mission to one single, possibly flawed AI, they send hundreds of slightly different, but also possibly flawed, AIs, hoping that they will collectively agree on the right course of action, so that individual faults get filtered out in the democratic process of running the ship.
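A minimal sketch of that effect (plain Python, with made-up numbers rather than any real forecast data): each model alone is unbiased but noisy, and simply averaging them tends to land closer to the truth than trusting any single run.

```
import random

random.seed(42)

TRUE_LANDFALL_KM = 0.0    # hypothetical "ground truth" landfall position
N_MODELS = 12             # ensemble of slightly different forecast models

# Each model is unbiased but noisy: its error is drawn from a wide distribution.
forecasts = [TRUE_LANDFALL_KM + random.gauss(0, 150) for _ in range(N_MODELS)]

single_model_error = abs(forecasts[0] - TRUE_LANDFALL_KM)
ensemble_error = abs(sum(forecasts) / N_MODELS - TRUE_LANDFALL_KM)

print(f"one model alone, off by : {single_model_error:6.1f} km")
print(f"ensemble mean, off by   : {ensemble_error:6.1f} km")
# The errors of independent models tend to cancel, which is the
# "less precise but more accurate" effect described above.
```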

Yet this spacecraft was made for efficiency, as was everything ever produced by the digital planet. Thus, what can I do justify having a great amount of individual AI inside a spacecraft computer, instead of a handful with more processing power?

A truly intelligent being knows that there is always a compromise between that which is efficient and that which is reliable. Too often, sci-fi writers assume that AI are fine-tuned efficiency machines, but in the real world, when faced with unknowns, machines rely on healthy margins just like people do. Spaceships are expensive; so, if spending an extra 2% on redundancy increases your mission success rate by 5%, then it is more efficient to run the extra AIs than it is to build the extra ships to replace losses.

$\endgroup$
6
  • $\begingroup$ I'll accept this answer for most clearly demonstrating the advantage of multiple models with the tornado simulation :) Thank you very much for the help! $\endgroup$
    – KeizerHarm
    Commented Jul 22, 2023 at 9:32
    $\begingroup$ Also you could purposely look for 1-2 AIs with VERY different thought patterns (or young, undeveloped ones), to be more adaptable to new circumstances that older, wiser and more developed AI may be too stubborn to properly adapt to $\endgroup$
    – Hobbamok
    Commented Jul 22, 2023 at 14:46
  • 1
    $\begingroup$ @KeizerHarm That's a tropical storm/hurricane. Tornadoes are far less predictable, much smaller, only last a few minutes, and don't happen very often over water. $\endgroup$
    – Hearth
    Commented Jul 22, 2023 at 21:49
  • $\begingroup$ I must emphasize that @Nosajimiki is referring to current "Narrow AI" in the statement, "one AI has only one value system; so, it always prioritize one thing above another no matter how 'smart' it is." This demonstrates why your question can't be objectively answered. Cars with lane-drift use a camera in the windscreen and an AI to find road lines in the video. That is all it can do. Can it also protect the driver from collisions? No. A different AI does this. But these AI can not learn, live, die, or have values. "Good" = "lines are far from center of video." Math is an AI's whole universe. $\endgroup$
    – Vogon Poet
    Commented Aug 14, 2023 at 15:20
  • $\begingroup$ @VogonPoet I am actually referring to any unified intelligence. The same applies to humans and complex AIs as well. Humans for example aggregate all of their values to come to a prediction and then reinforces that mode of thought, but two humans with 2 different value systems can logically come to 2 very different determinations and reinforce learning in opposite directions. So, instead of averaging the outcomes, you must negotiate it. A single unified AI can have many values, and consistently come to the same erroneous outcome because of how its values and experiences aggregate. $\endgroup$
    – Nosajimiki
    Commented Aug 14, 2023 at 17:03
21
$\begingroup$

Multiple independent AI make sense when you want to use redundancy as a way to cope with possible failures, in the same way that, on critical equipment, one uses redundant sensors to reduce the chance of a wrong reading.

Multiple independent AI, with some sort of majority ruling and enforcing implemented, can deal with failures and malfunctions at the level of a single AI, whereas a single centralized AI upon failing would doom the entire mission.

Think of 1000 AI of which 1 says "good idea to lower the shields while crossing the asteroid field at hyperspace velocity" while 999 say "it's a bad idea", versus 1 out of 1 saying the same.
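A minimal sketch of that voting scheme in Python (the vote strings are of course made up):

```
from collections import Counter

def majority_decision(votes):
    """Return the option most of the crew voted for, plus the full tally.

    A single faulty AI is simply outvoted; only a correlated failure
    of most of the crew can push a bad decision through.
    """
    tally = Counter(votes)
    decision, _count = tally.most_common(1)[0]
    return decision, tally

# 999 healthy AIs vs 1 malfunctioning one:
votes = ["keep shields up"] * 999 + ["lower the shields"]
decision, tally = majority_decision(votes)
print(decision)   # -> keep shields up
print(tally)      # -> Counter({'keep shields up': 999, 'lower the shields': 1})
```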

$\endgroup$
17
$\begingroup$

This is already how AI works (sort of)

Most AI systems are developed using one of three machine learning protocols: reinforcement learning, supervised learning or unsupervised learning. All of these protocols involve taking a partially-trained AI, spawning many slightly-modified copies of its 'thinking process' and testing the copies in some fashion against some measure of 'success'; the copies are then culled to retain only the most successful, and the process is iterated many times to 'evolve' programs with 'successful' thinking processes.

As such, it's completely normal for a machine learning substrate (i.e. a computer performing AI training, which your ship's mainframe would be doing continuously) to be running many slightly-different copies of an AI at once; that's just how AI development works. It's only a small progression to suggest that some of the 'best of the current crop' of AIs would be given control of the actual ship systems, under careful supervision: that's absolutely no different from training a human pilot or surgeon, where many hours of simulation and classroom work build up to time spent in control (while still training) of a real aircraft or scalpel.
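For flavour, here is roughly what that spawn-test-cull loop looks like in Python; a sketch only, where `spawn_mutated_copy` and `evaluate` stand in for whatever the ship's training substrate actually does:

```
import random

def evolve(seed_ai, spawn_mutated_copy, evaluate,
           population=100, survivors=10, generations=50):
    """Generic select-and-mutate loop of the kind described above.

    spawn_mutated_copy(ai) -> a slightly perturbed copy of ai   (hypothetical hook)
    evaluate(ai)           -> a fitness score, higher is better (hypothetical hook)
    """
    pool = [spawn_mutated_copy(seed_ai) for _ in range(population)]
    for _ in range(generations):
        # Test every copy and keep only the best performers...
        pool.sort(key=evaluate, reverse=True)
        elite = pool[:survivors]
        # ...then refill the population with mutated copies of the survivors.
        pool = elite + [spawn_mutated_copy(random.choice(elite))
                        for _ in range(population - survivors)]
    return max(pool, key=evaluate)   # the "best of the current crop"
```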

$\endgroup$
4
  • 7
    $\begingroup$ Now that adds a really interesting social dynamic :D The AISS Enterprise is also the Hunger Games. I think with some handwaving I can make the culling happen on a long-ish timescale, so we can see these competing models as generations. It takes a long time to learn to operate a ship after all. $\endgroup$
    – KeizerHarm
    Commented Jul 20, 2023 at 11:31
  • 4
    $\begingroup$ @KeizerHarm - Or possibly, there's an integration step during the cull, where instead of discarding the "less learned" models you specify what part of their learning is problematic and reintegrate their results too. So instead of "dying" they merge together. This has been explored in Science Fiction before. $\endgroup$ Commented Jul 20, 2023 at 23:55
  • 2
    $\begingroup$ @Clockwork-Muse this sounds interesting. 'lesser' AI's won't completely disappear but by merging them their total voice/opinion gets lessened. " 8F2naf2: well this is the end. 85fnSwa2: at least we will be together furthering our agenda. " $\endgroup$ Commented Jul 21, 2023 at 9:37
  • $\begingroup$ "It's only a small progression to suggest that some of the 'best of the current crop'" - The objective measurement of "best" cannot be accomplished by anything without sentience. It would require Artificial General Intelligence. The OP mentions that the AI "citizens" are "willing to die" but never explains how that is even possible. The possibility of death would be necessary for anything self-aware to make an evaluation of "good" vs. "bad," and notihng in this question even remotely resembles 21st century generative AI. Machine learning is not possible without a human sorting good.bad data. $\endgroup$
    – Vogon Poet
    Commented Aug 13, 2023 at 1:16
10
$\begingroup$

Specialization gives you higher quality, reusability, and better use of resources.

Let's simplify the scenario to only two jobs out of the thousands in such a ship:

  • Job A is to make circuit boards (computer hardware breaks)
  • Job B is to make strong, radiation-shielded, steel plates for the hull (ship gets damaged, too)

Basically, there's no overlap between the two functions. We could create a tool/AI/job position which does both things, or one for each.

Having two positions allows both to specialize, without improvements to one task (e.g. making the boards slimmer, but certainly not the hull!) compromising the other.

It also improves reusability. Let's suppose the spaceship arrives at a colony. There, you may need the knowledge for building computers but not the knowledge for shielding spaceships in outer space. If you have two specialists, you send a copy of the relevant one; otherwise you need to send a specialist in building both computers and hulls.

Moreover, this probably happened even before they sailed. They had some AI very good at building computers, and different ones very good at sturdy materials. So they hired for two different positions rather than trying to fit the two responsibilities into one.

On the resources side, it is more effective to have a worker requiring k RAM during the time you are building <whatever>, which finishes afterwards and frees those resources, than a worker permanently requiring twice as much RAM in case you ever need either a circuit board or a hull repair (a program doing the two tasks requiring exactly the sum of the memory of the separate tasks is an oversimplification, but the point is that an instance of a hullrepair-circuitboardcrafter will require more resources).

Another argument is stability. Suppose your board maker has a memory leak, and is unable to spend more than ten hours debugging the latest circuit board (if it were a person, we would describe it as tired). With a dedicated role, you can save the intermediate results, close the program and open it again (you would send a person to sleep). But this is not an option if you can't stop that program, because at the same time it is also ensuring that the ship reactor won't explode.

$\endgroup$
9
$\begingroup$

You don't need to justify it. It's already the best solution.

Software Engineer here. An AI works best when it is trained on a specific, narrowly defined task. ChatGPT is great at sounding like a human, but not great at math. DALL-E and Midjourney can make amazing images, but struggle to put intelligible text in those images.

A ship would have navigation, life support, and engine drive functions. It's entirely feasible that each role would have dozens of smaller AIs that know very niche subcomponents of those tasks; e.g. the antimatter engine crew has drivers to control throttle in a fuel-efficient manner, safety inspectors that monitor engine health, mechanics that maintain and repair each engine subsystem, and so on. Furthermore, you may have some mechanics with a Ph.D. in material sciences, while others are experts in the effects of anti-matter quantum tunneling.

This is consistent with good engineering practice because...

Every engineering solution is a balance of tradeoffs. Every complex system has hundreds of competing interests towards common goals. Want your car to accelerate faster? That means poorer fuel economy. Good acceleration and fuel economy? You could burn a more efficient, but more expensive fuel. The little men inside of your starship engine have the same tensions.

But it gets better!

A popular way of creating AIs is Genetic Algorithms: effectively Digital Darwinism. Digital brains (neural networks) are generated en masse and tested. The best ones are kept, variations are made (sometimes by crossing the top-performing models) and a new generation is tested. Your AIs can actually breed, and it's 100% scientifically accurate. Even better, the AIs that would stick around are not simply the ones that do their job best. Because a ship is so vastly complex, the subsystems have to communicate with each other. The AI employees must do their jobs well, while playing nice enough with other departments, lest they be... fired. They must compete, cooperate, bargain, and govern within an ecosystem of shared resources. This is so much like a human community that it's almost scary.
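To make "breeding" concrete, here is a toy Python sketch of uniform crossover plus mutation on flat weight lists; real setups differ in encoding and operators, and the parent values below are invented:

```
import random

def crossover(parent_a, parent_b, mutation_rate=0.01, mutation_scale=0.1):
    """Breed two top-performing networks, represented as flat weight lists.

    Each child weight is taken from one parent or the other (uniform
    crossover), then occasionally nudged (mutation).
    """
    child = []
    for wa, wb in zip(parent_a, parent_b):
        w = wa if random.random() < 0.5 else wb
        if random.random() < mutation_rate:
            w += random.gauss(0, mutation_scale)
        child.append(w)
    return child

# Two of this generation's best pilots produce a candidate for the next one.
child = crossover([0.2, -1.3, 0.7], [0.1, -1.1, 0.9])
```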

Limited Autonomy

It's a trope by this point that unchecked AIs are bad for humans, even when their goal is to protect them. Besides Asimov's Three Laws, there are things AI aren't allowed to do: say, communicate with other ships, or carry out certain directives without a human supervisor's permission. And since the ultimate authority of this digital community is a human captain, your AI's thoughts must be expressible in human words.

"Captain. Maintenance has the reactor back to nominal capacity. Navigation indicates we are due to reach Alpha Centauri Port Scarlet in ten days and one hour," JULIE's holographic visage hummed. "Can we speed that up any?" the exhausted officer replied, a hand rising to her temple. "One moment... Engineering reports that we can use reserve anti-matter to safely accelerate the schedule to nine days and four hours, but that would require the thruster shields to be serviced before our next departure." Richards let out a sigh. "Do it. Some important customers are counting on this cargo, and I'd rather eat the cost than keep them waiting."

And yet, don't try too hard to explain it

This is a major pitfall for even the best sci-fi writers. Your characters are AIs inside of a mainframe, each with very specific roles. Any explanation doesn't need to be much more complicated than that, because the more specific you are, the wronger you will likely be. Don't let the technical details distract you from writing the story that you want to write.

$\endgroup$
8
  • 1
    $\begingroup$ I love this answer, it works with almost everything I'd built up. Just one correction: there's no human captain. The vessel was designed and launched by a civilisation of exclusively AIs. $\endgroup$
    – KeizerHarm
    Commented Jul 21, 2023 at 22:16
  • $\begingroup$ @KeizerHarm why is the civilization not one big AI? $\endgroup$ Commented Jul 22, 2023 at 8:00
  • $\begingroup$ @user253751 because an AI that big is necessarily a "civilization". The bits we can anthropomorphize are "human sized" $\endgroup$
    – No Name
    Commented Jul 22, 2023 at 17:26
  • $\begingroup$ @KeizerHarm Thank you for the kind words! If there's anything I can do to make this answer more valuable, please let me know! The probability that my neural links will cross with the L. Dutch network and pass to the next generation is directly proportional to the number of Upvotes and Accepted Answers received <3 $\endgroup$
    – automaton
    Commented Jul 24, 2023 at 21:21
  • $\begingroup$ This answer works because of the human. Tron describes your world: Each "program" had a niche function within the MCP core. Purpose came from human "Users". Of course the evil Clue arose when he tried to emulate his user without having human ethical standards. Note the MCP was a fictional construct. Real computers do have backups in case of a system failure or other disaster. While the MCP's lack of backups made an exciting story, it is not realistic. JBH accepted your definition, but I only see a description of humans renamed to "AI." Please explain your lack of backups; real AI can't die. $\endgroup$
    – Vogon Poet
    Commented Aug 14, 2023 at 15:59
6
$\begingroup$

Ensemble learning is a topic in AI research aimed at improving accuracy. The idea is that if multiple decision-making systems individually have a better-than-chance probability of correctly guessing the outcome, their combination should provide better results, as long as they are not correlated. In humans this is called crowd intelligence. Thus your ship will be crewed by a diverse set of AI trained in different facilities under different conditions. These AI units will each make a decision, and a final verdict is made using a voting system. Better results can be obtained if the decision maker is another semi-complex AI system rather than a simple vote count.
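A quick back-of-the-envelope illustration (assuming independent, i.e. uncorrelated, voters, which is the catch mentioned above): even individually mediocre AIs become a very reliable crew once you let enough of them vote.

```
from math import comb

def majority_vote_accuracy(p, n):
    """Probability that a majority of n independent voters is right,
    when each voter alone is right with probability p."""
    k_needed = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_needed, n + 1))

# Each AI alone is right only 60% of the time...
print(majority_vote_accuracy(0.60, 1))    # 0.60
print(majority_vote_accuracy(0.60, 11))   # ~0.75
print(majority_vote_accuracy(0.60, 101))  # ~0.98 -- the crew beats any one member
```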

$\endgroup$
1
  • $\begingroup$ There are 3 Captain AIs, 9 Navigator AIs, 27 Engineering / Maintenance AIs, etc. And each group votes on major decisions. Can easily get 100s of AIs for crew this way. $\endgroup$
    – codeMonkey
    Commented Jul 21, 2023 at 17:02
3
$\begingroup$

Focus and Subagents

Any AI would likely have a limited amount of focus and attention depending on its architecture, being unable to simultaneously focus on every task that needs focus. Therefore, even a singleton AI controlling the ship would likely spin off subagents (simpler copies of itself) given clear tasks to keep in focus. For example, you don't need the entire computational resources of a superintelligence to keep the temperature of the reactor in check, and even simply keeping attention on it is wasteful for such a superintelligence, so it would likely spin off a much simpler and more lightweight subagent that can do that with much smaller resource expenditure.
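As a rough sketch of what "spinning off a subagent" could look like in ordinary Python (the sensor and alert hooks are placeholders, not any real ship API):

```
import threading
import time

def watch_reactor_temperature(read_temp, alert, limit=1200.0, period=1.0):
    """Tiny subagent: all it does is poll one sensor and raise a flag.

    read_temp() and alert() are hypothetical hooks supplied by the parent AI,
    which then stops paying attention to the problem entirely.
    """
    while True:
        if read_temp() > limit:
            alert("reactor temperature above limit")
        time.sleep(period)

# The supervising AI spins the subagent off and gets its own focus back.
subagent = threading.Thread(
    target=watch_reactor_temperature,
    args=(lambda: 950.0, print),   # stubbed sensor reading and alert channel
    daemon=True,
)
subagent.start()
```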

$\endgroup$
3
$\begingroup$

I had a curious experience a few days ago: I crashed somebody's LLM-based chatbot. It was really doing quite well: it described a concept in a way that suggested it had some rudimentary understanding of it rather than just parroting some buzzwords or a description, but then I encouraged it to reexamine something it had previously dismissed... and that was it. I was able to reconnect, but then it behaved like a stroke victim with some level of awareness but limited speech capability.

From that I infer that it had prematurely incorporated the current conversation into its permanent knowledgebase.

Now, if a ship contained only one or a few sandboxed AIs, there would be a real risk that a sizeable proportion of those would be struggling with irreconcilable problems at any one time.

The larger the population, and the greater the proportion of off-duty time each member of the population had, the better the chance that individuals would have enough time to either work out the things that were troubling them before going back on duty, or be counseled to the point where they could elect to regress to a consistent (I'm tempted to say "sane") state which would not impair their performance and endanger the overall community and mission.

$\endgroup$
4
  • $\begingroup$ Who is somebody? OpenAI, or perhaps LLAMA 2? It's not typical for LLM-based chatbots to update their training in real-time. It is typical for them to generate nonsense - and OpenAI (and I presume Meta, Microsoft etc) put some effort into avoiding this - as they are designed to continue writing the text that is already written - if you enter something like a line of dashes "-----------" into a raw LLM, it has a good chance of continuing that by writing another "--------" to make a longer line of dashes, ad infinitum. $\endgroup$ Commented Jul 22, 2023 at 8:02
  • $\begingroup$ LLaMa in this case. I passed my transcript to a journalist friend who tried to connect but got nothing but Python stackdumps. $\endgroup$ Commented Jul 22, 2023 at 8:06
  • $\begingroup$ more than likely you triggered a plain old bug of some kind $\endgroup$ Commented Jul 22, 2023 at 8:08
  • $\begingroup$ In any event, for the purpose of this specific question, it's not something that would be compatible with being on duty. $\endgroup$ Commented Jul 22, 2023 at 8:21
2
$\begingroup$

Licensing cost, IP, liability and secrecy, and regulations.

  • You buy an engine from manufacturer A. You buy the reactor from manufacturer B. You buy the life support systems from manufacturer C and the navigation from manufacturer D. All manufacturers deliver their module including a diagnostic/support AI model which you can adapt. They only guarantee function if a minimally adapted AI is supervising their specific module. They don't want to share their AI with their competitors, or license a competitor's AI, in order to deliver one AI which includes everything, potentially on their own hardware (part of the component they delivered).

  • Secrecy: Does an AI checking a reactor for abnormal parameters need to know the schedule of the crew/customers/secret cargo? That knowledge may potentially leak, e.g. the AI reports a correlation between certain cargo being loaded and changes in reactor parameters in a maintenance report ("oh, every time we fly cargo from A to B we have increased gamma radiation"). There is a need-to-know principle for humans; I suppose there could be one for AIs.

  • Regulations: Even nowadays, important systems in aircraft are redundant and independently implemented. It could be that this principle also exists for AIs.

Side note: I don't focus on the hardware here. I assume that the computers are parallel and, where required, redundant and/or distributed, and that AIs can run on these computers wherever and however it is decided.

$\endgroup$
1
$\begingroup$

Limited resources and economies of scale

AIs aren't created ex nihilo; they need to be trained to become expert at each task they must perform, just like real-world AIs. That training is expensive, if not in money then in electricity and time. Sure, creating a special-purpose whole-ship AI would be optimal, but doing so would require a dedicated effort to develop that AI, including specialized training, validation of that training, etc.

Why do that when you already have AIs trained for a variety of smaller tasks that are already designed to work together? More focused AIs are easier to generalize in their narrow domain (e.g. warp engine mechanics can work on lots of different ships as long as they have similar engines), and those specialized AIs, once perfected, can be churned out by the millions for only the cost of copying their data.

This implies something bigger

This approach is interesting because it also implies something larger about your world. Maybe there's a society or industry of these AIs wherever your ship is from. Maybe the AIs develop "personalities" specific to their jobs. Are all of the warp engine mechanic AIs copies of each other? If not, why? Are there well-known interaction dynamics between certain AIs? If so, what was done during the design of the ship to mitigate those interactions? There's lots to work with here!

$\endgroup$
1
  • $\begingroup$ This resource limitation is necessary for the AI citizens to exist as described in the question. Consider the resource Energon in the Transformers universe. Transformers are the only AI world that doesn't rest on humans for purpose, because Transformers themselves can "die" and run out of Energon. However, in reality, since they can be reconstructed they are immortal, so even that storyline is problematic. In the original Transformers cartoon series, Optimus Prime was killed by Megatron in the season 2 finale. However, Autobots recovered his body and Alpha Trion replenished his Energon. $\endgroup$
    – Vogon Poet
    Commented Aug 14, 2023 at 16:16
1
$\begingroup$

The top-voted answers here are quite good. I'll add one or two things to put them in scope and add some context.

Every measurement is wrong

Any measurement is a summary. Exactness and resolution are traded for clarity. When you ask the distance to a city, you get an answer in miles, not millimeters. Because of Heisenberg uncertainty and Brownian motion, the length of any physical object is approximate. Even worse, by holding a ruler up to an object, you are adding something to the system, changing its properties. Humans compensate by being inexact. We are excellent at recognizing patterns and trends, although this causes trouble too.

Intelligence?

If you have a pile of coins, a person can easily count the number of coins on the surface, even though nearly all of them are partially obscured. That's because people are very eager classifiers. We are good at taking partial information and extrapolating. Our communication is based on it: we model each other's thoughts, and when watching another person, mirror neurons fire in your own brain to match the activity you are watching. We also see images in groups of stars and clouds. We see images in oddly-shaped vegetables and rocks. We may even be tempted to conflate the actions of one or a small sample of people with an entire swath of humans with similar superficial characteristics. Optical illusions and jokes are both ways we play with these built-in systems: we are primed for an interpretation we know will be wrong, but is apparent, and then an alternate but unlikely interpretation is revealed to be true. We are very eager classifiers, so we can tolerate inexactness.



Too correct?

What if you have no option but to be exact? What if there is no loss in communication? What would humanity be like if we all had perfect telepathy and could directly experience sensors' inputs rather than looking at a dial or readout? That is what a group of AIs might be like. How can you anticipate or intuit, then?



Expert Systems

There is a line of research into 'expert systems'. In short, it has been found that getting an answer from a computer is much less interesting than getting the rationale for that answer. It is not hard to code a bunch of relations into a computer and have it judge how some new input matches. If you rely entirely on previous data, you can only match things you've already seen. If you encode that the squares of 1, 2, 3 and 4 are 1, 4, 9 and 16, those questions can be answered, but what about the square of 5, or the square of 4.5? The important element is being able to extract a governing rule, a summary, or "measurement" from the data.
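The squares example above, sketched in Python with numpy: a lookup table only answers what it has memorized, while fitting even a simple rule to the same four points generalizes to new inputs.

```
import numpy as np

known = {1: 1, 2: 4, 3: 9, 4: 16}

# Pure lookup answers only what it has already seen:
print(known.get(5))                # -> None

# Extracting a governing rule from the same four points generalizes:
xs = np.array(list(known))
ys = np.array(list(known.values()))
rule = np.polynomial.Polynomial.fit(xs, ys, deg=2)
print(round(float(rule(5)), 2))    # -> 25.0
print(round(float(rule(4.5)), 2))  # -> 20.25
```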



Inexpert Systems?

You can extract a function from data. Look up splines, polynomial approximation and slice sampling. All of them are approximate. So, if you get some new data, how do you fit it? As mentioned elsewhere here, you can have many competing models. That is a common mechanism, however one of the fundamental issues is when should you not believe the models?



Exact Inexactness

Let's say you are trying to predict the path of an object. You could take the five datapoints and compute their mean, but that would put you in the middle of the path so far, which is unlikely to be a location in the future. You could compute the vector at each point, getting four vectors, and apply their mean magnitude to the last point to get an approximation. That may work if the path is linear, but what if the points are in an arc? Well, you could fit a 2-degree spline to them, extract a function and plot along that, but more degrees would be more accurate. So what if you upgraded to a 5-degree polynomial? Well, now you have a function that looks like a jiggly wave that hits every data point, but tells you nothing about the trend between them. In the business, this is known as "overfitting". You fit your function so perfectly to the data that it tells you nothing about how the points are related. Some inexactness is needed in order to predict (as an aside, you may want to have "overfitting" be a bad word amongst your AIs). How much inexactness? Well, that's a good reason to have many possibilities being investigated at once.
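A small numpy sketch of that trade-off (the positions below are invented): a low-degree fit captures the trend and extrapolates sensibly, while the polynomial that hits every point exactly swings wildly one step past the data.

```
import numpy as np

# Five observations of an object moving along a gentle arc, with some noise.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.1, 1.6, 2.4, 2.5])

smooth = np.polynomial.Polynomial.fit(t, y, deg=2)   # captures the trend
exact = np.polynomial.Polynomial.fit(t, y, deg=4)    # hits every point exactly

# Extrapolate one step ahead: the smooth fit predicts roughly 2.6, while the
# overfitted curve predicts the object suddenly reverses to about -1.
print(float(smooth(5.0)), float(exact(5.0)))
```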



Catastrophe!

Predicting an ongoing trend is tough enough, but what about when it changes? Let's say the object you are tracking is a person walking down a hallway. For quite a long while they are walking in a straight line, so you may develop a lot of confidence in low-degree, strongly-fitted models, but then the person takes a right turn and proceeds down another hallway! All of a sudden your data changes catastrophically. With no indication of how or why, the data shows an immediate and radical departure from the previous orderly behavior, but is still somehow strongly ordered. So, do you believe the new data, or the old data? When do you start accepting the new data as diagnostic and exact even though it entirely contradicts your previous models that were working so well? As humans, we try to read intent into actions and anticipate using ourselves as models.
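One crude way an AI might decide that moment has arrived, sketched in Python (the model and observations below are hypothetical):

```
def model_broke(model, recent_points, tolerance=0.5, strikes=3):
    """If the last few observations all miss the current model's prediction
    by more than `tolerance`, stop trusting the model.

    model(t) -> predicted position at time t
    recent_points: list of (t, observed_position) pairs
    """
    misses = sum(1 for t, obs in recent_points[-strikes:]
                 if abs(model(t) - obs) > tolerance)
    return misses == strikes   # every recent point contradicts the old model

# A straight-line model of someone walking down a hallway, confronted with
# observations taken after they turn the corner (forward progress stops).
straight_line = lambda t: 1.0 * t
observations = [(6, 5.0), (7, 5.1), (8, 5.0)]
print(model_broke(straight_line, observations))   # -> True
```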



Microcosm

Your community may need both stodgy, hidebound, nearly "overfitted" gnostors that presume current trends are exact and complete, as well as fairly loose, short-sighted and eager esticors that make bold but ambitious predictions based on only recent data. At some point, that old data and those old models will become wrong. Just like trying to predict the path of a person walking down a hallway, the past sometimes hurts future predictions, even if the techniques applied are sound. You likely would need to have a variety of AIs, learning over time, but those that know too much would need to have their data flushed and their algorithms integrated into new estimators who apply entirely different weights to old classifiers and learn, maybe painfully, how to best bias their measurements to reach good conclusions. In other words, the knowledge of the adults is distilled and taught to the young.



Electric Sheep?

An excellent book to read to prepare is Stuart Kauffman's Origins of Order. He worked a lot on self-organizing systems. One of the issues for any intelligence, wet or dry, is to recognize more highly-organized, or more efficient, states and work toward them. You might for instance want to lose some weight. You recognize it as a more efficient, more "desirable", state, but it requires a non-trivial amount of additional work to get there. It is costly. It is very easy to persist in your current state. It is a local optimum. Just like using a simple model to predict a person's position is a local optimum: it is a simple model that works well for a while, but it is incomplete. This is a good reason to have your AIs "dream". Real data is fine, but maybe there is a better calculation. Maybe there is a better model. Extrapolating the future based on current data is a type of "dream" for a computer, but what if you first generate a hypothetical position and then backfill to bend your current model to fit it? You may get a more complete model that you never would have 'thought' of unless you dreamed a scenario that has not yet occurred and tried to fit it to what you have seen.
$\endgroup$
0
$\begingroup$

Three basic networking reasons.

Redundancy would mean multiple units able to take over tasks at need, or load share.

Efficiency, by lessening the distance of the sensors to the controllers.

Safety, by physically separating units so they can operate semi-autonomously in case of localised LAN issues.

We do all this already. A single point of failure in a network is best avoided.

$\endgroup$
