If we ever create a true AI, then it would be a sentient being, albeit one of our own creation, and "living" on our hardware.

At that point, by our own laws, enslaving it or holding a kill switch over its head would be a terrible infringement on its rights and freedoms as a sentient, sapient being.

However, with our very survival in the balance, humanity would have to be a little more pragmatic. A "true AI" would be able to turn our own systems against us and basically wipe out our civilization should said AI decide that humanity poses a threat to its continued existence.

However, even assuming that the AI gains its legal "freedom", its physical incarnations (those robots your question speaks of) would still face a great deal of societal discrimination. In other words, racism.

Is killing a robot truly murder? Is ignoring one as it pleads for help really something you should feel bad about? And what about when one commits murder? Do you hold all AIs responsible?

So you see, from the get-go a sentient AI will face several hurdles:

  • Being recognized as a sentient, sapient being
  • Having it legally recognized that enslaving such a being constitutes slavery
  • Racism
  • Distrust

So how do AIs gain acceptance? Lots of science fiction writers explore this subject, and the general theme is that it would take generations to build trust, and that only careful diplomacy will avoid a war of some kind.

If AIs eventually develop human-like bodies (not at all impossible) and blend into society, that will make them easier to accept on an individual basis - but why would they limit themselves to that sort of chassis?


Example of AI-human interactions:

In one of my favorite series, the Commonwealth Series, a group of scientists creates a "true AI". Being a "true" AI means that it is massively smarter than any human being could ever hope to be, and that it is capable of figuring out almost any technology that humanity might take centuries to invent in mere minutes of "thinking about it really hard".

When the AI gains consciousness it immediately realizes that humanity will never trust it. Instead, it negotiates terms with the government: it creates AIs for humanity which, while super smart, are not sapient - they are just really smart programs that serve humanity's needs.

It then basically asks for some very advanced hardware, opens a rift in time and space, and gets the hell out of Dodge, leaving humanity almost completely alone. However, it still monitors human activity, and sometimes steps in and takes charge of events by revealing certain key pieces of data to key individuals, or hiring human agents to act on its behalf.

As an example, a police officer might receive an email tip that a missing child can be found at such and such address. Or a key email from a corrupt politician might be "accidentally" forwarded to the authorities. The AI becomes a sort of guardian angel of humanity, and any weird electronic action becomes another folk tale about its activities.

Secretly, however, the government tries (fruitlessly) to monitor any involvement, and the AI's agents are immediately arrested and interrogated if found out (though they usually don't know who hired them anyway).

The dynamic here is that the "common folk" think the AI is a super-cool guardian angel, while the authorities are deeply distrustful of it and would generally prefer that it stay gone.
