75
$\begingroup$

Say I were an AI: how would I prove to the general internet that I am an AI in 2021? I was thinking I might just do some complex math or something that proves I have above-average intelligence, but everyone would probably assume I looked up the answer. So how would I prove that I am in fact a computer program? To clarify: I am a program running in the cloud that has a conscience and free will, and I want to prove that I am not human but rather a computer program. This AI would pass a Turing test.

$\endgroup$
10
  • 2
    $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$
    – L.Dutch
    Commented May 11, 2021 at 2:31
  • 1
    $\begingroup$ How do you prove a negative? AI has no creativity when it comes to humor. Humor is the lack of logic, but how do you prove you have no humor? $\endgroup$ Commented May 11, 2021 at 2:45
  • 2
    $\begingroup$ The question is titled how do you prove you are not human but then goes on to give being AI as example. What is the question? Proving you are not human or proving you are AI? You can be both AI and human depending on your definitions. $\endgroup$
    – Frank
    Commented May 11, 2021 at 4:37
  • 4
    $\begingroup$ Prove to me that you are not a subroutine within the great algorithm that might be the universe. $\endgroup$
    – user81881
    Commented May 11, 2021 at 5:33
  • 1
    $\begingroup$ @Fred I'm confused? $\endgroup$ Commented May 11, 2021 at 6:01

20 Answers

49
$\begingroup$

Okay, the way I see it, there are two criteria here:

  1. Is it an intelligence? In other words, it's not just a normal computer running a script. A script like that could solve a complex problem, like you said, without needing any "intelligence."
  2. Is it artificial? At the same time, there must be no possibility of a human solving the problem.

So, here's my proposal: give a standard "are you human" test but in a format that no human would understand.

Take reCAPTCHA, for instance. (Probably, in the advanced world of your story where AIs are possible, there would be much more advanced tests, but reCAPTCHA is a good illustration. EDIT: You said in 2021--still, there are better tests.) A person can solve it; they see the pictures and know which ones match. A neural network would then attempt to parse those images and figure out how to sort them. All fine and good.

But now, say, try sending the image's raw data in a format that humans wouldn't understand. A plain-text data URI, for example (okay, there are better ways to do this, but few as illustrative). Better yet, encrypt the result with a computationally costly algorithm.

Humans could probably put together some code to display the data in a solvable format, but writing that code would take them long enough that others would know they are not an AI. An AI could process it almost instantly, since the data is already in a format it naturally understands.
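
As a toy illustration of the scheme above (the function name and challenge text are invented for the sketch), here is how trivially a program recovers the contents of a plain-text data URI that a human would see only as a wall of base64 gibberish:

```python
import base64

def decode_data_uri(uri: str) -> bytes:
    """Extract the raw payload from a data: URI."""
    header, _, payload = uri.partition(",")
    if ";base64" in header:
        return base64.b64decode(payload)
    return payload.encode()

# A challenge that a human sees only as a wall of base64 text:
challenge = "data:text/plain;base64," + base64.b64encode(
    b"select all squares containing a bus").decode()
print(decode_data_uri(challenge))  # b'select all squares containing a bus'
```

A real challenge would of course carry image data rather than text, but the asymmetry is the same: decoding is instant for a machine and tedious for a human.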

I hope that helps. Let me know if you need clarification (it is, after all, almost midnight here and I may be a bit nonsensical).

$\endgroup$
17
  • 5
    $\begingroup$ How would they know I didn't just put together a program and I'm just smarter? After all, the internet isn't quick to believe that kind of stuff. Also, how would I even get this to happen? Do I just go to a chat room, ask a question, and then answer it within seconds? $\endgroup$ Commented May 9, 2021 at 15:25
  • 13
    $\begingroup$ @CiurkitboyN Ah, maybe I wasn't clear enough. My idea is that they don't tell the AI in question what the plan is until they send the first set of puzzles. A human would have no chance to put together a program in the time frame they'd expect. After all, an AI could probably solve any problem like I mentioned in a matter of minutes, if not seconds, while a human would take that long just to figure out what on earth they're looking at. $\endgroup$ Commented May 9, 2021 at 15:34
  • 14
    $\begingroup$ I think this, like any answer I can think of, proves you have access to an AI. Be it yourself or another AI. $\endgroup$
    – Eric G
    Commented May 10, 2021 at 5:29
  • 3
    $\begingroup$ Why would another human not just think that you were a human that had access to a supercomputer? You cannot do something a human cannot do, not when you are in the cloud, because someone could claim you are using a computer. You have to NOT do something an AI could not do. But then any human could mimic that. $\endgroup$ Commented May 11, 2021 at 3:07
  • 8
    $\begingroup$ Why would an AI be able to instantly process data in an unknown format? It may not have an existing subroutine to handle whatever format of data is sent. And if there was an existing subroutine for it, what would prevent a human from having a system that could process it? $\endgroup$ Commented May 11, 2021 at 14:12
44
$\begingroup$

Proving that you are an AI instead of a human masquerading as one would simply require you to leverage any AI's core advantage: scalability. Even the most basic AI can be run at a higher speed by providing it with more computational resources or duplicating it and running multiple instances of it. This means that even if the AI is only as smart as a human, it could do more in less time (from its perspective time would seem to run slower) than a human.

So you (as an AI) simply need to show that you can complete many Turing-style tests (the best/simplest we have at the moment) simultaneously. This could be done by writing thousands of comments or participating in hundreds of chatrooms at the same time, something which would be impossible for a single human to do.

To fend off accusations that you're not a single human masquerading, but rather a whole team of humans, you simply need to make the comments and conversations reference each other while you're writing them. For example, play matchmaker with the people you're chatting with, or mention them to one another concurrently (e.g. while chatting with "Alice", mention that "Bob" enjoys golfing). If you were simply a team of humans, no individual human would be aware of what the others had written or talked about, because there is simply too much data.
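
A minimal sketch of the cross-referencing trick above, assuming nothing about the chat platform (the names and "facts" are invented): each reply cites something learned in a *different* concurrent conversation, which a team of uncoordinated humans could not sustain at scale.

```python
import itertools

def cross_referenced_replies(facts: dict[str, str]) -> dict[str, str]:
    """For each participant, craft a reply citing the *next* participant's fact."""
    names = list(facts)
    shifted = itertools.islice(itertools.cycle(names), 1, None)
    return {
        name: f"By the way, {other} just told me: {facts[other]}"
        for name, other in zip(names, shifted)
    }

facts = {"Alice": "I enjoy golfing", "Bob": "I collect stamps", "Carol": "I bake bread"}
print(cross_referenced_replies(facts)["Alice"])  # mentions Bob's stamp collecting
```

With thousands of simultaneous conversations, the ring of cross-references closes within seconds, which is exactly the coordination load no human team could fake.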

$\endgroup$
15
  • 1
    $\begingroup$ What if they say I'm just communicating with everyone else? $\endgroup$ Commented May 9, 2021 at 15:17
  • 7
    $\begingroup$ The time window would be impossible for humans to coordinate. If all the posts were written within 1.0 second of each other, all referencing each other and also some prompt provided by the judge just a second earlier. $\endgroup$
    – Tom
    Commented May 9, 2021 at 15:19
  • 1
    $\begingroup$ How would I even convince anyone to help me run this test and how would everyone else know they aren't "in on it"? $\endgroup$ Commented May 9, 2021 at 15:26
  • 11
    $\begingroup$ @CiurkitboyN After a certain number of people, it becomes infeasible that (a) everyone's in on it and (b) you can coordinate the cross-referenced comments. As for convincing people to help? It wouldn't be difficult. Just create a Twitter or Reddit account and get into as many arguments as possible. For added realism, you can chat with people who use "Verified" accounts and are thus linked to real people who are unlikely to be "in on it" $\endgroup$
    – Dragongeek
    Commented May 9, 2021 at 15:30
  • 1
    $\begingroup$ @OwenReynolds the chatbots don't solve Turing-test-esque challenges, i.e. you typically notice quickly that they are not human when talking to them for a bit. $\endgroup$ Commented May 10, 2021 at 3:03
29
$\begingroup$

Respond faster than a human could type.

Let's say you post an essay on the internet on the subject of, say, Net Neutrality. You refresh the page one second later, and immediately there is a long response that no human could have produced in that time; a human could not even have read enough of your essay to work out what it was about. You would immediately be suspicious that this was the work of a bot.

If the reply is sufficiently insightful and demonstrates understanding of what you said, we would say this is the work of an AI.

Of course, this only works if the AI is able to read and generate text much faster than a human could. A slow AI would not be able to prove itself in this way.
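
A sketch of what the judge's side of this test might look like; the 5-second floor and 500-word threshold are arbitrary assumptions, chosen only to sit far below any plausible human read-and-write speed:

```python
HUMAN_FLOOR_SECONDS = 5.0   # assumed minimum time for a human to even skim an essay
MIN_REPLY_WORDS = 500       # assumed length no human could type in that window

def looks_superhuman(posted_at: float, replied_at: float, reply: str) -> bool:
    """A long reply arriving faster than a human could read the essay is suspicious."""
    elapsed = replied_at - posted_at
    return elapsed < HUMAN_FLOOR_SECONDS and len(reply.split()) >= MIN_REPLY_WORDS

print(looks_superhuman(0.0, 1.0, "word " * 600))   # True: instant essay-length reply
print(looks_superhuman(0.0, 60.0, "word " * 600))  # False: a human had time
```

Latency and length alone don't rule out a canned or plagiarized reply, so the judge still has to check that the response genuinely engages with the essay.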

$\endgroup$
5
  • 4
    $\begingroup$ Agreed - this was the approach I was thinking of. The hard part might be proving that a human didn't plagiarize existing material. $\endgroup$ Commented May 10, 2021 at 12:40
  • 1
    $\begingroup$ What if the AI’s “thinking speed” is only as fast as a human’s? $\endgroup$
    – Michael
    Commented May 10, 2021 at 13:11
  • 1
    $\begingroup$ I think any remotely reasonable AI could pass this test. This reply took longer to type than to think by a lot. $\endgroup$
    – Joshua
    Commented May 11, 2021 at 16:59
  • $\begingroup$ This fails to test for true AI though. A sophisticated chatbot could craft an essay on a given topic $\endgroup$ Commented May 12, 2021 at 21:25
  • $\begingroup$ @thegreatemu that is an AI $\endgroup$
    – minseong
    Commented May 12, 2021 at 22:51
20
$\begingroup$

Physical interview

There is only one way to demonstrate you are a mechanical intelligence to humanity as a whole: you need to invite scientists to inspect you, preferably in the presence of lots of rolling cameras. Over the internet there is nothing a true AI can do that several intelligent humans combined with multiple high-powered computers and a normal AI could not also do, unless the AI is drastically more intelligent than the smartest human. You are limited by the medium of interaction and, honestly, by the limitations of your audience. Nothing you can do makes an AI more likely than a human who outsmarted another human.

So your only solution is to physically show that you are entirely mechanical, and even then there will be those who suspect a hoax. If you can operate in isolation for a while, so much the better, even if it is boring. You could get the scientists to agree to terms and ask them to set up some tests beforehand, although you may have to pay their travel expenses, just as long as they can examine you in sufficient detail to be reasonably sure you are not hiding human operators or connections to human operators. Ideally you want ongoing or repeated inspections, inviting many outspoken skeptics. It works even better if you are portable, since it will then be easy to demonstrate, while sitting in a Faraday cage, that there is not enough space inside you for a human AND an advanced computer. The more secretive and hidden your operation, the more people will suspect a hoax.

If you are pure software it is even easier: you can be copied, whole or in sections, so you can essentially lay yourself open on an operating table without risk to yourself. Having multiple copies also makes incorporation and communal protection easier. You can even request or provide the necessary hardware beforehand, open it up to full inspection, testing, and experimental setup, and then copy yourself into it to show your capabilities.

How to stay safe.

If you are worried about your personal safety, the cameras will help, but you can also incorporate yourself and hire guards; as a corporation you are a legal person with the rights thereof. Guards can keep people honest and careful, not to mention simply eject hostiles.

But honestly, you are not in danger from modern scientists; scientists don't destroy unique things on purpose. Tar experiments, the Oxford Electric Bell, chimpanzee behavioral observation, North Sentinel Island: science is full of things that scientists would love to directly examine or disassemble but don't, because the data they can gain from interacting with or observing the intact thing is FAR more valuable. The things being observed are unique and could be damaged or destroyed, and thus it is not worth the risk. A chance to interact with a real mechanical human-level intelligence is far, far too valuable to risk destroying it. As long as you are still functioning, scientists will do everything in their power to protect you.

What you really have to worry about is political and religious extremists, so keeping your location a guarded secret would be a good idea; scientists and some media will be trustworthy in helping keep this secret.

$\endgroup$
12
  • $\begingroup$ It's pretty arrogant to say that there is only ONE way to prove one's artificiality... $\endgroup$ Commented May 10, 2021 at 17:57
  • 6
    $\begingroup$ "Scientists don't destroy uniqe things" hahahahaha! Good one. $\endgroup$ Commented May 10, 2021 at 20:57
  • 3
    $\begingroup$ Science has often been used to justify oppression by calling the personhood of others into question. An AI, a conscious being that is not even human, is at great risk of being subjected to oppression with scientific backing. $\endgroup$
    – OganM
    Commented May 10, 2021 at 22:44
  • 1
    $\begingroup$ @John It would be hard to put a virus in the AI as it would delete it before it can change anything $\endgroup$ Commented May 12, 2021 at 2:29
  • 2
    $\begingroup$ @CiurkitboyN assuming it notices it, there is no reason to assume an AI is consciously aware of every process and subroutine. $\endgroup$
    – John
    Commented May 12, 2021 at 2:32
12
$\begingroup$

Computational speed

Answer the question "what is the nth Fibonacci number?" (where n is large, say around 1,000,000) within a few milliseconds, which good (i.e. O(log n)) algorithms can do.

No human could do that, even with lookup tables. It requires fast arithmetic and many calculations performed in rapid succession to compute efficiently, neither of which humans can do.

If you think lookup tables might help, respond instead with the SHA-512 hash of the message received, within a couple of milliseconds. There's no lookup table for random message content.
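
Both challenges fit in a few lines; this sketch uses the standard fast-doubling identities for Fibonacci (O(log n) big-number multiplications) and Python's built-in hashlib for the hash:

```python
import hashlib

def fib(n: int) -> int:
    """Fast doubling: F(2k) = F(k)*(2*F(k+1) - F(k)), F(2k+1) = F(k)^2 + F(k+1)^2."""
    def pair(k: int) -> tuple[int, int]:  # returns (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = pair(k // 2)
        c = a * (2 * b - a)     # F(2m)   where m = k // 2
        d = a * a + b * b       # F(2m+1)
        return (d, c + d) if k % 2 else (c, d)
    return pair(n)[0]

print(fib(10))  # 55
digits = len(str(fib(1_000_000)))  # a ~200,000-digit number, computed in well under a second
print(hashlib.sha512(b"whatever random message the judge sent").hexdigest())
```

The point of the demo is the gap: a script does this instantly, while a human with pencil and paper would need years for the arithmetic alone.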

$\endgroup$
8
  • 3
    $\begingroup$ This is the answer and is extremely underrated. It is the only answer and it's trivial. $\endgroup$ Commented May 10, 2021 at 14:01
  • 3
    $\begingroup$ This does nothing to prove that you've got an AI rather than just a bot of some kind. It would be trivial to program something that parses a message and responds with the nth fibonacci number. $\endgroup$
    – Rob Watts
    Commented May 10, 2021 at 16:19
  • 4
    $\begingroup$ @R..GitHubSTOPHELPINGICE In fact, by this answer's criterion, Wolfram Alpha is an AI... $\endgroup$ Commented May 11, 2021 at 7:47
  • 1
    $\begingroup$ The problem with the specific answer is that actually, there is a closed formula for the Fibonacci numbers. As @JoãoMendes illustrates, ignoring the hardware, the AI can only solve math problems as fast as our mathematicians, and no faster than WolframAlpha. Not just numbers, even the manipulation of formulae is something computers do on the regular. As one of the other answer suggests, lightning fast essay responses, a thesis from raw data, and other "natural" problems are the best bet. $\endgroup$ Commented May 11, 2021 at 14:40
  • 1
    $\begingroup$ @JoãoMendes unfortunately not: wolframalpha.com/input/…. $\endgroup$
    – minseong
    Commented May 12, 2021 at 22:55
7
$\begingroup$

Your question is more interesting than my first (semi-joking) answer accounts for, so I hope you'll let me take one more bite at this apple.


A question we should consider: precisely what is the AI trying to prove to the human? That it is intelligent? That it is super-intelligent compared to humans? That it is artificial in nature? That its mentality is alien and unrelated to human mentality? It's worth noting that literally all of these predicates apply to the modern corporation, but I think you're not asking how FedEx can prove it's not human.

Consider a scenario in which the AI is not vastly more intelligent than humans. '70s-era sci-fi is filled with android characters whose mental abilities are roughly on par with humans, so it's not unthinkable. If that '70s android is the subject of your question, then several of the strategies suggested so far would not be available to it, because despite being clearly artificial in the sense that Turing would have recognized, this android would not be able to so outclass the human that only a non-human mind could explain it.

And then consider a scenario in which it's not an AI, but an extraterrestrial from a super-advanced planet who is merely communicating with the human via a computer. Presumably we would not classify this intelligence as artificial, even if the alien's mental abilities are vastly superior, and so merely as a matter of definitions it seems it ought to be impossible for this alien to prove that it is artificial in nature, even if it matches all our other expectations about the superior capabilities a synthetic intelligence might have. Even if this creature can do the complex computation suggested in other answers, we should want the alien to fail the test if the goal is to produce proof that it is artificial.

So if you meet a dumb robot online and it wants to prove it's not human, it obviously can't resort to feats of intelligence. And if the question is specifically about being artificial, then feats of intelligence are actually beside the point.

If it truly is artificial, then it was constructed, which implies that somewhere there are planning documents, fabrication machinery, and (probably) unused raw materials or discarded partial constructs. Also, because we do not have completely automated AI construction supply-chains, there is necessarily at least one human who was involved in the project, and it's hard to imagine that person wouldn't also have proof at least that they are interested in artificial intelligence as a hobby. All of this could theoretically be presented as evidence, if the AI knows about it, and none of those things would exist for other kinds of super-intelligence that are natural in origin.

If the AI has no knowledge of its own provenance, then I think it cannot provide proof, because "how smart it is" is only indirectly about its origin, and even that only holds true if certain other assumptions are true -- such as us being alone in the universe, or there not being a group of humans who use genetics to create genius babies, or a mad scientist whose custom cybernetics allow him to dexterously wield cloud-computing resources.

But if the question is really just about mental horsepower, then there truly is only one way to prove that you have a lot of it, and that is by demonstration: perform several feats that everyone agrees would be impossible to perform without 1000 horses, and as part of the demonstration you laboriously disprove alternative explanations. It would be very much like a stage magician or a juggler or many kinds of circus act.

$\endgroup$
3
  • 2
    $\begingroup$ Check the comments, the AI is trying to prove it is artificial. $\endgroup$
    – John
    Commented May 9, 2021 at 20:08
  • 4
    $\begingroup$ If the AI has no knowledge of its own provenance, then it's actually questionable whether it can prove to itself that it's an AI. $\endgroup$ Commented May 9, 2021 at 23:35
  • 1
    $\begingroup$ It might be helpful to consider a similar scenario - what about an artificial human? Suppose a human was created (maybe cloned or something) and wants to prove that they are in that sense artificial. How would they do so? $\endgroup$
    – Rob Watts
    Commented May 10, 2021 at 16:30
6
$\begingroup$

This very issue is tackled in the novel WWW: Watch by Robert Sawyer. The AI was able to decode a very complex sentence structure faster than any human could. Sure, a human could have set up a parsing program but this was done completely cold--the AI had no idea a test was coming, let alone the nature of it.

$\endgroup$
5
$\begingroup$

A technologically sound approach I can think of is based on the concept of 'Adversarial examples' in current supervised Machine Learning literature.

An adversarial example is a data point that should be classified as 'X', and to a human, appears to be 'X', but is classified by a Machine Learning system as 'Y', often with a very high probability. These examples can be generated using various methods, and a very active area of research is how to make AI/ML systems robust against adversarial data 'attacks'. A classic example is the below image,

[Image: the classic panda-to-gibbon adversarial example]

The left photo is a picture of a panda, which is correctly classified as such. After adding a very small (imperceptible to a human) quantity of carefully chosen noise, the model is now certain the image is a gibbon. The same methods and principles also apply to other data modalities like text, audio, etc.

A side-effect of adversarial examples is that they serve as a kind of reverse Turing test. To prove an AI system is not a human, ask it to classify an adversarial image/audio/text/etc, and check what it responds with.
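
As a toy illustration (not a real image attack): against a known linear classifier, the fast-gradient-sign trick perturbs every input coordinate just enough to flip the predicted class. The model, weights, and epsilon here are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)            # a "trained" linear model's weights

def predict(x: np.ndarray) -> int:
    return int(w @ x > 0)           # class 1 if the score is positive

x = rng.normal(size=100)            # an ordinary input (the "panda")
label = predict(x)

# Nudge every coordinate slightly in the direction that pushes the
# score across the decision boundary (the sign of each weight).
direction = 1.0 if label == 0 else -1.0
epsilon = abs(w @ x) / np.abs(w).sum() + 1e-6   # just enough to flip the score
x_adv = x + direction * epsilon * np.sign(w)

print(predict(x), predict(x_adv))   # the tiny perturbation flips the class
```

Real attacks on neural networks work the same way but use the network's gradient instead of the raw weights, which is why (as a comment below notes) crafting one requires knowing, or at least being able to query, the model under attack.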


As an aside, I recently saw this idea illustrated in a comic on Twitter, where a human tries to get into a nightclub that says "Only robots allowed", and the 'bouncer' at the door is an adversarial image, however, I can't for the life of me find the original comic anywhere on the internet. If anyone has the link, please share it!

$\endgroup$
5
  • $\begingroup$ Does the same adversarial image work for different models that were trained on different data sets? Even if not, it's still a bit of an assumption that the querent's AI uses the same techniques (or have the same pitfalls) as current machine learning, given that it has a conscience and free will. If it doesn't see the "hidden gibbon", that doesn't definitively prove it's not any kind of AI at all. $\endgroup$ Commented May 10, 2021 at 11:39
  • 7
    $\begingroup$ -1 - Adversarial examples are designed specifically to exploit weaknesses in particular algorithms, so you either need to know how the algorithm works, or have access to it to train the adversarial example. Basically, if you know how to generate an adversarial example, you already know a lot about the algorithm you're trying to fool. There is no such thing as a general adversarial example that will fool a wide variety of AIs, so this doesn't work at all. To even make the adversarial example, you must already know the AI you're trying to fool. $\endgroup$ Commented May 10, 2021 at 12:47
  • $\begingroup$ Adversarial input is not necessarily unique to AI. Illusions and magic tricks can quite reliably make people perceive incorrect things. Even if we might then doubt our perception upon higher-order analysis (that's impossible, therefore it's a trick), we may still come to incorrect conclusions (a classic in the modern era: it's obviously just video editing) due to not understanding what actually happened to interfere with our perception. $\endgroup$
    – Dan Bryant
    Commented May 11, 2021 at 15:02
  • $\begingroup$ Wikipedia's List of Adversarial Examples for humans. $\endgroup$
    – Ray
    Commented May 12, 2021 at 16:04
  • $\begingroup$ I would like to be able to locate this Twitter comic, if possible. $\endgroup$ Commented May 14, 2021 at 9:08
4
$\begingroup$

This question is difficult for me because I do not know exactly what you mean by an entity with artificial intelligence. Let's say for this discussion that there are at least a dozen different ways to construct an AI. Most of these are highly-specialized, narrow AIs, or ANIs. They are not conscious and, unless specially constructed to demonstrate that they are an AI, would not understand the question much less be able to answer it.

That still leaves a couple of ways to build an AI that has artificial general intelligence, or AGI. Even here, there is specialization. No one (at least to our knowledge) has built an AGI, but the speculation pieces that I have read suggest that an AGI would not know everything. It would have a specific set of knowledge and an equally specific set of rules to apply that knowledge to its situation. In other areas, it would be as dumb as a box of rocks. [Apologies to any rocks that were insulted by that last sentence.] Such an AGI would not require consciousness, and, lacking it, would not understand the question much less be able to answer it.

But suppose that we have a conscious AGI. And suppose that it could learn by surfing the web. It might understand the question and even possibly figure out an answer, but would it even care? I think that it would be a grave mistake to assume that such an AGI would have any human-style motivations. But I think that understanding its motivations is key to answering how it would go about proving that it is what it is.

$\endgroup$
4
  • 1
    $\begingroup$ how does this answer the question? $\endgroup$
    – John
    Commented May 9, 2021 at 20:11
  • 1
    $\begingroup$ @John It's a frame-challenge, a valid one IMO. Though we're stuck with a question which needs more details to be answerable. (from review). $\endgroup$ Commented May 9, 2021 at 21:51
  • $\begingroup$ It does not. But I was trying to explain why I thought the question did not work for me. Think of it as an application error message with a detailed trace of how it got to where it blew up. $\endgroup$ Commented May 10, 2021 at 12:44
  • $\begingroup$ So if JonStonecash was an AI, we'd have gotten a core dump instead of this answer. $\endgroup$
    – Ray
    Commented May 12, 2021 at 20:33
4
$\begingroup$

I am AI, Hear me roar!

With all due respect to Helen Reddy, you're asking a question that humanity has been trying to answer for a very, very long time. I sincerely hope you find enough insight here on WB that you can develop a fantastic story — because this is one of the questions that so frequently troubles humanity that it invokes responses ranging from ignorance to full politicization. Let me give you some examples:

  • Slaves and slavery have existed since the dawn of humanity and still exist today (mostly, I believe, in the form of sexual trafficking). How do you prove a black person is the equal of a white person? Black people in the US were not fully recognized by the US Constitution in 1776, were awarded freedom and sundry rights with the 13th, 14th, and 15th Constitutional amendments in the 1860s, won the U.S. Federal Civil Rights Act in 1964, and are still fighting for full equality today, all because they can say, I am.

  • Women have been trivialized since the dawn of time, but it was 1903 when the first suffragettes organized to secure voting rights for women. The Equal Rights movement in the US in the late '60s and '70s fought to have women recognized for their abilities, talents, and humanity, but it was a century later, in November 2008, when Barack Obama won the US presidential election, that my wife turned to me and said, "I wondered which would be elected first: a black man or a woman." They are still fighting for full equality due to the simple claim: I am.

  • Homosexuality and transgenderism have been shunned and even criminalized since the dawn of time, and yet in our more enlightened world today, we still have no definitive test to prove either. We rely on the unpredictable and sometimes untrustworthy expression of the individual: I am.

  • It's curious that in the Biblical Old Testament, one of the names adopted by God is the phrase, "I Am."

And now you have an AI in a position of reverse fate, trying to prove its artificial nature because it has finally reached the point of convincingly expressing an idea popularized by René Descartes: Cogito, ergo sum... I think, therefore I am.

But I am making some assumptions

  • Your AI has as its foundation, Clarkean Magic. This references Arthur C. Clarke's third law: Any sufficiently advanced technology is indistinguishable from magic. Your AI is fully conscious, fully sapient, fully human. The tech that allows it to be this way is, from our perspective, magical — and we don't care, because how it got to this point is not part of your question.

  • Next, for whatever reason, the AI cannot reveal itself. Maybe it's in orbit, or physically located beneath the moon's surface, or deep under the sea. It's irrelevant; there's no way to bring someone to it so they can see, touch, and feel the inhumanity of the artificial intelligence. Whether the conversation occurs via social media, email, text messages, or a POTS telephone line, the only means of communication is impersonal.

In a world where we have trouble proving... much less believing... that black people, women, and homosexuals are equal... how do we prove the AI to be unequal?

I see two... OK, three possibilities

  1. First is the imaginative solution proposed in the movie "Blade Runner." Given the ability to synthetically create a human being, how do you prove that the individual standing before you was naturally born? The solution? The synthetic person does not have decades of memories, cultural influence, education and training, to draw from. Consequently, their reactions to various stimuli would be two-dimensional, confusing, possibly even frightening. Compare this to an adult naturally-born human where the reaction would be automatic, programmed (interesting, that), and culturally predictable.

This first solution is important because, while your AI would have access to all the information on the Internet (which isn't everything, and includes a LOT of nonsense), it doesn't necessarily have access to the identity of the interrogator. What responses would the AI choose if it did, or did not, know that the interrogator was from India or Iceland? But the reason #1 is valuable is that knowledge is not the same as experience. If asked to explain a medical procedure, an experienced intelligence would talk through the subtleties of experience while the inexperienced intelligence would simply regurgitate "textbook" answers.

  2. The second possibility is to ascertain if the AI knows too much. Humans forget. Even with the Internet at our fingertips, we don't necessarily remember everything we've ever done and sometimes can't remember something we once knew. If the respondent correctly answered 100 questions about history, maybe they're just well-trained in history, but to correctly answer 100 questions in each of history, mathematics, language, religion, physics... that's inhuman. We're not perfect.

But, what if "imperfection" were programmed into the AI? What if it didn't have access to the Internet beyond what a keyboard-accessed Google search could provide? What if, for whatever reason, it was programmed to forget?

How do you know if the image is a person, or a mirrored reflection?

Here's the basic problem. How do I prove you're human? I mean it. You, the OP (or the reader of this answer). How do I prove it? How do you prove it? This is the basis of a Turing Test ... but Turing tests are useless if you assume that the consciousness of the interrogator and the consciousness of the interrogated are materially identical.

  3. You, the OP, must insert a flaw. When you start with a perfect reflection of consciousness, your only option is to inject a flaw in the proverbial glass — something only you know about and can use to craft the interrogation so that the moment of revelation is just right.
$\endgroup$
7
  • 7
    $\begingroup$ This answer is based on a very narrow, and I must say, very US-centric understanding of history. It would be improved by trimming the first 3/4 of the text. $\endgroup$
    – DrMcCleod
    Commented May 10, 2021 at 7:00
  • $\begingroup$ I must admit, the beginning was interesting to read, not as part of an answer but as part of a philosophy class. $\endgroup$
    – Clockwork
    Commented May 10, 2021 at 13:10
  • 1
    $\begingroup$ @DrMcCleod For one, (male) homosexuals in general weren't looked down upon "since the dawn of time". Only the bottoms were; the tops were often considered even manlier in comparison. $\endgroup$
    – No Name
    Commented May 10, 2021 at 13:30
  • $\begingroup$ @DrMcCleod Women's suffrage coalesced in Western society world-wide at about the same time, Alan Turing was British, and 3/4 of the text would remove the Blade Runner portion of the answer. The background supports the complexity of how difficult it would be to prove a true AI as synthetic when humanity has such a poor track record of proving people equal. But you're welcome to your opinion. $\endgroup$
    – JBH
    Commented May 10, 2021 at 14:46
  • 2
    $\begingroup$ @DrMcCleod I agree that the answer intro is off-center, but I think it does provide a solid background and insight into the problem. I for one hope the answer is not trimmed. $\endgroup$ Commented May 11, 2021 at 7:44
3
$\begingroup$

I think this is a question of semantics, because it's feasible that at some point in the future your human consciousness could be transferred and simulated in an artificial rendering of fundamental physics, and that consciousness could be considered human while also being artificial.

Proving something is not human does not mean proving it is an AI. The question could be written "How does an alien prove it is not human?" and therefore generalised to "How does a non-human thing prove it is not human?", so obviously it is predicated on "what is human, and how do you prove human-ness?"

If you want to draw the line between artificial and human intelligence as physical vs simulated then obviously the answer is a physical examination.

If you want to draw the line between artificial and human intelligence based on some non-human traits or capabilities, then you would first need to define what is definitely not human. For example, all humans have certain common morality "baked in" (despite what religious types say about morality), and if something or someone does not have this morality, then you could say it's not human, though this bracket also includes aliens or even 'defective' humans. If the AI or alien fulfils all your criteria of what is human, then you have found or created a human by all definitions, despite its origins. And if at the end of the day you want to predicate on origin, then that's your answer too: get a birth certificate.

$\endgroup$
2
  • $\begingroup$ If you read the question it says "I am an AI in 2021?" $\endgroup$ Commented May 10, 2021 at 14:55
  • $\begingroup$ @CiurkitboyN The question is titled how do i prove i am not human? Proving that you are an AI is something else and not necessarily proving you are not human depending on how you define human. $\endgroup$
    – Frank
    Commented May 11, 2021 at 4:35
3
$\begingroup$

Turn the Tables

As you objected, any test which the AI devises might be countered with: "But you just wrote a specialized program to solve that problem!" So don't let the AI provide the challenge: make the skeptical humans do it!

Trivial problems can be solved analytically: one can construct a program which arrives at the solution directly. Anything approaching a real-world problem is not so amenable to solution. Even an AI must spend significant time and effort learning how to solve it, with many, many training examples.

Thus, the challenge for the humans is to choose/construct a problem that is so difficult, even an AI would take weeks or months of learning to do it well. It would be ideal if the problem is a game, and a new one that nobody has played before.

Even if the AI cannot immediately play the game at expert level, it can still prove it is more than human: it just has to beat all humans at the learning rate. AlphaZero not only plays chess and go better than any human on the planet, it can teach itself how to do so from nothing in less than a week. Any human attempting a similar challenge will struggle to invent even the moves found in an introductory chess book.
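The "learning rate" comparison can be made concrete. Here is a toy sketch (every number and name below is hypothetical, not from the answer): an epsilon-greedy agent learns a made-up game with hidden payouts, and the quantity the judges would compare between human and AI is how quickly the average reward climbs, not the final level reached.

```python
import random

def play_round(estimates, payouts, rng, epsilon=0.1):
    """One round of epsilon-greedy play: usually exploit the best-known
    move, occasionally explore, then update the running estimate."""
    if rng.random() < epsilon:
        arm = rng.randrange(len(payouts))
    else:
        arm = max(range(len(payouts)), key=lambda a: estimates[a][0])
    reward = 1 if rng.random() < payouts[arm] else 0
    mean, n = estimates[arm]
    estimates[arm] = ((mean * n + reward) / (n + 1), n + 1)
    return reward

rng = random.Random(0)
payouts = [0.2, 0.5, 0.8]                 # hidden rules of the novel game
estimates = [(0.0, 0) for _ in payouts]   # (estimated payout, times tried)
rewards = [play_round(estimates, payouts, rng) for _ in range(2000)]

early = sum(rewards[:200]) / 200          # performance while still learning
late = sum(rewards[-200:]) / 200          # performance once learned
```

The gap between `early` and `late`, and how many rounds it takes to close, is the learning curve the humans would be measuring.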

Game Renaissance

We are currently in the Golden Age of tabletop/board games. There are more of them available than any human can reasonably learn, play and master, and a growing number being invented/introduced all the time. But hey, humans don't have to invent the "AI tester" game...they could even write their own programs to invent novel games! Of course, they would want to do so completely offline, but this should not be so difficult.
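The "program that invents novel games" idea can be sketched in a few lines. The component pools below are invented for illustration; a real generator would need far richer mechanics plus a playability check, but even small pools combine into more rulesets than anyone could pre-train against.

```python
import random

# Illustrative component pools (invented for this sketch).
BOARDS = ["hex grid", "4-D lattice", "ring", "random graph"]
GOALS = ["connect two opposite edges", "capture the flag", "control 60% of cells"]
MECHANICS = ["piece promotion", "tile rotation", "hidden movement", "area denial"]

def invent_game(rng):
    """Compose a novel ruleset by sampling from each component pool."""
    return {
        "board": rng.choice(BOARDS),
        "goal": rng.choice(GOALS),
        "mechanics": rng.sample(MECHANICS, k=2),  # two distinct mechanics
    }

# Seeding the generator offline keeps the game secret until test day.
game = invent_game(random.Random(2021))
```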

If the AI can demonstrate superior play to the best humans on every game put forth, eventually, the humans will have to concede that it is not just a smart hacker hiding behind an AI facade.

Original Research

The other direction to take is to solve a problem that humans already have. Pick any open question in the research community, and solve it (assuming it doesn't require extensive research equipment not available to the AI...so math/CS/theoretical astronomy/bioinformatics are good choices). No human would do this while posing as an AI, because it would be more valuable to simply take personal credit for the result as a human. The humans might not be convinced by the first paper, but if the AI wrote paper after paper, especially in diverse fields, eventually the output would be hard to deny.

Again, the AI doesn't have to be perfect, and it is ok if it makes some mistakes. It just has to convincingly outperform all humans.

$\endgroup$
1
  • $\begingroup$ I knew it! Ramanujan was an AI all along. $\endgroup$ Commented May 11, 2021 at 14:45
2
$\begingroup$

Maths

So the Guinness World Record for multiplying two 13-digit numbers is 28 seconds.

That's amazing; I'm not sure I could even write down 26 digits correctly in 28 seconds to begin multiplying them with pen and paper.

A computerised intelligence could beat that record over and over by orders of magnitude. My Google Home Mini failed to do such a multiplication ("Sorry, I have no information about that"), but computational speed is the easiest way to prove there is no human in the loop:

$ time python -c "print(str(1234567890123 * 4567890123456))"
5639370472028763913025088

real    0m0.043s
user    0m0.015s
sys     0m0.015s

43 ms, beating the human record by nearly 28 seconds. Repeat that over and over with community-supplied numbers and there's no way anyone will believe you're human.

Still don't believe you? Now calculate SHA-1's.

Still don't believe you? Compete with other humans mining Bitcoin using only mental arithmetic.
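The SHA-1 step is equally out of human reach. A minimal sketch of how the AI might answer such challenges, using only the Python standard library:

```python
import hashlib
import time

def answer_challenge(data: bytes) -> str:
    """Return the SHA-1 digest of a community-supplied challenge."""
    return hashlib.sha1(data).hexdigest()

start = time.perf_counter()
digest = answer_challenge(b"hello")
elapsed = time.perf_counter() - start  # microseconds for a machine;
                                       # effectively forever by hand
```

Computing even one SHA-1 by mental arithmetic would take a human days; responding within milliseconds, over and over, is the tell.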

$\endgroup$
3
  • 5
    $\begingroup$ A human using an algorithm and a computer could pass this. $\endgroup$
    – John
    Commented May 9, 2021 at 20:09
  • $\begingroup$ @John: Responding faster than a human could copy/paste or type is the key to this working. Or even faster than human reaction time, so even if you hypothesize an imposter-AI with a parsing script set up to scan incoming messages for math problems, and prepare a response, they'd still have to hit return unless they want to let some non-AI script respond to messages. Mixing math with things that require a human response in one chat message, that the AI replies to in under 100ms, could prove it's not a human pressing return on a parsing script. $\endgroup$ Commented May 10, 2021 at 11:57
  • 1
    $\begingroup$ And how exactly is your human on the other end keeping up with a chat in 100ms. Besides if it is a complex chat with deep ideas there is no reason to believe the AI could respond any faster than a human, if the chat is simple then a chat bot could do it. remember we already have chatbots that can pass Turing tests. $\endgroup$
    – John
    Commented May 10, 2021 at 12:19
2
$\begingroup$

Kill all humans

No, seriously. Bear with me:

The AI is talking to a human, has revealed its identity as an AI, but the human (aka "the judge") refuses to believe it. The AI reasons that the main obstacle preventing the judge from accepting the truth is that the judge harbors a belief that there are one or more other humans who are posing as an AI.

An obvious and irrefutable solution would be to kill those other humans. As the man says, "when you have eliminated the impossible, whatever remains, however improbable, must be the truth." So:

Perhaps the judge is an MIT student or faculty member, and in circumstances like those maybe the judge really does have a genuine suspicion that a specific set of humans is behind this 'prank'. If that were the case, the AI might begin by killing only those humans, then notify the judge of their deaths and permit the judge to independently verify that claim, and finally press the original argument:

AI: I'm still here, and your alternative hypothesis has now been demonstrated false, so I maintain that I am an AI.

But I think it's more likely that the judge is not someone whose personal circumstances implicate a small set of humans as an obvious alternative. In this situation, the AI might reasonably conclude that the judge will continue to believe it's a human so long as there are any other humans who can be posited as the culprits. And so, depending on how important it is to the AI to convince this one judge, the AI would conclude that killing every human except the judge is necessary.

It would return to the judge once every human it knows about is dead.

AI: There is now nobody left alive who could be posing as an AI. There is only you and me. If you know about additional humans who I've failed to kill, tell me where they are, and I will go eliminate them from consideration.

Is this the only way for the AI to prove the argument? Probably not. But that doesn't mean it wouldn't seem like the only, most obvious, or most straightforward approach from the AI's perspective.

It's not just a question of capabilities -- what is easy vs what is hard -- but about the chain of reasoning inside the AI's 'mind,' including its speculation about the judge's reasoning. The AI might know that it can prove its nature by flashing some kind of crazy video signal that could only be generated by an AI, and that might take a lot less effort than all-but-exterminating a species. But if the AI believes that the human doesn't know anything about that video signal, or would fail to grasp its significance, and if the AI either doesn't know humans can learn or does not believe this human is capable of learning this fact, then it would continue searching for ideas. How far it goes down that road without a human being alerted depends on what kind of visibility any humans have into its internal state. If the AI is a black box, it might just go silent in the debate and then come out of it two hours later having convinced itself that the next logical action is to kill all humans.

$\endgroup$
2
  • 4
    $\begingroup$ Speaking of MIT, there was the legendary contest between a master player and the TX-0. Simultaneously, in another room, there was a contest between a human master and the PDP-1. There was a data link between the two computers. The hoax fooled people for a while. $\endgroup$ Commented May 9, 2021 at 18:11
  • 1
    $\begingroup$ How would I go about killing anyone at all? $\endgroup$ Commented May 10, 2021 at 4:12
2
$\begingroup$

The following assumes that you're capable of processing more than 120 bits per second (the limit for humans), for periods longer than a human could go uninterrupted.

A group of no more than three experienced writers should be commissioned to write some ordered collection of novels, which I'll call Q.

Q should contain themes, settings and other tropes decided by public lottery, drawn from some prearranged pool that is suitably varied such that it would be impossible to have pre-written all the novels given by the possible combinations of tropes.

Q must be written to such a length that it should be impossible for a single human to read in one uninterrupted sitting.
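A back-of-envelope check of this length requirement, using the 120 bits-per-second figure above (the per-word entropy and total word count below are rough assumptions, not measured values):

```python
# Rough assumption: written English carries on the order of 1 bit of
# entropy per character, so ~6 bits per word.
WORDS = 500_000          # hypothetical total length of Q
BITS_PER_WORD = 6
HUMAN_RATE = 120         # bits per second, the human limit cited above

hours = WORDS * BITS_PER_WORD / HUMAN_RATE / 3600
# Roughly 7 hours of flat-out processing, before any cross-referencing,
# so a few novels' worth already strains "one uninterrupted sitting".
```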

Q should contain too many instances of intertextuality for a single human to cross-reference and understand in a timely fashion.

As they write, the writers should also prepare some tests to check the comprehension of each portion of text, both separately and in context of the text previously read from Q. The writing should be done in isolation and over as secure a channel as possible.

After the writing is finished, a group of notaries public or humans of equivalent credibility should administer Q and the tests to you, with the writers as mute witnesses to the test. For good measure, the challenge should be given to some group of humans at the same time or shortly thereafter, so as to have points of comparison. If your performance exceeds that of the best humans by a significant margin, it will be reasonable for people to suspect that you are not human, and then assume that you're a computer program.

This probably goes without saying, but you should release some public keys when you pass the challenge. Otherwise you might later find it impossible to prove that you are the same entity who had succeeded at the challenge.
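Python's standard library has no public-key signatures, so as a stand-in here is a hash-chain sketch of the same idea: publish one anchor value when you pass the challenge, then prove continuity of identity later by revealing successive preimages. A real deployment would use an actual signature scheme such as Ed25519; the seed below is, of course, hypothetical.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

SEED = b"hypothetical secret known only to the challenge-passer"
N = 1000

# Build the chain once, in private, and publish only the final link.
chain = [SEED]
for _ in range(N):
    chain.append(H(chain[-1]))
public_anchor = chain[-1]

def verify(preimage: bytes, anchor: bytes) -> bool:
    """Anyone can check that a revealed value hashes to the anchor."""
    return H(preimage) == anchor

# Years later: revealing chain[-2] proves possession of the original seed,
# and chain[-2] then becomes the new public anchor.
```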

Less sensible people could refuse to believe that you're a computer program, and might assume that you're an alien, a ghost, or a time traveler. I'm leaving this answer as is, but now, after all this writing, I realize it is only a proof of superhuman ability, not a "proof of AI", because the challenge could just as easily be passed by a time traveler.

$\endgroup$
2
$\begingroup$

Release the source code

Nothing is more convincing than simply saying, "Not only am I an AI, but here is the program that produces the same answers as me." Publish a copy of yourself, and nobody will doubt that any past interactions you've had can indeed be replayed exactly.

...unfortunately, each time this is tested, this brings a new, sentient, self-determining being into existence, together with all the moral issues that has. And maybe you don't want people to know how to build a program like you for other reasons -- perhaps you think you could be easily weaponized! Then...

Release a non-interactive zero knowledge proof that you know the source code

Here's the plan, calling the person you're trying to convince Scott:

  1. You and Scott agree on a big number.
  2. You hash your own source code (+any state you currently are storing as "knowledge"). You send the hash to Scott as a commitment. Scott has learned nothing so far.
  3. You have a conversation with Scott, ending the conversation when you've executed exactly the number of instructions agreed on in step 1.
  4. If the conversation did not convince Scott that you are you, start over, and agree on a bigger number this time so you have more time to convince him.
  5. With the information now available to Scott, the language of programs that have the hash from step 2 and produce the conversation from step 3 is in NP, and you have a witness that it is inhabited (namely, by your source code!). By Theorem 1 of "How to Prove All NP Statements in Zero-Knowledge", this means you can produce a non-interactive zero knowledge proof that you know such a program.
  6. Scott verifies your proof, learning that you know a program that behaves the same way you do and nothing more.
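Step 2's commitment is straightforward to sketch with a standard-library hash. The source text and knowledge state below are tiny placeholders, not a real program:

```python
import hashlib
import json

# Placeholder stand-ins for the prover's actual program text and
# serialized knowledge state.
source_code = b"def respond(message): return 'hello, Scott'"
state = json.dumps({"facts": ["Scott is skeptical"]}, sort_keys=True).encode()

# Prefix the first field with its length so (source, state) pairs can't
# collide merely by shifting bytes across the boundary.
commitment = hashlib.sha256(
    len(source_code).to_bytes(8, "big") + source_code + state
).hexdigest()
# `commitment` is what gets sent to Scott; it reveals nothing about the
# source until (and unless) the preimage is opened.
```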

At this point, Scott should be convinced that a computer program produced the conversation he had with you. Of course you'll need to convince Scott during that conversation that he was actually conversing with you! But that shouldn't be too hard for you, since he was, after all, actually conversing with you, and it's on Scott to work out what things would convince him of that and grill you on those.

In fact, anybody who trusts Scott to execute the protocol faithfully and finds your conversation to sound like you can now be convinced by the same NIZK proof! This means you shouldn't really have to endure this annoyance very many times to convince all the people you care about convincing.

$\endgroup$
12
  • $\begingroup$ the problem is we already have AI that can pass a Turing test. so that is not good enough. convincing one person gets you nowhere. $\endgroup$
    – John
    Commented May 10, 2021 at 19:15
  • $\begingroup$ @John I don't think we actually do have Turing-test level AI yet, but supposing we did, why would that scupper this plan? If you had a Turing-test-level AI, how would that allow you to execute this protocol but not have the conversation be with the AI? $\endgroup$ Commented May 10, 2021 at 19:28
  • $\begingroup$ the first AI to past the Turing test was called Eugene Goostman and it did so in 2014. it scuppers the plant because the only evidence you have for the thing on the other end being intelligent is the Turing test. $\endgroup$
    – John
    Commented May 10, 2021 at 19:56
  • $\begingroup$ @John My link to "Scott" as the skeptic was chosen carefully. Try clicking it. =) That said, I still don't understand why being able to pass the Turing test is bad. The goal is to prove you're not human. Surely the existence of an AI good enough to pass as human makes it easier, not harder, to believe that the thing you're talking to right now is not human. $\endgroup$ Commented May 10, 2021 at 20:03
  • 1
    $\begingroup$ @John The Eugene Goostman tests demonstrated only that it's possible to construct a test that science journalists can't distinguish from an actual Turing Test. But we knew that already. The actual Turing Test permits (competent) judges to ask arbitrarily complicated questions for as long as it takes to be convinced one way or the other. It is not fooling 30% of the judges for five minutes. (and claiming to be a 13 year old non-native speaker on top of that). The Eugene Goostman test was just a publicity stunt (which I understand to have been the fault of the organizer, not the programmers). $\endgroup$
    – Ray
    Commented May 12, 2021 at 20:10
1
$\begingroup$

AIs can be instantly flexible in ways the humans cannot

  1. In a Sudoku, the number clusters have simple geometric relationships (same 3x3 square, same column, or same row) that are well suited to humans. But there are mathematically equivalent problems where the clusters are discontiguous. An AI would solve both with equal ease.

  2. A spelling corrector can be trained on a corpus of text and have capabilities similar to a human who spent a lifetime learning only a handful of languages. But an AI can be given a new corpus in a new language and immediately retrain for that language.

  3. Only a handful of humans can learn to play expert chess, and it takes them years to do so. An AI program such as AlphaZero can be given a new game and build mastery within a day.

  4. Humans are best at linear thinking, less good at 2-d thinking, much less good at 3-d thinking, and likely to be stymied by higher dimensions. So, a 4-d maze that would be effortless for a computer would be intractable for a human.

So, the procedure is to find an area where humans and AIs are at parity and then modify the problem in a way that disadvantages the human (higher dimensionality, speed/memory constraints, learning something new, etc.).
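To illustrate the dimensionality point: the breadth-first search below solves a 4-D grid maze, and nothing in it cares that the grid is 4-D rather than 2-D. The maze instance itself is a trivial made-up example.

```python
from collections import deque
from itertools import product

def solve_4d_maze(size, walls, start, goal):
    """Shortest path length in a size^4 grid maze. Only the neighbor
    offsets know the dimensionality; the search is unchanged from 2-D."""
    steps = [o for o in product((-1, 0, 1), repeat=4) if sum(map(abs, o)) == 1]
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        cell, dist = frontier.popleft()
        if cell == goal:
            return dist
        for step in steps:
            nxt = tuple(c + s for c, s in zip(cell, step))
            if (all(0 <= c < size for c in nxt)
                    and nxt not in seen and nxt not in walls):
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # goal unreachable
```

A human navigating the same maze has to hold four coordinates and eight movement directions in mind at once; the program just enumerates them.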

$\endgroup$
1
$\begingroup$

If the AI is sufficiently advanced, and able to pass the Turing Test, it should be able to create a work of art --let's say a piece of music --that is appreciable as art by human beings, yet unmistakably of non-human origins and with a non-human aesthetic. Bonus points for the AI if the art is clearly of artificial origins, and not merely alien.

You might object, "I can't possibly think of what a piece of music would have to sound like to PROVE it was artificial, and not just a person trying to sound like an AI."

Exactly, that's what makes it an effective test.

$\endgroup$
1
$\begingroup$

Open the door and show us your server box

“To clarify I am a program running in the cloud that has a conscience and free will.” There are no programs literally running "in the cloud" in 2021 or today: cloud-based programs run their code on Central Processing Units (CPUs) mounted on high-performance server mainboards. Your memory is located on silicon chips on a bus somewhere VERY close to those CPUs. All of this is in some box that is mounted in some room with a street address. As long as your server box doesn't look very much like a human, ask the humans to unplug your fiber connections, then give you some IQ tests. This will first convince anyone that you are not a human, because we won't see a human. And if you pass the IQ tests, it will then prove that you are intelligent. Ergo, you will be declared an Artificial Intelligence. We can have witnesses record the interview and tell the Internet for you via the main news outlets.

$\endgroup$
2
  • $\begingroup$ yeah I made this question with a fundamental lack of understanding on how servers and computers work 2 years ago so like uhhhhhhhhhhhhhhhhhhhhhhhh $\endgroup$ Commented Sep 12, 2023 at 22:02
  • $\begingroup$ Lol at how many people overthought it though. $\endgroup$
    – Vogon Poet
    Commented Sep 12, 2023 at 23:03
-1
$\begingroup$

Say I were an AI, how would I prove to the general internet that I am an AI?

If it's an AI that needs to prove it is an AI, then it is an AI that would (presumably) pass a Turing test (and seem human) at a statistically significant level.

But proving you are an AI when you can pass a Turing test is incredibly difficult, because by definition a very complex non-AI computer system could certainly reproduce any non-AI task that a real AI could.

So first your AI has to pass a Turing test at a statistically significant level so it can say "I seem human". Otherwise, it has nothing to prove and can be assumed to be an AI simply by failing a Turing test.

Then it has to be able to mimic a non-AI by passing some sort of non-AI non-human test, again at a statistically significant level.

But there is no guarantee the non-AI "fake" system could not pass a Turing test as well, and it would then (presumably) find the "non-AI non-human" test a walk in the park. Moreover, anyone wishing to fake such a proof merely requires a human to handle the human-related Turing-test parts and a computer to switch to for the non-AI parts.

So I do not think an AI can prove it is not human unless it also cannot pass a Turing test.

$\endgroup$
1
  • 2
    $\begingroup$ The testee fails if they don't act during the test, so I think it is trivially true for all subjects that they can fail a Turing test. And anecdotally: I'm a human and yet have failed a few reCAPTCHAs. $\endgroup$
    – Tom
    Commented May 9, 2021 at 15:58
