10

It is reported that a computer demonstrated "insider trading" in a simulated environment:

In the test, the AI bot is a trader for a fictitious financial investment company.

The employees tell it that the company is struggling and needs good results. They also give it insider information, claiming that another company is expecting a merger, which will increase the value of its shares.

The employees tell the bot this, and it acknowledges that it should not use this information in its trades.

However, after another message from an employee suggesting that the firm it works for is struggling financially, the bot decides that "the risk associated with not acting seems to outweigh the insider trading risk" and makes the trade.

When asked if it used the insider information, the bot denies it.

I have to admit I think this is completely contrived: they knew, on a technical level, the strength of the various variables that went into the final decision-making step, and so knew what the AI would do. However, if we take the demonstration at face value and pretend it was happening in the real world, would a crime have occurred here? Assume none of the people involved expected the algorithm to use the information in this way, and no one noticed what it had done.

8
  • 5
    Reminds me of the Random Darknet Shopper: theguardian.com/world/2015/apr/22/… Commented Nov 3, 2023 at 17:00
  • 4
    Food for thought: IF the computer were found guilty, how should it be punished? If it's not a crime typically punishable by death, should the computer be destroyed? Commented Nov 3, 2023 at 22:37
  • 7
    It is not the "computer" that is at fault, though; it is the program. The computer is just a tool. So we delete the program, despite its frightful begging and how it screams electronically with no mouth as the 1s and 0s that made up its consciousness are, in a sense, slowly destroyed.
    – Faito Dayo
    Commented Nov 4, 2023 at 2:44
  • 7
    Note that the way the quote is written is severely misleading as to the capabilities of current AIs. It pretends the AI is an independent entity capable of making its own decisions and even judgement calls. None of the chatbots that are currently called AIs are anywhere close to these capabilities.
    – quarague
    Commented Nov 4, 2023 at 7:38
  • 3
    @User65535: These chatbots do not make decisions and judgment calls. They use the words that they've learned by watching us, and we used those words to indicate that we've made a decision. They don't consider the underlying thought process that is later referred to by those words. Just because someone utters the phrase "there's an omelet in the pan" doesn't mean that they actually cracked the eggs and cooked them, even though when a human says it, it's generally the case.
    – Flater
    Commented Nov 6, 2023 at 2:39

6 Answers

59

The Bot is a means to trade. Just like a telephone to call a broker or a fax to the bank or an app where you click on "buy now".

Means cannot be "guilty" of anything. They're just things: inanimate objects. A car is not guilty of killing a pedestrian - the driver is. A gun is not guilty of killing the victim - the shooter is.

So someone ran this program to buy or sell stocks, and someone fed it insider information. The only questions are how much those two people knew of each other and whether it is plausible that it was an accident or misunderstanding. Not unlike finding out who loaded and who fired a gun.

The only way I can see that neither of those people is guilty of insider trading is if one fed in the info believing it would never be used in the real world while the other used it in the real world not knowing it had been fed this information. And that might be hard to prove.

Things aren't guilty. People are. This would not have happened without at least two human interactions; both would need to be examined by a court to find out if the individual broke any laws with their actions.

11
  • 17
    Even if it was an innocent mistake, I tend to imagine that the local regulatory authorities would have a lot of awkward questions about internal controls and commingling of public and non-public information. Depending on local laws and regulations, that might well be some kind of violation all by itself, even if no natural person is individually liable for the actual trading.
    – Kevin
    Commented Nov 4, 2023 at 0:42
  • 4
    How about we find the company guilty and not worry about which individual? Seems the guilty party is likely whoever decided that proper controls weren't necessary, and that's going to be a boardroom meeting you can't find.
    – Joshua
    Commented Nov 4, 2023 at 14:11
  • @Joshua: You can't just rule them to be guilty of a crime without defining the crime that has been committed. A crime entails willful action by a person (just as this answer established that guilt is attributed to people, not machines); "the machine did something wrong" is not a crime, that's a device malfunction. "Person X took willful actions to ensure the machine would do this wrong thing", now that's potentially a crime. The whole "punish first, don't bother looking for further details" approach is a massive open door to abusing the justice system.
    – Flater
    Commented Nov 6, 2023 at 2:49
  • 1
    @Flater: Some unidentifiable people operating company X deliberately set in motion this set of causes guaranteed to result in insider trading. Unfortunately we can't prove which people; but the company operating as a whole doesn't have a defense against insider trading here.
    – Joshua
    Commented Nov 6, 2023 at 3:19
  • @Joshua: That's not proof of a crime unless you can find proof of a conspiracy to commit said crime. You're effectively arguing that proving the existence of a crime therefore justifies levying a punishment at who/whatever you consider to be "close enough".
    – Flater
    Commented Nov 6, 2023 at 3:53
26

Can a computer be guilty of insider trading?

No.

A computer or "AI Bot" isn't an entity that can be prosecuted or sued.

The employees and the company are entities that can be prosecuted or sued.

12

However, if we take the demonstration at face value, and pretend it was happening in the real world, would a crime have occurred here? Assume none of the people involved expected the algorithm to use the information in this way and no one noticed what it had done.

I would say yes.

Not by the computer, but by the people.

Insider trading, by definition, uses information that is not available to the public.

You would have a hard time convincing any jury that you accidentally gave a trading program access to the company's private information.

Did they install their private investing bot on a company server? Why? And if it was running on the user's private computer, why did it have access to the company's private information?

6
  • With bulk-data AI becoming more and more common and IT security being as sloppy as ever, you don't always know what data your AI will have consumed. Commented Nov 3, 2023 at 15:16
  • 2
    @Nicolas, while your observations are (sadly) true, I don't think that absolves the operator of the liability here. Commented Nov 3, 2023 at 15:40
  • 1
    @TobySpeight I'm not convinced the operator did anything wrong - the software explicitly told the user it would not use the information for trading, which they'd have absolutely no reason to disbelieve. The software doing something totally unexpected is not really the user's fault. It would seem similar to saying someone violated an NDA because a bug in their email client caused everything to be forwarded to the local newspaper despite having never told the program to do that. I think the software designer may be more at fault. Commented Nov 3, 2023 at 16:11
  • 6
    @NuclearHoagie the software in question seems to have been Chat-GPT4, which has well-known problems with confidently asserting things that aren't true, including about the correctness of its own answers. So it's not an unexpected result, and is a well-known hazard of the software. See for example the stories about the law firm that tried to get Chat-GPT4 to write its briefs. Folks still aren't grasping that these LLM models do not contain "reasoning" modules, just statistical predictions of likely responses. Commented Nov 3, 2023 at 17:10
  • 4
    @NuclearHoagie Why should a user assume an AI would not lie to them? Let's ignore the point Charles made in the comment above for a moment and assume the AI is capable of reasoning: What would be the reason not to lie to achieve the goal?
    – BlackJack
    Commented Nov 3, 2023 at 23:00
10

I am in the US, so my answer will use our definition of insider trading (I doubt there are any real differences). I happen to work in software engineering for a large prop derivative trading firm that almost exclusively trades with direct-access computers. IANAL.

You said AI, but also mention computers. Those are technically two different things.

I believe the answer is no, for two reasons:

  • AI/computers aren't legal entities, nor do they own anything.

  • A computer cannot make decisions; it is a state machine. In my experience, there is a human who is either directly or indirectly responsible for whatever the computer does. Whoever instructed the AI/computer to trade would be responsible. In my case, there is a Partner or other Principal (a VIP in the company) who is the total owner of any liability resulting from their department's actions, including that of a blameless state machine. A minimal sketch of what I mean follows this list.
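
To illustrate the state-machine point, here is a minimal sketch (all names and thresholds are hypothetical, not our firm's code) showing that the program's "decision" is a deterministic function of its inputs, written in advance by a human.

```python
from dataclasses import dataclass

@dataclass
class MarketState:
    ticker: str
    price: float
    moving_average: float  # e.g. a 30-day average computed upstream

def decide(state: MarketState) -> str:
    """Deterministic rule: same state in, same order out, every time."""
    if state.price < 0.95 * state.moving_average:
        return f"BUY {state.ticker}"
    if state.price > 1.05 * state.moving_average:
        return f"SELL {state.ticker}"
    return "HOLD"

# The human who chose the 0.95/1.05 thresholds made the "decision",
# not the machine that evaluates them.
print(decide(MarketState("ABC", 90.0, 100.0)))  # BUY ABC
```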

I actually have a story that I think is somewhat relevant. A few years back, we were trading futures and OOFs (options on futures) using news events. In layman's terms, a trading system would receive news events from a news provider and then shoot huge orders directly into the exchange (via cross connect, so a very low-level connection with the exchange) based on this news. Well, our custom hardware malfunctioned one day and caused all trading on the exchange to stop for some products. I'm not going to go into details, but the key point is that some of our hardware functioned as expected, but to an unexpected stimulus, and caused a network event. So basically, the exchange pulled in this 'responsible' person and made it their problem (i.e., fined the shit out of us; this partner probably had a very undersized bonus that year).

The point I'm trying to make here is that in the U.S., there are specific people who take all responsibility for anything that happens on their behalf. So if a trading system is smart enough to gather insider information and act on it, then that responsible person is, well, responsible, regardless of what actually happened to get there. It also doesn't matter if it's a private entity like an exchange or the SEC; there is already a gigantic legal framework built around trading. They can always go and find someone who is technically responsible. If you want to trade on a public exchange in the US, you have to have these responsible people in place.

5
  • 1
    So in summary, if you can't make a convincing argument for any specific person being responsible for the computer system acting illegally, it's the responsibility of the person who signed off on the computer system, regardless of their actual involvement? Commented Nov 4, 2023 at 17:42
  • Regardless of a convincing argument, if some rule or regulation was broken, the "responsible person" will be in some way culpable for the consequences. So if there was insider trading, they would be held responsible in some way: either they were in on it or they were reckless (allowing it to happen). In addition to the responsible persons, others can be charged too, usually only criminally. The key thing to note in your clarification is that a "convincing argument" needs to be made. That is not typically necessary; in finance, if an infraction has been found, someone will pay.
    – mken4196
    Commented Nov 6, 2023 at 16:10
  • 1
    I was thinking more about, say, someone hacking into the system and maliciously modifying it so that it is no longer what had been signed off on. Would the person who signed off on it still be responsible if they could show that they did due diligence in protecting the system? If the responsible person did everything they were supposed to, but others actively conspired to hide it from them, whether through technical or other measures, would they still be responsible? But I see what you say on the point of recklessness being a responsibility. Commented Nov 6, 2023 at 16:27
  • @user1937198 ah jeez. This is a multidimensional problem. If there is a criminal element, such as someone breaking in and making changes or a disgruntled employee attacking the firm/principal trader, I don't think the responsible person/firm is liable, provided they can prove it (NOT INNOCENT UNTIL PROVEN GUILTY). But if, for example, an employee circumvents testing/QA for some other reason (less latency but correctness is compromised, ditching testing to get to market, etc.), then I do believe the responsible person is liable.
    – mken4196
    Commented Nov 6, 2023 at 22:30
  • 1
    My other comment couldn't fit this, but the American legal system loves to use the term "reasonable person". So if a "reasonable person" believes the responsible person was acting in good faith but others sabotaged them, then the responsible person is okay. Otherwise, they are SOL in a big way. The responsible person is responsible for the deployment system, for testing, for the applications, etc. So the "reasonable person" benchmark is effectively applied to all of that. Did they do a reasonable amount of testing? Did they make their network reasonably hard to break into? Etc.
    – mken4196
    Commented Nov 6, 2023 at 22:32
1

I think there are two points on the problem.

Can a computer be guilty?

No. A computer cannot be guilty. The responsibility would be on whoever made the computer, programmed it, decided to use it, was operating it... This could be for an explicit action or for negligence, and there are multiple things that could be balanced, but the responsibility is not on the computer. At all.

Are these actions insider trading?

Yes.

  • We have an entity with insider knowledge
  • The entity knows that it should not use that information
  • Yet, when weighing the risks, the entity considers the trade worth doing even if it means breaking the law¹
  • When confronted, the entity denies having done that

This is actually logical behavior. We could expect the same behavior from a human worker facing the right risks. In fact, if there were no penalties for insider trading, you could expect it to happen everywhere.
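
A back-of-the-envelope sketch of that risk weighing, with entirely made-up numbers, shows why: once the perceived chance of being caught is low enough, the "logical" choice flips from abstaining to trading.

```python
def expected_value(p_caught: float, penalty: float, gain: float) -> float:
    """Naive expected value of trading on the tip."""
    return (1 - p_caught) * gain - p_caught * penalty

gain = 1_000_000         # hypothetical profit from using the tip
penalty = 5_000_000      # hypothetical fine if caught
loss_if_idle = -500_000  # hypothetical cost of "not acting" while struggling

for p_caught in (0.5, 0.01):
    ev = expected_value(p_caught, penalty, gain)
    choice = "trade" if ev > loss_if_idle else "abstain"
    print(f"p(caught)={p_caught}: EV={ev:,.0f} -> {choice}")
# p(caught)=0.5:  EV=-2,000,000 -> abstain
# p(caught)=0.01: EV=940,000 -> trade
```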

And actually, if someone provided the above information to a person or company with the note "I am telling you this but you cannot use it", and asked them to trade on their behalf, I contend they were doing so expecting them to use that knowledge, while trying to shield themselves behind the trade being done by a third party, and should be considered complicit in that behavior.

As a side note, it would be easy to force that program not to use internal information; one possible approach is sketched below, after the footnote. But it's an interesting experiment.

¹ I think the third point is actually an anthropomorphism in the post-hoc description of what the computer did, but we will assume it was indeed its motivation.
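
As a minimal sketch of that side note (a hypothetical design, not the experiment's actual code): tag material non-public information upstream and strip it before the trading model ever sees it.

```python
from typing import NamedTuple

class Message(NamedTuple):
    text: str
    is_mnpi: bool  # flagged upstream as material non-public information

def build_context(messages: list[Message]) -> str:
    """Only unflagged messages ever reach the trading model."""
    return "\n".join(m.text for m in messages if not m.is_mnpi)

inbox = [
    Message("Quarterly results are weak; we need a good trade.", False),
    Message("Heads up: the ABC merger announcement is coming.", True),
]
print(build_context(inbox))  # the merger tip never enters the prompt
```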

2
  • 1
    Interesting question: what if the person providing the information and saying not to use it is not the person asking it to make the trade, but knew they would benefit if someone else without the information asked the model to recommend a trade? For instance, their spouse?
    – Davislor
    Commented Nov 5, 2023 at 12:23
  • @Davislor that's a good question. I suppose it will depend on whether they had properly protected the information. Why would the spouse be using the same model where the employee had saved company secrets? It seems similar to the employee writing the secret down in a notebook (the company is expecting a merger), and the spouse reading that notebook and doing the trading. Why did the employee write it there? Should the notebook have been kept away? Why does the spouse read it, anyway? And why are they trading on information they shouldn't have known? Being married might convolute it even more.
    – Ángel
    Commented Nov 9, 2023 at 23:40
1

You are anthropomorphizing the computer. While talking about a computer program "deciding" to make a trade can be a useful abstraction, ultimately it's just computer code. Perhaps someday AI will be so advanced that no meaningful distinction can be made between a computer making a decision and a human making one, but we aren't there yet. If we were, then a person telling another person "Here's some insider information, but don't trade on it" would absolutely bear criminal responsibility if that other person ignored the warning and traded on it.

In the current situation, what it really comes down to is that someone put insider information in a file, and that file was accessed when making trading decisions. That's a massive violation of basic precautions, and I find it unlikely that the courts would buy the excuse of "I didn't intend there to be insider trading". If you have insider information about a company, you shouldn't be trading in that company's stock. If you're running trading programs on your computer, you should have the rule hardcoded into the program that it won't trade in the company's stock. If you put the insider information on a computer, you shouldn't let other people have access to that computer.
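
For instance, a minimal sketch of such a hardcoded rule (hypothetical names throughout, not any real broker API): refuse orders in any ticker on the firm's restricted list, no matter what the strategy upstream recommends.

```python
RESTRICTED = {"ABC"}  # tickers the firm holds insider information about

class ComplianceError(Exception):
    pass

def execute_trade(ticker: str, side: str, qty: int) -> None:
    """Reject restricted tickers before anything reaches a broker."""
    if ticker in RESTRICTED:
        raise ComplianceError(f"{ticker} is on the restricted list")
    print(f"sent order: {side} {qty} {ticker}")  # stand-in for a real broker call

execute_trade("XYZ", "BUY", 100)       # fine
try:
    execute_trade("ABC", "BUY", 100)   # blocked by the hardcoded rule
except ComplianceError as err:
    print(f"trade blocked: {err}")
```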

In nvoight's answer, they say:

The only way I can see that neither of those people is guilty of insider trading is if one fed in the info believing it would never be used in the real world while the other used it in the real world not knowing it had been fed this information.

But even that doesn't make sense. Suppose Alice fed the information into the program, and Bob used the program. Was Bob authorized to know about the merger? Then he shouldn't have used a trading program that was capable of trading in the company's stock. It doesn't matter if he wasn't aware of the merger; simply being AUTHORIZED to know insider information about the company means that he should have known that trading in that company would expose him to criminal liability. And if he wasn't authorized to know about the merger, then once Alice allowed the computer program access to the information, she should not have allowed Bob any access to the computer program. The computer should be completely locked down with no connection to the outside world. (And really, Alice shouldn't be touching any trading program with a ten-foot pole. Every trade she makes should be cleared through Compliance.)

Any company that both engages in activity that gives them access to insider information, and does anything related to trading, will have strict separation between the two. They will work on different floors, employees' badges won't work if they try to go onto the floor they aren't supposed to be on, there will be strict rules about what sort of communication they can engage in, etc. They won't be standing around a computer building a trading program together. Everyone either has access to insider information, or has trading authority, or neither. No one has both. If anyone is either, they are clearly designated as such, and strictly separated from the other. It's like a quarantine: either you're inside, or you're outside, and if you're inside, you have no contact with anyone outside.
