10
$\begingroup$

In sci-fi we have seen computers, high-tech armour and other technologies respond to a character speaking orders out loud. Would an interface that reads a person's brainwaves, receiving and responding to commands as soon as that person thinks them, be plausible? And if so, are there any serious potential drawbacks to such an interface?

$\endgroup$

6 Answers

5
$\begingroup$

We already have biofeedback devices capable of reading brainwaves in just this manner. However, since they currently operate outside a person's skull, they are very crude and require considerable training to use.

Taking this technology further, if implanted inside the skull, this technology could provide much finer control over more variables with less training.

Ultimately, it can be expected that if taken to its logical extremes, a machine-mind interface would be grown into the recipient's brain and would provide additional abilities that were simply there when they were needed, like having an extra whatever that felt as if it had been part of the recipient's body since birth.

The ultimate drawback of this might be the dissonance of having abilities appearing and disappearing as the user interfaces with different technologies. It wouldn't be like (for example) having a new eye suddenly appear, and having its field of view overlaid on the field of view of the existing eyes; instead it would bring with it an entirely new field of view and an entirely new set of muscles for moving and focussing it, which the user would immediately know how to use. When the technology was unplugged, the eye and its field of view would disappear, unlike someone having their Mk1 eye removed from their skull, leaving an empty field of view. Users would remember having senses and limbs that they simply don't have any more, and which they cannot relate to their own bodies until the technology is reattached.

This alteration of the users' sensory-physical environment may cause mental problems, perhaps in all users, perhaps only in particular, susceptible individuals, or it may not be a problem at all. We won't know until we try it for the first time.

$\endgroup$
2
  • $\begingroup$ I was attending "robots on tour" in Zurich, Switzerland in 2013. There was a team letting visitors put on an interface cap and steer a (simple) robot around. It was crude, but you could get a basic grasp in less than a minute. $\endgroup$
    – Burki
    Commented Nov 10, 2015 at 8:53
  • $\begingroup$ I remember writing a short story about a group of future 'pilots' suffering from mental difficulties due to repeated changes in perception if they tried to operate too many different classes of vehicle... $\endgroup$
    – Joe Bloggs
    Commented Nov 10, 2015 at 16:23
3
$\begingroup$

In mass production, we have cochlear implants which provide roughly 350,000 people the ability to hear via almost direct interface with the nervous system.

In clinical trials, the Argus II bionic eye is being used in 30 blind patients to restore their eyesight using a camera. Argus I was installed in 6 patients, and as I understand it they all work.

On the bleeding edge, a recent brain-to-brain trial allowed one scientist to control the other person's hand, without invasive surgery.

These things are possible because the intermediate hardware doing all the processing can be calibrated by trial and error. Eventually you arrive at a known set of outputs for a given set of inputs.
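The trial-and-error calibration described above can be sketched in a few lines: the user repeatedly "thinks" a command while the rig records a reading, and the calibrator averages those samples into per-command reference points. The feature vectors and command names below are invented for illustration, not drawn from any real BCI system.

```python
# Minimal sketch of trial-and-error calibration: average each command's
# practice readings into a centroid, then map new readings to the
# nearest centroid. All signals here are hypothetical toy data.

def calibrate(trials):
    """trials: list of (command, feature_vector) pairs from practice runs."""
    sums, counts = {}, {}
    for command, features in trials:
        acc = sums.setdefault(command, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[command] = counts.get(command, 0) + 1
    return {c: [x / counts[c] for x in acc] for c, acc in sums.items()}

def classify(centroids, features):
    """Map a new reading to the calibrated command it most resembles."""
    def dist(command):
        return sum((a - b) ** 2 for a, b in zip(centroids[command], features))
    return min(centroids, key=dist)
```

After enough practice runs, you do indeed arrive at a known set of outputs for a given set of inputs; the catch, as the answer notes, is that real brain signals drift, so calibration has to be repeated.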

The basics of the brain-machine interface you are thinking of already exist, and if you look at them you can see that some are voluntary and some are passive sensory input. Thinking rationally about the issue, commanding a set of anime-style battle armor with words that you think doesn't make much sense: the narrative in your head may engage the ejection seat when what you intended was to fire missiles.

So it stands to reason this technology will be tempered with some good old-fashioned user-interface design concepts. Why not actually use the movement of your real arm to command the movement of the armor? We can do that right now, and work is being done on feedback systems so you can actually feel fingers on a hand that isn't real. That way you can pull the trigger of an unnecessarily large rifle and actually feel it engage.
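The arm-follows-arm idea above is, at its simplest, a gain-and-clamp mapping: read the pilot's joint angles and reproduce them on the armor's actuators, limited to the actuators' safe range. The joint names and limits below are made up for the example.

```python
# Sketch of "your real arm drives the armor's arm": apply a gain to the
# measured joint angles and clamp to each actuator's safe range.
# Joints and limits are illustrative, not from any real exoskeleton.

ACTUATOR_LIMITS = {"shoulder": (-90.0, 170.0), "elbow": (0.0, 150.0)}

def map_arm_to_armor(pilot_angles, gain=1.0):
    """pilot_angles: {joint: degrees} measured from the wearer's arm."""
    commands = {}
    for joint, angle in pilot_angles.items():
        lo, hi = ACTUATOR_LIMITS[joint]
        commands[joint] = max(lo, min(hi, angle * gain))
    return commands
```

The clamping step is the safety-relevant part: however the pilot flails, the armor never commands a joint past its mechanical limits.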

For less deadly devices, I can absolutely see a wireless brain hookup making for a really great mouse and keyboard. I already spin the mousewheel absentmindedly, and occasionally let sentences get out of control, which I later revise. What would be bad is if my computer shut down because I was trying to type "shut down" with my brain, or if I actually got scrambled eggs every time I kinda wanted some scrambled eggs. I would get hideously fat.

The overall point is that context will always be important. When I have a problem and think bad words, I do not want machinery to give me fertilizer. Whatever happens next in brain-machine interfaces, voluntary actions will need a controllable context, and it will likely start with something not entirely unlike a command line.
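The "controllable context" idea can be sketched as a tiny thought-driven shell: decoded thoughts are treated as keystrokes on a command line rather than as actions, and nothing executes until the user deliberately opens a command context and explicitly confirms. The tokens and cue names below are invented for the example.

```python
# Sketch of a context-gated "thought shell": idle mental chatter is
# ignored; only ATTN ... CONFIRM sequences are executed. Token names
# (ATTN, CONFIRM) are hypothetical attention cues, not a real protocol.

class ThoughtShell:
    def __init__(self):
        self.command_mode = False
        self.buffer = []
        self.executed = []

    def receive(self, token):
        """token: one decoded 'word' from the brain interface."""
        if token == "ATTN":          # deliberate attention cue opens the context
            self.command_mode = True
            self.buffer = []
        elif not self.command_mode:  # stray thoughts ("shut down", "eggs") do nothing
            return
        elif token == "CONFIRM":     # only an explicit confirm executes the buffer
            self.executed.append(" ".join(self.buffer))
            self.command_mode = False
        else:
            self.buffer.append(token)
```

Idly thinking "shut down" never reaches the executed list; only a deliberate ATTN ... CONFIRM sequence does, which is exactly the command-line discipline the answer is arguing for.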

$\endgroup$
1
$\begingroup$

Recently there have been leaps ahead in implanting microchips into brains to help control computers. In the link, a woman who is mostly paralyzed from Lou Gehrig's disease had a chip implanted on her brain. It uses Bluetooth to connect to a tablet, and she can use the tablet to search the web, watch videos, etc.

I think it is a huge door being opened for those with disabilities to be able to control their own environment to some extent. Some of these systems can turn thoughts into text. Even wearable devices can do some of these things.

Like anything, there can always be drawbacks. For one, researchers have already experimented with wearable brain-reading devices for games and learned that you can 'hack' a brain: it proved possible to guess at PIN numbers and the like. Some of these devices have been used to help people learn faster, which could also be used to change people's minds and thoughts. I heard on NPR the other morning that they can now get images of people's dreams. If that isn't an ability to read people's minds, I don't know what is. While this CAN be good, it can also be terrible: thought crime might actually become punishable in some dictatorships.

$\endgroup$
1
$\begingroup$

I was working with a US Air Force research lab that was experimenting with using brainwaves to fly fighter aircraft. At their open house they took a 12-year-old girl from the audience and taught her to fly an F-16 simulator in a few minutes.

This occurred in about 1993.

There were some problems with this process: it kept requiring recalibration, or the pilot would get stuck constantly turning in one direction or another. Just before I went to the next presentation, the girl got stuck in a left spiral and ended up crashing because the system needed recalibration yet again.

I'm not aware of any active military unit using this technology even today. Maybe it requires too much discipline? Maybe the human brain meanders too much, and the brain states such a system relies on aren't constant features of the conscious mind? I'm not sure why it never caught on.

But it is totally plausible.

$\endgroup$
0
$\begingroup$

Totally plausible, there are already multiple projects testing it. Drawbacks...

Mostly just thinking something you shouldn't and triggering your gear to make something awful happen.

Brain-powered missile launch. Worker asks, "Should we launch it?" Commander says, "no," but thinks about launching it. It launches.

$\endgroup$
1
  • 2
    $\begingroup$ Can you explain what the "multiple projects" are and why they're good evidence that this can work? Thanks. $\endgroup$
    – HDE 226868
    Commented Nov 10, 2015 at 1:08
0
$\begingroup$

Well, there was that quadriplegic woman who flew the F-35 simulator using only her mind, so I suppose we're getting there.

As to performing an action as soon as a person thinks it, I don't think it'll work that way.

In military applications, this would be dangerous and unwise. Weapons release authority, for instance, needs to have a clear chain of responsibility. You can't have a UAV pilot, or a mobile suit pilot, fire that missile as soon as he thinks, 'hmm, lets shoot that fucker down there.'

What I can see happening is making the suit an 'extension' of your body. Augmented by brainwave control, the user moves his armor like he's moving his own limbs. Shooting missiles and other weapons would also be like consciously moving a limb: you move your arm, move your finger, and fire your laser. But this is also risky, so I think integrated weapons will still need a separate command, like pressing a button on your suit or blinking at a specific point on your HUD.
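That separate-command safeguard amounts to two-channel arming: the thought channel can only fire if a deliberate second channel (a button press or a HUD blink) armed the system moments before. A minimal sketch, with an assumed arming window and invented names:

```python
# Sketch of two-channel weapons release: a thought-fire command succeeds
# only within a short window after an explicit physical/HUD confirmation.
# The 2-second window and all names are assumptions for this example.

class WeaponGate:
    ARM_WINDOW = 2.0  # seconds the arm signal stays valid (assumed value)

    def __init__(self):
        self.armed_at = None

    def press_trigger(self, now):
        """Deliberate confirmation channel: button press or HUD blink."""
        self.armed_at = now

    def thought_fire(self, now):
        """Thought channel: fires only if recently and explicitly armed."""
        if self.armed_at is not None and now - self.armed_at <= self.ARM_WINDOW:
            self.armed_at = None  # one arm, one shot
            return True
        return False
```

A stray "shoot that fucker down there" on the thought channel alone does nothing; only the deliberate two-step sequence releases the weapon, which keeps the chain of responsibility intact.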

$\endgroup$
