
We may never achieve artificial general intelligence (AGI), but that needn’t bother us, because for the next hundred years we will probably get by just fine with assistive AI.

That is, with artificial intelligence that has about as much self-awareness as a doorknob. And yet it will give us superpowers almost worthy of a Marvel hero.

Maybe AI will just turn us into a dumb piece of jelly

Or, on the contrary, it may turn us into something of a dumb piece of jelly – for example, a camera with basic image recognition. With a bit of exaggeration, that is exactly what the Israeli startup InnerEye is trying to do. Its engineers noticed two phenomena:

  1. The human brain processes visual perception very well
  2. However, its cognitive/decision-making and motor functions are terribly slow compared to a machine

So what if a machine took over those two slower functions, and the human eye and brain served only as a dumb little camera with an object detector?

I see a gun + press a button = a lot of wasted time

What is this good for? InnerEye demonstrates its invention by, for example, recognizing weapons in X-ray images from airport scanners.

Current practice works roughly like this: a security specialist watches the image on a display, and when they spot a weapon, they burn precious hundreds of milliseconds cognitively processing the fact (“Look, it really is a gun!”) and even longer getting the instruction through to the finger (“Hey finger, press the alarm button!”).


When software takes over parts of the very time-consuming cognitive process: “I see object X, so I press a button”

InnerEye bypasses these steps simply by placing a headset with surface EEG electrodes on your head. Beforehand, its engineers use machine learning to build a statistical model of how your particular brain reacts, at the electrical level, when it recognizes a gun in the footage.
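
For the curious: InnerEye hasn’t published its pipeline, but a per-user calibration of this kind could look roughly like the Python sketch below. The toy data, the flattening of EEG windows into feature vectors, and the shrinkage LDA classifier (a common choice for target/nontarget EEG tasks) are all illustrative assumptions:

```python
# Illustrative sketch: calibrating a per-user target/nontarget EEG classifier.
# `epochs` stands in for EEG windows time-locked to image onsets, shaped
# (n_trials, n_channels, n_samples); `labels` marks gun (1) / no gun (0).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 64          # toy dimensions
epochs = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)

# Flatten each epoch into one feature vector; real systems usually
# downsample and spatially filter first, which is omitted here.
X = epochs.reshape(n_trials, -1)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("calibration accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
clf.fit(X, labels)
```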

I see a gun + a computer = 3 to 10 fps

And now the magic. In the production phase, when you see a weapon on the monitor again, it shows up in the sequence of electrical impulses picked up by the EEG headset before you even process it at the cognitive level (“Look, I see a gun!”).

The computer-based detector then passes this information on to the security system, which reacts in some way. You don’t do anything at all – you don’t even think the gun through, because all of that would just waste your brain’s processing time and you wouldn’t manage to register the next frames.
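
The production loop might then be nothing more than the following sketch, where `read_epoch` and `raise_alarm` are hypothetical stubs standing in for the headset driver and the security system, and the confidence threshold is an assumed tuning parameter:

```python
# Hypothetical production loop: the human just watches, the computer decides.
THRESHOLD = 0.9  # assumed confidence cutoff, tuned per operator

def stream_decisions(clf, read_epoch, raise_alarm):
    while True:
        epoch = read_epoch()                 # next EEG window, or None at end
        if epoch is None:
            break
        x = epoch.reshape(1, -1)             # same flattening as in calibration
        p_target = clf.predict_proba(x)[0, 1]
        if p_target >= THRESHOLD:
            raise_alarm(p_target)            # flag the frame; no button press needed
```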

Thanks to InnerEye and EEG, a trained expert correctly registered weapons at a rate of 3 frames per second:

It sounds bizarre, but by delegating the decision-making functions to a computer, a person can, after some practice, correctly sort airport X-ray images at a rate of 3 to 10 images per second! You will spot the weapon even if the image is displayed on the screen for only about 100–300 ms.
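
Technically, this means slicing overlapping windows out of the continuous EEG: at 10 images per second a new image appears every 100 ms, while the brain’s response lasts roughly 300 ms, so consecutive epochs overlap. A toy illustration (the sampling rate and window length are assumptions):

```python
# Toy illustration: extracting overlapping EEG epochs around image onsets.
import numpy as np

FS = 256                  # EEG sampling rate in Hz (assumed)
EPOCH_S = 0.3             # 300 ms analysis window after each image onset

def extract_epochs(eeg, onsets_s):
    """eeg: (n_channels, n_total_samples); onsets_s: onset times in seconds."""
    win = int(EPOCH_S * FS)
    return np.stack([eeg[:, int(t * FS):int(t * FS) + win]
                     for t in onsets_s
                     if int(t * FS) + win <= eeg.shape[1]])

# 10 images per second -> an onset every 100 ms, so each 300 ms epoch
# overlaps the next one by about 200 ms.
eeg = np.random.default_rng(0).normal(size=(8, 10 * FS))   # 10 s toy recording
onsets = np.arange(0.0, 9.5, 0.1)
print(extract_epochs(eeg, onsets).shape)                   # (95, 8, 76)
```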

There must have been something there, but I won’t think about it

It really works. When you watch such a rapid sequence of images yourself and try not to think about it too much, you will notice – at the edge of consciousness – that something was probably there, even though the frame just flashed by.

That vague “something was there” is exactly the raw information that the (sub)conscious has not yet reliably processed at the cognitive level. But from the standpoint of pure perception and basic recognition – which is what shows up in the EEG – it is enough.

The human camera can also be used to search for industrial defects. It is enough to train the machine model again on what normal photos look like (nontarget) and which ones make the EEG respond with the pattern we are looking for (target)
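
In code, that retraining could amount to nothing more than refitting the same kind of classifier on freshly labeled epochs – again an illustrative sketch with toy data, not InnerEye’s actual pipeline:

```python
# Illustrative sketch: the same pipeline repurposed for defect inspection.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
defect_epochs = rng.normal(size=(120, 8, 64))   # toy EEG windows from inspection
defect_labels = rng.integers(0, 2, size=120)    # 0 = nontarget, 1 = target (defect)

# Only the meaning of the labels changes; the classifier stays the same.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(defect_epochs.reshape(len(defect_epochs), -1), defect_labels)
```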

A person artificially enhanced in this way – who, paradoxically, tries to think as little as possible while watching the footage – can then replace several other employees.

Of course, the question arises: why not replace the human eyes and the brain’s object identification entirely? But thanks to our capacity for visual abstraction, we still handle this process better than a computer.

If you don’t think about it too much and don’t try to keep a tally on paper, it doesn’t hurt, and you can reliably spot all the specks:

In short, if you tell a person to look for pancakes instead of guns in photos, they can start almost immediately.

But for a machine, too, to be able to reliably recognize pancakes in all their shapes and colors, you would have to train it for quite a while. Here the machine only has to learn what your brain’s interpretation of a pancake looks like in the EEG output, which is – it seems – somewhat easier.
