Mora Matassi has a Master's in Technology, Innovation, and Education from Harvard University, and a Master's in Media, Technology, and Society from Northwestern University.

Like a magician tricking an audience, advances in artificial intelligence and computing power create fake scenarios that look real enough to mislead the consumer. Often we cannot tell whether what we see or hear was invented by a machine, or whether a face exists in the world or was created by an algorithm.

Are we prepared for these technological advances? Or will we fall, again and again, for fake news built on images, texts or audio so "real" that only specialists could detect them?

There is an anecdote, more than 100 years old, that for an expert is a great example of what may come with this kind of innovation. "When one of the first films in history, the Lumière brothers' The Arrival of a Train at the Station, was screened before the public in 1896, it caused panic in the room. For 50 seconds it shows a train heading toward the viewers, and it frightened them: they feared it would burst through the fabric screen onto which the film was projected. This anecdote, which may or may not be true, is interesting because it tells us about the fears and the learning processes that accompany the uses of a given medium in society," explains Mora Matassi, who holds a Master's in Technology, Innovation, and Education from Harvard University and a Master's in Media, Technology, and Society from Northwestern University, in the US.


"Today it would be rare for an adult to be afraid of what they are seeing on the screen, because in general we understand the conditions of production of what we see. If we extrapolate this to how people might receive or interpret videos that use artificial intelligence to adulterate a voice or an image, we could say that over time people will likely develop mechanisms for understanding the conditions of production of what they observe, and that they will therefore eventually be able, collectively or individually, to discern whether a video carries fake or 'real' news," affirms the specialist.

Elon Musk recommending a cryptocurrency, Tom Cruise smiling at the camera with a young face, or Luke Skywalker appearing in mid-2021 with the face of a young Mark Hamill: these are some of the most recent examples of deepfakes, videos in which the faces of the protagonists were replaced using technology.

Sometimes it's a narrative device, like the appearance of an aging Star Wars character in a recent series; but deepfakes can also be used for scams, as in the case of the founder of Tesla and SpaceX, whose face was used as a lure to get people to invest in a business linked to digital assets. There are also those who do it "because they can," like the latest incarnation of Rambo with the face of Guillermo Francella, baptized "Rambocella" and created by the DeepfakesAR account.

"Part of our consumption of news and circulation of information has a large social component; what we receive generally comes in a social context, and we talk about it and understand it in social settings, whether with our inner circle, a family WhatsApp group, or a public sphere such as Twitter. This mechanism of understanding the conditions of production of a medium will shape the way we receive these products and will allow us, accompanied by digital literacy, to distinguish something that was adulterated from something that was not. It even seems to me that the artificial intelligence technologies that help create these adulterated videos will also be technologies that help us distinguish what is false from what is true," says Matassi.

When we see, or rather hear, a person speak... is it really her? It's her voice. Her intonation. Her timbre and accent. But it may also be a product of technology and artificial intelligence. A recent case, from Hollywood, is that of Val Kilmer.

Laryngeal cancer left the actor without a voice. But for his character Iceman to return in Top Gun: Maverick, the sequel to the 1980s classic with Tom Cruise, a software model was trained on Kilmer's voice, his accent and even his way of speaking, so it could deliver his lines in the movie.

There is a case in Argentina, that of Senator Esteban Bullrich, who, affected by ALS, also uses a platform that creates a digital version of the voice of a person with speech problems or disorders. Thanks to this, he can type whatever he wants on a computer and his own voice comes out through a loudspeaker.

Although these cases are extreme, and linked to health problems, the technology is within the reach of anyone with a computer. There are countless apps and online platforms, such as FakeYou, that let you write a text and have it read in the voice of movie and television characters, even animated ones, or of football commentators, politicians and celebrities. And back in 2016 Adobe already showed VoCo, the "Photoshop of audio": from a short sample of speech it can generate any spoken text in that voice, whether the person is present or absent, alive or dead.

We have already seen that you can make a person say something they never said: create audio, or synthesize a voice, so convincingly that we cannot tell it came from a computer. But there is more that software can do to bring us a little closer to a Terminator-style dystopian future: create faces, faces of people who don't exist.

On ThisPersonDoesNotExist we see, every time we visit, the face of a person. Except that it is not a person: it is a demonstration built with technology from Nvidia to show what algorithms can do, inventing faces that seem real but are not.

All the faces are created on the spot by the computer; they are unique and unrepeatable

Although it may seem like a nice toy, or a way to play a game with faces that belong to no one, this platform can also be an ally of scammers: people who pretend to be someone else on dating apps and the like, and who, instead of stealing a third party's face (with the risk of being discovered through an image search on the Internet), use a face with no past online... or with whatever past they decide to invent for it.

Another boom on the networks is DALL-E 2, an artificial intelligence that can convert words or requests, generally bizarre ones, into images. The company explains that the system can combine different ideas, styles and visual characteristics. One example is enough: someone asked for "R2D2 being baptized" and the platform offered these images. (Yes, there is a Twitter account that collects strange DALL-E creations.) Google has already announced its own AI that creates images from descriptive text, dubbed Imagen.

The loss of a family member or friend can be hard. In another era, the memory of that person was kept in photos or letters. More recently, in a voice message on the answering machine, something now multiplied by the hundreds of WhatsApp audios that can be replayed ad infinitum.

What if we could "chat" with that person? In reality it would not be with that loved one but with an artificial intelligence programmed with their words, their style and the kind of answers they would give us. A chatbot, like the ones we constantly use on different platforms, but one that lets us talk to the dead. A deadbot: the science fiction version already has its Black Mirror episode, but reality is not that far behind.
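At its simplest, the idea behind such a bot can be sketched as retrieval: answer by reusing whichever of the person's past messages best matches what was just said. Real deadbots train generative language models on someone's texts; the toy version below, with an entirely hypothetical message archive and made-up function names, only illustrates the principle of drawing replies from a message history.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase a message and reduce it to its set of words."""
    return set(re.findall(r"\w+", text.lower()))

def best_reply(question: str, history: list[tuple[str, str]]) -> str:
    """Return the stored reply whose original prompt shares the most
    words with the incoming question (a crude similarity measure)."""
    q_words = tokenize(question)
    scored = [(len(q_words & tokenize(prompt)), reply)
              for prompt, reply in history]
    score, reply = max(scored)
    return reply if score > 0 else "..."

# Hypothetical archive of (things said to the person, their replies).
history = [
    ("how are you", "Same as always, can't complain."),
    ("what are you doing on sunday", "Asado at my place, you're invited."),
]

print(best_reply("hey, how are you?", history))
```

A system like the ones described in this article replaces the word-overlap lookup with a neural model, but the raw material is the same: the digital footprint the person left behind.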

An episode of Black Mirror, Be Right Back, tells the story of a woman who hires a service that digitally recreates her dead husband.

Hernan Liendo is CTO and co-founder of Botmaker, a company dedicated to creating bots for social networks and platforms such as WhatsApp. His most famous creation? Boti, the service of the Government of the City of Buenos Aires, to which many porteños wrote non-stop during the era of Covid-19 vaccines and tests.

But talking to a robot is not the same as talking to a person. Or is it?

"People notice when they are talking to a bot and change the way they interact. They tend to be more direct and don't show the same cordiality or conversational manners they would with a human. The exception is usually children and older adults, who say thank you, greet the bot more, and have a warmer interaction. Children and adolescents also get very engaged: they experiment, they ask it to tell them jokes. We have specialists who work on this type of interaction, and it is constantly evolving," explains Liendo.

Could chatbots indistinguishable from a person become a reality? The bot maker has no doubts. "Totally. They will draw on people's knowledge as references; the models will be retrained. There are more specific ones that learn to speak on social networks, but they can be used across different platforms. Robots surprise us, and they can evolve," says Liendo.

Blake Lemoine also thinks so: this Google engineer's name became known when he declared, a few days ago, that a conversational engine the company is developing, called LaMDA, had become self-aware. That it is, in its own way, alive. Other experts in the field consider that Lemoine is wrong, and that he is simply a victim of the success of this tool, designed to chat in the most credible way possible.

Blake Lemoine assures that LaMDA has a personality, rights and wishes (The Washington Post)

Although many might find it appealing to "keep chatting" with a loved one on WhatsApp, these kinds of developments could reshape the concept of grief and how we deal with the loss of other people.

"Very profound ethical questions arise, which speak not only of how we manage our digital lives, but also of how we perceive ideas of life, death and mourning," says Matassi. "The researcher Davide Sisto suggests that when people use this type of technology they can modify or shape the grieving process, because what it does is precisely sustain a person's virtual presence, their digital footprint, over time."

Thus, the question arises: how, in the future, will we determine whether what we see on the screen (or hear through a digital device) is true? Computers can already generate ultra-realistic images and videos; add the face of a real or fictional person, animate it, give it a credible voice, or make it talk about any topic. That infinite plasticity of the digital, as the science fiction writer William Gibson observed (everything digital can be modified or recreated), will force us to rethink the idea that "seeing is believing."
