Trust issues
What does it do to us if we can no longer trust any information?
Hello friend, I hope this e-mail finds you well.
I'm a friend, an acquaintance, a work colleague, your trusted newsletter. But something's wrong. Or at least, something seems wrong. The sender is correct, the address is correct. This isn't spam. But something... something is strange.
Much of what we do today is based on mutual trust. We've created ways and means to ensure that we're really dealing with people or entities we trust. But what if that trust is undermined?
We're already seeing what happens when it becomes ever easier to manipulate images, sound, and video. Who among us hasn't looked at an image and doubted its authenticity? It's fake.
This happens to me more and more often. I see an image or video, and my first reaction is: It's not real.
That's why we've relied on media outlets in the past. Entities that, ideally, have reviewed the material they put online. I assume that a reputable outlet won't simply post unverified material.
That's why I pay for certain media outlets. That's why I subscribe to certain podcasts and newsletters. But there's a problem here. What if someone in the middle is manipulating information?
For some time now, there has been a growing number of complaints against major media outlets. Readers accused them of publishing garbled text or, even worse, of getting important facts wrong. But that wasn't the case at all: the newsletters were sent out with the correct information, yet reached recipients with incorrect information.
How could this happen?
In IT, this is referred to as a man-in-the-middle attack.
In cryptography and computer security, a man-in-the-middle (MITM) attack, or on-path attack, is a cyberattack in which the attacker secretly relays and possibly alters the communications between two parties who believe they are communicating directly with each other, when in fact the attacker has inserted themselves between them.
But who would stand between the newsletter sender and me? Hardly anyone would make the effort of interfering with my data traffic to manipulate information in my subscribed newsletters.
The answer to this question soon became clear. No one was in the middle; the problem was in the inbox.
Gmail users were particularly affected.
Google recognizes the language of every email and offers automatic translation. So, if Google thinks it recognizes an email written in English, it attempts to translate it into German.
Now, if a German email is mistakenly classified as being in another language, it gets "translated" anyway: German words are treated as foreign input and "re-translated" into German. The result is corrupted text.
This may sound a bit bizarre at first, but when a report about Russian attacks on "Ukrainian positions" suddenly becomes "American positions" or other facts are distorted, it takes on a certain seriousness.
T-Online has reported extensively on the problem, and the taz newspaper describes a similar case that was, however, noticed before publication (via achtmilliarden) – all three sources are in German only.
What does it do to us if we can no longer even trust that the texts we write will reach the recipient exactly as they were written? And, conversely, if we can't be sure that what we're reading was actually written that way?
Google is working on a solution, T-Online writes.
But the question remains, even if this was just an annoying mistake. With all the AI tools that are now being used everywhere, whether users like it or not, we can never be sure what happens to our data.
What does this do to us?
At least in direct communication, we can mitigate the risk. We can use email services that don't integrate AI functions. Those who have their own domain can usually use it to send and receive emails.
We should also encrypt our emails. If an AI manages to manipulate the contents of a GPG/PGP encrypted email, we have entirely different problems.
Checksums would also help. They have long been used in software distribution: a checksum published alongside source code or a program makes it possible to detect whether the data was tampered with afterwards. If it was, the checksum no longer matches the altered version.
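The idea can be sketched in a few lines of Python with SHA-256 (the message strings here are purely illustrative, echoing the distorted-newsletter example above):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 checksum of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# The sender publishes the checksum alongside the message.
original = b"Russian attacks on Ukrainian positions"
published_checksum = sha256_hex(original)

# The recipient recomputes the checksum over what actually arrived.
received = b"Russian attacks on American positions"  # silently altered in transit

assert sha256_hex(original) == published_checksum  # untampered copy matches
assert sha256_hex(received) != published_checksum  # tampered copy does not
```

Even a one-character change produces a completely different checksum, so any manipulation after publication is detectable – provided the checksum itself is obtained through a channel the attacker can't touch.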
But none of this helps if our apps contain AIs that only begin their work after decryption. As is often the case, open source software can help us avoid this. Thanks to open source code, we can be truly certain that the man in the middle isn't actually sitting on our lap.
If we don't want to lose our trust entirely, if we still want to rely on digital media and our communication, we must take measures now that enable us to prevent manipulation.
And the question remains (and even more so): What does it do to us if we can no longer trust any information?
I hope this e-mail finds you well – and was not manipulated.