Artificial dependency
As I'm writing these lines, an update from iTerm just popped up. iTerm is the terminal emulator I use on my Mac. The big announcement in the update: "We've now integrated ChatGPT!"
A few days ago, we got the news that Slack is now providing all data to OpenAI: chats, files, everything... It's creepy, and while I'm still writing plugins to block AI bots, the ground is being pulled out from under us almost everywhere.
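The core of such a blocking plugin is tiny, by the way. Here's a minimal sketch of the idea, not my actual plugin: a WSGI middleware that refuses requests from crawlers that identify themselves as AI scrapers (the bot list and the names are illustrative assumptions and would need maintaining).

```python
# Minimal sketch: refuse requests from self-identified AI crawlers.
# The tokens are published crawler user-agent names, but this list is
# illustrative and has to be kept up to date by hand.
AI_BOT_TOKENS = ("GPTBot", "ChatGPT-User", "CCBot", "Google-Extended", "anthropic-ai")


def block_ai_bots(app):
    """Wrap a WSGI app so known AI bots get a 403 instead of content."""
    def middleware(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "").lower()
        if any(token.lower() in user_agent for token in AI_BOT_TOKENS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"AI crawlers are not welcome here.\n"]
        return app(environ, start_response)
    return middleware
```

Of course this only stops bots that identify themselves honestly; a robots.txt listing the same names covers the polite ones, and everything beyond that is an arms race.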
Will we even be able to resist in the future and ensure that everything we enter and store digitally doesn't become a training dataset?
What does this mean for me professionally? How can I make sure this doesn't happen when I work with sensitive data that might be quietly harvested without my knowledge? And what does it do to us if we constantly have to worry about that possibility, even when it isn't happening?
I don't inherently oppose LLMs, image generators, and the like. I find them useful, but we also need to think about how our data is handled. We're experiencing what we've seen so often with the big hyped products out of Silicon Valley¹: Big Tech sets the rules by pushing ahead regardless of the consequences, and when problems arise, they shrug and say it needs a societal solution because the technology is moving too fast.
But of course this acceleration is driven by the financial pressure on these companies. Despite their claims of being "Open," it's always about money.
It's audacious to warn about the dangers of AI and say action is needed to prevent them, while simultaneously working to make those dangers a reality – because they are profitable.
And we are fools!
We're playing the same old game – again!
We eagerly jump on the bandwagon, integrating these products and interfaces into as many of our own products as quickly as possible. I don't even think it's always for financial reasons; announcing a cool new AI feature built with minimal effort earns a short burst of attention.
What we again forget (or ignore) is the dependency we're creating. We've done this too often in the past and we should know better. Now, we're trying to reclaim an open web after Facebook, Twitter, and the like have made us technologically dependent and are now giving us the finger.
What will happen? Of course, in the coming months and years, the terms of use for APIs from such AI SaaS companies will change, not in our favor but in favor of these companies' profit maximization. By then, these interfaces will be embedded everywhere in our software, so we'll have to go along with it.
If we've learned anything, it's that we can't let ourselves become so dependent again. If we want to create and maintain an open and democratic web, we must not let big companies dictate the rules. It has to be the other way around. We must be the ones to decide what happens with our data. And we must be able to remove features from our software without breaking it.
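That last point deserves to be concrete. A sketch of what I mean, with names and structure that are purely illustrative: keep the AI integration behind a small interface with a no-op fallback, so the feature can be switched off, or ripped out entirely, without breaking anything around it.

```python
from typing import Protocol


class Summarizer(Protocol):
    """The only surface the rest of the application ever sees."""
    def summarize(self, text: str) -> str: ...


class VendorAISummarizer:
    """Adapter around some AI SaaS API (stubbed here); the single
    place in the codebase that knows the vendor exists."""
    def summarize(self, text: str) -> str:
        # In a real integration, this is where the vendor API call lives.
        return f"[AI summary of {len(text)} characters]"


class NoOpSummarizer:
    """Fallback when the AI feature is disabled or removed: the app
    keeps working, it just doesn't summarize."""
    def summarize(self, text: str) -> str:
        return text


def get_summarizer(ai_enabled: bool) -> Summarizer:
    return VendorAISummarizer() if ai_enabled else NoOpSummarizer()
```

Flip the flag and the feature disappears; nothing else in the application changes, and no vendor's terms of use can hold the rest of the software hostage.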
We've already managed to ruin the social web and are painstakingly working to rebuild an IndieWeb; we've already ruined email. EMAIL! Such an old, open technology. We must be extremely careful about how we proceed, especially those of us who create data and software.
I'm now going to look for an alternative to Slack: something where I can reasonably assume my data won't end up with OpenAI, where everything is properly encrypted, and where I can focus on my work without worrying about who might be reading along.
Oh, and just to point out the irony: this post was translated with the help of AI.
-
¹ This is representative of this type of product development.