It happened again this morning. I was drafting an email to a coworker when my phone suggested what I was going to say next before I had typed a single word. And not just a word: a complete sentence, and, unnervingly, the exact one I had in mind.
I froze, fingers poised above the keyboard, in the grip of what I now call “auto-complete anxiety”: a blend of unease and relief that comes when an algorithm predicts your thinking with uncanny precision. There is something genuinely unsettling about an external piece of code anticipating what you assumed were your own private thought processes. Even after two decades in technology, the last three years of them spent building predictive text systems, I am still unnerved when my phone appears to know what I am about to say.
The best analogy I can offer is discovering that the diary you keep has been reading you back. Last week I was shopping for an anniversary gift for my wife. Before I finished typing, my search bar suggested “anniversary gift for my wife who has everything and hates jewelry.”
The suggestion was obnoxiously cynical and astutely precise at the same time. I sat there, jaw slack, wondering for the first time whether Google understands my marriage better than I do. It didn’t simply predict my search. It understood the contours of my relationship’s challenges, something it took me fifteen years of marriage to learn, and predictive text articulated them for me. What I find even more unsettling is that I know the other side of the equation.
Back in 2012, I attended countless meetings whose main agenda was how to make these systems feel more helpful, which is another way of saying “scarily intelligent.” We examined everything: how a person types, the messages they send, the times of day they write, even the pauses between bursts of typing. From that data we modeled users as avatars representing different behavioral patterns.
The end goal was always a system that “helped,” though we never used plainer language to describe what it actually did. “Helpful suggestions” sounds far more pleasant than “we’ve analyzed everything you’ve ever typed and can now predict your next thought.”
This may sound obvious to some users, but to many it remains a mystery: these systems don’t simply understand what you wish to communicate; they actively work out what you will say. And we had seen the data showing that users tend to accept a suggested word rather than physically type one out, even when the suggestion doesn’t fit their intended meaning.
Accepting words chosen by an invisible partner is simply the path of least resistance. I have done it more often than I would like to admit, settling for whatever phrasing requires the least work. And that creates a strange cycle.
An algorithm suggests words based on your previous writing; you accept them; your acceptance feeds back into the algorithm’s data about your communication patterns; and future predictions become even more likely to shape your language.
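To make the cycle concrete, here is a deliberately minimal sketch of how accepted suggestions can feed back into the very counts that generate them. The bigram counter is a toy of my own invention, nothing like a production model, but the loop is the same:

```python
# Toy illustration of the feedback loop, not any vendor's actual system:
# a bigram counter whose training data includes the suggestions the user
# accepts, so each acceptance makes the same suggestion more likely.
from collections import Counter, defaultdict

class ToySuggester:
    def __init__(self):
        # counts[prev][word]: how often `word` followed `prev` in the user's text
        self.counts = defaultdict(Counter)

    def observe(self, text):
        """Learn from text the user produced (typed or accepted)."""
        words = text.lower().split()
        for prev, word in zip(words, words[1:]):
            self.counts[prev][word] += 1

    def suggest(self, prev):
        """Offer the most frequent follower of `prev`, if any."""
        followers = self.counts[prev.lower()]
        return followers.most_common(1)[0][0] if followers else None

model = ToySuggester()
model.observe("i think you should talk to your teacher about it")

# The user types "should", sees the suggestion, and taps it. The accepted
# word is recorded as if the user had typed it, tilting future predictions
# further toward it: the prediction reinforcing itself.
word = model.suggest("should")   # -> "talk"
model.observe(f"should {word}")
```

Every tap of the suggestion bar is, in effect, a training example that says “yes, this is how I talk,” whether or not it was how you would have talked.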
It’s not so much a prediction machine as a very subtle instrument for steering your language. I ran into this first-hand last month while texting my teenage daughter about an issue at her school. I had typed “I think you should…” and my phone instantly suggested “talk to your teacher about it.” That wasn’t how I had planned to phrase it, but the suggestion wasn’t bad, so I tapped it.
But then the question arose: was that my thought, or had I just outsourced my parenting to Google’s language model? Who, at that moment, was advising my daughter? My phone’s predictive feature knows my texting patterns far better than my best friends do.
It knows I overuse the word “actually,” which is helpful for cutting down on the habit. It knows I am quite formal in emails sent before 10 AM and grow more casual as the day goes on. It knows I write longer sentences to my boss and shorter ones to my children.
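For illustration, the kind of pattern involved is trivial to compute from a message log. This sketch uses invented messages and field names, not any real app’s schema:

```python
# Illustrative only: per-recipient style statistics from a message log.
# The sample data and field names are invented for this sketch.
from statistics import mean

messages = [
    {"recipient": "boss",     "text": "Please find the report attached for your review."},
    {"recipient": "boss",     "text": "I will follow up once the numbers are confirmed."},
    {"recipient": "daughter", "text": "ok see you at 6"},
    {"recipient": "daughter", "text": "dinner soon"},
]

by_recipient = {}
for m in messages:
    by_recipient.setdefault(m["recipient"], []).append(m["text"])

for who, texts in by_recipient.items():
    avg_len = mean(len(t.split()) for t in texts)
    print(f"{who}: {avg_len:.1f} words per message")
# boss: 8.5 words per message
# daughter: 3.5 words per message
```

Group the same log by send time instead of recipient, and the before-10-AM formality shift falls out just as easily.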
I’m sure it knows patterns I am not aware of myself. That creates a strange new kind of self-awareness, one I never knew existed. For example, my phone kept suggesting “sorry” at the start of my emails.
Around the fiftieth time, I had to face an uncomfortable reality: apparently, I over-apologize in professional settings. The algorithm had tracked my communication style and arrived at a theory that I am, in fact, overly self-deprecating.
Helpful? Yes. A little awkward to be exposed by my own phone? Also yes.
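What the phone noticed amounts to nothing more than a frequency count. A back-of-the-envelope version, with invented emails:

```python
# How often do my emails open with an apology? Sample drafts are invented.
drafts = [
    "Sorry for the delayed reply, attaching the file now.",
    "Sorry to bother you again about the schedule.",
    "Here are the Q3 numbers you asked for.",
    "Sorry, one more question about the rollout.",
]

apologetic = sum(d.lower().startswith("sorry") for d in drafts)
print(f"{apologetic}/{len(drafts)} emails open with an apology")  # 3/4
```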
It isn’t all anxiety, though. Sometimes my phone’s predictions are so outrageously wrong that they are genuinely comforting. I was chatting with a friend about grabbing coffee when my phone prompted me to add, “I will meet you at the funeral home.” I have never met anyone at a funeral home.
I do not frequent funeral homes. The prediction was so spectacularly wrong that I burst out laughing, and it was oddly reassuring.
For all the technical sophistication, the system is still just guessing from the information it has, and that information is more limited than it looks. The most extreme kind of auto-complete anxiety arrives when your phone seems to know things you never explicitly told it. A colleague of mine was typing a message complaining about being tired, and her phone completed the sentence with “because of your insomnia.” She hadn’t told anyone about her sleep problems, but she had searched for insomnia treatments in her browser.
Most people don’t realize how blurred the lines between data silos have become. In her case, browsing history shaped her texting suggestions, which shape her ad targeting, which shapes the ads she sees, which shape her browsing, an ecosystem in which information about the user circulates invisibly between services. I’ve turned this into a game: trying to guess how the algorithm figured out what I was about to say.
Did it learn the phrase from my using it in similar situations before? Is it common among people like me? Or did it pick up on something so subtle that I have no idea what it was?
This quickly becomes a spiral of digital self-examination, like trying to diagnose yourself from your shopping receipts: informative in places, but grossly oversimplified. I don’t enjoy the discomfort, but at least I understand it.
After all, I helped build these systems. During development, we talked about making the app “understand” the user. We literally had a metric called “suggestion acceptance rate,” and we celebrated whenever users relied on our predictions.
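The metric itself, as I remember it, was embarrassingly simple. Roughly this, with placeholder event fields rather than the real telemetry:

```python
# Suggestion acceptance rate, simplified: of the suggestions shown to the
# user, what fraction were accepted? Event fields are placeholders.
events = [
    {"suggestion": "talk",  "accepted": True},
    {"suggestion": "meet",  "accepted": False},
    {"suggestion": "sorry", "accepted": True},
]

accepted = sum(e["accepted"] for e in events)
acceptance_rate = accepted / len(events)
print(f"suggestion acceptance rate: {acceptance_rate:.0%}")  # 67%
```

A team optimizing that number upward is, by construction, optimizing for more of the user’s words coming from the machine.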
No one, of course, described it as “successfully redirecting user communication patterns.” But that is what a high acceptance rate means. Now that I am on the receiving end of that carefully engineered feeling of being understood, it carries its own discomfort. The trouble with predictive systems isn’t that they are inaccurate; it’s the subtle way they flatten language.
Each suggested word is serviceable, but every suggestion I accept makes my communication a little less personal and a little less precise. Predictive text homogenizes language: it nudges everyone toward sounding the same, and as more people accept suggestions drawn from the same statistical pool, the collective voice blends into a single identity.
When I tried turning the feature off, I missed it within days. That is the devil’s bargain of these tools: they are genuinely useful even as they alter our relationship with our own thoughts. As with many digital conveniences, the price isn’t visible until you already depend on the benefit.
I suspect the most honest approach is conscious recognition of the tension. Now, when I accept a suggestion, I interrogate it: is this what I actually meant to say, or am I simply being lazy? Sometimes I deliberately choose a different word, a small act of defiance against the algorithm.
Each time, I feel a slight reassertion of control, a reminder that while my phone may have a good idea of what I would say, it has no claim on the human impulse to surprise, to deviate, to choose the unexpected. The algorithm can extrapolate from my past, but the next word on my screen is still mine alone. Or so I like to believe as I hover over “send,” wondering whether the person on the other end is letting their device guess their words too: two algorithms conversing through the thin cover of human proxies.
I’m not sure what that says about modern communication. But one thing I do know: my phone has all the answers.