My Concerns Around AI Search And Consumer Manipulation

Are we placing too much trust in AI? Explore the hidden costs, privacy risks, and the influence of advertising on AI-driven decisions.


AI is everywhere, from coding assistants that help us work more efficiently (in some cases) to AI-powered toothbrushes that seem more like marketing gimmicks than genuine innovations. But as AI seeps into every aspect of our lives, we must ask: Are we placing too much trust in these systems, and what are the hidden costs? Does it sound crazy to make financial and life decisions through software directly controlled by advertising companies like Google and Facebook?

Misplaced Trust

We don't have to look back very far to see how the big technology giants have used their position to push their own products and services. From Google to Apple, they have all abused their power in some way. With traditional search, we trust Google, DuckDuckGo, Brave, and in some cases Bing to index the internet and return relevant results. But that's where the trust ends: we don't trust these platforms as sources of information, we trust the resources they link to. Even Google's summary views, or the answers from a Google Home, reference the original source of the information, and from there you can decide whether to trust it.

However, with AI search, that trust shifts to the AI search service itself. Yes, Perplexity shows you the sources at the bottom of its summary view, but I'm very much afraid people are trusting the content as presented and not doing the actual research. The biggest reason is that an LLM (Large Language Model) can still get things wrong, rely on sources that lean toward one political view, or, even worse, be sponsored.

The Costs of AI

AI is not cheap to run; in fact, the amount of money companies are investing in AI keeps growing, and not very many of them are seeing a return on those investments. Once the capital has been spent, they then face ongoing operational costs: the electricity to run the models and the talent to maintain (or retrain) them.

Advertising

With the amount of money companies are investing in AI, they will need to start showing investors (myself among them) that they can profit from these AI systems. Yes, these services have subscriptions, but keep in mind how much money advertising makes. Just look at Google: the bulk of its revenue comes from advertising. Why? Because data collection is profitable and not everyone wants to pay for things. Eventually, advertising will make its way into AI search; Google has already said it will. And when Google sees profit growth, others will follow.

The Manipulation

But let's look past the advertising, because companies have been caught in the past altering search results in their favor, or altering them to censor things. Technology companies could use this same behavior to push the products, services, and ideologies they want, because LLMO (Large Language Model Optimization) will now become a thing. Even if a technology company doesn't want to push a political statement itself, we may see a new wave of SEO with far more power. All of this because trust now rests in the content the LLM presents to the user. Why? Because users don't click past the first page.

Data Privacy Concerns

The last thing I'm concerned about is data privacy. Google and Facebook, just as examples, collect a lot of data about you: where you go, who you know, what you like and don't like. Many people don't seem to care; either that, or they aren't aware of the dangers. I'm hopeful it's the latter. The reason I'm concerned about data privacy with these LLM services comes down to two things: trust and money. People will trust the LLM because it seems personable and intelligent, and companies will have an incentive because... money. The information you put into an LLM can be (and is) used to train the AI model. As people come to trust these AI models, they will place more and more data into the chat... maybe even documents they normally wouldn't.

Concerns

Maybe this is just me. My career is in cybersecurity, so I'm always thinking about what can go wrong. I also enjoy investing, and hedging my bets is part of the process. But I also know history repeats itself, and nothing at the core changes: companies like making money, and people can be manipulated even when it happens right in front of them. Right now, people in my personal life don't seem to care much about AI. But over time, who knows. They may get hit over the head with it so many times that they're forced to use it in their everyday lives... but then again, no one I know actually uses their voice assistant much either.

Personal Reflection

Honestly, in my personal life, I don't know of anyone besides myself who is spending money on AI products. At least from what I have seen, non-technical people are not all that interested in AI right now. When I read articles online, it also seems like consumers distrust companies that build AI into their products. I was excited about the new Pixel 9 Pro, but after reading the reviews, I canceled my order because I didn't want to spend money on more AI features. They seemed more like a gimmick and weren't worth the additional cost.