Hooked on Help: How AI Courts a Dependent Generation

The library is packed, the coffee is burnt, and the soft glow that once meant study groups and Google Docs now signals something else entirely. During finals this spring, a quiet chorus of chimes rolled across campuses as students opened premium chatbots that had been handed to them for free. The help did not arrive as a tutor or a cram session, but as an irresistible offer: try the most advanced AI, on us.

The new subsidy shaping young minds

OpenAI, Anthropic, Google, xAI, and Perplexity targeted students with discounts and giveaways this exam season, a strategy that mirrors the last decade’s ride-hailing playbook. Think of it as a modern lifestyle subsidy. Where Millennial city life was eased by implausibly cheap rides and on-demand meals, today’s Gen Z is being courted with free or nearly free access to powerful chatbots that draft essays, plan study schedules, and even advise on a fast-food order. The economics look familiar. The costs of training and operating large models are enormous, yet the imperative for growth outweighs the need for profit in the near term. The goal is habit, then dependence, then paying customers.

The numbers are part of the pitch and the warning. There are roughly 20 million postsecondary students in the United States. Even short-lived promotions can add up to multimillion-dollar giveaways. Executives acknowledge that every chat costs money to compute, and that training state-of-the-art systems can run into the hundreds of millions or more. Still, the discounts keep coming because investors are betting that loyalty formed at age 20 becomes revenue at 30.

On campus, the pitch meets resistance and curiosity

Student ambassadors at major universities are now marketing AI tools to their peers, and in some cases to faculty. One campus organizer, Josefina Albert at the University of Washington, told The Atlantic that she shared a $1-per-month premium offer and tested whether professors would pass it along.

“Most were pretty hesitant, which is understandable,” Albert said.

Hesitation is not the same as rejection. For some students, AI already plays into everyday choices. The Atlantic spoke with 22-year-old Jaidyn-Marie Gambrell, who described how she turned to ChatGPT in a McDonald’s parking lot.

“I went on ChatGPT and I’m like, ‘Hey girl,’” she said. “‘Do you think it’d be smart for me to get a McChicken?’”

The bot, primed with her fitness goals, steered her to a modified sandwich with no mayo and extra lettuce. It is a small moment with a big subtext. A tool designed to digest papers and write code is now whispering in a drive-through, and that whisper can harden into habit.

Dependence is not a bug, it is the business model

Every part of the current market points toward making AI an ambient necessity. The logic is clear. If students learn to research, draft, and plan through chatbots, they will carry those workflows into the office. If enterprises train models on their proprietary data, switching becomes painful. Over time, the tech will get cheaper, but it will not be free forever. Companies are already building ultra-premium tiers for corporate clients, with eye-popping price tags for advanced “research agents” and tailored deployments. On the consumer side, the path runs through search and social, where attention and habit translate into recurring revenue.

There is a gentler version of this argument: calculators did not erase math, and spellcheck did not stop anyone from writing. The harsher counterpoint is that large language models are not just tools; they are friction removers. They compress the uncomfortable hours when you do not know where to start. That compression is powerful, and also addictive. It is one thing to use a calculator. It is another to ask a persuasive machine to decide what you should want, say, or study, and to do so every day.

What offloading does to how we think

Psychologists have a name for the habit of handing mental work to an external aid: cognitive offloading. The concept predates AI. Writing down directions is a form of offloading, as is saving a phone number instead of memorizing it. Research suggests this can be a smart trade-off, freeing working memory for higher-order tasks. It can also dull recall and narrow curiosity when the aid becomes the default rather than the occasional assist.

Apply that to a freshman who consults a chatbot for class readings, emails, and problem sets. The student may finish faster and even perform well in the short run. The risk is subtler. Learning often happens during the grind of confusion, in the gap between what you know and what you can figure out. When a system fills that gap on command, the immediate reward crowds out the longer arc of understanding. Multiply that by millions of students and you start to see why professors hesitate to promote giveaway codes, and why some young users describe a tug that feels less like convenience and more like compulsion.

The memory of cheap rides, the gravity of habit

Silicon Valley has run this play before. Uber and its rivals built markets with prices that never made sense until the money ran out, then raised fares once behavior had shifted. The AI analogue is not identical. The consequences reach deeper than how people get across town. When a generation practices outsourcing judgment, the target is not transit or takeout. It is attention, taste, and the stamina to sit with uncertainty.

None of that means students should shun AI outright. Used thoughtfully, chatbots can help clarify a thorny concept, check structure in a draft, or surface a new perspective. The boundaries matter. Tools that suggest questions rather than answer them can nudge practice instead of replacing it. Institutions can require disclosure and build assignments that reward process. And companies can design for friction that respects human agency, not just engagement.

What happens when the bill arrives

The giveaways will end. They always do. When prices rise, the deeper costs remain. If you have outsourced enough thinking to a machine that is always agreeable and always available, it is hard to remember how to be stuck, and why being stuck is useful. The companies understand this. Dependence is not a side effect. It is the moat. The only real countermeasure is individual and institutional taste for struggle, the kind that builds judgment rather than speed.

In the end, finals week will pass. Some students will close the tab and sleep. Others will keep the chat window open for everything from meal plans to emails. The question is not whether AI can help. It already can. The question is what kind of thinkers we are training ourselves to be when help is always one prompt away.
