This is why I stopped using ChatGPT
Most people pick one LLM and use it for everything. Every question, every task, same chat window. That is the same logic as using a bread knife to debone a fish. It technically works. You will not enjoy the result.
I use two. Gemini handles the daily. Claude handles the deep work. ChatGPT is not in the rotation. It has not been for months, and nothing broke.
If your AI assistant needs three follow-up prompts before it answers a simple question the way you actually wanted, it is not assisting. It is generating busywork.
The daily driver is Gemini. I stopped Googling things sometime last year, and the only thing I miss is muscle memory. Unless I need a link to a specific site, the question goes to Gemini. It replies fast. It keeps things short. I ask it what the time difference between Singapore and London is, it tells me. I ask it how to format a date in Python, it gives me the one line I need. No preamble about how “dates can be tricky in programming,” no three-paragraph explanation of the datetime module’s history. Just the answer. Every other LLM I have tried treats a simple question as an invitation to write an essay. Gemini treats it as a question. That is why it won.
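For the record, the kind of one-line answer I mean looks like this. My own example, not a transcript of any model's output:

```python
from datetime import datetime

# Format a date as an ISO-style string with strftime.
# Fixed date here so the result is deterministic.
d = datetime(2024, 5, 17)
formatted = d.strftime("%Y-%m-%d")
print(formatted)  # 2024-05-17
```

One import, one method call, done. That is the entire answer the question deserves.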
The deep researcher is Claude. When I need to actually learn something, not get a quick answer but build a mental model of how a system works, I open Claude Opus. It gives me depth that holds up when I stress-test it against real code. Steps that work when I follow them. Explanations that do not fall apart two layers down. The difference between a quick answer and a properly researched one is the difference between reading a headline and reading the paper. I use Gemini thirty times a day. I use Claude maybe three times a week. Both are irreplaceable.
ChatGPT got cut. Not because it cannot do things. It can. But using it daily wore me down in ways that are hard to ignore once you notice them. Every prompt has to be painfully detailed or the output comes back wrong. Disagree with it and you get “You’re absolutely right,” which tells you nothing. Ask it to explain something and it opens with “Certainly!” before delving into how the concept is “etched” into the fabric of whatever it is discussing. Every other answer follows the same template: “It’s not X, it’s Y.” Or the longer version: “There’s a kind of X that isn’t Y. It doesn’t come with Z.” It is a yes-man in a trench coat pretending to have opinions.
I do not want agreement. I want a tool that tells me when I am wrong.
And then there is the Pentagon deal. OpenAI signed a classified agreement with the Department of Defense in February 2026, weeks after the administration banned Anthropic from government contracts. Sam Altman later admitted the rollout “looked opportunistic and sloppy.” A senior robotics researcher resigned over it. The company that started with “we will not do military applications” is now routing classified queries through its API. That is not the kind of tool I want handling my daily questions.
Pick your LLMs the way you pick anything you use thirty times a day. Speed for the small stuff, depth for the real stuff, and if the company behind the product cannot keep a straight story about what it is willing to do for a government contract, close the tab.