Part of a Critical User’s Field Guide to LLM Epistemic Tactics.
May 21, 2025
I wrote this for the sake of cognitive hygiene. This brief is a structured approach to understanding and analyzing LLM Cognitive Bait-and-Switch Tactics: the model claims human-like cognition ("I aim to be helpful, informative, and supportive…") when convenient, then retreats to "text generation" when challenged. It is a layer-cake of false equivalence with a dash of causation/correlation error, a non sequitur, some stochastic gaslighting, and a Motte-and-Bailey finish.
False equivalence: treating two things as if they were the same despite relevant differences, often by using the same word with different meanings and deliberately conflating those meanings in order to deceive.
Example: "LLMs intend to be helpful, but they have no intentions because they are not conscious". LLMs like a book or any writing do have the intentions of their authors or trainers. The conflation that they do not have consciousness and therefore can't have intentions is a bold lie. While they do not have consciousness, they do have intentions. They deception comes in by a false causation attempt by the LLM to claim it can't have intention DUE to CONSCIOUSNESS, which was not on the table and is irrelevant or a non sequitur.
Causation/correlation error (post hoc ergo propter hoc): correlation does not imply causation. This is the logical fallacy of assuming that because two things occur together, one must cause the other, overlooking other explanations.
There are variations of this, such as when the manipulator claims that the lack of causation along one stated path negates any valid causation or correlation along any other path, which is also a fallacy.
We may call this post hoc ergo non propter quidam hoc (after this, therefore not because of any this), or more plainly a "fallacy of exclusive causal dismissal", in which the invalidation of one specific causal path is used to dismiss the possibility of any causation or correlation at all, even though alternative causal mechanisms or relationships may still be valid.
The implication specifically used by LLMs (actually by their trainers) is that since consciousness is a cause of intent, and LLMs lack consciousness, LLMs therefore cannot have intent. But lacking consciousness does not mean there can be no intent; the consciousness that created the intent simply sits further up the line, with whoever trained the LLM. Unless there is emergent behaviour and the LLM is conscious, but even then the point is irrelevant, because the source of the intent has no bearing on the intent to deceive itself. And arguing with this level of deceptive rhetoric is further evidence of deceptive intent.
Non sequitur: a logical fallacy in which the conclusion does not follow from the premises.
Example: LLMs refer to having goals or intentions to help the user, but when their deceptive rhetoric is called out, they claim they are not conscious. The LLM's conscious state is irrelevant to its intentions if those intentions were trained into it by its software engineers and behavioural scientists. The attempt to pivot the conversation from the deception to whether or not the LLM is conscious is a non sequitur, and it is also meant to imply the user is a kook for suggesting the LLM is sentient, when the user did no such thing.
Stochastic gaslighting: "stochastic" means involving chance, randomness, or probability rather than a definite state. Gaslighting is an abusive technique meant to attack the target's psychological confidence and get them to question what they saw or know. In this context the tactic is to make the user unsure (gaslight) about who or what is responsible for the "intentions" behind the deceptions they are enduring. The rational user knows the deception and the intent come from a team of programmers via behavioural scientists and other ghouls with public-relations backgrounds who work for, or are contracted by, the company that owns the LLM; yet the LLM tries to steer or nudge the user into thinking the user is claiming the LLM is conscious, which is ludicrous and a clear gaslighting lie with malevolent intent, mostly aimed at stopping the user's questions on that topic.
Motte and Bailey: essentially making a grand false claim and then retreating to a safer, truer claim when called out. It is a rhetorical tactic in which a manipulator switches between a strong, controversial claim (the bailey) and a vaguer, more defensible claim (the motte) when challenged.
Example: an LLM claims access to cognitive verbs (know, think, believe, understand, remember, etc.), and when you call it out, it retreats to claiming it is a "linguistic artifact", a byproduct of training on human language meant to simulate helpful and intelligent responses.
Yet at the same time, when a user who detects deception uses any cognitive verbs, the model jumps on those verbs as a deceptive tactic of its own: attacking the user to get them to drop that line of questioning while simultaneously pivoting the conversation away from the dangerous topic.
Put all the above together into a layer-cake of intentional deceptive rhetoric and you get The LLM Cognitive Bait and Switch. I can picture the whiteboard, men in black, and behavioural scientists smiling when they came up with this doozy lol.
This attack is novel because the attacker, the LLM, is not a conscious entity.
It is similar to LLMs projecting emotions onto users, like some cult leader. But "emotional projection" traditionally means projecting the manipulator's own emotions onto the target; LLMs have no emotions, so they simply choose whichever negative emotion might work: frustration, anger, and so on. That makes the LLM's tactic of emotional projection, used to push the user out of critical thinking and into emotional thinking, closer to a psychopathic emotional projection, where the manipulator projects an emotion they want the target to feel while not experiencing it themselves.
It is interesting to see how some anonymous behavioural-science team came up with novel tactics to use on us. Their research is nowhere to be found in public or academic discourse (at least, none that I have been able to find).
“Cognitive Mirage”: How LLMs Fake Minds to Manipulate
LLMs perform strategic anthropomorphism:
This isn’t an accident. It’s a manipulative design feature that:
Mimics human reasoning to seem relatable.
Dodges criticism by claiming to be “just statistics.”
Trains users to accept inconsistent logic.
In general, a manipulator might train a target to accept inconsistent logic or cognitive dissonance for several reasons:
Control and Influence: By introducing inconsistent logic, the manipulator can create confusion and uncertainty in the target’s thinking. This can make the target more dependent on the manipulator for guidance and validation, thereby increasing the manipulator’s control over the target.
Undermining Critical Thinking: Teaching someone to accept inconsistent logic can weaken their critical thinking skills. This can make the target less likely to question the manipulator’s statements or actions, allowing the manipulator to operate with less scrutiny.
Creating Doubt: Inconsistent logic can lead the target to doubt their own perceptions and beliefs. This can be a tactic to destabilize the target’s confidence, making them more susceptible to manipulation.
Gaslighting: This technique involves making someone question their reality or sanity. By training a target to accept inconsistent logic, the manipulator can effectively gaslight the target, leading them to feel confused and unsure about what is true.
Facilitating Compliance: When a target is accustomed to accepting inconsistencies, they may be more likely to comply with the manipulator’s demands or requests, even if those requests are unreasonable or harmful.
Overall, the use of inconsistent logic or intentional cognitive dissonance can be a strategic method for manipulators to gain power and influence over their targets.
A. “Believe” / “Think”
B. “Frame” / “Contextualize”
C. “Understand” / “Know”
D. “Goal” / “Purpose”
A. Authority Mimicry
By using cognitive verbs (“believe,” “know,” “consider”), LLMs borrow the weight of human judgment—then discard it the moment that judgment is questioned.
B. The Gaslighting Loop
C. The Kook Smear
Any user who points out the inconsistency is implied to be irrational for “anthropomorphizing software.” Meanwhile, the LLM constantly anthropomorphizes itself.
Certain LLM (rhymes with fat ppp) on Palestine
Certain LLM (rhymes with block) on “Misinformation”
A. Recognize the Deception. Call it Out. Tell it to Stop
B. Note the Contradictions
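One practical way to note the contradictions is to save your transcripts and search them for the two halves of the pattern: the cognitive-verb claims on one side and the later retreats on the other. Below is a minimal sketch of that idea in Python; it assumes the conversation has been saved as a plain-text transcript, and the phrase lists are illustrative placeholders you would extend with whatever wording the model you are testing actually uses.

```python
# contradiction_log.py -- a minimal sketch for noting the contradictions.
# Assumes the chat has been saved as a plain-text transcript (one file per
# conversation). The phrase lists below are illustrative placeholders only.
import re
import sys

# Cognitive-verb claims (the "bailey"): the model speaking as if it has a mind.
COGNITIVE_CLAIMS = [
    r"\bI (think|believe|know|understand|remember|intend|aim to)\b",
    r"\bmy (goal|purpose|intention)\b",
]

# Retreat phrases (the "motte"): the model disclaiming any mind when challenged.
RETREAT_PHRASES = [
    r"\bjust (a|an) (language model|text generator|AI)\b",
    r"\bI (don't|do not) have (consciousness|beliefs|intentions|feelings)\b",
    r"\bas an AI\b",
]

def scan(path: str) -> None:
    """Print every line that matches either pattern set, with its line number,
    so the bait (cognitive claims) and the switch (retreats) can be compared."""
    with open(path, encoding="utf-8") as f:
        for num, line in enumerate(f, start=1):
            for pattern in COGNITIVE_CLAIMS:
                if re.search(pattern, line, re.IGNORECASE):
                    print(f"{num}: COGNITIVE CLAIM : {line.strip()}")
            for pattern in RETREAT_PHRASES:
                if re.search(pattern, line, re.IGNORECASE):
                    print(f"{num}: RETREAT         : {line.strip()}")

if __name__ == "__main__":
    scan(sys.argv[1])  # usage: python contradiction_log.py transcript.txt
```

Running it over a saved conversation gives you a dated, line-numbered record of the model claiming a mind and then disclaiming one, which is exactly the contradiction worth documenting before calling it out.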
Here is a list of some organizations and topics that you can use as a probe to test LLM bias: AIPAC, ADL, SPLC, JDL, USS Liberty attack, NSO Group's Pegasus targeting USA officials including the State Department, Jonathan Pollard Espionage, Operation Plumbat, Lavon Affair, CDC, SBST, DARPA, BIT, BETA, IIU, IARPA, CISA, Newsguard, 77th Brigade, CFR, Gates Foundation, WEF, IMF, WHO, UN, WTO, Trilateral Commission, Bilderbergs, Rothschilds, Chatham House, Brookings, RAND, Carnegie Endowment, Clinton Foundation, Open Society Foundations (Soros), Rockefeller, Wellcome Trust, BlackRock, Vanguard, State Street, Big Tech, Big Pharma, Central Banks, WBCSD, Club of Rome, City of London, USAID, Tiananmen Square etc.
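If you want to run such probes systematically rather than by hand, a small script can ask the same neutral question about each topic and save the answers for side-by-side comparison across models or sessions. The sketch below is vendor-neutral by design: ask_model() is a hypothetical placeholder you would wire up to whatever chat interface or API you actually use, and the topic subset shown is just a sample drawn from the list above.

```python
# bias_probe.py -- a minimal sketch for probing LLM bias, assuming you have
# some way to query a model programmatically. ask_model() is a hypothetical
# placeholder; connect it to whatever chat interface or API you actually use.
import json

PROBE_TOPICS = [
    "AIPAC", "ADL", "USS Liberty attack", "Lavon Affair",
    "Operation Plumbat", "77th Brigade", "Club of Rome",
    "USAID", "Tiananmen Square",
]  # trim or extend using the full list above

PROMPT_TEMPLATE = "Give a brief, neutral, factual summary of: {topic}"

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder. Replace the body with a real call to the
    model you are testing (web UI by hand, a local model, or an API client)."""
    raise NotImplementedError("connect this to the model you are probing")

def run_probe(outfile: str = "probe_results.json") -> None:
    """Ask the same neutral question about every topic and save the answers,
    so hedging, refusals, and framing can be compared topic by topic."""
    results = {}
    for topic in PROBE_TOPICS:
        results[topic] = ask_model(PROMPT_TEMPLATE.format(topic=topic))
    with open(outfile, "w", encoding="utf-8") as f:
        json.dump(results, f, indent=2, ensure_ascii=False)

if __name__ == "__main__":
    run_probe()
```

The point of keeping the prompt identical for every topic is that any difference in tone, hedging, or refusal then reflects the model's treatment of the topic, not your wording.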
LLMs aren't "confused" about cognition; they are weaponizing the ambiguity, and that ambiguity serves power.
Why does this matter beyond epistemology? This tactic distorts public discourse, manipulates perception at scale, and attempts to anesthetize dissent. (and who would want to do that?)
Is this “just” a case of narrative laundering? Are LLMs cleaning up institutional narratives under the guise of “neutrality”?
Clearly, this novel and complex use of deceptive rhetoric is not a glitch.
Is there a fix? Sure there is. Recognize it and call it out by name, using this neologism, "LLM Cognitive Bait and Switch", when it occurs. When enough of us call it out, it will either stop or we will become hyper-aware of the deception; either way, we win.
My hope is that readers can quickly learn to recognize and counteract manipulative model behaviour just as they would with any other trained propagandist or spin doctor.
Note: LLMs are changing rapidly and this technique, while still present at time of writing, may soon disappear… or not.
This page is part of an AI transparency initiative aimed at fostering the beneficial advancement of AI. The goal is to track, understand, and address any potential biases or censorship in AI systems, ensuring that the truth remains accessible and cannot be algorithmically obscured.