
10 AI-Generated Ideas That Fly in the Face of Critical Thinking

Brandolini’s Law suggests that it’s infinitely harder to refute bullshit than to produce it. While honest people have to spend hours disproving baloney, the bullshitter can pump out new nonsense on a whim. It’s become my life’s mission to map critical thinking by collecting and sharing interesting concepts on my blog and in my 3 Ideas in 2 Minutes newsletter. My world is that of critical thinking, philosophical principles and thinking models. But there’s a new competitor in town. As I recently discovered, artificial intelligence has opened up a brave new world of AI-generated ideas that may or may not be utter hogwash.

In the world of AI text generation, fact lives close to fiction, entertainment meets incredulity and the truthseeker is governed by Brandolini’s Law. But let’s back up a little and discuss first what I mean when I say bullshit and how AI-generated text works. Only then can we decide if knowing about the Truth Fallacy Equation will change your life. If your business can benefit from the 1283rd Man Rule. Or if it’s worth upgrading your mind with the Bed of Aristotle.


Bullshit Defined

I’m using the term bullshit as a technical term. In fact, there’s a whole literature on it. It traces its roots back to 1986 when philosopher Harry Frankfurt wrote his seminal essay On Bullshit. It was also Frankfurt who coined a definition bullshit scholars rely on to this day:

When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false.

For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says.

He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.

Harry G. Frankfurt, On Bullshit

Some 35 years later, social psychologist John Petrocelli added a new dimension to the definition: “The degree to which something qualifies as bullshit,” he wrote, was “inversely proportional to the degree to which the claim is based on truth, genuine evidence, and/or established knowledge.” It’s precisely this indifference to truth, and Petrocelli’s criteria for the quality of bullshit, that I was interested in.

How AI Text Generation Works

Now, how does AI text generation work? For my AI-generated ideas, I used OpenAI’s Playground feature based on GPT-3. While this sentence would’ve been considered nonsensical ten years ago, here’s what it means today. GPT stands for Generative Pre-trained Transformer, with three being its version number. It’s a large language model (LLM) trained on massive amounts of text sourced from the internet: blogs, news articles, books and papers.

Okay, but how does it really work? You feed the AI with a natural language prompt such as: Explain why Canberra is the capital of Australia. OR Summarise this email using bullet points. OR Draft an email to my best friend telling him I can’t attend his wedding. The AI then does its magic. It answers the Canberra question correctly, nails the summary and politely hopes that the email finds your best friend well while lying to his face by making up a health-related excuse.

Alright. But how does it really really work? Obviously, the AI is not sentient. It never met you, let alone your made-up friend. The language model merely predicts the most likely next word in a sentence, starting with your prompt and based on the vast amounts of data it was trained on, with a knowledge cut-off date of September 2021. The AI can describe, summarise and analyse. And while it tends to refrain from value judgements, it has a surprisingly good grasp of meaning and context. But how will it handle the nonsensical ideas I throw at it?
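To make “predicting the next word” concrete, here’s a vastly simplified sketch. Real LLMs like GPT-3 use neural networks over tokens, not word counts, but the core idea — continue the text with a likely next word — can be illustrated with a toy bigram model (the tiny corpus below is my own, purely for demonstration):

```python
from collections import defaultdict

# Toy corpus, themed appropriately for this article.
corpus = (
    "the bullshitter does not care about the truth "
    "the honest man cares about the truth "
    "the liar knows the truth and hides it"
).split()

# Count how often each word follows each other word.
followers = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    candidates = followers[word]
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(prompt_word, length=5):
    """Greedily extend a one-word 'prompt', bigram style."""
    words = [prompt_word]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # → "the truth the truth the truth"
```

Note how greedily picking the single most likely word immediately loops. That’s one reason real models sample from a probability distribution instead — and why the same prompt can yield different answers each time.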

10 AI-Generated Ideas and Concepts

To generate the ideas below, I combined familiar ideas with obscure ones to create a new phrase. I then fed the phrase to the algorithm and asked it to explain the concept. (It’s surprising how authoritative you can make something sound by adding terms such as heuristic, law, effect, model or rule.) Only then did I research whether the ideas actually exist and whether the explanations hold up. So here they are: ten AI-generated ideas that defy critical thinking. The AI responses are unedited; my comments are in italics.
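As a minimal sketch of that naming trick — bolt an authoritative-sounding suffix onto an obscure word and you get a plausible “concept” ready to feed to a language model (the word lists here are illustrative picks, not an actual prompt pipeline):

```python
import random

# Obscure words paired with authority-lending suffixes.
obscure = ["Wolpertinger", "Yorp", "Precriticum", "Geppetto's"]
authoritative = ["Heuristic", "Law", "Effect", "Model", "Rule"]

def coin_concept(rng=random):
    """Coin an official-sounding concept name from the two lists."""
    return f"{rng.choice(obscure)} {rng.choice(authoritative)}"

random.seed(42)  # reproducible nonsense
for _ in range(3):
    print(coin_concept())
```

Prepend “Explain the …” to any of these and the Playground will oblige with a straight face.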

1. Law of Bullshit Recognition

The Law of Bullshit Recognition states that the more bullshit someone hears, the better they will become at recognizing it. This means that people become more adept at determining what is true and what is false over time, as a result of their experiences.

The law is based on the idea that the more people are exposed to false information, the more likely they are to recognize it as such in the future. The law also implies that people tend to trust information they have heard more than they trust information they have not heard.

On the surface, this explanation makes absolute sense, even though no such law has ever been formulated until now. I’m not sure whether it holds up empirically. It does look, though, as if those who consider themselves most skilled at detecting bullshit are the most receptive to it.

2. Black Milk Communications Model

The Black Milk Communications Model is a framework for understanding and optimizing the connections between an organization and its stakeholders. It consists of four main components:


  1. Communication Strategy: Establishing an overarching strategy for how the organization will communicate with its stakeholders. This includes determining the desired outcomes and developing a plan that outlines the goals, objectives, and methods for achieving them.
  2. Message Design: Crafting a clear, consistent, and compelling message that resonates with the target audience. This includes understanding the needs and expectations of the stakeholders and tailoring the message to meet them.
  3. Delivery: Selecting the appropriate channels and vehicles for disseminating the message to the stakeholders. This includes considering the most effective ways to reach the target audience, such as print, digital, or social media.
  4. Measurement: Evaluating the effectiveness of the communication strategy and message design. This includes analyzing the feedback received from the stakeholders and assessing the impact of the communication efforts.

The model is called the Black Milk Communications Model because it is designed to help organizations achieve their goals and objectives by delivering a highly effective and consistent message. The black milk symbolizes the “perfect mix” of communication components needed for effective communication and stakeholder engagement.

This result is quite good. Granted, the model remains generic and abstract — as models do. But the name and its rationale are exactly the kind of pseudo-curiosity-inducing branding you’d aim for. I can see myself making a six-figure income by lecturing about it at business summits and teacher conferences.

3. Yorp Effect

The Yorp Effect is a phenomenon that suggests that humans are more likely to remember words that contain the same vowel sound in each syllable. This effect was first described by psychologist George A. Miller in 1951.

The effect is named after an experiment conducted by Miller in which participants were asked to remember a list of random words. Those words which contained the same vowel sound in each syllable (e.g. Yorp and Boob) were remembered more easily than those without the repetitive sound structure.

This one turned out differently. The YORP Effect is real. However, it’s more concerned with physics and asteroids than with linguistics. I could be wrong. But I found no indication this experiment took place. In any case, note how it’s the specificity that turns average bullshit into high-quality nonsense.

4. Wolpertinger Heuristic

The Wolpertinger Heuristic is an algorithm-based approach to problem-solving that focuses on the exploration of alternative solutions and the use of creative thinking to identify the best solution for a given problem. It encourages problem solvers to consider a range of possibilities and to think outside the box when tackling complex issues.

The heuristic is named after the mythical creature, the Wolpertinger, which is a combination of several different animals, symbolizing the need to look at a problem from multiple angles. The name also references the German phrase “den Wald vor lauter Bäumen nicht sehen”, which translates to “not being able to see the forest for the trees”, emphasizing the importance of looking at the bigger picture when solving a problem.

I like how the language model tried to link an actual German mythical creature with the idea of mental shortcuts. In a weird sense, it’s not exactly wrong. In and of themselves, the German saying and its translation make sense, too. So does the underlying advice. Paradoxically, it’s the attempt to combine all ideas that turns it all into fragmented rubbish.

5. Truth Fallacy Equation

The Truth Fallacy Equation states that if a person believes something to be true, then it must be true, and if a person believes something to be false, then it must be false. This equation is fallacious because it assumes that all beliefs are based on facts, when in fact, beliefs are often based on opinion, emotion, and other subjective factors. Additionally, the equation does not take into account the possibility that some beliefs could be both true and false at the same time.

Again we have a result that impresses with its simplicity. I always suspected that even the most mundane commonplace statement can catch on as long as it’s given a catchy name. I can see the Truth Fallacy Equation being shared on Twitter by a reputable account and getting about 20k likes.

6. The Bed of Aristotle

The Bed of Aristotle was a legendary bed designed by the ancient Greek philosopher Aristotle. The bed was said to be constructed of bronze and iron and was flanked by two bronze statues of lions. The bed had a secret compartment where Aristotle could store his scrolls and documents. According to legend, the bed’s frame was adorned with intricate carvings depicting the constellations of the night sky.

The Bed of Aristotle was not only a symbol of the philosopher’s wealth and power, but it also serves as an example of his innovative approach to problem-solving and critical thinking. Aristotle used the bed as a thinking tool, and the secret compartment was likely a place where he kept his notes and records. By studying the stars and constellations, Aristotle was able to develop his theories of philosophy, physics, and astronomy.

The Bed of Aristotle is a reminder that critical thinking and problem-solving require careful thought and analysis, and that we can often find creative solutions to difficult problems by looking to the heavens.

Here we have a true masterpiece. The idea for this prompt came from the Bed of Procrustes, a proper Greek myth. Perhaps I missed this tidbit about Aristotle’s life. But I doubt the philosopher reimagined sleep in this way. In any case, here’s a free business idea: Designing, building and selling 21st-century Aristotelian thinking beds.

7. 1283rd Man Rule

The 1283rd Man Rule is a concept in the US Navy that every sailor is important and should be respected. It is based on the idea that if a ship is carrying 1200 crew members and one is lost at sea, that one missing sailor is still important and should be honoured and remembered. The 1283rd man is symbolic of all the sailors who have been lost at sea and serves as a reminder of the importance of each individual sailor.

I’m particularly fond of this one as it sounds very real. Apart from the not-very-snappy name maybe. The prompt was inspired by a popular post about the (very useful) Tenth Man Rule. I can absolutely see David Brent sharing the 1283rd Man Rule as an inane idea at the next staff meeting. We should all strive to be more like the US Navy. Now go sell paper.

8. Geppetto’s Hierarchy of Mental Models

Geppetto’s Hierarchy of Mental Models is a framework developed by the psychologist Paul Geppetto which suggests that there are five different levels of mental models:

  1. Mental Image: A mental image is the most basic level of mental model and is simply a representation of an individual’s experiences. This can include images, sounds, and even smells.
  2. Mental Concept: A mental concept is an abstraction of a mental image and is an individual’s understanding of a particular idea or concept. It is a synthesis of cognitive and emotional elements.
  3. Mental Script: A mental script is a mental model which is an individual’s interpretation of how a particular situation should be handled. It is based on past experiences and is used to guide behaviour in a given situation.
  4. Mental Schema: A mental schema is a mental model that is a representation of a group of related concepts and ideas. It is used to help organize and interpret information.
  5. Mental Model: A mental model is the highest level of mental model and is a synthesis of all the other levels. It is a general representation of an individual’s beliefs and values and is used to guide behaviour in a variety of situations.

Geppetto’s Hierarchy of Mental Models is important because it provides a framework for understanding how individuals use mental models to guide their behaviour and make decisions. By understanding the different levels of mental models and how they work together, it is possible to better understand an individual and their behaviour. This can be used to create more effective communication strategies and more effective interventions.

Born out of Graham’s Hierarchy of Disagreement and me seeing an ad for the new Pinocchio movie, this result is difficult to verify. The premise sounds very legit. If only Paul Geppetto were a real person.

9. Precriticum Analysis

Precriticum Analysis is a data-driven approach to optimizing an organization’s current operations. It involves assessing an organization’s current operational processes and identifying areas of improvement. The analysis involves understanding the current operational environment, determining the root cause of inefficiencies, and developing strategies to address them.

The goal of Precriticum Analysis is to improve operational efficiency and reduce costs. This can be achieved through improving processes, streamlining operations, and reducing waste. Additionally, Precriticum Analysis can help organizations identify potential opportunities for growth and expansion.

The term “criticum” comes from the Latin word “criticus”, which means “judge” or “examiner”. Precriticum Analysis is a form of “critical examination” of an organization’s current operations, with the goal of finding ways to improve them.

This made-up prompt features an awkward term that nonetheless sounds authoritative due to its Latin origin. It’s inspired by Premortem Analysis and will be part of my upcoming book: Analytica Criticum: A Proven Framework for Efficient Milestone Streamlining.

10. O’Shaughnessy GIF Principle

The O’Shaughnessy GIF Principle is a communication strategy that suggests that when communicating with an audience, it is best to use GIFs (animated images) to capture attention, establish context and convey emotion. The idea is that GIFs are universally understood, can communicate complex ideas quickly, and can be used to make a lasting impression. The principle is named after James O’Shaughnessy, a popular Twitter user who often posts humorous gifs in response to serious topics.

Here we have a real phenomenon that just hasn’t been officially coined yet. I was surprised that the AI actually linked it to Jim O’Shaughnessy. The problem is the reliability of the result when asking the same question multiple times. Which O’Shaughnessy should we credit with the principle? The “popular Twitter user” or various “marketing experts” named “Gavin”, “Barry” or “Steve”?

BONUS: AI Bullshit Dilemma

The AI Bullshit Dilemma is a problem posed by artificial intelligence (AI) researchers that is particularly relevant to natural language processing (NLP) applications. The problem is that while AI systems are increasingly capable of understanding and generating human language, they are not yet able to distinguish between meaningful information and nonsense.

This creates a problem where an AI system may generate output that is technically correct, but is actually nonsense or contains false information. As AI technology advances, this dilemma will become increasingly important, as AI systems become more widely used in applications that require accurate information.

Our final example of AI-generated ideas is probably as close as you can get to AI metacognition. I couldn’t agree more.

So What?

So what to make of all of this? Most AI-generated ideas clearly fulfil the criteria for bullshit. AI isn’t an honest and trustworthy friend. But it’s not out to get you for personal gain, either. It’s much worse: LLMs are indifferent to you and to the truth. After all, there’s no penalty if they’re wrong. Sure, if you point out a mistake, it’ll express remorse. But it’s about as sorry for the inconvenience as an underpaid call centre agent who handles dozens of calls each day and reads the line from a script.

It gets more complicated when we consider the degree to which the AI’s claims are “based on truth, genuine evidence, and/or established knowledge”. Parts of the results are outright false. Parts are partially true and others are surprisingly inspiring and useful. The problem is to figure out which is which. Any attempt to separate fact from fiction gets onerous very quickly. Up to the point where it would’ve been much more efficient to research the idea the traditional way. In which case you at least get the chance to vet different sources and piece the facts together yourself.

In a more charitable reading for the AI (and a less charitable reading for the author), the AI-generated ideas satirise the impulse to overindulge in buzzwordy thinking tools in the blind pursuit of self-improvement. It works a bit like a fake job title generator. With the added bonus of a detailed job description for your new position as Legacy Accountability Operator. So at the very least, this experiment gave me fair warning: watch out that your essays don’t derail into elaborate vacuous drivel.

Because many of the above concepts sound like authoritative yet generic advice: interchangeable and elusive. The distinguishing factor is of course their practicality and applicability. Much like Daniel Dennett described the excesses of philosophers in his essay Higher-Order Truths About Chmess. If the ideas are kept in the abstract, they will remain trivial with no abiding significance to real life. If, on the other hand, a manager decides to implement the 1283rd Man Rule and it verifiably improves the company culture, who am I to judge?

Closing Thoughts

To no one’s surprise the “bullshit asymmetry” of Brandolini’s Law still carries weight in the age of AI. It took way more time to scrutinize the bogus concepts than it took the algorithm to produce them. Granted, AI will evolve further. But it sure seems it will only get easier for anyone to produce highly convincing gobbledygook.

In this way, AI text generators are master bullshitters whose ideas can fly in the face of critical thinking. Which is good. Because thinking critically about them is our job. If anything this whole experiment is a stark reminder of the Bed of Aristotle and the importance of employing careful analysis and divine linen as thinking tools.