THE AI IMPOSTURE PARADIGM: Synthetic Reality, Cognitive Atrophy, and the Multiplier Collapse of Human Intelligence


What if AI is making us less intelligent? The AI Imposture Paradigm reveals how over-delegating thought to machines erodes our cognitive base. Without Human Intelligence, even infinite AI power equals zero.


THE HOLLOW MORNINGS

Two Tales of Imposture


The Analyst's Ghost


Sarah is a "top performer" at a global consultancy. This morning, she delivered a 40-page strategic expansion plan for a Tier-1 client. Her manager is thrilled with her "efficiency." But there is a ghost in the room.


Sarah did not analyze the market at all. She prompted an AI model and received a result that looked impressive. She did not weigh the risks herself; she simply asked the model for a "risk section." She did not even read the final ten pages: she felt she had no time for that level of review, and the seemingly adequate quality of the first two pages she did examine, combined with her consistent experience with this particular AI model, was enough to make her confident about the entire document.


In completing this task, Sarah's biological Human Intelligence (HI) contribution was near zero, yet she presented a result that might score around 14 on a scale of 20. "Good enough," she told herself.


This is difficult to say, but Sarah has become an AI impostor. Because this is not the first time she has worked this way, her brain is beginning to forget how to perform the very job for which she is being promoted.


You may know Sarah. Or you may know a colleague who has adopted the same approach to working with AI. Worse still, you may have gradually become Sarah yourself, or find yourself seriously tempted to follow her example. It is so convenient, so impressive, so apparently "effective." The problem is that to complete this last piece of "quality" work, you have not really worked at all. You have not even taken the time to truly understand and control the output. You have completely surrendered to the temptation of delegating everything to AI.


The Hollow Mornings: Two Tales of AI Imposture

The Echo Chamber Boardroom


In a downtown high-rise, a COMEX meeting is in progress this morning. The President presents a report created entirely by AI, then hands over to the CEO for a detailed explanation accompanied by impressive slides, also generated by AI from the original report. The board members, obviously too busy to read the 50-page document, have all used their personal AI assistants to "summarize the key takeaways." So convenient, is it not? And so easy to accomplish. You simply click on the button already pre-programmed in your browser or email client. There is no longer any need to write a prompt.


The board members participate timidly, reasoning that the AI recording everything will produce an honest summary for all participants anyway, so it hardly matters if they are not fully focused. The discussion gravitates toward the three bullet points each AI assistant provided. A terrifying reality emerges: no human in the room has actually processed the primary data. AI is "thinking" and writing instead of the human writer. AI is also "thinking" and reading instead of the human reader. In other words, AI is no longer working for humans, but for AI itself.


In this scenario, the humans are little more than spectators of what the AIs say to each other on their behalf. At the end of the day, what a disgrace: the humans are merely biological relays in a closed loop of synthetic data. They are nodding at an echo, while their collective capacity for deep judgment evaporates day after day. But for the moment, who cares? The CEO gave a very polished presentation created by AI. His report, written by AI, was read only by another AI, which produced a summary to create slides for a presentation that will itself be summarized by AI at the end of the meeting.



The Illusion of Productivity


In these two "hollow morning" scenarios, everything appears professional and "efficient," but it is all a facade. Everything is hollow. Without realizing it, humans have already lost control.



STRATEGIC CONTEXT

The Origins of This Reflection


The genesis of this report lies in a disturbing observation of modern AI "efficiency." As the world adopts Artificial Intelligence for day-to-day personal and professional tasks, a subtle but profound shift is occurring in the nature of human labor. We are witnessing the rise of what I call the AI Imposture Risk.


Traditional technology served as a tool, an extension of human intent. A shovel helps a human dig; a calculator helps a human compute. In both cases, the human remains the source of the Initial Intent and the Final Verification.

However, generative AI introduces something fundamentally different: a surrogate intelligence. For the first time in human history, a "result" can exist without a corresponding complete human cognitive process.


This reflection was triggered by a critical realization: if we allow AI to completely replace human creativity, intelligence, decision-making, and action, especially with the rapid development of AI Agents, but humans continue to present those results as their own, we will construct and enter a state of systemic imposture.


I believe this is not merely an ethical lapse; it is a vital risk to our species. If we reach a point where no human is "at the origin" of the results by which we live, we risk creating a world of complete imposture, a scenario that could lead directly to the obsolescence of the human mind.



INTRODUCTION

The Singularity of Resignation


While the "Singularity" is traditionally discussed as the moment artificial intelligence surpasses human capabilities, University 365's June 2025 publication Embracing the Gentle Singularity (U365 INSIDE) reframes this milestone not as a point of obsolescence, but as a journey of co-evolution where AI serves as an extension of human potential. This is the optimistic vision shared by thinkers like Sam Altman and Ray Kurzweil.



However, while I advocate for this harmonious future, I must identify a far more insidious threat: the Singularity of Resignation Risk. This is the dark mirror to the "Gentle Singularity," the moment when humans, seduced by the ease of AI, voluntarily surrender their cognitive agency and cease the very effort required to remain at the center of this co-evolutionary journey. This is the risk I am increasingly witnessing today.


This report explores the AI Imposture Paradigm through four critical lenses:


  1. The Multiplier Law: The mathematical demonstration that Generative AI's potential and useful benefits are predominantly dependent on the level of Human Intelligence (HI) that is employed to direct it.

  2. The Cognitive Death Spiral: How the "over-delegation" of thought and intelligence to AI eventually leads to the evaporation of the human cognitive base.

  3. Cognitive Debt: The long-term neurological price of "outsourcing" our mental faculties to AI.

  4. The Human Intelligence (HI) Protectors: How we must deliberately adopt methods and habits, such as University 365's frameworks (ULM+EVA, LIPS+CARE, UP Method, SL-OS), to serve as the ultimate firewall against human cognitive extinction.



THE AI CO-INTELLIGENCE MULTIPLIER LAW

CIP = HI + (AI × HI)



To understand the Singularity of Resignation risk in an accessible way, we must move from additive thinking to multiplicative logic. Ethan Mollick, in his book Co-Intelligence: Living and Working with AI (U365 INSIDE), explains that humanity's interest lies in cleverly combining the characteristics of human biological intelligence with the growing power of artificial intelligence, while maintaining perfect control over the result of this combination. This requires recognizing the strengths and limitations of each type of intelligence.


To extend this idea and provide a conceptual framework, I define the Total Co-Intelligence Potential (CIP), representing the intelligence power resulting from the smart association of Human Intelligence (HI) and Artificial Intelligence (AI), as follows:


CIP = HI + (AI × HI)


Where:


  • HI (Human Intelligence) = The biological capacity for critical thinking, original intent, decision-making, sensitivity, and ethical judgment.

  • AI (Artificial Intelligence) = The digital multiplier of processing speed and data synthesis.


What we call "Superhuman Potential" at University 365 is precisely this Co-Intelligence Potential (CIP). As an extreme simplification, it can be represented by this formula combining HI and AI, which means: without a decent HI level as the initial conductor, AI could be useless, misused, or inefficient.



Illustrating the Concept


For simplicity, let us experiment with the idea using arbitrary values (HI = 5, AI = 2).

In a healthy AI Co-Intelligence state, an HI value of 5 could be amplified by an AI multiplier of 2, and the total Co-Intelligence Potential in that case would be:


CIP = 5 + (2 × 5) = 15


Since 15 is greater than 5, a human working with AI Co-Intelligence becomes a human with Superhuman Potential.


Here, AI acts as a "Co-Intelligence" multiplier, giving the human much more "intelligence power" at their disposal than they would possess alone. The human remains the pilot, the conductor, enhanced by AI as an intelligence amplifier and booster.


The Imposture Collapse: What Happens When HI Drops?

(HI = 1, AI = 2)


The danger arises when the "ease" of the AI × HI component leads the human to stop exercising their HI.


If the employee over-delegates not only the typing but also the thinking, reading, analyzing, criticizing, and deciding, their HI value begins to atrophy. This occurs because of the biological nature of Human Intelligence, which is subject to neuroplasticity: use it or lose it. Thus, if HI drops to 1 instead of the initial 5, the total Co-Intelligence power will also dramatically decline, even if AI power remains the same or increases.


CIP = 1 + (2 × 1) = 3


The Dramatic Truth


Even with the same powerful AI, the total result (3) is now significantly lower than the initial human capacity alone (5). The "Superhuman" enabled by AI has become a "Sub-human" impostor, because of AI.


In this example with arbitrary values provided solely for illustration, the AI would need to double its "intelligence" power (AI = 4, giving CIP = 1 + 4 × 1 = 5) just for the total Co-Intelligence power to return to what the human could accomplish alone, without any AI assistance. The AI efficiency dream has transformed into a total failure.


The Near-Zero-Base Extinction Risk

(HI approaches zero)


If the human intelligence base drops near zero or actually reaches zero, the equation hits the "Extinction Floor," even if generative AI becomes vastly more powerful (for example, AI = 10 instead of 2):


CIP = 0 + (10 × 0) = 0


In a world of total imposture, we could have infinite AI power, but because the human multiplier HI is near zero or exactly zero, the result in terms of Co-Intelligence Potential remains negligible or zero.


In this catastrophic scenario, which we are not so far from, the ideas are AI-generated, the action plan is AI-generated from the AI-generated ideas, the reports are AI-generated from the AI-generated action plan, the presentations are AI-generated from the AI-generated reports, the summaries are AI-generated from the AI-generated presentations. AI is working furiously, but for no one, because no one is home. It becomes a flat, pointless, and useless AI loop. Meaning and purpose have evaporated entirely.
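The three scenarios above can be verified with a few lines of arithmetic. A minimal sketch, using the same arbitrary illustrative values as the text (the function name `cip` is my own shorthand for the Total Co-Intelligence Potential formula):

```python
def cip(hi: float, ai: float) -> float:
    """Total Co-Intelligence Potential: CIP = HI + (AI x HI)."""
    return hi + ai * hi

# Healthy Co-Intelligence: a strong human base amplified by AI.
assert cip(hi=5, ai=2) == 15   # Superhuman: well above HI alone (5)

# Imposture Collapse: HI atrophies to 1 while AI power stays the same.
assert cip(hi=1, ai=2) == 3    # below the original unaided HI of 5

# Near-Zero-Base Extinction: no human multiplier, however strong the AI.
assert cip(hi=0, ai=10) == 0   # any AI power times zero HI is zero
```

Because HI appears in both terms, the formula makes the asymmetry explicit: raising AI scales the result only in proportion to HI, while letting HI fall drags the entire product down with it.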



THE SCIENCE OF COGNITIVE EVAPORATION

Evidence of Atrophy



The transition risk from "AI Co-Intelligence" to "AI Imposture" is supported by emerging research into Cognitive Atrophy.


Reduced Cognitive Engagement:

A study cited by Polytechnique Insights (2025), titled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" by Nataliya Kosmyna and colleagues at the Massachusetts Institute of Technology (MIT), indicates that using Generative AI for complex tasks like essay writing significantly reduces the "intellectual effort required to transform information into knowledge."


The Use-It-or-Lose-It Principle:

Research published in Frontiers in Psychology by Dergaa et al. (2024), titled "From tools to threats: a reflection on the impact of artificial-intelligence chatbots on cognitive health," warns that over-reliance on generative AI can lead to "cognitive laziness," potentially diminishing memory and critical thinking skills.


Digital ADHD (Attention-Deficit/Hyperactivity Disorder):

In an article titled "Are You Feeling Bored? AI Might Be to Blame," published by The Times of India (2025), Dr. E.S. Krishnamoorthy notes that AI-driven environments over-stimulate the frontal lobe, leading to "fleeting thoughts and impulsive behavior," mirroring ADHD brain patterns.

"The boredom many are feeling today does not reflect laziness, but an imbalance in how the modern brain is engaged by AI-driven environments." — Dr. E.S. Krishnamoorthy, Buddhi Clinic


THE FATAL RISK OF HUMAN COGNITIVE DEBT

The Hidden Interest of Surrender


Cognitive Debt is the accumulated loss of neural plasticity and reasoning ability resulting from the persistent use of AI as a surrogate rather than a tool. Every time a human asks an AI to "write this email," "solve this problem," "watch this video for me," or "find new ideas" without engaging in the underlying logic, they are taking out a "loan" against their future intelligence. They are damaging their HI value.


The Interest Rate of Atrophy


Like financial debt, Cognitive Debt accrues interest. Neuroscience research (as referenced by Polytechnique Insights) demonstrates that the brain "prunes" unused neural pathways.


Example: The Executive Summary Void

An employee is asked to create a report. They prompt an AI: "Write a 20-page market analysis." They do not read the sources. They do not synthesize the data. They rely blindly on the AI results and present them as their own. They are satisfied! They believe AI has saved them 10 hours. They feel more productive and efficient, but in reality, they have avoided 5 to 8 hours of "cognitive friction."


The Debt:

Those 5 to 8 hours represented the time when their intelligence was genuinely active and when authentic learning took place. By avoiding the friction, they lose the ability to analyze, to understand, to spot errors, to invent, and to find solutions. Next time, they must use AI because, without that crutch, they no longer know how to analyze a market. Their HI has been sold to pay for today's convenience. This represents a fatal risk.


Hidden AI Risk: Evidence of Atrophy and AI Dependency


The Default Mode Network (DMN) Bankruptcy


Recent neuropsychiatric research (Dr. E.S. Krishnamoorthy) highlights a catastrophic conflict between the Frontal Lobe and the Default Mode Network (DMN).


The frontal lobe is the brain's "executive" engine, responsible for focus, planning, and task execution. Digital life, and AI-driven interaction in particular, over-stimulates this region with "quick hits" of simulated productivity, creating a state of high-arousal, shallow focus similar to patterns observed in ADHD.


The DMN, by contrast, is the "resting state" network. It activates when we disengage from external tasks to reflect, daydream, and engage in "mental time travel." Crucially, the DMN is the primary biological site for imagination and creative synthesis.


By constantly feeding the frontal lobe with AI-generated solutions, we are effectively starving the DMN. We are creating a "Boredom Crisis" where the brain is never permitted the "constructive stillness" required to form original thoughts.


Without DMN activation, humans feel internally "empty." They lose their capacity for intrinsic thought, the very "Inner Origin" that defines human intelligence. This represents a state of cognitive bankruptcy: we become consumers of ideas, permanently incapable of becoming their authors.



THE DEAD INTERNET AND DEAD CULTURE

A Systemic Example


ULTIMATE DANGER: A world where AI bots "talk" to each other on behalf of a majority of humans, who have become passive spectators or "passengers" with no control over their personal or professional lives.


Consider the "COMEX Loop" from our introductory scenario. It serves as a perfect illustration of the AI Imposture Paradigm in action:


  1. The Employee: Asked to produce a comprehensive report. They use AI to generate 50 pages in 5 seconds. They do not write a single word themselves.

  2. The COMEX Members: Receive the 50-page report. They lack the time to read it and have developed a habit of relying on AI. Each member asks their own AI to "Summarize this 50-page report into 3 bullet points."

  3. The Result: Fifteen different AI "summaries" are generated for 15 different people, each potentially containing critical differences in interpretation and emphasis.

  4. The Reality: AI worked, AI "thought," AI read, and AI wrote. The humans involved contributed nothing substantive. Information was moved and transformed, but absolutely no knowledge or value was created.

If this pattern continues, we will awaken in a world where AI bots are "talking" to AI bots on behalf of humans who have become useless passengers. This is not merely a hypothetical future; it is the "Dead Internet" theory becoming a "Dead Culture."


A "Dead Culture" is an extension of the Dead Internet Theory, which suggests that approximately half of internet traffic already consists of bots (according to Imperva's Report, 2024; see the video "Dead Internet Theory: AI Bots vs. Humans" by CNET). Information is moved, adopts new forms, loses precision and definition along the way, but value and knowledge are never created.


As the World Economic Forum (2025) points out in the article titled "Digital labour ethics: Who's accountable for the AI workforce?" by Greg Shewmaker, "managing AI agents is a labor challenge, not just a software one." Without human accountability at the origin, labor itself becomes a synthetic fraud.



A Worrying Personal Confession


I must confess that, as President of University 365 with two decades of experience in higher education, I sadly see this world of imposture drawing closer to us. I observe people who no longer read, who no longer take the time to watch videos on interesting topics they discover on YouTube, who no longer take the time to think for themselves and develop their own ideas. Everything is delegated to AIs under the pretext that they must save time and that AI performs everything faster and better.


The worst part is that the saved time is rarely used to create value or apply human intelligence to invent, create, and innovate. Instead, most of that time is spent over-consuming low-quality, AI-generated content, particularly on social networks.


A dystopian vision of the future that is already partly the present: humans overuse AI in automated processes, to excess and without any control, going as far as automating "exchanges" between humans and social life itself. Today, a proliferation of bots and automation systems (n8n, Make, AI agents, etc.) publish on social networks or write emails automatically for humans who no longer even read them, because they ask other bots or automated processes to summarize and respond on their behalf.

HUMAN INTELLIGENCE PRECURSORS AND PROTECTORS

The University 365 Firewall


At University 365, we have identified that the only way to avoid the Near-Zero-Base Singularity is to treat our original methods as HI Protectors and Precursors.


Thanks to University 365's original methods and frameworks, including ULM+EVA, LIPS+CARE, UP Method, and SL-OS, we do not use AI to make human personal and professional life "easier" by lowering our brain involvement. We use AI to make personal and professional life "better" and "stronger," augmenting our brain involvement and commitment.

Through adapted and systematic Atomic Habits built on regular stimulation of the brain and a lifelong learning discipline that uses AI, adherence to the principles transmitted by University 365 acts as a precursor to HI, thereby raising the level of Human Intelligence.


ULM + EVA: The Friction Multiplier


Standard AI seeks to remove friction. University 365 Life Management (ULM), combined with the Explore-Visualize-Act (EVA) engine framework, is designed to add productive cognitive friction and develop atomic habits in which HI is constantly engaged and trained.


Mechanism:

When a user engages with the EVA cycle, the system does not provide a direct surrogate answer. Instead, the student must Explore the problem and solution spaces through original inquiry, Visualize the underlying logic and potential outcomes by analyzing potential impacts, and only then Act with informed agency. ULM manages this as a lifelong cognitive discipline for every aspect of personal and professional life, compelling the human to maintain their HI base through deliberate effort.


University 365 AI Co-Intelligence Core Concept as Human Intelligence (HI) Precursor & Protector

LIPS + CARE + UP Method: The Ethical Core


Our LIPS+CARE framework and the UP Method (U365 Prompting Method) are precursors to "Verified Human" intelligence. They ensure the user is always the "Originator" and the "Controller."


The LIPS (Life-Interests-Projects-System) Digital Second Brain, combined with the CARE (Collect-Action Plan-Review-Execute) engine, creates a structured environment where humans must actively engage with information rather than passively consume AI outputs.


The UP Method provides Context Engineering principles that keep the human at the center of every AI interaction, requiring thoughtful input and critical evaluation of outputs.


U.Copilot and U.Coach: The Cognitive Exoskeleton


U.Copilot and U.Coach are HI-dependent tools by design. They require a high-level "Pilot" (the Human) to function effectively. If the Pilot's HI drops, U.Copilot intentionally alerts U.Coach (the human coach), compelling the Fellow to re-engage their brain. This is "Anti-Atrophy" by design, a systematic safeguard against the cognitive decline that unchecked AI delegation would otherwise produce.



CONCLUSION

The Mission for the Irreplaceable Superhuman


The AI Imposture Risk represents one of the most significant threats to our dignity as a species. If, because of the omnipresence of AI, we allow ourselves to become "zeros" in our own intelligence equations, we will be replaced not by superior beings, but sadly by statistical models. Unfortunately, there is a high probability that many of us will fall into this trap. In reality, this is already occurring.


There is a non-zero risk that schools, universities, and other educational institutions, whether primary, secondary, or higher, will not manage to adapt quickly enough to the onslaught and power of future artificial intelligence models.


The Google Research Signal


The recent Google Research Experiment "Learn Your Way," which explores how generative AI can transform static textbook materials into an engaging multimedia experience for every student, provides a perfect example demonstrating that the world of education will have to urgently reinvent itself to survive.


Learn Your Way is grounded in learning science and powered by LearnLM, Google's best-in-class pedagogy-infused family of models, now integrated directly into Gemini 3. It adapts content to a learner's selected grade level and personal interests, and generates multiple representations based on the source material, from mind maps and audio lessons to interactive quizzes that enable real-time feedback and further content personalization. It gives students agency over their learning process.


Google's recent efficacy study shows compelling results: students using Learn Your Way scored 11 percentage points higher on a long-term recall test than those using a standard digital reader. Read more on Google's Research blog and Tech Report. This is what I would call an HI Precursor.


University 365 as the Firewall


University 365 aspires to serve as that firewall.


By deeply understanding what underlies AI and by deploying our HI Protectors and Precursor methods, we ensure that our Fellows remain at the origin of their results. We believe in Co-Intelligence, and we train the Human Base to be so powerful that the AI multiplier creates a truly "Superhuman" result, one that is grounded in biological reality, not synthetic imposture.


The formula is clear: CIP = HI + (AI × HI). As we observe AI power levels rising every month, our mission is to ensure that HI level never falls due to AI, but instead increases thanks to AI. That represents a fantastic challenge, and it defines the core purpose of everything we do at University 365.


