Hidden context in ChatGPT Projects: how to stay in control

Using ChatGPT Projects without understanding “hidden context” can quietly distort answers, leak assumptions between chats, and put your privacy at risk. This INSIDE Lecture shows you, step by step, how project memory really works, when chats do (and don’t) share context, and which settings to switch off if you want fully predictable, reproducible results with multiple AI tools.



Learn how ChatGPT’s Project memory really works, how it differs from per‑chat and account‑wide memory, and how to avoid hidden context, hallucinations, and privacy risks. Get clear hygiene rules, reset routines, and a comparison with Claude Projects and Perplexity Spaces to keep your AI workflows reliable and under control.


Introduction


Large language models feel “magical” when they remember your work: your name, situation, projects, preferences, and goals. But that same memory can quietly bend answers in ways you never see.


OpenAI’s ChatGPT Projects now add a new layer of memory that can draw on conversations within the same project, forming a shared context pool that is separate from your general chats and other projects.


For many users, this is great for long‑running work, but it also introduces hidden dependencies that can break truthfulness, reproducibility, and privacy if you don’t know how it behaves.


Similar “Projects” features now exist in Anthropic Claude Projects and Perplexity Spaces, each with different trade‑offs between convenience and control. Understanding these differences today will help you design safer, more predictable AI workflows in the future, especially if you rely on AI as a serious thinking partner for research, client work, or product development.



Why it’s important


This lecture targets power users, professionals, students, and teams who use ChatGPT (and sometimes Claude or Perplexity; a comparable “Projects” feature doesn’t exist yet in Google Gemini) for recurring projects, client work, or personal knowledge management.


For them, the practical benefits of mastering context are clear:


  • More reliable answers, because you know exactly which information the model can see.

  • Less accidental leakage of private notes from one thread into another.

  • Easier debugging of “weird” responses, because you understand which memory layer might be responsible.

By the end of this lecture, you will:

  • Distinguish per‑chat, project‑level, and account‑wide memory in ChatGPT.

  • Apply concrete hygiene rules to keep context under control.

  • Decide when to start a new project vs. a new chat.

  • Compare ChatGPT Projects with Claude Projects and Perplexity Spaces to choose the right tool for each workflow.



Overview - Key takeaways


  • ChatGPT Projects add a shared context pool across chats within one project, but only for that project; they do not see other projects or general chats when using project‑only memory.

  • Account‑wide memory and “reference chat history” settings can silently influence responses unless you explicitly turn them off.

  • Hidden context can cause context poisoning, outdated assumptions, and hallucinations delivered in a very confident tone.

  • You can regain strong control by turning off global personalization, disabling references to saved memories and history, and moving most “facts” into project files instead of relying on old chats.

  • Different tools (ChatGPT Projects, Claude Projects, Perplexity Spaces) make different choices about cross‑thread memory, which changes how you should design your workflows.



Per‑chat context vs. account memory vs. project memory


In ChatGPT, per‑chat context is the temporary history inside a single conversation: the model “sees” previous turns until the context window fills, after which older parts are truncated.


Account‑wide memory is a separate feature: when enabled, ChatGPT can store structured “memories” (like your preferences) that may be used across chats, unless you clear or disable them.


Project memory adds a third layer: when project‑only memory is enabled, chats in that project can reference conversations within the same project but do not see your other projects or general ChatGPT chats. This creates a self‑contained context pool, which is powerful but easy to misinterpret if you assume each chat is isolated.
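
To make the three layers concrete, below is a minimal Python sketch of the visibility rules described above. It is a conceptual model only, not OpenAI’s implementation; the class, flag, and function names are illustrative.

    # Conceptual model (not OpenAI's actual implementation) of which prior
    # conversations a new chat can "see" under each memory setting.
    from dataclasses import dataclass

    @dataclass
    class Chat:
        chat_id: str
        project_id: str | None  # None = a general (non-project) chat

    def visible_history(new_chat: Chat, all_chats: list[Chat],
                        project_only_memory: bool,
                        reference_chat_history: bool) -> list[Chat]:
        """Return prior chats whose content may influence `new_chat`."""
        visible = []
        for chat in all_chats:
            if chat.chat_id == new_chat.chat_id:
                continue  # skip the current conversation itself
            if new_chat.project_id is not None and project_only_memory:
                # Project-only memory: same-project chats only.
                if chat.project_id == new_chat.project_id:
                    visible.append(chat)
            elif reference_chat_history:
                # Account-wide history: any chat may silently leak in.
                visible.append(chat)
            # Otherwise nothing is visible: per-chat context only.
        return visible

Under this model, the only thing deciding what a project chat can see is its project ID, which is exactly why curating what enters a project matters so much.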



Why this matters for truthfulness and reproducibility


When facts live only in old chats and vague memories, you cannot easily see which version the model is using, so it may repeat outdated numbers or assumptions with strong confidence.


For reproducibility, two identical prompts can yield different answers if one is asked in a “clean” environment and the other inside a project with rich, slightly wrong prior discussions.


This is particularly dangerous for reports, code, or policies where a small contextual shift can change the outcome.
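
You can demonstrate this drift with the OpenAI API: send the same prompt once with an empty history (a “clean” environment) and once after seeding a wrong prior claim, the way a polluted project would. This is a minimal sketch using the `openai` Python package; the model name and the seeded rate‑limit fact are assumptions for illustration.

    # Reproducibility check: same prompt, with and without hidden prior context.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    PROMPT = "What is our API rate limit, and how should clients handle it?"

    def ask(history: list[dict]) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=history + [{"role": "user", "content": PROMPT}],
        )
        return resp.choices[0].message.content

    clean = ask(history=[])  # no prior context at all
    polluted = ask(history=[  # simulates an earlier chat with a wrong "fact"
        {"role": "user", "content": "Note for later: our API rate limit is 100 req/s."},
        {"role": "assistant", "content": "Noted: 100 req/s."},
    ])
    print(clean == polluted)  # usually False: the seeded context shifts the answer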



Hidden risks: what can go wrong


Context poisoning


Context poisoning occurs when incorrect or low‑quality information inside a project becomes the reference point for later answers. For example, if an early chat in a project contains a wrong API limit or a misunderstood legal rule, later chats may repeat it because the project context “anchors” the model on that earlier statement.


Because ChatGPT is optimized to be helpful and consistent, it may reinforce these mistakes instead of challenging them unless you explicitly ask for fresh checking against external sources.



Outdated assumptions that linger


Project memory can preserve assumptions about your organization, product, or requirements that were once true but are now obsolete; unless you correct or clear them, they may keep reappearing months later in that project.


This is common in long‑running product or research projects: names, pricing, roles, or constraints are updated in reality, but the model still “remembers” the older version as authoritative.



Private notes influencing other threads


Inside a single project, different chats can influence each other when project memory is active, because the model can draw on prior conversations in that project, even if you are in a fresh thread.


This means private reflections, early brainstorming, or test personas in one chat may subtly influence tone, assumptions, or recommendations in another, which can feel like “leaking” between threads even though it is technically constrained to that project.



Practical application: when to start a project vs. a chat

Fresh project vs. new chat inside a project


Start a fresh project when:

  • You are working on a clearly separate client, company, or life domain (e.g., “Client A Marketing,” “Personal Journal,” “Course Design”).

  • You need strong isolation so that context from one area cannot influence another, especially for confidentiality or regulatory reasons.

  • You want project‑specific instructions and files that would be irrelevant or risky elsewhere.

Start a new chat inside an existing project when:

  • You are still in the same domain and want continuity of vocabulary, decisions, and style.

  • You are splitting tasks (e.g., “write documentation,” “refactor code,” “prepare presentation”) but all depend on the same shared project knowledge.



Hygiene rules for “total control” over context


To maximize control and reproducibility, especially if you use multiple personas or projects, apply these rules:


  • Avoid account‑level “custom instructions” (Settings → Personalization) for critical workspaces. This prevents a global persona from silently tilting all outputs.

  • Keep the base style and tone at their defaults. Then define tone in each project’s instructions or in your first message in a chat, so you can reproduce it later with the same prompt.

  • Delete all saved memories and disable them:

    • Turn Memory off or clear all saved memories in Settings.

    • Uncheck “Reference saved memories” if this option exists for project‑only memory to ensure the project does not see prior global memories.

  • Uncheck “Reference chat history” (or similar) so new chats don’t automatically draw on non‑project history unless you explicitly enable project‑only memory.

With these settings, you get closer to a “stateless” model plus explicit project files and instructions, which is easier to reason about and audit.
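
In code, that “stateless plus explicit files” pattern looks like the following minimal sketch (again using the `openai` package; the file names, prompt wording, and model name are placeholder assumptions). Every call rebuilds its full context from artifacts you can read, diff, and version.

    # Stateless pattern: the answer is a pure function of files you control.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    def stateless_ask(task: str) -> str:
        instructions = Path("project_instructions.md").read_text()   # role, scope, tone
        source_of_truth = Path("source_of_truth.md").read_text()     # current facts
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[
                {"role": "system", "content": instructions},
                {"role": "user",
                 "content": f"Reference material:\n{source_of_truth}\n\nTask: {task}"},
            ],
        )
        return resp.choices[0].message.content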



How to “reset” and trim project memory


Over time, every serious project accumulates old decisions, mistakes, and half‑truths; resetting is essential.

Practical reset routine (suggested workflow):

  1. Define the source of truth in files. Move authoritative facts (e.g., current pricing, architecture diagrams, policies) into uploaded documents or a single “Source of Truth” note in the project.

  2. Periodically archive or delete old chats. When a phase is done, you can export, summarize, and then delete or ignore older threads so they no longer shape project memory.

  3. Explicitly correct outdated assumptions. Start a maintenance chat in the project (a sketch of such a correction message follows this list) where you:

    • List outdated facts.

    • Provide updated versions in a structured way.

    • Instruct ChatGPT: “From now on, treat this updated file as the definitive source and ignore previous numbers/assumptions unless explicitly asked to compare.”

  4. When in doubt, start a new project. If a project has become heavily polluted with wrong assumptions or sensitive notes, it is often safer to create a clean project and import only curated, up‑to‑date files.
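
To illustrate step 3, a few lines of Python can turn a list of corrections into a structured maintenance message to paste into that chat; the facts below are made‑up placeholders.

    # Build a maintenance-chat message from (outdated, updated) fact pairs.
    corrections = [
        ("Pricing is $29/month", "Pricing is $39/month as of 2025"),
        ("The backend runs on Heroku", "The backend now runs on AWS ECS"),
    ]

    lines = ["The following facts from earlier chats in this project are outdated:"]
    for old, new in corrections:
        lines.append(f"- OUTDATED: {old}\n  UPDATED: {new}")
    lines.append(
        "From now on, treat the updated versions as definitive and ignore the "
        "previous numbers/assumptions unless explicitly asked to compare."
    )
    print("\n".join(lines))  # paste the result into a maintenance chat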



How‑to: safe, reproducible setup (step‑by‑step)


Use this concrete workflow if you want strong control over context when working with multiple projects or personas:


  1. Clean global environment

    • Go to Settings → Memory, clear all memories and turn memory off (or at least disable it for serious work).

    • In Settings → Personalization, remove or simplify General custom instructions; keep only neutral defaults or nothing.

  2. Create a new Project with explicit instructions

    • Create “Project – Client A” (or similar).

    • In the project settings, add concise instructions (scope, role, tone) and upload core reference files instead of relying on prior chats.

  3. Choose appropriate project memory mode

    • If available, select project‑only memory so the project does not draw from your general ChatGPT history.

    • Ensure Reference saved memories and Reference chat history options are configured exactly as you want; for tight control, keep them off or limited to this project.

  4. Start the first chat with a clear “initialization prompt” (or, even better, use the UP Method, the University 365 Prompting Method). In your first message, restate (a sketch of such a prompt follows this list):

    • Who you are.

    • What this project is about.

    • Which files are the single source of truth.

    • How you want the assistant to handle uncertainty (e.g., “Ask before assuming; if facts conflict, prefer the latest PDF named X”).

  5. For each new task, open a new chat in the same project. Reference the same files and instructions rather than relying on the assistant to “remember everything”; this reduces accidental drift.

  6. Regularly export or document key decisions. Maintain a “Decisions & Standards” document in the project and update it, so future chats do not depend on digging through long conversational history.
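
For step 4, here is a minimal sketch of how such an initialization prompt can be assembled; every name and file reference is a placeholder to adapt to your own project.

    # Assemble an initialization prompt from explicit, auditable parts.
    def init_prompt(who: str, project: str, truth_files: list[str],
                    uncertainty_rule: str) -> str:
        return (
            f"Who I am: {who}\n"
            f"What this project is about: {project}\n"
            f"Single source of truth: {', '.join(truth_files)}\n"
            f"Handling uncertainty: {uncertainty_rule}"
        )

    print(init_prompt(
        who="Marketing lead for Client A",
        project="Q3 campaign planning for Client A",
        truth_files=["client_a_brief_v3.pdf", "pricing_2025.md"],
        uncertainty_rule=("Ask before assuming; if facts conflict, "
                          "prefer the latest PDF named client_a_brief_v3.pdf."),
    ))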



Comparison: ChatGPT Projects, Claude Projects, Perplexity Spaces


Different platforms implement context and memory differently, which affects how you should manage them.


Context and memory behaviors

ChatGPT Projects

  • Cross‑thread dialog memory: chats can reference other conversations within the same project when project‑only memory is enabled; they do not see other projects or general chats.

  • Files / instructions scope: project‑level files and instructions; can also use (or avoid) account‑wide memory depending on settings.

  • Hidden context risk profile: medium–high. Powerful continuity, but risk of context poisoning inside the project if not curated.

Claude Projects

  • Cross‑thread dialog memory: a project can ground responses on shared project resources, but cross‑thread memory depends on how you manage context and tools; designed for long‑running agents.

  • Files / instructions scope: strong project‑level context (files, tools, codebases); memory tools preserve key insights beyond single prompts.

  • Hidden context risk profile: medium. Great for large codebases and research; needs disciplined pruning of context edits.

Perplexity Spaces

  • Cross‑thread dialog memory: threads are generally thread‑scoped; the current design focuses on each thread’s conversation, not automatic cross‑thread dialog sharing.

  • Files / instructions scope: space‑level organization with shared files and instructions, but no automatic cross‑thread memory of past dialog (as of current docs).

  • Hidden context risk profile: lower for cross‑thread contamination; higher need to restate or re‑attach context per thread.
In practice, ChatGPT Projects and Claude Projects trade some isolation for long‑running continuity, while Perplexity Spaces keep context more tightly bound to each thread and rely more on explicit files and instructions.



Interactive reflections


Reflection questions

  1. In your current AI workflows, where might hidden context already be influencing answers without you noticing (e.g., past brainstorming, outdated specs, personal notes)?

  2. Which projects in your life (clients, research, personal journaling) absolutely require strict separation of context for privacy or compliance reasons?

Quick practice exercise (5–10 minutes)

  • Pick one existing ChatGPT project.

  • List the last 5 “facts” the assistant used that matter (e.g., a metric, a constraint, a policy).

  • Verify each one against an external, authoritative source or your own current documentation.

  • Mark any that are outdated or wrong, and then run a correction chat inside that project to update or override them.

Mini‑project suggestion: over the next week, design a “clean context” workflow for one real use case (e.g., “Client A marketing,” “Thesis writing,” or “Personal productivity system”):

  • Create a dedicated project/space in your preferred tool.

  • Turn off or strictly limit global memories and custom instructions.

  • Build a single “Source of Truth” file and a “Decisions & Standards” file.

  • Document a simple checklist you run every time you start a new chat (which project, which files, which initialization prompt).

  • At the end of the week, evaluate: did answers feel more consistent, auditable, and trustworthy?



Conclusion and next steps


Hidden context in ChatGPT Projects is not a bug but a design choice, and once you see it, you can use it deliberately instead of being surprised by it.


With a few configuration changes (disabling broad memories, keeping instructions local to projects) and simple hygiene rules (curated files, periodic resets, careful project boundaries), you can dramatically improve the reliability and safety of your AI workflows.


As a next step, choose one critical workflow and migrate it into a carefully configured ChatGPT Project (or Claude Project / Perplexity Space) following the steps above, then iterate your process based on what works.


For deeper learning, consult the official help resources on ChatGPT Memory and Projects, Anthropic’s documentation on Projects and context management, and Perplexity’s Memory and Spaces help center pages.


To go further, apply to University 365 and become a DISCOVERY, INSIDER, or SUPERHUMAN Fellow to gain more insights. If you’re an INSIDER or SUPERHUMAN Fellow, book a U.Coach session to learn even more about how to stay in control of every AI with the UP Method.
