Context Continuity Problem
Guillermo Rauch argues that AI assistants and LLMs are currently limited by context discontinuity across applications: humans typically work sequentially and carry context from one task to the next, but today's tools drop that context at every app boundary.
Key Points:
Current AI limitations in everyday tools:
- Spell checkers don't remember terms you just used in other applications
  - Example: "Someone mentions 'v0' in Slack, then you type it in email and macOS suggests 'via' instead"
  - This happens even though you literally just saw and thought about that term
The power of context in AI:
- "Things get better with better context"
- Code autocompletion improved dramatically when given access to:
  - The whole project's codebase
  - Content from the clipboard (which often represents what's on your mind)
  - Related dependencies across files
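The autocomplete improvement described above amounts to assembling more relevant context into a single prompt. A minimal sketch of that idea, with a hypothetical function name and a made-up character budget (no real tool exposes exactly this interface):

```python
def build_autocomplete_context(current_file, clipboard, codebase, max_chars=4000):
    """Assemble a context window for a code-completion prompt.

    Prioritizes the file being edited, then the clipboard (often what the
    user is thinking about), then related files, stopping once the budget
    is exhausted. Purely illustrative, not any real product's API.
    """
    sections = [("Current file", current_file), ("Clipboard", clipboard)]
    sections += [(f"Related: {path}", text) for path, text in codebase.items()]

    prompt, used = [], 0
    for title, text in sections:
        chunk = f"### {title}\n{text}\n"
        if used + len(chunk) > max_chars:
            break  # budget exhausted; lower-priority context is dropped
        prompt.append(chunk)
        used += len(chunk)
    return "".join(prompt)
```

The ordering encodes the priority argument made above: what you are editing and what you just copied rank ahead of merely related files.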
The sequential nature of human work:
- "We typically actually tend to work sequentially"
- "I read an email that is about a problem and then I go with another app and I'm likely gonna discuss that problem"
- Current systems lose this context when switching between apps
The potential of AI with proper context:
- "If you just put the right things into the prompt, magic will happen without changing the actual sort of engine of intelligence"
- AI could understand what you're talking about, thinking about, and working on
- It would know how you talk, what you were discussing, and what plans you're trying to communicate
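The claim that "the right things in the prompt" unlock capability without changing the model can be illustrated with a hypothetical prompt builder: same model, same question, but preceded by what the user just read and wrote in other apps. The function name and event format are assumptions for the sketch:

```python
def contextual_prompt(question, recent_activity):
    """Prepend recent cross-app activity to a user question.

    recent_activity: list of (app, text) tuples, oldest first.
    Illustrative only; no real assistant exposes exactly this interface.
    """
    lines = ["Recent activity across the user's apps:"]
    for app, text in recent_activity:
        lines.append(f"- [{app}] {text}")
    lines.append(f"\nUser question: {question}")
    return "\n".join(lines)

activity = [
    ("Slack", "Teammate mentioned the v0 launch is slipping a week"),
    ("Email", "Draft reply to a customer asking about v0 pricing"),
]
print(contextual_prompt("What should I tell the customer?", activity))
```

With that preamble, a generic model can resolve "the customer" and "v0" without any fine-tuning, which is the point being made above.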
The vision for better AI assistants:
- "How can you ingest this series of apps and integrations and systems that people use to do their work and you connect the dots"
- AI should maintain context across applications like an "external memory"
- Similar in spirit to the Black Mirror episode "The Entire History of You"
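The "connect the dots" vision above could be sketched as a toy external memory that ingests events from any app and recalls them by keyword, preferring recent matches. The class, scoring scheme, and interface are all assumptions for illustration; a real system would use proper connectors and embedding-based retrieval:

```python
import time

class ExternalMemory:
    """Toy cross-app memory: ingest events from any app, recall by keyword."""

    def __init__(self):
        self.events = []  # list of (timestamp, app, text)

    def ingest(self, app, text, ts=None):
        """Record an event; ts defaults to the current time."""
        self.events.append((ts if ts is not None else time.time(), app, text))

    def recall(self, query, limit=3):
        """Return up to `limit` matching (app, text) pairs.

        Scores by word overlap with the query, breaking ties by
        recency (newest first), and drops non-matching events.
        """
        terms = set(query.lower().split())
        scored = [
            (len(terms & set(text.lower().split())), ts, app, text)
            for ts, app, text in self.events
        ]
        scored.sort(key=lambda s: (s[0], s[1]), reverse=True)
        return [(app, text) for score, ts, app, text in scored[:limit] if score > 0]
```

For example, after `mem.ingest("Slack", "v0 launch moved to Friday")`, a later `mem.recall("v0 launch")` in an email client could surface that Slack message, which is exactly the continuity the quote describes.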