
Extending Claude Sessions with NotebookLM Integration

Updated 2026-04-11


Are you sick of reading "Claude usage limit reached. Your limit will reset at 7pm"? Here are four workflows that integrate Claude Code with NotebookLM to work around those limits by offloading heavy document analysis to Google.

The Problem

Claude's session amnesia costs you tokens: re-loading context every session can leave you with only 30-45 minutes of real productivity a day. Whatever your plan, every piece of context burns tokens:

  • Pro plan ($20/month): Hit limits fast
  • Max ($100-$200/month): More runway but heavy research still drains it
  • API: Every token is metered

Want Claude to analyse 30 documents, cross-reference the findings, and produce a report? That's an expensive afternoon.

The Solution: Teng Ling's notebooklm-py

Developer Teng Ling reverse-engineered NotebookLM's internal protocols and published an open-source CLI tool called notebooklm-py. It lets you control NotebookLM entirely from the terminal:

  • Create notebooks
  • Upload sources
  • Run queries
  • Generate slide decks, podcasts, flashcards

Combined with Claude Code's skill system, you get an AI coding agent with far larger research capacity and persistent memory across sessions.

Setting Up the Bridge

Requirements: Python 3.10+, Google account, terminal (macOS/Linux/Windows)

# Install notebooklm-py
pip install notebooklm-py

# Login
notebooklm login

Repository: teng-lin/notebooklm-py
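Before going further, it's worth confirming the environment. A minimal sanity-check sketch (it only reports; nothing is changed beyond writing env-check.txt):

```shell
#!/bin/sh
# Environment sanity check: reports whether Python is 3.10+ and whether the
# notebooklm CLI is on PATH. Results are written to env-check.txt.
if command -v python3 >/dev/null 2>&1; then
  python3 -c 'import sys; print("python>=3.10:", sys.version_info >= (3, 10))' > env-check.txt
else
  echo "python>=3.10: unknown (python3 not found)" > env-check.txt
fi
if command -v notebooklm >/dev/null 2>&1; then
  echo "notebooklm: found" >> env-check.txt
else
  echo "notebooklm: not found (pip install notebooklm-py)" >> env-check.txt
fi
cat env-check.txt
```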

Teaching Claude Code How to Use NotebookLM

Installing the NotebookLM Skill

# Install skill for Claude Code
notebooklm skill install

# Check status
notebooklm skill status

This deploys to:

  • ~/.claude/skills/notebooklm/ (Claude Code)
  • ~/.agents/skills/notebooklm/ (compatible agents like Codex)

Once installed, Claude understands how to create notebooks, upload sources, run queries, and generate outputs through the CLI.

How Claude Decides to Use a Skill

Every skill has a description in its header. Claude reads all available descriptions at startup and matches them to your request. You can also invoke directly: /notebooklm
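Concretely, that description lives in YAML frontmatter at the top of the skill's SKILL.md. A hypothetical header for the NotebookLM skill might look like:

```markdown
---
name: notebooklm
description: Create NotebookLM notebooks, upload sources, run queries, and
  generate outputs via the notebooklm CLI. Use when the user asks for
  document research, audio overviews, or notebook management.
---
```

The description is what Claude matches against your request, so write it as a trigger condition, not a slogan.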

Building Custom Skills

Invoke /skill-creator in Claude Code and it interviews you about what you want, generates the full SKILL.md, runs automated test prompts, and packages the result.

Four Workflows

Workflow A: Zero-Token Research

Problem: Analysing 30+ documents locally obliterates your token budget.

Fix: Claude orchestrates. NotebookLM does the processing. For free.

Steps:

  1. Gather sources - PDFs, web articles, YouTube transcripts
  2. Create notebook:
    notebooklm create "My Research Project"
    
  3. Upload everything:
    notebooklm source add "./transcript-1.md"
    notebooklm source add "https://example.com/article"
    notebooklm source add "./report.pdf"
    
    (Up to 50 sources on free tier)
  4. Query NotebookLM:
    notebooklm ask "what are the three most important themes across all sources?"
    
  5. Generate deliverables:
    notebooklm generate slide-deck
    notebooklm generate flashcards --quantity more
    notebooklm generate mind-map
    notebooklm generate data-table "compare key concepts"
    notebooklm generate audio "make it engaging" --wait
    
  6. Claude polishes - The only part that uses Claude tokens

The math: Expensive analytical work happens on Google's infrastructure. Claude's tokens are reserved for orchestration and final editing.
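The six steps can be strung together in one script. Below is a dry-run sketch, assuming your documents sit in a hypothetical ./sources directory: it writes each planned notebooklm command to plan.sh for review instead of executing it, so nothing touches your notebook until you run plan.sh yourself.

```shell
#!/bin/sh
# Dry-run of Workflow A: writes every planned notebooklm command to plan.sh
# instead of executing it. Inspect plan.sh, then run `sh plan.sh` for real.
mkdir -p sources                 # hypothetical location of your documents
: > plan.sh
echo 'notebooklm create "My Research Project"' >> plan.sh
for f in sources/*; do
  [ -e "$f" ] || continue        # skip when the directory is empty
  printf 'notebooklm source add "%s"\n' "$f" >> plan.sh
done
echo 'notebooklm ask "what are the three most important themes across all sources?"' >> plan.sh
echo 'notebooklm generate slide-deck' >> plan.sh
cat plan.sh
```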

Workflow B: Building Expert AI Agents from Web Research

Problem: Vague prompts produce vague agents.

Fix: Use NotebookLM's Deep Research to autonomously gather expert knowledge, then structure into a deployable Claude Code skill.

Steps:

  1. Run Deep Research in NotebookLM - Select "web" source type, enter specific query
  2. Structure output using DBS framework:
    • Direction = step-by-step logic, decision trees, error recovery → core of SKILL.md
    • Blueprints = static reference material, templates, voice guidelines → supporting files
    • Solutions = deterministic code tasks → bundled scripts
  3. Feed to skill-creator - Copy DBS output, paste into Claude Code, invoke /skill-creator
  4. Test and deploy - Skill-creator stress-tests with generated prompts

Result: Vague concept to working, expert-level AI agent in minutes.
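As an illustration, here is a hypothetical skeleton showing how DBS output might map onto a skill's files (every name below is invented for the example):

```markdown
---
name: market-analyst
description: Expert research agent distilled from Deep Research output.
---

# Direction (core of SKILL.md)
1. Clarify the user's question and scope.
2. Query the research notebook: `notebooklm ask "<question>"`.
3. If the answer cites no sources, broaden the query and retry once.

# Blueprints (supporting files)
- references/voice-guidelines.md - tone and formatting rules
- references/report-template.md - static report skeleton

# Solutions (bundled scripts)
- scripts/export-table.sh - deterministic data-table export
```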

Workflow C: Persistent Memory Across Sessions

Problem: You spend three hours teaching Claude your preferences, close the terminal, and it's all gone.

Fix: Build a "wrap-up" ritual that extracts session learnings and stores them in a persistent NotebookLM notebook.

Steps:

  1. Install /wrap-up skill that instructs Claude to review and extract:

    • Corrections you made
    • Successful patterns
    • Unresolved issues
    • Key decisions and reasoning
  2. Configure to upload to NotebookLM:

    notebooklm use <master-brain-notebook-id>
    notebooklm source add "./session-summary-2026-04-06.md"
    
  3. Run /wrap-up before closing every session

  4. Add retrieval instruction to CLAUDE.md:

    "Before answering questions about project architecture, historical decisions, or my preferences, query the Master Brain notebook using the NotebookLM CLI."

Result: Your AI agent effectively remembers everything. Storage and retrieval on Google's free infrastructure.
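A minimal sketch of that wrap-up helper: it writes a dated summary file with the four sections above, then prints (rather than runs) the upload commands so you can review them first. The notebook ID stays a placeholder, as in step 2.

```shell
#!/bin/sh
# Sketch of a /wrap-up helper: writes a dated session summary, then prints
# the notebooklm upload commands for review instead of executing them.
today=$(date +%Y-%m-%d)
summary="session-summary-$today.md"
{
  echo "# Session summary $today"
  echo "## Corrections you made"
  echo "## Successful patterns"
  echo "## Unresolved issues"
  echo "## Key decisions and reasoning"
} > "$summary"
echo "notebooklm use <master-brain-notebook-id>"
printf 'notebooklm source add "./%s"\n' "$summary"
```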

Workflow D: Visual Knowledge Management with Obsidian

Problem: Claude generates research docs that pile up as invisible files.

Fix: Run Claude Code from inside an Obsidian vault so everything is immediately visible in a visual knowledge graph.

Steps:

  1. Launch from vault root:

    cd ~/Documents/MyVault
    claude
    
  2. Create CLAUDE.md at vault root defining:

    • Folder structure
    • Required metadata
    • Linking rules ([[like this]] for Obsidian's graph view)
    • Formatting standards
  3. Build custom skills:

    • /research <topic> - Query NotebookLM, create vault note with metadata and cross-links
    • /daily - Generate daily summary
    • /wrap-up - Session memory skill saving directly into vault
  4. Refine in real time - See files appear live in Obsidian

Result: Living, growing knowledge base with NotebookLM handling heavy research.
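For step 2, a hypothetical CLAUDE.md might look like this (folder names are invented; adapt them to your vault):

```markdown
# CLAUDE.md (vault conventions)

## Folder structure
- research/  - NotebookLM-backed research notes
- daily/     - daily summaries
- sessions/  - /wrap-up outputs

## Metadata
Every note starts with frontmatter: date, tags, source-notebook.

## Linking
Cross-link related notes with [[wikilinks]] so they appear in graph view.
```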

What Can Go Wrong

Unofficial APIs Mean No Guarantees

notebooklm-py reverse-engineers Google's internal protocols. If Google changes its backend, commands will break without warning. Treat it as a power-user productivity tool, not production infrastructure.

Respect Anthropic's Usage Policies

Don't use this to dodge token limits through unofficial harnesses; make sure your usage stays within the terms of your plan.

Data Residency (UK/EU)

Claude's consumer tools process data in the US. GDPR implications are real. Enterprise API offers regional processing.

Protect Your Session Cookies

storage_state.json contains live Google session cookies. Never commit it to a public repo.
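A simple mitigation, run once at the repo root (plain shell; safe to run repeatedly):

```shell
#!/bin/sh
# Keep the live cookie file out of version control.
echo "storage_state.json" >> .gitignore
sort -u -o .gitignore .gitignore   # de-duplicate if run twice
grep "storage_state.json" .gitignore
```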

Cookies Expire

Re-authenticate periodically:

notebooklm login

Quick Reference: Essential Commands

# Authentication
notebooklm login

# Notebooks
notebooklm create "Notebook Name"
notebooklm list
notebooklm use <notebook-id>

# Sources
notebooklm source add "./file.pdf"
notebooklm source add "https://example.com"
notebooklm source list

# Queries
notebooklm ask "your question here"

# Generate outputs
notebooklm generate slide-deck
notebooklm generate flashcards
notebooklm generate mind-map
notebooklm generate audio

What to Explore Next

  • Build a personal skill library - Package repetitive workflows
  • Browse the skill ecosystem - Thousands of skills on GitHub and SkillsMP
  • Combine with MCP servers - Model Context Protocol for external services
  • Add Obsidian plugins - Dataview for dynamic queries, Templater for automation

Additional credits: Jack Roberts, Chase, Universe of AI, and Teng Ling
