
Installation Issues

Error: Cannot find module 'context-window'

Cause: Package not installed or not found in node_modules.

Solution:
# Install the package
npm install context-window

# or with pnpm
pnpm add context-window

# Verify installation
npm list context-window

TypeScript errors after installation

Cause: Missing type definitions or outdated TypeScript version.

Solution:
# Ensure TypeScript is installed
npm install --save-dev typescript

# Update to latest version
npm install context-window@latest

# Clean and rebuild
rm -rf node_modules package-lock.json
npm install

API Key Issues

Error: “Invalid API key” (OpenAI)

Symptoms:
  • Error message contains “Invalid API key”
  • 401 Unauthorized responses
Solutions:
1. Verify API key format

OpenAI keys start with sk-:
echo $OPENAI_API_KEY
# Should output: sk-...
2. Test API key

curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
If this fails, regenerate your key in the OpenAI API Keys dashboard.
3. Check .env file

Ensure no extra spaces or quotes:
# Good
OPENAI_API_KEY=sk-abc123...

# Bad
OPENAI_API_KEY="sk-abc123..."  # Remove quotes
OPENAI_API_KEY= sk-abc123...   # No space after =
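These pitfalls can also be caught programmatically. A minimal sketch (the validateEnvLine helper is hypothetical, not part of context-window or dotenv):

```javascript
// Hypothetical helper: flag common .env formatting mistakes in a KEY=value line.
function validateEnvLine(line) {
  const match = line.match(/^([A-Z0-9_]+)=(.*)$/);
  if (!match) return "not a KEY=value line";
  const value = match[2];
  if (value.startsWith('"') || value.startsWith("'")) return "remove quotes around the value";
  if (value.startsWith(" ")) return "remove the space after =";
  return "ok";
}

console.log(validateEnvLine('OPENAI_API_KEY=sk-abc123'));    // "ok"
console.log(validateEnvLine('OPENAI_API_KEY="sk-abc123"'));  // "remove quotes around the value"
console.log(validateEnvLine('OPENAI_API_KEY= sk-abc123'));   // "remove the space after ="
```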
4. Verify environment loading

console.log("API key loaded:",
  process.env.OPENAI_API_KEY ? "Yes" : "No"
);

// Make sure you're loading .env
import "dotenv/config";

Error: “Invalid API key” (Pinecone)

Solutions:
  1. Verify the key in the Pinecone Console.
  2. Check environment variable:
    echo $PINECONE_API_KEY
    
  3. Test connection:
    import { Pinecone } from "@pinecone-database/pinecone";
    
    const pinecone = new Pinecone({
      apiKey: process.env.PINECONE_API_KEY || ""
    });
    
    const indexes = await pinecone.listIndexes();
    console.log("Indexes:", indexes);
    

Pinecone Issues

Error: “Index not found”

Symptoms:
  • “Index ‘xyz’ not found”
  • Cannot connect to Pinecone index
Solutions:
  1. Go to Pinecone Console
  2. Check if your index is listed
  3. Verify the index name matches your PINECONE_INDEX environment variable
# In .env
PINECONE_INDEX=context-window

// In code
namespace: "context-window"  // Must match
Create a new index in Pinecone Console with:
  • Dimensions: 1536
  • Metric: cosine
  • Cloud: AWS (us-east-1 recommended for free tier)
New indexes take 30-60 seconds to become ready. Wait and try again.

Error: “Incorrect dimensions”

Symptoms:
  • “Dimension mismatch: expected X, got 1536”
  • Embedding dimension errors
Cause: Pinecone index was created with wrong dimensions.

Solution:
1. Verify required dimensions

OpenAI’s text-embedding-3-small produces 1536-dimensional vectors
2. Check current index dimensions

In Pinecone Console, view your index details to see its dimension setting
3. Recreate index with correct dimensions

  1. Delete the incorrectly configured index
  2. Create a new index with 1536 dimensions
  3. Re-run your ingestion
Deleting an index removes all stored vectors. Make sure you have your source documents to re-ingest.
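You can also guard against a mismatch in code before upserting. A minimal sketch (the checkDimension helper is illustrative; 1536 matches text-embedding-3-small, and the index dimension would come from your index configuration):

```javascript
// Fail fast if an embedding's length doesn't match the index dimension.
function checkDimension(embedding, indexDimension) {
  if (embedding.length !== indexDimension) {
    throw new Error(
      `Dimension mismatch: index expects ${indexDimension}, embedding has ${embedding.length}`
    );
  }
  return true;
}

const vector = new Array(1536).fill(0);     // size of a text-embedding-3-small vector
console.log(checkDimension(vector, 1536));  // true
// checkDimension(vector, 768) would throw a "Dimension mismatch" error
```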

Error: “Rate limit exceeded” (Pinecone)

Cause: Too many requests to the Pinecone API.

Solutions:
// Add retry logic with exponential backoff
async function upsertWithRetry(index, vectors, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      await index.upsert(vectors);
      return;
    } catch (error) {
      if (i === maxRetries - 1) throw error;

      const delay = Math.pow(2, i) * 1000;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Reduce batch sizes
// Process fewer documents at once

Ingestion Problems

Documents not found

Symptoms:
  • “ENOENT: no such file or directory”
  • Files not being ingested
Solutions:
Paths are relative to where you run the script:
// If running from project root:
data: ["./docs"]  // ✓

// Not from where the code file is located
data: ["../docs"]  // Might be wrong
Use absolute paths if unsure:
import path from "path";
data: [path.join(__dirname, "docs")]  // in ESM, derive __dirname via fileURLToPath (see Environment Issues below)
# List files in directory
ls -la ./docs

# Check specific file
ls -l ./document.pdf
# Make sure files are readable
chmod +r ./docs/*
Only .txt, .md, and .pdf files are processed. Other file types are silently skipped.
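The skipping behavior can be sketched like this (the helper name is illustrative, not the library's actual implementation; the extension list matches the note above):

```javascript
import path from "path";

const SUPPORTED = new Set([".txt", ".md", ".pdf"]);

// Return only the files the ingester will process; everything else is skipped.
function filterIngestable(files) {
  return files.filter(f => SUPPORTED.has(path.extname(f).toLowerCase()));
}

console.log(filterIngestable(["a.txt", "b.md", "c.pdf", "d.docx", "e.html"]));
// → ["a.txt", "b.md", "c.pdf"]
```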

PDF parsing fails

Symptoms:
  • Error: “Failed to parse PDF”
  • Empty content from PDF files
Common causes: scanned PDFs, password-protected files, corrupted files, and very large files.

For scanned PDFs:
Problem: The PDF contains images of text, not actual text.
Test: Try selecting text in a PDF viewer. If you can't select text, it's scanned.
Solutions:
  • Use OCR software (Adobe Acrobat, Tesseract)
  • Convert to text first
  • Use a text-based PDF instead

Ingestion is very slow

Symptoms:
  • Takes many minutes to ingest documents
  • Seems stuck during ingestion
Causes & Solutions:
  • Many documents: expected behavior; be patient
  • OpenAI rate limits: upgrade your tier or reduce the number of chunks
  • Large files: increase chunk size to reduce the number of chunks
  • Network issues: check your internet connection
Optimization tips:
// Reduce number of chunks
chunk: { size: 2000, overlap: 100 }

// Process in batches
for (const batch of batches) {
  await createCtxWindow({ data: batch, /* ... */ });
}

Query Issues

Always returns “I don’t know”

Symptoms:
  • Every question returns “I don’t know based on the uploaded files”
  • No relevant answers found
Debugging steps:
1. Verify ingestion succeeded

Check console output during createCtxWindow() for errors
2. Remove score threshold

limits: {
  scoreThreshold: 0  // Remove filtering
}
3. Increase retrieval

limits: {
  topK: 12,               // More chunks
  maxContextChars: 12000  // More context
}
4. Rephrase question

Use terminology that appears in your documents:
// If docs say "authentication"
"How does authentication work?"  // ✓ matches document wording
"How do I log in?"               // ✗ may not match anything
5. Check namespace

Ensure you’re querying the correct namespace:
// Creation
vectorStore: { namespace: "docs-v1" }

// Query - must use same namespace
// If using registry, namespace should match

Inconsistent or wrong answers

Symptoms:
  • Answers change between identical questions
  • Answers don’t match document content
  • Contradictory information
Solutions:
limits: {
  topK: 10  // Retrieve more relevant chunks
}
Chunks might be too small and missing context:
chunk: {
  size: 1500,   // Larger chunks
  overlap: 250  // More overlap
}
ai: {
  provider: "openai",
  model: "gpt-4o"  // More accurate than gpt-4o-mini
}
If you have contradictory information in different documents, the AI might use both. Clean up your document set for consistency.
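To see why larger chunks with more overlap preserve context, here is a minimal sliding-window chunker (a sketch of the general technique, not context-window's internal implementation):

```javascript
// Split text into chunks of `size` characters, each overlapping the
// previous chunk by `overlap` characters so context spans boundaries.
function chunkText(text, size, overlap) {
  const chunks = [];
  const step = size - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}

const chunks = chunkText("a".repeat(3000), 1500, 250);
console.log(chunks.length);     // 3
console.log(chunks[0].length);  // 1500
```

With overlap 250, a sentence that straddles a chunk boundary still appears whole in at least one chunk.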

Slow response times

Symptoms:
  • Questions take more than 5 seconds
  • Timeout errors
Solutions:
// 1. Use faster model
ai: { provider: "openai", model: "gpt-4o-mini" }

// 2. Reduce context
limits: {
  topK: 5,
  maxContextChars: 5000
}

// 3. Add timeout handling
async function askWithTimeout(cw, question, ms = 10000) {
  return Promise.race([
    cw.ask(question),
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error("Timeout")), ms)
    )
  ]);
}

// 4. Implement caching
const cache = new Map();
if (cache.has(question)) return cache.get(question);
const answer = await cw.ask(question);
cache.set(question, answer);
return answer;

Memory Issues

Out of memory during ingestion

Symptoms:
  • “JavaScript heap out of memory”
  • Process crashes during ingestion
Solutions:
1. Increase Node.js memory

# Set higher memory limit
NODE_OPTIONS=--max-old-space-size=4096 node script.js

# Or in package.json scripts
{
  "scripts": {
    "start": "NODE_OPTIONS=--max-old-space-size=4096 node index.js"
  }
}
2. Process files in batches

// Instead of all at once:
data: ["./all-docs"]

// Do in batches:
await createCtxWindow({
  namespace: "batch-1",
  data: ["./docs/part1"]
});

await createCtxWindow({
  namespace: "batch-2",
  data: ["./docs/part2"]
});
3. Increase chunk size

Fewer chunks = less memory:
chunk: { size: 2000, overlap: 200 }
4. Split large files

If you have very large PDFs or text files, split them into smaller files.
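Splitting can be done with a few lines of Node. A sketch that breaks one large document's text into bounded pieces on paragraph boundaries (the helper name and size limit are illustrative):

```javascript
// Split one large document's text into pieces of at most `maxChars`
// characters, breaking on paragraph boundaries where possible.
function splitDocument(text, maxChars) {
  const parts = [];
  let current = "";
  for (const para of text.split("\n\n")) {
    if (current && current.length + para.length + 2 > maxChars) {
      parts.push(current);
      current = "";
    }
    current += (current ? "\n\n" : "") + para;
  }
  if (current) parts.push(current);
  return parts;
}

// Each part can then be written out as its own .txt file before ingestion.
const parts = splitDocument(("x".repeat(400) + "\n\n").repeat(10), 1000);
console.log(parts.length);  // 5
```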

Runtime Errors

Error: “Context window not found”

Symptom: Occurs when using getCtxWindow().

Cause: Context window was never created or wrong name used.

Solution:
import { hasCtxWindow, createCtxWindow, getCtxWindow } from "context-window";

// Check before retrieving
if (!hasCtxWindow("my-docs")) {
  await createCtxWindow({
    namespace: "my-docs",
    data: ["./docs"],
    ai: { provider: "openai" },
    vectorStore: { provider: "pinecone" }
  });
}

const cw = getCtxWindow("my-docs");

Error: “Rate limit exceeded” (OpenAI)

Symptoms:
  • “Rate limit reached for requests”
  • 429 status code
Solutions:
async function askWithRetry(cw, question, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await cw.ask(question);
    } catch (error) {
      if (i === maxRetries - 1) throw error;

      // Exponential backoff
      const delay = Math.pow(2, i) * 1000;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
Visit OpenAI Usage Limits to:
  • Check your current tier
  • View rate limits
  • Upgrade to a higher tier
Reduce request volume:
  • Implement request queuing
  • Add delays between requests
  • Cache common questions
Reduce context per request:
limits: {
  topK: 5,               // Fewer chunks
  maxContextChars: 5000  // Less context
}
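Queuing with a delay between requests can be sketched as follows (askFn stands in for any rate-limited call such as cw.ask; the helper name and default delay are illustrative):

```javascript
// Run rate-limited calls one at a time with a pause between them.
async function askSequentially(askFn, questions, delayMs = 500) {
  const answers = [];
  for (const q of questions) {
    answers.push(await askFn(q));
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  return answers;
}

// Example with a stand-in for cw.ask:
const fakeAsk = async q => `answer to ${q}`;
askSequentially(fakeAsk, ["q1", "q2"], 10).then(a => console.log(a));
```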

Network errors

Symptoms:
  • “ECONNREFUSED”
  • “Network request failed”
  • Timeout errors
Solutions:
  1. Check internet connection
  2. Verify firewall settings (port 443 for HTTPS)
  3. Check proxy settings if behind corporate proxy:
    export HTTP_PROXY=http://proxy:port
    export HTTPS_PROXY=http://proxy:port
    
  4. Retry with exponential backoff (see above)

Environment Issues

.env file not loaded

Symptoms:
  • Environment variables are undefined
  • “API key not set” errors
Solutions:
# Install dotenv
npm install dotenv

// Load at the TOP of your entry file
import "dotenv/config";
// or
import dotenv from "dotenv";
dotenv.config();

// Verify loading
console.log("OPENAI_API_KEY:", process.env.OPENAI_API_KEY ? "Loaded" : "Missing");

Different behavior in production

Common issues:
Set env vars in your deployment platform:
  • Vercel: Environment Variables settings
  • Heroku: Config Vars
  • AWS: Parameter Store or Secrets Manager
  • Docker: Pass via -e flag or .env file
Use absolute paths or path resolution:
import path from "path";
import { fileURLToPath } from "url";

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const docsPath = path.join(__dirname, "docs");
Production environments often have stricter memory limits. Configure appropriately for your platform.

Still Stuck?

If you’re still experiencing issues, include the following when reporting them:
  • Node.js version (node --version)
  • Package version (npm list context-window)
  • Error message and stack trace
  • Minimal code to reproduce the issue
  • Operating system