Case Study • Consulting / Advisory
Internal Knowledge Assistant
A secure, retrieval-first AI assistant helped a consulting team search ten years of project artifacts in seconds, replacing 30–60 minutes of manual digging per question.
By Phil Maher
Search Time
30–60 min to <2 min
Adoption
88% weekly active use
Proposal Velocity
+35%
Business Context
The client had project knowledge scattered across shared drives, old proposals, and archived deliverables. Senior consultants could eventually find answers, but it was slow and depended on tribal memory.
Primary Constraints
- Strict privacy requirements: no external training on proprietary documents.
- Mixed document quality: scans, slide decks, PDFs, and old exports.
- Need for trust: users needed citations, not just generated answers.
Implementation Approach
I built a secure retrieval-augmented generation (RAG) pipeline with ingestion, chunking, embedding, retrieval, and response-orchestration layers. Every generated answer included source references and deep links to the original internal files.
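The layers above can be sketched end to end. This is an illustrative toy, not the deployed system: keyword overlap stands in for embedding similarity, fixed-size splitting stands in for the real chunking, and `KnowledgeStore`, `Chunk`, and all field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    doc_id: str               # deep-link target back to the source file
    text: str
    metadata: dict = field(default_factory=dict)

class KnowledgeStore:
    """Toy in-memory index standing in for a real vector store."""

    def __init__(self):
        self.chunks: list[Chunk] = []

    def ingest(self, doc_id, text, metadata=None, chunk_size=200):
        # Naive fixed-size chunking for brevity; the case study's
        # chunking was metadata-aware and structure-sensitive.
        for i in range(0, len(text), chunk_size):
            self.chunks.append(Chunk(doc_id, text[i:i + chunk_size], metadata or {}))

    def retrieve(self, query, k=3):
        # Keyword overlap stands in for embedding similarity here.
        q = set(query.lower().split())
        ranked = sorted(self.chunks,
                        key=lambda c: len(q & set(c.text.lower().split())),
                        reverse=True)
        return ranked[:k]

def answer(store, query, k=3):
    """Citation-first response: sources always travel with the context."""
    hits = store.retrieve(query, k)
    return {
        "context": "\n".join(h.text for h in hits),   # fed to the LLM
        "citations": sorted({h.doc_id for h in hits}),  # deep links shown to the user
    }
```

The key structural choice is that `answer` builds citations from the retrieved chunks themselves rather than asking the model to name its sources, so references can never be hallucinated.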
What Made It Reliable
- Metadata-aware chunking to preserve client/project context.
- Permission-aware retrieval so users only saw authorized documents.
- Citation-first response format to reduce hallucination risk.
- Feedback capture loop to refine prompts and retrieval quality weekly.
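The first two measures above can be sketched in a few lines. The document shape and field names here (`client`, `project`, `acl`, `path`) are assumptions for illustration:

```python
def make_chunks(doc, chunk_size=400):
    """Metadata-aware chunking: prepend client/project context to every
    chunk so retrieved text stays interpretable out of context."""
    header = f"[client={doc['client']} project={doc['project']}] "
    body = doc["text"]
    return [{"text": header + body[i:i + chunk_size],
             "acl": doc["acl"],          # groups allowed to see this doc
             "source": doc["path"]}      # deep link back to the file
            for i in range(0, len(body), chunk_size)]

def authorized(chunks, user_groups):
    """Permission-aware retrieval: filter by ACL before ranking, so
    unauthorized text can never enter the prompt context at all."""
    return [c for c in chunks if c["acl"] & user_groups]
```

Filtering before ranking matters: applying permissions after generation would mean restricted text had already reached the model.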
Rollout Model
- Pilot with one practice area and a constrained knowledge corpus.
- Measure answer quality and citation precision against benchmark questions.
- Expand to cross-practice search once quality thresholds were met.
- Institutionalize with enablement sessions and playbooks for proposal teams.
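The citation-precision gate in step two can be scored with a small helper. The data shapes are assumptions: a map of question id to the sources the assistant cited, and a gold map of the sources the benchmark expects.

```python
def citation_precision(answers, gold):
    """Average, over benchmark questions, of the fraction of cited
    sources that appear in that question's expected-source set.
    An answer with no citations scores zero."""
    scores = []
    for qid, cited in answers.items():
        expected = gold[qid]
        if cited:
            scores.append(len(set(cited) & expected) / len(cited))
        else:
            scores.append(0.0)
    return sum(scores) / len(scores)
```

A score like this gives the rollout a concrete expansion threshold instead of a subjective "answers look good" judgment.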
Outcome
Average knowledge-retrieval time dropped to under two minutes. Teams reused prior work more effectively, proposal cycles accelerated, and onboarding time for new consultants decreased because key context was instantly searchable.
Need a Secure Internal Copilot?
If your team has valuable internal knowledge but can’t reliably access it, a retrieval-first assistant usually creates immediate leverage.
