AI Support Desk brings structured ITIL-aligned ticket intake, live AI-powered chat sessions, and semantic knowledge search together in one platform — so IT teams can resolve more, faster, and with less friction.
Users submit vague tickets. Admins spend hours triaging. Critical issues get buried under routine requests. Knowledge base articles go unread because nobody can find the right one. And the best support staff spend more time logging tickets than solving problems.
Most ticketing systems are elaborate databases with a chat window bolted on. We took the opposite approach: start with the AI conversation, and let the structured system emerge from it.
AI-powered.
ITIL-aligned.
Built for real IT teams.
An 8-step guided intake flow captures intent, category, device, symptom, error details, timing, and impact — automatically generating a complete, prioritized ticket before a human ever touches it.
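The guided flow can be pictured as a simple ordered state machine. A minimal TypeScript sketch, assuming step identifiers named after the description above plus a final review step (the eighth step and all names are illustrative, not the shipped schema):

```typescript
// Hypothetical sketch of the 8-step intake flow. Step names follow the
// feature description; the final "review" step is an assumption.
type IntakeStep =
  | "intent" | "category" | "device" | "symptom"
  | "errorDetails" | "timing" | "impact" | "review";

const INTAKE_STEPS: IntakeStep[] = [
  "intent", "category", "device", "symptom",
  "errorDetails", "timing", "impact", "review",
];

interface IntakeSession {
  answers: Partial<Record<IntakeStep, string>>;
  stepIndex: number; // index into INTAKE_STEPS
}

// Record an answer for the current step and advance.
// Returns the next step, or null when the flow is complete.
function advance(session: IntakeSession, answer: string): IntakeStep | null {
  const step = INTAKE_STEPS[session.stepIndex];
  session.answers[step] = answer;
  session.stepIndex += 1;
  return session.stepIndex < INTAKE_STEPS.length
    ? INTAKE_STEPS[session.stepIndex]
    : null;
}
```

Because every answer lands in a keyed record, the finished session maps directly onto a fully populated ticket.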
Real-time AI conversations powered by Google Gemini with full transcript saving. AI detects resolution keywords and surfaces a "Resolve Ticket" button when the issue is confirmed solved.
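The keyword check that surfaces the button can be sketched in a few lines of TypeScript. The phrase list here is illustrative, not the shipped one:

```typescript
// Hypothetical heuristic for surfacing the "Resolve Ticket" button.
// The resolution phrases are example values, not the production list.
const RESOLUTION_PHRASES = [
  "that fixed it",
  "it works now",
  "issue is resolved",
  "problem solved",
];

function shouldOfferResolve(message: string): boolean {
  const text = message.toLowerCase();
  return RESOLUTION_PHRASES.some((phrase) => text.includes(phrase));
}
```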
Vector-powered KB search using pgvector. AI pulls the most relevant articles into the context of every chat session, so nobody has to hunt through categories manually.
Full inventory tracking linked to tickets. When a user logs an issue, the system knows their device, software load, and history — no manual entry required.
ITIL-aligned priority calculation using Impact × Timing. Critical outages get immediate routing. Low-urgency questions get queued appropriately — automatically.
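An Impact × Timing calculation is typically a small lookup matrix. A hedged TypeScript sketch, assuming three levels per axis and P1–P4 output labels (the exact mapping in the product may differ):

```typescript
// Hypothetical ITIL-style priority matrix: Impact × Timing → P1..P4.
// Levels and the mapping itself are illustrative assumptions.
type Level = "low" | "medium" | "high";

const PRIORITY_MATRIX: Record<Level, Record<Level, string>> = {
  high:   { high: "P1", medium: "P2", low: "P3" },
  medium: { high: "P2", medium: "P3", low: "P4" },
  low:    { high: "P3", medium: "P4", low: "P4" },
};

function calculatePriority(impact: Level, timing: Level): string {
  return PRIORITY_MATRIX[impact][timing];
}
```

A matrix like this is what makes the routing automatic: a high-impact, high-urgency report lands at P1 with no human triage.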
Full mobile support including iOS camera/microphone access for live sessions. HTTPS proxy configuration for Safari so mobile users get the full AI experience anywhere.
The structured intake flow guides users through a clear, step-by-step process. By the time a ticket reaches the IT team, it has everything needed to act: category, device, symptom, error messages, timing, and calculated priority.
Sessions auto-save at every step, so users can pause and resume without losing data. Abandoned flows are flagged for follow-up. Completed flows link directly to a live AI chat session with full context injected.
The live chat session is powered by Google Gemini with full mobile support. Every message is transcribed, saved, and linked to the originating ticket. AI has access to the user's device inventory, prior tickets, and the full knowledge base — so answers are grounded in actual context.
Resolution detection is built in. When AI or the user signals the issue is resolved, the ticket closes automatically with a full transcript and resolution summary.
Traditional knowledge bases rely on keyword matching and manual categorization. Our vector knowledge base understands meaning — so when someone describes a problem in their own words, AI surfaces the right article without the user needing to know what to search for.
pgvector-powered embedding search finds the most contextually relevant articles — even when the user's words don't match the article title exactly.
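Under the hood, pgvector ranks rows by embedding distance (its `<=>` operator is cosine distance). The same idea in pure TypeScript, as a minimal sketch of how articles are ranked against a query embedding (function names and vectors are illustrative):

```typescript
// Illustrative cosine-similarity ranking, mirroring what an
// `ORDER BY embedding <=> :query` clause does in pgvector SQL.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the titles of the k most similar articles to the query embedding.
function topKbMatches(
  query: number[],
  articles: { title: string; embedding: number[] }[],
  k = 3,
): string[] {
  return [...articles]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding),
    )
    .slice(0, k)
    .map((a) => a.title);
}
```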
AI pulls top KB matches and injects them into every live chat session automatically. Agents get context before they even ask.
Abandoned intake flows are saved server-side and can be resumed within 24 hours. Nothing is lost when a user gets interrupted mid-ticket.
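The resume check itself is trivial. A sketch assuming a 24-hour window measured from the last server-side save (the constant comes from the description above; the function name is hypothetical):

```typescript
// Hypothetical resume-window check for abandoned intake flows.
// The 24-hour window matches the feature description.
const RESUME_WINDOW_MS = 24 * 60 * 60 * 1000;

function canResume(savedAt: Date, now: Date = new Date()): boolean {
  return now.getTime() - savedAt.getTime() <= RESUME_WINDOW_MS;
}
```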
A NestJS backend with TypeORM handles all API routing, authentication, and database operations. A React frontend delivers a responsive, real-time experience on both desktop and mobile. PostgreSQL with the pgvector extension powers semantic search across the knowledge base.
Docker Compose orchestrates the full stack locally and in production. HTTPS is enforced for all production deployments, with proxy configuration for iOS Safari microphone access in live chat sessions.
JWT tokens that expire after 24 hours, with role-based scopes. Separate permissions for end users, IT staff, and admins.
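An authorization check over a decoded token can be sketched as below. Claim names (`role`, `exp`) and the role ranking are assumptions for illustration; signature verification is omitted and would be handled by the auth middleware:

```typescript
// Hypothetical scope check against a decoded JWT payload.
// Claim names and role ordering are illustrative assumptions.
type Role = "user" | "staff" | "admin";

interface TokenPayload {
  sub: string; // user id
  role: Role;
  exp: number; // expiry as a Unix timestamp (seconds)
}

const ROLE_RANK: Record<Role, number> = { user: 0, staff: 1, admin: 2 };

// Reject expired tokens, then require at least the given role.
function isAuthorized(
  payload: TokenPayload,
  required: Role,
  nowSec: number,
): boolean {
  if (payload.exp <= nowSec) return false; // 24h expiry enforced via exp
  return ROLE_RANK[payload.role] >= ROLE_RANK[required];
}
```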
Production deployments require TLS. Mobile access via HTTPS proxy for iOS Safari camera/mic support.
All ticket changes, session events, and admin actions logged with timestamps for compliance traceability.
API-level rate limiting on chat endpoints prevents abuse. Per-user throttling protects AI backend costs.
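Per-user throttling of this kind is commonly built on a token bucket. A minimal sketch, with capacity and refill rate as illustrative values rather than the shipped configuration:

```typescript
// Hypothetical per-user token bucket for throttling chat endpoints.
// Capacity and refill rate are example values.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now: number, // seconds
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if a request at time `now` (seconds) is allowed.
  allow(now: number): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.last) * this.refillPerSec,
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

One bucket per user keeps a single noisy client from driving up AI backend costs for everyone.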
Structured intake, intelligent chat, and a knowledge base that actually works — everything your IT team needs to move faster.