My server.js powers a phone-based AI that follows an eight-step flow, from greeting the caller to sending a webhook payload. About 65% of the code is in place, yet two critical pieces are still breaking the experience:

• Identifying the caller's legal specialty: right now I rely on simple keyword matching, and the system often fails to recognise any specialty at all.
• Dynamically asking the right follow-up questions: once the specialty is known, the agent should ask five targeted questions from that domain before returning to the three-to-ten generic questions. Instead, it stalls or skips them entirely.

I need these conversation steps to live inside one persistent OpenAI chat state so the AI maintains context across turns. Once the answers are collected, the script must continue to fire the existing email and webhook logic unchanged.

Deliverables I'm expecting:

- Refactor or replace the current keyword-matching block so specialty detection is reliable.
- Implement context-aware question sequencing inside the OpenAI conversation.
- Prove the flow end-to-end via test calls, showing the correct specialty, ordered questions, email, and webhook payload.
- Clean, well-commented Node code that drops neatly into the existing repo.

Tech stack: Node.js on an Express server; the OpenAI API for chat; Twilio voice webhooks handle the call audio.

If you have solid experience building conversational flows with OpenAI and real-time, human-like LLM conversation on phone calls, and can dive straight into a half-finished codebase, I'd love to get this wrapped up quickly.
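For reference, here is a rough sketch of the kind of approach I have in mind for the two broken pieces. Everything in it is illustrative: detectSpecialty, nextQuestion, SPECIALTY_QUESTIONS, and the session shape are placeholder names, not identifiers from the existing repo.

```js
// sketch.js — illustrative only, not code from the current server.js
const OpenAI = require("openai");

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// One persistent history per Twilio call SID so the model keeps context across turns.
const sessions = new Map(); // callSid -> { messages: [...], specialty: null, questionIndex: 0 }

// Hypothetical question bank keyed by specialty (five targeted questions each).
const SPECIALTY_QUESTIONS = {
  "personal injury": ["When and where did the injury happen?", "Have you seen a doctor?" /* ... */],
  "family law": ["Are there children involved?", "Is there an existing court order?" /* ... */],
  "estate planning": ["Do you already have a will or trust?" /* ... */],
};

// Replace the keyword-matching block with a single classification call that
// returns strict JSON, so "no specialty recognised" becomes the rare case.
async function detectSpecialty(callerUtterance) {
  const resp = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          `Classify the caller's legal issue into exactly one of: ` +
          `${Object.keys(SPECIALTY_QUESTIONS).join(", ")}. ` +
          `Respond in JSON: {"specialty": "<one of the options>", "confidence": <0 to 1>}.`,
      },
      { role: "user", content: callerUtterance },
    ],
  });
  return JSON.parse(resp.choices[0].message.content);
}

// Context-aware sequencing: five specialty questions first, then the generic set.
// Each chosen question is also pushed onto session.messages so the OpenAI chat
// state remembers what has already been asked.
function nextQuestion(session, genericQuestions) {
  const pool = (SPECIALTY_QUESTIONS[session.specialty] || []).slice(0, 5).concat(genericQuestions);
  if (session.questionIndex >= pool.length) return null; // all questions asked -> fire email/webhook
  const question = pool[session.questionIndex++];
  session.messages.push({ role: "assistant", content: question });
  return question;
}
```

The idea is that each Twilio webhook turn appends the caller's transcribed speech to session.messages, calls nextQuestion, and speaks the result back via TwiML; when nextQuestion returns null, the existing email and webhook logic fires unchanged.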