No agencies; work on the EST time zone.

My project hinges on bringing the ElevenLabs TTS API and TouchDesigner together in a seamless, real-time pipeline. I already have key scenes laid out inside TouchDesigner; what I'm missing is the connective tissue that calls ElevenLabs on demand, pipes the returned audio back into TouchDesigner, and keeps the whole loop responsive enough for live use.

Here's what I expect when we wrap up:

• A well-commented Python DAT (or external script, if you prefer) that authenticates, sends text, receives the audio stream from ElevenLabs, and stores it locally or directly in TouchDesigner's memory.
• A demo .toe file wired to play that audio in sync with the visuals so I can see the round trip in action.
• Clear setup notes so I can swap API keys, change voices, or expand the workflow later.

Latency must stay low enough for on-stage cues (ideally under one second from text send to audio output), so efficient buffering and smart threading will be important. If you have ideas for caching or pre-fetching to improve timing, tell me; I'm open to improvements as long as the code stays readable. A rough sketch of the kind of pipeline I have in mind is included below.

You'll get access to my existing TouchDesigner project and a short list of sample text prompts to test against. Deliver back the updated .toe, the Python code, and a brief README, and we're done.
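To make that concrete, here is a minimal sketch of the shape I have in mind; treat it as a starting point, not a spec. The endpoint path, the eleven_monolingual_v1 model id, the audiofilein1 operator name, and the helper names request_tts / poll_results are all placeholders of mine, and it assumes the requests library is importable from TouchDesigner's Python. Verify the current ElevenLabs docs and your operator/parameter names before relying on any of it.

# tts_fetch.py -- paste into a Text DAT and import it, or keep it as an external module.
# Sketch only: endpoint, model id, operator paths, and parameter names are placeholders.

import hashlib
import pathlib
import queue
import threading

import requests  # assumes requests is available in TouchDesigner's Python

API_KEY = "YOUR_ELEVENLABS_KEY"     # to be swapped per the setup notes
VOICE_ID = "YOUR_VOICE_ID"          # any voice id from the ElevenLabs account
ENDPOINT = "https://api.elevenlabs.io/v1/text-to-speech/{voice}"  # verify against current docs

CACHE_DIR = pathlib.Path(project.folder) / "tts_cache"  # project.folder is a TouchDesigner global
CACHE_DIR.mkdir(exist_ok=True)

# Worker threads must never touch TouchDesigner operators, so finished clips
# are handed back through a queue and picked up on the main thread each frame.
results = queue.Queue()

def _cache_path(text):
    # One file per unique (voice, text) pair, so repeated or pre-fetched cues are instant.
    digest = hashlib.sha1(f"{VOICE_ID}:{text}".encode("utf-8")).hexdigest()
    return CACHE_DIR / f"{digest}.mp3"

def _fetch(text):
    # Runs on a worker thread: call ElevenLabs, write the audio to disk, report back.
    path = _cache_path(text)
    if not path.exists():
        resp = requests.post(
            ENDPOINT.format(voice=VOICE_ID),
            headers={"xi-api-key": API_KEY, "Accept": "audio/mpeg"},
            json={"text": text, "model_id": "eleven_monolingual_v1"},
            timeout=10,
        )
        resp.raise_for_status()
        path.write_bytes(resp.content)
    results.put(str(path))

def request_tts(text):
    # Call this from a button or keyboard callback; it returns immediately.
    threading.Thread(target=_fetch, args=(text,), daemon=True).start()

def poll_results():
    # Call once per frame (e.g. from an Execute DAT's onFrameStart callback):
    # load any finished clip into the Audio File In CHOP and cue playback.
    while not results.empty():
        path = results.get_nowait()
        audio = op("audiofilein1")   # placeholder operator name
        audio.par.file = path
        audio.par.cuepulse.pulse()   # restart playback; check the exact parameter name on the CHOP

Pre-fetching would then just mean calling request_tts for upcoming cues a few seconds early; anything already in the cache plays back with no network round trip, which is what keeps the under-one-second target realistic.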