Project Title: Firmware/Software Developer for "Emotion-First" Kid Companion Robot

Project Overview:
We are building the software logic for a "Kid Companion Robot V1," a desktop-sized emotional companion designed for children. Unlike Alexa or Siri, this robot does not use generative AI, LLMs, or machine learning. Instead, it relies on a complex, deterministic state machine to simulate emotions, personality, and companionship. We need a developer to implement the "Emotion Engine," audio handling, visual eye rendering, and local voice command recognition based on a detailed behavior map.

Key Responsibilities & Scope of Work:

1. The Emotion Engine (State Machine)
You will implement a logic system that manages 27+ distinct emotional states (e.g., Happy, Bored, Focused, Sleeping, Comfort-Seeking).
Logic: The robot's state must change based on specific triggers (sound energy, touch, time of day, inactivity).
Decay & Stabilization: Emotions must naturally "decay" back to a neutral/calm state over time to ensure emotional safety.
Memory: Implement a short-term "mood memory" that temporarily adjusts the robot's tone based on recent interactions.
(An illustrative, non-binding sketch of this engine appears at the end of this brief.)

2. Visual & Audio Output System
Expressive Eyes: Render digital eye shapes on a display. You must map specific animations to emotions (e.g., "Blink High" for curiosity, "Squint" for concentrating, "Sleepy" for bedtime).
Audio Response: Trigger specific audio files (human voice recordings, not TTS) based on the current emotion and trigger event.
Lip Sync/Reaction: Eye movements must sync rhythmically during "Dance Mode" or story narration.

3. Sensor & Input Handling (Local Only)
Voice Command Recognition: Implement local, rule-based keyword detection for specific commands (e.g., "Play," "Story," "Goodnight"). Note: No conversational AI or cloud processing of speech is allowed.
Sound Energy Detection: The robot must detect "loudness" or "crying patterns" to trigger specific modes (e.g., Comfort Mode if crying is detected).
Touch Inputs: Differentiate between types of touch (e.g., tap vs. long hold) to trigger reactions such as "Curious" or "Sleep wake-up."

4. Content & Cloud Sync (Firebase/SD Card)
The system operates offline-first but must check for content updates via Firebase or read from an SD card.
Provide functionality to download/update "Audio Packs" (stories, rhymes, and themes such as 'Pirate' or 'Space').

5. Operational Modes
Implement logic for distinct operating modes:
Interactive: Play Mode (games), Talking Mode (storytelling), Listening Mode (active attention).
Passive: Idle/Presence Mode (subtle eye drifts) and Sleep Mode (low power, snoring/breathing).

Technical Requirements:
Experience: Embedded systems, firmware development, or game development (state machines).
Languages: C++, Python, or MicroPython (depending on agreed hardware, likely ESP32 based).
Skills: Finite State Machine (FSM) design; audio processing (FFT for sound levels, local keyword spotting); 2D graphics/animation (for eye rendering); IoT/cloud integration (Firebase) for file updates only.

Project Philosophy & Constraints (Strict):
No AI/Chatbots: The behavior must be predictable and rule-based.
Privacy First: No voice recording or storage allowed.
Safety: The logic must prevent "escalation loops" (the robot must de-escalate if a child is loud or chaotic).

Deliverables:
Source code for the Main Logic Controller (State Machine).
Implementation of the "Eye Reaction Mapping" (visuals matched to audio).
Integration of Local Voice Commands and Touch/Sound sensors.
Documentation of the code and update mechanism.
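
Illustrative Sketch (Non-Binding):
To show the kind of deterministic, rule-based Emotion Engine we have in mind, here is a minimal C++ sketch under stated assumptions. All names (EmotionEngine, Trigger, decayPerSecond_, etc.), the subset of states, and the decay rate are hypothetical examples invented for this brief, not part of the spec; the real engine would cover all 27+ states, per-emotion decay curves, and the full trigger map.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical subset of the 27+ emotional states described in the brief.
enum class Emotion { Calm, Happy, Curious, Bored, Comfort, Sleeping };

// Hypothetical trigger events derived from the sensor inputs described above.
enum class Trigger { LoudSound, CryingPattern, Tap, LongHold, Inactivity, BedtimeKeyword };

class EmotionEngine {
public:
    // Deterministic transition rules: current state + trigger -> next state.
    Emotion onTrigger(Trigger t) {
        switch (t) {
            case Trigger::CryingPattern:  state_ = Emotion::Comfort;  break;  // de-escalate, never mirror chaos
            case Trigger::LoudSound:                                          // wake gently if asleep, otherwise stay calm
                state_ = (state_ == Emotion::Sleeping) ? Emotion::Curious : Emotion::Calm;
                break;
            case Trigger::Tap:            state_ = Emotion::Curious;  break;
            case Trigger::LongHold:       state_ = Emotion::Happy;    break;
            case Trigger::Inactivity:     state_ = Emotion::Bored;    break;
            case Trigger::BedtimeKeyword: state_ = Emotion::Sleeping; break;
        }
        intensity_  = 1.0f;     // each trigger refreshes emotional intensity
        moodMemory_ = state_;   // short-term "mood memory" of the most recent interaction
        return state_;
    }

    // Called periodically (e.g., once per second) so emotions decay back to Calm.
    void tick(float dtSeconds) {
        intensity_ -= decayPerSecond_ * dtSeconds;
        if (intensity_ <= 0.0f && state_ != Emotion::Sleeping) {
            intensity_ = 0.0f;
            state_ = Emotion::Calm;   // stabilization: always settle on a safe, neutral state
        }
    }

    Emotion state() const { return state_; }
    Emotion recentMood() const { return moodMemory_; }  // could tint voice/eye choices

private:
    Emotion state_          = Emotion::Calm;
    Emotion moodMemory_     = Emotion::Calm;
    float   intensity_      = 0.0f;
    float   decayPerSecond_ = 0.05f;  // placeholder decay rate; tuned per emotion in practice
};

int main() {
    EmotionEngine engine;
    engine.onTrigger(Trigger::Tap);                  // child taps the robot -> Curious
    for (int s = 0; s < 30; ++s) engine.tick(1.0f);  // 30 seconds with no interaction
    std::printf("state after 30 s: %d\n", static_cast<int>(engine.state()));  // decayed back to Calm
    return 0;
}
```

The same pattern (explicit transition table plus timed decay toward a safe state) is what we expect for the full engine; the exact state set, decay rates, and trigger wiring will be agreed from the behavior map during the project.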