I need a C++/OpenCV specialist to engineer a cross-platform mobile application for users with low vision. The app streams a USB camera feed with minimal latency and offers zoom, contrast, brightness, and color-mode controls. The build targets both iOS and Android; on the Android side, the live feed must be driven through the Camera2 API to squeeze every millisecond out of the sensor pipeline.

Beyond the standard controls, the heart of the project is an adaptive layer powered by reinforcement learning. It must provide real-time feedback and adjustments: the AI automatically changes settings according to learned user preferences, so the image continuously tunes itself to the individual's viewing comfort without manual tinkering. The app should also be operable via a Bluetooth controller. Solid experience integrating C++ vision code with mobile UIs, as well as handling external controller input for one-handed operation, is essential.

Key deliverables
• Compiled OpenCV/C++ core delivering <40 ms glass-to-glass latency on recent flagship phones
• Camera2 API module with manual exposure, focus, and frame-rate control hooks
• Reinforcement-learning engine that logs user actions, learns optimal states, and applies them live
• Accessible UI/UX with large-print toggles, voice prompts, and haptic feedback
• Xcode and Android Studio projects, plus build scripts, ready for store submission

Acceptance criteria
• Latency, battery use, and thermal metrics meet targets across at least three test devices per platform
• Adaptive AI passes a predefined usability test showing a 30% reduction in manual adjustments after 24 h of use
• Codebase is clean, documented, and compiles without warnings on both platforms

If your background matches the above and you thrive on squeezing performance out of camera pipelines while weaving in cutting-edge RL techniques, let's talk.
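To make the scope of the image-control core concrete, here is a minimal sketch (standard C++ only, so it compiles anywhere) of the per-pixel brightness/contrast transform that core would apply to each frame. In the actual app this would be a single OpenCV call such as `cv::Mat::convertTo` with `alpha`/`beta` scaling on the GPU path; the function name and buffer layout below are illustrative assumptions, not part of the brief.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Apply out = clamp(alpha * in + beta, 0, 255) to an 8-bit pixel buffer.
// alpha controls contrast (>1 increases it), beta controls brightness.
std::vector<std::uint8_t> adjust(const std::vector<std::uint8_t>& in,
                                 double alpha, double beta) {
    std::vector<std::uint8_t> out;
    out.reserve(in.size());
    for (std::uint8_t px : in) {
        double v = alpha * static_cast<double>(px) + beta;
        out.push_back(static_cast<std::uint8_t>(std::clamp(v, 0.0, 255.0)));
    }
    return out;
}
```

A color-inversion mode is the same loop with `255 - px`, and digital zoom is a center crop plus resize; all of these must run inside the <40 ms glass-to-glass budget.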
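The "logs user actions, learns optimal states, applies them live" loop can be sketched, in its simplest form, as a bandit problem: each settings preset is an arm, and a session where the user makes no manual correction counts as reward 1 (a correction counts as 0). The preset count, reward scheme, and epsilon-greedy strategy below are my assumptions for illustration; the production RL engine would be richer.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <random>

// Epsilon-greedy preset learner: tracks a running mean reward per preset,
// mostly exploits the best-known preset, occasionally explores others.
struct PresetLearner {
    // e.g. normal, high-contrast, inverted, yellow-on-black (illustrative)
    static constexpr std::size_t kPresets = 4;
    std::array<double, kPresets> value{};  // running mean reward per preset
    std::array<int, kPresets> count{};     // times each preset was tried
    std::mt19937 rng{42};
    double epsilon = 0.1;                  // exploration probability

    // Pick a preset: explore with probability epsilon, else exploit argmax.
    std::size_t choose() {
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        if (coin(rng) < epsilon) {
            std::uniform_int_distribution<std::size_t> pick(0, kPresets - 1);
            return pick(rng);
        }
        std::size_t best = 0;
        for (std::size_t i = 1; i < kPresets; ++i)
            if (value[i] > value[best]) best = i;
        return best;
    }

    // Log an outcome: reward 1.0 if the user accepted the preset as-is,
    // 0.0 if they reached for a manual adjustment. Incremental mean update.
    void update(std::size_t preset, double reward) {
        ++count[preset];
        value[preset] += (reward - value[preset]) / count[preset];
    }
};
```

The 30%-fewer-manual-adjustments acceptance criterion maps directly onto this reward signal: as `value` converges on the user's preferred preset, `choose()` serves it up front and the correction rate drops.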