I’m spinning up several research-driven initiatives that push the limits of AI, GenAI, LLMs and truly agentic systems. The work is project-based: you and I will scope each sprint together, then you’ll dive in hands-on with Python, solid prompt-engineering practices and thoughtful model integration to turn hypotheses into validated results.

What I need today
• Research and development horsepower focused squarely on AI, GenAI and agentic-AI tasks.
• Practical execution: building or fine-tuning models, crafting or refining algorithms, running controlled experiments, collecting metrics and iterating fast.
• Integration of those outcomes into existing or greenfield workflows, so the research is not just theoretical.

A good fit looks like this
• You’ve shipped or published work with large language models and know the nuances of prompt design, retrieval-augmented generation, chain-of-thought prompting and similar techniques.
• You’re comfortable architecting or extending agentic frameworks that can plan, reason and act autonomously.
• Clean, reproducible Python is your default; tools such as LangChain, Hugging Face, OpenAI / Azure OpenAI, vector DBs and orchestration frameworks are familiar territory.

Deliverables for each engagement
1. A clearly documented notebook or repo with runnable code.
2. A short README detailing model choices, experiment setup, results and next-step recommendations.
3. If integration is in scope, an API or service hook demonstrating the workflow in action.

I value transparent communication, rapid experimentation and well-reasoned technical decisions. If you thrive on exploring the edge of what LLMs and agentic architectures can do, let’s talk about the first milestone and get started.
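
To give a concrete sense of the working style I have in mind, here is a minimal, self-contained sketch of the retrieval-augmented generation pattern mentioned above. It is illustrative only: a toy bag-of-words retriever stands in for a real vector DB, it stops at prompt assembly rather than calling an LLM, and every function and variable name is hypothetical rather than taken from any specific library.

```python
# Minimal RAG sketch: retrieve relevant context, then assemble the prompt.
# Toy components throughout -- a real project would use embeddings from a
# model, a vector DB for retrieval, and an actual LLM call at the end.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words token-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt an LLM would receive."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Agentic systems plan, reason and act autonomously.",
    "Chain-of-thought prompting elicits step-by-step reasoning.",
    "Vector databases store embeddings for similarity search.",
]
print(build_prompt("How do agentic systems behave?", docs))
```

The deliverables I described would wrap this kind of logic in a documented notebook or repo, with the toy pieces replaced by real model and retrieval components and the experiment setup recorded in the README.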