I need a production-ready search stack that starts with an ETL flow pulling exclusively from our internal PostgreSQL databases. The pipeline must ingest and transform 38 000+ B2B category records and 5 000–10 000 company profiles, then run cleaning, vectorization, and enrichment steps so every record is categorized and stored in a pgvector-enabled schema.

Once the data is in place, a separate microservice should expose a REST API that supports hybrid search: dense vectors (OpenAI text-embedding-3-small) combined with BM25 and blended with RRF scoring. Results have to work equally well in Hungarian and English; huspacy, spaCy, and OpenAI are the preferred tools for language handling and any fallback generation.

I expect the codebase in Python 3.10+, organised as two deployable units:
• ETL package that connects to the existing tables, performs the vector and category enrichment, and writes into PostgreSQL/pgvector with idempotent reruns.
• FastAPI microservice offering endpoints for single-query search and batch queries, with Docker files and a short README explaining environment variables and health checks.

Acceptance will be based on end-to-end tests: I run the ETL, hit /search with a Hungarian and an English query, and receive ranked results that include both BM25 and vector hits blended by RRF. Detailed RFQ attached; two short sketches follow below to make the expectations concrete.
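
To make the acceptance criterion concrete, here is a minimal sketch of the kind of RRF blending I expect behind /search. The function name rrf_blend, the input lists, and the k=60 constant are illustrative assumptions, not part of any existing code; any equivalent implementation is fine as long as both BM25 and vector hits contribute to the final ranking.

```python
# Minimal sketch of Reciprocal Rank Fusion (RRF) over two ranked result lists.
# bm25_ids and vector_ids are assumed to be document IDs already ordered by
# their respective rankers; k=60 is the commonly used RRF constant.
from collections import defaultdict


def rrf_blend(bm25_ids: list[str], vector_ids: list[str], k: int = 60) -> list[tuple[str, float]]:
    """Blend two ranked ID lists: score(d) = sum over rankings of 1 / (k + rank)."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in (bm25_ids, vector_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


# Example: a hit present in both lists ("acme") outranks single-list hits.
print(rrf_blend(["acme", "beta"], ["gamma", "acme"]))
```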
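
For the ETL's idempotent reruns, something along these lines is what I have in mind. The table name company_profiles, its columns, the psycopg 3 driver, and the 1536-dimension embedding are assumptions for illustration only; the real schema will come from the attached RFQ.

```python
# Sketch of an idempotent write into a pgvector-enabled table (assumed schema:
# company_profiles with a primary key on company_id and an
# "embedding vector(1536)" column for text-embedding-3-small vectors).
import os

import psycopg

UPSERT_SQL = """
    INSERT INTO company_profiles (company_id, name, category, embedding)
    VALUES (%(company_id)s, %(name)s, %(category)s, %(embedding)s::vector)
    ON CONFLICT (company_id) DO UPDATE
    SET name = EXCLUDED.name,
        category = EXCLUDED.category,
        embedding = EXCLUDED.embedding;
"""


def upsert_profile(conn: psycopg.Connection, record: dict, embedding: list[float]) -> None:
    """Re-running the ETL overwrites the same row instead of duplicating it."""
    params = {
        "company_id": record["company_id"],
        "name": record["name"],
        "category": record["category"],
        # pgvector accepts the bracketed text form, cast via ::vector above.
        "embedding": "[" + ",".join(f"{x:.6f}" for x in embedding) + "]",
    }
    with conn.cursor() as cur:
        cur.execute(UPSERT_SQL, params)


if __name__ == "__main__":
    with psycopg.connect(os.environ["DATABASE_URL"]) as conn:
        upsert_profile(conn, {"company_id": 1, "name": "Acme Kft.", "category": "B2B"}, [0.0] * 1536)
        conn.commit()
```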