About the Role:
I'm looking for a detail-oriented data researcher and web-scraping specialist to build and maintain an automated workflow that collects new remote job postings from Indeed.com every 24 hours. Your work will power a live database for job seekers, so accuracy, reliability, and clean data formatting are essential.

Responsibilities:
- Scrape remote job listings posted within the last 24 hours on Indeed.com.
- Identify and flag listings carrying the green "Urgently Hiring" label.
- Extract and structure the following data fields for each listing:
  - Company name
  - Job title / position
  - Salary (if listed)
  - Urgent hire status (Yes/No)
  - Date posted
  - Application link
  - Experience level
  - Education required
- Automatically push the cleaned dataset into Airtable every 24 hours.
- Maintain a consistent schema and handle missing fields gracefully (e.g., mark them "N/A").
- Ensure the scraping process complies with Indeed's terms of service and avoids IP bans or blocks.

Requirements:
- Proven web-scraping experience (e.g., Python, Apify, Playwright, Puppeteer, or similar tools).
- Familiarity with workflow automation and APIs (Airtable API experience is a big plus).
- Strong attention to detail and the ability to deliver accurate, clean data.
- Ability to troubleshoot scraping issues (e.g., CAPTCHAs, rate limits, dynamic content).
- Optional but helpful: experience scheduling scrapers via cron jobs or cloud platforms.

Deliverables:
- A functioning scraper script or Apify actor that runs daily without manual intervention.
- A live Airtable base that updates every 24 hours with fresh listings.
- Documentation or a short Loom walkthrough explaining how the system works.

Contract Details:
- Remote, flexible hours
- One-time build plus a potential ongoing maintenance contract
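To make the expected data handling concrete: the post asks for a consistent schema, an "Urgently Hiring" Yes/No flag, "N/A" for missing fields, and a daily push to Airtable. Below is a minimal Python sketch of that shape. The field names, base/table identifiers, and helper names are placeholders, not a prescribed implementation; the Airtable endpoint and 10-records-per-request batching reflect the public Airtable REST API.

```python
import json
import urllib.request

# Hypothetical schema mirroring the fields listed in this post.
FIELDS = [
    "Company name", "Job title", "Salary", "Urgent hire status",
    "Date posted", "Application link", "Experience level", "Education required",
]

def is_urgently_hiring(card_text: str) -> str:
    """Return "Yes" if a listing card's text contains the "Urgently hiring" label."""
    return "Yes" if "urgently hiring" in card_text.lower() else "No"

def clean_listing(raw: dict) -> dict:
    """Map a raw scraped listing onto the schema, marking missing fields "N/A"."""
    return {field: raw.get(field) or "N/A" for field in FIELDS}

def push_to_airtable(records: list, base_id: str, table: str, api_key: str) -> None:
    """POST cleaned records to Airtable, batching 10 per request (the API limit)."""
    url = f"https://api.airtable.com/v0/{base_id}/{table}"
    for i in range(0, len(records), 10):
        batch = {"records": [{"fields": r} for r in records[i:i + 10]]}
        req = urllib.request.Request(
            url,
            data=json.dumps(batch).encode(),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        urllib.request.urlopen(req)
```

A daily run would scrape the listings, map each through `clean_listing` (setting "Urgent hire status" via `is_urgently_hiring`), and hand the result to `push_to_airtable`.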
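For the "runs daily without manual intervention" deliverable, the simplest option mentioned in the post is a cron job. A sketch of a crontab entry (script path, interpreter path, and log location are placeholders):

```shell
# Run the scraper every day at 06:00 UTC; append output to a log for troubleshooting.
0 6 * * * /usr/bin/python3 /opt/scraper/indeed_scraper.py >> /var/log/indeed_scraper.log 2>&1
```

A cloud scheduler (e.g., an Apify scheduled actor run) would serve the same role without a dedicated server.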