Careers
Our values define who we are – as individuals and as an organization. We believe that a team with aligned values, attitude, and culture can achieve great success.

We are growing and looking for talent at multiple levels. We look for alignment with our core values, strong fundamentals, and the ability to apply learning. A startup environment requires high ownership, drive, tenacity, curiosity, and the ability to challenge the status quo.
We are looking for Data Engineers, AI Engineers, Data Scientists, and Analytics Engineers with 4–16 years of experience in the modern data and AI stack. If this is of interest, please send your profile highlighting relevant experience to careers@arivueanalytics.com.
Data and AI Engineering
Overview
We are seeking a versatile Data & AI Engineer to build production-grade data engineering and AI-enabled solutions for enterprise clients. This is a high-ownership role requiring strong technical depth in data engineering combined with practical AI/ML implementation capabilities.
The ideal candidate brings deep expertise in at least one area (data engineering, AI/ML engineering, or analytics engineering) with working knowledge in adjacent areas and proven ability to learn rapidly. You will architect and deliver end-to-end data and AI solutions – from ingestion and transformation to model deployment and monitoring – working directly with client teams to solve complex operational challenges.
Key Responsibilities
- Design and implement scalable data pipelines for enterprise clients using modern cloud data platforms (Snowflake, BigQuery, Databricks, or equivalent).
- Build and maintain data transformation layers, ensuring data quality, governance, and performance optimization across analytical and operational systems.
- Develop and deploy AI/ML solutions including demand forecasting models, predictive analytics, LLM-powered applications, and knowledge management systems.
- Integrate AI capabilities into existing client systems through APIs, batch processes, and real-time inference pipelines.
- Implement RAG (Retrieval Augmented Generation) systems, vector databases, and semantic search for enterprise knowledge management use cases.
- Partner with analytics engineers to create robust data models that support both traditional BI and AI/ML workloads.
- Establish MLOps practices including model versioning, monitoring, retraining pipelines, and performance tracking.
- Conduct rigorous data validation and reconciliation to ensure accuracy across source systems, transformation layers, and consumption endpoints.
- Collaborate with leadership on technical discovery, translating business requirements into scalable data and AI architectures.
- Document technical designs, data lineage, model assumptions, and operational procedures to enable knowledge transfer and maintainability.
- Work directly with clients to understand requirements, provide progress updates, and ensure successful solution delivery.
- Stay current with emerging AI technologies and evaluate applicability to client use cases in operations, supply chain, and enterprise domains.
Qualifications
- Experience: 4–8 years building production data systems, with at least 2 years incorporating ML/AI components into business applications.
- Data Engineering: Strong experience building data pipelines using Python and SQL on cloud data platforms (Snowflake, BigQuery, Databricks, Redshift, or equivalent). Proficient in at least one cloud platform (AWS, Azure, GCP).
- Data Platforms: Hands-on experience with modern data warehouses or lakehouses for building analytical and ML workloads, including data modeling, transformation, and optimization.
- AI/ML Implementation: Practical experience deploying one or more of: time series forecasting, classification models, LLM applications (RAG, agents, semantic search), or computer vision systems.
- Programming: Strong Python skills for data processing, API development, and ML implementation. Comfortable with software engineering practices (testing, version control, CI/CD).
- Analytics Foundation: Working knowledge of BI tools and data modeling to collaborate effectively with analytics engineers and understand consumption patterns.
- Problem-solving: Ability to architect solutions for ambiguous business problems, make pragmatic technology choices, and troubleshoot complex distributed systems.
- Communication: Strong written and verbal communication skills with ability to explain technical concepts to business stakeholders and document technical architectures clearly.
- Global Delivery: Ability to work in a global delivery model with US-based teams, including flexible hours for collaboration and project coordination.
Preferred
- Experience with orchestration tools (Airflow, Prefect, dbt Cloud) for pipeline automation and workflow management.
- Experience with manufacturing data (MES; ERP systems such as SAP), supply chain systems, or operational data in CPG, QSR, beverage, or discrete manufacturing environments.
- Hands-on experience with vector databases (Pinecone, Weaviate, Chroma) and LLM frameworks (LangChain, LlamaIndex).
- Understanding of classical ML forecasting (ARIMA, Prophet, XGBoost) or optimization for operations use cases.
- MLOps platform experience (MLflow, Weights & Biases, SageMaker, Vertex AI).
- Understanding of discrete manufacturing dynamics (multi-level BOMs, variable lead times, derived demand).
- Experience with real-time data processing (Kafka, streaming architectures).
- Contributions to open-source projects or technical writing demonstrating depth in data/AI domains.
- Knowledge of version control systems and best practices for managing code in a team environment.
What We Offer
- High-impact role serving enterprise clients across industries, solving complex technical challenges with advanced Data and AI solutions.
- Work with modern data stack (cloud platforms, dbt, orchestration) and cutting-edge AI technologies (LLMs, vector databases, agents).
- Direct client engagement – collaborate on technical requirements, provide project updates, see your systems drive business decisions.
- Remote-first culture focused on outcomes.
Location
- Role is remote, anywhere in India.
To Apply
- Send us your resume highlighting relevant project experiences to careers@arivueanalytics.com.
AI Engineering
We are seeking a highly skilled Azure AI Engineer who will play a critical role in building Retrieval-Augmented Generation (RAG) systems that transform how organizations access and leverage enterprise knowledge. This is a formative role at Arivue and requires high ownership, technical depth, and a pragmatic approach to solving real business problems.
The ideal candidate will bring expertise across Azure AI services (AI Studio, AI Search, OpenAI), prompt engineering, and RAG architecture, with the ability to bridge domain context and cutting-edge AI technology. You will be responsible for the end-to-end lifecycle of AI solution delivery, from requirements gathering to production deployment and continuous optimization.
Key Responsibilities
- Lead RAG architecture and implementation, designing retrieval-augmented generation pipelines using Azure AI Studio, Prompt Flow, and Azure OpenAI that enable users to find critical information in seconds instead of minutes.
- Configure and optimize Azure AI Search with vector embeddings, hybrid search, and custom ranking to ensure high retrieval precision for technical documents.
- Engineer prompts and system instructions that understand domain terminology, context, and hierarchies, ensuring answers are accurate, grounded in source documents, and appropriate for technical audiences.
- Build domain-specific context layers, designing data models for hierarchies, taxonomies, and version control that enrich retrieval with operational context.
- Integrate Azure Document Intelligence to parse complex technical PDFs, scanned documents, tables, and diagrams, ensuring all content is properly indexed and searchable.
- Develop evaluation frameworks and quality metrics (precision, recall, answer relevance, groundedness, latency) to continuously measure and improve RAG system performance based on user feedback and production data.
- Deploy AI systems to production on Azure with proper monitoring (Application Insights), cost optimization (caching strategies, model selection), and auto-scaling to handle large query volumes with <2 second response times.
- Collaborate with frontend developers to design APIs that serve AI-generated responses, source citations, confidence scores, and feedback mechanisms to mobile-optimized user interfaces.
- Work directly with subject matter experts to understand operational workflows and translate them into effective AI solutions.
- Build modular, scalable, and reusable AI platform components (RAG templates, evaluation datasets, prompt libraries).
- Maintain current knowledge of AI/LLM models and methodologies, including fine-tuning and agentic RAG.
- Document architecture decisions, prompt strategies, and best practices to build organizational knowledge and enable team scaling.
- Architect the data model: Design and build optimized data models, applying best practices for cloud-native data architectures to ensure high performance and scalability.
- Apply best practices: Implement and enforce data quality checks, validation rules, security, and governance policies throughout the program lifecycle, grounded in a strong understanding of AI and data governance, security best practices, and cloud architecture principles.
- Mentor junior team members: Provide technical guidance and leadership to other engineers on the team.
Qualifications
- 5+ years of overall Data / AI / ML experience, including at least one full-lifecycle engagement focused on LLMs, RAG systems, and production deployments in enterprise or startup environments.
- Strong proficiency in Azure AI Studio, Azure OpenAI (GPT-4, embeddings), Azure AI Search (vector search, hybrid retrieval), and Azure Document Intelligence. You should be comfortable navigating Azure portal, CLI, and designing AI architectures.
- Deep understanding of retrieval-augmented generation patterns — document chunking strategies, embedding models, vector similarity, semantic search, prompt engineering for context injection, hallucination mitigation.
- Python proficiency: Writing production-quality code (not just notebooks) for data processing, API development (FastAPI, Flask), and integration with Azure services. Experience with async programming and error handling.
- Prompt engineering: Demonstrated ability to design effective system prompts, few-shot examples, chain-of-thought reasoning, and output formatting for domain-specific applications.
- API development: Building RESTful APIs that integrate LLMs, handle authentication (Azure AD, OAuth), implement rate limiting, and provide structured responses with citations.
- Evaluation mindset: Experience measuring AI system quality through metrics (MRR, NDCG, answer accuracy) and A/B testing prompt variations and retrieval parameters.
- Cloud deployment: Deploying applications to Azure (App Service, Functions, Containers) with CI/CD pipelines, monitoring, and production troubleshooting skills.
- Communication skills: Strong ability to explain complex AI concepts to non-technical stakeholders and translate business requirements into technical solutions.
- Collaboration: Proven track record of working with cross-functional teams (frontend developers, cloud engineers, domain experts, client stakeholders).
- Problem-solving: A strong ability to analyze complex data problems and devise effective, scalable solutions.
Preferred
- Experience with manufacturing, industrial operations, or technical documentation domains.
- Familiarity with multiple LLM providers and frameworks (OpenAI, Anthropic, LangChain, LlamaIndex, Semantic Kernel).
- Azure certifications: AI-102 (Azure AI Engineer Associate), AZ-900 (Azure Fundamentals), or similar.
- Fine-tuning experience for domain adaptation (not required for initial projects).
- Knowledge of other Azure services: Cognitive Services, Speech Services, Form Recognizer.
- Database design skills (SQL, CosmosDB) for metadata and manufacturing context modeling.
- Data engineering exposure (ETL pipelines, Databricks, data quality) for document processing workflows.
- Experience with vector databases beyond Azure AI Search (Pinecone, Weaviate, Qdrant).
- Broader understanding of Data and AI foundations.
- Knowledge of version control systems and CI/CD pipelines for data workflows.
What We Offer
- A dynamic role in a forward-thinking startup focused on creating advanced AI solutions.
- Opportunities for professional growth and collaboration on cutting-edge AI technologies.
- Direct client engagement – Collaborate with client executives, conduct discovery workshops, see your AI systems used daily by business users.
- Remote-first culture with flexible hours focused on outcomes, not face-time.
Location
- Role is remote, anywhere in India.
If this is you, send us your resume highlighting relevant AI / RAG project experiences to careers@arivueanalytics.com.
Please include:
- Links to GitHub/portfolio with RAG implementations or LLM applications.
- A brief description of your most impactful AI project, your role in it, and the business outcome it achieved.
- Your experience with Azure AI services specifically (AI Studio, AI Search, OpenAI).
Analytics Engineering
We are seeking a highly capable Analytics Engineer to convert complex data into clear, decision-ready intelligence. This is a high-ownership role requiring intellectual curiosity, structured problem-solving, and strong execution discipline.
The ideal candidate brings broad expertise across analytics strategy, data modeling, visualization, metric governance, and data validation. You will operate across the full analytics lifecycle – from KPI definition and semantic modeling to data quality assurance and stakeholder adoption – bridging business objectives with technical implementation.
Key Responsibilities
- Lead the design and delivery of scalable analytics solutions for enterprise clients, including standardized KPI frameworks and executive dashboards that enable strategic decision-making.
- Develop and maintain robust semantic models and curated data marts to ensure consistency, reusability, and clarity of business metrics across analytics tools.
- Define, document, and govern core business metrics, ensuring alignment between business logic and underlying data structures.
- Perform rigorous data validation and reconciliation using SQL and other analytical techniques to ensure accuracy between reporting layers and source systems.
- Partner with data engineering teams to optimize data pipelines, refine data models, and enhance performance, scalability, and reliability of analytics assets.
- Apply structured analytical methods to generate insights and communicate findings clearly through data storytelling tailored to diverse stakeholders.
- Drive cross-functional adoption of analytics outputs by collaborating closely with internal and customer teams across product, marketing, finance, and operations.
- Establish quality assurance frameworks, automated checks, and monitoring mechanisms to ensure data freshness, completeness, and accuracy.
- Maintain comprehensive documentation of metric logic, data lineage, assumptions, and analytical methodologies to promote transparency and reproducibility.
- Contribute to broader data governance practices, including metric standardization, access control, and consistency across reporting environments.
Qualifications
- Experience: 4–8 years of progressive experience in analytics, business intelligence, or data strategy roles within enterprise, consulting, or product companies.
- Data Proficiency: Strong SQL skills with demonstrated capability in data validation, reconciliation, and metric construction.
- Analytical Tools: Hands-on experience with modern BI and analytics platforms (at least one of Omni, Looker, Power BI, Tableau, or similar), including dashboard design and semantic layer optimization.
- Data Modeling: Experience designing semantic layers, dimensional models, or curated data marts for self-service analytics.
- Data Engineering: Working knowledge of modern data stack including ELT/ETL pipelines, dbt, data quality frameworks on any modern data platforms (e.g., BigQuery, Snowflake, Databricks, Redshift, or equivalent).
- Business Acumen: Ability to translate business objectives into measurable KPIs and analytical frameworks.
- Communication: Strong written and verbal communication skills with the ability to explain technical concepts to non-technical stakeholders.
- Collaboration: Demonstrated experience working cross-functionally with engineering, analytics, and business teams.
- Problem-solving: A strong ability to systematically troubleshoot and resolve issues in complex, distributed production systems.
- Global Delivery: Ability to work in a global delivery model with US-based teams, including flexible hours for collaboration and project coordination.
Preferred
- Experience in customer analytics, product analytics, marketing analytics, demand planning, or operational intelligence use cases.
- Familiarity with data governance frameworks and centralized metric layers.
- Certifications in analytics, BI platforms, or cloud data technologies.
- Foundational understanding of data engineering concepts, machine learning workflows, or AI-enabled analytics.
- Knowledge of version control systems and best practices for managing code in a team environment.
What We Offer
- High-impact role serving enterprise clients across industries, building production data and AI solutions that drive operational decisions.
- Work with modern data stack (cloud platforms, dbt, modern BI tools) and emerging AI technologies.
- Direct client engagement – collaborate with client executives, participate in workshops, see your AI systems used daily by business users.
- Remote-first culture focused on outcomes.
Location
- Role is remote, anywhere in India.
To Apply
- Send us your resume highlighting relevant project experiences to careers@arivueanalytics.com.
Include:
- 2–3 analytics projects you have delivered end-to-end.
- BI tools and data platforms you have worked with.
- Any experience working with US-based clients or distributed teams.
- Examples of client-facing or consulting delivery work.
Data Scientist
We are seeking a highly analytical and technically skilled Data Scientist/Engineer to join our team. In this role, you will design, develop, and deploy the core intelligence for a product that will be used directly by our clients to make critical supply chain decisions.
This role requires a unique blend of deep forecasting expertise, robust software engineering skills, and a strong product sense. You will be responsible for translating complex data and models into tangible business value for our customers, owning the solution from initial data ingestion to the final, API-driven prediction.
Key Responsibilities
- Client Collaboration & Discovery: Work directly with customers to understand their specific supply chain challenges, data sources, and business objectives.
- Production-Grade Data Pipelines: Architect and build highly reliable and scalable ETL/ELT pipelines for ingesting and processing diverse and often messy customer data.
- Advanced Forecasting & Explainability:
- Develop and deploy statistical and ML-based forecasting models tailored to customer needs.
- Apply explainability and causal methods to clarify forecast drivers.
- Produce probabilistic forecasts to quantify uncertainty and support risk-aware decisions.
- API-First Model Deployment: Design and deploy forecasting models as robust, low-latency, and well-documented RESTful APIs for easy integration into customer systems.
- Rigorous Model Validation: Implement a rigorous backtesting framework and ongoing monitoring to ensure sustained accuracy and reliability in a live production environment.
- Product Mindset: Continuously think about the end-user experience. Contribute to the product roadmap by identifying new features and opportunities based on your deep understanding of the data and customer interactions.
Qualifications
- Experience: 5+ years of hands-on experience in a data science role, with demonstrable experience building and deploying ML models that served live traffic.
- Core Technical Skills:
- Expert-level proficiency in Python and its data science ecosystem (pandas, NumPy, scikit-learn).
- Strong SQL skills and experience with data pipeline orchestration (e.g., Apache Airflow, Prefect).
- Proven experience building and deploying robust APIs (e.g., using FastAPI, Flask) including knowledge of best practices for error handling, logging, and testing.
- Forecasting & Modeling Expertise:
- Deep practical knowledge of time-series forecasting, feature engineering, and a range of modeling techniques (from ARIMA to Gradient Boosting).
- Experience with explainability libraries (e.g., SHAP, LIME) and an understanding of how to interpret model outputs for business stakeholders.
- Cloud & MLOps:
- Hands-on experience with a major cloud platform (Azure, AWS or GCP).
- Familiarity with containerization (Docker) and model deployment/serving patterns.
- Client-Facing Skills (Essential):
- Excellent communication and presentation skills, with the ability to explain complex technical topics to non-technical audiences.
- A consultative mindset with a strong ability to listen, ask insightful questions, and translate business needs into technical requirements.
Preferred
- Experience in a client-facing role (e.g., solutions architect, technical consultant).
- Direct experience in the supply chain, retail, or CPG industries.
- Experience with probabilistic forecasting libraries (e.g., sktime, flow-forecast) and quantifying uncertainty.
- Experience with large-scale data processing using Spark or Databricks.
- Broader understanding of Data and AI foundations.
- Knowledge of version control systems and CI/CD pipelines for data workflows.
What We Offer
- A dynamic role in a forward-thinking startup focused on creating advanced AI solutions.
- Opportunities for professional growth and collaboration on cutting-edge AI technologies.
- Direct client engagement – Collaborate with client executives, conduct discovery workshops, see your AI systems used daily by business users.
- Remote-first culture with flexible hours focused on outcomes, not face-time.
Location
- Role is remote, anywhere in India.
If this is you, send us your resume highlighting relevant experiences to careers@arivueanalytics.com.
ML Ops
We are seeking a highly skilled MLOps Engineer who will play a critical role in building the operational backbone for our data science team. You will be responsible for transforming how the organization deploys, monitors, and manages machine learning models at scale. This is a foundational role that requires high ownership, deep technical expertise in cloud infrastructure and automation, and a pragmatic approach to building reliable, scalable systems.
The ideal candidate will bring expertise across AWS and/or Databricks, CI/CD for machine learning, and infrastructure-as-code. You will own the end-to-end lifecycle of our production models, from deployment and monitoring to continuous optimization, enabling our data scientists to focus on building next-generation algorithms for personalization, loyalty, and marketing attribution.
Key Responsibilities
- Lead the architecture and implementation of CI/CD pipelines for machine learning, using tools like Jenkins, GitLab CI, or native cloud services to automate model training, testing, and deployment.
- Configure and optimize the ML platform on AWS (SageMaker, S3, Lambda) and/or Databricks to ensure efficient, scalable, and cost-effective model training and serving.
- Engineer robust monitoring and alerting systems that track model performance, data drift, and prediction latency, ensuring production models are always performing optimally.
- Build a centralized model registry and governance framework (e.g., using MLflow) for versioning, tracking, and managing the lifecycle of all machine learning models.
- Integrate data validation and quality checks into ML pipelines to ensure the integrity of data used for training and inference.
- Develop evaluation frameworks and quality metrics (accuracy, precision/recall, latency, business-specific KPIs) to continuously measure and improve the performance of production models.
- Deploy ML systems to production on AWS/Databricks with proper monitoring (CloudWatch, Application Insights), cost optimization, and auto-scaling to handle high-throughput prediction requests reliably.
- Collaborate with data scientists to package their models and code for production, and with application developers to design APIs that serve model predictions to end-user applications.
- Work directly with the data science team to understand their workflows and build tools and automation that accelerate the path from research to production.
- Build modular, scalable, and reusable platform components (e.g., pipeline templates, feature store integrations, monitoring dashboards).
- Document architecture decisions, deployment processes, and best practices to build organizational knowledge and enable team scaling.
- Implement and enforce best practices for AI and Data governance, security, and cloud architecture principles across all machine learning systems.
Qualifications
- 3–8 years of overall experience in an MLOps role, with full-lifecycle experience deploying and managing machine learning models in a production environment.
- Strong proficiency in cloud platforms, preferably Azure / AWS and Databricks. You should be comfortable navigating the cloud portal, CLI, and designing production-grade infrastructure.
- Deep understanding of the MLOps lifecycle — CI/CD for ML, experiment tracking, model versioning, data and concept drift monitoring, and automated retraining.
- Python proficiency: Writing production-quality code (not just notebooks) for automation, API development (FastAPI, Flask), and integration with cloud services.
- Demonstrated ability to design and implement robust automation pipelines for infrastructure and model deployment.
- API development: Experience building or deploying RESTful APIs for model serving that handle authentication, logging, and monitoring.
- Evaluation mindset: Experience implementing systems to measure and monitor ML system quality through technical metrics (latency, error rates) and model performance metrics (accuracy, drift).
- Cloud deployment: Proven experience deploying applications and infrastructure to AWS or a similar cloud platform using Infrastructure as Code (Terraform, CloudFormation) and CI/CD pipelines.
- Communication skills: Strong ability to explain complex technical concepts to data scientists and other stakeholders and translate their needs into technical solutions.
- Collaboration: Proven track record of working with cross-functional teams (data science, software engineering, product).
- Problem-solving: A strong ability to systematically troubleshoot and resolve issues in complex, distributed production systems.
Preferred
- Experience in a CPG, Retail, QSR, e-commerce, or other consumer-facing industry.
- Familiarity with multiple ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn, XGBoost).
- Direct experience with MLflow for experiment tracking and model registry management.
- AWS or Databricks certifications.
- Deep experience with large-scale data processing using Apache Spark.
- Data engineering exposure (ETL/ELT pipelines, data warehousing, data quality frameworks).
- Knowledge of version control systems and best practices for managing code in a team environment.
What We Offer
- A dynamic role in a forward-thinking startup focused on creating advanced AI solutions.
- Opportunities for professional growth and collaboration on cutting-edge AI technologies.
- Direct client engagement – Collaborate with client executives, conduct discovery workshops, see your AI systems used daily by business users.
- Remote-first culture focused on outcomes.
Location
- Role is remote, anywhere in India.
If this is you, send us your resume highlighting relevant project experiences to careers@arivueanalytics.com.
