We are partnering with a luxury retail group to find talented professionals across several key areas of data engineering, analytics, and AI. If you're passionate about building scalable data solutions, developing intuitive analytics applications, or driving insights through advanced modelling, we'd love to hear from you.
1. Data Engineer / Data Analyst
Job Duties
- Design, build, and maintain large‑scale ETL/ELT data pipelines on Databricks using PySpark.
- Develop production‑ready data models, optimising performance, scalability, and reliability.
- Transform complex business requirements into analytical datasets supporting finance, retail, operations, and leadership reporting.
- Monitor pipeline performance, troubleshoot issues, and ensure high data availability and quality (99%+ uptime).
- Collaborate closely with BI, data science, and business teams to deliver analytics-ready datasets.
- Implement data governance, documentation, version control, and engineering best practices.
- Build automated workflows for ingestion, transformation, and validation of high‑volume datasets.
Candidate Requirements
- 3-8+ years in data engineering, analytics, or BI engineering roles.
- Strong expertise in Databricks, PySpark, SQL, and Delta Lake architectures (see the pipeline sketch after this list).
- Hands‑on experience designing end‑to‑end data pipelines in cloud environments (Azure preferred).
- Strong understanding of data modelling, performance optimisation, and pipeline orchestration.
- Experience working with large datasets (100M+ rows).
- Ability to translate requirements into scalable technical solutions.
- Familiarity with Microsoft Fabric, Azure Data Factory, or other orchestration tools is a plus.
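To give a flavour of the day‑to‑day work, here is a minimal sketch of a bronze‑to‑silver ELT step on Databricks with PySpark and Delta Lake. Every table and column name (raw.sales_events, silver.sales_events, event_id, event_ts, amount) is a placeholder, not the client's actual schema.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks `spark` is provided; getOrCreate() keeps the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

raw = spark.read.table("raw.sales_events")  # placeholder bronze table

cleaned = (
    raw
    .dropDuplicates(["event_id"])                     # basic data-quality rule
    .filter(F.col("amount").isNotNull())              # drop incomplete records
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

(
    cleaned.write
    .format("delta")              # Delta Lake, as named in the requirements
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("silver.sales_events")
)
```

In production a step like this would typically run under an orchestrator (Databricks Workflows, ADF, or similar) with monitoring and data‑quality checks attached, in line with the duties above.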
2. Databricks Apps Developer (Streamlit)
Job Duties
- Build full‑stack analytical applications using Streamlit, integrating Databricks compute and data sources.
- Develop backend APIs, transformations, and business logic to support app functionality.
- Design polished, user-friendly UI/UX for both business and technical users.
- Deploy and maintain applications with CI/CD and containerisation when applicable.
- Implement secure authentication, data access controls, and performance optimisation.
- Partner with BI, data engineering, and product teams to deliver fast, stable internal tools.
Candidate Requirements
- Strong hands-on experience with Streamlit for production‑grade applications (see the app sketch after this list).
- Solid Python development background with clean, modular coding practices.
- Good understanding of Databricks notebooks, clusters, and data access layers.
- Experience integrating APIs, cloud services, and real‑time or batch datasets.
- Ability to build both backend logic and front‑end layouts.
- Nice to have: Docker, GitHub Actions, deployment on Azure/GCP/AWS.
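As a rough illustration of the role, the sketch below wires a Streamlit front end to a Databricks SQL warehouse via the databricks-sql-connector package. The environment variables, table, and region values are placeholders; a real app would also add authentication and proper query parameterisation.

```python
import os

import pandas as pd
import streamlit as st
from databricks import sql  # pip install databricks-sql-connector

REGIONS = ["EMEA", "APAC", "AMER"]  # fixed whitelist, so the f-string below is safe

@st.cache_data(ttl=600)  # cache query results for 10 minutes
def load_sales(region: str) -> pd.DataFrame:
    with sql.connect(
        server_hostname=os.environ["DATABRICKS_HOST"],  # placeholder env vars
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as conn, conn.cursor() as cur:
        cur.execute(
            f"SELECT event_date, amount FROM silver.sales_events WHERE region = '{region}'"
        )
        return pd.DataFrame(cur.fetchall(), columns=[c[0] for c in cur.description])

st.title("Sales Explorer")
region = st.selectbox("Region", REGIONS)
st.dataframe(load_sales(region))
```

st.cache_data keeps the warehouse from being re‑queried on every widget interaction, which matters for the "fast, stable internal tools" goal above.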
3. BI Modeler (Power BI / Databricks / Google Analytics)
Job Duties
- Develop robust BI models and datasets using Databricks, Google Analytics, and Power BI.
- Build dashboards and visualisation layers that support executive decision-making across finance, retail, and operations.
- Optimise data models for speed, scalability, and user adoption (e.g., DAX optimisation, star-schema modelling).
- Collaborate with business stakeholders to convert insights into measurable business actions.
- Manage BI governance, workspace management, deployment pipelines, and version control.
- Conduct training sessions to help teams adopt self‑service analytics.
Candidate Requirements
- Strong experience with Power BI, DAX, Databricks SQL, and Google Analytics.
- Proven ability to design and optimise semantic models and enterprise dashboards (see the star-schema sketch after this list).
- Solid understanding of finance or supply-chain metrics is a plus.
- Experience building KPI frameworks and analytics for senior leadership.
- Strong sense of UI/UX for data storytelling and business clarity.
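For concreteness, here is a minimal PySpark sketch of the star-schema side of this role: shaping a fact table with surrogate keys so the Power BI semantic model stays simple and fast. Table and column names (silver.sales_events, silver.stores, gold.fact_sales) are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

sales = spark.read.table("silver.sales_events")   # placeholder tables
stores = spark.read.table("silver.stores")

fact_sales = (
    sales
    # Integer date key (yyyymmdd) to relate to a dim_date table in Power BI.
    .withColumn("date_key", F.date_format("event_date", "yyyyMMdd").cast("int"))
    # Swap the natural store_id for the dimension's surrogate key.
    .join(stores.select("store_id", "store_key"), "store_id")
    .select("date_key", "store_key", "amount")
)

fact_sales.write.format("delta").mode("overwrite").saveAsTable("gold.fact_sales")
```

A narrow fact table like this keeps DAX measures down to simple aggregations (e.g. a SUM over amount), which is usually where most of the speed and user-adoption gains come from.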
4. Data Scientist (including AI / ML / LLM / RAG)
Job Duties
- Build machine learning and AI models spanning regression, classification, time‑series forecasting, and deep learning.
- Develop LLM-based applications covering RAG pipelines, vector search, prompt engineering, and agent workflows.
- Design and deploy production-grade ML systems (MLOps, CI/CD, observability).
- Partner with engineering teams to integrate AI solutions into business processes.
- Conduct research and experimentation to evaluate new AI technologies and models.
- Deploy models using Azure, GCP, or AWS infrastructure.
- Communicate insights to stakeholders, explaining technical concepts in business terms.
Candidate Requirements
- Strong expertise in Python and ML frameworks (PyTorch, TensorFlow, or similar).
- Hands-on experience with LLMs, RAG, and vector databases (e.g., LangChain, Pinecone, or similar; see the retrieval sketch after this list).
- Experience deploying ML systems on cloud environments.
- Solid knowledge of feature engineering, data preparation, and model evaluation.
- Experience with containerisation (Docker) is a plus.
- Bonus: computer vision, agentic systems, or MLOps experience.
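To make the RAG part concrete, here is a minimal, self-contained sketch of the retrieval step: embed documents, rank them by cosine similarity against the query, and assemble a grounded prompt. The embed function is a toy stand-in for a real embedding model, and the final LLM call is left as a comment, since both are deployment-specific choices.

```python
import hashlib

import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in: derive a deterministic pseudo-random vector from the text.
    A real system would call an embedding model here."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).standard_normal(384)

documents = [  # placeholder knowledge base
    "Store returns must be processed within 14 days.",
    "Gift cards are valid for 24 months from activation.",
    "Loyalty points expire at the end of each fiscal year.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    # Cosine similarity between the query and every document vector.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

question = "How long are gift cards valid?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would now go to the LLM of choice (hosted API, Databricks endpoint, ...).
print(prompt)
```

With the toy embeddings the ranking is meaningless; swapping embed for a real model (and the in-memory list for a vector database such as Pinecone) turns the same structure into a working pipeline.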