LHH

Data Science Specialist

Reading

Posted 4 days ago

Contract

Data Scientist – Bayesian Hierarchical Modelling (R / Python / AWS)

Overview

We are seeking a highly capable Data Scientist with strong experience in Bayesian hierarchical modelling and advanced statistical techniques to join a growing data and analytics capability. This role sits across data science, data engineering, and backend development, supporting the delivery of scalable models, robust data pipelines, and high-quality insight products. You will work with complex, high-volume datasets, applying statistical rigour to solve real business problems, while also contributing to the engineering layer that enables analytics at scale.

Key Responsibilities

- Design, build, and deploy Bayesian hierarchical models to support forecasting, inference, and decision-making
- Develop and maintain data pipelines and ETL processes, ensuring reliable, clean, and well-structured datasets
- Contribute to data “plumbing” and backend data services that support analytics and modelling workflows
- Work with large and complex datasets using Python and R
- Build and deploy scalable data solutions within AWS environments (e.g. S3, Glue, Lambda, Redshift, or equivalent services)
- Develop dashboards and data visualisations to translate complex model outputs into clear, actionable insights for stakeholders
- Support backend development where required, particularly around data APIs, pipelines, and integration layers
- Collaborate with data engineers, analysts, and business stakeholders to define requirements and deliver end-to-end solutions
- Ensure model performance, validation, monitoring, and continuous improvement
- Contribute to best practices across data science, engineering, and cloud-based data architecture

Key Skills & Experience

Essential

- Strong experience in Bayesian statistical modelling and hierarchical modelling techniques
- Proficiency in Python and R for data science and modelling
- Strong grounding in statistical modelling, probability, and inference methods
- Experience building and maintaining ETL pipelines and data workflows
- Experience with data engineering / data “plumbing” in cloud or distributed environments
- Working knowledge of AWS services (e.g. S3, Glue, Lambda, Redshift, or similar)
- Experience building dashboards using tools such as Power BI, Tableau, or similar
- Strong ability to manipulate, clean, and structure large datasets
- Ability to communicate complex analytical outputs in a clear and usable way

Desirable

- Exposure to backend development (APIs, services, or data layer engineering)
- Experience with probabilistic programming tools such as Stan or PyMC
- Experience operationalising data science models in production environments
- Familiarity with modern data stack tooling and cloud-native architectures
- Experience working in Agile delivery teams
- Exposure to real-time or large-scale data systems

Soft Skills

- Strong analytical and problem-solving capability
- Comfortable working across both engineering and analytical domains
- Strong stakeholder communication skills
- Ability to work independently and take ownership of delivery
- Commercial awareness and ability to translate data into business value

What This Role Offers

- Opportunity to work across full-stack data science and data engineering
- Exposure to advanced Bayesian modelling in a production environment
- Hands-on work with cloud infrastructure (AWS) and modern data pipelines
- Opportunity to shape how data is engineered, modelled, and consumed across the business
- High-impact role where statistical insight directly influences decision-making