
Technical Specialist - Multi-Agent Security
Hybrid
Employee
Full-Time
Mid Level
You'll sit alongside the Programme Director, Alex Obadia, and the rest of the team in ARIA's Trust Everything, Everywhere opportunity space (Sarath Murugan, Edith Clare-Hall, Nicola Greco), working on ARIA's Scaling Trust programme. You'll be exposed to every aspect of the programme's technical management, from design through approval, project selection, and delivery. We're looking for an adaptable technical professional who's strongly motivated by the opportunity to catalyse a technological step change. You'll be working in a bold, talented and agile team, funding projects across the entire R&D ecosystem, from startups to universities, in pursuit of groundbreaking advancements that will shape the future of science and technology.
Requirements
- Strong technical foundations in physics, engineering, or similar, enabling understanding of complex systems and the constraints of the physical world.
- Proven experience in a "building" role (e.g., software/infrastructure engineer at an AI lab, AI security startup, high-level independent developer).
- Personal experience writing, testing, and shipping complex codebases or AI infrastructure.
- Demonstrable applied AI experience, having thought deeply about AI's implications.
- Demonstrated experience in AI red-teaming, formal verification, or security research.
- Ability to interrogate projects at a granular level, evaluating technical feasibility of complex milestones from a first-principles engineering perspective.
- Entrepreneurial mindset, prioritising impact over accolades and willing to think outside the box.
- Independent, with a track record of taking projects from start to completion and self-imposing standards of excellence.
- Hungry and ambitious, raising the bar for team-level achievement and setting stretching personal goals.
- Ability to uncover non-obvious opportunities and risks, connecting teams, ideas, and research threads.
- Ability to comprehend and articulate complex technical concepts and timelines with clarity and conviction.
- Highly adaptable, comfortable with uncertainty and a fast-paced environment.
- Experience in robotics, "nature" cryptography, or autonomous systems (desirable).
- Track record of implementing theoretical findings into working prototypes or tools (desirable).
- Practical or research-led experience in multi-agent reinforcement learning (MARL), agentic game theory, or designing mechanisms for coordination in untrusted environments (desirable).
- Familiarity with "Scaling Trust" primitives such as Trusted Execution Environments (TEEs), Zero-Knowledge Proofs (ZKPs), or programmable cryptography (desirable).
- Obsession with niche technologies, future technologies, and the future itself (desirable).
- A record of delivering technical assignments that required unconventional or novel approaches (desirable).
- Strong working knowledge of the UK R&D ecosystem (desirable).
- Scientific qualification, with a preference for a PhD in Physics/Maths/Engineering, or similarly deep technical experience in AI security, embodied AI, algorithmic game theory, etc.
Responsibilities
- Act as the technical lead, with the Programme Director and Programme Specialist, to shape and deliver a programme enabling AI agents to coordinate in untrusted settings.
- Review project proposals and grant applications, assess technical milestones, and stay informed about Creator team developments.
- Plan, lead, and contribute to technical discussions in project meetings, workshops, and reviews.
- Provide evidence-based technical insight to support high-quality decisions and strategic direction.
- Confidently communicate complex scientific ideas to diverse stakeholders.
- Identify emerging trends and surface promising technologies, people, and ideas.
- Co-author white papers, open calls, and technical reviews.
- Drive project delivery with the Programme Specialist and Creator teams, tracking milestones and spotting opportunities for cross-pollination.
- Serve as the technical bridge between the Programme Director and functional teams.
- Represent the programme at events, workshops, and talks.
- Build trusted relationships with world-class researchers, labs, and founders.
- Collaborate with ARIA’s Activation Partners to ensure Creator teams access necessary tools and networks.
- Collaborate across ARIA on funding models, budgets, tooling, and operational mechanisms.
- Contribute as a member of ARIA’s Technical Specialists ('T-Specs'), sharing best practices and learnings.
- Help build and sustain ARIA’s scientific and operational culture.
Benefits
- Salary: c.£70,000 / c.£90,000 / c.£105,000 (offers based on benchmarked experience and selection criteria)
- Employment Type: Full-time
- Contract: 3 years fixed-term (with possibility for extension)
- Annual Leave: 27 days, with the option to buy or sell additional days
- Working Arrangement: Hybrid; 60% in office / 40% at home
- Development: Supportive environment for learning and development opportunities
- Family Leave: Enhanced arrangements
- Employee Assistance: Free and confidential 24/7 programme
- Volunteer Days: 2 paid days
- Pension: 5% defined contribution scheme with Smart Pension
- Other: Cycle to Work scheme
Application Process
- If you don't meet 100% of the criteria but believe you could excel in this role, we encourage you to apply.
- Use your cover letter to detail your interests and what you hope to bring to the role.
- Notify us if you require any reasonable adjustments during the recruitment process.
- Travel may be required to different locations around the UK and internationally.
About ARIA
ARIA is a new kind of R&D funding agency that funds scientists and engineers to pursue research at the edge of the possible. We aim to address enormous societal challenges and opportunities through science and technology, activating the UK’s world-class R&D in new ways. We are growing our team to develop bold new approaches that can scale.
Contact
- Learn more about how we work here: https://www.aria.org.uk/
- Find out more about what the Trust Everything, Everywhere team is working on here: https://www.aria.org.uk/
- Read ARIA's ethical and social responsibility policy here: https://www.aria.org.uk/
- Read about the 3Rs principle here: https://www.aria.org.uk/
Skills
AI
Software engineering
Infrastructure
AI security
Formal verification
Red teaming
Robotics
Cryptography
Autonomous systems
Multi-agent reinforcement learning
Game theory
Trusted execution environments
Zero-knowledge proofs
Programmable cryptography
Physics
Engineering
Maths
PhD


