Job description
Location: Remote (occasional onsite work required)
Experience: 8+ years
About the Role
We’re looking for a Staff Security Engineer to build security into our AI-first development lifecycle from the ground up.
You’ll work alongside Staff Engineers building high-performance applications, ensuring our systems meet rigorous compliance standards while enabling rapid, AI-assisted development.
This role isn’t about bolting security reviews onto the end of a pipeline. You’ll shape how security integrates with agentic development patterns, defining guardrails that let teams move fast without compromising compliance or data protection.
What You’ll Do
Architect secure systems that meet HIPAA and other regulatory requirements while supporting high-performance Go/Rust backends and TypeScript frontends
Define security patterns for AI-assisted development—ensuring agentic workflows don’t introduce vulnerabilities through prompt injection, data leakage, or unvalidated AI-generated code
Build compliance into the SDLC—automate security checks, audit logging, and policy enforcement that work with (not against) AI-first workflows
Threat model AI-integrated systems—identify risks unique to long-running agent loops, multi-agent orchestration, and LLM-powered features
Own security architecture decisions and influence technical direction across the team
Establish secure development practices that the team can adopt without friction, including AI-assisted security reviews
What We’re Looking For
8+ years in security engineering, with experience in application security, infrastructure security, or security architecture
Familiarity with modern backend stacks—you can review Go or Rust code and understand TypeScript frontend security concerns
Experience securing AI/ML systems—or strong interest and aptitude in LLM security risks (prompt injection, training data exposure, model manipulation)
Practical compliance mindset—you know how to satisfy auditors without creating engineering bottlenecks
Threat modeling skills—you think in attack surfaces and can anticipate risks in novel architectures
Nice to Have
Experience with agentic development patterns (BMAD or similar) from a security perspective
Background in secure SDLC tooling (SAST, DAST, dependency scanning, policy-as-code)
Contributions to security frameworks or compliance automation
Experience with PHI/PII data handling at scale
Certifications: CISSP, CCSP, or healthcare-specific security credentials
Why This Role
Security in AI-first development is an emerging discipline. You won’t be following a playbook—you’ll be writing it. You’ll have direct influence on how we balance velocity with compliance, shaping patterns that protect sensitive data while enabling teams to leverage agentic workflows effectively.
This posting is for an existing vacancy that we are actively looking to fill.
We use artificial intelligence (AI) to assist in the screening, assessment, and selection of applicants.
ABOUT US
iTMethods builds sovereign AI infrastructure for regulated industries. For 20 years, we’ve earned the trust of 100+ enterprise customers in financial services, healthcare, pharma, semiconductor, defence, and technology. Our platform — The Fortress Family (Forge, Reign, BioCompute) — helps organizations govern AI where compliance isn’t optional and evidence matters.
🔗 itmethods.com