SWE Expert
Hatch
Sydney, NSW, Australia (Macquarie Park NSW 2113) · Davao City, Davao del Sur, Philippines
Posted on Mar 14, 2026
This is a SWE Expert role with Mercor, based in Sydney, NSW, Australia.
== Mercor ==
Role Seniority: junior, mid-level, senior
More About The SWE Expert Role At Mercor
Mercor is seeking SWE Experts to support the design of evaluation-ready workflows for advanced AI systems. This engagement focuses on translating ambiguous requirements into structured, repeatable artifacts that can be tested automatically. You’ll produce clearly specified deliverables (documentation + scripts) that enable consistent assessment of agent performance across scenarios. Work is contract-based, outcome-oriented, and optimized for reproducibility and clear acceptance criteria.
Key Responsibilities
- Convert high-level objectives into tightly scoped, testable deliverables with clear inputs/outputs and measurable success criteria.
- Create structured documentation that defines expected behavior, constraints, and edge cases in a way other evaluators can reuse.
- Build lightweight automation scripts to support evaluation flows (e.g., generating required artifacts, validating outputs, enforcing format rules).
- Write deterministic Python verifier scripts that check completion via final state or output validation (files, directories, content assertions).
- Design prompts/tasks that reliably elicit the target workflow behavior while avoiding leakage of internal instructions or implementation details.
- Implement robust error handling and actionable failure messages in verification tooling.
- Develop plausible but ineffective “baseline” or “distractor” approaches to confirm evaluation discrimination (i.e., the solution must use the intended approach).
- Maintain clean artifact hygiene: versionable structure, consistent naming, minimal ambiguity, and reproducible execution.
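The verifier scripts described above can be sketched as a deterministic Python check over final state, collecting actionable failure messages rather than stopping at the first error. The file and directory names below (`report.md`, `results/`, a required `## Results` section) are hypothetical examples, not an actual task layout:

```python
from pathlib import Path


def verify(workdir: str) -> list[str]:
    """Return a list of failure messages; an empty list means the task passed.

    Hypothetical task layout for illustration: the workspace must contain
    a report.md with a '## Results' section and a results/ directory.
    """
    errors = []
    root = Path(workdir)

    # Final-state check: required file exists and contains the expected section.
    report = root / "report.md"
    if not report.is_file():
        errors.append("missing required file: report.md")
    elif "## Results" not in report.read_text(encoding="utf-8"):
        errors.append("report.md must contain a '## Results' section")

    # Final-state check: required directory exists.
    if not (root / "results").is_dir():
        errors.append("missing required directory: results/")

    return errors
```

Wrapping this in a small CLI that prints each failure and exits nonzero when the list is non-empty makes the script directly usable in an automated harness; because it only inspects final state, repeated runs on the same workspace always produce the same verdict.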
Required Skills
- Strong Python skills (file-system operations, parsing, validation, test-style assertions, deterministic execution).
- Experience with evaluation harnesses, automated grading, or QA-style verification (unit/integration test mindset).
- Familiarity with prompt design and LLM evaluation methodologies (closed-ended tasks, leakage avoidance, reliability testing).
- Comfort with structured specs and documentation conventions (Markdown, YAML frontmatter patterns, well-scoped requirements).
- Working knowledge of common developer tooling: Git, CLI workflows, virtual environments, dependency management.
- Bonus: embeddings/similarity concepts (e.g., cosine similarity) for “looks relevant but fails” negative-control design.
- Ability to communicate clearly and keep scope controlled without relying on domain-specific context.
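The embeddings bonus point above can be made concrete with a small sketch: cosine similarity over plain Python lists, used to check that a negative-control candidate "looks relevant" to a reference (similarity above a floor) without being a near-duplicate (below a ceiling). The thresholds 0.6 and 0.95 are illustrative assumptions, not calibrated values:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def is_plausible_distractor(candidate: list[float], reference: list[float],
                            lo: float = 0.6, hi: float = 0.95) -> bool:
    """A useful distractor sits in a band: related enough to the reference
    to look credible (>= lo), but not so close it solves the task (<= hi)."""
    sim = cosine_similarity(candidate, reference)
    return lo <= sim <= hi
```

In practice the vectors would come from an embedding model; the banding logic is the part specific to negative-control design.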
Project Details
- Deliverables are primarily documentation and scripts intended to support automated evaluation and consistent replay.
- Emphasis on determinism, reproducibility, closed-ended outcomes, and strong verifier reliability.
- Tasks and validators should be resilient to superficial shortcuts and confirm the intended workflow is actually used.
- Work can include designing negative controls (distractors) that appear credible while failing for principled reasons.
- Time-sensitive elements should be explicitly date-bounded where applicable.
Contract and Payment Terms
- You will be engaged as an independent contractor.
- This is a fully remote role that can be completed on your own schedule.
- Projects can be extended, shortened, or concluded early depending on needs and performance.
- Your work at Mercor will not involve access to confidential or proprietary information from any employer, client, or institution.
- Payments are made weekly via Stripe or Wise, based on services rendered.
- Please note: we are unable to support H-1B or STEM OPT candidates at this time.
Mercor partners with leading AI labs and enterprises to train frontier models using human expertise. You will work on projects that focus on training and enhancing AI systems. You will be paid competitively, collaborate with leading researchers, and help shape the next generation of AI systems in your area of expertise.
Before we jump into the responsibilities of the role, one note: no matter what you come in knowing, you’ll be learning new things all the time, and the Mercor team will be there to support your growth.
🟢 Please consider applying even if you don't meet 100% of what’s outlined 🟢
A Final Note: This is a role with Mercor, not with Hatch.