FinTech Australia
Software Engineer @ Mercor

Hatch
This job is no longer accepting applications

Software Engineering
Australia
Posted on Feb 3, 2026

Location: US-Based and Non-US-Based

Type: Full-time or Part-time Contract Work

Fluent Language Skills Required: English

Why This Role Exists

Mercor partners with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. These systems are used across a wide range of everyday and professional scenarios, and their effectiveness depends on how clearly, accurately, and helpfully they respond to real user questions.

In coding and software engineering contexts, conversational AI systems must demonstrate correct reasoning, strong problem-solving ability, and adherence to real-world engineering best practices. This project focuses on evaluating and improving how models reason about code, generate solutions, and explain technical concepts across a variety of programming tasks and complexity levels.

What You’ll Do

  • Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness
  • Conduct fact-checking using trusted public sources and authoritative references
  • Conduct accuracy testing by executing code and validating outputs using appropriate tools
  • Annotate model responses by identifying strengths, areas of improvement, and factual or conceptual inaccuracies
  • Assess code quality, readability, algorithmic soundness, and explanation quality
  • Ensure model responses align with expected conversational behavior and system guidelines
  • Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines
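To make the accuracy-testing responsibility above concrete, here is a minimal sketch of what executing model-generated code and validating its outputs can look like. The candidate function, test cases, and report format are hypothetical illustrations, not Mercor's actual tooling or guidelines.

```python
def candidate_two_sum(nums, target):
    """A model-generated solution under review (hypothetical example)."""
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []

# Known input/expected-output pairs used to validate the candidate.
TEST_CASES = [
    (([2, 7, 11, 15], 9), [0, 1]),
    (([3, 2, 4], 6), [1, 2]),
    (([3, 3], 6), [0, 1]),
]

def evaluate(fn, cases):
    """Run each case and return per-case annotations for an evaluation report."""
    results = []
    for args, expected in cases:
        try:
            actual = fn(*args)
            results.append({"input": args, "expected": expected,
                            "actual": actual, "pass": actual == expected})
        except Exception as exc:  # a crash is itself a finding worth annotating
            results.append({"input": args, "pass": False, "error": repr(exc)})
    return results

report = evaluate(candidate_two_sum, TEST_CASES)
print(all(r["pass"] for r in report))  # True: every case passes
```

In practice the annotations would also capture qualitative judgments (readability, algorithmic soundness, explanation quality), but executing code against known cases is the reproducible core of the verification step.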

Who You Are

  • You hold a BS, MS, or PhD in Computer Science or a closely related field
  • You have significant (5+ years) real-world experience in software engineering or related technical roles
  • You are an expert in at least two relevant programming languages (e.g., Python, Java, C++, C, JavaScript, Go, Rust, Ruby, SQL, PowerShell, Bash, Swift, Kotlin, R, TypeScript, HTML/CSS)
  • You are able to solve HackerRank or LeetCode Medium- and Hard-level problems independently
  • You have experience contributing to well-known open-source projects, including merged pull requests
  • You have significant experience using LLMs while coding and understand their strengths and failure modes
  • You have strong attention to detail and are comfortable evaluating complex technical reasoning, identifying subtle bugs or logical flaws

Nice-to-Have Specialties

  • Prior experience with RLHF, model evaluation, or data annotation work
  • Track record in competitive programming
  • Experience reviewing code in production environments
  • Familiarity with multiple programming paradigms or ecosystems
  • Experience explaining complex technical concepts to non-expert audiences

What Success Looks Like

  • You identify incorrect logic, inefficiencies, edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions
  • Your feedback improves the correctness, robustness, and clarity of AI coding outputs
  • You deliver reproducible evaluation artifacts that strengthen model performance
  • Mercor customers trust AI systems to assist reliably with real-world coding tasks

Why Join Mercor

At Mercor, experienced software engineers play a direct role in shaping how AI systems reason about and generate code. This remote role allows you to apply your technical expertise to high-impact AI development work, improving systems used by developers around the world.

We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.

Contract and Payment Terms

  • You will be engaged as an independent contractor.
  • This is a fully remote role that can be completed on your own schedule.
  • Projects can be extended, shortened, or concluded early depending on needs and performance.
  • Your work at Mercor will not involve access to confidential or proprietary information from any employer, client, or institution.
  • Payments are weekly on Stripe or Wise based on services rendered.
  • Payments are in USD and subject to exchange rates.
  • Please note: We are unable to support H-1B or STEM OPT candidates at this time.

About Mercor

Mercor partners with leading AI labs and enterprises to train frontier models using human expertise. You will work on projects that focus on training and enhancing AI systems. You will be paid competitively, collaborate with leading researchers, and help shape the next generation of AI systems in your area of expertise.

🟢 Please consider applying even if you don't meet 100% of what’s outlined 🟢

Key Responsibilities

  • ✅ Evaluating LLM-generated responses
  • 🔗 Conducting fact-checking
  • ✍️ Annotating model responses

Key Strengths

  • 💻 Software engineering experience
  • 🖥️ Programming languages expertise
  • 🔍 Attention to detail
  • 🤖 Experience with RLHF
  • 🏆 Competitive programming
  • 📝 Code review experience

Why Mercor is partnering with Hatch on this role

Hatch exists to level the playing field for people as they discover a career that's right for them. So when you apply, you have the chance to show more than just your resume.

A Final Note: This is a role with Mercor, not with Hatch.

FINTECH AUSTRALIA

FinTech Australia exists to help our country become one of the world’s top markets for fintech innovation and investment.

IMPORTANT LINKS
  • Privacy Policy
  • Member Login
  • Join Fintech Australia
  • Contact Us
© 2023 FinTech Australia