
LLM Red-teaming Operation, Analyst

Bytedance Pte. Ltd.


About ByteDance

Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Helo, and Resso, as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.


Why Join Us

Creation is the core of ByteDance's purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible. Together, we inspire creativity and enrich life - a mission we aim towards achieving every day. To us, every challenge, no matter how ambiguous, is an opportunity; to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always. At ByteDance, we create together and grow together. That's how we drive impact - for ourselves, our company, and the users we serve. Join us.


About the team

As a core member of our LLM Global Data Team, you'll be at the heart of our operations. Gain first-hand experience in understanding the intricacies of training Large Language Models (LLMs) with diverse data sets, and become a leader building and growing a vibrant team together with us!


Your Role Will Involve:

1. Work closely with LLM core teams to devise a red-teaming strategy for LLMs and manage internal red-teaming operations.

2. Coordinate resources across T&S, Data, and product teams to execute red-team tests and produce comprehensive reports, including but not limited to vulnerability assessments.

3. Work with internal researchers and external experts to monitor the latest developments and industry best practices in red-teaming, and advise on long-term strategy.

4. Lead research on novel privacy issues as well as broader questions of fairness, accountability, and transparency related to LLMs.

5. Set research directions and strategies to make our AI systems safer, more aligned, and more robust.


Qualifications

Minimum Qualifications:

1. Solid understanding of the key risks that ByteDance's corporate operations and products have faced, or extensive experience in red-teaming Large Language Models (LLMs) or other content/products.

2. A minimum of three years of work experience, with required skills in LLM adversarial attacks or jailbreaking methods.

3. A team player who knows how to assert influence appropriately. Excellent coordination and persuasion skills are crucial for success in this role.

4. Professional proficiency in English.


Preferred Qualifications:

1. Global education or work experience.

2. Direct research experience with state-of-the-art (SOTA) AI safety topics such as RLHF, adversarial training, and robustness.


ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

