About Doubao (Seed)
Founded in 2023, the ByteDance Doubao (Seed) Team is dedicated to pioneering advanced AI foundation models. Our goal is to lead in cutting-edge research and drive technological and societal advancement.
With a strong commitment to AI, our research spans deep learning, reinforcement learning, language, vision, audio, AI infrastructure, and AI safety. Our team has labs and research positions across China, Singapore, and the US.
Leveraging substantial data and computing resources, and through continued investment in these domains, we have developed a proprietary general-purpose model with multimodal capabilities. In the Chinese market, Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and are available to external enterprise clients via Volcano Engine. Today, the Doubao app stands as the most widely used AIGC application in China.
Why Join Us
Creation is the core of ByteDance's purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible.
Together, we inspire creativity and enrich life - a mission we strive toward every day.
To us, every challenge, no matter how ambiguous, is an opportunity; to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At ByteDance, we create together and grow together. That's how we drive impact - for ourselves, our company, and the users we serve.
Join us.
About The Team
The Doubao Large Language Model (LLM) team is dedicated to advancing the next generation of LLMs, tackling fundamental challenges in LLM development head-on. Our areas of focus include model self-learning, memory capabilities, long-text generation, and interpretability. We dive deep into the latest technologies and create comprehensive solutions from concept to completion. As we bring LLMs into real-life scenarios, we continually seek ways to enhance applications through technological innovation.
Responsibilities
1. Discover simple, general ideas for optimizing large models and apply them to models of various scales to improve performance;
2. Explore the boundaries of large-scale models and perform deep system optimization to improve model performance and efficiency;
3. Drive data construction, instruction fine-tuning, preference alignment, and model optimization to improve model quality and adaptability;
4. Bring models into real-world applications, including content generation, logical reasoning, and code generation;
5. Conduct in-depth research into new usage scenarios for the model and expand its scope of application.
Qualifications
Minimum Qualifications
1. Master's degree or above in computer science, electrical engineering, statistics, applied mathematics, data science, or a related discipline.
2. Strong coding ability with a solid foundation in data structures and algorithms; proficient in C/C++ or Python;
3. Familiar with NLP and CV algorithms and technologies, with a strong research record in the relevant field;
4. Excellent analytical and problem-solving skills, able to resolve deep-rooted issues in large-scale model training and application;
5. Good communication and collaboration skills, able to explore new technologies with the team and promote technological progress.
Preferred Qualifications
1. Familiar with large model training and RL algorithms;
2. Experience leading influential projects or publications in the large-model field is preferred.
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.