About ByteDance
Founded in 2012, ByteDance has a mission to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok and Helo, as well as platforms specific to the China market such as Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.
Why Join Us
Creation is the core of ByteDance's purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible.
Together, we inspire creativity and enrich life - a mission we work toward achieving every day.
To us, every challenge, no matter how ambiguous, is an opportunity: to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At ByteDance, we create together and grow together. That's how we drive impact - for ourselves, our company, and the users we serve.
Join us.
About the team
The ByteDance Large Model Team is committed to developing the industry's most advanced AI large model technology, becoming a world-class research team, and contributing to technological and social development. The team has a long-term vision and determination in the field of AI, with research directions covering NLP, CV, speech, and other areas. Relying on the platform's abundant data and computing resources, the team has continued to invest in these fields and has launched its own general-purpose large model with multi-modal capabilities.
The Machine Learning (ML) System sub-team combines system engineering and the art of machine learning to develop and maintain massively distributed ML training and inference systems and services around the world, providing high-performance, highly reliable, and scalable systems for LLM/AIGC/AGI.
As a vital piece of AI infrastructure for the company, our machine learning system integrates our most up-to-date R&D results in AI algorithms and systems. Come and join us: you will have the chance to build large-scale machine learning systems and to work with the best AI system and algorithm researchers and engineers.
Responsibilities
- Design and develop large-scale ML system architecture, solving technical problems around high concurrency, reliability, and scalability;
- Develop end-to-end solutions for deep model inference serving internal business units such as Search and Recommendation, as well as Large Language Model (LLM) based systems;
- Provide highly automated, high-performance model optimization solutions for frameworks such as PyTorch and TensorFlow, using techniques including subgraph matching, compilation optimization, model quantization, and heterogeneous hardware support;
- Manage large-scale GPU clusters for our global businesses, improving compute utilization through methods such as elastic scheduling, GPU oversubscription, and task orchestration;
- Collaborate cross-functionally with the algorithm department to jointly optimize algorithms and systems.
Minimum Qualifications:
- At least 3 years of experience and proficiency in C/C++ and Python in a Linux environment, with relevant experience in large-scale machine learning systems or search, advertising, and recommendation systems;
- Familiarity with at least one machine learning framework (TensorFlow/PyTorch/MXNet or other self-developed frameworks);
- Background knowledge and experience in at least one of the following: GPU programming, compilers, model quantization, and GPU cluster scheduling;
- Ability to solve problems independently, a strong team spirit, and excellent ability to break down complex problems;
- A strong sense of responsibility, good learning ability, communication skills, and self-motivation;
- Good documentation habits, writing up-to-date workflow and technical documents in a timely manner as required.
Preferred Qualifications:
- Experience with recommendation/advertising/search offline inference system architecture;
- Understanding of GPU hardware architecture and the GPU software stack (CUDA, cuDNN), with experience in GPU performance analysis.