TikTok will be prioritizing applicants who have a current right to work in Singapore and who do not require visa sponsorship from TikTok.
TikTok is the leading destination for short-form mobile video. Our mission is to inspire creativity and bring joy. TikTok has global offices including Los Angeles, New York, London, Paris, Berlin, Dubai, Singapore, Jakarta, Seoul and Tokyo.
Why Join Us
Creation is the core of TikTok's purpose. Our platform is built to help imaginations thrive. This is doubly true of the teams that make TikTok possible.
Together, we inspire creativity and bring joy - a mission we all believe in and aim towards achieving every day.
To us, every challenge, no matter how difficult, is an opportunity: to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At TikTok, we create together and grow together. That's how we drive impact - for ourselves, our company, and the communities we serve.
Join us.
About the Team
The Applied Machine Learning (AML) - Enterprise team provides machine learning platform products on VolcanoEngine. These include a cloud-native resource scheduling system that intelligently orchestrates tasks and jobs to minimise the cost of every experiment and maximise resource utilisation, rich modelling tools such as customised machine learning tasks and a web IDE, and multi-framework, high-performance model inference services.
In 2021, through VolcanoEngine, we released this machine learning infrastructure to the public, giving more enterprises reduced compute costs, lower barriers to machine learning engineering, and deeper AI capabilities.
Responsibilities
You will be responsible for developing the Ark Large Model Platform on Volcano Engine: researching systematic solutions for implementing and applying large models across industries, striving to reduce the IT cost of large model applications, meeting users' ever-growing demand for intelligent interaction, and improving how users live and communicate in the future.
- Maintain a large-scale AI cluster and develop state-of-the-art machine learning platforms to support a diverse group of stakeholders.
- Tackle challenging tasks that include, but are not limited to, delivering highly efficient training and inference for large language models, managing highly effective distributed training jobs across clusters with more than 10,000 nodes and GPUs, and building highly reliable ML systems with unparalleled scalability.
- Work across various aspects of LLMOps (Large Language Model Operations), including resource scheduling, task orchestration, model training, model inference, model management, dataset management, and workflow orchestration.
- Investigate cutting-edge technologies related to large language models, AI, and machine learning more broadly, such as state-of-the-art distributed training systems with heterogeneous hardware, GPU utilization optimization, and the latest hardware architectures.
- Employ a variety of technological and mathematical analyses to enhance cluster efficiency and performance.
Qualifications
Minimum Qualifications
- B.Sc. or higher degree in Computer Science or a related field from an accredited and reputable institution, with 5 years of R&D experience in cloud computing or large-scale model systems.
- Experience in Golang/C++/CUDA development, with a solid understanding of Linux systems and popular cloud platforms such as Volcano Engine Cloud, AWS, and Azure Cloud.
- Profound knowledge of cloud-native orchestration technologies such as Kubernetes, coupled with experience in large-scale cluster maintenance, job scheduling optimization, and cluster efficiency enhancement. A strong grasp of foundational areas of computer science, including computer networking, the Linux file system, object storage services, and SQL and NoSQL databases.
- Experience in developing ML platforms or MLOps platforms. Experience in distributed machine learning model training, ML model fine-tuning, and deployment.
- Self-motivated, with a thirst for innovation and a collaborative working style, consistently upholding high standards in coding and documentation quality.
Preferred Qualifications
- Familiar with High-Performance Computing (HPC) stacks, including but not limited to computing with CUDA/OpenCL, networking with NCCL/MPI/RDMA/DPDK, and model compilation with MLIR/TVM/Triton/LLVM.
- Experience in large language model (LLM) training and development, including large-scale foundation model training guided by scaling laws, efficient fine-tuning techniques such as LoRA/P-Tuning/RLHF, model inference optimization, and model structure transformations for optimizations such as sparsity, MoE, and long context.
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.