About ByteDance Doubao (Seed) Team
Established in 2023, the ByteDance Doubao (Seed) Team is dedicated to building industry-leading AI foundation models. We aim to do world-leading research and foster both technological and social progress.
With a long-term vision and a strong commitment to the AI field, the Team conducts research in a range of areas including natural language processing (NLP), computer vision (CV), and speech recognition and generation. It has labs and researcher roles in China, Singapore, and the US.
Leveraging substantial data and computing resources, and through continued investment in these domains, our team has built a proprietary general-purpose model with multimodal capabilities. Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and are available to external enterprise clients through Volcano Engine. The Doubao app is the most widely used AIGC app in China.
Why Join Us
Creation is the core of ByteDance's purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible. Together, we inspire creativity and enrich life - a mission we work toward every day. To us, every challenge, no matter how ambiguous, is an opportunity: to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always. At ByteDance, we create together and grow together. That's how we drive impact - for ourselves, our company, and the users we serve. Join us.
About the team
Welcome to the GAI-Vision team, where we lead the way in developing foundational models for multi-modal visual understanding and generation. Our mission is to solve the challenge of visual intelligence in AI. We conduct cutting-edge research on areas such as vision and language, large-scale vision models, and generative foundation models. Comprising experienced research scientists and engineers, our team is dedicated to pushing the boundaries of foundation model research and implementing our innovations across diverse application scenarios. We foster a feedback-driven environment to continuously enhance our foundation technologies. Come join us in shaping the future of AI and transforming the product experience for users worldwide.
Responsibilities
- Conduct research in computer vision, deep learning, and AI, addressing a wide range of challenges across AIGC, graphics, large multimodal models, diffusion models, video generation, 3D generation, video understanding, self-supervised learning, and autoregressive models.
- Explore applications of large-scale and super-large-scale visual foundation models, and contribute to the development of new AI-powered technologies and products.
Qualifications
Minimum Qualifications
- A Ph.D. in computer science, electrical engineering, statistics, applied mathematics, data science, or a related discipline.
- Research and practical experience in one or more areas of computer vision, including multimodal generation (e.g., text-to-image; image, video, and 3D generation and editing), diffusion models, GANs, transformers for generation tasks, vision-language models, large-scale training, and RLHF.
- A proven track record of high-impact research.
- Ability to collaborate effectively with team members.
- Ability to work independently.
Preferred Qualifications
- Impactful publications in leading AI conferences (e.g., CVPR, ECCV, ICCV, NeurIPS, ICLR, SIGGRAPH, SIGGRAPH Asia) and journals (e.g., TPAMI, JMLR).
- Winning entries in international academic competitions.
- Proficiency in at least one differentiable programming framework, such as PyTorch, TensorFlow, or JAX.
- Strong coding skills in C/C++ and Python.
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.