
Research Scientist/Engineer, Multimodal Interaction & World Model

Bytedance Pte. Ltd.


Established in 2023, the ByteDance Doubao (Seed) Team is dedicated to building industry-leading AI foundation models. We aim to do world-leading research and foster both technological and social progress.


With a long-term vision and a strong commitment to the AI field, the Team conducts research in a range of areas including natural language processing (NLP), computer vision (CV), and speech recognition and generation. It has labs and researcher roles in China, Singapore, and the US.


Leveraging substantial data and computing resources, and through continued investment in these domains, our team has built a proprietary general-purpose model with multimodal capabilities. In the market, Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and have been launched to external enterprise clients through Volcano Engine. The Doubao app is the most used AIGC app in China.


Why Join Us

Creation is the core of ByteDance's purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible.

Together, we inspire creativity and enrich life - a mission we work towards every day.

To us, every challenge, no matter how ambiguous, is an opportunity: to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.

At ByteDance, we create together and grow together. That's how we drive impact - for ourselves, our company, and the users we serve.

Join us.


About the team

Welcome to the Multimodal Interaction & World Model team. Our mission is to solve the challenges of multimodal intelligence and interaction with virtual-world environments in AI. We conduct cutting-edge research in areas such as the foundations and applications of multimodal understanding models, multimodal agents and inference, unified models for generation and understanding, and world models. Our team comprises experienced research scientists and engineers dedicated to developing models with human-level multimodal understanding and interaction capabilities. The team also aspires to advance the exploration and development of multimodal assistant products. We foster a feedback-driven environment to continuously improve our foundation technologies. Come join us in shaping the future of AI and transforming the product experience for users worldwide.


Responsibilities

- Explore and research cutting-edge technologies including multimodal understanding, generative models, machine learning, reinforcement learning, AIGC, computer vision, and artificial intelligence.

- Explore foundation models for large-scale and ultra-large-scale interleaved multimodal understanding and generation, and carry out deep system optimization; work on data construction, instruction fine-tuning, preference alignment, and model optimization; improve data synthesis, scalable oversight, and model reasoning and planning; build a comprehensive, objective, and accurate evaluation system; and explore ways to advance the capabilities of large models.

- Explore and push forward the advanced capabilities of multimodal models and world models, including but not limited to multimodal RAG, visual chain-of-thought (CoT), and agents, and build a general-purpose multimodal agent for GUIs, games, and other virtual environments.

- Use pre-training, simulation, and other techniques to model diverse environments in the virtual and real worlds, provide foundational capabilities for multimodal interactive exploration, drive applications into production, and develop new technologies and products with artificial intelligence at their core.

Qualifications

Minimum Qualifications:

- Bachelor's degree or above in computer science, electronics, mathematics, or a related field.

- In-depth research experience in one or more fields such as computer vision, multimodal learning, AIGC, machine learning, or rendering and generation.

- Excellent analytical and problem-solving skills, with the ability to solve large-model training and application problems and to explore solutions independently.

- Strong communication and collaboration skills; proactive, and able to work harmoniously with the team to explore new technologies and drive technological progress.


Preferred Qualifications:

- Strong grounding in algorithms and a solid foundation in machine learning; familiarity with technologies in CV, AIGC, NLP, RL, ML, and related fields. Publications at top conferences/journals such as CVPR, ECCV, ICCV, NeurIPS, ICLR, SIGGRAPH, or SIGGRAPH Asia are preferred.

- Excellent coding ability and proficiency in C/C++ or Python. Awards in competitions such as ACM/ICPC, NOI/IOI, TopCoder, or Kaggle are preferred.

- Experience leading high-impact projects in fields such as multimodal learning, large models, foundation models, world models, RL, or rendering and generation is preferred.


ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
