Project Overview
We are seeking a highly motivated and talented Research Engineer to join our team for the project “Towards an Immersive Workplace” within the ambitious five-year research program “Mens, Manus and Machina—How AI Empowers People, Institutions and Cities in Singapore (M3S)”, supported by NRF CREATE.
The Researcher will work in the broad area of 3D spatial sensing and spatial computing, using cutting-edge sensing modalities such as LiDAR and neuromorphic (event-based) vision sensors. The research will explore AI techniques for processing such spatial/event data on resource-constrained embedded and wearable devices, addressing system metrics such as energy, bandwidth and latency. These advances will, in turn, be used to create novel spatial computing and mixed/augmented reality applications that support human attention-aware processing of the physical environment, advancing the M3S vision of collaborative and interactive task execution by teams of humans and robots/machines. This work is expected to result in publications at prestigious mobile/wearable computing, human-machine interaction and user interface venues.
Responsibilities
- Conduct research on spatial computing systems, with a focus on optimizing them to run on resource-constrained edge devices.
- Publish research findings in top-tier conferences and journals in applied AI, spatial computing, and pervasive and mobile computing.
- Design and develop end-to-end research prototypes that demonstrate real-world use cases of the proposed spatial computing systems.
Requirements
- B.Sc./M.Sc. in Computer Science, Artificial Intelligence or a related field.
- Strong background in machine learning, multi-modal processing, natural language processing, and computer vision.
- Experience with developing end-to-end pervasive systems with machine learning pipelines.
- Proficiency in programming languages such as Python, C#, or C++.
- Proficiency in Python machine learning libraries such as PyTorch or TensorFlow.
- Excellent communication skills and the ability to work collaboratively in a multidisciplinary team.
- Prior experience with 3D computer vision, 3D point-cloud processing, 3D localization algorithms such as SLAM, or Vision Language Models (VLMs) would be an added advantage.
- Prior experience in developing AR/VR apps for smart glasses using Unity or similar technologies would be an added advantage.
- Prior experience with multi-modal processing of language and vision modalities would be an added advantage.
- Prior experience working with neuromorphic event data for the vision modality would be an added advantage.
- Proficiency in converting ML models for mobile devices using TensorFlow Lite, TensorRT or similar technologies would be an added advantage.
- Experience in publishing in reputable journals and conferences would be an added advantage.
To apply, please visit our website at: https://portal.smart.mit.edu/careers/career-opportunities
Interested applicants are invited to send their full CV/resume, cover letter and a list of three references (including names and contact information). We regret that only shortlisted candidates will be notified.