Key responsibilities
- Conduct research on cutting-edge technologies for trustworthy AI (LLMs, AIGC, DNNs, etc.), including defense against adversarial attacks, model confidentiality protection, data privacy protection, model verification, and content security.
- Implement tools and modules that evaluate trustworthy AI properties, strengthen them, and/or detect violations at runtime, to improve our products and services.
- Collaborate with universities to bridge the gap between academic research and industry practice.
- Participate in relevant standardization activities and contribute to ecosystems.
- Provide insights and strategies for AI trustworthiness in products and services, and help design AI governance, security, and privacy protection guidelines.
Requirements
- Ph.D. in Computer Science or a related field.
- Publication record at security or AI conferences (e.g., CCS, S&P, USENIX Security, NDSS, NeurIPS, ICML, ICLR, AAAI) or other related venues.
- Programming proficiency in Python; proficiency in other languages is a plus.
- Hands-on experience with deep learning toolkits such as TensorFlow, PyTorch, or MindSpore.
Next Step
Click “Apply” or send your resume to: Ryce [email protected]
EA Licence No. 91C2918 | Personnel Registration No. R23117258