We are seeking Scientists / Senior Scientists to advance cutting-edge research in AI Safety, with a focus on adversarial robustness, hallucination mitigation, interpretability, alignment, and the ethical deployment of AI systems.
You will conduct cutting-edge R&D on AI safety projects and grants, designing, developing, and evaluating novel methods to ensure AI models behave reliably and in alignment with human values. These positions involve developing AI-driven applications, systems, and platforms for domains such as healthcare, the digital economy, advanced manufacturing, and public security.
Job Requirements:
- PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Strong track record of research in AI Safety (e.g., publications in conferences such as ICML, NeurIPS, ICLR, AAAI, ACL, EMNLP, CVPR, ICCV, and SIGIR, or journals such as JMLR, AIJ, IJCV, IEEE Transactions, and ACM Transactions).
- Proficiency in machine learning frameworks (e.g., PyTorch, TensorFlow).
- Experience with large-scale models (LLMs, VLMs, diffusion models).
- Excellent problem-solving, communication, and teamwork skills.
- For Senior Scientist: demonstrated leadership in research projects and collaborations.