A*STAR Centre for Frontier AI Research (A*STAR CFAR) is seeking a motivated and skilled Research Engineer to join our team focused on advancing trustworthy AI, with a specific emphasis on privacy-preserving methods for adapting large language models (LLMs) to downstream tasks. The successful candidate will contribute to cutting-edge research on how to effectively tune or adapt LLMs without compromising either data privacy or model privacy.
Key Responsibilities:
The successful candidate's responsibilities will include, but are not limited to:
- Conduct research and development of LLM tuning paradigms.
- Directly contribute to experiments on privacy-preserving LLM tuning, including designing experimental details, writing reusable code, running evaluations, and organizing results.
- Directly contribute to agentic AI for code generation, compilation, debugging, etc.
- Evaluate utility, privacy, and efficiency trade-offs among compared baselines.
- Publish high-impact research in top-tier venues (e.g., NeurIPS, ICLR, ACL, IEEE S&P).
- Contribute to open-source tools and frameworks, and potentially guide junior researchers or interns.
Requirements:
- Degree in Computer Science, Machine Learning, or a related field.
- Strong background in deep learning, natural language processing, or large-scale optimization.
- Demonstrated experience working with open-source LLMs (e.g., fine-tuning, instruction tuning, prompt engineering).
- Familiarity with privacy-preserving machine learning concepts (e.g., federated learning, synthetic data).
- Strong programming skills in Python and experience with ML frameworks (e.g., PyTorch, HuggingFace Transformers).
- Good written and verbal communication skills.
Please submit your CV and a short research statement (if available) to Yin_haiyan@cfar.a-star.edu.sg and Li_Jing@cfar.a-star.edu.sg.