Location: Remote
Duration: 2–4 months (project-based)
Type: Contract / Research Collaboration (Paid)
About the Project
We are looking for a Master’s or PhD student to work on fine-tuning large language models (LLMs) for domain-specific tasks. The goal is to take an existing pretrained model (e.g., a Llama-class model from Meta AI or similar) and specialise it for a narrow, high-value use case using efficient fine-tuning techniques.
This is a hands-on applied project designed for someone who wants real-world experience deploying and optimising LLM systems.
Help drive the next wave of applied AI by demonstrating how fine-tuned LLMs can unlock advanced, real-world use cases beyond what general-purpose foundation models offer. Organisations that need domain-specific accuracy, self-hosted deployment, customisable workflows, or performance beyond out-of-the-box capabilities increasingly rely on fine-tuned models.
Through this project, you will contribute to building specialised AI systems that deliver better accuracy, efficiency, and control than general-purpose baselines. You will also help bridge the gap between academic knowledge and real-world application by applying fine-tuning techniques to concrete business problems.
What You’ll Work On
- Fine-tuning pre-trained LLMs on small to medium datasets (500–20k examples)
- Implementing parameter-efficient fine-tuning (e.g., LoRA-style methods; see the sketch after this list)
- Optimising training for cost and performance
- Running experiments on GPU cloud infrastructure
- Evaluating model performance and tradeoffs (specialisation vs generalisation)
- Deploying fine-tuned models for inference
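To make the scope concrete, here is a minimal sketch of the kind of parameter-efficient fine-tuning run the project involves, using Hugging Face Transformers with the PEFT library. The base model name, dataset file, and hyperparameters are illustrative placeholders, not project specifics.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "meta-llama/Llama-3.1-8B"   # placeholder base model
DATA = "domain_examples.jsonl"     # placeholder dataset (500-20k rows of {"text": ...})

tok = AutoTokenizer.from_pretrained(BASE)
tok.pad_token = tok.pad_token or tok.eos_token  # many LLM tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# LoRA: freeze the base weights and train small low-rank adapter matrices
# injected into the attention projections instead of the full model.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))
model.print_trainable_parameters()  # typically well under 1% of all weights

ds = load_dataset("json", data_files=DATA)["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=1024),
            batched=True, remove_columns=ds.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           gradient_accumulation_steps=4, num_train_epochs=3,
                           learning_rate=2e-4, bf16=True, logging_steps=10),
    train_dataset=ds,
    # mlm=False -> causal LM objective; the collator pads and copies input_ids to labels
    data_collator=DataCollatorForLanguageModeling(tokenizer=tok, mlm=False),
).train()

model.save_pretrained("lora-out/adapter")  # saves only the adapter weights (MBs, not GBs)
```

Because only the low-rank adapters train while the base weights stay frozen, a run of this shape typically fits on a single cloud GPU, which is what makes the cost and performance optimisation above tractable.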
Experience
- Strong Python skills
- Experience with deep learning frameworks: PyTorch (preferred) or TensorFlow
- Experience with Hugging Face Transformers or similar ecosystems
- Hands-on experience training or fine-tuning transformer models on GPUs (local or cloud-based)
- Previous experience using cloud platforms for model training or deployment (e.g., AWS, GCP, Azure, RunPod or similar GPU providers)
- Experience working with or fine-tuning open-weight LLM families (e.g., Gemma 3, Qwen 3.5, Llama 4, GPT-OSS, Mistral)
- Hands-on experience with LoRA
Understanding of:
- Fine-tuning vs pretraining
- Overfitting and generalisation
- Model evaluation
- Strong business awareness: the ability to understand the context of the fine-tuning task and translate domain requirements into clear modelling objectives
What You Bring
- MSc or PhD student in Computer Science, Machine Learning, AI, or related field
- Alternatively, 6 months of hands-on experience training and fine-tuning deep learning models
- Prior work on LLMs in research or industry
- Experience fine-tuning at least one transformer model
- Comfortable working independently
- Interested in applied AI and real-world constraints (cost, latency, memory)
What You’ll Gain
- Real-world experience fine-tuning large models (30B–100B parameter class)
- Exposure to production constraints and deployment
- Opportunity to co-author technical writeups if applicable
- Strong applied portfolio project
- 100% Remote Work: Work from anywhere with flexibility and autonomy
- Dynamic, High-Impact Projects: Work on cutting-edge ML and GenAI solutions across diverse industries
- International Clients: Collaborate with global organizations and solve real-world challenges at scale
- Urban Sports Club Membership: Supporting your physical and mental wellbeing
- Monthly Bolt Credits: For rides
- Company Events & Offsites: Regular team gatherings to connect, collaborate, and celebrate
TensorOps Office: New York, New York, USA