Students move from "API consumers" to "Model Owners": a deep dive into taking a generic foundation model (Llama 3.1 or Mistral) and fine-tuning it for specialized tasks using no-code tools.
AI enthusiasts, developers, and data hobbyists aged 16+ wanting control, privacy, and ownership.
Fine-tuning with HuggingFace AutoTrain, Cloud Training (MonsterAPI), Local Inference (Ollama), and Training Visualization (Weights & Biases).
Data Literacy (Spreadsheets), Logical Workflow Mapping, Conceptual AI Understanding.
Session 1: July 6 - July 17, 2026
Session 2: August 3 - August 14, 2026 (EN)
This course is offered in both English and Chinese; check the session dates for your preferred language.
Tuition: NT$XX,000
Deposit: NT$2,000
Early Bird Deal: Save 15% (book by March 1st)
The hub for hosting, sharing, and downloading open weights models and datasets.
Tools for serving LLMs locally with optimized inference speeds.
Efficient fine-tuning frameworks (QLoRA) to customize models on consumer hardware.
Quantization formats for running massive models on standard laptops (CPU / Apple Metal).
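To see why 4-bit quantization matters, here is a back-of-the-envelope memory estimate (a sketch only; real GGUF files add some overhead for quantization scales and metadata, so exact sizes vary):

```python
def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage size in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

llama_8b = 8e9  # parameter count of an 8B model

fp16 = model_size_gb(llama_8b, 16)   # ~16 GB: needs a large GPU
q4 = model_size_gb(llama_8b, 4.5)    # ~4.5 GB: fits a laptop
                                     # (4 bits per weight plus scale overhead)

print(f"fp16: {fp16:.1f} GB, 4-bit: {q4:.1f} GB")
```

This is the whole intuition behind Day 03's "Shrink Ray": shaving each weight from 16 bits to roughly 4 turns a GPU-only model into one a standard laptop can load.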
| Day | Topic | Hands-On Activity |
|---|---|---|
| 01 | Model Zoos | The Download: Pulling Llama-3-8B from Hugging Face. |
| 02 | Local Inference | The Localhost: Running models via Ollama and LM Studio. |
| 03 | Quantization | The Shrink Ray: Converting models to GGUF (4-bit). |
| 04 | Prompt Templates | Chat Formats: Understanding ChatML and Instruction tuning. |
| 05 | API Serving | The Endpoint: Exposing a local model as an OpenAI-compatible API. |
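Day 05's endpoint exercise can be previewed with a short sketch. Ollama exposes an OpenAI-compatible API at `http://localhost:11434/v1` by default, so the request body is a standard chat-completion payload (the model name `llama3` is an assumption; substitute whatever model you pulled):

```python
import json
import urllib.request

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }

payload = build_chat_request("llama3", "Explain LoRA in one sentence.")

# With an Ollama server running locally, you would POST it like so:
# req = urllib.request.Request(
#     "http://localhost:11434/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Because the payload shape matches OpenAI's API, any client library or app built for OpenAI can be pointed at your local model just by changing the base URL.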
| Day | Topic | Hands-On Activity |
|---|---|---|
| 06 | Dataset Prep | The Clean-Up: Formatting JSONL data for training. |
| 07 | LoRA & PEFT | Parameter Efficient Tuning: Concepts and adapters. |
| 08 | Fine-Tuning Run | The Training: Fine-tuning Mistral on a custom dataset using Unsloth. |
| 09 | Evaluation | The Benchmark: Testing the fine-tuned model vs base model. |
| 10 | Merged Models | The Frankenstein: Creating a model merge (SLERP/TIES). |
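Day 06's clean-up step can be sketched as follows: turning a spreadsheet export into the JSONL chat format most trainers expect, one JSON object per line (the column names `question` and `answer` are assumptions; match them to your own spreadsheet):

```python
import csv
import io
import json

def rows_to_jsonl(csv_text: str) -> str:
    """Convert spreadsheet rows into chat-style JSONL training records."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        record = {
            "messages": [
                {"role": "user", "content": row["question"].strip()},
                {"role": "assistant", "content": row["answer"].strip()},
            ]
        }
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

sample = "question,answer\nWhat is LoRA?,A low-rank adapter method.\n"
print(rows_to_jsonl(sample))
```

Each line is an independent JSON object, which is exactly why the format is robust for training: a bad row can be dropped without re-parsing the whole file.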