john 02daa7bb97 Add SFT training script and run Qwen3-0.6B-Base fine-tune
Train Qwen3-0.6B-Base (596M params) on 36K folksy proverb pairs
using full SFT with HuggingFace TRL. 3 epochs, 11 min on RTX 4090.

Results: train_loss=0.954, eval_loss=1.032, test_loss=1.031
Model checkpoint at folksy-model/final/ (not committed — 1.2 GB)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 22:07:23 -04:00
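The commit above describes a full-parameter SFT run on Qwen3-0.6B-Base using Hugging Face TRL. A minimal sketch of what such a training script might look like is below; the dataset paths, batch size, and learning rate are assumptions for illustration, not values taken from the repository's actual scripts/ directory (only the base model, epoch count, and output directory come from the commit message).

```python
# Hypothetical sketch of a full SFT run with Hugging Face TRL, matching the
# commit description (Qwen3-0.6B-Base, 3 epochs, checkpoint at folksy-model/final).
# Dataset file names, batch size, and learning rate are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumed: the 36K training pairs live in JSONL files under corpus/.
dataset = load_dataset(
    "json",
    data_files={
        "train": "corpus/train_pairs.jsonl",  # assumed path
        "eval": "corpus/eval_pairs.jsonl",    # assumed path
    },
)

config = SFTConfig(
    output_dir="folksy-model",
    num_train_epochs=3,              # from the commit message
    per_device_train_batch_size=8,   # assumed
    learning_rate=2e-5,              # assumed
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-0.6B-Base",    # TRL loads the model from a hub id string
    args=config,
    train_dataset=dataset["train"],
    eval_dataset=dataset["eval"],
)
trainer.train()
trainer.save_model("folksy-model/final")
```

On an RTX 4090 a full fine-tune of a ~600M-parameter model at this scale fits in memory without parameter-efficient tricks, which is consistent with the reported 11-minute wall time for 3 epochs.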
Name | Last commit | Date
corpus | Add naturalization pass — 9,025 sayings, 36K training pairs | 2026-03-10 07:24:37 -04:00
data | Fix generator quality issues and run initial corpus pipeline | 2026-03-10 04:33:56 -04:00
examples | Initial 'folksy idiom' generator | 2026-02-15 14:04:25 -05:00
schemas | Initial 'folksy idiom' generator | 2026-02-15 14:04:25 -05:00
scripts | Add SFT training script and run Qwen3-0.6B-Base fine-tune | 2026-03-31 22:07:23 -04:00
.gitignore | Add SFT training script and run Qwen3-0.6B-Base fine-tune | 2026-03-31 22:07:23 -04:00
CORPUS_GENERATION_SPEC.md | corpus generation (work from mid february) | 2026-03-09 19:52:09 -04:00
CORPUS_QUALITY_REVIEW.md | Add SFT training script and run Qwen3-0.6B-Base fine-tune | 2026-03-31 22:07:23 -04:00
EVALUATION.md | corpus generation (work from mid february) | 2026-03-09 19:52:09 -04:00
folksy_generator.py | Fix generator quality issues and run initial corpus pipeline | 2026-03-10 04:33:56 -04:00
FOLKSY_GENERATOR_SPEC.md | Initial 'folksy idiom' generator | 2026-02-15 14:04:25 -05:00
GPU_TRAINING_REQUIREMENTS.md | Add SFT training script and run Qwen3-0.6B-Base fine-tune | 2026-03-31 22:07:23 -04:00
GRAPH_ENHANCEMENT_SPEC.md | corpus generation (work from mid february) | 2026-03-09 19:52:09 -04:00