- [ Nov 2024 ] Democratizing AI Model optimization with the new Olive CLI
- [ Nov 2024 ] Unlocking NLP Potential: Fine-Tuning with Microsoft Olive (Ignite Pre-Day Lab PRE016)
- [ Nov 2024 ] Olive supports generating models for MultiLoRA serving on the ONNX Runtime
- [ Oct 2024 ] Windows Dev Chat: Optimizing models from Hugging Face for the ONNX Runtime (video)
- [ May 2024 ] AI Toolkit - VS Code extension that uses Olive to fine-tune models
- [ Mar 2024 ] Fine-tune SLM with Microsoft Olive
- [ Jan 2024 ] Accelerating SD Turbo and SDXL Turbo Inference with ONNX Runtime and Olive
- [ Dec 2023 ] Windows AI Studio - VS Code extension that uses Olive to fine-tune models
- [ Nov 2023 ] Elevating the developer experience on Windows with new AI tools and productivity tools
- [ Nov 2023 ] Accelerating LLaMA-2 Inference with ONNX Runtime using Olive
- [ Nov 2023 ] Olive 0.4.0 released with support for LoRA fine-tuning and Llama2 optimizations
- [ Nov 2023 ] Intel and Microsoft Collaborate to Optimize DirectML for Intel® Arc™ Graphics Solutions using Olive
- [ Nov 2023 ] Running Olive Optimized Llama2 with Microsoft DirectML on AMD Radeon Graphics
- [ Oct 2023 ] AMD Microsoft Olive Optimizations for Stable Diffusion Performance Analysis
- [ Sep 2023 ] Running Optimized Automatic1111 Stable Diffusion WebUI on AMD GPUs
- [ Jul 2023 ] Build accelerated AI apps for NPUs with Olive
- [ Jun 2023 ] Olive: A user-friendly toolchain for hardware-aware model optimization
- [ May 2023 ] Optimize DirectML performance with Olive
- [ May 2023 ] Optimize Stable Diffusion Using Olive