diff --git a/assets/alibaba.yaml b/assets/alibaba.yaml
index 3e1c9f30..2e900a97 100644
--- a/assets/alibaba.yaml
+++ b/assets/alibaba.yaml
@@ -95,9 +95,9 @@
   url: https://qwenlm.github.io/blog/qwen1.5/
   model_card: https://huggingface.co/Qwen/Qwen1.5-72B
   modality: text; text
-  analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, CMMLU,
-    all standard English and Chinese benchmarks, and chat models are evaluated on Chatbot Arena,
-    AlpacaEval, MT-Bench, etc.
+  analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP,
+    BBH, CMMLU, all standard English and Chinese benchmarks, and chat models are
+    evaluated on Chatbot Arena, AlpacaEval, MT-Bench, etc.
   size: 72B parameters (dense)
   dependencies: []
   training_emissions: unknown
@@ -118,15 +118,15 @@
   organization: Qwen Team
   description: Qwen 1.5 is the next iteration in their Qwen series, consisting of
     Transformer-based large language models pretrained on a large volume of data,
-    including web texts, books, codes, etc. Qwen 1.5 MoE is the MoE model of the Qwen
-    1.5 series.
+    including web texts, books, codes, etc. Qwen 1.5 MoE is the MoE model of the
+    Qwen 1.5 series.
   created_date: 2024-03-28
   url: https://qwenlm.github.io/blog/qwen-moe/
   model_card: https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B
   modality: text; text
-  analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, CMMLU,
-    all standard English and Chinese benchmarks, and chat models are evaluated on Chatbot Arena,
-    AlpacaEval, MT-Bench, etc.
+  analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP,
+    BBH, CMMLU, all standard English and Chinese benchmarks, and chat models are
+    evaluated on Chatbot Arena, AlpacaEval, MT-Bench, etc.
   size: 14B parameters with 2.7B parameters for activation (MoE)
   dependencies: []
   training_emissions: unknown
diff --git a/assets/maya.yaml b/assets/maya.yaml
index 44c6d5cf..e18d1fbe 100644
--- a/assets/maya.yaml
+++ b/assets/maya.yaml
@@ -2,7 +2,8 @@
 - type: model
   name: GodziLLa 2
   organization: Maya Philippines
-  description: GodziLLa 2 is an experimental combination of various proprietary LoRAs from Maya Philippines and Guanaco LLaMA 2 1K dataset, with LLaMA 2.
+  description: GodziLLa 2 is an experimental combination of various proprietary
+    LoRAs from Maya Philippines and Guanaco LLaMA 2 1K dataset, with LLaMA 2.
   created_date: 2023-08-11
   url: https://huggingface.co/MayaPH/GodziLLa2-70B
   model_card: https://huggingface.co/MayaPH/GodziLLa2-70B