Commit

yaml test
JustinLin610 committed Apr 4, 2024
1 parent 322c0e9 commit 04b18e8
Showing 2 changed files with 10 additions and 9 deletions.
16 changes: 8 additions & 8 deletions assets/alibaba.yaml

@@ -95,9 +95,9 @@
   url: https://qwenlm.github.io/blog/qwen1.5/
   model_card: https://huggingface.co/Qwen/Qwen1.5-72B
   modality: text; text
-  analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, CMMLU,
-    all standard English and Chinese benchmarks, and chat models are evaluated on Chatbot Arena,
-    AlpacaEval, MT-Bench, etc.
+  analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP,
+    BBH, CMMLU, all standard English and Chinese benchmarks, and chat models are
+    evaluated on Chatbot Arena, AlpacaEval, MT-Bench, etc.
   size: 72B parameters (dense)
   dependencies: []
   training_emissions: unknown
@@ -118,15 +118,15 @@
   organization: Qwen Team
   description: Qwen 1.5 is the next iteration in their Qwen series, consisting of
     Transformer-based large language models pretrained on a large volume of data,
-    including web texts, books, codes, etc. Qwen 1.5 MoE is the MoE model of the Qwen
-    1.5 series.
+    including web texts, books, codes, etc. Qwen 1.5 MoE is the MoE model of the
+    Qwen 1.5 series.
   created_date: 2024-03-28
   url: https://qwenlm.github.io/blog/qwen-moe/
   model_card: https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B
   modality: text; text
-  analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, CMMLU,
-    all standard English and Chinese benchmarks, and chat models are evaluated on Chatbot Arena,
-    AlpacaEval, MT-Bench, etc.
+  analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP,
+    BBH, CMMLU, all standard English and Chinese benchmarks, and chat models are
+    evaluated on Chatbot Arena, AlpacaEval, MT-Bench, etc.
   size: 14B parameters with 2.7B parameters for activation (MoE)
   dependencies: []
   training_emissions: unknown
3 changes: 2 additions & 1 deletion assets/maya.yaml

@@ -2,7 +2,8 @@
 - type: model
   name: GodziLLa 2
   organization: Maya Philippines
-  description: GodziLLa 2 is an experimental combination of various proprietary LoRAs from Maya Philippines and Guanaco LLaMA 2 1K dataset, with LLaMA 2.
+  description: GodziLLa 2 is an experimental combination of various proprietary
+    LoRAs from Maya Philippines and Guanaco LLaMA 2 1K dataset, with LLaMA 2.
   created_date: 2023-08-11
   url: https://huggingface.co/MayaPH/GodziLLa2-70B
   model_card: https://huggingface.co/MayaPH/GodziLLa2-70B
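The rewrapping in this commit is purely cosmetic: YAML folds the continuation lines of a multi-line plain scalar into a single space-joined string, so changing where a value wraps does not change the parsed result. A minimal sketch of that check (assumes PyYAML is installed; it is not part of this repository's tooling):

```python
import yaml  # PyYAML -- assumed available, not part of this repo

# Two wrappings of the same plain scalar, as in the alibaba.yaml change.
before = """\
analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, CMMLU,
  all standard English and Chinese benchmarks, and chat models are evaluated on Chatbot Arena,
  AlpacaEval, MT-Bench, etc.
"""
after = """\
analysis: Base models are evaluated on MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP,
  BBH, CMMLU, all standard English and Chinese benchmarks, and chat models are
  evaluated on Chatbot Arena, AlpacaEval, MT-Bench, etc.
"""

# YAML joins the continuation lines of a plain scalar with single spaces,
# so both wrappings load to the identical string.
assert yaml.safe_load(before) == yaml.safe_load(after)
print("wrappings parse identically")
```

The same property covers the `description` rewraps in both files; a check like this could run in CI to confirm that formatting-only commits leave the parsed asset data unchanged.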
