smollm:135m for testing purposes
smollm is a model by Hugging Face; it's good for testing and CPU inference.
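
For reference, pulling the model and serving it looks roughly like this (a sketch based on the commands touched in this change; the `pull` step is assumed and output is elided):

```
$ ramalama pull ollama://smollm:135m
$ ramalama serve -d -p 8080 --name mymodel ollama://smollm:135m
```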

Signed-off-by: Eric Curtin <[email protected]>
ericcurtin committed Jan 9, 2025
1 parent fbac528 commit a9ecebc
Showing 6 changed files with 8 additions and 7 deletions.
README.md (1 addition, 1 deletion)
@@ -150,7 +150,7 @@ You can `list` all models pulled into local storage.
```
$ ramalama list
NAME                                                                 MODIFIED      SIZE
-ollama://tiny-llm:latest                                            16 hours ago  5.5M
+ollama://smollm:135m                                                16 hours ago  5.5M
huggingface://afrideva/Tiny-Vicuna-1B-GGUF/tiny-vicuna-1b.q2_k.gguf  14 hours ago  460M
ollama://granite-code:3b                                             5 days ago    1.9G
ollama://granite-code:latest                                         1 day ago     1.9G
```
docs/ramalama-list.1.md (2 additions, 2 deletions)
@@ -31,7 +31,7 @@ List all Models downloaded to users homedir
```
$ ramalama list
NAME                                                                 MODIFIED      SIZE
-ollama://tiny-llm:latest                                            16 hours ago  5.5M
+ollama://smollm:135m                                                16 hours ago  5.5M
huggingface://afrideva/Tiny-Vicuna-1B-GGUF/tiny-vicuna-1b.q2_k.gguf  14 hours ago  460M
ollama://granite-code:3b                                             5 days ago    1.9G
ollama://granite-code:latest                                         1 day ago     1.9G
ollama://moondream:latest                                            6 days ago
```

@@ -41,7 +41,7 @@
List all Models in json format
```
$ ramalama list --json
-{"models": [{"name": "oci://quay.io/mmortari/gguf-py-example/v1/example.gguf", "modified": 427330, "size": "4.0K"}, {"name": "huggingface://afrideva/Tiny-Vicuna-1B-GGUF/tiny-vicuna-1b.q2_k.gguf", "modified": 427333, "size": "460M"}, {"name": "ollama://tiny-llm:latest", "modified": 420833, "size": "5.5M"}, {"name": "ollama://mistral:latest", "modified": 433998, "size": "3.9G"}, {"name": "ollama://granite-code:latest", "modified": 2180483, "size": "1.9G"}, {"name": "ollama://tinyllama:latest", "modified": 364870, "size": "609M"}, {"name": "ollama://tinyllama:1.1b", "modified": 364866, "size": "609M"}]}
+{"models": [{"name": "oci://quay.io/mmortari/gguf-py-example/v1/example.gguf", "modified": 427330, "size": "4.0K"}, {"name": "huggingface://afrideva/Tiny-Vicuna-1B-GGUF/tiny-vicuna-1b.q2_k.gguf", "modified": 427333, "size": "460M"}, {"name": "ollama://smollm:135m", "modified": 420833, "size": "5.5M"}, {"name": "ollama://mistral:latest", "modified": 433998, "size": "3.9G"}, {"name": "ollama://granite-code:latest", "modified": 2180483, "size": "1.9G"}, {"name": "ollama://tinyllama:latest", "modified": 364870, "size": "609M"}, {"name": "ollama://tinyllama:1.1b", "modified": 364866, "size": "609M"}]}
```

## SEE ALSO
docs/ramalama-serve.1.md (1 addition, 1 deletion)
@@ -70,7 +70,7 @@ require HTTPS and verify certificates when contacting OCI registries
### Run two AI Models at the same time. Notice both are running within Podman Containers.
```
-$ ramalama serve -d -p 8080 --name mymodel ollama://tiny-llm:latest
+$ ramalama serve -d -p 8080 --name mymodel ollama://smollm:135m
09b0e0d26ed28a8418fb5cd0da641376a08c435063317e89cf8f5336baf35cfa
$ ramalama serve -d -n example --port 8081 oci://quay.io/mmortari/gguf-py-example/v1/example.gguf
```
shortnames/shortnames.conf (1 addition, 0 deletions)
@@ -5,6 +5,7 @@
"granite:8b" = "ollama://granite3.1-dense:8b"
"ibm/granite" = "ollama://granite3.1-dense:8b"
"ibm/granite:2b" = "ollama://granite3.1-dense:2b"
+"smollm:135m" = "ollama://smollm:135m"
"ibm/granite:7b" = "huggingface://instructlab/granite-7b-lab-GGUF/granite-7b-lab-Q4_K_M.gguf"
"ibm/granite:8b" = "ollama://granite3.1-dense:8b"
"granite:7b" = "huggingface://instructlab/granite-7b-lab-GGUF/granite-7b-lab-Q4_K_M.gguf"
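
With the new shortnames.conf entry in place, the short name should resolve to the full `ollama://smollm:135m` reference wherever a model argument is accepted; a sketch (the `run` subcommand here is shown for illustration):

```
$ ramalama run smollm:135m
```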
test/system/040-serve.bats (2 additions, 2 deletions)
@@ -66,7 +66,7 @@ verify_begin=".*run --rm -i --label RAMALAMA --security-opt=label=disable --name
skip "Seems to cause race conditions"
skip_if_nocontainer

-model=ollama://tiny-llm:latest
+model=ollama://smollm:135m
container1=c_$(safename)
container2=c_$(safename)

@@ -99,7 +99,7 @@ verify_begin=".*run --rm -i --label RAMALAMA --security-opt=label=disable --name
skip "Seems to cause race conditions"
skip_if_nocontainer

-model=ollama://tiny-llm:latest
+model=ollama://smollm:135m
container=c_$(safename)
port1=8100
port2=8200
test/system/helpers.bash (1 addition, 1 deletion)
@@ -25,7 +25,7 @@ RAMALAMA_NONLOCAL_IMAGE_FQN="$RAMALAMA_TEST_IMAGE_REGISTRY/$RAMALAMA_TEST_IMAGE_
# Because who wants to spell that out each time?
IMAGE=$RAMALAMA_TEST_IMAGE_FQN

-MODEL=ollama://ben1t0/tiny-llm:latest
+MODEL=ollama://smollm:135m

load helpers.podman

