llama-on-android

A simple guide on how to install llama on your Android device using Termux and proot-distro

A recorded video of llama running is at the end of the page.

Installation of proot-distro

Install Termux from F-Droid or from GitHub

https://f-droid.org/en/packages/com.termux/

Update all the packages in Termux

  pkg update 

Now install git and curl in Termux

  pkg install git
  pkg install curl

Now clone the proot-distro repository to your home folder

  git clone https://github.com/termux/proot-distro.git

Move into the proot-distro folder and run the installation script

  cd proot-distro
  ./install.sh

After installing proot-distro, list the available distros

  proot-distro list

Then install Ubuntu from the list and log in to it

  proot-distro install ubuntu
  proot-distro login ubuntu

For more info, use the following command

  proot-distro help
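
proot-distro can also run a single command inside a distro without opening an interactive shell. This is a small sketch of that usage; check the help output above for the exact syntax on your version:

  proot-distro login ubuntu -- uname -a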

Installation of ollama client

The ollama client software lets you run local LLMs on your device.

Install ollama by running this command inside the Ubuntu proot-distro session

  curl -fsSL https://ollama.com/install.sh | sh

After installing the ollama client, start the service

  ollama serve &    # runs in the background
  ollama serve      # or run it in a separate session
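
Once the service is running, you can confirm it is up by querying the ollama HTTP API, which listens on port 11434 by default:

  curl http://localhost:11434/api/version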

Install an LLM according to the available RAM of your phone.

Now use this command to list all the LLMs available on the system

  ollama list

For me there was an error because the system time was not synced with the current time, so the LLM could not be downloaded from the server.

Install ntpdate to sync the Ubuntu time with the actual time

  apt install ntpdate
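
After installing it, run ntpdate against an NTP server to set the clock; pool.ntp.org is used here only as an example server:

  ntpdate pool.ntp.org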

Now continue with the installation of the LLM

  ollama run llama3.2:1b 

This downloads the smallest and lightest Llama model to your phone and automatically starts an interactive chat session with it.
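
The model can also be queried over the ollama HTTP API instead of the interactive session. This is a minimal sketch assuming ollama serve is still running on its default port 11434:

  curl http://localhost:11434/api/generate -d '{
    "model": "llama3.2:1b",
    "prompt": "Why is the sky blue?",
    "stream": false
  }'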

Some useful commands inside the chat session

  /bye    - exits from the chat
  /clear  - clears the current session memory

Make sure to kill the ollama processes after use

  top
  kill <pid>
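
If pgrep and pkill are available in the Ubuntu proot, they offer a quicker way to find and stop the ollama processes; this sketch assumes the process is named ollama:

  pgrep -a ollama   # list matching processes with their PIDs
  pkill ollama      # stop them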

Please install an LLM that is smaller than your average available RAM.

For more LLMs, go to https://ollama.com/library

Installation of a web interface for chatting with ollama

We are going to use Open WebUI to connect to the ollama API. Install Open WebUI using Python, or run it with Docker.

Make sure that you are using Python 3.11, otherwise it will lead to errors and a faulty installation.

  python --version
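
If your default python is not 3.11, one option is a dedicated virtual environment built with a Python 3.11 interpreter. This is a sketch that assumes python3.11 and its venv module are already installed in the Ubuntu proot:

  python3.11 -m venv ~/openwebui-env
  source ~/openwebui-env/bin/activate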

Now install open-webui

  pip install open-webui

Start the Open WebUI server

  open-webui serve
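
If the web interface does not pick up the ollama API automatically, it can be pointed at the server explicitly through the OLLAMA_BASE_URL environment variable, assuming ollama is serving on its default port:

  OLLAMA_BASE_URL=http://127.0.0.1:11434 open-webui serve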

After successful installation, open the web interface in your browser

  localhost:8080

The first account you register becomes the default admin, and you can customise a lot in the WebUI:

  • run multiple models at the same time
  • customise your own model based on an existing one (see the sketch after this list)
  • make new user accounts, restrict access, and a lot more
  • integrate Mistral and other AI models to add more features
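
A customised model can also be built at the ollama level with a Modelfile, which Open WebUI will then list alongside the base models. The name phone-assistant and the system prompt below are just examples:

  printf 'FROM llama3.2:1b\nSYSTEM You are a concise assistant running on a phone.\n' > Modelfile
  ollama create phone-assistant -f Modelfile
  ollama run phone-assistant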

Screenshots

proot-distros installed

ollama successfully installed

sample output from ollama

sample video: ollama.mp4

Support

For support, email [email protected] or [email protected]
