tys's Guide to Local, Mobile LLMs

This guide is currently in-progress!

TL;DR

The era of Local LLMs on Mobile Devices is here!
iOS: Use LocalTavern
Android: Use ChatterUI
You need 6GB of RAM to run 4B models, or 12GB of RAM to run 8B models (q4_k_m).
(read: you need recent hardware!)

Introduction

Briefly, a "Local LLM" refers to running an LLM (Large Language Model) on your own hardware, instead of connecting to a third-party service such as ChatGPT or Horde. This provides greater flexibility and privacy, but at the price of lower performance and the often significant expense of acquiring capable hardware.

With the increasing capability of mobile devices and the rising cost (and rarity) of desktop computers, cellphones and tablets offer a far more valuable and capable investment for those planning to run Local LLMs.

This guide is focused solely on deployment of Local LLMs on mobile hardware such as phones and tablets. Other guides can be found at the end of this document.

Why Mobile?

For example: $1,100 purchases an iPhone (17 Pro) capable of running an 8B model locally out-of-the-box. In addition to capable, portable AI hardware, you also receive a new cellphone, with a warranty, that requires no assembly or setup.

By contrast, $1,100 (prices vary) may purchase the parts for a similarly capable PC, but assembling and provisioning that computer requires sophisticated technical knowledge. While this route offers significantly more flexibility, the total cost is significantly higher, as you...still need a cellphone.

While having both modern mobile hardware and a dedicated PC is ideal, the future is less and less desktop/laptop oriented. This guide has therefore been written for multi-functional, mobile hardware.

Hardware Requirements

For the purposes of this guide, Local LLM hardware can be divided into three categories - Traditional Computers, Integrated Computers, and Mobile Devices.

Traditional Computers

"Traditional Computers" refers to the typical architecture where the CPU is discrete from the RAM (even if the RAM and/or CPU is soldered or otherwise permanently fixed to the motherboard). Most powerful machines used for Local LLM inference are PCs of this kind equipped with powerful GPU(s), generally chosen to provide a large amount of VRAM. While this expensive architecture is not a bad choice, it is well covered in other guides and is not otherwise mentioned here.

Integrated Computers and Apple Silicon

In contrast, some computers integrate the RAM with the CPU. Briefly stated, this gives the system RAM enough bandwidth to serve LLMs much like VRAM. Speeds are still lower than a discrete GPU's VRAM, but this is offset by the large amount of system RAM available.

The most notable implementation of this architecture is Apple Silicon, featured in Apple's M-series devices. As a result, some of the largest models are most readily run on Apple hardware with large amounts of system RAM, such as the 512GB-equipped Mac Studio.

Support for these devices is mostly built into existing desktop software, and is best covered in other guides.

Mobile Devices

Predating the modern LLM-capable Integrated Computer, mobile devices have long integrated the RAM with the CPU. Recent advances in processor design, as well as larger RAM capacities, have made this category very suitable for Local LLM inference.

My general rule of thumb is that a device can load a model (at 4-bit precision) with roughly 2/3 as many billions of parameters as it has GB of RAM: e.g. 8B models can be loaded on devices with 12GB of RAM or more.
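
To make the arithmetic concrete, here is a small Python sketch of that rule of thumb. The 0.6 bytes-per-parameter figure for q4_k_m and the fixed overhead are rough assumptions of mine, not exact llama.cpp accounting, so treat the output as a ballpark estimate only.

    # Rough sizing of 4-bit (q4_k_m) models against device RAM.
    # The constants below are approximations, not exact runtime figures.

    def max_model_size_b(ram_gb: float) -> float:
        # The guide's rule of thumb: ~2/3 as many billions of parameters
        # as the device has GB of RAM.
        return ram_gb * 2 / 3

    def q4_weights_gb(params_b: float) -> float:
        # q4_k_m averages roughly 4.8 bits per weight, i.e. ~0.6 bytes/parameter.
        return params_b * 0.6

    for ram in (6, 8, 12, 16):
        size = max_model_size_b(ram)
        print(f"{ram}GB RAM -> up to ~{size:.0f}B params "
              f"(~{q4_weights_gb(size):.1f}GB of weights, plus KV cache and OS)")

On a 6GB device this lands on the 4B figure from the TL;DR, and on a 12GB device it lands on 8B.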

iOS

The locked-down nature of iOS limits Local LLM software to apps specifically created for or ported to the platform.

While many apps exist which can load LLMs on-device, only one, LocalTavern, supports character cards and character chats locally.

Android

Android users have two paths for Local LLMs: Termux and dedicated apps.

Termux provides a terminal emulator, allowing a user to interact with the Linux environment underpinning Android. Thanks to this, the gold-standard AI chat software can be used directly. Here are the instructions to install SillyTavern (frontend) and llama.cpp (backend).
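
Once llama.cpp's llama-server is running under Termux, SillyTavern (or any other frontend) connects to it over its OpenAI-compatible HTTP API. The Python sketch below is a quick way to confirm the backend is answering before configuring the frontend; it assumes the server is on its default address of 127.0.0.1:8080 with a model already loaded, which may differ on your setup.

    # Quick connectivity check for a local llama.cpp server (llama-server).
    # Assumes the default address 127.0.0.1:8080 and a model already loaded.
    import json
    import urllib.request

    payload = json.dumps({
        "messages": [{"role": "user", "content": "Reply with a short greeting."}],
        "max_tokens": 32,
    }).encode()

    request = urllib.request.Request(
        "http://127.0.0.1:8080/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(request) as response:
        reply = json.load(response)

    # Print the model's reply; any answer at all means the backend is reachable.
    print(reply["choices"][0]["message"]["content"])

If this prints a reply, the same address can then be entered in SillyTavern's API connection settings.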

As on iOS, many apps can load LLMs on-device. The only one we can currently recommend for character chats is ChatterUI. Note that it is not currently available on an official app store, so sideloading must be enabled (and used at your own risk!).

Other Guides

Statuo's Local LLM Guide (PC-focused)

Pub: 16 Dec 2025 06:52 UTC

Edit: 22 Dec 2025 05:15 UTC
