Private LLM - Local AI Chatbot 12+

Secure Private AI Chatbot

Numen Technologies Limited

    • £6.99

Description

Now with the latest Meta Llama 3.3 70B model (on Apple Silicon Macs with 48GB or more of RAM) and the Qwen 2.5 and Qwen 2.5 Coder (0.5B to 32B) families of models.

Meet Private LLM: Your Secure, Offline AI Assistant for macOS

Private LLM brings advanced AI capabilities directly to your iPhone, iPad, and Mac—all while keeping your data private and offline. With a one-time purchase and no subscriptions, you get a personal AI assistant that works entirely on your device.

Key Features:

- Local AI Functionality: Interact with a sophisticated AI chatbot without needing an internet connection. Your conversations stay on your device, ensuring complete privacy.

- Wide Range of AI Models: Choose from a variety of open-source LLMs, including Llama 3.2, Llama 3.1, Google Gemma 2, Microsoft Phi-3, Mistral 7B, and StableLM 3B. Each model is optimized for iOS and macOS hardware using advanced OmniQuant quantization, which offers superior performance compared to traditional RTN quantization methods.

- Siri and Shortcuts Integration: Create AI-driven workflows without writing code. Use Siri commands and Apple Shortcuts to enhance productivity in tasks like text parsing and generation (see the sketch after this list).

- No Subscriptions or Logins: Enjoy full access with a single purchase. No need for subscriptions, accounts, or API keys. Plus, with Family Sharing, up to six family members can use the app.

- AI Language Services on macOS: Utilize AI-powered tools for grammar correction, summarization, and more across various macOS applications in multiple languages.

- Superior Performance with OmniQuant: Benefit from the advanced OmniQuant quantization process, which preserves the model's weight distribution for faster and more accurate responses, outperforming apps that use standard quantization techniques.
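
As an illustration of the Shortcuts integration described above, here is a minimal Swift sketch that drives a hypothetical user-created Shortcut wrapping one of Private LLM's actions from a script, via the macOS shortcuts command-line tool (macOS 12 and later). The Shortcut name and the runShortcut helper are illustrative assumptions, not part of the app's documented interface.

    import Foundation

    // Minimal sketch: drive a user-created Shortcut (hypothetical name below) that
    // wraps one of Private LLM's Shortcuts actions from a script, using the macOS
    // `shortcuts` command-line tool that ships with macOS 12 and later.
    func runShortcut(named name: String, input: String) throws -> String {
        let tmpDir = FileManager.default.temporaryDirectory
        let inputURL = tmpDir.appendingPathComponent("prompt.txt")
        let outputURL = tmpDir.appendingPathComponent("reply.txt")
        try input.write(to: inputURL, atomically: true, encoding: .utf8)

        let process = Process()
        process.executableURL = URL(fileURLWithPath: "/usr/bin/shortcuts")
        process.arguments = ["run", name,
                             "--input-path", inputURL.path,
                             "--output-path", outputURL.path]
        try process.run()
        process.waitUntilExit()

        return try String(contentsOf: outputURL, encoding: .utf8)
    }

    // Usage: pass text to a Shortcut named "Summarize with Private LLM" (a name
    // you would create yourself in the Shortcuts app) and print whatever it returns.
    do {
        let summary = try runShortcut(named: "Summarize with Private LLM",
                                      input: "Paste or pipe any long text here.")
        print(summary)
    } catch {
        print("Shortcut failed: \(error)")
    }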

Supported Model Families:

- Llama 3.3 70B
- Llama 3.2 Based Models
- Llama 3.1 Based Models
- Phi-3 Based Models
- Google Gemma 2 Based Models
- Mixtral 8x7B Based Models
- CodeLlama 13B Based Models
- Solar 10.7B Based Models
- Mistral 7B Based Models
- StableLM 3B Based Models
- Yi 6B Based Models
- Yi 34B Based Models
- Qwen 2.5 Based Models (0.5B to 32B)
- Qwen 2.5 Coder Based Models (0.5B to 32B)

For a full list of supported models, including detailed specifications, please visit privatellm.app/models.

Private LLM is a better alternative to generic llama.cpp and MLX wrapper apps like Ollama, LLM Farm, LM Studio, RecurseChat, etc., on three fronts:
1. Private LLM uses a faster mlc-llm based inference engine.
2. All models in Private LLM are quantized using the state-of-the-art OmniQuant quantization algorithm, while competing apps use naive round-to-nearest quantization (illustrated in the sketch below).
3. Private LLM is a fully native app built using C++, Metal, and Swift, while many of the competing apps are (bloated) Electron-based apps.
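
To make the quantization comparison concrete, the sketch below (a generic illustration, not Private LLM's implementation) shows naive round-to-nearest (RTN) 4-bit quantization of a weight block. RTN derives its scale directly from the block's min and max, so a single outlier stretches the range and coarsens every other weight; OmniQuant instead learns the clipping range and related transformation parameters to minimize that reconstruction error.

    import Foundation

    // Illustrative sketch of naive round-to-nearest (RTN) 4-bit quantization of a
    // block of weights. This is not Private LLM's code; it only shows where RTN
    // loses precision and what a learned clipping range (as in OmniQuant) targets.
    func rtnQuantize4bit(_ weights: [Float]) -> (codes: [UInt8], scale: Float, zero: Float) {
        // RTN takes the range straight from the observed min and max...
        let lo = weights.min() ?? 0
        let hi = weights.max() ?? 0
        let levels: Float = 15                                   // 4 bits -> 16 levels
        let scale = max(hi - lo, Float.leastNonzeroMagnitude) / levels
        // ...so a single outlier stretches `scale` and coarsens every other weight.
        let codes = weights.map { w -> UInt8 in
            let q = ((w - lo) / scale).rounded()
            return UInt8(min(max(q, 0), levels))
        }
        return (codes, scale, lo)
    }

    func dequantize(_ codes: [UInt8], scale: Float, zero: Float) -> [Float] {
        codes.map { Float($0) * scale + zero }
    }

    // A block with one outlier: RTN's reconstruction error is dominated by the
    // stretched range. OmniQuant-style methods instead learn a tighter clipping
    // range (and related scaling parameters), trading a little clipping error for
    // much finer resolution on the bulk of the weights.
    let block: [Float] = [0.01, -0.02, 0.03, 0.015, -0.01, 0.9]   // 0.9 is the outlier
    let (codes, scale, zero) = rtnQuantize4bit(block)
    let reconstructed = dequantize(codes, scale: scale, zero: zero)
    print(zip(block, reconstructed).map { $0.0 - $0.1 })          // per-weight error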

Private LLM for macOS delivers its best performance on Apple Silicon Macs with the Apple M1 chip or later. Users on older Intel Macs without eGPUs may experience reduced performance.

What’s New

Version 1.9.5

- Support for downloading 16 new models (varies by device capacity).
- Three new Llama 3.3 based uncensored models: EVA-LLaMA-3.33-70B-v0.0, Llama-3.3-70B-Instruct-abliterated and L3.3-70B-Euryale-v2.3.
- Hermes-3-Llama-3.2-3B and Hermes-3-Llama-3.1-8B models.
- FuseChat-Llama-3.2-1B-Instruct, FuseChat-Llama-3.2-3B-Instruct, FuseChat-Llama-3.1-8B-Instruct, FuseChat-Qwen-2.5-7B-Instruct and FuseChat-Gemma-2-9B-Instruct models.
- FuseChat-Llama-3.2-1B-Instruct also comes with an unquantized variant.
- EVA-D-Qwen2.5-1.5B-v0.0, EVA-Qwen2.5-7B-v0.1, EVA-Qwen2.5-14B-v0.2 and EVA-Qwen2.5-32B-v0.2 models.
- Llama-3.1-8B-Lexi-Uncensored-V2 model.
- Improved LaTeX rendering.
- Stability improvements and bug fixes.

Thank you for choosing Private LLM. We are committed to continually improving the app and making it more useful for you. For support requests and feature suggestions, please email us at support@numen.ie or tweet us at @private_llm. If you enjoy the app, leaving an App Store review is a great way to support us.

Ratings and Reviews

4.3 out of 5
14 Ratings

Tony the Vampire

Amazing!

Can’t believe my iPad is so powerful!! Works a charm on my M1. I downloaded Phi-3 no problem. You can also get it to talk by clicking on the text, then Speech. I then downloaded another model, which wasn’t shown in the list of installed models. I had to quit the app and go back into it to see the new models, then voila! [It may seem obvious, but worth mentioning: some users may not quit the app and may be quick to leave negative feedback.] Can I make a request? Can you add the best Aya23 model for translations?

Developer Response

Thanks for the review! Also, thanks for reporting the downloaded model list synchronization issue. We've fixed it and it'll go out with the next update. We'd have loved to add the aya-23-8B model, but sadly it's licensed under a CC-BY-NC license, making it legally untenable for us to add it. We'll be adding the newer Qwen2 models soon, which are liberally licensed and were trained on 29 languages (the Aya models were trained on 23 languages). We expect those models to do well on translation tasks.

ben_hdk853

Fantastic!

A clean and easy-to-use Mac-native implementation of Llama. It's early days, but I'd love to see this on the roadmap:
1. Ability to upload a file
2. Ability to read a folder of files (imagine being able to ask questions about a bunch of documents!)
3. Sharing options, incl. Markdown support

ben_hdk853

Fantastic

Hard to believe this kind of power is available locally. Can’t wait to see how this tech develops. Would love to be able to upload documents or large text files.

App Privacy

The developer, Numen Technologies Limited, indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer’s privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary based on, for example, the features you use or your age.

Supports

  • Family Sharing

    Up to six family members can use this app with Family Sharing enabled.

You Might Also Like

- MLC Chat (Productivity)
- Local Brain (Productivity)
- YourChat (Productivity)
- PocketGPT: Private AI (Productivity)
- Hugging Chat (Productivity)
- PocketPal AI (Productivity)