Install an AI model on your laptop.
Build something real with it. We ship you parts for the next one.
No API keys. No cloud bills. No data leaving your machine.
LLMs, vision, voice — just you, Ollama, and whatever you can dream up.
$ ollama pull gemma3n:e2b
pulling manifest... done
$ ollama run gemma3n:e2b
>>> help me build a study app
I'd love to help. What subject are you studying? I can generate flashcards, quizzes, or explain concepts in simpler terms. I run entirely on your machine. Your notes stay private.
Every AI product you use sends your data to someone else's computer. Your conversations, your homework, your personal notes, all processed on servers you don't control.
What if it didn't have to be that way?
Ollama lets you run real AI models on your own laptop — language, image, voice, and more. No internet required. No API key. No cost per token. You pull a model, you run it, you build on top of it. Everything stays on 127.0.0.1.
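Concretely, that means any app you write can talk to the model with a plain local HTTP request. Here's a minimal sketch in Python, assuming Ollama is running and using an example model name (swap in whatever you pulled):

import requests  # pip install requests

# Ollama serves an HTTP API on 127.0.0.1:11434 by default,
# so this request never leaves your machine.
resp = requests.post(
    "http://127.0.0.1:11434/api/generate",
    json={
        "model": "gemma3n:e2b",  # example; use any model you've pulled
        "prompt": "Explain spaced repetition in one sentence.",
        "stream": False,  # ask for a single JSON reply instead of a stream
    },
)
print(resp.json()["response"])  # the model's answer, generated locally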
localhost is a Hack Club You Ship, We Ship program. Build an app powered by a local AI model. Ship it. We ship you hardware to build the next one.
One command. Mac, Windows, Linux. Two minutes. You now have a local AI runtime.
Pull a small model. Write an app that talks to it. Track your hours on Hackatime.
Open source on GitHub. README with setup instructions. A demo video showing local inference.
We mail you parts for your next build — mics, cameras, sensors, or a custom macropad.
LLMs are the most common starting point, but image (generation & recognition) and voice models count too — as long as they run locally on your machine.
Ship a project and pick from the hardware pool — parts to help you build your next one. Mics, cameras, sensors, microcontrollers, inputs. Track your coding on Hackatime so we can see the work behind the build.
The brain for anything physical. Wire it to buttons, screens, motors — whatever your next project needs.
Motion, distance, temperature, light, touch. Feed real-world signals into your local models.
Plug it in, pipe audio to a local voice model. Dictation, wake words, ambient transcription: your call. (See the voice sketch after this list.)
Small, cheap, fast. Point a vision model at the real world instead of at your screen.
Custom keys bound to your models. Hotkey voice dictation, one-button image capture, anything scriptable.
Encoders, buttons, LEDs, small displays, speakers, and more!
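The voice sketch mentioned above: a minimal example, assuming the openai-whisper Python package as the local speech-to-text runtime and a pre-recorded clip.wav standing in for your mic audio. Any local voice runtime works just as well:

import whisper  # pip install openai-whisper; needs ffmpeg on your PATH

# Load a small speech-to-text model. The weights download once;
# after that, transcription runs entirely offline.
model = whisper.load_model("base")

# "clip.wav" is a placeholder for audio captured from your USB mic.
result = model.transcribe("clip.wav")
print(result["text"])  # the transcription, computed on your machine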
Your app runs inference locally on consumer hardware. LLMs via Ollama, llama.cpp, or LM Studio. Image and voice models via any local runtime. No cloud inference.
Web app, desktop app, or something visual. Not just piping text through a terminal. Someone non-technical should be able to use it.
Public GitHub repo. README with setup instructions. Modelfile included if your model isn't publicly available. Multiple commits showing real work over time.
Short recording showing the app working with the local model on your machine.
Install Hackatime so your coding time is logged automatically. We use it to see the work behind the build.
Must be more than a chat UI that just forwards prompts. Add context, memory, data processing, or a creative concept. Make it yours (see the examples for inspiration).
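One small way to go past a bare chat UI is to feed the model your own data as context. A sketch, where "notes.md" and the model name are placeholders; the point is that your app does work the model alone can't:

import requests

# Read your own data and hand it to the model as context.
notes = open("notes.md", encoding="utf-8").read()

resp = requests.post(
    "http://127.0.0.1:11434/api/generate",
    json={
        "model": "gemma3n:e2b",
        "prompt": "Here are my class notes:\n" + notes
                  + "\n\nWrite three quiz questions covering the key ideas.",
        "stream": False,
    },
)
print(resp.json()["response"])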
Real projects running real models on real hardware. Every page load shows a different set, or you can browse them all.
Join #localhost on the Hack Club Slack. Install Ollama. Start building.