Local  ·  Open Source  ·  Fine-tuned

Code that
thinks ahead.

Lumen is an AI model fine-tuned from Qwen2.5-Coder-7B for agentic coding. Pair it with LocalCode to run shell commands, use git, hunt bugs, and ship features — fully local, no cloud.

terminal — lumen
$ ollama pull thealxlabs/lumen
pulling manifest...
pulling model weights ████████████ 100%
4.7 GB · Qwen2.5-Coder-7B-Instruct · LoRA

$ localcode --model thealxlabs/lumen
LocalCode ready · Lumen loaded · tools enabled

> fix the auth bug in session.ts and push to main
Reading session.ts · Found issue on line 47
Applying fix · Running tests · 12/12 passed
git commit -m "fix: resolve token expiry race condition"
git push origin main · Done
>

Agentic Workflows
Shell Execution
Git Native
No Cloud Required
Bug Hunting
Qwen2.5-Coder-7B
LoRA Fine-tuned
Runs Offline

7B · parameter base model
LoRA · fine-tuning method
0 · API keys required
Free · open weights on Ollama

Capabilities

More than
autocomplete.

Lumen is the model. Pair it with LocalCode to unlock the full agentic developer loop — terminal, git, runtime, and beyond.

Agentic Reasoning

Fine-tuned on multi-step task loops so Lumen plans, acts, and self-corrects. LocalCode provides the tools that let it follow through.
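The plan-act-self-correct loop described above can be sketched in a few lines. This is an illustrative shape only — the `model`, `tools`, and stop condition here are hypothetical stand-ins, not the LocalCode API:

```python
# Illustrative agent loop: the model proposes an action, the runtime
# executes it, and the observation is fed back until the model is done.
# All names here are hypothetical, not LocalCode internals.

def agent_loop(model, tools, goal, max_steps=8):
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        action = model(history)           # e.g. {"tool": "shell", "args": "pytest"}
        if action["tool"] == "done":
            return action["args"]         # final answer
        observation = tools[action["tool"]](action["args"])
        history.append(f"{action} -> {observation}")
    return None                           # gave up after max_steps

# Tiny fake model and tool, just to show the loop terminating:
def fake_model(history):
    if any("12 passed" in h for h in history):
        return {"tool": "done", "args": "tests pass"}
    return {"tool": "shell", "args": "pytest"}

tools = {"shell": lambda cmd: "12 passed"}
print(agent_loop(fake_model, tools, "fix the auth bug"))  # -> tests pass
```

The key property is the feedback edge: each observation lands back in the history the model sees, which is what lets it self-correct mid-task.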

Shell Execution

Via LocalCode, Lumen can run terminal commands, install packages, start servers, and read output to plan its next move.

Git & GitHub

LocalCode gives Lumen access to git — committing, branching, pushing, and opening PRs, all driven by the model's reasoning.

Bug Hunting

Lumen reads error traces and reasons about root cause. LocalCode handles running the tests and applying the patch to your files.

Fully Local

Lumen runs on Ollama, LocalCode runs on your machine. No API keys, no telemetry, no cloud. Your code never leaves your hardware.

Qwen2.5-Coder Base

Built on one of the strongest open-weight code models, then LoRA fine-tuned to excel at the agentic patterns LocalCode enables.


How it works

Lumen is the brain.
LocalCode is the hands.

Lumen provides the intelligence — trained to plan, reason, and direct. LocalCode provides the tools that let it actually act on your codebase.

01

Pull Lumen on Ollama

The model runs locally via Ollama. Fine-tuned with LoRA on Qwen2.5-Coder-7B-Instruct to excel at agentic planning and tool use.

02

Connect it via LocalCode

LocalCode is the agent runtime. It connects Lumen to your filesystem, shell, git, and GitHub — giving the model real tools to work with.

03

Give it a goal

Tell LocalCode what to do. Lumen reasons about the task, calls tools, reads results, and keeps going until the job is done.

04

Nothing leaves your machine

Lumen on Ollama, LocalCode on your machine. No API keys, no cloud, no data leaving your hardware. Ever.

localcode config
# localcode.config.json
{
  "model": "thealxlabs/lumen",
  "provider": "ollama",
  "tools": [
    "shell",
    "git",
    "file_rw",
    "github"
  ]
}

# Then just run:
localcode "fix failing tests in auth/"
localcode "refactor utils.ts and open a PR"

# Lumen reasons. LocalCode acts.

Powered by LocalCode

The agent runtime
that does the work.

Lumen is the model — it reasons and plans. But agentic features like shell execution, git operations, file editing, and GitHub access are all provided by LocalCode, an open-source agent runtime built by TheLocalCodeTeam.

TheLocalCodeTeam/localcode
Lumen provides: Reasoning · Planning · Code generation · Task decomposition · Self-correction
LocalCode provides: Shell execution · Git & GitHub · File read/write · Test running · Tool orchestration
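In code, that split amounts to a dispatch layer: the model only names a tool and its arguments, while the runtime owns the functions that actually touch the machine. A minimal sketch, with hypothetical names rather than LocalCode's real internals:

```python
# Runtime-side tool registry (hypothetical): the model emits a call like
# {"tool": "shell", "args": ["echo hi"]}; only the runtime has side effects.
import subprocess

def run_shell(cmd: str) -> str:
    # The runtime, not the model, executes the command.
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return (out.stdout + out.stderr).strip()

TOOLS = {
    "shell": run_shell,
    "file_read": lambda path: open(path).read(),
    "file_write": lambda path, text: open(path, "w").write(text),
}

def dispatch(call: dict) -> str:
    # `call` is what the model produced; the runtime routes and runs it.
    return str(TOOLS[call["tool"]](*call["args"]))

print(dispatch({"tool": "shell", "args": ["echo hi"]}))  # prints: hi
```

Keeping side effects behind a registry like this is also what makes the stack auditable: every action the model takes passes through one choke point on your machine.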

Get started

Up in a few minutes.

You need Ollama and LocalCode installed to use the full agentic stack.

bash
# 1. Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# 2. Pull Lumen
ollama pull thealxlabs/lumen

# 3. Install LocalCode (for agentic features)
# github.com/TheLocalCodeTeam/localcode

# 4. Run the stack
localcode --model thealxlabs/lumen
thealxlabs/lumen · Qwen2.5-Coder-7B-Instruct · LoRA fine-tuned · 4.7 GB
Powered by LocalCode

The model. The runtime.
The whole stack, local.

Pull Lumen for the intelligence. Grab LocalCode for the tools. Ship features without leaving your machine.