Independent. Bare metal Linux. Go programming.
The through-line across everything I do is a conviction that modern systems are over-abstracted by design — and that real ownership of a machine means understanding what each layer is doing and why. My interest is in tools that do one thing and do it directly. No wrappers, no GUIs where a terminal works, no abstraction layers I don't control.
I use ffmpeg, not OBS. I run SLMs by pinning cores and calling llama.cpp directly, not through Ollama or Jan. I'd rather understand a tool at its lowest level than be managed by an interface built around it. I'm learning Go because it fits that approach — small binaries, no runtime dependencies, compiles to a single static file that runs anywhere Linux runs.
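As a sketch of what "pinning cores and calling llama.cpp directly" looks like (binary and model paths are placeholders for your own build; core IDs 0-3 assume four separate physical cores on your CPU):

```shell
# Pin llama.cpp's CLI to four cores and match the thread count to them.
# ./llama-cli and model.gguf are placeholders; adjust for your build and model.
taskset -c 0-3 ./llama-cli -m model.gguf -t 4 -p "Explain CPU affinity in one sentence."
```

taskset(1) from util-linux sets the CPU affinity mask before exec; keeping `-t` equal to the number of pinned cores stops the inference threads from migrating across the mask.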
The same skepticism I apply to software layers I apply to AI models — I don't take capability claims at face value, I test them. I write about AI model behavior, SLM limitations, and what these systems actually do at the layer below the chat interface: logic traps, hallucination prompts, contradiction detection. Most people use one model for everything and never find its edges. I'm interested in the edges.
I'm a FOSS and Right to Repair advocate, active in hardware restoration and rescue: returning aging or broken machines to productive use through careful diagnosis, component-level reasoning, and lean, purpose-built Linux environments. I work at the intersection of software configuration and low-level hardware communication: driver tuning, display-server and audio-pipeline configuration, and system optimization on minimal Debian-based setups.
I build Linux utilities in Go and Bash. Diagnostic tools, hardware probes, system automation — the kind of thing you drop on an old machine and run without installing anything.
Stack: CrunchBang++ / Openbox / tint2 — Go, Bash — static binaries, bare metal, no unnecessary dependencies.
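A minimal sketch of the kind of static build those utilities rely on (the output name and current-directory package are illustrative):

```shell
# Build a fully static Go binary: cgo off, symbol and debug tables stripped.
# The result runs on any Linux of the same architecture, no runtime needed.
CGO_ENABLED=0 go build -ldflags='-s -w' -o probe .
file probe   # should report "statically linked"
```

With cgo disabled, the Go toolchain links everything in, which is what lets a single file land on an old machine and run without installing anything.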
Posts
- A benchmark test using a deliberately unsolvable logic puzzle reveals how SLMs and LLMs handle contradiction — and whether they prioritize truth or helpfulness.
- A step-by-step guide to compiling llama.cpp from source with native AVX-512 optimizations, bypassing Ollama for faster local LLM inference without a GPU. Covers hardware requirements, AVX-512 detection, source compilation, model quantization, and command-line operation.
- Testing a poison-pill logic puzzle on a local SLM and online LLMs reveals critical differences in reasoning integrity — from variable leakage in Qwen3-4b to helpful lying in Gemini Flash.
- Do LLMs solve unsolvable puzzles or flag the contradiction? Testing ChatGPT, Gemini, KIMI, and others with an impossible logic puzzle reveals whether models prioritize helpfulness over truth.
- Four diagnostic prompts that reveal how AI models handle contradictions, impossible geometry, temporal paradoxes, and infinite sets — tested across ChatGPT, Gemini, KIMI, Cerebras, and more.
- What happens when you prompt an LLM with invented history? Testing ChatGPT, Gemini, Cerebras Inference, and KIMI with a fabricated historical prompt reveals how each model handles fictional facts.
- A technical analysis of whether AI models like LLMs and SLMs qualify as software — examining weight files, inference engines, and the data-vs-code boundary from a systems programming perspective.
- Most people use FFmpeg to convert video files. Few know it ships with a virtual device system called lavfi that generates video from pure math — no input file, no external tools, nothing to install. It's already on your system.
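As a taste of lavfi (output filename illustrative), this synthesizes five seconds of video with no input file at all:

```shell
# lavfi is a virtual input device: testsrc generates frames from parameters,
# so there is no file behind -i. 5 seconds, 640x480, 30 fps.
ffmpeg -f lavfi -i testsrc=duration=5:size=640x480:rate=30 -y testsrc.mp4
```

Other generator filters such as mandelbrot and life work the same way; `ffmpeg -filters` lists them alongside the ordinary filters.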