Independent. Bare-metal Linux. AI testing.
I believe modern systems are deliberately over-abstracted — hiding the machine behind unnecessary layers. To truly own a system, you have to understand every layer and its purpose. In the spirit of the Unix philosophy, I prefer tools that do one job well and do it directly — where a terminal will work, I want no wrappers, GUIs, or abstractions I don't control.
Linux, FOSS, and Right to Repair advocate. I fight e-waste by preserving and restoring aging or broken machines and devices.
Stack: CrunchBang++ / Openbox / tint2
Posts
-
A step-by-step guide to compiling llama.cpp from source with native AVX-512 optimizations, bypassing Ollama for faster local LLM inference without a GPU. Covers hardware requirements, AVX-512 detection, source compilation, model quantization, and command-line operation.
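The detection step described above can be sketched in a few lines. Checking for the AVX-512 foundation flag is a simple `/proc/cpuinfo` grep; the build commands in the comments are illustrative — the exact CMake options (`GGML_NATIVE` here) may differ between llama.cpp versions, so check the project's build docs.

```shell
# Check the CPU for the AVX-512 foundation flag.
# Linux exposes supported ISA extensions in /proc/cpuinfo.
if grep -qw avx512f /proc/cpuinfo; then
    echo "AVX-512F: yes"
else
    echo "AVX-512F: no"
fi

# Illustrative build sketch (option names may vary by release):
# git clone https://github.com/ggerganov/llama.cpp
# cmake -B build -DGGML_NATIVE=ON
# cmake --build build -j"$(nproc)"
```

With a native build, the compiler targets the host CPU directly, so AVX-512 code paths are compiled in only when the hardware actually supports them.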
-
Linux has shipped a built-in network chaos simulator since kernel 2.6. No install required — tc netem lets you add latency, packet loss, and jitter directly in the kernel's network stack. Here's how to use it, and why you need to clean up after yourself.
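A minimal sketch of the workflow, including the cleanup step. The interface name `eth0` is an assumption — substitute your own — and the commands require root.

```shell
# Add 100 ms latency with 10 ms jitter and 1% packet loss.
# "eth0" is a placeholder interface name; requires root.
tc qdisc add dev eth0 root netem delay 100ms 10ms loss 1%

# Inspect what netem is currently doing on the interface.
tc qdisc show dev eth0

# Clean up: delete the qdisc, or the impairment persists
# until the interface is reset or the machine reboots.
tc qdisc del dev eth0 root
```

The delete step matters: netem is kernel state, not a process you can kill, so forgetting it leaves the interface degraded with nothing visible in `ps` to explain why.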
-
netsock is a terminal UI for monitoring open sockets on Linux. It reads /proc/net directly — no root, no external dependencies — and color-codes each socket by exposure scope so you can see at a glance what's reachable from where.
-
newtop is a terminal-based Linux system monitor written in Go with zero external dependencies. It reads directly from /proc and /sys, renders per-thread CPU bars, memory, disk I/O, network rates, and top processes — all in a color-coded, terminal-width-aware display that updates every second.
-
After extensive hands-on testing of sub-8B parameter models — Phi-3, Qwen, LLaMA, Gemma, Granite, DarkIdol and others — a clear picture emerges: SLMs are not lesser LLMs. They are fundamentally different tools, and treating them otherwise is where the danger begins.
-
Testing a poison-pill logic puzzle on a local SLM and online LLMs reveals critical differences in reasoning integrity — from variable leakage in Qwen3-4b to helpful lying in Gemini Flash.
-
Do LLMs solve unsolvable puzzles or flag the contradiction? Testing ChatGPT, Gemini, KIMI, and others with an impossible logic puzzle reveals whether models prioritize helpfulness over truth.
-
A benchmark test using a deliberately unsolvable logic puzzle reveals how SLMs and LLMs handle contradiction — and whether they prioritize truth or helpfulness.
-
Four diagnostic prompts that reveal how AI models handle contradictions, impossible geometry, temporal paradoxes, and infinite sets — tested across ChatGPT, Gemini, KIMI, Cerebras, and more.
-
What happens when you prompt an LLM with invented history? Testing ChatGPT, Gemini, Cerebras Inference, and KIMI with a fabricated historical prompt reveals how each model handles fictional facts.
-
A technical analysis of whether AI models like LLMs and SLMs qualify as software — examining weight files, inference engines, and the data-vs-code boundary from a systems programming perspective.
-
Most people use FFmpeg to convert video files. Few know it ships with a virtual device system called lavfi that generates video from pure math — no input file, no external tools, nothing to install. It's already on your system.
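A quick taste of lavfi as described above. `testsrc` and `mandelbrot` are built-in lavfi sources; the durations, sizes, and output filenames here are illustrative.

```shell
# Generate 5 seconds of test video from the lavfi virtual input device.
# No input file: the frames are synthesized entirely by FFmpeg.
ffmpeg -f lavfi -i testsrc=duration=5:size=640x480:rate=30 testsrc.mp4

# mandelbrot is another pure-math source: it renders a zoom into the
# Mandelbrot set. -t 5 caps an otherwise endless stream at 5 seconds.
ffmpeg -f lavfi -i mandelbrot=size=640x480:rate=30 -t 5 mandelbrot.mp4
```

Note the `-f lavfi` before `-i`: it tells FFmpeg to treat the "input" as a filtergraph description rather than a file path, which is why nothing needs to exist on disk.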