February 18, 2026

Making a Lisp Compiler with Vibe Coding

I recently finished building my own Lisp compiler. This has been a long-term interest of mine, dating back to an interview with Gary Kildall, the creator of CP/M, that I read as a kid. I tried P-LISP back then, but I was too young to stick with it. Over the years, I kept poking at the language, but I never really connected with the traditional Emacs and SLIME workflow. It wasn't until VS Code and the Alive extension that the environment felt right.

I've always liked how Lisp forces you to think deeply, but I've been frustrated by its lack of progress. Deployment is often difficult, and compilers like SBCL produce massive executables because they bundle the entire runtime and garbage collector. After looking at various other Lisp implementations and not finding what I wanted, I decided to build my own.

Using Cursor and a "vibe coding" approach, I put together a working version in a few days and spent a few more refining it. It isn't perfect, but it works. I chose to avoid a heavy garbage collector in favor of reference counting to keep the final executable small and the compiler streamlined.
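The idea, in a minimal C sketch (the names LispObj, lisp_retain, and lisp_release are made up for illustration; this is not my actual runtime):

```c
#include <stdlib.h>

/* Hypothetical sketch only -- a tagged, reference-counted heap object. */
typedef struct LispObj {
    long refcount;
    enum { T_FIXNUM, T_CONS } tag;
    union {
        long fixnum;
        struct { struct LispObj *car, *cdr; } cons;
    } as;
} LispObj;

/* Taking a new reference just bumps the count. */
static LispObj *lisp_retain(LispObj *o) {
    if (o) o->refcount++;
    return o;
}

/* Dropping the last reference frees the object immediately --
 * deterministic, with no collector shipped inside the executable. */
static void lisp_release(LispObj *o) {
    if (!o || --o->refcount > 0) return;
    if (o->tag == T_CONS) {
        lisp_release(o->as.cons.car);   /* release children first */
        lisp_release(o->as.cons.cdr);
    }
    free(o);
}
```

The tradeoff is well known: memory is reclaimed deterministically and nothing extra ships in the binary, but cyclic structures never reach a count of zero, so they have to be avoided or broken explicitly.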

This project made me think about the history of "low-code" tools. I remember using Garry Kitchen’s GameMaker on the Apple II in the 80s; what he achieved with only 64KB of RAM is still impressive. While the industry has tried and failed to make coding "easy" for decades, tools like Cursor finally make the process feel fluid.

Working with the AI still requires me to know exactly what I want and to supervise the architectural choices, but it significantly compresses the timeline. Without it, re-learning the specifics of ELF formats, Linux system calls, and compiler theory would have taken months. Instead, I have a compiler that produces native Linux executables after just a few days of work.

In the past, bringing an idea to life required years of discipline and a painful amount of effort to navigate endless frameworks and platforms. It was often more exhausting than it was rewarding. Now, that friction is gone. I can finally focus on the build itself, making the act of creation fun again without the usual pain.

February 16, 2026

In-house CI/CD

With many projects on GitHub, the CI/CD bills finally reached a point I couldn't ignore. I decided it was time to move the operation in-house and build my own pipeline.

I settled on Gitea paired with its native Actions runners. To be honest, it wasn't exactly a "plug-and-play" experience. I had to navigate a hybrid setup involving dedicated VMs and Docker containers. But the result was worth the friction.

I’m not going to bore you with a step-by-step tutorial. Between documentation and LLMs, you can find the specific commands for your OS easily. Instead, I want to talk about the architecture and the "why" behind it. In the age of AI, the hard part isn't writing the code; it’s knowing what’s possible and asking the right questions to get there.

Typical CI/CD pipeline

```mermaid
graph LR
    subgraph Development
    A[Plan] --> B[Develop]
    end

    subgraph "CI Pipeline"
    B --> C[Commit]
    C --> D[Build]
    D --> E[Test]
    end

    subgraph "CD Pipeline"
    E --> F[Deploy]
    F --> G[Operate]
    end

    %% Styling
    style G fill:#f96,stroke:#333,stroke-width:2px
    style C fill:#bbf,stroke:#333
```
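For the CI half, Gitea Actions reads workflow files from `.gitea/workflows/` and uses GitHub Actions-compatible syntax, so the Commit, Build, and Test stages above map onto something like this sketch (the job name and make targets are placeholders, not my actual setup):

```yaml
# .gitea/workflows/ci.yaml -- illustrative sketch; targets are placeholders
name: CI
on: [push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest   # must match a label your runner registered with
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build
      - name: Test
        run: make test
```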

Overview

Ultimately, the architecture boils down to this:
```mermaid
graph TD
    subgraph Control_Center [Central Server]
        Gitea[Gitea Instance]
    end

    subgraph Build_Farm [Dedicated Runner Nodes]
        Gitea -- "Job Dispatch" --> Linux[Linux Box]
        Gitea -- "Job Dispatch" --> Win[Windows Box]
        Gitea -- "Job Dispatch" --> Mac[Mac Studio/Mini]
    end

    %% Styling for clarity
    style Gitea fill:#62943e,color:#fff,stroke:#333
    style Linux fill:#f1f1f1,stroke:#333
    style Win fill:#f1f1f1,stroke:#333
    style Mac fill:#f1f1f1,stroke:#333
```
The proof of concept collapses all of this onto a single host:
```mermaid
graph TD
    subgraph Host_Machine [Host Environment]
        
        subgraph Docker_Network [Docker Engine]
            Server[Gitea Server Container]
            L_Runner[Linux Runner Container]
        end

        subgraph Virtual_Layer [Hypervisor / VM]
            subgraph Win_VM [Windows VM]
                W_Runner[Windows Runner]
            end
            
            subgraph Mac_VM [macOS VM]
                M_Runner[macOS Runner]
            end
        end

        Server <--> L_Runner
        Server <--> W_Runner
        Server <--> M_Runner
    end

    %% Styling
    style Server fill:#62943e,color:#fff
    style Docker_Network stroke-dasharray: 5 5
    style Virtual_Layer stroke-dasharray: 5 5
```
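For the Docker half of that diagram, the official gitea/gitea and gitea/act_runner images can be wired together with a compose file roughly like this (a sketch, not my exact config; the registration token is a placeholder you generate in the Gitea admin UI):

```yaml
# docker-compose.yaml -- sketch of the PoC's Docker Engine portion
services:
  gitea:
    image: gitea/gitea:latest
    ports:
      - "3000:3000"          # web UI and API
      - "2222:22"            # SSH for git pushes
    volumes:
      - gitea-data:/data

  runner:
    image: gitea/act_runner:latest
    depends_on:
      - gitea
    environment:
      GITEA_INSTANCE_URL: "http://gitea:3000"       # containers share a network
      GITEA_RUNNER_REGISTRATION_TOKEN: "<token>"    # placeholder
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # runner spawns job containers

volumes:
  gitea-data:
```

The Windows and macOS runners in the VMs register the same way; they just have to reach the server over the virtual network instead of the compose network, which is exactly where the routing puzzle below comes in.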

While this PoC proved to be a success, it wasn't without its hurdles. The real "make or break" factor is your virtual networking configuration. Because every environment is unique, ranging from local subnets to cloud VPCs, you'll need to bridge the gap between your containers and VMs in a way that suits your setup. It's the primary puzzle to solve, but once the routing is clear, the rest falls into place.