February 16, 2026

In-house CI/CD

With many projects on GitHub, the CI/CD bills finally reached a point I couldn’t ignore. I decided it was time to move the operation in-house and build my own pipeline.

I settled on Gitea paired with its native Actions runners. To be honest, it wasn't exactly a "plug-and-play" experience: I had to navigate a hybrid setup of dedicated VMs and Docker containers. But the result was worth the friction.

I’m not going to bore you with a step-by-step tutorial. Between documentation and LLMs, you can find the specific commands for your OS easily. Instead, I want to talk about the architecture and the "why" behind it. In the age of AI, the hard part isn't writing the code; it’s knowing what’s possible and asking the right questions to get there.

Typical CI/CD pipeline

```mermaid
graph LR
    subgraph Development
    A[Plan] --> B[Develop]
    end

    subgraph "CI Pipeline"
    B --> C[Commit]
    C --> D[Build]
    D --> E[Test]
    end

    subgraph "CD Pipeline"
    E --> F[Deploy]
    F --> G[Operate]
    end

    %% Styling
    style G fill:#f96,stroke:#333,stroke-width:2px
    style C fill:#bbf,stroke:#333
```

Overview

Ultimately, the architecture boils down to this:
```mermaid
graph TD
    subgraph Control_Center [Central Server]
        Gitea[Gitea Instance]
    end

    subgraph Build_Farm [Dedicated Runner Nodes]
        Gitea -- "Job Dispatch" --> Linux[Linux Box]
        Gitea -- "Job Dispatch" --> Win[Windows Box]
        Gitea -- "Job Dispatch" --> Mac[Mac Studio/Mini]
    end

    %% Styling for clarity
    style Gitea fill:#62943e,color:#fff,stroke:#333
    style Linux fill:#f1f1f1,stroke:#333
    style Win fill:#f1f1f1,stroke:#333
    style Mac fill:#f1f1f1,stroke:#333
```
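
For a sense of what wiring up one of those dedicated boxes involves, registration with Gitea's act_runner looks roughly like this. The instance URL, token, runner name, and labels are placeholders for whatever your setup uses:

```bash
# Minimal sketch: register Gitea's act_runner on a dedicated build box.
# URL, token, name, and labels below are placeholders for your own setup.

# One-time registration against the central Gitea instance
./act_runner register --no-interactive \
  --instance https://git.example.internal \
  --token "<registration-token-from-gitea-ui>" \
  --name linux-box-01 \
  --labels "linux:host"

# Then keep it running (typically wrapped in a systemd service or launchd job)
./act_runner daemon
```

Each box gets its own label (linux, windows, macos), so workflows can target the right OS via `runs-on`.
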
The proof of concept (PoC), by contrast, runs everything on a single host:
```mermaid
graph TD
    subgraph Host_Machine [Host Environment]
        
        subgraph Docker_Network [Docker Engine]
            Server[Gitea Server Container]
            L_Runner[Linux Runner Container]
        end

        subgraph Virtual_Layer [Hypervisor / VM]
            subgraph Win_VM [Windows VM]
                W_Runner[Windows Runner]
            end
            
            subgraph Mac_VM [macOS VM]
                M_Runner[macOS Runner]
            end
        end

        Server <--> L_Runner
        Server <--> W_Runner
        Server <--> M_Runner
    end

    %% Styling
    style Server fill:#62943e,color:#fff
    style Docker_Network stroke-dasharray: 5 5
    style Virtual_Layer stroke-dasharray: 5 5
```
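
The Docker half of that diagram is just two containers on a shared network. A minimal sketch, assuming the stock images; ports, volume paths, and the network name are made up:

```bash
# Sketch of the PoC's Docker half: a Gitea server and a Linux runner
# on one user-defined network. Ports and paths are assumptions.

docker network create gitea-net

# Gitea server: web UI on 3000, SSH remapped to 2222 on the host
docker run -d --name gitea --network gitea-net \
  -p 3000:3000 -p 2222:22 \
  -v /srv/gitea/data:/data \
  gitea/gitea:latest

# Linux runner: registers itself against the server using a token
# generated in the Gitea admin UI (Actions -> Runners).
docker run -d --name gitea-runner --network gitea-net \
  -e GITEA_INSTANCE_URL=http://gitea:3000 \
  -e GITEA_RUNNER_REGISTRATION_TOKEN="<token-from-admin-ui>" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitea/act_runner:latest
```
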



While this PoC proved to be a success, it wasn't without its hurdles. The real make-or-break factor is the virtual networking configuration. Because every environment is unique, ranging from local subnets to cloud VPCs, you’ll need to bridge the gap between your containers and VMs in a way that suits your setup. It’s the primary puzzle to solve, but once the routing is clear, the rest falls into place.
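
A couple of sanity checks I found useful while untangling that routing; the IP address here is just an example:

```bash
# From the host: the published port should answer.
curl -I http://localhost:3000

# From inside each VM: the VMs can't see the Docker network, so they have
# to reach Gitea through the host's (or bridge's) address instead.
curl -I http://192.168.1.50:3000

# Runners inside the VMs are then registered against that same
# host-reachable URL, not the container name:
#   ./act_runner register --instance http://192.168.1.50:3000 ...
```
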


January 31, 2026

Automation

I’ve been messing around with AI coding assistants since the very beginning, back when they were nothing more than "autocomplete" tools and couldn’t actually write full code. Watching them evolve has been wild. Now, in many cases, they can write faster and more accurately than most human developers. Still, they need guidance. Using AI feels a lot like working with mid-level developers: you still have to point them in the right direction, and they don’t really have the depth of real-world experience.

With AI, I can juggle three or more projects at once. Research that used to take days? A few seconds to a few hours. Syntax errors? Almost never. But it’s exhausting. When I lead multiple projects with human teams, we take our time: research, development, mid-project checkpoints every few days. With AI, things move so fast I’m checking in multiple times a day. Sometimes it just gets stuck; it struggles with creative solutions, and I have to step in and guide it. Even so, it’s like having 2-3 developers on each project. Running three projects at that pace will wear anyone down.

I’ve seen a lot of claims that this process can be fully automated, that humans barely need to intervene. My goal is to get close to that, so I can sit at a single screen, approve things, guide the AI, and skip hopping between multiple IDEs and terminals. It reminds me of managing human teams: early on, I’d walk around, hold multiple meetings, check in constantly. Later, I could step back and rely on reports and high-level updates. AI seems to be following a similar path, starting as a junior developer, moving to mid-level, then senior, and eventually lead. Hopefully, one day it’ll reach architect-level thinking and maybe even handle some managerial tasks.

I’m excited and also worried. It will definitely eliminate many jobs. But at the same time, we can’t stop this evolution. It’s just like the Industrial Revolution: people smashed the machines at first, but eventually every factory used them.

And AI amplifies our ability. Imagine this: without computers, power tools, and machines, we couldn’t build cars, airplanes, or rockets. With AI, we can reach much higher goals.

On the business side, I can see AI replacing roles like accounting, HR, marketing, and sales sooner rather than later. Who knows; maybe one day an entire company could run almost entirely on AI.


Backup using Restic




I used to rely on my own shell scripts to back up the various directories and files scattered around my system. They worked, but I was always tweaking them whenever something new came up. A while back, I discovered a tool called Restic and ran it in testing for about a year. It has worked really well, so I’m writing down my setup here. Restic does basically the same job my scripts did, but better and with more on top: my scripts kept a list of files and directories, and crontab used that list to run the backups, while Restic adds remote repository support, encryption, snapshots, and more.
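
To give a sense of what that buys you over hand-rolled scripts: creating an encrypted repository on a remote backup server and inspecting its snapshots is a couple of commands. The hostname and path below are made up:

```bash
# One-time: create an encrypted restic repository on a remote backup server
restic -r sftp:backup@backup.example.internal:/srv/restic-repo init

# Later: list the snapshots accumulated so far
restic -r sftp:backup@backup.example.internal:/srv/restic-repo snapshots
```
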

Flow / Design



Two crontab entries with separate schedules:
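
Roughly, the two entries look like this; the schedules, script names, and log paths are placeholders:

```bash
# Hypothetical crontab entries; times and paths are placeholders.
# 1. Nightly restic backup using the file list
30 2 * * * /usr/local/bin/restic-backup.sh >> /var/log/restic-backup.log 2>&1

# 2. Weekly push of the VeraCrypt images to cloud storage
0 4 * * 0 /usr/local/bin/cloud-backup.sh >> /var/log/cloud-backup.log 2>&1
```
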


### 1. Run restic backup with file list

```mermaid
flowchart LR
    A[Crontab Trigger] --> B

    subgraph B[Restic Backup Script]
        C[Restic Backup Uses Files List]
        D["Restic repository on backup server<br/>using a VeraCrypt-encrypted disk"]
        C --> D
    end


```
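
A minimal sketch of that backup script, assuming the repository sits on a VeraCrypt volume mounted from the backup server; the paths, password file, and retention policy are all placeholders:

```bash
#!/usr/bin/env bash
# Sketch of the restic backup script; repository path, password file,
# and file list are placeholders for whatever your setup uses.
set -euo pipefail

export RESTIC_REPOSITORY=/mnt/backup-veracrypt/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-password

# Back up everything named in the list my old scripts already maintained
restic backup --files-from /etc/backup/files.list

# Keep a reasonable number of snapshots and drop the rest
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```
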

### 2. Run cloud backup

```mermaid
flowchart LR
    A[Crontab Trigger] --> B

    subgraph B[Cloud Backup Script]
        C[Unmount VeraCrypt Images]
        D[Rclone Copy VeraCrypt Images to Cloud]
        E[Remount VeraCrypt Images]
        C --> D --> E
    end


```
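
And a sketch of the cloud step, assuming a single VeraCrypt container file, a password readable from a root-only file, and an rclone remote named `cloud` that is already configured; every path and name here is a placeholder:

```bash
#!/usr/bin/env bash
# Sketch of the cloud backup script; container path, mount point, rclone
# remote, and password file are all placeholders. Uses one container file
# for brevity; loop over several the same way.
set -euo pipefail

IMAGE=/srv/veracrypt/backup.hc        # VeraCrypt container file (assumed name)
MOUNT_POINT=/mnt/backup-veracrypt     # where the restic repository lives

# 1. Dismount so the container file is in a consistent state on disk
veracrypt --text --dismount "$MOUNT_POINT"

# 2. Push the encrypted container file to the cloud remote
rclone copy /srv/veracrypt cloud:backups/veracrypt

# 3. Remount so the next restic run finds its repository again
veracrypt --text --non-interactive \
  --password="$(cat /root/.veracrypt-password)" \
  --keyfiles="" --pim=0 --protect-hidden=no \
  "$IMAGE" "$MOUNT_POINT"
```
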