So… yeah… it didn’t happen…

So, way back in 2019, I decided I was going to start a blog. I was going to write posts regularly. I was going to take the nuggets of wisdom I’ve accumulated in a plethora of areas and share them. Not only was I going to share my knowledge but I was also going to use this medium as a way to share my interests and opinions.

None of this happened…

I could say that this was due to COVID-19 and the ensuing global pandemic that consumed 2020 but that wouldn’t be true. I actually completely forgot that this was a goal of mine as I got consumed with the other pots I had in the fire. My bad!

So, now it’s June of 2021.

I’m revisiting this goal with the intent to follow through this time. I’m going to try to put out content regularly on software engineering topics as well as my other interests like Star Trek.

Please, my friends, hold me accountable.

When I made this decision back in 2019, I felt this would be a fun endeavor and a means of catharsis. I’m excited. I know this will be great!

Looking forward to this journey!

Encapsulation

I recently read a book where a contributor argued for the importance of encapsulation in software. I agree. Encapsulation is fundamental to code being maintainable and extensible. But what does this actually mean in practice? What should be encapsulated? Where do we draw the boundaries? I don’t think there’s a hard and fast rule that can be applied to all scenarios to give someone an indisputable answer. However, I do think that experience, pragmatism, and leaning on a set of good foundational principles can help someone get it right – most of the time.

The contributor’s position was that encapsulation should be around behavior and state. In theory I agree – sort of. Personally, I would have phrased it as: encapsulate state and expose behavior through interfaces. He further says that an object encapsulates both state and behavior, where the behavior is defined by the actual state. He follows with an example of a door object that provides open and close operations whose behavior depends on the door’s current state. This makes sense and is how I would describe the primitives involved.
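As a quick sketch of that door example (my naming, not the book’s – the exact behavior on an invalid transition is my assumption):

```java
// Minimal sketch of the door example: the object encapsulates its state,
// and the behavior of open()/close() depends on that state.
class Door {
    private boolean open;                     // encapsulated state

    void open() {
        if (open) throw new IllegalStateException("already open");
        open = true;
    }

    void close() {
        if (!open) throw new IllegalStateException("already closed");
        open = false;
    }

    boolean isOpen() { return open; }
}

public class DoorDemo {
    public static void main(String[] args) {
        Door door = new Door();
        door.open();
        System.out.println(door.isOpen());    // prints true
        door.close();
        System.out.println(door.isOpen());    // prints false
    }
}
```

The same call produces different behavior depending on the current state, which is the point being made.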

Next came the example where they used Customer, Order, and Item to illustrate “good” encapsulation. The Order object would know its associated Customer object, and its addItem() method would encapsulate validation logic, like ensuring the Customer had sufficient credit or funds to pay for the additional item. From the description, I assume they were implying something like this in the Order class:

void addItem(Item item) {
    if (customer.canAffordIt(item.getPrice())) {
        items.add(item);
    } else {
        throw new InsufficientFundsException();
    }
}

Then it goes on to say that some engineers would choose to put that sort of validation logic into an OrderManager or OrderService, and that this would be “wrong” because it would turn Customer, Order, and Item into mere record types and introduce a single class containing a procedural method with lots of internal if-then-else constructs – easily broken and almost impossible to maintain. I think we really need to be careful with statements like that. They oversimplify (and trivialize) the thought process that should go into how we define boundaries and introduce logical separation that allows functionality to be extended and grow organically over time. Let’s not do that…

I often recite the SOLID design principles to junior engineers, emphasizing the Single Responsibility Principle, which simply states that a class/method/function should encapsulate a single piece of functionality. You need to think the system through and ensure that your encapsulation makes sense and provides the right boundaries for future expansion – boundaries that won’t result in spaghetti code or highly coupled code with tons of branching and high cyclomatic complexity. The previous example, for instance, couples the Customer and Order classes. What if an order can now have multiple customers attached to it? What if order validation needs to be performed conditionally? What if we had more complex validation that only applies to specific customer types? There are lots of other scenarios I could conjure up where the code in the addItem() method would become more complex and tightly coupled over time. And how would you even effectively unit test that functionality as it grows?

My advice is to prioritize encapsulation BUT also prioritize logical separation and interfaces. One good measure of success here would be the ease (or difficulty) of unit testing. If you need to write overly complex code to test a simple change, you need to rethink your architecture.
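To make that concrete, here’s one way I might sketch it – names like OrderValidator and CreditCheckValidator are mine, not from the book. The validation rule lives behind an interface, so the Order stays cohesive, new rules can be added without touching addItem(), and each rule is trivially unit-testable on its own:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: validation is exposed through an interface rather than
// hard-coded into Order, so rules can grow without bloating addItem().
interface OrderValidator {
    void validate(Customer customer, Item item); // throws if invalid
}

class InsufficientFundsException extends RuntimeException {}

class Customer {
    private final int credit;                    // simplified state
    Customer(int credit) { this.credit = credit; }
    boolean canAfford(int price) { return credit >= price; }
}

class Item {
    private final int price;
    Item(int price) { this.price = price; }
    int getPrice() { return price; }
}

class CreditCheckValidator implements OrderValidator {
    public void validate(Customer customer, Item item) {
        if (!customer.canAfford(item.getPrice())) {
            throw new InsufficientFundsException();
        }
    }
}

class Order {
    private final Customer customer;
    private final List<Item> items = new ArrayList<>();
    private final List<OrderValidator> validators;

    Order(Customer customer, List<OrderValidator> validators) {
        this.customer = customer;
        this.validators = validators;
    }

    void addItem(Item item) {
        for (OrderValidator v : validators) {
            v.validate(customer, item);          // each rule is independent
        }
        items.add(item);
    }

    int itemCount() { return items.size(); }
}

public class OrderDemo {
    public static void main(String[] args) {
        Order order = new Order(new Customer(100),
                List.of(new CreditCheckValidator()));
        order.addItem(new Item(60));
        System.out.println(order.itemCount());   // prints 1
        try {
            order.addItem(new Item(500));
        } catch (InsufficientFundsException e) {
            System.out.println("rejected");      // prints rejected
        }
    }
}
```

In a unit test, you can hand Order a fake validator (or none at all) and test each validator against a Customer in isolation – no branching explosion inside Order required.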

I may follow up this post with an example of this in action. If I do, I’ll update this with a part two. I could even post the code on GitHub =)

Till then, cheers!

Running GitLab Runner from Container Station

So! I have a QNAP NAS, and I decided that I should use it to set up a private development environment and finally get around to creating some of the MANY side projects I have rolling around in my head.

The cool thing about my QNAP is that I can install QNAP’s Container Station, which gives me Docker. I’m not the biggest fan of Container Station, but it’s not horrible for container management in non-mission-critical situations – like my new private development environment. I found a few ways to back up my instance to ensure that I don’t lose anything valuable. I might do a separate post about that later.

Through Container Station, I set up GitLab with a few clicks and was pretty much off to the races. By turning on port forwarding in my router, I was able to connect to it from outside my LAN and push code to my instance.

So…

Now that I had GitLab up and running, why not try to get CI/CD working?

I decided to see if I could get GitLab Runner up and running through Container Station so I could take advantage of GitLab’s CI/CD. I installed the image and got it up and running. I also went through the GitLab Runner docs to configure it so my GitLab instance could see and use it.

All was well! … Then this happened when I submitted a job to kick off the pipeline…

Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock

All the documentation and forums out there explain pretty quickly what the root cause of this is and how to fix it. However, they all assume you’re running Docker through the command line. Container Station has a few flaws, and one of the major ones is that you can’t edit much about your containers once they’re created.
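For reference, this is roughly the command-line equivalent of the Container Station setup, per GitLab Runner’s Docker install docs, with the host’s Docker socket bind-mounted into the runner container (the config path is the one from their docs; adjust for your NAS). I’ve echoed it as a dry run here – drop the echo to actually execute it:

```shell
# CLI equivalent of the Container Station setup: run the gitlab-runner image
# with the host's /var/run/docker.sock bind-mounted into the container.
# Echoed as a dry run; remove `echo` (and run on the NAS) to execute for real.
RUNNER_CMD="docker run -d --name gitlab-runner --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest"
echo "$RUNNER_CMD"
```

The `-v /var/run/docker.sock:/var/run/docker.sock` mount is exactly what the error message above is complaining about being absent – and it’s the part Container Station’s UI won’t let you add after the fact.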

The fix for this problem is simple: mount the host’s (my QNAP’s) /var/run/docker.sock file into the gitlab-runner container. In Container Station, you can’t add shared folders (mounts) after container creation, so I had to delete my container and create a new one so I could mount it during creation. Then I encountered another hurdle: the Container Station UI only lets you mount folders that you can navigate to in its UI. Luckily, this is a pretty easy hurdle to get over.

Here’s how I got it to work!

I SSH’d into my QNAP, went to a directory that I knew I could navigate to in the UI, and made a symbolic link to my QNAP’s /var/run folder. Then, in the UI, I selected my linked folder. I set up GitLab Runner once again like I had it before. BOOM! Everything worked!
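Concretely, the workaround looks something like this. The share path is illustrative – on the QNAP it would be a UI-browsable share (something under /share), but here I use a temp directory so the sketch runs anywhere:

```shell
# Workaround sketch: symlink /var/run into a folder the Container Station UI
# can browse, then pick the linked folder when creating the container.
# $SHARE is a stand-in for a real QNAP share visible in the UI.
SHARE=$(mktemp -d)
ln -sfn /var/run "$SHARE/var-run"
ls -l "$SHARE/var-run"      # var-run -> /var/run
```

After selecting the linked folder in the UI, mounting `var-run/docker.sock` into the container at /var/run/docker.sock is what makes the runner’s Docker executor happy.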

I know this isn’t the most amazing solution – or even an amazing problem to begin with – but I was surprised at the lack of information on Container Station out there on the interwebs. That’s why I decided to write this up: if others out there are bored and decide they want to set up their own GitLab instance using Container Station and do CI, they can use this hack as well.

Excelsior….