One problem you’ll sometimes encounter when working with cloud services from AWS, Azure, or Google Cloud is that local development becomes harder when a service has no standardized interface with an implementation readily available for local installation. For instance, when working with a pub/sub system that is compatible with Kafka, you can just install a minimal Kafka cluster locally and all is good. But what do you do when the APIs offered by the service you need are not standardized? That’s where emulators come in. In the rest of this post I’m going to focus on Azure, since that’s what I’m working with most often.
Tea Key-Value Store Write Ahead Log
In my previous post I introduced the Tea Key-Value Store. The store is designed for easy integration into existing processes without making any assumptions about the hosting process’ needs. In some cases that is surely enough and gives developers just what they need to store semi-structured data with lookups through point queries.
However, one thing I’ve wanted to implement for a while is a write-ahead log (WAL) that can be used for recovery scenarios. The idea of a WAL is conceptually simple: before data is actually committed to persistent storage, a record describing that data is written to the WAL. The WAL itself is typically a pre-allocated file of some size that only ever gets appended to or deleted, but never overwritten. How much space needs to be pre-allocated depends on the usage as well as other factors like the frequency of flushing data to disk or the size of typical records. With the Tea Key-Value Store you can of course configure the size of the pre-allocated WAL, should you choose to use the WAL in the first place.
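To make the append-before-commit idea concrete, here is a minimal sketch in Go of what writing a single WAL record could look like. This is an illustration of the general technique only, not the Tea Key-Value Store’s actual code; the `Append` function and the record layout are hypothetical.

```go
// Minimal sketch of the append-before-commit idea; not Tea's actual API.
package wal

import (
	"encoding/binary"
	"hash/crc32"
	"os"
)

// Append writes a single length-prefixed, checksummed record to the log
// and forces it to disk before the caller applies the change to the store.
// The file is assumed to be opened with O_APPEND.
func Append(f *os.File, payload []byte) error {
	buf := make([]byte, 8+len(payload))
	binary.LittleEndian.PutUint32(buf[0:4], uint32(len(payload)))
	binary.LittleEndian.PutUint32(buf[4:8], crc32.ChecksumIEEE(payload))
	copy(buf[8:], payload)

	if _, err := f.Write(buf); err != nil {
		return err
	}
	// Sync ensures the record survives a crash; only afterwards is it safe
	// to apply the described change to the store itself.
	return f.Sync()
}
```

On recovery, the log is read front to back, records with a valid checksum are replayed, and everything after the first corrupt record is discarded.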
Tea Key-Value Store

I keep a little black book with ideas for businesses or projects, and sometimes also technology I want to learn more about. One of those things was the inner workings of a key-value store. I wanted to know how to allow for virtually unlimited growth of such a store without sacrificing read or write speeds, and how to best organize the data on disk.
Photo Search Improved
A while ago I discussed the Photo Search tool that I’ve created and that I use to index all my photos. One thing that had bothered me from the beginning was the need to use Python to load and use the models. I’m sure that there are some cases where using Python is not the worst choice, but those use cases typically involve rapid prototyping and not so much production-like scenarios where things like efficiency and resource consumption matter more.
Azure Blob Commands

I run a few of my workloads on VMs in Azure. Some of them deal with data and content that changes over time, and accordingly I like to have the data backed up periodically. Microsoft provides the AzCopy tool for uploading files to Azure Blob Storage, and it works very well with managed identities assigned to VMs (and other services in Azure).
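For reference, a typical backup upload from a VM with a managed identity looks roughly like this; the storage account, container, and local path are placeholders, not my actual setup.

```sh
# Authenticate AzCopy with the VM's managed identity.
azcopy login --identity

# Upload the backup directory to a blob container (names are placeholders).
azcopy copy "/var/backups/myapp" \
  "https://mystorageaccount.blob.core.windows.net/backups" --recursive
```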
But some of the same properties that apply to data also apply to the backups of that data: their value diminishes over time, so keeping backups for an extended amount of time is pointless. Accordingly, I always want to delete old backups after some time.
Photo Search
With the advent of publicly available LLMs and embedding models, I figured I’d kill two birds with one stone: I’d learn a bit about using such models, and I’d build a tool that lets me use a semantic search on my photos.
I keep those photos on a NAS in my home network, and frequently back them up using bart - my backup and restore tool. So all I really need is a web site for showing the photos and letting me search them. That’s why I built photo search, a tool that uses publicly available multi-lingual models that work on both text and images to index and query photos based on their contents.
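The search itself boils down to embeddings: the query text and each photo are mapped into the same vector space, and photos are ranked by how close their vectors are to the query vector. Below is a minimal sketch of that ranking step in Go; the embedding models themselves are left out, and the function name is purely illustrative.

```go
package search

import "math"

// cosine scores how close a photo embedding is to the query embedding;
// search results are produced by sorting photos by this score, descending.
func cosine(query, photo []float32) float64 {
	var dot, nq, np float64
	for i := range query {
		dot += float64(query[i]) * float64(photo[i])
		nq += float64(query[i]) * float64(query[i])
		np += float64(photo[i]) * float64(photo[i])
	}
	return dot / (math.Sqrt(nq) * math.Sqrt(np))
}
```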
Container images with golang from scratch
One of the things I like about golang (and Rust too, by the way) is that it’s
quite simple to build really small container images by statically linking the
executables, and using scratch
as the base image. I’ve done this a few times
in the past, and was doing it again just recently. Except that this time around,
I ran into issues: the container would crash soon after it started.
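For context, the pattern I’m referring to is roughly the following multi-stage build; the Go version, output path, and build target are placeholders rather than the exact setup from this post.

```dockerfile
# Build stage: produce a statically linked binary (no cgo).
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: nothing but the binary on top of the empty scratch image.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```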
Time-based One-time Passwords

I recently had to switch phones because my old phone conked out. I had an app on that phone that I used for short-lived MFA codes for various logins. That app was a poor choice, because it didn’t allow for a backup of the secrets used for code generation, so I had to go to the relevant logins and, one by one, remove MFA and add it again. While doing so, I was wondering how this stuff works underneath, so I started looking into it.
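In case you’re curious too, the gist of the mechanism (RFC 4226/6238) is: both sides share a secret, the current Unix time is divided into 30-second steps, the step counter is run through HMAC with the secret, and a handful of digits are extracted from the result. Here is a rough sketch in Go of the default variant (6 digits, HMAC-SHA1), not the code of any particular app.

```go
package totp

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/binary"
	"fmt"
	"time"
)

// Code derives the 6-digit TOTP for the given shared secret and time,
// using 30-second time steps and HMAC-SHA1 as per the RFC 6238 defaults.
func Code(secret []byte, t time.Time) string {
	counter := uint64(t.Unix()) / 30

	var msg [8]byte
	binary.BigEndian.PutUint64(msg[:], counter)

	mac := hmac.New(sha1.New, secret)
	mac.Write(msg[:])
	sum := mac.Sum(nil)

	// Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken from
	// the low nibble of the last byte, then reduce to 6 decimal digits.
	offset := sum[len(sum)-1] & 0x0f
	code := binary.BigEndian.Uint32(sum[offset:offset+4]) & 0x7fffffff
	return fmt.Sprintf("%06d", code%1_000_000)
}
```

Both the phone and the server run the same computation, which is why the codes match without any network round trip.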
Generate Code with NSwag

First, let me state this more precisely: this is a post about generating C# code for ASP.NET Core from an OpenAPI definition at build time using NSwag. If you’re looking for steps to generate code by running the NSwag toolchain manually, you won’t find that here. If you’re looking for a way to generate an OpenAPI definition from an existing ASP.NET Core app using the NSwag toolchain, you won’t find that here either. In that latter case, though, you’ll get a statement from me telling you that for a professional service you probably shouldn’t do that: you wouldn’t define your interfaces after writing the implementation either, right?
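As a rough illustration of what build-time generation can look like, here is one possible wiring using the OpenApiReference MSBuild item with NSwag as the generator. This is a sketch only and not necessarily the exact setup discussed in this post; the file name and the package version are placeholders.

```xml
<!-- Fragment of a .csproj: reference the OpenAPI definition and have the
     build generate C# code from it (version and paths are illustrative). -->
<ItemGroup>
  <PackageReference Include="NSwag.ApiDescription.Client" Version="14.0.0" />
</ItemGroup>

<ItemGroup>
  <OpenApiReference Include="OpenAPI\my-service.json" CodeGenerator="NSwagCSharp" />
</ItemGroup>
```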
Signing HTTP Messages in .Net with NSign

One of the things I have been working on at work over the past few months is an
open source implementation for .Net of the
standard-to-be for HTTP message signatures.
I’ve ended up calling this NSign
which, granted, is a bit broad – the libraries
deal only with HTTP signatures – but I found the name quite fitting.
The general idea of HTTP message signatures is that clients and/or servers can create and verify digital signatures or message authentication codes over HTTP messages, that is, over either request or response messages. As the standard-to-be puts it: