Intro

Yes, dear readers (or reader if you look at my umami analytics, a singular which might very well be me): I let Gemini loose on my ~/src directory. Ok, that’s a clickbait headline. It wasn’t quite like that.

Here’s what really happened. I have a bunch of CLI scripts, mostly written in Go, that suffer from neglect and are in need of repairs and the usual “upgrade to latest and greatest”. I really don’t have the time or patience for the silliness of doing that, so I thought this was a good opportunity to try AI coding and a perfect way to let an LLM (in my case Gemini) show off its coding skills. I mentioned in my previous update that I’m using Antigravity, so there are no obstacles preventing me from actually executing on this idea.

Oh no, you say: Yet another old coding monkey discovering coding agents. Yes, I’m the bulldog in the video below.

Anyway, if you’re still here: in this blog post I’ll show how Gemini improved two of my existing scripts and how it created two more. We’ll start with the existing ones, but first, let me make some general observations.

  • I suspect that I’m benefiting from using Antigravity instead of more advanced AI coding harnesses that have shed the IDE part. Having the IDE gives the experience a familiar touch and eases me into it. I can still view the diffs in a “civilized” way. When necessary, I can jump in and make edits. Antigravity has sane defaults and best practices built in; for example, it already does what this planning blog post describes (implementation plan, task list with completed steps marked, final walkthrough).

  • It was interesting to see where the AI got into trouble. For example, it struggled with some Bazel dependencies. I think the fact that I am on the bleeding edge with my toolchains (Bazel 9.0, Go 1.26, etc.) hurt the AI’s ability to perform certain toolchain configurations. Maybe its training data cut off before the versions I use, so it couldn’t reconcile what it had learned with what it was facing. The most common trip-up was with apple_support_cc in MODULE.bazel. I had to intervene and stop the endless loop of increasingly bizarre attempts to make it work. Luckily I know Bazel well, so I could help.

  • Similarly, the AI didn’t know about the latest idioms and APIs in a language. For example, in Go, running go fix ./... did find a few (not many, tbh) places to fix up.

  • It was really good when told exactly what to do, like “refactor the type on line so and so into an interface and make a second implementation that saves to markdown”.

  • I told it to “spruce up the READMEs” and it sprinkled them all over with emojis. Kinda hilarious: it made the READMEs look like they’re from the 2010s. It also made the somewhat cringy hero images (from my prompts, yes, I confess). I think I will keep the README changes because they feel ironically cool.

  • I enjoyed doing explorations with the AI. I asked it for options to accomplish a certain thing and asked it questions about each option. One observation here, though: I had to be really harsh and clear that I did not want to launch into modifications yet, just discuss options. The AI was very eager to start coding and would do so if not told clearly to only present options and do planning.

  • It helped the AI a lot if I cloned dependencies locally and let the AI have access to them. Otherwise it would try to open a Chrome browser window and navigate to the dependency and read the page pixel by pixel (I kid you not) to get more information.

  • This next observation will probably sound arrogant (and I really don’t mean it that way): I am genuinely puzzled by the constant drumbeat that you have to learn these new AI coding skills or else. What skills are we talking about? I told it in plain language what I wanted done and it did it. Is there more to it? Yes, I guided it from time to time, I chose from the presented options and I verified the results. ¯\_(ツ)_/¯

  • The experience was both impressive when it worked (which it did most of the time) and frustrating when it didn’t. When it didn’t, it felt weirdly familiar, a bit like being on the phone with Xfinity technical support: neither party in the conversation has a clue why something isn’t working and we are both throwing darts in the dark. I know, the metaphor is not perfect (in this case the roles were reversed: the frustrating party was actually trying to do something for me instead of telling me to try something or reboot the router). Also, when it worked and the AI executed perfectly, it felt unsatisfying, like it should have been harder than “huh, it only took 2 mins to do all these edits”.


mastosync

[mastosync hero image]

This is an existing code base and I let the AI do very specific tasks:

  • Add the ability to save Mastodon toots to local markdown files (the existing code only supported saving to Notion).
  • Add the ability to also save Bluesky skeets.
  • Add a command to run it as an MCP server exposing all the commands as MCP RPCs.

This all worked like a charm: the AI did everything and I was done in no time.
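To give a feel for the shape of that change, here is a minimal sketch of the interface-plus-second-implementation pattern. The names (Toot, Saver, MarkdownSaver) are made up for illustration and are not mastosync’s actual code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// Toot is a hypothetical, minimal representation of a fetched post.
type Toot struct {
	ID        string
	CreatedAt time.Time
	Content   string
}

// Saver abstracts the destination; the pre-existing Notion code becomes one
// implementation of it, and the markdown writer below a second one.
type Saver interface {
	Save(t Toot) error
}

// MarkdownSaver writes each toot to its own markdown file under Dir.
type MarkdownSaver struct {
	Dir string
}

func (m MarkdownSaver) Save(t Toot) error {
	name := filepath.Join(m.Dir, t.ID+".md")
	body := fmt.Sprintf("---\ndate: %s\n---\n\n%s\n", t.CreatedAt.Format(time.RFC3339), t.Content)
	return os.WriteFile(name, []byte(body), 0o644)
}

func main() {
	var s Saver = MarkdownSaver{Dir: "."}
	if err := s.Save(Toot{ID: "example", CreatedAt: time.Now(), Content: "Hello from mastosync"}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

The existing Notion path then becomes just one more Saver implementation, and the Bluesky support boils down to another source producing the same values.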


templeton

[templeton header image]

Also an existing code base. Same experience. We were done very quickly. The main new feature was support for asking the user for template variable values with a form if they are not provided in a YAML file or on the command line. I chose huh for the form implementation. I had some follow-up requests having to do with template formatters and form validators, and I had to tell the AI how I wanted the currency formatter implemented.
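For illustration, this is roughly what the interactive fallback looks like with huh. A minimal sketch with made-up variable names, not templeton’s actual code:

```go
package main

import (
	"fmt"
	"strconv"

	"github.com/charmbracelet/huh"
)

func main() {
	// Pretend "amount" was not supplied via the YAML file or a flag,
	// so we fall back to asking for it interactively.
	amount := ""

	form := huh.NewForm(
		huh.NewGroup(
			huh.NewInput().
				Title("amount").
				Description("Value for the {{.amount}} template variable").
				Value(&amount).
				Validate(func(s string) error {
					// A simple validator: the value must parse as a number.
					if _, err := strconv.ParseFloat(s, 64); err != nil {
						return fmt.Errorf("%q is not a number", s)
					}
					return nil
				}),
		),
	)

	if err := form.Run(); err != nil {
		fmt.Println("aborted:", err)
		return
	}
	fmt.Println("amount =", amount)
}
```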


cookcc

[cookcc hero image]

I described this tool previously on this blog. It’s a brand new tool, written exclusively by Gemini. I enjoyed exploring with Gemini what my options for a cooking recipe file format were. In the end we decided to support cooklang and markdown and we (well, the AI 🙂) implemented both.
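If you haven’t seen cooklang before: ingredients (and cookware and timers) are marked up inline in the step text. Here is a toy sketch, not cookcc’s code, that pulls ingredients out of a single step; it only covers a slice of the syntax:

```go
package main

import (
	"fmt"
	"regexp"
)

// A tiny taste of cooklang. This sketch handles only the braced form
// @name{quantity%unit} and the bare single-word form @name; the real
// cooklang spec has more to it.
const step = "Crack @eggs{3} into a pan, add @salt and @smoked paprika{1%tsp}."

var ingredientRe = regexp.MustCompile(`@([^@#~{}]+)\{([^}]*)\}|@(\w+)`)

func main() {
	for _, m := range ingredientRe.FindAllStringSubmatch(step, -1) {
		if m[1] != "" {
			fmt.Printf("ingredient: %q amount: %q\n", m[1], m[2])
		} else {
			fmt.Printf("ingredient: %q\n", m[3])
		}
	}
}
```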


sealpdf

[sealpdf hero image]

Again a brand new tool. Here there was a lot of verifying involved on my side; the AI turned the tables on me and I was the one being prompted to do stuff. It made code changes and then told me to check whether they worked and how to check.

The tool is supposed to sign PDF documents with PAdES-compatible digital signatures. There are two options for Go libraries that do that: unipdf and pdfsign. I initially chose unipdf even though it is commercial and you need a license, because it looked newer and better maintained. Well, Gemini and I didn’t succeed in making it work. No combination of API calls we tried yielded a valid signed PDF. Each time I had to verify the generated PDFs with LibreOffice and Adobe Acrobat and tell Gemini what I was seeing and which error message popped up. The unipdf license I had was a free, metered one, and I exhausted my quota for the month testing the various attempts (I know, exhausted the quota for the whole month, that’s really stingy metering).

We gave up and switched to pdfsign, and it was running and working within minutes. Go figure. I guess the lesson here is to choose your libraries carefully.
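I won’t reproduce the pdfsign wiring here, but whichever library you end up with, the plumbing starts the same way: load the signer’s certificate and private key and hand them to the library. A stdlib-only sketch with made-up file names:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical file names: the certificate and key you feed to
	// whichever PDF signing library you end up using.
	certPEM, err := os.ReadFile("signer.crt")
	if err != nil {
		log.Fatal(err)
	}
	keyPEM, err := os.ReadFile("signer.key")
	if err != nil {
		log.Fatal(err)
	}

	certBlock, _ := pem.Decode(certPEM)
	if certBlock == nil {
		log.Fatal("no PEM block in signer.crt")
	}
	cert, err := x509.ParseCertificate(certBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	keyBlock, _ := pem.Decode(keyPEM)
	if keyBlock == nil {
		log.Fatal("no PEM block in signer.key")
	}
	key, err := x509.ParsePKCS8PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("signing as %q with a %T key\n", cert.Subject.CommonName, key)
	// From here, cert and key are handed to the PDF signing library.
}
```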


Conclusion

It would be foolish not to experiment with AI coding these days. It is an area where LLMs have made immense progress, and having verifiable results makes using them genuinely useful and productive. It is clear that the software engineering profession is in the middle of a transformation and all of us are on this journey. I am not sure where it will end up, but I do know it’s good to explore what is possible and play with these new tools. I’ll leave you with some interesting reading and watching/listening links: