Category Archives: programming

DigiMo: talk to my online avatar

After having built so many AI applications, I finally sat down to showcase my skills and toolbox in the shape of an online AI assistant: maurice.instrukt.ai. I started out by giving it a calendar tool (in my n8n backend), so that one can book a meeting with me to talk about all things AI. I then built a knowledge base with info about my journey as a developer, and also gave it access to my WordPress blogs, so that it can share the topics I am interested in and write about. Go ahead and ask what I wrote about AI! (It should mention this exact blog post 😉)

You might notice that the models used in the workflows sometimes make decisions that are not optimal, but that is the current state of affairs. I just moved away from using Claude’s API as it was going haywire and not doing as I instructed, so I reverted to OpenAI’s gpt-4o-mini. We are only weeks away from being able to use o3-mini, which already outperforms most humans on a range of reasoning benchmarks. Imagine being helped by such intelligent assistants!

A bonus for me was that I had to look at my online presence and clean it all up: I got rid of a lot of useless tags and archived a lot of my posts 🙂

My AI dev journey towards no code with n8n

Over the last few years, I’ve spent a lot of time researching AI and building projects with it. I began by coding projects by hand, sometimes with AI’s assistance, to prototype ideas and understand the ins and outs of working with AI. As an architecture purist, I quickly fell in love with LangChain, even though it was barely usable and manageable in its early days. I encountered countless problems, tried many workarounds, and learned valuable lessons. Now, I feel confident saying: I know what to build with AI and how to do it. But along the way, something changed in me that I never expected.

I’ve always relied on code editors and their powerful plugins to build my solutions. This approach worked well for years. However, when AI entered the picture, I found myself coding brittle solutions too slowly—even with AI as a development tool. Realizing the need to iterate faster, I started exploring no-code platforms that could support me in the long run. As an old-school developer with over 30 years of experience, I didn’t anticipate the transformative value n8n would bring to my workflow.

With n8n, I can rapidly assemble automation workflows that incorporate AI agents with clearly defined and limited scopes of operation. This capability has put me on an entirely new path.

Advantages of n8n

The benefits of using n8n are clear:

  1. Speed of Development: I can iterate and deploy solutions far more quickly than before.
  2. Easy Sharing: Developers can build and share solutions effortlessly. Rich workflows are often just a copy-paste away, ready to adapt and integrate into new projects.
  3. Simplified Understanding: While open-source Git repositories provide reusable code, such code is often scattered across multiple files. This makes understanding and integration much more complex and error-prone. With n8n, workflows are visual, providing a clear view of how they operate throughout their lifecycle. Whether you’re a hardcore developer or a savvy newcomer, the visual nature of workflows simplifies the process.

This shift has brought joy to my daily work. I can effortlessly add new tools to my AI agents in no time. While I still code frontends, having n8n as a backend accelerates my iterations to the point where I’d recommend it to anyone.

Limitations and Feature Requests

Of course, n8n isn’t without its limitations. For instance:

  • Streaming Responses: It doesn’t yet support streaming responses.
  • Simplified RAG Solutions: The current Retrieval-Augmented Generation (RAG) implementation lacks metadata output, which is frustrating.

I hope these issues will be resolved soon as the n8n team grows and can implement feature requests more quickly. To contribute to this growth, here’s my own list of outstanding requests that I believe would add immense value:

  • AOP-Style Hooks: Enable calling HTTP endpoints before and after a tool’s execution, or introduce an n8n event node to create listeners that translate n8n events into custom actions. For example, this could support notification systems.
  • Failover LLM: Retry failed operations with another LLM provider to enhance reliability.
  • Conditional AI Agent Tools: Allow tools to be bypassed based on conditions. This would reduce noise and streamline the logic for LLMs to act upon.
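The failover idea is easy to sketch outside n8n. Below is a minimal version in plain JavaScript; the provider functions are placeholders, not real client calls, and n8n would of course express this as nodes rather than code:

```javascript
// Try each LLM provider in order until one succeeds.
// `providers` is an array of async functions that take a prompt;
// any client (OpenAI, Anthropic, ...) can be wrapped to fit.
async function completeWithFailover(providers, prompt) {
  let lastError;
  for (const callProvider of providers) {
    try {
      return await callProvider(prompt);
    } catch (err) {
      lastError = err; // remember the failure and fall through to the next provider
    }
  }
  throw lastError; // every provider failed
}
```

The same shape also covers retries: wrap one provider function so it appears several times in the list.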

Final Thoughts

n8n has fundamentally changed the way I approach backend development. Its visual workflows, ease of sharing, and speed of iteration have made it a game-changer in my AI projects. While there are some areas for improvement, the potential of this platform is enormous, and I’m excited to see how it evolves. Let’s support the team and help them implement these critical features faster!

itsUP: lean, automated, poor man’s infra

Now that I am working on my personal AI assistant, I figured it would be nice to mention one of my better projects here: itsUP. It deserves attention as it is a fully automated, always-up Docker Compose solution with Traefik, and it runs ALL of my apps and services on one Raspberry Pi 5 at home (with a 500 GB SSD attached). It might be a bit over-engineered, and its approach might not be your flavor, but I am happy with it. I just edit one YAML file, build artifacts, test and deploy, and there we go! Heck, it even accepts webhooks so that your repos can instrukt it to roll out your new containers. Such a breeze 🙂

Oh, and did I mention that I also made an API so that my itsUP agent can talk to it? I can ask it anything and tell it to update configuration or even add a new service!

Use Indy News Assistant to get to the TRUTH

After noticing the recent increase in (self)censorship by mainstream media I created the “Indy News Assistant” (a custom ChatGPT agent). It takes any topic and provides both news sources as well as latest YouTube videos published on the matter.

Here’s the link: indy-news-assistant

To make it easy for those without a ChatGPT subscription, here is a public Streamlit app using the same API: indy-news.streamlit.app

It uses vector embeddings and BM25 to make search work, and I think it is an interesting and cheap approach. Unfortunately vector searches, like LLMs, suffer from producing irrelevant output, and so this comes with a YMMV warning 😉
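One cheap way to combine a vector result list with a BM25 result list is reciprocal rank fusion; a minimal sketch (my actual fusion logic may differ, see the repo):

```javascript
// Reciprocal rank fusion: merge ranked result lists (e.g. one from
// the vector search, one from BM25) into a single combined ranking.
// k = 60 is the constant commonly used in the RRF literature.
function reciprocalRankFusion(rankedLists, k = 60) {
  const scores = new Map();
  for (const list of rankedLists) {
    list.forEach((doc, rank) => {
      // Documents near the top of any list get a bigger contribution.
      scores.set(doc, (scores.get(doc) || 0) + 1 / (k + rank + 1));
    });
  }
  // Sort documents by descending fused score.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}
```

The nice property is that no score normalization is needed: only ranks matter, so BM25 scores and cosine similarities never have to be compared directly.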

Source: github.com/morriz/indy-news

Coming back to Kubernetes

After traveling for a long time I started playing with tech again. I started building a cryptocurrency trading app for the Kraken Exchange API. The resulting app can be downloaded here: expo.io/@morriz/krakenfx-react-native.

But then I started playing with Kubernetes again, and started working on mostack: a stack with Kubernetes best practices. It was a long, hard road past obscure pitfalls, and some of the lessons I just have to give back in the hope that you may avoid them.

Drone CI/CD

To automate software building we need a CI/CD build system. I chose to go with Drone, as I like the simplicity of working with Docker containers, and it’s open source and not SaaS. But Drone uses Docker in Docker (dind), and that gave me the following problem:

Drone starts the host Docker container running dind on a custom network. Probably for good reasons, but this makes it impossible to resolve any cluster IPs from known Kubernetes service names.
I needed to docker push to a locally running docker-registry service, as well as make kubectl tell the API server to update deployments. Since there is no way around this, I had to use the host Docker socket and manually instrument the wiring of the plugins, including the custom DNS settings. Please see the .drone.yml in the morriz/nodejs-demo-api repo for how I did that. For more information about my DNS-related issues, see my posts on the Drone Discourse forum.
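For reference, mounting the host Docker socket in a Drone pipeline step looked roughly like this (Drone 0.x syntax; the registry name and DNS address are illustrative, not the exact values from my setup):

```yaml
pipeline:
  build:
    image: docker:stable
    volumes:
      # use the host Docker daemon instead of dind
      - /var/run/docker.sock:/var/run/docker.sock
    # point the step at the cluster DNS so Kubernetes service
    # names like docker-registry resolve (address is illustrative)
    dns:
      - 10.96.0.10
    commands:
      - docker build -t docker-registry:5000/nodejs-demo-api .
      - docker push docker-registry:5000/nodejs-demo-api
```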

Helm

The biggest challenge in k8s userland is the deployment of the manifests. Ideally one would like a uniform approach to apply the entire new desired cluster state in one go, preferably automated after a git push to the cluster repo. For now I chose to experiment with Helm, which allows me to make one root ‘Chart’ (the name they use for a ‘package’) for the entire cluster, with app subcharts that describe the components running on the cluster. But somehow the Helm people have decided to use a ‘Tiller’: an agent pod listening to the helm client. Supposedly it helps in managing the cluster, but the logician in me says it goes against the unidirectional flow of stateless architectures. I wanted to avoid running the agent, and luckily the ‘template’ helm plugin lets me do that. You can install it with:

helm plugin install https://github.com/technosophos/helm-template

Now we can apply the entire application state (from the root folder) like this:

helm template -r mostack . | kubectl apply -f -

Another downside to using Helm is the fact that I can’t deploy subcharts in their own namespaces. But that option might come in the future.

Happy helming!

Milieuzone Utrecht app

The municipality of Utrecht is the first to ban 15-year-old diesel cars from its city center. Since I find it rather problematic to get rid of my 15-year-old BMW 530D, which I have treated with so much love, I built an app that warns me when I approach the environmental zone.

I have submitted the app to the App Store and am waiting for approval. In the meantime, I have created a page with details about the Milieuzone Utrecht app.

I have also put a web version online: the web version of the Milieuzone Utrecht app.

If it is of any use to you, I would love it if you left a comment 🙂

Docker for finer grained DevOps

While working with AWS’s rudimentary image bootstrapping, which allowed me to either boot and configure from a supported image or boot directly from our own custom image, I came to realize the cost and frustration of this archaic mechanism for bringing up a new operational node to scale out or to update and roll back nodes. There had to be a better way.

So I started looking around for other ways of deploying and managing infrastructure. And there was Docker! It was a couple of months old, but I was sure it would take the world by storm and started experimenting with it. It would allow me to build one image with all the necessary infrastructure to run an app, and deploy it everywhere! And if I needed to upgrade part(s) of the infrastructure, I could do so very easily, and just have my nodes update by pulling in diffs! Super cool!

Now I knew I was slowly being sucked into DevOps land, but I just had to go with my gut and explore this beautiful new territory, even though it wasn’t my core expertise I was building on. This attitude allowed me to dive right in and get to know the ins and outs and the dos and don’ts of building Docker architectures. I don’t want to give detailed instructions on how to do things on this blog, because there is enough of that to be found, but let me just do what I do best, and that is to inspire others to try the stuff I am excited about.
And if there’s one thing I am very excited about, it is Docker and this whole new movement in DevOps land, with things such as CoreOS utilizing automated centralized configuration management like etcd. There’s a whole slew of PaaS offerings coming our way, and our developers’ lives will be made a whole lot easier thanks to the initial work of the dotCloud people 🙂

Event store with Node.js and AWS

It’s been a while since I posted anything here, but a lot has happened on my front. I will give a quick update about the things that have interested me since then.

In 2013 I created my first auto-scalable event store architecture for a huge client in Node.js. It involved custom web servers receiving events from different endpoints in different formats, meta-tagging them, and then injecting them into Amazon queues, with processors on the other end enriching and transforming the events for storage in AWS DynamoDB. Post-processors would run periodically to store aggregates in S3. It was required to auto-scale to handle 200,000 events per second. (Yes, you read that right.) I created a stateless architecture with the code for all the roles (server, processor, post-processor, etc.) built into one repo, which would be tarred and deployed onto S3 by our Bamboo server, allowing new nodes to be bootstrapped from it. The node itself was already booted by Puppet with a role to perform, and thus knew its role to play. For hot updates and rollbacks we would tell a Saltstack master to update a certain range of nodes, which would then pull the wanted source from the S3 registry again and update themselves without downtime. Pretty nifty, but rather proprietary.
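The server side of that pipeline boils down to normalize, meta-tag, enqueue. A rough sketch, where the field names and the enqueue function are placeholders for the real AWS SQS wiring:

```javascript
// Wrap an incoming raw event with routing metadata before it goes
// onto the queue. Field names here are illustrative, not the real schema.
function metaTag(rawEvent, source) {
  return {
    payload: rawEvent,
    meta: {
      source,                 // which endpoint received the event
      receivedAt: Date.now(), // ingestion timestamp
      schema: 'v1',           // lets a processor pick the right transform
    },
  };
}

// The real pipeline pushed to Amazon SQS; any async enqueue function fits.
async function ingest(rawEvent, source, enqueue) {
  await enqueue(JSON.stringify(metaTag(rawEvent, source)));
}
```

Because the servers only tag and forward, they stay stateless, which is what made the horizontal auto-scaling straightforward.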

The company I worked for used Puppet for configuration management, but also for app deployment, which I thought was the wrong approach. Puppet is, IMO, not designed for realtime deployment, but rather for booting and maintaining VMs from config. That is how I came across Saltstack’s powerful realtime command capabilities, and I decided to script our deployment process to be controlled by Saltstack. I haven’t kept up on that front in a long time, but I saw that it fit the bill for our needs and was so bold as to build it into our POC.

Too bad we hadn’t learned about Google’s Go language back then, otherwise I would probably have scratched my head and opted for that instead of Node.js for our highly concurrent applications.

It’s all coming together now

I was asked to speak to a small group of people about my JavaScript expertise at a meeting of frontend web developers. Of course I said yes, and started thinking about it. I wanted to use my newly learned lessons from The Art of Hosting. I also told my host that I wanted to take a personal approach and include my own stories, and that I wouldn’t be needing a projector or flip chart. He was very interested and let me go my way.

When we went to the presentation room, I saw a round table, just big enough to host us all. There were nine of us, which created an intimate space.
I started with a check-in and asked each person to tell us who they were, what inspires them, and what they expected from the evening.
When the first person started, everyone became engaged, and we listened with interest as each took their turn. Some talked about their professional selves, others took a more personal route. Wonderful!
I felt my body relax, my mind clear, and I was able to truly listen to the others and become familiar with their faces. Questions were asked out of genuine interest.

Finally I told my own story. I was able to be fully at ease and look everybody in the eye. That was a first for me, and I attribute that to the initial check-in round. And the small group as well.
I talked about my personality, my lack of degrees, my initial insecurities because of that, and how I overcame them by reading whatever I could about my area of expertise. About how I got to know myself better, enabling me to become more solid and gain integrity. That I sometimes need to manage my overenthusiasm.
The group responded multiple times with questions and recognition.

In essence I was telling authentic stories, exposing my weaknesses, and showing how I gained strength by accepting and getting to know them. Their faces told me they were intrigued, sometimes amazed, but all of them were engaged. Some faces started showing minor agitation, which I think was some impatience with my build-up, or a mismatch with their expectations.

So to move them towards the topic of the evening (JavaScript), I went on to speak about the moment I fell for JavaScript, and about who crossed my path to inspire me.
I talked about my open source project backbone-everywhere, which was meant to be a demo for a startup. The project involves bleeding-edge open source JavaScript, which is common ground for most of us. So we ended with a discussion about our shared area of interest, which we had all hoped to talk about.

Afterwards I asked them for feedback, what they thought about the format, about my way of hosting.
They all preferred our participatory setup over a regular presentation, and felt energized.
Because I hardly got any critical feedback I kept asking for it.
One person then told me he got a little frustrated at not knowing how and where it would go. I thanked him, and explained that I am learning to detect such signals in the heat of the moment, so I can ask what is needed.
Some people told me that more structure in the informative section would be nice. I agreed there. The lack of a visual presentation gave them a new experience and engagement, but I realize that I should have some visual structure for stories that involve lots of technical aspects.

When most of the people were gone, I remained with the two initiators of the evening. They were very enthusiastic about my approach, and we talked about setting up a new JavaScript course together.
What an energizing and fruitful night! It’s wonderful to see everything come together, and life aspects seeping into work, and vice versa.

Backbone everywhere

I finally put my newly built Node.js MVC stack on github! You can download it here: backbone-everywhere.

What’s so special about it? Here’s my list of exciting features:

  • Pages are rendered on the Node.js server by Backbone and jQuery.
  • All script resources are bundled by browserify and fileify, uglified by uglify, and gzipped by connect-gzip for fast loading and deployment on other possible JavaScript environments.
  • The entire Backbone MVC stack works on the server, and is loaded in javascript enabled browsers to take over from there.
  • The app state is reflected in the URL by means of HTML5’s pushState, or using hash notation when that is not supported.
  • The same app state is regained for non-JavaScript browsers by pulling full requests from the server, so no worries about SEO!
  • All client / server communication is handled by socket.io (ajax is sooo 2009) and subscribed clients are updated with published models.
  • A JSON-RPC service listening to ‘/api’ requests, with an easy to build on service layer. Handy for non-web contexts such as mobile devices.
  • All data is persisted in Redis through my adaptation of backbone-redis, enabling indexing, sorting and foreign key lookups.
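The JSON-RPC layer is essentially a tiny dispatcher over a service registry. A minimal sketch, using JSON-RPC 2.0 envelopes (the actual service layer in the repo may differ):

```javascript
// Minimal JSON-RPC 2.0 dispatcher: look up the requested method in a
// registry of service functions and wrap the result (or error) in an envelope.
async function dispatch(services, request) {
  const { id, method, params } = request;
  const fn = services[method];
  if (!fn) {
    return { jsonrpc: '2.0', id, error: { code: -32601, message: 'Method not found' } };
  }
  try {
    return { jsonrpc: '2.0', id, result: await fn(...(params || [])) };
  } catch (err) {
    return { jsonrpc: '2.0', id, error: { code: -32000, message: err.message } };
  }
}
```

An HTTP handler on ‘/api’ then only has to parse the request body, call dispatch, and serialize the returned envelope, which is what makes the layer so handy for non-web contexts such as mobile devices.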

For me this is a whole new approach to engineering web applications, but I think I’ve managed to get a grip on it.
Not only that, it gave me a great impulse to reconnect with the pioneers of tomorrow. Because what I have done was build on top of the stuff from people with great vision.
Big shout out to the open source community, and the people willing and wanting to share. The sum of its parts will eventually overcome the current patent-trolling paradigm.

What are you waiting for? Dig in!