Blog

  • Milieuzone Utrecht app

    The municipality of Utrecht is the first to ban 15-year-old diesel cars from the city centre. Since I find it rather problematic to get rid of my 15-year-old BMW 530D (which I have treated with so much love), I built an app that warns me when I approach the milieuzone (the low-emission zone).
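
    The heart of such a warning is nothing more than a geofence check on every location update. Here is a minimal sketch of that idea in Go; note that the real milieuzone is a polygon, so the centre point and radius below are rough, made-up approximations:

    ```go
    package main

    import (
        "fmt"
        "math"
    )

    // Hypothetical centre of the Utrecht milieuzone and a warning radius;
    // the real zone is a polygon, a circle is only an approximation.
    const (
        zoneLat    = 52.0907
        zoneLon    = 5.1214
        warnRadius = 2000.0 // metres
    )

    // haversine returns the great-circle distance in metres between two coordinates.
    func haversine(lat1, lon1, lat2, lon2 float64) float64 {
        const earthRadius = 6371000.0 // metres
        toRad := func(deg float64) float64 { return deg * math.Pi / 180 }
        dLat := toRad(lat2 - lat1)
        dLon := toRad(lon2 - lon1)
        a := math.Sin(dLat/2)*math.Sin(dLat/2) +
            math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
        return 2 * earthRadius * math.Asin(math.Sqrt(a))
    }

    func main() {
        // A location update as it might come from the device's GPS.
        lat, lon := 52.1007, 5.1300
        if d := haversine(lat, lon, zoneLat, zoneLon); d < warnRadius {
            fmt.Printf("Warning: milieuzone %.0f m ahead!\n", d)
        }
    }
    ```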

    I have submitted the app to the App Store and am waiting for approval. In the meantime, I have created a page here with details about the Milieuzone Utrecht app.

    I have also put a web version online here: the web version of the Milieuzone Utrecht app.

    If you find it useful, I would love it if you left a comment :)

  • Google’s golang for president

    Being a hungry geek, I can't help continually reinventing myself, and so I read blogs here and there on the current state of software and architecture. But no alarm bells went off during the last couple of months whenever I came across Google's Go language. I think it was just a classic example of my assumptions getting in the way (knowing Docker was built using Go, I figured it was some new low-level general-purpose language). But now that I have finally started studying it, it appeals to me more and more.

    You see, after having advocated Node.js for some years now, and seeing the architectural shift towards frontend middleware become a reality, I never really looked for anything better or more suited to that. And that is exactly where Go fits in. It is such an elegant answer to the need for scalable applications that handle concurrency and parallelism gracefully. It still treats functions as first-class citizens, but at the same time it's blocking! It's kinda weird that I am excited about that, since I have been addicted to events for the last few years and have a hard time shedding that skin. But I have seen the complexity of large-scale applications built upon callbacks and promises, and it secretly made me wish for something simpler; something that did not make us restructure our code around control flow all the time. Still, the flexibility kept me in love with Node.js, favoring it above anything else.
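
    To make that concrete: what would become a pyramid of callbacks in Node.js reads as plain, blocking code in Go, while the runtime multiplexes goroutines under the hood. A minimal sketch, fetching two made-up URLs concurrently:

    ```go
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    // fetch blocks until the body is read, but it only blocks its own
    // goroutine; the Go runtime keeps other goroutines running meanwhile.
    func fetch(url string, out chan<- string) {
        resp, err := http.Get(url)
        if err != nil {
            out <- fmt.Sprintf("%s: %v", url, err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        out <- fmt.Sprintf("%s: %d bytes", url, len(body))
    }

    func main() {
        urls := []string{"https://example.com/", "https://example.org/"}
        out := make(chan string)
        for _, u := range urls {
            go fetch(u, out) // fan out: one goroutine per request
        }
        for range urls {
            fmt.Println(<-out) // fan in: read results as they arrive
        }
    }
    ```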

    And now I have found Go. And Rust, but that is another story, one that might not have a happy ending.

  • My pentesting crash course

    Oh yeah, after a silent retreat of about 4 months, I was super hungry for new knowledge and couldn't resist diving into the world of internet security. I got myself up to par with the current state of affairs with regard to vulnerabilities and exploits and pentesting distros, and learnt the basics of crypto technology to make sense of it all. I was getting kinda paranoid and gloomy when I found out that the cyber criminals were winning and already had a huge head start. All the vulnerabilities that were found and left in place by vultures such as the NSA and other criminals allowed for mass surveillance and infiltration and manipulation of our digital lives, including our finances.

    So I just had to study on, to know what is going on, what I could do, or what I SHOULD do. But I am not sure anymore; maybe I just want to stay with the sheep and pretend I am not interesting to any party, and manage to keep my data intact and safe from criminals by rotating passwords and such. Or should I go completely off the grid and hope to turn my signals into noise? I have no such illusions, knowing where and how my data is tapped. What I can do from now on is use encryption that the NSA did not get their hands on (like RSA/ECC, AES and SHA-3). Please google for yourself; you can start by checking the links in this post by Bruce Schneier.
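
    For what it's worth, strong crypto is within everyone's reach these days. Here is a minimal sketch of authenticated encryption with AES-256-GCM using nothing but Go's standard library (the key is a throwaway random one; deriving and storing keys properly is a story of its own):

    ```go
    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
    )

    func main() {
        // A throwaway 256-bit key; in real use you would derive it from a
        // passphrase or load it from a key store.
        key := make([]byte, 32)
        if _, err := rand.Read(key); err != nil {
            panic(err)
        }

        block, err := aes.NewCipher(key)
        if err != nil {
            panic(err)
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            panic(err)
        }

        // GCM needs a unique nonce per message under the same key.
        nonce := make([]byte, gcm.NonceSize())
        if _, err := rand.Read(nonce); err != nil {
            panic(err)
        }

        ciphertext := gcm.Seal(nil, nonce, []byte("keep this away from prying eyes"), nil)
        plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", plaintext)
    }
    ```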

  • Docker for finer-grained DevOps

    While working with AWS's rudimentary image bootstrapping, which let me either boot and configure from a supported image or boot directly from our own custom image, I came to realize the cost and frustration of this archaic mechanism for bringing up a new operational node, whether to scale out or to update or roll back nodes. There had to be a better way.

    So I started looking around for other ways of deploying and managing infrastructure. And there was Docker! It was only a couple of months old, but I was sure it would take the world by storm, and I started experimenting with it. It would allow me to build one image with all the necessary infrastructure to run an app, and deploy it everywhere! And if I needed to upgrade part(s) of the infrastructure, I could do so very easily and just have my nodes update by pulling in the diffs (only the changed image layers)! Super cool!
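
    To give a flavor of it, here is a minimal, hypothetical Dockerfile for a small Go service (image name and paths are made up). Each instruction becomes a cached layer, which is exactly why nodes only pull the diffs:

    ```dockerfile
    # Hypothetical image for a small Go service; every instruction below
    # becomes a layer, and unchanged layers are shared between versions.
    FROM golang:1.22

    WORKDIR /app

    # Dependencies change rarely, so they get their own cached layer.
    COPY go.mod go.sum ./
    RUN go mod download

    # The app code changes often; only this layer (and later ones) is rebuilt.
    COPY . .
    RUN go build -o /usr/local/bin/myservice .

    EXPOSE 8080
    CMD ["myservice"]
    ```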

    Now I knew I was slowly being sucked into DevOps land, but I just had to go with my gut and explore this beautiful new territory, even though it wasn't my core expertise I was building on. This attitude allowed me to dive right in and get to know the ins and outs and the dos and don'ts of building Docker architectures. I don't want to give detailed instructions on how to do things on this blog, because there is enough of that to be found, but let me just do what I do best, and that is to inspire others to try the stuff I am excited about.
    And if there's one thing I am very excited about, it is Docker and this whole new movement in DevOps land, with things such as CoreOS using automated, centralized configuration management through etcd. There's a whole slew of PaaS offerings coming our way, and our developers' lives will be made a whole lot easier thanks to the initial work of the dotCloud people :)
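
    For a taste of what that centralized configuration feels like, here is a minimal sketch using etcd's official Go client (the v3 API, which postdates this post; the endpoint and key are made up). Every node can watch the same key and reconfigure itself the moment it changes:

    ```go
    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // Hypothetical etcd endpoint; in a CoreOS cluster every node runs one.
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // One node publishes a config value...
        if _, err := cli.Put(context.Background(), "/config/app/loglevel", "debug"); err != nil {
            panic(err)
        }

        // ...and every other node can watch it and reconfigure on the fly.
        for resp := range cli.Watch(context.Background(), "/config/app/loglevel") {
            for _, ev := range resp.Events {
                fmt.Printf("config changed: %s = %s\n", ev.Kv.Key, ev.Kv.Value)
            }
        }
    }
    ```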

  • Event store with Node.js and AWS

    It's been a while since I posted anything here, but a lot has happened on my end. I will give a quick update on the things that have interested me since then.

    In 2013 I created my first auto-scaling event store architecture for a huge client in Node.js: custom web servers received events from different endpoints in different formats, meta-tagged them and injected them into Amazon SQS queues, with processors on the other end enriching and transforming the events for storage in AWS DynamoDB. Post-processors ran periodically to store aggregates in S3. The system was required to auto-scale to handle 200,000 events per second. (Yes, you read that right.)

    I created a stateless architecture with the code for all the roles (server, processor, post-processor, etc.) built into one repo, which our Bamboo server would tar and upload to S3, so that new nodes could bootstrap themselves from it. Each node was booted by Puppet with a role to perform, and thus knew its part to play. For hot updates and rollbacks we would tell a SaltStack master to update a certain range of nodes, which would then pull the desired source from the S3 registry again and update themselves without downtime. Pretty nifty, but rather proprietary.
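
    The processor role boiled down to a loop: receive a batch from the queue, enrich and transform, write to DynamoDB, and only then delete from the queue. We built it in Node.js, but in the spirit of the closing paragraph below, here is a minimal sketch of that loop in Go with the AWS SDK (queue URL, table name and region are made up):

    ```go
    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/aws/aws-sdk-go/service/sqs"
    )

    // Hypothetical names; the real ones were client-specific.
    const (
        queueURL  = "https://sqs.eu-west-1.amazonaws.com/123456789012/events"
        tableName = "events"
    )

    func main() {
        sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("eu-west-1")))
        q := sqs.New(sess)
        db := dynamodb.New(sess)

        for {
            // Long-poll the queue for a batch of raw events.
            out, err := q.ReceiveMessage(&sqs.ReceiveMessageInput{
                QueueUrl:            aws.String(queueURL),
                MaxNumberOfMessages: aws.Int64(10),
                WaitTimeSeconds:     aws.Int64(20),
            })
            if err != nil {
                log.Println("receive:", err)
                continue
            }
            for _, msg := range out.Messages {
                // "Enrichment" stood in here for parsing, meta-tagging, transforming.
                item := map[string]*dynamodb.AttributeValue{
                    "id":      {S: msg.MessageId},
                    "payload": {S: msg.Body},
                }
                if _, err := db.PutItem(&dynamodb.PutItemInput{
                    TableName: aws.String(tableName),
                    Item:      item,
                }); err != nil {
                    log.Println("put:", err)
                    continue // leave the message on the queue for a retry
                }
                // Only delete once the event is safely stored.
                q.DeleteMessage(&sqs.DeleteMessageInput{
                    QueueUrl:      aws.String(queueURL),
                    ReceiptHandle: msg.ReceiptHandle,
                })
            }
        }
    }
    ```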

    The company I worked for used Puppet for configuration management, but also for app deployment, which I thought was the wrong approach. Puppet is, in my opinion, not designed for real-time deployment, but rather for booting and maintaining VMs from configuration. That is how I came across SaltStack's powerful real-time remote execution capabilities, and I decided to script our deployment process to be controlled by SaltStack. I actually haven't kept up on that front in a long time, but I saw that it fit the bill for our needs, and I was so bold as to build it into our PoC.
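
    In practice, a rolling update then becomes a one-liner fired from the Salt master, targeting nodes by a role grain (the grain, state and path below are made up):

    ```sh
    # Roll the new build out to the processor nodes, ten at a time.
    salt -G 'role:processor' --batch-size 10 state.sls deploy

    # Afterwards, verify that every node ended up on the new version.
    salt -G 'role:processor' cmd.run 'cat /opt/app/VERSION'
    ```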

    Too bad we hadn't learned about Google's Go language back then, otherwise I would have thought twice and probably opted for it, instead of Node.js, for our highly concurrent applications.