Over a year ago, after noticing the lack of a mobile app for trading on the Kraken exchange, I set out to build one. A good opportunity to learn React Native, I thought. After some months of hard work it had everything I wanted. But after using it myself for a while I lost interest. Well, maybe also because the whole crypto market crashed. Anyway, I (literally) distanced myself and started traveling again.
But a little over a month ago I broke my hip and could only lie down. Separated from the world, I stepped into my programming bubble once again and finally managed to finish the app. So I hereby present to you the first production release of the new KrakenFX app for Android (iOS still pending review).
I specifically wanted the app to stay simple, beautiful and easy to use, because some of the apps out there are so ugly and hard to grasp, it just made me cry ;p
So why don’t you head over to the Play Store and try it out? It makes trading on Kraken fun again. Their API rarely times out nowadays, which tells me Kraken has made an effort to stay in the game as one of the major exchanges.
Apple’s purchase of the Workflow app, and its release as the Shortcuts app in the latest iOS 12, is a step forward in personal automation if you ask me. A huge step, but let’s zoom in on what is possible from a developer’s standpoint.
Shortcuts are a succession of actions: ones interfacing with the exposed native device capabilities (camera, maps, text messaging, etc.), script actions such as setting/getting a variable or looping over a list of found/selected items, or even other inline shortcuts. Working with them, you can almost always find a way to realize the idea in your head with the building blocks provided. But that is exactly the limitation of the current implementation: they are linearly executed, predefined building blocks that take an input and create an output. This leads to very cumbersome programming, with simple constructs like filtering and sorting becoming a huge headache.
To get function-like behaviour you typically first park the main thread’s value in a variable, extract what you need into new variables, do some processing (recursion or looping bringing even more headaches), and then ‘Get Variable’ to come back to the main thread. I find myself wishing for a real scripting environment all the time. Of course Apple tries to keep the attack vectors to a minimum with this approach, but maybe in the future a proofing layer over a scripted approach can achieve the same result, making us developers happy shortcut coders.
But, being a power user who sees automation possibilities everywhere, I felt the need to create some shortcuts. I have spent quite some time in my car lately, and am disappointed by Siri’s shortcomings when it comes to dictation in other languages and delegating results to other apps. So I created the following shortcuts:
As you can see, I have named the shortcuts with “>” in front (expressing that they expect a previous input) or with “>” behind (expressing that they are building blocks for other shortcuts). I hope these shortcuts and their implementation details can serve you as well. I still have to find out how to publish my shortcuts in an open source manner, like on GitHub. Maybe I will just create a repo with a doc of iCloud-hosted shortcut URLs.
Ok, I have had it. I am so frustrated with my Danalock lock and app that I have to tell the world what is bothering me about it.
My Danalock v2 was not strong enough (and was ugly and big), and broke its internals after some rotations. Danalock didn’t want to refund me for v2, but offered to sell me v3 at half price. Take it or leave it. So, with a bitter taste in my mouth, I took it.
V3 fared better, and lasted a year. Then it just fell off, and the company admitted that the mounting ring must have worn out. So they sent me a new one and it works again (for a year?).
But what frustrated me even more was the app. It had the most broken UX you can imagine from the start, and they never managed to make it any better. I strongly advised them to hire a good UX team, but they only made features less usable. Check out what my guests now see after they have finally managed to make an account:
Actually, the red and green icon is greyed out and only becomes coloured during their stay. Ok, I can live with that. Most guests understand it, but a lot keep asking me what is going on. Those that have an active key, and still see the button halves (indicating that the app has trouble connecting to the lock via Bluetooth, because it is not in range or the connection is flaky), start pressing the button halves only to discover that they have “no rights to unlock the door remotely”. Where in the interface does it even become clear that remotely opening doors is an option? Sure, Danalock offers a “Danalock bridge”, but I don’t have one registered on the lock, so why offer that interface to the users??
Then, when they arrive at the doorstep and manage to connect with the lock the interface changes to this:
What button would you press to open the lock? The big fat green one, right? WRONG! You have to press the small red one. (The green one deep-locks the door!) How simple could it be? Maybe two buttons that say “open” and “lock”?
How they conceived of such a confusing interface is beyond me. If a user test group saw this, they would certainly fail to use the app. I suspect the Danalock developers think they are smarter than their user base.
But why did I choose this lock over others? They offer Airbnb integration. Check out their promise, because it is nowhere near what they deliver. It never managed to work, because of the following:
I have to choose which listing to associate with the lock, but I want them all associated, as I have multiple rooms in one house with one lock. Why force that choice?
I suspect this is related to issue #2, as I sometimes reassociate the lock with another listing when a booking comes in:
Invitations are sent with links that always seem to be expired, frustrating guests, so I still have to send one manually.
I can’t send invitations on the day that my guest arrives (or even tomorrow!), so I can never serve short bookings. Why have that limitation?
So I was very disappointed, quickly avoided that shitty part of the app, and started sending links manually. But then my guests started complaining about manual invitation links also not working or being expired, even though I had just sent them, well within the 24-hour window shown in the next screenshot:
Again, their “smart” developers introduced limitations that don’t serve any business purpose. Not only that, their links miraculously expired even within the 24-hour window! These bugs managed to frustrate our guests so much that our Airbnb ratings for the “Check-in” experience started going down. So I had to prepare my guests by text message for a not-so-nice and sometimes failing experience, because I needed them to be able to get in with that app!
But what bothers me most about all of this is the lousy stance Danalock takes towards bug reporters like me. Instead of supporting us and evaluating our needs and grievances, we are left out in the cold. One year ago, after I threatened to tell the world the truth about my experiences, they immediately changed their tone, stopped defending themselves, and used polite language in their responses. Months later they introduced Zendesk, but none of the grievances were addressed. More guests got put off by their product. Nothing has changed in over a year of me trying to work with them. Nothing but hiding behind their choices, their stupid logic and their promises, leaving me to clean up their mess and making me do way too much work to manage guest entry to my property.
So out it is, from the bottom of my festering gut…let’s hope this post helps to bring focus to the Danalock team.
After traveling for a long time I started playing with tech again. I started building a crypto currency trading app for the Kraken Exchange API. The resulting app can be downloaded here: expo.io/@morriz/krakenfx-react-native.
But then I started playing with Kubernetes again, and started working on mostack: a stack with Kubernetes best practices. It was a long, hard road past obscure pitfalls and lessons learned, some of which I just have to give back in the hope that you may avoid them.
To automate software building we need a CI/CD build system. I chose to go with Drone, as I like the simplicity of working with Docker containers, and it’s open source and not SaaS. But Drone uses Docker in Docker (dind), and that gave me the following problem:
Drone starts the host Docker container running dind with a custom network. Probably for good reasons, but this makes it impossible to resolve any cluster IPs from known Kubernetes service names.
I needed to docker push to a locally running docker-registry service, as well as make kubectl tell the API server to update deployments. Since there is no way around this, I had to use the host Docker socket and manually instrument the wiring of the plugins, including the custom DNS settings. Please see the .drone.yml in the morriz/nodejs-demo-api repo for how I did that. For more information about my DNS-related issues, see my posts on the Drone Discourse.
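The gist of that wiring, as a hedged sketch (the kube-dns IP, search domain, registry name and image tag below are illustrative assumptions, not the exact values from my setup):

```shell
# Run a build step against the host Docker daemon instead of dind,
# by mounting the host socket into the step container.
# The --dns/--dns-search flags point the container at kube-dns so that
# cluster service names (like the local registry) resolve again.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --dns 10.96.0.10 \
  --dns-search default.svc.cluster.local \
  docker:latest \
  docker push docker-registry:5000/myapp:latest
```

In Drone itself this translates to mounting the host socket as a volume on the relevant pipeline steps (which requires the repo to be marked trusted) and setting the DNS options there.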
The biggest challenge in k8s userland is the deployment of the manifests. Ideally one would like a uniform approach to apply the entire new desired cluster state in one go, preferably automated after a git push to the cluster repo. For now I chose to experiment with Helm, which allows me to make one root ‘Chart’ (the name they use for a ‘package’) for the entire cluster, with app subcharts that describe the components running on the cluster.

But somehow the Helm people have decided to use a ‘Tiller’, an agent pod listening to the helm client. Supposedly it helps in managing the cluster, but the logician in me says it goes against the unidirectional flow of stateless architectures. I wanted to avoid running the agent, and luckily the ‘template’ helm plugin lets me do that. You can install it with helm plugin install https://github.com/technosophos/helm-template. Now we can just apply the entire application state (from the root folder) like this: helm template -r mostack . | kubectl apply -f -
Another downside to using Helm is that I can’t deploy subcharts into their own namespaces. But that option might come in the future.
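Put together, the Tiller-less workflow from the root chart folder looks like this (the release name mostack comes from my setup; yours will differ):

```shell
# Install the template plugin once; it renders charts client-side,
# so no Tiller agent has to run in the cluster
helm plugin install https://github.com/technosophos/helm-template

# Render the root chart (and all subcharts) with release name "mostack"
# and pipe the resulting manifests straight into kubectl
helm template -r mostack . | kubectl apply -f -
```

Because the rendering happens entirely on the client, the only thing touching the cluster is plain kubectl apply, which keeps the flow unidirectional.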
While I was minding my health on many levels in a beautiful place called Ängsbacka, friggin criminals got access to my WordPress site and used it as a spam server. Thanks to my good friend who hosts it, it was pulled offline for further inspection.
But while in retreat I recovered from serious back problems and stiffness, and decided to take a whole year off to travel the world and stay healthy. I did not want to slip back into my previous life, which involved way too much sitting behind a computer. So I quickly fixed this blog by putting the wp_posts table back into a new setup, and by doing just that I regained all my pages and posts. Nice!
So without much further ado I present the new and stable iD!OTZ wordpress website, based on the twenty ten something theme 😉
The municipality of Utrecht is the first to ban 15-year-old diesel cars from the city center. Because I find it rather problematic to get rid of my 15-year-old BMW 530D, which I have treated with so much love, I built an app that warns me when I approach the environmental zone.
Being a hungry geek I can’t help but keep reinventing myself, so I read blogs here and there on the current state of software and architecture. But no alarm bells went off in the last couple of months when I came across Google’s Go language. I think it was just a classic example of my assumptions getting in the way (knowing Docker was built in Go, I figured it was some new low-level generic language). But now that I have finally started studying it, it appeals to me more and more.
You see, after having advocated Node.js for some years, and seeing the architectural shift towards frontend middleware become a reality, I never really looked for anything better or more suited for that. And that is exactly where Go fits in. It’s such an elegant solution to the need for scalable applications that handle concurrency and parallelism gracefully. It still has first-class functions, but at the same time it’s blocking! It’s kind of weird that I am excited about that, since I have been addicted to events for years and have a hard time shedding that skin. But I have seen the complexity of large-scale applications built upon callbacks and promises, and it secretly made me wish for something simpler. Something that did not make us do custom code (re)structuring all the time. But the flexibility just kept me in love with Node and favoring it above anything else.
And now I found Go, and Rust, but that is another story that might not have a happy ending.
Oh yeah, after a silent retreat of about 4 months, I was super hungry for new knowledge and couldn’t resist diving into the world of internet security. I got myself up to par with the current state of affairs regarding vulnerabilities, exploits and pentesting distros, and learnt the basics of crypto technology to make sense of it all. I was getting kind of paranoid and gloomy when I found out that the cyber criminals were winning and already had a huge head start. All the vulnerabilities that were found and left in place by vultures such as the NSA and other criminals allowed for mass surveillance and for infiltration and manipulation of our digital lives, including our finances.
So I just had to study on, to know what is going on, what I could do, or what I SHOULD do. But I am not sure anymore; maybe I just want to stay with the sheep, pretend I am not interesting to any party, and manage to keep my data intact and safe from criminals by rotating passwords and such. Or should I go completely off the grid and hope to turn my signals into noise? I have no such illusions, knowing where and how my data is tapped into. What I can do from now on is use encryption that the NSA did not get their hands on (like RSA-ECC/AES/SHA-3). Please google for yourself. You can start by checking the links in this post by Bruce Schneier.
While working with AWS’ rudimentary image bootstrapping, which let me either boot and configure from a supported image or boot directly from our own custom image, I came to realize the price and frustration of this archaic mechanism for bringing up a new operational node to scale out or to update/rollback nodes. There had to be a better way.
So I started looking around for other ways of deploying and managing infrastructure. And there was Docker! It was only a couple of months old, but I was sure it would take the world by storm, and I started experimenting with it. It would allow me to build one image with all the necessary infrastructure to run an app, and deploy it everywhere! And if I needed to upgrade part of the infrastructure, I could do so very easily and just have my nodes update by pulling in diffs! Super cool!
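That workflow boils down to a few commands (the registry and image names here are made up for illustration):

```shell
# Build one image containing the app plus all its runtime dependencies
docker build -t registry.example.com/myapp:1.0.1 .

# Push it to a registry so every node can reach it
docker push registry.example.com/myapp:1.0.1

# On each node, pulling the new tag only downloads the changed
# image layers (the "diffs"), then the container is restarted
docker pull registry.example.com/myapp:1.0.1
docker run -d registry.example.com/myapp:1.0.1
```

The layered image format is what makes the updates cheap: unchanged base layers are cached on the node, so a small app change ships as a small download.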
Now I knew I was slowly being sucked into DevOps land, but I just had to go with my gut and explore this beautiful new territory, even though it wasn’t my core expertise I was building on. This attitude allowed me to dive right in and get to know the ins and outs and the dos and don’ts of building Docker architectures. I don’t want to give detailed instructions on how to do things on this blog, because there is enough of that to be found, but let me just do what I do best, and that is to inspire others to try the stuff I am excited about.
And if there’s one thing I am very excited about, it is Docker and this whole new movement in DevOps land, with things like CoreOS utilizing automated centralized configuration management such as etcd. There’s a whole slew of PaaS offerings coming our way, and our developers’ lives will be made a whole lot easier thanks to the initial work of the dotCloud people 🙂
It’s been a while since I posted anything here, but a lot has happened on the front. I will give a quick update on the things that have interested me since then.
In 2013 I created my first auto-scalable event store architecture for a huge client in Node.js, involving custom web servers receiving events from different endpoints in different formats, meta-tagging them, and injecting them into Amazon queues, with processors on the other end enriching and transforming the events for storage in AWS DynamoDB. Post-processors would run periodically to store aggregates in S3. It was required to auto-scale to handle 200,000 events per second. (Yes, you read that right.) I created a stateless architecture with the code for all the roles (server, processor, post-processor, etc.) built into one repo, which would be tarred and deployed onto S3 by our Bamboo server, to allow new nodes to be bootstrapped with it. The node itself was already booted by Puppet with a role to perform, and thus knew its role to play. For hot updates and rollbacks we’d tell a Saltstack master to update a certain range of nodes, which would then pull the wanted source from the S3 registry again and update themselves without downtime. Pretty nifty, but rather proprietary.
The company I worked for used Puppet for configuration management, but also for app deployment, which I thought was the wrong approach. Puppet is imo not designed for realtime deployment, but rather for booting and maintaining VMs from config. That is how I came across Saltstack’s powerful realtime command capabilities and decided to script our deployment process to be controlled by Saltstack. I actually haven’t kept up on that front in a long time, but I saw it fit the bill for our needs, and I was so bold as to build it into our POC.
Too bad we hadn’t learned about Google’s Go language back then, otherwise I would have scratched myself behind the ears and probably opted for it instead of Node.js for our highly concurrent applications.