The three observability pillars, erm, signal types (logs, metrics, traces) are not equally important to every role or in every situation. In this post we discuss the transition from sysadmins traditionally monitoring servers to developers and platform teams practicing observability of services and business impact. And we throw in some Machine Learning, for good measure.
Back in the mid 90s, when I commenced my university studies, I once had to visit the sysadmins because I had fat-fingered the (paper) form applying for mail access. I distinctly remember the racks, the cables, the servers, and the seriousness in the air…
Apologies for the clickbaity title, but since I have your attention now: in this post I describe my terminal setup in a little greater detail. I’m using a combination of tmux + Alacritty + Fish shell (tAF for short) and here’s how I’ve got it configured. I’m pragmatic about the setup, so if you think kitty is the best terminal, if you are into screen, if you are convinced the only shell worth using is zsh or Oil, then I have one thing for you: YOU’RE WELCOME TO USE WHATEVER YOU LIKE. You do you and I do …
In the late 90s, when I studied telematics, there was one course that didn’t really enthuse me. Maybe it was because I had chosen the software path of the studies (one had to pick a specialization towards the MSc), or maybe the topic itself wasn’t exciting; I can’t remember. What I do remember is that it sounded and felt pretty much like what you can read up on Wikipedia:
Systems. Feedback loops. Transfer functions. SISO/MIMO. Process variables. Use cases in mechanical and electrical systems.
Fast forward some twenty plus years and I…
In which I contemplate cloud native compute and how we’re moving more and more into a polyglot setup.
So, what is this about? What do I mean by cloud native compute, why is it polyglot and what are the challenges we’re facing?
It’s really horses for courses, applied to cloud native compute. Pick the “right” compute form for a given workload; in real-world setups, many of those compute forms, such as containers or Function-as-a-Service (FaaS), will co-exist. It’s not a zero-sum game.
I’ve been wanting to write about the topic of polyglot cloud native compute (PCNC) for some time now…
1993: A bunch of folks at the National Center for Supercomputing Applications (NCSA) write the specification for calling command-line executables from Web servers. This evolves into CGI, the Common Gateway Interface, which, if you’re a fancy-pants, you would call by its royal Internet name, RFC 3875.
1996: The standardization community strikes again: SQL/PSM, or Persistent Stored Modules, is published as an extension of SQL-92. With this, we can make not only our Web servers but now also our relational databases less secure and slower.
We use different programming languages and development environments to write apps. Each language comes with a different flow, and we typically go through different phases, from prototyping to integration-level activities to incrementally adding features or fixing bugs once the app is in production. Now, a developer coming from a “traditional” environment to Kubernetes generally expects that their natural, well-known workflow changes as little as possible. This article reviews where we stand in Kubernetes-land concerning developing apps and where we may be heading.
How do you wait for something to happen with kubectl? I used to use a while true loop in a shell script, checking with a complicated kubectl get command until a certain condition, such as condition=Ready, was met. No more! :)
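That old pattern, sketched as a generic poll loop; note that check_ready here is a hypothetical stand-in for the real probe (in practice, a kubectl get … -o jsonpath=… call checking for something like condition=Ready):

```shell
# The pre-`kubectl wait` pattern: poll in a loop until a condition holds.
# check_ready is a made-up stand-in so the sketch is self-contained;
# swap in your actual kubectl get probe.
i=0
check_ready() {
  i=$((i + 1))
  [ "$i" -ge 3 ]  # pretend the resource turns Ready on the third poll
}
until check_ready; do
  sleep 1
done
echo "ready after $i polls"
```

The real script would also want a timeout, which is exactly the kind of boilerplate kubectl wait takes off your hands.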
Meet the kubectl wait command and see it in action here.
First, let’s create a job called worker that does something utterly useless in itself (print a word to stdout and pause for 3 seconds, ten times):
$ kubectl version --short
Client Version: v1.12.0
Server Version: v1.11.0
$ kubectl create ns waitplayground
$ kubectl…
I’m on my way home from Berlin where we had a really good Cloud Native Computing Foundation (CNCF) meetup on the topic of applied Kubernetes security, hosted by the good folks of Kinvolk. This report sums up what happened and has all the slide decks for you to binge read if you feel like it.
We kicked off with a talk by Kinvolk’s own Michael Schubert:
Recently, I had a look at a bunch of shell scripts on my computer that I use to quickly ramp up a debug pod or publish a service in a Kubernetes cluster. I thought to myself: why not package them up nicely and share them, so that others can benefit from them too?
Meet kn, short for Kubernetes native, that you might find useful to quickly jump into a Kubernetes cluster in order to poke around or even test-drive a networked app, sharing it on the public net. Sounds fun? Let’s jump into the deep end!
This is what a typical…
ABC as in Always Be Controlling. Yeah, I know, I lifted the ABC moniker from Blake, the character in the ’92 movie Glengarry Glen Ross. What I mean by it: not only should you have RBAC enabled, obviously, but you should always create and use dedicated service accounts for your apps. At the very least for Kubernetes-native apps, but it doesn’t hurt to just get into the habit of doing:
$ kubectl create ns myapp
$ kubectl -n myapp create sa thesa
So now you have prepared a dedicated service account thesa in the namespace myapp, and I do hope for…
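To actually put thesa to use, reference it in the pod spec via serviceAccountName. A minimal sketch, assuming a live cluster; the pod name theapp and the nginx image are illustrative placeholders, not from the original post:

```shell
# Run a pod under the dedicated service account created above.
# 'theapp' and 'nginx' are placeholders — adapt to your app.
kubectl -n myapp apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: theapp
spec:
  serviceAccountName: thesa
  containers:
  - name: main
    image: nginx
EOF
```

Without an explicit serviceAccountName, the pod silently runs under the namespace’s default service account, which is exactly the habit ABC is meant to break.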