Passed the AWS DevOps Engineer Professional Exam. Was challenging.

github.com/faisalman/ua-parser — the npm package ua-parser-js 0.7.29 contains a malicious payload that executes additional binaries.
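
If you depend on it, checking and pinning is quick. A sketch (0.7.28 is the release before the compromised one as I recall; verify against the official advisory before relying on it):

```sh
# Check whether ua-parser-js appears in your dependency tree, and at what version
npm ls ua-parser-js

# Pin to an exact, known-good release while you verify
npm install ua-parser-js@0.7.28 --save-exact
```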

The AWS Cloud Map MCS Controller is a must-have if you are running a multi-tenant EKS setup in your organization.
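
For context, the controller implements the upstream Multi-Cluster Services API: you mark a Service for export and it gets registered in AWS Cloud Map, where sibling clusters can discover it. A minimal sketch (the alpha API version is my assumption; check the controller's docs):

```yaml
# Export an existing Service from one member cluster; the MCS controller
# syncs it to AWS Cloud Map so other clusters in the set can resolve it.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: demo-service   # must match the name of a Service in this namespace
  namespace: demo
```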

If you are only focused on executing, that’s what we call the Leeroy Jenkins approach. :D

What turns a tool into a product is adoption and sustainability.

The "Shifting Spotify Engineering from Spreadsheets to Backstage" talk is really funny: it's structured in epochs, and the Stone Age looks very familiar.

First time hearing about Helmfile, a tool for organizing Helm releases. Looks very useful; basically the same idea as the ApplicationSet concept in Argo, where you keep one declarative file referencing all the value files for your Helm charts.
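
From a first look, a helmfile.yaml is roughly this shape (release names and value-file paths are invented for illustration):

```yaml
# helmfile.yaml: one declarative file listing every release and its value files
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  - name: my-app
    namespace: apps
    chart: bitnami/nginx
    values:
      - values/my-app/common.yaml
      - values/my-app/production.yaml
```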

Currently at the Dapr conference. Awesome tool.

First day of KubeCon and already some really interesting new developments. The Multi Cluster Ingress is a game changer. EKS's new service discovery for multi-tenant environments and the new AWS Kubernetes controllers were the most interesting takeaways from yesterday's AWS Container Day. Also interested in the new Gateway API implementation, which has been a feature request since I basically started working with K8s.
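
For the curious, the Gateway API splits the old Ingress into a Gateway (the listener) and routes that attach to it. A sketch, with placeholder names and an API version that may differ from what your controller supports:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: example-class   # provided by your gateway controller
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
    - name: web-gateway          # attach this route to the Gateway above
  hostnames:
    - "app.example.com"
  rules:
    - backendRefs:
        - name: web-service      # an ordinary Service in the same namespace
          port: 8080
```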

A golden article about going live with a tech demo, and the importance of getting live traffic to localhost.

Do your demos like a boss at KubeCon

blog.alexellis.io/kubecon-demo

Created an issue for this: github.com/Trendyol/kink/issue. Depending on the answer, I will create a PR with the necessary security context.

Kind in Kubernetes (KinK) seems like an interesting project, but the security impact is too high: KinK pods run fully privileged. github.com/Trendyol/kink/blob/
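
For reference, this is the kind of pod spec that raises the concern; an illustrative snippet, not copied from the KinK manifests:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kink-worker-example    # hypothetical name
spec:
  containers:
    - name: node
      image: kindest/node:v1.21.1   # example tag
      securityContext:
        privileged: true   # disables most container isolation; near-root on the node
```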

Was excited for a second thinking this had a way around it.

Their solution for this is JSON diff blocks that tell fleet to ignore changes to a specific part of the yaml. This, however, is super anti-gitops, because it's not the repo telling the cluster what the state should be; it's the cluster (fleet-agent) telling a human what the repo should be. Moreover, this is impossible to predict for an arbitrary resource, and so is antithetical to automating the management/generation of the gitops repo.
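
For readers who haven't hit this, the diff blocks live in fleet.yaml and look roughly like this (resource names invented; check the Fleet docs for the exact schema):

```yaml
diff:
  comparePatches:
    - apiVersion: apps/v1
      kind: Deployment
      name: my-app
      namespace: default
      jsonPointers:
        - /spec/replicas   # tell fleet to ignore drift at this path
```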

It tries to figure out what the applied resources should look like, but it can't account for things like mutating webhooks, so it's almost always wrong. When that happens, the bundles go into state "modified", which makes it look like something is broken. I suspect this could be fixed by doing a server-side apply, or just by not peering as deeply into applied yamls as it does.
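
The server-side apply idea, for reference: let the API server do the merge and track field ownership, so fields set by mutating webhooks stop showing up as drift:

```sh
kubectl apply --server-side -f app-that-needs-sql.yaml
```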

The fleet.yaml files control what is in a bundle, and they have to be littered everywhere.
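
A minimal one, to show the scale of the littering: one of these per directory that should become a bundle (field names as I remember them from the Fleet docs; values are placeholders):

```yaml
# fleet.yaml
defaultNamespace: apps
helm:
  releaseName: my-app
  valuesFiles:
    - values-production.yaml
```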

For some reason, resources are applied alphabetically. So fleet will fail to start the yaml called app-that-needs-sql.yaml because it's missing the sql-details.yaml configmap, even though both are in the same folder in the gitops repo. I don't know how, but flux has no problem with this, so I classify it as poor implementation rather than a limitation.
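
The problem in miniature, plus the crude workaround I'd reach for (my suggestion, not an official fleet feature):

```sh
ls gitops-repo/apps/my-app/
#   app-that-needs-sql.yaml   <- applied first, fails: configmap not there yet
#   sql-details.yaml          <- applied second

# Prefix files so dependencies sort first
mv sql-details.yaml 00-sql-details.yaml
```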

Bundle names, which are derived from folder paths in the gitops repo, cap out at 63 characters (presumably because the name ends up in a Kubernetes label, which has that limit), which means that deeply nested gitops repos just don't work.
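
A quick way to check whether a nesting scheme will fit, with a made-up path:

```sh
echo -n "clusters-production-eu-west-1-teams-platform-monitoring-prometheus" | wc -c
# 66 -> over the 63-character limit
```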

Bundles (fleet's CRD for resources to apply) are not a useful abstraction. They can contain anything from a single configmap all the way to a helm chart plus several different yamls. So deleting or modifying a bundle can affect an arbitrary number of resources.

My biggest problem with Rancher fleet currently is that when something doesn't work, it's very hard to locate an error message telling you what is actually holding things up. You might see the fleet agent giving a 401, or a bunch of resources in state "missing" while they are definitely in the gitops repo.

But to list some others:
