https://github.com/faisalman/ua-parser-js/issues/536 npm's ua-parser-js 0.7.29 contains a malicious payload that downloads and executes additional binaries.
First day of KubeCon and already some really interesting new developments. The Multi Cluster Ingress is a game changer. EKS's new Service Discovery in a multi-tenant environment and the new AWS Kubernetes Controllers were the most interesting takeaways from yesterday's AWS Container Day. Also interested in the new Gateway implementation, which has been a feature request since I basically started working with K8s.
A golden article about how to go live with a tech demo and the importance of running live traffic against localhost.
Do your demos like a boss at KubeCon
https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF Probably the best hardening guidance out there for Kubernetes.
KinK (Kubernetes in Kubernetes) seems like an interesting project, but the security impact is too high: KinK pods run fully privileged. https://github.com/Trendyol/kink/blob/42be76dabeb3b5743d8ed34d9ac301b0d32ea1b3/cmd/run.go#L212
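For context on why that matters: the linked line sets privileged mode on the node container, so the resulting pod spec looks roughly like this (a sketch with illustrative names, not KinK's exact manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kink-worker            # hypothetical name
spec:
  containers:
  - name: node
    image: kindest/node:v1.21.1
    securityContext:
      privileged: true         # near-root access to the host kernel and devices
```

A privileged container can load kernel modules, mount host devices, and generally escape to the node, which is why I consider the security impact too high for multi-tenant clusters.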
Was excited for a second thinking this had a way around it.
Their solution for this is JSON diff blocks, which tell Fleet to ignore changes to a specific part of the yaml. This, however, is super anti-gitops, because it's no longer the repo telling the cluster what the state should be; it's the cluster (fleet-agent) telling a human what the repo should be. Moreover, this is impossible to predict for an arbitrary resource, and so is antithetical to automating the management/generation of the gitops repo.
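For anyone who hasn't seen them, these diff blocks live in fleet.yaml as comparePatches; a sketch with illustrative resource names (not taken from a real repo):

```yaml
# fleet.yaml -- illustrative example
defaultNamespace: demo
diff:
  comparePatches:
  - apiVersion: apps/v1
    kind: Deployment
    name: my-app              # hypothetical resource
    namespace: demo
    operations:
    # tell fleet-agent to ignore drift on this field
    - {"op": "remove", "path": "/spec/replicas"}
```

Note that you need one entry per resource and per field that drifts, which is exactly the unpredictability problem described above.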
It tries to figure out what the applied resources should look like, but it can't account for things like mutating webhooks, so it's almost always wrong. When that happens, the bundles go into the state "modified", which makes it look like something is wrong. I suspect this could be fixed by doing a server-side apply, or by just not peering as deeply into applied yamls as it does.
The fleet.yaml files control what is in a bundle, and they have to be littered everywhere.
For some reason, resources are applied alphabetically. So Fleet will fail to apply the yaml called app-that-needs-sql.yaml, because the sql-details.yaml configmap doesn't exist yet, even though both are in the same folder in the gitops repo. I don't know how, but Flux has no problem with such a thing, so I classify it as poor implementation rather than a limitation.
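If you're stuck with the alphabetical ordering, one workaround (my own suggestion, not an official one) is numeric filename prefixes so that dependencies sort first:

```
gitops-repo/app/
  00-sql-details.yaml          # the ConfigMap, applied first
  10-app-that-needs-sql.yaml   # the workload that references it
```

It works, but having to encode an apply order into filenames is exactly the kind of thing a gitops tool should handle for you.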
Bundle names, which are based on the folder paths in the gitops repo, cap out at 63 characters, which means that deeply nested gitops repos just don't work.
Bundles (Fleet's CRD for resources to apply) are not a useful abstraction. They can contain anything from a single configmap all the way to a helm chart plus several different yamls. So deleting or modifying a bundle can affect an arbitrary number of resources.
My biggest problem with Rancher Fleet currently is that when something doesn't work, it's very hard to locate an error message telling you what is actually holding things up. You might see the fleet-agent getting a 401, or a bunch of resources in the state "missing" even though they are definitely in the gitops repo.
But to list some others:
There's a Terraform provider out there for ordering Domino's Pizza, lol. https://github.com/ndmckinley/terraform-provider-dominos
Cloud / DevOps / Automation Expert. Managing Kubernetes and Cloud at scale. Chess player, white-hat hacker and IT nerd.
Welcome to my blog! I mostly talk about DevOps, Cloud, Linux and Kubernetes. Huge tech, chess and outdoor fan.