It’s the last day of CNCF KubeCon & ServiceMeshCon North America — you can register and watch on-demand.
The CNCF Survey 2020 is out with some impressive results. Cloud-native adoption and container-orchestration-driven solutions are on the rise, and so is all the associated ecosystem tooling, solution enablers included.
Let's start with a good backdrop story.
There comes a time when you have your shiny, new Kubernetes Cluster.
It works, it performs magic, and it enables idempotent microservice releases with your CI/CD release model of choice.
Yes, in production.
You have thought through the application load balancers for your Kubernetes (Nginx or any other) ingress controllers, WAF protection, and so on.
Your K8s production microservice comes (or ought to come) with all the best practices in place: resource requests and limits, HorizontalPodAutoscaling, and even a NetworkPolicy to keep the SecOps function happy.
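As a sketch of what such guardrails can look like in manifest form (all names and numbers below are illustrative, not taken from any real workload):

```yaml
# Illustrative resource requests/limits on a container spec:
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
---
# ...and a matching HorizontalPodAutoscaler
# (autoscaling/v2beta2 at the time; autoscaling/v2 in newer clusters):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service          # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

A NetworkPolicy would sit alongside these, whitelisting only the namespaces and labels that genuinely need to talk to the Pod.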
This blog post invites technical and management readers alike to explore the bigger organisational perspective as well as to appreciate the finer details, in an effort to help you make better decisions on the topic of the service mesh. …
We live in turbulent times. We need information, the right information.
Information is Important.
Then Context is important.
And Timing is important.
You see, I run a small Slack group intended for the financially savvy investor “want-to-be” (all one word: “wannabe”).
There, our group discusses news, opportunities, and strategies. It’s a hobby, but not financial advice :D
The challenge is to ensure that we, as a group, stay up to date with the relevant social and economic developments, something we discuss at length in the appropriate channels. We have #Crypto, #Stocks, #Buy-To-Lets, and #Economy channels, to name a few.
There are a number of Slack integrations available (paid), but my goal was to get bespoke Twitter updates into the relevant #updates or #news channel on Slack without much coding or faff, and on a shoestring budget. That is, make it smart and pay only for the resources it needs to run.
So I did. It works out to cost about $0.40/month for all my streaming needs. …
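As a rough sketch of the glue involved (the function names and message format here are my own illustration; the one real dependency is Slack’s incoming webhooks, which accept a JSON body with a `text` field):

```python
import json
import urllib.request


def build_slack_payload(author: str, tweet_text: str, url: str) -> dict:
    """Format a fetched tweet as a Slack incoming-webhook message body."""
    return {"text": f"*@{author}*: {tweet_text}\n{url}"}


def post_to_slack(webhook_url: str, payload: dict) -> int:
    """POST the JSON payload to a Slack incoming webhook; returns the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Hypothetical usage, once you have a webhook URL for your #news channel:
# post_to_slack(webhook_url, build_slack_payload("SomeAccount", "Markets moved", "https://twitter.com/..."))
```

The tweet-fetching side (and running it on a tiny pay-per-use compute resource) is where the $0.40/month comes in; the Slack side really is this small.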
I am really glad I booked off all 3 days of Cloud-native KubeCon (UK time meant a 4–5 pm start).
I have several years of experience working with Kubernetes and I am CKA and CKAD certified. Having delved into the abyss of true native GitOps CI/CD, and recently with The Service Mesh, I thought I was ready for this.
I believed that, with my experience, I was reasonably well versed in the technologies, the vendors, and the open-source community offerings by now. Oh, how wrong I was.
This was so Q1 2020.
It should not be a surprise: looking at the CNCF Cloud Native Landscape, it is vast and still growing. …
This is my first KubeCon, and in the unfortunate year that is 2020, it was all online. What I was after was The Content, and I was very much pleased.
Conference cost $75 + $20 for ServiceMeshCon — my area of interest.
The sheer amount of Kubernetes cloud-native content, particularly in regard to service mesh, was astounding. I had booked several days off work as study leave, thinking I could do a 2-for-1 and complete the Kubernetes Security Certification (which was announced, as expected) in between.
I wish. No way. “Ain’t nobody got time for that”. …
I vividly recall my very own Kubernetes cost-optimization exercise during exciting times working at Loveholidays. We were keen on a lean, mean Kubernetes infrastructure-as-code (DevOps) GitOps operating machine.
Cost observability was not immediately one of our priorities.
(Back in 2018/19) We had only recently migrated from on-prem to Google Cloud Platform, and a cloud-native migration at that; we embraced Kubernetes hands-on. Now that the migration was complete, it was well due time to review the ever-growing Kubernetes infrastructure costs and figure out a good process to keep this quite important detail under control.
First, a cost-reduction exercise was in order.
This is by no means an exhaustive list of cost optimisations, but it is the place to start, to ensure your infrastructure spending goes a long way. …
It’s been roughly 37 minutes, as I write this, since I completed my Terraform Associate Exam and received that most satisfying “Pass” notification.
I hope my guide helps you pass this exam on the first attempt as well.
This was a 100% remotely proctored exam, booked via the HashiCorp website, and the exam itself actually took me around half an hour to complete.
This is a recently launched exam from HashiCorp and, in my humble opinion, a timely and welcome addition for the DevOps and IaC community.
The exam is about an hour long, featuring around 60-odd questions; I had 57, to be exact.
You’re allocated 2 hours for the exam, as advised by the proctor, but my countdown timer showed 60 minutes. Odd, but that is plenty of time, I think. Without discussing the details of the exam, which I’m prohibited from doing, I can say that it’s a well-suited all-rounder, sufficiently thorough to validate the guiding principles of infrastructure-as-code and the DRY principle, and it covers the classic “how-to”/“where-to” that every good Terraform dev-operator ought to know. …
Welcome to my Kubernetes how-to series, where I intend to break down and showcase the how-tos and the gotchas of Kubernetes configuration.
If you’re here, you are aware that Pod-to-Pod communication on [any] Kubernetes cluster is open to all namespaces and all Pods; it’s a free-for-all.
This is irrespective of whether you are using a VPC-native subnet or your Kubernetes cluster comes with its own internal IP subnet.
The main limit on such Pod-to-Pod communication is the target container’s own port configuration.
Otherwise, as it lacks any container-specific whitelisting, you are able to telnet/netcat to other Pods’ ports without any restrictions or limitations. …
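To see this free-for-all for yourself, a quick TCP probe will do. A minimal sketch (the target IP and port in the usage comment are placeholders you would replace with another Pod’s values):

```python
import socket


def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a plain TCP connect; True if something is listening at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# From inside any Pod, against another Pod's cluster IP (hypothetical values):
# is_port_open("10.8.1.23", 8080)
```

Absent a NetworkPolicy, this succeeds across namespaces, which is exactly the behaviour the rest of this series sets out to lock down.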
This is one certification to have if you are considering taking on cloud migration or infrastructure transformation efforts, or if you have just completed one and are now earning this certification to tidy up that “loose” certification paperwork.
(And if it’s the latter, do get in touch — we’re hiring!)
The Google Professional Cloud Developer page for this exam says:
A Professional Cloud Developer builds scalable and highly available applications using Google recommended practices and tools that leverage fully managed services
Good news, and I am pleased to announce: I have passed this exam!
Very pleased and very grateful, having taken all the precautions and protections against COVID-19. …
When it comes to GitOps efforts, among the many caveats and varied snags to watch out for when configuring these is the DNS toil. I had long been procrastinating on getting a running demo of External-DNS (https://github.com/kubernetes-incubator/external-dns), and at last, it is here. And it’s so dang straightforward.
External-DNS undertakes all that management, mapping an FQDN to a Service or an Ingress. Note that DNS management for a Kubernetes Service requires a public IP address, provisioned via the LoadBalancer type. This simplifies DNS management: A records are added and removed automatically as your K8s Services are deployed and removed. You will probably not want `ExternalIP` DNS mapping on every K8s Service, though, as it incurs IP provisioning costs PER such Service. …
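As a sketch, a LoadBalancer Service annotated for External-DNS can look like this (the service name and hostname are illustrative; `external-dns.alpha.kubernetes.io/hostname` is the annotation External-DNS watches for on Services):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                                   # hypothetical service
  annotations:
    # External-DNS reads this and manages the matching A record for you
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer                            # provisions the public IP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

Once the LoadBalancer IP is assigned, External-DNS creates the A record, and deleting the Service removes it again.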