With the rise of cloud, cloud-native and a lot of other buzzwords, we also see the rise of all kinds of Dev(fill here)Ops practices. While I’m an avid advocate of the DevOps mindset, all the buzz around pipelines sounds old-school to me.
What is GitOps?
GitOps evolved from DevOps. The specific state of deployment configuration is version-controlled (using Git, for example). Changes to the configuration can be managed through code review practices and rolled back using version control.
A quick look at Wikipedia gives you the above description, stuffed away somewhere in the DevOps article. As with DevOps, there is a whole religion around which tool is the best to use, and people even claim to be ‘DevOps Engineers’, whatever that may be. In most cases they mean ‘cloud infrastructure’, because DevOps is a mindset, not a function.
It doesn’t help that some tools, like Azure DevOps, are actually named this way, but hey, VSTS is also not really a snappy name, IMHO.
To explain GitOps in my own words and in a single sentence: instead of pushing your code and deploying it, your system pulls the software from the version control system.
A nice one-liner by Weaveworks is: ‘GitOps is Continuous Delivery meets Cloud Native’.
So, it is the same as DevOps pipelines, right? No, it is not. From a distance it can seem the same: with CI/CD, your code, if configured correctly, will also be deployed automatically.
The fundamental difference is that your system (for example, Kubernetes), and not your CI/CD pipeline, is in the lead. Remember, the configuration of the k8s cluster itself also needs a repository.
Let’s look at a typical CI/CD setup:
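Here is a minimal sketch of such a pipeline, purely as illustration; the service connection names, image names, and manifest paths below are hypothetical:

```yaml
# azure-pipelines.yml -- sketch of a push-based CI/CD flow
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  # Build: turn the checked-out code into a container image and push it to a registry
  - task: Docker@2
    inputs:
      containerRegistry: my-acr-connection    # hypothetical service connection
      repository: my-app
      command: buildAndPush
      tags: $(Build.BuildId)

  # Release: roll the new version out onto the AKS cluster
  - task: KubernetesManifest@1
    inputs:
      action: deploy
      kubernetesServiceConnection: my-aks-connection   # hypothetical service connection
      manifests: manifests/deployment.yaml
      containers: myregistry.azurecr.io/my-app:$(Build.BuildId)
```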
As you can see in the example above, the Azure DevOps pipelines are in the lead. The Build step pulls in the code; the Release step creates the artifact, pushes it to a registry, and installs it on AKS. But this does not include the state of your cluster.
Let’s say your DevOps pipeline trigger failed, or someone made manual changes to the AKS applications or configuration. AKS has no idea where the code is located, so it cannot check whether there are any changes. This is where GitOps can be a solution. Let’s look at the following example:
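Below is a minimal sketch using the Flux v2 custom resources; the repository URL, paths, and names are placeholders:

```yaml
# A GitRepository tells Flux where the desired state lives
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app-config              # placeholder name
  namespace: flux-system
spec:
  interval: 1m                     # check the repository every minute
  url: https://github.com/example/my-app-config   # placeholder URL
  ref:
    branch: main
---
# A Kustomization applies that state to the cluster and keeps reconciling it
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m                    # re-apply regularly, correcting manual 'drift'
  sourceRef:
    kind: GitRepository
    name: my-app-config
  path: ./deploy
  prune: true                      # remove cluster resources deleted from Git
```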
In the example above you can see that a tool (in this case Flux) is used to pull the desired state into your Kubernetes cluster. This pull-based approach makes sure that you are always up to date, and it can also detect changes: if anything or anyone modifies the configuration manually, this ‘drift’ will be resolved by Flux pulling the desired state again!
Yes, this complicates your setup, but you always know that your clusters are exactly as you described them in the desired state. Especially with Kubernetes, deployment and configuration can be really complex, so these tools make sure you stay in control of your systems.
Examples of GitOps tools
So, do I need Kubernetes? No. In theory, any system can use the GitOps approach, but for Kubernetes a lot of tools and so-called operators are readily available.
In the previous paragraph, Flux is drawn in the picture. There are many more tools out there, but we see convergence on two major ones: Flux and Argo. The resources section at the end of this blog lists more tools and material if you want to dive into the deep end of GitOps.
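For comparison, the Argo CD equivalent of the Flux setup sketched above is an Application resource; again, the repository URL and names are placeholders:

```yaml
# An Argo CD Application: watch a Git repository and keep the cluster in sync with it
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                     # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config   # placeholder URL
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                  # remove resources deleted from Git
      selfHeal: true               # revert manual changes (drift)
```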
“The GitOps Working Group is a WG under the CNCF App Delivery SIG.
The focus of the GitOps WG is to clearly define a vendor-neutral, principle-led meaning of GitOps. This will establish a foundation for interoperability between tools, conformance, and certification. Lasting programs, documents, and code are planned to live within the OpenGitOps project.”
GitOps has been around for several years and continues to evolve. The CNCF has OpenGitOps as a Sandbox project and has embraced Flux and Argo as Incubating projects. The shift in mindset can be difficult, as some people are still trying to embrace DevOps as a principle.
Watch this space for more information and an upcoming eBook on our take and practices surrounding GitOps!
To sum it up, the idea is basically to decentralize everything and create an unlimited computer, a.k.a. The Internet Computer. According to the ICA, any software can run decentralized, removing the need for big cloud vendors. Obviously you still need servers to host your Internet Computer nodes on, so yes, there is still a need for datacenters, IMHO.
Some big tech companies are listed as fellows, and the showcases on the main contributor’s site look like nice players: https://dfinity.org/showcase. The main contributor is DFINITY, and as with everything blockchain, I’m always highly skeptical about the actual goals people want to achieve.
A closer look
OK, so what does it mean? How does it work? ‘The Internet Computer is a public blockchain that hosts smart contracts.’ Well, smart contracts on the blockchain aren’t new; they’ve been around since 2015 and can be seen as small programs that run when certain conditions are met. That sounds a lot like a dApp, a decentralized application.
A decentralized application is an application that can operate autonomously, typically through the use of smart contracts, and that runs on a decentralized computing system such as a blockchain.
Investigating this also led me to similar technologies, like Solana and Polkadot. The difference is that The Internet Computer runs everything on the blockchain, not just the contracts. This means you do not need separate hosting for the cloud workloads; you actually run everything on The Internet Computer.
Will The Internet Computer actually take over web 3.0? Will it change everything we do in the public cloud and render AWS, GCP, and Microsoft Azure useless? Well, I’ve heard the same talk about Bitcoin and the like, but I truly find it intriguing from a technical perspective. I see use cases arising from this, and if the ICA is “truly independent”, it could be something to watch in the upcoming years.
It is really easy to blow your entire budget when you do not know the impact of the choices you make in the cloud. And while there are many tools available to analyze your bill or show where your money went, would it not be smarter to shift everything left?
34 per cent of large enterprises set a budget of at least $10m a year, while 62 per cent of small businesses went for under $100k. Of those spending between $2m and $10m on the cloud, almost half overspent (compared to 27 per cent of those budgeting up to $100k).
In a race for speed of delivery, putting restrictions on cloud environments can be counterproductive. On the other hand, money can fly out the window if you do not keep everything in check. So what is the solution?
If I got paid a euro every time someone said ‘Let’s spin up a VM for that!’ or ‘We have this VM running anyway’, I would be a rich man.
Although I do advocate reuse, stuffing many services onto a machine ‘because it runs there anyway’ will cause a huge problem if there is a shift in architecture or platform. In the end, all services need to be carved out, and that can be a real pain in the behind. It also does not really scale, for obvious reasons.
I’ve written in a previous blog that for writing maintainable code, you really should automate your tests. Currently we see Terraform really taking off as the standard to provision and deploy almost everything in clouds like Azure and AWS. In this short blog post I will show how you can start testing your Terraform code using the Open Policy Agent.
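The basic flow is to export a Terraform plan as JSON and evaluate it against your Rego policies. Below is a minimal sketch using Conftest, a test runner built on top of OPA; it assumes terraform and conftest are installed on the agent, and the policy/ directory is a placeholder for wherever you keep your rules:

```yaml
# Sketch of a pipeline stage that validates a Terraform plan with OPA policies
steps:
  - script: |
      terraform init
      terraform plan -out=tfplan
      terraform show -json tfplan > tfplan.json    # machine-readable plan
    displayName: Create Terraform plan as JSON

  - script: |
      # Conftest evaluates the JSON plan against the Rego rules in ./policy
      conftest test tfplan.json --policy policy/
    displayName: Test the plan with OPA policies
```

A deny rule in policy/ could, for example, reject any plan that creates resources without mandatory tags, failing the pipeline before anything ever reaches the cloud.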