GitOps Primer

Originally posted on SogetiLabs:
https://labs.sogeti.com/gitops-primer/

With the rise of cloud, cloud-native and a lot of other buzzwords, we also see the rise of all kinds of Dev(fill here)Ops practices. While I’m an avid advocate of the DevOps mindset, all the buzz around pipelines sounds old-school to me.

What is GitOps?

GitOps evolved from DevOps. The specific state of deployment configuration is version-controlled (using Git for example). Changes to configuration can be managed using code review practices, and can be rolled back using version-controlling.

https://en.wikipedia.org/wiki/DevOps#GitOps

A quick look at Wikipedia brings up the above description, stuffed away somewhere in the DevOps article. As with DevOps, there is a whole religion around which tool is the best to use, and people even claim to be ‘DevOps Engineers’, whatever that may be. In most cases they mean ‘cloud infrastructure’, because DevOps is a mindset, not a function.

It doesn’t help that some tools, like Azure DevOps, are actually named this way, but hey, VSTS was not really a snappy name either, IMHO.

To explain GitOps in my own words and in a single sentence: instead of pushing your code and deploying it, your system pulls the desired state from the version control system.

A nice one-liner by Weaveworks: “GitOps is Continuous Delivery meets Cloud Native”.

https://www.weave.works/technologies/gitops/

GitOps versus DevOps Pipelines

So, it is the same as DevOps pipelines, right? No, it is not, although from a distance it can seem to be. In both cases your code, if configured correctly, will be deployed automatically through CI/CD.

The fundamental difference is that your system (for example Kubernetes), and not your CI/CD pipeline, is in the lead. Remember, the settings of the Kubernetes cluster itself also need a repository.

Let’s look at a typical CI/CD setup:

DevOps workflow with containers
Source: https://cloudblogs.microsoft.com/opensource/2018/11/27/tutorial-azure-devops-setup-cicd-pipeline-kubernetes-docker-helm/

As you can see in the example above, the Azure DevOps pipelines are in the lead: the build step pulls in the code, and the release step creates the artifact, pushes it to a registry, and installs it on AKS. But none of this tracks the state of your cluster.

Let’s say your DevOps pipeline trigger failed, or someone manually changed the AKS applications or configuration. AKS has no idea where the code is located, so it cannot check whether anything has changed. This is where GitOps can be a solution. Let’s look at the following example:

GitOps CI/CD architecture
Source: https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/conceptual-gitops-ci-cd

In the example above you can see that an operator (in this case Flux) is used to pull the desired state into your Kubernetes cluster. This pulling makes sure that you are always up to date, and it can also detect changes: if anything or anyone alters the configuration manually, this ‘drift’ will be resolved by Flux re-applying the desired state.
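To make the pull model concrete, here is a minimal sketch of such a reconciliation loop in Python. This is not how Flux works internally, just an illustration of the principle; the repository path, interval, and manifest layout are assumptions, and it expects git and kubectl on the PATH.

```python
# Minimal sketch of a pull-based reconciliation loop (illustrative only;
# Flux does this far more robustly). Paths and interval are assumptions.
import subprocess
import time

REPO_DIR = "/opt/gitops/desired-state"  # hypothetical local clone of the config repo
INTERVAL = 60                           # seconds between reconcile runs

def reconcile() -> None:
    # 1. Pull the latest desired state from version control.
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)

    # 2. Compare the desired state with what actually runs in the cluster.
    #    `kubectl diff` exits with 1 when differences ('drift') are found.
    diff = subprocess.run(
        ["kubectl", "diff", "-f", f"{REPO_DIR}/manifests"],
        capture_output=True, text=True,
    )

    # 3. Re-apply the desired state whenever drift is detected.
    if diff.returncode == 1:
        print("Drift detected, re-applying desired state:\n" + diff.stdout)
        subprocess.run(
            ["kubectl", "apply", "-f", f"{REPO_DIR}/manifests"], check=True
        )

if __name__ == "__main__":
    while True:
        reconcile()
        time.sleep(INTERVAL)
```

Note that the cluster side initiates everything: no pipeline ever pushes into the cluster, which is exactly the inversion GitOps is about.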

Yes, this complicates your setup, but you always know that your clusters are exactly as you described them in the desired state. Especially with Kubernetes, deployment and configuration can be really complex, so these tools make sure you stay in control of your systems.

Examples of GitOps tools

So I need Kubernetes? No. In theory any system can use the GitOps setup, but for Kubernetes a lot of tools and so-called operators are readily available.

In the previous paragraph, Flux is drawn in the picture. There are many more tools out there, but we see convergence on two major ones: Flux and Argo. The resources section at the end of this blog shows more tools and resources if you want to dive into the deep end of GitOps.

There is a nice write-up (sponsored by Red Hat) on deciding between Flux and Argo; depending on your situation it can fall either way: GitOps on Kubernetes: Deciding Between Argo CD and Flux – The New Stack.

OpenGitOps

The emergence of GitOps led to the OpenGitOps project.

OpenGitOps is a set of open-source standards, best practices, and community-focused education to help organizations adopt a structured, standardized approach to implementing GitOps.

https://opengitops.dev/

The principles of GitOps are fourfold:

  1. Declarative
    A system managed by GitOps must have its desired state expressed declaratively.
  2. Versioned and Immutable
    Desired state is stored in a way that enforces immutability, versioning and retains a complete version history.
  3. Pulled Automatically
    Software agents automatically pull the desired state declarations from the source.
  4. Continuously Reconciled
    Software agents continuously observe actual system state and attempt to apply the desired state.

“The GitOps Working Group is a WG under the CNCF App Delivery SIG.

The focus of the GitOps WG is to clearly define a vendor-neutral, principle-led meaning of GitOps. This will establish a foundation for interoperability between tools, conformance, and certification. Lasting programs, documents, and code are planned to live within the OpenGitOps project.”

In short, the main goal of the OpenGitOps CNCF Sandbox project is to give GitOps that vendor-neutral, principle-led meaning, establishing a foundation for interoperability between tools, conformance, and certification.

Concluding

GitOps has been around for several years and continues to evolve. The CNCF has accepted OpenGitOps as a Sandbox project and embraced Flux and Argo as Incubating projects. The shift in mindset can be difficult, as some people are still trying to embrace DevOps as a principle.

Watch this space for more information and an upcoming eBook on our take and practices surrounding GitOps!

Resources and further reading

OpenGitOps – https://opengitops.dev/
GitOps eBook – https://www.gitops.tech/
Weaveworks Guide to GitOps – https://www.weave.works/technologies/gitops/
Harness – https://harness.io/blog/devops/what-is-gitops/
Flux – https://fluxcd.io/
Argo CD – https://argo-cd.readthedocs.io/en/stable/
Jenkins X – https://jenkins-x.io/
CNCF – https://www.cncf.io/

Cloud Economics and Terraform; How to start

Originally posted on SogetiLabs:
https://labs.sogeti.com/cloud-economics-and-terraform-how-to-start/

It is really easy to blow your entire budget when you do not know the impact of the choices you make in the cloud. And while there are many tools available to run analysis on your bill, or to show where your money went, would it not be smarter to shift everything left?

34 per cent of large enterprises set a budget of at least $10m a year while 62 per cent of small businesses went for under $100k. Of those spending between $2m and $10m on the cloud, almost half overspent (compared to 27 per cent of those budgeting up to $100k).

https://www.hashicorp.com/blog/cloud-budgets-busted-almost-40-overspent-last-year

In a race for speed of delivery, putting restrictions on cloud environments can be counterproductive. On the other hand, money can fly out of the window if you do not keep everything in check. So what is the solution?

Continue reading “Cloud Economics and Terraform; How to start”

Infrastructure-as-Code with Pulumi

Using a public cloud like Microsoft Azure or Amazon AWS? Chances are you’ve been using templates like ARM or CloudFormation.

HashiCorp has done a terrific job with Terraform, giving it the ability to template any API, and to deploy and manage stacks in Azure, AWS and many, many more.

I’ve been following Pulumi for a while, and their approach is a bit different from that of the previously mentioned tools.
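As a taste of that difference, here is a minimal sketch in Python (one of the languages Pulumi supports): infrastructure declared in a general-purpose language instead of a template format. The resource names are illustrative, and it assumes the pulumi and pulumi-azure-native packages plus a configured Azure stack.

```python
# Minimal Pulumi sketch: infrastructure as ordinary Python code instead of
# an ARM/CloudFormation template. Resource names are illustrative.
import pulumi
from pulumi_azure_native import resources, storage

# Declare a resource group, as an ARM template would, but in plain Python.
group = resources.ResourceGroup("demo-rg")

# Declare a storage account in that group; ordinary loops, functions and
# conditionals are available where template formats need workarounds.
account = storage.StorageAccount(
    "demosa",
    resource_group_name=group.name,
    sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
    kind=storage.Kind.STORAGE_V2,
)

# Expose the generated account name as a stack output.
pulumi.export("storage_account_name", account.name)
```

Running `pulumi up` against such a program computes the diff with the current stack and applies it, much like a template deployment would.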

Continue reading “Infrastructure-as-Code with Pulumi”

Software Maintainability in the Cloud Era

Originally posted on SogetiLabs: https://labs.sogeti.com/software-maintainability-in-the-cloud-era/

The shift to cloud, and with that to PaaS services or low-code alternatives like LogicApps, pushes the actual code developers see and use to the background.

There is an ISO standard on software quality (ISO 25010), and maintainability best practices are well written up and explained in the book Building Maintainable Software. Within low-code systems, applying these guidelines can be less obvious, and automating and testing the quality of your code with tools like SonarQube can be a difficult task.

Should we even worry about the underlying code? Absolutely. The principles still apply, and creating spaghetti out of your low-code systems can cause major issues when maintaining them or adding new features.

Let’s focus on three points of the maintainability guidelines:

  1. Write code once
  2. Couple architecture components loosely
  3. Automate development pipeline and tests

These are by no means the most important items, but for this example they are an easy entry into the low-code space.

1. Write code once

Just like in any other software audit, you should still avoid writing duplicate code. In a platform like LogicApps it can be easy to repeat a custom call to something like a custom HTTP API.

In traditional languages like C#, you have many options to reuse your code. You could create a library, or you could create a package and make it available via NuGet. Within low-code systems, these same packaging mechanisms do not always exist.

Taking the example of LogicApps, the solution could be to create Custom Connectors. These wrap your custom API calls into a reusable component you can share within your organisation, or even outside of it.
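The same idea in traditional code looks something like the minimal Python sketch below; the endpoint and payload shape are illustrative assumptions, and the shared function plays the role the Custom Connector plays in LogicApps.

```python
# Minimal sketch of "write code once": the duplicated HTTP call is wrapped
# in a single shared function, analogous to a LogicApps Custom Connector.
# The endpoint and payload shape are illustrative assumptions.
import json
import urllib.request

API_BASE = "https://api.example.com"  # hypothetical custom HTTP API

def call_custom_api(path: str, payload: dict) -> dict:
    """The one shared entry point for the custom API.

    Every caller reuses this function instead of repeating the URL,
    headers and serialization logic inline.
    """
    request = urllib.request.Request(
        f"{API_BASE}/{path.lstrip('/')}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Callers share one implementation instead of duplicating the call:
# call_custom_api("orders", {"orderId": 42})
```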

2. Couple architecture components loosely

If your components are tightly coupled, it can be troublesome to replace or refactor them: the impact lands on each and every other component that is highly coupled to yours.

Again taking LogicApps as the example, let’s say our LogicApp calls another component directly using HTTP. The Azure Portal gives you this out of the box, letting you call Azure Functions directly from your LogicApp.

This goes against the principle of loose coupling. The reason is that your call is bound directly to that function, so changing the interface or location of your function impacts the LogicApp directly. In this case, it would simply break and stop working.

To solve this problem, a simple solution is to decouple the LogicApp and the Azure Function using a queueing mechanism. This way, the LogicApp puts the message for the Azure Function on a queue, and the Azure Function listens to that queue. Now, if the developer of the Azure Function changes its location or even moves to another platform, there is no need to change the LogicApp.

Obviously this would require you to make an agreement on the contents of the messages on the queue.
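Here is a minimal sketch of that decoupling in Python, using the azure-storage-queue SDK. In the article’s scenario the LogicApp is the producer and the Azure Function the consumer; both sides are written as plain Python here so the shared message contract is visible. The connection string, queue name, and message fields are assumptions.

```python
# Minimal sketch of decoupling via a queue with the azure-storage-queue SDK.
# Producer and consumer only share the message contract, not each other's
# location or platform. Connection string and queue name are assumptions.
import json
from azure.storage.queue import QueueClient

CONN_STR = "<storage-account-connection-string>"  # hypothetical
QUEUE_NAME = "orders"                             # hypothetical

def send_order(order_id: int) -> None:
    # Producer side (the LogicApp's role): drop a message on the queue.
    client = QueueClient.from_connection_string(CONN_STR, QUEUE_NAME)
    client.send_message(json.dumps({"orderId": order_id}))

def process_orders() -> None:
    # Consumer side (the Azure Function's role): it can move elsewhere or
    # be rewritten on another platform without the producer noticing.
    client = QueueClient.from_connection_string(CONN_STR, QUEUE_NAME)
    for message in client.receive_messages():
        order = json.loads(message.content)
        print(f"processing order {order['orderId']}")
        client.delete_message(message)
```

Only the JSON shape of the message couples the two sides, which is exactly the agreement mentioned above.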

3. Automate development pipeline and tests

Automating your CI/CD pipelines allows you to build, test and deploy your code more easily. In the case of a language like C# or Java, you can easily run tests, build your code, and create packages or deployments. A tool like Azure DevOps can combine these steps and lets you create a wealth of quality gates, checks and processes to guide your team.

When using low-code platforms, it can be cumbersome to get the code into your version control system. Nevertheless, many of those platforms do give you the tooling. LogicApps, for example, has template creators, and systems like OutSystems have their own CI/CD ecosystem.

In the end, the automation allows you to add steps to your CI/CD process more easily and to deploy more frequently without any hassle. The addition of automated testing will absolutely be beneficial to the overall quality of your product.
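To make the testing part tangible, here is a minimal pytest sketch that could guard the queue message contract from the earlier example in any CI/CD pipeline; the required field is an illustrative assumption.

```python
# Minimal pytest sketch guarding a message contract in CI/CD.
# The required field is an illustrative assumption.
import json
import pytest

REQUIRED_FIELDS = {"orderId"}

def validate_order_message(raw: str) -> dict:
    """Reject messages that do not match the agreed contract."""
    order = json.loads(raw)
    missing = REQUIRED_FIELDS - order.keys()
    if missing:
        raise ValueError(f"message is missing fields: {missing}")
    return order

def test_valid_message_passes():
    assert validate_order_message('{"orderId": 42}') == {"orderId": 42}

def test_message_violating_contract_fails():
    with pytest.raises(ValueError):
        validate_order_message('{"customer": "acme"}')
```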

Concluding

Treat your low code just like you would any other codebase. Almost all maintainability guidelines can be mapped to your product. Some may require a bit more investment, but in the end I truly believe it will help you build a maintainable and high-quality (low) code base.

If you want to expand on the process side of high-quality software, please also take a look at the follow-up book in this series: Building Software Teams.