Why you should stop spinning up VMs

Originally posted on SogetiLabs: https://labs.sogeti.com/why-you-should-stop-spinning-up-vms

If I got a euro for each time someone said ‘Let’s spin up a VM for that!’ or ‘We have this VM running anyway’, I would be a rich man.

Although I do advocate reuse, stuffing many services onto a machine ‘because it runs there anyway’ will cause a huge problem if there is a shift in architecture or platform. In the end, all services need to be carved out, and that can be a real pain in the behind. It also does not really scale, for obvious reasons.

Yes let’s put everything in a pod/container!

A big cluster of VMs is still just infrastructure chewing away your money in the cloud. Do not forget that in many cases, a k8s cluster, a Service Fabric cluster or whatever cloud-native cluster you like is, in the end, mostly a bunch of VMs running 24/7 for a lot of money.

We underestimate the power of simplicity

Let’s take an example. If you want to run a simple process, for example ingesting some data and putting it in a store, you could go full ETL/ELT. Alternatively, you can leverage simple services like Azure Functions or LogicApps. With connectors to almost all services, ingesting data with these services is really easy and simple.
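The ingest logic itself can stay tiny. As a sketch (the record fields and the dict-backed store are made up for illustration, not from the original post), this is the kind of transformation you would host in an Azure Function or a LogicApp step:

```python
import json

def ingest(raw_records, store):
    """Normalize incoming JSON records and write them to a key-value store.

    `raw_records` is a list of JSON strings; `store` is any dict-like sink
    (in Azure this could be backed by Blob or Table storage).
    """
    for raw in raw_records:
        record = json.loads(raw)
        # Normalize: lowercase keys, strip surrounding whitespace from values.
        clean = {k.lower(): v.strip() if isinstance(v, str) else v
                 for k, v in record.items()}
        store[clean["id"]] = clean
    return store

# Example: two incoming messages land as two normalized entries.
sink = {}
ingest(['{"Id": "a1", "Name": " Alice "}', '{"Id": "b2", "Name": "Bob"}'], sink)
```

The point is that this handful of lines, plus a trigger binding, is the whole service; no VM or cluster required.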

And it’s cheap too!

Azure Functions (run in Azure) are really cheap, as the first million(!) executions per month are free anyway.

What about portability?

If you start out with Azure Functions on the Azure platform and later on need to run on a cluster like Kubernetes, you’re in luck. Azure Functions run basically anywhere, so hosting them in a container or pod should be a piece of cake. Development is also easy, exactly because of this portability.
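To give an idea of how little is involved: a containerized Python function app needs not much more than the official base image. This is roughly the Dockerfile that `func init --docker` generates (the Python version tag is an assumption):

```dockerfile
FROM mcr.microsoft.com/azure-functions/python:4-python3.11

# The Functions host expects the app under /home/site/wwwroot
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
```

The same image runs on Azure, on Kubernetes, or on your laptop.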

How to start?

Microsoft has done really great work in the training area. For Azure Functions, you can simply go to: Create serverless logic with Azure Functions – Learn | Microsoft Docs. This module will get you started easily.

Want to kickstart your development? Reach out and we’ll help!

Everything is Code. Why and how you should test your Terraform

Originally posted on SogetiLabs: https://labs.sogeti.com/everything-is-code-why-and-how-you-should-test-your-terraform/

I’ve written in a previous blog that for writing maintainable code, you really should automate your tests. Currently we see Terraform really taking off as the standard to provision and deploy almost everything in clouds like Azure and AWS. In this short blogpost I will show how you can start testing your Terraform code using the Open Policy Agent.


With Terraform, engineers write declarative statements that, when applied, result in, for example, a network or a database:

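The snippet in the original post was along these lines: a minimal Terraform configuration for a Google Cloud network (the exact names and values here are illustrative):

```hcl
resource "google_compute_network" "vpc_network" {
  name = "vpc-network"
}

resource "google_compute_subnetwork" "subnet" {
  name          = "my-subnet"
  network       = google_compute_network.vpc_network.id
  ip_cidr_range = "10.0.0.0/24"
  region        = "europe-west4"
}
```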

This obviously is great because you can automate and repeat your steps minimizing the manual labour of setting up services. Next to that, you can reuse components so all provisioned services will adhere to the same standard.

What about standards and compliancy?

In the previous example, a network will be created with some IP range, in a specific Google Cloud region.

If you use any cloud, you should have some policies in place. It is not a free-for-all, do-whatever-you-want scenario. It can be very important where your data resides, and in that case you don’t want to enable teams to deploy in non-compliant regions.

With code however, especially in the sample shown above, it is not directly clear what is allowed. Some examples of questions DevOps teams may have are:

  • Am I allowed to create a public IP address on a machine or database?
  • Am I allowed to deploy services in the US or only in Europe?

Worst case, the team finds out when they deploy their services to their cloud environment. In the case of Azure, policies can block a certain location, blocking the entire deployment of their stack.

This means they have to go back to trace why something is wrong, adjust the code, go through all steps again and hope they did not miss anything.

Shift left

One solution to this problem is to test as early as possible. If a team gets feedback on their code early, it saves a lot of time in the later stages. It also gives the team the confidence that they are doing things right!

Open Policy Agent (OPA)

OPA states the following on their website:

Stop using a different policy language, policy model, and policy API for every product and service you use. Use OPA for a unified toolset and framework for policy across the cloud native stack.

Whether for one service or for all your services, use OPA to decouple policy from the service’s code so you can release, analyze, and review policies (which security and compliance teams love) without sacrificing availability or performance.

Source: Open Policy Agent website

Using OPA for testing your Terraform plans

You can use Open Policy Agent for a variety of use cases. Testing Terraform plans is only one of them, but a really interesting one for DevOps teams.

The language for writing OPA policies is called Rego. It is easy to understand and should be relatively simple to learn. To get a grasp of it, take a look at the explanation of the language on the OPA site: https://www.openpolicyagent.org/docs/latest/policy-language/.

A very complete guide is located at https://www.openpolicyagent.org/docs/latest/terraform/ and helps you understand how to write a test.
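To give a feel for what such a policy checks, here is a small Python sketch that mimics a Rego deny rule over the JSON output of `terraform show -json` (the allowed-region list and resource addresses are assumptions for illustration; in practice you would write this rule in Rego itself):

```python
import json

ALLOWED_LOCATIONS = {"westeurope", "northeurope"}  # example policy: EU only

def deny_messages(plan):
    """Return one deny message per resource planned outside the allowed
    regions, mirroring what an OPA deny rule reports for a Terraform plan."""
    messages = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        location = after.get("location")
        if location and location not in ALLOWED_LOCATIONS:
            messages.append(f"{rc['address']}: location '{location}' is not allowed")
    return messages

# A tiny stand-in for `terraform show -json plan.out` output:
plan = json.loads("""
{"resource_changes": [
  {"address": "azurerm_resource_group.main",
   "change": {"after": {"location": "westeurope"}}},
  {"address": "azurerm_storage_account.data",
   "change": {"after": {"location": "eastus"}}}
]}
""")
violations = deny_messages(plan)
```

An empty result means the plan passes the policy; any message is an early, pre-deployment failure.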

The great thing about the tool is that you can use it in your pipelines, running the eval command in, for example, Azure DevOps. There is even an installer task for Azure Pipelines.

Getting started

If you are using Terraform, just download OPA, and go to the getting started page. Use the given example to test against your own Terraform plan outputs.

Just want to play around with the rego language? The site https://play.openpolicyagent.org/ contains several examples, and makes exploring easy, as it is an online playground without the need for installing anything!

Is low code replacing traditional development?

Originally posted on SogetiLabs: https://labs.sogeti.com/is-low-code-replacing-traditional-development/

Spoiler alert: no.

Low code vs Traditional Dev

My colleague, friend and SogetiLabs Fellow Daniel Laskewitz and I frequently talk about this topic. His field of expertise as a Microsoft MVP covers the Microsoft Power Platform, including low code systems like Power Automate (formerly Flow).

All too often, people see a division between low code and traditional development using languages like C#, Java, TypeScript and Go.
In the real world, however, these systems work together perfectly.

Most of the time, you cannot solve a problem with only low code. Think about scenarios where you need to link to legacy systems or make complex API calls. In those cases, low code without any enhancement cannot natively connect to those systems.

Behold custom connectors

In the Microsoft ecosystem, custom connectors allow you to bridge this gap. This way, the low code system can interact with any system you write a connector for. This may be common knowledge, but the fact is that most developers do not see how big this really is.

This means you can link any Power App, Microsoft Flow, or LogicApp to your custom connector and reuse it within your entire organisation.
You could even publish these publicly if you have a service you want to expose. So if you are an ISV, this can help you get more traction on your product.

Bridging the gap

In the end, it all comes down to developers of any system and language understanding the capabilities of the platforms they and their companies are using. For low code developers, this sometimes means calling in the help of traditional developers. More importantly, traditional developers should learn that these low code systems can simplify (and thus speed up!) development, thanks to the ready-to-roll systems and connectors available to you.

With over 325 connectors available, that should really speed up making connections!

Get started!

Want to explore custom connectors? Look at these resources, or feel free to contact me or Daniel; we strongly believe bridging this gap between low code and traditional dev is key for success in the future of development!

Infrastructure-as-Code with Pulumi

Using a public cloud like Microsoft Azure or Amazon AWS? Chances are you’ve been using templates like ARM or CloudFormation.

HashiCorp has done a terrific job with Terraform: the ability to template any API, and to deploy and manage stacks in Azure, AWS and many, many more.

I’ve been following Pulumi for a while, and their approach is a bit different from the previously mentioned tools.

Continue reading “Infrastructure-as-Code with Pulumi”

Software Maintainability in the Cloud Era

Originally posted on SogetiLabs: https://labs.sogeti.com/software-maintainability-in-the-cloud-era/

The shift to cloud, and with that to PaaS services or low code alternatives like LogicApps, pushes the actual code developers see and use to the background.

There is an ISO standard on software quality (ISO/IEC 25010), and the maintenance best practices are well written and explained in the book Building Maintainable Software. Within low code systems, applying these guidelines can be less obvious, and it can be a difficult task to automate and test the quality of your code with tools like SonarQube.

Should we even worry about the underlying code? Absolutely. The principles still apply, and turning your low code systems into spaghetti can cause major issues when maintaining them or adding new features.

Let’s focus on three points of the maintainability guidelines:

  1. Write code once
  2. Couple architecture components loosely
  3. Automate development pipeline and tests

These are by no means the most important items, but for this example they are an easy entry into the low code space.

1. Write code once

Just like in any other software audit, you should still avoid writing duplicate code. In a platform like LogicApps it is easy to repeat a custom call, for example to a custom HTTP API.

In traditional languages like C#, you have many options to reuse your code. You could create a library, or you can create a package and make it available via NuGet. Within low code systems, these same packaging mechanisms do not always exist.

Taking the example of LogicApps, the solution could be to create Custom Connectors. These will wrap your custom API calls into a reusable component you can share within your organisation, or even outside.
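In code, “write once” is simply extracting the repeated call into a single definition that every workflow reuses; a custom connector does the same for low code. A Python sketch (the API endpoint and payload are made up for illustration):

```python
def call_orders_api(fetch, customer_id):
    """Single, reusable wrapper around a custom HTTP API call.

    `fetch` is any callable performing the actual request (injected here so
    the wrapper stays testable). Every workflow reuses this one definition
    instead of repeating the URL, status handling and parsing inline.
    """
    response = fetch(f"https://api.example.invalid/orders?customer={customer_id}")
    if response["status"] != 200:
        raise RuntimeError(f"orders API failed with status {response['status']}")
    return response["body"]

# Two 'workflows' reusing the same wrapper instead of duplicating the call:
fake_fetch = lambda url: {"status": 200, "body": [{"order": 1}]}
billing_orders = call_orders_api(fake_fetch, "c-42")
shipping_orders = call_orders_api(fake_fetch, "c-42")
```

If the API changes, there is exactly one place to update, which is the whole point of the guideline.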

2. Couple architecture components loosely

If your components are tightly coupled, it can be troublesome to replace or refactor them. The impact will be on each and every other component that is highly coupled with your part.

Again taking LogicApps as an example, let’s say our LogicApp calls another component directly using HTTP. The Azure Portal gives you this out of the box, letting you call Azure Functions directly from your LogicApp.

This goes against the principle of loose coupling. The reason is that your call is bound directly to that function, so changing the interface or location of the function impacts the LogicApp directly. In this case, it would simply break and stop working.

To solve this problem, a simple solution is to decouple the LogicApp and the Azure Function using a queueing mechanism. This way, the LogicApp puts the message for the Azure Function on a queue, and the Azure Function listens on that queue. Now, if the developers of the Azure Function change its location or even move it to another platform, there is no need to change the LogicApp.

Obviously this would require you to make an agreement on the contents of the messages on the queue.
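That agreement is just a small message contract that both sides honour. A Python sketch (the field names are illustrative, not from the original post):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class IngestMessage:
    """Contract for messages the LogicApp puts on the queue and the
    Azure Function reads off it; both sides depend only on this shape."""
    correlation_id: str
    blob_url: str
    requested_by: str

def to_queue_payload(msg: IngestMessage) -> str:
    """Producer side (LogicApp): serialize the agreed shape to JSON."""
    return json.dumps(asdict(msg))

def from_queue_payload(payload: str) -> IngestMessage:
    """Consumer side (Azure Function): parse the JSON back into the contract."""
    return IngestMessage(**json.loads(payload))

payload = to_queue_payload(
    IngestMessage("abc-123", "https://example.invalid/upload.csv", "sander"))
roundtrip = from_queue_payload(payload)
```

As long as both sides stick to this contract, either one can be rewritten, moved, or replaced without touching the other.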

3. Automate development pipeline and tests

Automating your CICD pipelines allows you to more easily build, test and deploy your code. In case of a language like C# or Java, you can easily run tests, build your code, and create packages or deployments. A tool like Azure DevOps can combine these steps and lets you create a wealth of quality gates, checks and processes to guide your team.
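In Azure DevOps, that combination is a short YAML pipeline. A minimal sketch for a .NET project (the project layout and stage names are assumptions):

```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: dotnet build --configuration Release
    displayName: Build
  - script: dotnet test --configuration Release
    displayName: Run automated tests
  - script: dotnet publish --configuration Release -o $(Build.ArtifactStagingDirectory)
    displayName: Create deployment package
  - publish: $(Build.ArtifactStagingDirectory)
    artifact: app
```

Quality gates, approvals, and extra checks can then be layered on top of this same pipeline.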

When using low code platforms, it can be cumbersome to get the code into your version control system. Nevertheless, many of those platforms do give you the tooling. LogicApps, for example, has template creators, and systems like OutSystems have their own CICD ecosystem.

In the end, automation allows you to more easily add steps to your CICD process and to deploy more frequently without any hassle. Adding automated testing will absolutely benefit the overall quality of your product.


Treat your low code just like you would any other codebase. Almost all maintainability guidelines can be mapped to your product. Some may require a bit more investment, but in the end I truly believe it will help you build a maintainable, high-quality (low) code base.

If you want to expand on the process part of high-quality software, please also take a look at the follow-up book in this series: Building Software Teams.

Scanning Blob storage for viruses with Azure Functions and Docker

While I was working on a side project, someone asked me how to scan for viruses in a cloud-native, serverless landscape. That made me think about a project I did a couple of years back. During that project we used ClamAV, installed on a VM, to scan files that end-users uploaded within an ASP.NET application; everything was hosted on datacenter VMs somewhere in the Netherlands.

ClamAV® is an open source antivirus engine for detecting trojans, viruses, malware & other malicious threats.

In this blogpost I will show a proof of concept using a Docker image and Azure Functions to create a simple automated virus scanner for Azure Blob storage.

Continue reading “Scanning Blob storage for viruses with Azure Functions and Docker”