CloudBees CI Support for Windows Containers on the Google Kubernetes Engine (GKE)

Support for Windows containers has been a stable part of Kubernetes since v1.14, meaning that you can have a node pool consisting of Windows servers and create pods that run in that pool. (You can also have multi-platform clusters that contain both Windows and Linux node pools.) We’ll take a look at how to use CloudBees CI (formerly known as CloudBees Core) to define Jenkins Pipeline tasks that should be run on Windows nodes to automate multi-platform development efforts. A video summary of this blog post can be found below.

NOTE: This is a blog post, not official technical documentation. As of today (1 May 2020), following these instructions will work with CloudBees CI version 2.222.1.1. As GKE’s Windows node support graduates out of the rapid release channel, you may need to adjust what you see here.

Setup 

Creating a Cluster 

We’ll start from scratch by creating a cluster from the command line. Windows node support requires us to use gcloud beta and the rapid release channel. Here’s the command:

 $ gcloud beta container clusters create [CLUSTER_NAME] \
     --enable-ip-alias \
     --num-nodes=2 \
     --release-channel=rapid \
     --machine-type=n1-standard-2

It’s crucial to specify the machine type of n1-standard-2. Jenkins executors require two cores, so the default machine type of n1-standard-1 won’t work. Finally, Windows nodes require alias IP addresses, so we need the --enable-ip-alias option as well.
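Once the cluster comes up, a quick sanity check confirms the initial Linux nodes are ready. (This assumes kubectl is pointed at the new cluster; the get-credentials command takes care of that.)

```shell
 $ gcloud container clusters get-credentials [CLUSTER_NAME]
 $ kubectl get nodes -o wide
```

Both nodes should report a STATUS of Ready before you move on.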

Creating a Windows Node Pool

By default the cluster has a pool of Linux nodes. We need to create a new node pool based on Windows images. Here’s the command:

  $ gcloud container node-pools create [NODE_POOL_NAME] \
      --cluster=[CLUSTER_NAME] \
      --image-type=WINDOWS_LTSC \
      --enable-autoupgrade \
      --machine-type=n1-standard-2

For reasons we won’t get into here, we need to specify a Windows image type of WINDOWS_LTSC. See Windows Container Version Compatibility for details about different levels and types of Windows images. 
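Once the pool is created, you can confirm that the Windows nodes registered with the cluster by filtering on the kubernetes.io/os label. (This is the same label the pipeline’s nodeSelector will rely on later.)

```shell
 $ kubectl get nodes -l kubernetes.io/os=windows
```

You should see one Ready node per member of the new pool.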

Installing Core

With the cluster and node pool created, we’re going to make the minor assumption that you can take things from here and install CloudBees CI on the cluster yourself. If you need any help, the CloudBees Core on Google Kubernetes Engine (GKE) installation guide is a great resource.

Running a Simple Pipeline

Now that everything is set up, let’s define a new pipeline. To make sure everything is working, we’ll start with a simple Linux pipeline:

 podTemplate {node(POD_LABEL) {sh 'cat /etc/os-release'}}

This simply prints the /etc/os-release file on a Linux node. Your results will look something like this: 

 . . .
Running on simple-1-4ft20-k3n1h-0sj97 in /home/jenkins/agent/workspace/simple
[Pipeline] {

[Pipeline] sh

+ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.1 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.1"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.1 (Ootpa)"
. . . 
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.1"
. . .

And the crowd goes wild. We have no configuration for the podTemplate; we’re just using whatever comes out of the box with no other customizations. There’s no container step, so the shell command runs inside the same container as the Jenkins agent. That container is basically just a Universal Base Image with a Java runtime environment and the agent JAR, so there’s not a lot there. 
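If you do want the step to run somewhere other than the agent container, you can add a container to the pod and wrap the step in a container block. Here’s a sketch that runs the same command inside a stock golang image; the image choice is arbitrary, and any Linux image with a shell would work just as well:

```groovy
podTemplate(containers: [
    // Keep the container alive so the agent can exec steps into it
    containerTemplate(name: 'shell', image: 'golang:1.14', command: 'sleep', args: '99999')
]) {
    node(POD_LABEL) {
        container('shell') {
            sh 'cat /etc/os-release'
        }
    }
}
```

This time the output would describe the Debian base of the golang image rather than the agent’s Universal Base Image.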

Running a Pipeline that Needs a Windows Node

The main event, of course, is to create a pipeline that uses a Windows node to get some work done. Here’s a sample pipeline:

podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-agent:latest-windows
  - name: shell
    image: mcr.microsoft.com/powershell:preview-windowsservercore-1809
    command:
    - powershell
    args:
    - Start-Sleep
    - 999999
  nodeSelector:
    kubernetes.io/os: windows
''') {
    node(POD_LABEL) {
        container('shell') {
            powershell 'Get-ChildItem Env: | Sort Name'
        }
    }
}

In this case we’ve got some YAML that defines a couple of images we’re going to use, specifies that things need to run on a Windows node, then invokes a PowerShell command. The YAML defines two Kubernetes containers. The first, jnlp, contains the JNLP agent that Jenkins uses to kick things off. The second, shell, uses an image from the Microsoft Container Registry (MCR). That is the Windows image we’ll use to run powershell. We define a command and some arguments for that image as well; they simply keep the container alive so Jenkins can run steps inside it.

The really crucial piece here is the nodeSelector at the end of the YAML section. This is what tells Kubernetes that any node created for this pipeline should be a Windows node. From there the pipeline simply uses the shell container to run a PowerShell command that prints a sorted list of all of the environment variables in the shell.
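Incidentally, powershell isn’t the only Windows-friendly step: the bat step runs plain Windows batch commands the same way. Inside the node(POD_LABEL) block of the pipeline above, you could just as easily write:

```groovy
container('shell') {
    bat 'set'              // print environment variables via cmd.exe
    powershell 'Get-Date'  // and mix in PowerShell as needed
}
```

Which one you use mostly comes down to which scripting style your build tooling expects.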

Here’s an excerpt from the results: 

‘windows2-1-vnwz3-krkxc-b6bh7’ is offline
Agent windows2-1-vnwz3-krkxc-b6bh7 is provisioned from template windows2_1-vnwz3-krkxc
---
apiVersion: "v1"
kind: "Pod"
metadata:
  . . . 
Running on windows2-1-vnwz3-krkxc-b6bh7 in /home/jenkins/agent/workspace/windows2
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] powershell


Name                           Value                                           
----                           -----                                           
ALLUSERSPROFILE                C:\ProgramData       
. . .
BUILD_DISPLAY_NAME             #1                                              
BUILD_ID                       1                                               
BUILD_NUMBER                   1                                               
BUILD_TAG                      jenkins-windows2-1                              
CJOC_PORT                      tcp://10.0.55.129:80
. . . 

As you can see, the pipeline provisioned a Windows node and ran the requested code on it. A real-world example could pull code from a repository and run Windows-specific build tools against it. Managing a multi-platform CI environment from a single source is extremely powerful. 
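That real-world pipeline might look something like the following sketch. The repository URL and build command are placeholders, and the build image (a .NET Framework SDK image from MCR) is just an illustrative choice; substitute whatever image carries your build tools.

```groovy
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-agent:latest-windows
  - name: build
    image: mcr.microsoft.com/dotnet/framework/sdk:4.8-windowsservercore-ltsc2019
    command:
    - powershell
    args:
    - Start-Sleep
    - 999999
  nodeSelector:
    kubernetes.io/os: windows
''') {
    node(POD_LABEL) {
        container('build') {
            git 'https://example.com/your/windows-project.git'  // placeholder repository
            bat 'msbuild YourProject.sln /t:Build'              // placeholder build command
        }
    }
}
```

The shape is identical to the example above; only the second container’s image and the steps inside it change.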

Acknowledgments

The author would like to thank Jesse Glick, Ben Rich, and Kenneth Rogers for technical support and advice while writing this article.