Overview of Spotinst, Elastigroup and Ocean
Spotinst helps businesses increase DevOps agility by automating multi-cloud infrastructure management, and dramatically reduces cloud costs through intelligent workload allocation and the use of excess cloud capacity (aka Spot Instances). Spotinst allows developers to focus on building applications, without worrying about choosing, managing, or scaling infrastructure.
With Spotinst, companies save approximately 80% on cloud infrastructure costs by leveraging Spot Instances with ease and confidence. Using advanced algorithms and historical data, Spotinst predicts interruptions ahead of time and seamlessly migrates workloads onto different instances while ensuring high availability and application consistency.
Spotinst Elastigroup contains a collection of EC2 instances or GCP/Azure virtual machines that are treated as a logical group for the purposes of automatic scaling and management. Elastigroup enables you to use features such as automatic replacement of Spot and On-Demand Instances, health checks, scaling policies, blue-green deployments, and more.
At re:Invent 2018, we released a new product dedicated to container workloads, called Spotinst Ocean.
Spotinst Ocean provides an abstraction layer on top of virtual machines, allowing you to deploy Kubernetes clusters without the need to manage the underlying VMs for the worker nodes.
Spotinst Ocean learns what resources the containers need and how long they run. Ocean then uses this information to densely pack Pods, ensuring fast and cost-effective scheduling. By combining Pod awareness with price prediction, Ocean interacts with the Kubernetes Scheduler to place Pods where they won’t be interrupted.
One way for doing CI/CD on all clouds
Jenkins is one of the leading automation servers in the market, running CI/CD pipelines in many organizations worldwide. Spotinst offers a single cluster software that can run on each cloud provider independently, while still offering the same experience, APIs and features.
When trying to do CI/CD across multiple clouds, and although Jenkins offers direct integrations with multiple cloud providers, it can be difficult to prepare a single CI/CD pipeline that runs across all of them.
CI/CD at 80% lower cost
Cloud excess capacity is a great way to reduce infrastructure costs. Typically, you can save between 60-90% off the regular on-demand compute price. With Spotinst, you can achieve this easily by configuring your Jenkins server to automatically scale designated slaves up and down on Spot Instances, depending on the number of jobs to be completed. This allows you to get these resources at roughly 80% lower cost, with high availability.
Jenkins X offers a new way of running your CI/CD pipelines on top of K8s. In this case, Spotinst’s Ocean can maximize your savings even further by running right-sized Spot Instances for the workloads running on K8s.
CI/CD delivered on serverless infrastructure
By combining the serverless Jenkins concept with Ocean’s serverless Kubernetes experience, companies can deploy, run, and manage CI/CD and applications without managing the infrastructure for them. This dramatically reduces operational overhead, freeing up DevOps time, while continuously and automatically optimizing the process.
This is how it works
In this blog, we’re going to demonstrate the serverless experience of CI/CD pipelines on top of K8s, using Jenkins X and Ocean by Spotinst. Ocean will manage and scale up/down K8s worker nodes, while Jenkins will manage the CI/CD pipeline of our applications.
Here is a high-level overview of the operations we’ll demonstrate in this blog:
- Create an EKS cluster with Ocean
- Install Jenkins X on top of Ocean cluster
- Create a demo application - Java spring
- Jenkins X will create a 0.0.1 version of this application and promote it to the staging environment
- Make a change to the application (add an index page) and create a PR to the master branch
- Jenkins X will create a preview environment that will trigger a scale event in Ocean to add another K8s node for the application to be deployed on
- Approve the PR with an “approved” label - Prow will merge this PR automatically and promote the change to the staging environment
- Clean up the preview environment (`jx gc previews` command) - Ocean will detect the underutilized K8s node and trigger a scale-down event
Infrastructure used in this blog:
- A programmatic user for Github, used to create the appropriate repositories in Github
- EKS cluster managed by Ocean
- Jenkins X installed on the EKS cluster created above
- AWS Route53 subdomain to register the ingress controller (can be any other DNS provider if an ingress controller is already installed on the cluster)
Tools required for this blog:
- kubectl - CLI for communicating with K8s clusters
- aws-iam-authenticator - used to authenticate kubectl users with EKS clusters
- jx - Jenkins X CLI
- Helm - the package manager for K8s
- JDK - will be used for the Java Spring demo application
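Before starting, you can quickly verify that all of these are on your PATH with a small sketch like the following (`java` stands in for the JDK here; the loop is our own convenience, not part of any of the tools):

```shell
# Check that each required CLI is installed and on PATH
missing=""
for tool in kubectl aws-iam-authenticator jx helm java; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing tools:$missing"
else
  echo "All required tools are installed"
fi
```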
Creating Programmatic User for Github
This user will be used during the installation of Jenkins X or when creating new applications using Jenkins X.
- In your Github account go to settings → developer settings → Personal access tokens and click on “Generate new token.”
- Name your token “jx_token,” and select the following permissions.
Then click on “Generate token” at the bottom of the page.
Write down this token in a secure place, as this is the last time you’ll be able to retrieve it. You’ll use this token when you install Jenkins X (see YOUR_GITHUB_API_TOKEN later in this blog).
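As an optional convenience, you can keep the token in an environment variable so it’s at hand for the `jx install` step later. The variable name here is our own choice, not a jx convention:

```shell
# Placeholder value - paste the real token you just generated.
# GITHUB_API_TOKEN is an arbitrary variable name, not required by jx.
export GITHUB_API_TOKEN="<YOUR_GITHUB_API_TOKEN>"
```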
Installing EKS and Spotinst’s Ocean
Sign-in to or sign-up for Spotinst
If you’re already a Spotinst customer, log in to your dashboard, or sign up for a 14-day free trial on the Spotinst website.
Create your EKS cluster with Ocean
Spotinst Ocean will provision, manage, and scale the nodes for your cluster, but you will need to create the Kubernetes control plane.
We can create a new Amazon EKS cluster through the Ocean dashboard using CloudFormation. As an alternative to EKS you can also provision and manage your own Kubernetes cluster on AWS using the open-source kops tool.
To create an EKS cluster using Ocean, use the Ocean creation wizard.
Click on “Generate Token”
Fill in the “Cluster Name”, Region and “Key Pair” and click on Launch CloudFormation Stack
In the CloudFormation console, review the configuration and click on “Create Stack”. It will take about 15 minutes to create the EKS cluster.
Back in the Spotinst console, follow Step 4 and run the commands from your CLI:
- `aws eks update-kubeconfig --name <cluster name>`
- Check the connectivity to your EKS cluster by running `kubectl get svc`
Install the Spotinst controller by running the controller installation script:
curl -fsSL http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/cluster-controller/scripts/init.sh | \
SPOTINST_TOKEN=<Spotinst API Token> \
SPOTINST_ACCOUNT=<Account ID> \
SPOTINST_CLUSTER_IDENTIFIER=<cluster name> \
bash
Note: all of these parameters are filled in automatically by the Spotinst console.
Installing Jenkins X on the cluster created above
In order to install Jenkins X on our EKS cluster, we’ll use the `jx install` command. This command accepts many parameters (available via `jx install --help`). We’ll review some of them:
- `--domain` - the domain that our ingress endpoints will be exposed on
- `--provider` - according to this parameter, the installation configures different types of integrations (for example, if `eks` or `aws` is supplied here, the installation process will try to match a subdomain in the Route53 service, use ECR as the container registry, etc.)
Note: If using EKS/AWS, please make sure your worker nodes’ IAM profile has permissions to read and write images to ECR, as Jenkins X will use it as a container registry.
In your terminal, type the following command. Make sure to replace the <> placeholders with your own values:
jx install --no-tiller --default-environment-prefix=jenkinsx --prow=true --tekton=true --domain=<REPLACE_WITH_YOUR_DOMAIN> --default-admin-password=<YOUR_DEFAULT_ADMIN_PASSWORD> --git-username='tsahiduek' --provider='eks' --git-api-token=<YOUR_GITHUB_API_TOKEN>
Follow the different stages of the installation process - please pay attention to:
- Letting Jenkins X create an Nginx ingress controller
- Using your Github user as the pipeline Github user
- Using “Kubernetes Workloads: Automated CI+CD with GitOps Promotion” as the default workload build pack
- Github organization to create the environment repository
Note: The installation process should take several minutes, depending on your internet connection.
At the end of the installation process, Jenkins X has installed its components in the `jx` namespace, which includes pods for services such as Prow, Nexus, ChartMuseum, and more.
In this demo, the cluster CPU utilization after the Jenkins X installation was around 78%.
Now that we have installed Jenkins X on top of our K8s cluster, let’s create a demo application. By creating an application, Jenkins X will create and configure a Github repository that contains our application. After that, it will trigger an initial build to create an image for our application and eventually will deploy it to the staging environment (which is a different namespace in our K8s cluster).
Use the following command to create this demo application:
jx create spring -d web -d actuator
This will create an empty Java Spring repo in Github and will create a 0.0.1 version of this template application.
These are the activity and build logs generated upon creation of the repo:
When the creation process has ended we should have this application installed on our staging environment:
jx get applications
APPLICATION STAGING PODS URL
jx-spring-demo 0.0.1 1/1 http://spring-demo.jx-staging.jenkinsx.ek8s.com
Hitting this URL will show the Spring default error page.
Now, let’s make a small change to our application, commit it, and publish it as a “preview environment”. A preview environment is a deployment of our application to a temporary, standalone namespace, exposed by a temporary, standalone service. The build and deployment process of this environment will need more computing power from our cluster, which might not have that amount of resources available. No worries - this is exactly why we manage this K8s cluster with Ocean.
The change that we’ll make is to add a homepage to our application. As a big fan of the iconic television show Married with Children, I’ve decided to create a simple web page which presents a picture of Al Bundy. The steps are:
1. Git checkout to a new branch - `git checkout -b add-index-page`
2. Add an index.html file under `src/main/resources/static/index.html`:
<html>
  <head><title>Al Bundy</title></head>
  <body><img src="https://babbletop.com/wp-content/uploads/2018/02/al-bundy-schockiert-1-thumb-960-retina.jpg" height="300"></body>
</html>
3. Scale our deployment to three Pods (might be needed for load testing) - edit our helm `values.yaml` under “charts/spring-demo” for our deployment to create three Pods by updating `replicaCount: 3`
4. Commit and push the new branch - `git add . && git commit -m "index page added" && git push origin add-index-page`
5. Create a PR from the new branch to the master branch - this will trigger a build job that eventually will create a preview environment.
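The file edits in steps 2 and 3 above can be sketched as shell commands. The directory layout matches what `jx create spring` generates; the one-line `values.yaml` below is a stand-in for the generated file, which contains more keys:

```shell
# Step 2: create the static index page
mkdir -p src/main/resources/static charts/spring-demo
cat > src/main/resources/static/index.html <<'EOF'
<html>
  <head><title>Al Bundy</title></head>
  <body><img src="https://babbletop.com/wp-content/uploads/2018/02/al-bundy-schockiert-1-thumb-960-retina.jpg" height="300"></body>
</html>
EOF

# Step 3: bump the replica count in the Helm chart. The generated
# values.yaml normally starts at replicaCount: 1 (stand-in file below).
echo "replicaCount: 1" > charts/spring-demo/values.yaml
sed -i.bak 's/^replicaCount: .*/replicaCount: 3/' charts/spring-demo/values.yaml
grep replicaCount charts/spring-demo/values.yaml
```

After these edits, steps 1, 4, and 5 are the usual branch, commit, push, and PR flow against your generated repo.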
Clicking on the link in the PR will launch our preview environment:
That’s great! We now have our change deployed and ready to be tested with no effect on our staging environment. Behind the scenes, Ocean triggered a scale-up event for those Pods to be allocated (Pods were in `pending` status). As you can see, our cluster scaled up to three nodes.
After we’ve tested that our preview environment is functioning well, we can approve the PR. All we have to do is add the label “approved” to the PR. Prow, which was installed and configured as part of the Jenkins X installation process, will catch this Github event and merge the PR automatically.
This will trigger an automatic promotion to the staging environment, resulting in a new version of our application:
jx get applications
APPLICATION STAGING PODS URL
jx-spring-demo 0.0.2 3/3 http://spring-demo.jx-staging.jenkinsx.ek8s.com
Hitting the link of our staging environment will result in our updated application:
We’ve just deployed our app to the staging environment after validating all of our tests passed successfully and we’re happy with the result.
Now, you’re probably wondering about the preview environment. Will it stay in the cluster and consume unnecessary CPU and memory? Of course not. Jenkins X garbage-collects preview environments periodically, but for the purpose of this demo, let’s trigger the garbage collection manually: `jx gc previews` will do the job. This command terminates the preview environment’s pods and frees the allocated CPU and memory in our K8s cluster, which means the cluster now has excess capacity. This time, Ocean will detect the unused resources and identify which node can be terminated while keeping all other pods running on the remaining nodes. Ocean will drain that node and then terminate it, keeping the utilization of the cluster as efficient as possible.
Uninstalling Jenkins X is easy. Just type `jx uninstall` and follow the prompts of the command.
In this blog, we’ve demonstrated how to reduce the cost of our infrastructure in several dimensions:
- Jenkins X uses K8s pods for CI/CD instead of “regular” instances
- Ocean will right size the instances needed for the pods in the cluster
- Jenkins X will remove unused preview environments, thereby freeing up resources in our cluster
- Ocean will identify underutilized instances that can be terminated, drain them, and eventually terminate them
- Learn more about Jenkins®
- Find out how you can get Jenkins X support
- See how Jenkins X speeds up CI/CD in this webinar