Save 40% or more in 40 Seconds

Sunshower.io’s optimizing algorithms help you save time and money on the cloud. There’s no upfront cost, and our results are better than our competitors’. It’s cloud computing optimization unlike anything else out there. So how do we do the thing?

Imagine you need to rent a storage unit — you have a bunch of boxes, but nowhere to put them. No problem! There are a ton of companies out there that will rent you a storage space. You do a quick search, and find over 30 self-storage companies scattered across town. You don’t have time to talk to everyone to compare prices (who does?), so you call a company whose name you recognize. You’re not exactly sure what size storage unit you need, so they talk you into renting a 10 x 20 unit, “just in case”, at which point you end up with a storage unit that looks like the photo above.

End result: you’re paying for a lot more than what you need. Sure, you could move to another facility, but who wants to negotiate with another company, then give up their whole day to move a truck and switch facilities? Easier to stay put, and keep that extra space.

Buying too much “just in case” is a very common thing for companies on the cloud, too. Why?

  • There are an overwhelming number of cloud service provider choices
  • There are an overwhelming number of options on each of those cloud service providers
  • The UIs of cloud service providers are confusing
  • It’s hard to know exactly what you need and what you’re using

That’s where Sunshower.io comes in

When you work with us, we securely run metadata about your resource usage through a proprietary algorithm designed to find the exact right fit for your cloud compute needs. We use machine learning to crunch millions of data points, factor in fluctuations in data usage over time, and come up with a cloud plan that ensures you aren’t overpaying “just in case.” We find you a fit so good that we save our customers 40% or more on their monthly cloud compute bill.

(Think Cinderella’s glass slippers, with good arch support and just enough wiggle room for your toes.)

Over time, this kind of cloud savings can be game-changing. Just imagine the decisions you could make even with an extra 20% of your monthly cloud spend back in your pocket, like hiring another engineer, or launching a great social media campaign. And that’s what we’re all about at Sunshower.io: helping you focus on what matters — your business.

To that end, we’re excited to announce our just-launched AWS EC2 optimizer

If you’re currently using Amazon Web Services EC2 for your cloud infrastructure, our service (colloquially known as Anvil) has been specifically tailored to analyze your data and come up with a better cloud usage plan to help with AWS cost optimization. The bottom line: Our AWS cost optimization can help you save money on AWS with just a few clicks.

Not using AWS EC2? We promise you won’t be left out. We’re launching AWS RDS optimization next, and we’ll be releasing optimizations for more public clouds as we go along. (Google Cloud and Azure are next up on the list, but let us know your needs and we can re-prioritize!)

‘Cloud Management Platform’ Undersells It

Because we need a snappy way to refer to what we do, we call ourselves a ‘cloud management platform.’ But that undersells it. Why? Because cloud management platforms and solutions are for the big guys — the ones with IT budgets the size of small countries, or at least small counties. Twee though it may sound, we want to be the great equalizer for cloud computing, supporting engineers of all stripes.

Why does this matter?

More and more, engineers aren’t coming from a Computer Science background, they’re coming from code academies or more ad hoc backgrounds (hello, yes, I majored in journalism). And even when they are, cloud computing isn’t really taught in schools. So you wind up in a job, and suddenly you have to figure out how to deploy things. Or you start a company, and you realize “wow, I can’t just have this run on my localhost.” Or maybe your infrastructure is in the cloud already, and you realize you just wasted $200 on an instance you forgot about. Or your infrastructure is in the cloud and costing you the salaries of three good engineers a month, and you have to figure out how to keep the engineers and not the cloud cost.

And so the Googling and Stack Overflowing begins, except half the information is out of date because things are always changing, and the numbers are almost always wrong because cloud service providers are constantly changing their pricing structure. And don’t even get me STARTED on how or if you should containerize your software.

It is in these moments that you wish that there was a cloud management system that could just do the thing for you — help you pick a cloud, help you migrate across clouds, help you deploy with Docker just like you had been with machine images, help you with cloud cost optimization.

And that, THAT is our why. Because no matter how new or how senior, there will be a moment when you don’t have the support you need. We didn’t pick the name ‘Sunshower’ because it was cute and fit with the cloud theme (okay, we kind of did). We picked it because dealing with the cloud shouldn’t have to be so dang hard. It should be able to be easy, and rewarding, and maybe even fun.

Need some help on the cloud? Visit us at Sunshower.io.
Spending too much on AWS? Learn more about our Anvil optimizer.

Unit-testing Aurelia with Jest + JSDOM + TypeScript + Pug

All source can be found at: Aurelia-Aire

Testing UI components is one of the more difficult parts of QA. In Stratosphere (one of the key components of our cloud management platform), we’d been using Karma + Jasmine, but the browser component had always been a complication: providing a DOM to tests in a fast, portable manner, within memory and speed constraints, is pretty challenging. Sure, initially we did the PhantomJS thing, then Chrome Headless came out, but writing UI tests just never really felt natural.

Then, last week, we decided to open-source our component framework, Aire, built on top of UIKit+Aurelia, and that created an exigent need to fix some of the things we’d been limping along with, most importantly testing. The success of OSS projects depends on quite a few things, but I consider providing a simple way to get contributors up-and-running critical.

Simple set-up

Internally, Aurelia uses an abstraction layer (Aurelia PAL) instead of directly referencing the browser’s DOM. Aurelia will (in principle) run on any reasonable implementation of PAL. Aurelia provides a partial implementation OOTB, Aurelia/pal-nodejs, that will enable you to (mostly) run your application inside of NodeJS.

Project Structure

Our project structure is pretty simple: we keep all our components and tests under a single directory, src:


aire
├── build
│   └── paths.js
├── gulpfile.js
├── index.html
├── jest.config.js
├── jspm.config.js
├── jspm.dev.js
├── package.json
├── package-lock.json
├── src
│   ├── main
│   │   ├── aire.ts
│   │   ├── application
│   │   ├── button
│   │   ├── card
│   │   ├── core
│   │   ├── core.ts
│   │   ├── dropdown
│   │   ├── events.ts
│   │   ├── fab
│   │   ├── form
│   │   ├── icon
│   │   ├── init
│   │   ├── init.ts
│   │   ├── loader
│   │   ├── nav
│   │   ├── navbar
│   │   ├── offcanvas
│   │   ├── page
│   │   ├── search
│   │   ├── table
│   │   ├── tabs
│   │   └── widget
│   └── test
│       ├── button
│       ├── core
│       ├── init
│       ├── render.ts
│       ├── setup.ts
│       └── tabs

...etc

At the top of the tree you’ll notice jest.config.js, the contents of which look like this:

Basically, we tell Jest to look under src for everything. ts-jest will automatically look for your TypeScript compiler configuration, tsconfig.json, in its current directory, so there’s no need to specify that.
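
Since the original config isn’t embedded in this post, here is a minimal sketch of the shape it takes; the rootDir, test regex, and setup-file path are assumptions you’d adjust to your own layout:

// jest.config.js -- a minimal sketch, not the exact config from the post
module.exports = {
  rootDir: 'src',
  transform: {
    '^.+\\.ts$': 'ts-jest'                    // compile TypeScript on the fly
  },
  testRegex: '\\.(test|spec)\\.ts$',          // pick up *.spec.ts / *.test.ts under src
  moduleFileExtensions: ['ts', 'js', 'json'],
  setupFiles: ['<rootDir>/test/setup.ts'],    // the global jsdom setup described below
  testEnvironment: 'node'
};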

Our tsconfig is pretty standard for Aurelia projects:
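
For reference, a sketch of a typical tsconfig.json for this kind of setup (decorator support and the DOM lib are the important parts; treat the rest as adjustable defaults):

{
  "compilerOptions": {
    "target": "es2015",
    "module": "commonjs",
    "moduleResolution": "node",
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "lib": ["es2017", "dom"],
    "types": ["jest", "node"],
    "sourceMap": true
  },
  "include": ["src/**/*.ts"]
}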

Test

If you just copy and paste our tsconfig.json and jest.config.js files while following the outlined directory structure, everything will Just Work (don’t forget to npm i -D the appropriate Jest and Aurelia packages.)

At this point, you can use aurelia-test to write tests a la:

hello.html

hello.ts

hello.test.ts
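
The original gists aren’t reproduced here, but a sketch of what the three files can look like follows. This version leans on the stock aurelia-testing StageComponent helper rather than the project’s own render.ts, so treat the imports and selectors as assumptions:

<!-- hello.html: a trivial custom-element view -->
<template>
  <div class="hello">Hello, ${name}</div>
</template>

// hello.ts: the matching view-model
import {bindable} from 'aurelia-framework';

export class Hello {
  @bindable name: string;
}

// hello.test.ts: stage the component, bind a value, and assert on the rendered DOM
import {bootstrap} from 'aurelia-bootstrapper';
import {ComponentTester, StageComponent} from 'aurelia-testing';

describe('hello', () => {
  let component: ComponentTester;

  beforeEach(() => {
    component = StageComponent
      .withResources('hello')
      .inView('<hello name.bind="name"></hello>')
      .boundTo({name: 'world'});
  });

  afterEach(() => component.dispose());

  it('renders the bound name', async () => {
    await component.create(bootstrap);
    const el = document.querySelector('.hello');
    expect(el.textContent.trim()).toBe('Hello, world');
  });
});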

Now, you can run your tests with npx jest:

aire@1.0.0 test /home/josiah/dev/src/github.com/sunshower/aurelia-aire/aire
npx jest

PASS src/test/button/button.spec.ts
PASS src/test/tabs/tab-panel.spec.ts
PASS src/test/init/init.spec.ts
PASS src/test/core/dom.spec.ts

Test Suites: 4 passed, 4 total
Tests: 12 passed, 12 total
Snapshots: 0 total
Time: 3.786s
Ran all test suites.

Enabling support for complex DOM operations

That wasn’t too bad, was it? Well, the problem we encountered was that we use the excellent UIKit framework, and they obviously depend pretty heavily on the DOM. Any reference in Aire to UIKit’s Javascript would fail with a ReferenceError: is not defined error. Moreover, if we changed the Jest environment from node to jsdom, we’d encounter a variety of errors along the lines of TypeError: Failed to execute 'appendChild' on 'Node': parameter 1 is not of type 'Node' which I suspect were caused by pal-nodejs creating DOM elements via its own jsdom dependency while Jest was performing DOM operations using its jsdom dependency. In any case, the solution turned out to be to define a single, global jsdom by importing jsdom-global. Once we discovered this, we encountered other issues with browser-environment types and operations not being defined, but this setup.js resolved them:
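
A sketch of the kind of setup file this ends up being; the concrete list of globals is just whatever your components happen to touch, so treat these as examples:

// src/test/setup.ts: establish one global jsdom before anything touches the DOM
import 'jsdom-global/register';

// expose browser globals that libraries expect to find on the Node global object
(global as any).Element = window.Element;
(global as any).HTMLElement = window.HTMLElement;
(global as any).requestAnimationFrame = (cb: FrameRequestCallback) => setTimeout(cb, 0);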

At this point we could successfully test Aurelia + UIKit in NodeJS.

The final piece

All of our component views are developed in Pug, and I didn’t like that we’d be developing in Pug but testing using HTML. The solution turned out to be pretty simple: add Pug as a development dependency and create a small helper function:
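
Something along these lines (a sketch; the helper name and the real render.ts in the repository may differ):

// src/test/render.ts: compile an inline Pug snippet into the HTML string the test needs
import {render} from 'pug';

export function fromPug(template: string): string {
  return render(template);
}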

With that final piece in place, our test looks like:
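
Roughly like this (again a sketch; the Pug fragment and helper name are assumptions):

// hello.test.ts: the same test as before, but the view fragment is authored in Pug
import {bootstrap} from 'aurelia-bootstrapper';
import {StageComponent} from 'aurelia-testing';
import {fromPug} from '../render';

it('renders the bound name from a Pug view fragment', async () => {
  const component = StageComponent
    .withResources('hello')
    .inView(fromPug('hello(name.bind="name")'))   // compiles to <hello name.bind="name"></hello>
    .boundTo({name: 'world'});

  await component.create(bootstrap);
  expect(document.querySelector('.hello').textContent).toContain('world');
  component.dispose();
});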

Conclusion

The benefits of writing tests in this way became apparent the moment we cloned the project into a new environment and just ran them via npm run test or our IDEs. They’re fast, don’t require any environmental dependencies (e.g. browsers), and allow you to run and debug them seamlessly from the comfort of your IDE. But, perhaps most importantly, they’re fun to write!

Eulogy for the Old-Fashioned: Things We’ve Innovated Out Of Existence

As a kid, I remember having my mom drop me off at the library so I could rummage through the card catalog. I still have warm-and-fuzzy nostalgic feelings about spending hours pulling books off the shelves, checking the indexes, and finding exactly the pieces of information I needed. It’s a nice memory, but would I trade away the Internet to get another chance at flipping through musty drawers of tiny typewritten cards? No way. You’d have to be bonkers to go backwards and do research like that. I’d rather sit at home in my pajamas and access hundreds of sources online in seconds, thanks very much.

Innovation is a funny thing. We get so accustomed to doing something one way, that it never occurs to us that it could be different. Better. Think about all of the things we used to consider perfectly fine until we figured out a better way:

  • Driving to the store to rent a movie (RIP, Blockbuster)
  • Placing a personal ad in the newspaper
  • Pulling over to make a call from a pay phone
  • Planning a route using a tri-fold city map
  • Sending a fax
  • Looking numbers up in the Yellow Pages
  • Listening to music on a Walkman
  • Checking out what’s on tonight in the TV Guide
  • Using a typewriter with a bottle of Wite Out handy
  • Checking spelling using a dictionary
  • Looking facts up in an encyclopedia
  • Keeping business contacts in a Rolodex
  • Calling the theater for showtimes
  • Making a mixtape

At one point, someone asked: Why isn’t there a better way? Then they changed the game.

That’s what we did for the cloud at Sunshower.io.

People used to think it was okay to be surprised by their cloud bill, or to spend hours, sometimes days, sometimes months, deploying applications. We used to find it totally acceptable to ask developers to become experts in how to deploy their software, while still … you know … finding the time to actually write said software. Not anymore. Sunshower.io has all the tools you need to work with the cloud faster, simpler, and more efficiently than ever before. We’re a cloud management platform that’s dedicated to offering better cloud resource management strategies and simple cloud cost optimization tools.

Why take a boat across the ocean when you could fly? You have stuff to do, deadlines to meet. Check out our beta on DigitalOcean to see how we can offer you a more innovative cloud management system.

Github Pages with TLS and a Backend

When you’re spending your days (and nights) developing the best cloud management platform possible, blogging consistently is hard. That said, we’re redoubling our efforts to get content out semi-regularly, even if it’s just to post something simple and (hopefully) helpful. To that end, I’d like to discuss setting up Github pages with a backend.

The Problem


Github.com allows you to host static content via its pages feature.  This is fantastic because it makes it trivial to create, deploy, and update a website.  This is how https://sunshower.io is hosted.

But what if you wanted to interact with, say, a database, to track signups?  Furthermore, what if you wanted to do it all over TLS?  This tutorial presumes that your registrar is AWS and your DNS is configured through Route53.


Set up Github DNS Alias Records For Github Pages

This one’s easy: in your Route53 Hosted Zone, create an A record that points to Github Pages’ servers:

  • 185.199.108.153
  • 185.199.109.153
  • 185.199.110.153
  • 185.199.111.153


Then, check in a file in your pages repository under /docs called CNAME containing your DNS name (e.g. sunshower.io)


Push that sucker to master and you should have a bouncing baby site!


Publish your backend with the correct CORS Preflights

We pretty much just have a plugin for Sunshower.io that registers unactivated users. Create an EC2 webserver/Lambda function/whatever to handle your requests. The thing to note here is that your backend will have to support preflight requests. A preflight request is an OPTIONS request that your server understands and responds to with the set of cross-origin resource sharing (CORS) headers describing what your backend allows.

This is because your page, hosted at a Github IP, will be making requests to Amazon IPs, even though both are subdomains beneath your top-level domain.  For a JAX-RS service at, say, https://mysite.com:8443/myservice, you will need two methods:


@Path("myservice")
@Produces({
  MediaType.APPLICATION_JSON,
  MediaType.APPLICATION_XML,
})
@Consumes({
  MediaType.APPLICATION_JSON,
  MediaType.APPLICATION_XML,
})
public interface SignupEndpoint {

  @POST
  RegistrationConfirmationElement signup(RegistrationRequestElement request);

  @OPTIONS
  Response getOptions();
}

Note that there must be an @OPTIONS method for each resource method you want to call cross-origin (e.g. the @POST method here). A preflight will be made to the same path as the request, and the server will respond with whether that request is allowed. You can widen the scope of @OPTIONS responses, but you should have a pretty good reason for doing so.


The actual @OPTIONS method will look something like:


@Override
public Response getOptions() {
  return Response.status(Response.Status.OK)
      .allow("POST")
      .header("Access-Control-Allow-Origin", "*")
      .header("Access-Control-Allow-Methods", "POST, OPTIONS")
      .header("Access-Control-Max-Age", 1000)
      .header("Access-Control-Allow-Headers", "origin, x-csrftoken, content-type, accept")
      .build();
}

where the allow value and Access-Control-Allow-Methods values align with the request type of the actual backend methods.


Set up an SSL-enabled ELB

The first thing to do here is to create a certificate using AWS’s wonderful Certificate Manager Service.

Request a certificate by navigating to Certificate Manager > Request a Certificate in the AWS Management Console. Request a public certificate for the fully-qualified name that you want your backend to appear under (e.g. backend.sunshower.io). If you have access to any of the following e-mail addresses at your domain:

  • administrator@your_domain_name
  • hostmaster@your_domain_name
  • postmaster@your_domain_name
  • webmaster@your_domain_name
  • admin@your_domain_name

then select e-mail verification, otherwise select DNS verification and proceed according to the instructions provided by AWS.


Once you’ve verified your certificate, create an elastic load balancer in whatever availability zones your backend spans. If you only have one instance in one availability zone (shame shame!), add that availability zone first. Create a listener targeting the port that your actual backend is listening on, and add a security group with an ingress port that corresponds to the port that you want your public-facing backend to listen on (for instance, if you want your backend to respond to requests to https://backend.sunshower.io, configure your ingress to be 443).

From there, configure a target group that points to your actual backend. If your backend is hosted on EC2 instances, select the instance target type; otherwise select ip. Reference the IPs/instances of your backend and create the ELB.


Configure a DNS A Record for your ELB

The last thing we need to do is create an A Record that points to your ELB.  If, previously, you requested a TLS certificate for backend.mysite.com, you’ll want to create an A-record whose name is backend, alias=true, with an alias target that is the ELB you created in the previous step.  Save that sucker and you are good to go!
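
If you’d rather script this step than click through the console, it can be done with the AWS CLI; a sketch, with the zone IDs, record name, and ELB DNS name as placeholders you’d substitute:

# placeholders: YOUR_ZONE_ID, the ELB's canonical hosted-zone ID, and the ELB's DNS name
aws route53 change-resource-record-sets \
  --hosted-zone-id YOUR_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "backend.mysite.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ELB_CANONICAL_ZONE_ID",
          "DNSName": "my-backend-elb-1234567890.us-west-2.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'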


Announcing Sunshower.io Preview!

We’re thrilled to announce a preview launch of Sunshower.io, a cloud management platform that offers cloud resource management and cloud optimization! I’d like to take a little time to explain what we do.

Beautifully Simple Multicloud Management

The first thing we do is provide a simple, unified interface for managing your public clouds. This means that you provide some read-only access to your cloud, which we store securely in our vault, and then we go discover whatever infrastructure is in your cloud(s) and manage and organize it for you.

Let me step through a quick example of our public cloud management system:

Discover your Resources (aka “Systems”)

The first thing we need to do is to discover your resources. Upon logging into Sunshower.io, you’ll be presented with the System Discovery Wizard. A System (as in a weather system) is the set of all infrastructure associated with a set of cloud accounts. So, if you have:

  • 1 Azure Scale Set with 4 active members that is your development cluster (azure-dev)
  • 1 Azure Scale Set with 20 active members that is your production cluster (azure-prod)
  • 1 AWS Autoscaling Group with 10 active members that is your AWS dev cluster (aws-dev)
  • 1 AWS Autoscaling Group with 30 active members that is your AWS production cluster (aws-prod)

Then you will create a system with at least 2 credentials, one for AWS, the other for Azure.

wizard-accounts.PNG

You can add as many accounts from as many cloud providers as you want. Each of the cloud providers is implemented as a (relatively) simple plugin, and if you add a new plugin (e.g. Google Cloud), you’ll be able to add credentials for that cloud, too.

Once all your accounts have been added, we’ll go through and perform the actual discovery. Once that completes, you’ll be presented with a topological overview of your cloud infrastructure:

topological-overview

The group color is used in the topology view to color the edges between nodes, and in the geography view to color the connections between regions:

geography-view.PNG

And yes, you can totally spin the globe!

Use Your Groups

Grouping is pretty fundamental to Sunshower.io:

  • Access control is based on groups
  • Management operations can be performed on entire groups (yes, you can spin down every node in a cluster by stopping its group)
  • Deployments are based on groups

For instance, you can SSH into an individual machine or an entire group. It’s pretty typical to have a ton of identical machines, so executing the same series of commands produces identical output from each machine. We de-duplicate the output and provide you with the results. Just store your private key in our vault (the actual key is stored in the excellent HashiCorp Vault). But, hey, tailing logs across a bunch of machines has never been so easy!

ssh

Visual Deployments (aka “Strata”)

One feature that we’re really excited about is visual modeling of deployments. Basically, you start off with a series of commands (e.g. shell commands), and you compose them together:

visual-modeling

From there, you can select a deployer format (e.g. Docker or Packer or even just the userdata section of an AMI, whatever you have plugins installed for) and voilà! You have a deployment that you can share with coworkers, publish to everyone, or share with a specific group. For instance, you might create a Stratus that:

  • Installs Java
  • Installs NodeJS
  • Installs Gulp

And you can generate a Dockerfile for it, a Packer file, an Azure Image, or an AMI without changing a thing!

Visual Infrastructure Modeling for Systems

But let’s say you don’t have any existing infrastructure and you want to create some. Now, you can log into your various cloud providers’ consoles and spin up whatever you need, but what if you want to create some infrastructure and try it out across clouds? Enter our visual infrastructure modeling for Systems. You can quickly model your infrastructure and deploy it out to any supported cloud (below is the cluster structure for Sunshower.io’s deployment)

system-designer.png

You can also export the model to a variety of provisioners like AWS CloudFormation, Azure Resource Manager, or Hashicorp Terraform. We’re currently figuring out what generating Kubernetes manifests looks like, but you’ll be able to do that soon too.

Finally, Anvil for Cloud Optimization

All of the data that we collect about your infrastructure we use to build a model of your tasks. If you think about purchased infrastructure as a shipping container, then it makes sense to purchase the smallest shipping container that can fit all of your packages. Anvil extends this analogy by allowing you to define new dimensions for your packages (think 5 or more), and we’ll figure out the smallest shipping container with those dimensions possible across any infrastructure for which there’s an infrastructure plugin. For instance, here’s an example of us spinning up a suboptimal configuration of resources and running Anvil on it:

anvil.png

It yields a much more compact (dense) configuration for your infrastructure. In fact, it typically yields an optimal configuration. You can even configure it to model packages based off of their greatest historical dimensions (like peak hours) so that you’ll never under-provision again, even while saving a substantial amount on your infrastructure. The result is total cloud cost optimization.

Conclusion

Thanks for sticking around! I wanted to provide a list of Sunshower.io’s current features to give everyone a better idea as to how it’s used and what it can do for you. Many people will only need one or two of the features, and we’d like to get some feedback as to which might be the most valuable for your organization so that we can get them to you ASAP.

Installing OpenShift on ESXi and CentOS 7

At Sunshower, we’ve been happily using Docker Compose and Docker Swarm for development and deployment respectively. These technologies make it a snap to build and deploy code, and the effort involved in setting them up is quickly offset by their utility.

We’ll continue to use Compose for development, but for better or worse, the industry has spoken and declared Kubernetes the winner of the container orchestration wars. And by this I mean that Swarm is not offered as a service by any of the major CSPs, but each of them either have or are working towards offering turnkey Kubernetes offerings. So, no sense in swimming upstream. We decided to create a Kubernetes deployment for Sunshower. However, a lot of the Kubernetes public cloud offerings are relatively expensive, which is a barrier for a self-funded startup like Sunshower. Fortunately, we got a very generous donation of hardware that I deployed OpenShift to.

Kubernetes vs. OpenShift

OpenShift is pretty much Kubernetes with some extras that make it especially attractive if you’re going to manage your own infrastructure. We’re using it because it also supplies builds.

Getting Started

Configuring your infrastructure

Configuring your infrastructure is the most tedious and error-prone part of this exercise, but if you don’t get it right, it will bite you.

Infrastructure step 1: Create a base VM image

Download a CentOS ISO (minimal is fine) to ISO_PATH where ISO_PATH is some accessible location on your local hard drive. I was not able to get uploads to work with the ESXi web client, so you’ll need to use the older ESXi thick client. Select the following options:

  1. Hardware Compatibility (only if you’re interacting with ESXi through VMWare Workstation): Workstation 11.x
  2. Installer disc image (iso): ISO_PATH
  3. Virtual Machine Name: centos-base (or whatever)
  4. Processors: at least 2 are required (whether that’s 2 processors with 1 core each, 1 processor with 2 cores, or whatever)
  5. 8GB memory
  6. Network type: Bridged !important
  7. Defaults for I/O Controller types, disk type, select a disk
  8. Disk capacity: 50 GB

Once your VM is up and running, run:

yum update -y
reboot now

When your VM comes back up, install some basics:

yum install -y open-vm-tools git docker wget
systemctl stop network # if you're ssh'd in, you'll lose access.  Do this through the VMWare console
chkconfig network off
chkconfig NetworkManager on
systemctl start NetworkManager
nmcli dev connect ens33 # You might have a different bridged interface name--check by running nmcli

Infrastructure step 2: Create the VM inventory

This is a bit of a chore since ESXi doesn’t even let you effing clone a VM. If you’re using Workstation, I’d recommend creating all the clones locally, then upload them to the ESXi host.

If you’re not using VMWare Workstation, manually clone each VM by:

  1. Creating the base VM (previous step)
  2. Browse the datastore you want to clone a VM to, create a new folder with the VM’s name (e.g. openshift-cluster-manager)
  3. Copy every file from the base VM’s directory to the new folder except for the log files

For our installation, we’ll have 1 master and 3 workers. If you need HA, you need 3 or 5 masters and however many workers. If you need a production-ready cluster with dozens or hundreds of workers, I do consult =).

Infrastructure step 3: assign static IPs

If you’re using OpenStack/AWS/vSphere, or are running your own DNS server, this is an optional step. Since ESXi does not have any available mechanism for dynamically deploying new virtual machines, your installation will be pretty static, so pick a naming convention, pick an adequate network size, and assign each cluster-node’s interface MAC to an IP. This obviously doesn’t scale well, but eh, it’s fine for a local cluster. One day I’ll show you how to create something a little less hands-on with OpenStack.
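
If your router or DHCP server can’t hand out MAC-based reservations, an alternative (not what’s described above, just a sketch) is to pin the address on each node itself with nmcli; the connection name, addresses, gateway, and DNS below are placeholders:

# pin a static address on the bridged interface (substitute your own values)
nmcli con mod ens33 ipv4.method manual \
  ipv4.addresses 10.0.0.4/24 \
  ipv4.gateway 10.0.0.1 \
  ipv4.dns 10.0.0.1
nmcli con up ens33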

Install OpenShift using the convenient installer script provided by Grant Shipley

Select an initiator node. This node will have the role of API server, so select one of your masters (or your only master). Clone Grant Shipley’s installer script (https://github.com/gshipley/installcentos) with:

git clone https://github.com/gshipley/installcentos

Install it with:

echo "$(hostname -I | cut -d' ' -f1) console.yourdomain.com" >> /etc/hosts # map the console hostname to this node's IP

export DOMAIN=yourdomain.com
export USERNAME=<username> #maybe administrator or something
export PASSWORD=<some super strong password>
cd installcentos && ./install-openshift.sh

This takes a while. Go get some coffee. Take a walk. If your installation node does not have a static IP, it will probably change due to the network reconfiguration this step performs and hork your installation.

If it completes successfully, visit console.yourdomain.com and you should see:

openshift-login

Install your cluster nodes

From your initiator node, copy your ssh key to each of the nodes you want to add to the cluster:

hosts=(os1.sunshower.io
os2.sunshower.io
os3.sunshower.io)

for i in "${hosts[@]}"; do
    ssh-copy-id "root@${i}"
done

Otherwise you’ll be typing a lot of passwords.

On the initiator node, edit /etc/ansible/hosts:

[OSEv3:children]
masters
nodes
new_nodes
etcd

[OSEv3:vars]
openshift_deployment_type=origin  # if you bought enterprise, this would be enterprise
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant  #Important!  The installer defaults to openshift-ovs-subnet for the nodes, but the master is running multitenant.  The openshift node process will fail to start without this
os_firewall_use_firewalld=true  #iptables sucks
osm_cluster_network_cidr=10.0.0.0/24  #change to whatever your network is
openshift_metrics_install_metrics=true  #optional

[masters]
console.sunshower.io  #or whatever your current hostname is

[etcd]
console.sunshower.io  # production installations would have several of these

[new_nodes]
os1.sunshower.io        openshift_schedulable=true  # All the DNS names of your nodes.  Make sure that they're either in your /etc/hosts file or your DNS server is correctly configured
os2.sunshower.io        openshift_schedulable=true
os3.sunshower.io        openshift_schedulable=true


Then, run:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml

These playbooks are installed by atomic-openshift-utils. After about 10 minutes, the installation process should complete and you should be able to run:

oc get nodes
NAME        STATUS    ROLES            AGE       VERSION
10.0.0.10   Ready     compute,master   1h        v1.9.1+a0ce1bc657
10.0.0.4    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.5    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.6    Ready     <none>           47m       v1.9.1+a0ce1bc657
[root@localhost ~]# kubectl get nodes
NAME        STATUS    ROLES            AGE       VERSION
10.0.0.10   Ready     compute,master   1h        v1.9.1+a0ce1bc657
10.0.0.4    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.5    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.6    Ready     <none>           48m       v1.9.1+a0ce1bc657


DevOps Without the DevOps Part 4: Finally, Containers!

In previous segments, we discussed how to collect project dependencies, and use Maven, Gradle, and the Spring Maven Gradle plugin to organize your project dependencies in a maintainable and traceable fashion. In this post, we’re going to take that setup and create a clean, reproducible build using Docker.

Understanding Containerization

A containerized application’s lifecycle is composed of several steps:

  1. Defining the container
  2. Building the container
  3. Tagging the container
  4. Pushing the container to a registry
  5. Running the container

We’ll be using Docker as our containerization technology, but it’s not the only option. Ensure that it’s installed for your platform before you continue.

Note that these instructions are intended for Linux-based machines. Containers differ from VMs in a very important way, namely that containers share the OS kernel of their container host.

What this means is that if, say, my host operating system is Debian, based on Linux Kernel 4.9.0.6-amd64 x86_64, then even if my container uses a different OS base (say, Ubuntu), the container and the host must be kernel-compatible (so, no Linux Kernel 3.10.x-ARM, for instance).

These considerations are rarely an issue for many software projects. We use Debian-based containers because they’re similar to our dev environments and tooling, but Alpine is a really good option for many projects and has some advantages in production (e.g. resource utilization is anecdotally better.)
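
A quick way to see which kernel your containers will be sharing:

# every container on this host runs against this kernel, whatever base image it uses
uname -r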

Step 1: Defining the Container

In the same way that we identified the compile and runtime dependencies of our project in the first series, let’s think about what we need to build the project. On my development system, I typically build a project by building and publishing the project’s Bill-Of-Materials POMs (mvn clean install -f bom), then by building the project’s artifacts with Gradle and publishing them to my local Maven repository (gradle clean build publishToMavenLocal, or, with Gradle’s nifty shortcuts gradle cle b pTML). So, I obviously need Gradle and Maven installed in my container.

So, what we’re going to do here is:

  1. Install the correct version of Java
  2. Download Maven from the Maven project site and install it into the container
  3. Download Gradle from the Gradle project site and install it into the container

Let’s see if we can get rid of any of these steps by selecting the correct base container. Searching Docker Hub, I see that there are official OpenJDK images: openjdk:8u141-jdk

My Dockerfile becomes simply:

FROM openjdk:8u141-jdk
ENTRYPOINT /bin/bash

Breaking down these instructions:

FROM openjdk:8u141-jdk says, “find an image with ID openjdk:8u141-jdk in the local image cache and derive from that. If you can’t find that image locally, reach out to hub.docker.com and see if it’s there.”

Building the container pulls the base image, then executes all of the commands in the Dockerfile, which produces a new container:

Sending build context to Docker daemon  220.7kB
Step 1/2 : FROM openjdk:8u141-jdk
8u141-jdk: Pulling from library/openjdk
3e17c6eae66c: Already exists 
74d44b20f851: Already exists 
a156217f3fa4: Already exists 
4a1ed13b6faa: Already exists 
77980e5d0a6d: Already exists 
5458607a81d3: Already exists 
e34cf8338f42: Already exists 
2f3d3da5c56e: Already exists 
2ade7a861e3f: Already exists 
Digest: sha256:4b0c879909b729d67d13e5004f5564df85a5f9c1c3820c13e41151edf1f1b1c0
Status: Downloaded newer image for openjdk:8u141-jdk
 ---> 74c95c985a85
Step 2/2 : ENTRYPOINT /bin/bash
 ---> Running in a4bd5943abcd
Removing intermediate container a4bd5943abcd
 ---> b1c22add1692
Successfully built b1c22add1692 # <-- IMPORTANT, this is your container ID, referenced as $CID.

We can check this new container out by running docker run -it --rm $CID which will drop you into a shell that looks something like:

docker-shell

Now, most base containers don’t have many programs installed. The JDK base containers do have Java, which is the important one.

java2

Now, we can install Maven and Gradle really quickly.

Installing the Prerequisites

I like wget for simple downloads, though curl would work just as well. We need to install Git anyway, so add:

RUN apt-get update
RUN apt-get install -y git-core wget

To your Dockerfile and build:

Sending build context to Docker daemon  220.7kB
Step 1/4 : FROM openjdk:8u141-jdk
 ---> 74c95c985a85
Step 2/4 : RUN apt-get update
 ---> Running in 2bfa3d396ac6
Get:1 http://security.debian.org stretch/updates InRelease [94.3 kB]
Ign:2 http://deb.debian.org/debian stretch InRelease
Get:3 http://deb.debian.org/debian stretch-updates InRelease [91.0 kB]
Get:4 http://deb.debian.org/debian stretch Release [118 kB]
Get:5 http://deb.debian.org/debian stretch Release.gpg [2434 B]
Get:6 http://deb.debian.org/debian stretch-updates/main amd64 Packages [12.1 kB]
Get:7 http://deb.debian.org/debian stretch/main amd64 Packages [9530 kB]
Get:8 http://security.debian.org stretch/updates/main amd64 Packages [468 kB]
Fetched 10.3 MB in 1s (5807 kB/s)
Reading package lists...
Removing intermediate container 2bfa3d396ac6
 ---> 4984ada9d0c8
Step 3/4 : RUN apt-get install -y git-core wget
 ---> Running in 6e6a79b1c1ab
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  git-core
The following packages will be upgraded:
  wget
1 upgraded, 1 newly installed, 0 to remove and 64 not upgraded.
Need to get 801 kB of archives.
After this operation, 8192 B of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 wget amd64 1.18-5+deb9u1 [800 kB]
Get:2 http://deb.debian.org/debian stretch/main amd64 git-core all 1:2.11.0-3+deb9u2 [1410 B]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 801 kB in 0s (3242 kB/s)
(Reading database ... 29522 files and directories currently installed.)
Preparing to unpack .../wget_1.18-5+deb9u1_amd64.deb ...
Unpacking wget (1.18-5+deb9u1) over (1.18-5) ...
Selecting previously unselected package git-core.
Preparing to unpack .../git-core_1%3a2.11.0-3+deb9u2_all.deb ...
Unpacking git-core (1:2.11.0-3+deb9u2) ...
Setting up wget (1.18-5+deb9u1) ...
Setting up git-core (1:2.11.0-3+deb9u2) ...
Removing intermediate container 6e6a79b1c1ab
 ---> 67169937ccdd
Step 4/4 : ENTRYPOINT ["/bin/bash"]
 ---> Running in cd4e07152d23
Removing intermediate container cd4e07152d23
 ---> 2c1ab0b17981
Successfully built 2c1ab0b17981

Now, your container will have both wget and git installed.

Setting Environment Variables

If you define an ENV variable in Docker, the value of that ENV can either be passed into the container, or you can specify a default value (or both). We want to be able to reference (and change) both the Gradle version and the Maven version so that if we want to upgrade either, we just pass in new versions when we’re building the container and voilà!

# Environment Variables
ENV PROJECT_NAME workspace
ENV GRADLE_VERSION 4.3.1
ENV MAVEN_VERSION 3.5.2
ENV BASE_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin


Now, when we run the container, we have access to those environment variables:

echo $GRADLE_VERSION
4.3.1

Install Gradle

RUN mkdir -p /opt/build/tools/gradle #Create directory for gradle
RUN wget https://services.gradle.org/distributions/gradle-$GRADLE_VERSION-bin.zip -O /opt/build/tools/gradle.zip  # Download gradle from gradle.org 
RUN unzip -d /opt/build/tools/gradle /opt/build/tools/gradle.zip # Unzip gradle 
ENV GRADLE_HOME=/opt/build/tools/gradle/gradle-$GRADLE_VERSION/bin #Export gradle location as GRADLE_HOME

Now, one really sweet thing about Docker is that each of these commands defines a new layer. If the textual value of the command that builds a layer doesn’t change, then that layer is retrieved from the layer cache upon subsequent builds. What this means is that the URL for the Gradle download could change, and that wouldn’t break our container.

Install Maven

RUN mkdir -p /opt/build/tools/maven
RUN wget http://www-eu.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.zip \
-O /opt/build/tools/maven.zip
RUN unzip -d /opt/build/tools/maven /opt/build/tools/maven.zip
ENV MAVEN_HOME=/opt/build/tools/maven/apache-maven-$MAVEN_VERSION/bin
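
Putting the pieces together, the whole Dockerfile looks roughly like this. It’s a sketch assembled from the snippets above: the extra unzip package and the PATH wiring at the end are assumptions, and the published image may differ in its details:

FROM openjdk:8u141-jdk

# build tooling; unzip is assumed here because the archive steps below need it
RUN apt-get update
RUN apt-get install -y git-core wget unzip

# versions live in ENVs so they're easy to bump
ENV PROJECT_NAME workspace
ENV GRADLE_VERSION 4.3.1
ENV MAVEN_VERSION 3.5.2
ENV BASE_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Gradle
RUN mkdir -p /opt/build/tools/gradle
RUN wget https://services.gradle.org/distributions/gradle-$GRADLE_VERSION-bin.zip -O /opt/build/tools/gradle.zip
RUN unzip -d /opt/build/tools/gradle /opt/build/tools/gradle.zip
ENV GRADLE_HOME=/opt/build/tools/gradle/gradle-$GRADLE_VERSION/bin

# Maven
RUN mkdir -p /opt/build/tools/maven
RUN wget http://www-eu.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.zip -O /opt/build/tools/maven.zip
RUN unzip -d /opt/build/tools/maven /opt/build/tools/maven.zip
ENV MAVEN_HOME=/opt/build/tools/maven/apache-maven-$MAVEN_VERSION/bin

# put both tools on the PATH (an assumption about how BASE_PATH was meant to be used)
ENV PATH=$MAVEN_HOME:$GRADLE_HOME:$BASE_PATH

ENTRYPOINT ["/bin/bash"]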

Once these have all executed, we have a container with Maven, Gradle, and Git installed and ready to use! In the next post, I’ll discuss how to store credentials securely and pass them into the container so that we can pull our project from source-control and build it.

Of course, if you don’t want to maintain your own, this base image is available on Docker Hub as sunshower/sunshower-base:latest.

Server-Sent Events with Undertow, Spring 5, and Resteasy 4

Resteasy 3.5 introduced Server-Sent Events (SSE), and there weren’t any good resources showing how to get it up and running, so I thought I’d put together a quick how-to guide.

Add your dependencies

Up your org.jboss.resteasy:resteasy-jaxrs version to 4.0.0.Beta2. This should automatically pull in a JAX-RS API version 2.1 unless you’ve specified a version of JAX-RS directly. If you have, upgrade to 2.1. This will provide access to the SSE component of JAX-RS 2.1 (SseEventSink, Sse, etc.).
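
In Gradle terms, the bump looks something like this (a sketch; use whichever build tool and configuration names your project already has):

dependencies {
    // Resteasy 4 beta brings in the JAX-RS 2.1 API, including the Sse/SseEventSink types
    compile 'org.jboss.resteasy:resteasy-jaxrs:4.0.0.Beta2'
    // only needed if you pin the JAX-RS API explicitly
    compile 'javax.ws.rs:javax.ws.rs-api:2.1'
}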

Define an SSE method


@Path("test")
@Produces({MediaType.APPLICATION_JSON})
@Consumes({MediaType.APPLICATION_JSON})
public interface TestService {

  @GET
  @Path("{id}/events")
  @Produces(MediaType.SERVER_SENT_EVENTS)
  void subscribe(@PathParam("id") String id, @Context SseEventSink sink, @Context Sse sse);// It's ok to put JAX-RS annotations on the interface.  Recommended, in fact.

  @POST
  @Path("test")
  TestEntity save(TestEntity testEntity);

  @GET
  @Path("{value}")
  @Produces({MediaType.TEXT_PLAIN})
  String call(@PathParam("value") String input);
}

Implement that biz:

  public void subscribe(String id, SseEventSink sink, Sse sse) {
    service.execute(
        new Thread(
            () -> {
              try {
                sink.send(
                    sse.newEventBuilder()
                        .name("domain-progress")
                        .data(String.class, "starting domain " + id + " ...")
                        .build());
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "50%"));
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "60%"));
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "70%"));
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "99%"));
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "Done."))
                    .thenAccept(
                        (Object obj) -> {
                          sink.close();
                        });
              } catch (final InterruptedException e) {
                e.printStackTrace();
              }
            }));
  }

Write a test-case:

 @Test
  void ensureSseWorks() throws InterruptedException {

    ResteasyWebTarget path =
        ((ResteasyWebTarget) webTarget).path(TestService.class).path("1/events");
    SseEventSource source =
        SseEventSource.target(path).reconnectingEvery(10, TimeUnit.SECONDS).build();
    try (SseEventSource s = source) {
      System.out.println("a");
      s.register(
          e -> {
            System.out.println("d");
            System.out.println(e.readData(String.class));
            System.out.println("e");
          },
              System.out::println);
      System.out.println("b");
      s.open();
      System.out.println("c");
      Thread.sleep(1000);
    }
  }

Experience failure

Failure 1

If you’ve done all that, it won’t work. The first error you’ll get is that your @Context SseEventSink is null. Unhork yourself by adding

  @Bean
  public SseEventSinkInterceptor sseEventSinkInterceptor() {
    return new SseEventSinkInterceptor();
  }

to your Spring configuration.

Failure 2

The second failure you’ll experience is on the client-side: No MessageBodyReader for “text/event-stream”. This is fixed by adding a

  @Bean
  public SseEventProvider sseEventOutputProvider() {
    return new SseEventProvider();
  }

to your Spring client configuration.

Experience success

Ahh, delicious success

a
b
c
d
starting domain 1 ...
e
d
50%
e
d
60%
e
d
70%
e
d
99%
e
d
Done.
e

Sunshower-Test

Plugging Sunshower again, you can get all this goodness by annotating your test-class with @io.sunshower.test.ws.EnableJAXRS if you have io.sunshower.test:test-ws:1.0.0-SNAPSHOT as a dependency.

Unit-Testing Complex External Dependencies

The question of how to unit-test complex external dependencies arises pretty frequently. This is near-and-dear to our hearts because we have many of them. Mocking out complex responses is tedious and error-prone, so I’ll tell you what we do instead.

1: The setup

Stratosphere has a contract for obtaining instances from a cloud service provider, viz.,


public interface ListInstancesOperation extends ProviderOperation<List<Instance>> {
      @Override
      public List<Instance> perform();

      @Override
      public Provider getProvider();

      @Override
      public Secret getSecret();
//...etc.

}

perform() will typically interact with the provider service endpoint. The body of perform is quite simple:


  public List<Instance> perform() {
    AmazonEC2 client = createClient();

    DescribeInstancesResult result = client.describeInstances();
    return toInstances(result.getReservations());
  }

This doesn’t look like a super-testable method, so what do we do?

2: Get the actual response and write it to a file

Before you write any code that interacts with an external dependency, you have to understand how that dependency behaves. I recommend keeping a set of IAM credentials in ~/.aws/credentials for testing. Once you do that, actually perform the request and see what you get back.
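
For reference, that file is just an INI-style profile; a sketch (the profile name is arbitrary, and the keys should belong to a locked-down, read-only test user):

# ~/.aws/credentials
[stratosphere-test]
aws_access_key_id     = AKIA................
aws_secret_access_key = ....................................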

response

Now, we serialize the response using Java’s default serialization mechanism to a file that we check into source-control.

DescribeInstancesResult result = client.describeInstances();
Objects.write(result, relativeToRoot("src/test/resources/ec2/list-instances.obj")); // our utilities for writing an object using serialization.

3: Mock it real good

Recall that our method under test had 3 statements: one to create the client, one to make the request to the client, and one to map the results. The client’s describeInstances() method is public, so we can mock that. We gave createClient default visibility so that we could mock it while not exposing it as part of our Operation API, and then we put all our actual logic into a private method whose behavior we want to test.

We can now set up our tests to mock out the external operation with a real result:


  private Secret secret;
  private AmazonEC2Client client;
  private EC2ListInstancesOperation operation;

  @BeforeEach
  void setUp() {
    secret = new Secret();
    operation = new EC2ListInstancesOperation(secret, "us-west-2", (AWS) AWS.getInstance());
    operation = spy(operation);

    client = mock(AmazonEC2Client.class);
    given(client.describeInstances())
        .willReturn(Objects.read("src/test/resources/ec2/list-instances.obj", true));
    given(operation.createClient()).willReturn(client);
  }


And test it as follows:

 @Test
  void ensureInstanceFirewallsAreCorrect() {
    List<Instance> perform = operation.perform();
    Instance instance = perform.get(0);
    assertThat(instance.getFirewalls().size(), is(1));
    Firewall firewall = instance.getFirewalls().iterator().next();
    assertThat(firewall.getName(), is("launch-wizard-2"));
    assertThat(firewall.getSecured().contains(instance), is(true));
  }

This approach works pretty well for external dependencies that don’t change frequently (like the public APIs of large cloud service providers), but less well for external dependencies that do. What are some approaches you use?