At Sunshower.io, we write software for people who write software. We’re pleased to announce something new to help folks scale their software: Zephyr, a next-generation plugin framework written in Java. Zephyr is an OSGi alternative — inspired by the best parts of it while dramatically reducing complexity and improving interoperability with existing frameworks and ecosystems.
Zephyr was born from our frustration with existing module systems. We started off using WildFly with embedded OSGi, but this proved inadequate for the complex dependency graphs we encountered while developing the Sunshower platform. In particular, continually copying and pasting manifests to import the dozens of packages from various frameworks was tedious and error-prone (and auto-generating them wasn't much better). It greatly increased the complexity of our builds and deployments, since we continually needed to rev released versions of modules. And that's to say nothing of the complexity of testing module interactions, or the joy of a ClassNotFoundException appearing suddenly after weeks of smooth operation because of a forgotten Import-Package declaration.
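For readers who haven't lived this: an OSGi bundle must declare, in its manifest, every package it consumes and exports, usually with version ranges. A fragment like the following (the bundle and package names here are illustrative, not from our codebase) has to be kept in sync across every module, by hand or by tooling:

```
Bundle-SymbolicName: io.sunshower.example-bundle
Bundle-Version: 1.0.0
Import-Package: com.fasterxml.jackson.databind;version="[2.9,3)",
 org.hibernate;version="[5.3,6)",
 org.slf4j;version="[1.7,2)"
Export-Package: io.sunshower.example.api;version="1.0.0"
```

Miss one entry and everything still compiles; the failure surfaces at runtime, as a ClassNotFoundException, whenever that code path first executes.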
After more than 18 months of working around framework limitations, we looked at the “kernel” that had arisen from coping with these problems and decided, “Hey, this is pretty useful. Let’s get rid of the underlying systems and just use that.” And now we’re open-sourcing it.
Small but mighty, Zephyr aggressively and automatically parallelizes management operations while running in less than 512KB of memory. It intelligently manages all aspects of plugin lifecycle, including dependency resolution. Deploying new plugins is quick and painless. And, of course, setting up plugin dependencies for tests is, well, a breeze.
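Zephyr’s actual resolver and API are its own; purely to illustrate what “dependency resolution” means for a plugin framework, here is a minimal sketch that orders plugins so each one loads after everything it depends on. Every name below is ours, invented for the example, and none of it is Zephyr’s API:

```java
import java.util.*;

// Sketch only: order plugins so every plugin loads after its dependencies.
// This is NOT Zephyr's API; it just illustrates dependency resolution.
public class PluginResolver {
    // deps maps plugin name -> names of the plugins it depends on
    public static List<String> resolve(Map<String, List<String>> deps) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>(), inProgress = new HashSet<>();
        for (String plugin : deps.keySet()) visit(plugin, deps, visited, inProgress, order);
        return order;
    }

    private static void visit(String p, Map<String, List<String>> deps,
                              Set<String> visited, Set<String> inProgress, List<String> order) {
        if (visited.contains(p)) return;
        if (!inProgress.add(p)) throw new IllegalStateException("dependency cycle at " + p);
        for (String d : deps.getOrDefault(p, List.of())) visit(d, deps, visited, inProgress, order);
        inProgress.remove(p);
        visited.add(p);
        order.add(p); // dependencies were appended first
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("web-ui", List.of("rest-api"));
        deps.put("rest-api", List.of("persistence"));
        deps.put("persistence", List.of());
        System.out.println(resolve(deps)); // [persistence, rest-api, web-ui]
    }
}
```

Once you have such an ordering, plugins whose dependencies are already satisfied can be started concurrently, which is the intuition behind parallelizing management operations.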
While we wrote it in Java, Zephyr works with whatever languages you normally use by installing language runtimes as plugins. You can have multiple frameworks running side by side, eliminating a lot of overhead associated with rewrites, scaling and transitioning architectures.
Zephyr is available on GitHub under the MIT license. Enterprise support contracts are available. Go check out the website, the docs, or the repository. We’d love to have you involved!
In case you were wondering, this isn’t just another Independence Day blog post talking about the Sunshower platform and how it will bring you freedom, blah blah blah. Rather, this is a blog post emphasizing that the ideals that led to the American Revolution, both within Great Britain and the colonies themselves, are alive and well in American startup culture.
What’s a Cloud Management Platform? (Part 2: Cloud Optimization Edition)
Two weeks ago, we talked about some of the ways that a Cloud Management Platform (CMP) helps users relieve the headaches associated with DIY cloud resource management. This week, we’ll look at a few more compelling reasons to use a Cloud Management Platform like Sunshower.io for your cloud optimization and cloud resource management.
BizWest: Sunshower.io Now Tracks Cloud Usage on Amazon
By BizWest Staff — June 19, 2019
FORT COLLINS — Sunshower.io, a Fort Collins startup that helps companies optimize cloud computing, has just launched its Amazon Web Services EC2 cloud management platform. Soon, it will bring similar cloud management tools out of beta to serve customers using Microsoft Azure and the Google Cloud Platform.
What’s a Cloud Management Platform? (And Why Do You Need One?) Part 1 of 2
Our official tagline at Sunshower.io is “beautifully simple cloud management and optimization.” But why do you need a Cloud Management Platform like Sunshower.io? When you work with a Cloud Service Provider (CSP) like AWS or Azure, doesn’t the CSP do the cloud optimization for you? Isn’t it the CSP’s job to make sure that what you’re running in the cloud is rightsized, that your applications are easy to view and manage, and that you’re getting the best possible value for your money? That’s what you’re paying them for, right?
We pushed out another update to the Sunshower platform yesterday.
On the Back End
We upgraded from Java 9 to Java 12 and from WildFly 14 to 16. As part of this, we also moved our L2 cache from Ignite to Infinispan. (We love Ignite, and are still using it for other things, but there was a memory leak in the version of Hibernate we were using, and Ignite was preventing us from upgrading).
On the Front End
When you come into your system and we don’t have your optimizations ready to go, you’ll now see a big refresh button. You can also rerun the optimization at any time by hitting the refresh button in the upper right:
The optimization summary now lists instances by current machine price, descending. It is currently paged, so users with more than 12 instances can expect to see some new navigation:
You’ll also see a little green trophy by any instances that are fully optimized. Good luck, and get optimizing!
The next release will contain enhancements related to regions: support for limiting optimization recommendations by region, as well as the ability to group and view infrastructure by region. As always, please let us know any feature requests.
Corey makes the argument that upgrading an m3.2xlarge to an m5.2xlarge for a savings of 28% is the correct course of action. We have a user with more than 30 m3.2xlarge instances whose CPU utilization is typically in the low single digits, but which spikes to 60+% periodically. In our experience, workloads rarely crash because of insufficient CPU; they do, however, frequently crash because of insufficient memory. In this case, their memory utilization has never exceeded 50%.
Our optimizations, which account for this and other utilization requirements, indicate that the “best fit” for their workload is in fact an r5.large, which saves them ~75%. In this case, for their region, the calculation is:
The difference is approximately $8,891.40/month.
Now, these assume on-demand instances, and reserved instances can save you a substantial amount (29% in this case at $0.380 per instance/hour), but you’re locked in for at least a year and you’re still overpaying by 320%.
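The on-demand arithmetic can be reproduced directly. The hourly rates below are our assumption (≈$0.532/hr for an m3.2xlarge and ≈$0.126/hr for an r5.large, Linux on-demand), chosen because they reproduce the monthly figure stated above; check your own region’s pricing before relying on them:

```java
public class SavingsMath {
    public static void main(String[] args) {
        double m3Rate = 0.532;      // m3.2xlarge on-demand, $/hr (assumed rate)
        double r5Rate = 0.126;      // r5.large on-demand, $/hr (assumed rate)
        int instances = 30;
        double hoursPerMonth = 730; // AWS's conventional hours-per-month figure

        double monthlyDiff = (m3Rate - r5Rate) * instances * hoursPerMonth;
        double pctSaved = 100 * (m3Rate - r5Rate) / m3Rate;
        System.out.printf("monthly difference: $%.2f (%.0f%% saved)%n", monthlyDiff, pctSaved);
        // prints: monthly difference: $8891.40 (76% saved)
    }
}
```

The same structure works for the reserved-instance comparison: swap in the RI rate and the one-year commitment falls out of the per-hour delta.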
“An ‘awful lot of workloads are legacy’ -> Legacy workloads can’t be migrated”
So, this one’s a little harder to tackle because “an awful lot” doesn’t correspond to a proportion, but let’s assume it means “100%” just to show how wrong this is, even on the points he adduces:
If you’ve heard of cloud computing at all, you’ve heard of Amazon Web Services (AWS), Microsoft Azure and Google Cloud. Between the three of them, they’ll be raking in over $50 billion in 2019. If you’re on the cloud, chances are good you’re using at least one of them.
The latest RightScale State of the Cloud Report pegs AWS adoption at 61%, Azure at 52% and Google Cloud at 19% (see the purple above). What’s more, almost all respondents (as denoted in blue) were experimenting with or planned to use one of the top three clouds. Which, if you math that up, means that 84% of respondents are going to be using AWS at some point, 77% will be using Azure and 55% will be using Google Cloud.
Multi-cloud strategies are definitively A Thing, contrary to some folks’ opinions and the overwhelming one-cloud-to-rule-them-all desire of AWS. So it’s worth comparing them. On a broad level, AWS rocks and rolls with capabilities set to lock you into their cloud, while Azure’s great for enterprises and Google Cloud’s your go-to if you want to do AI. But, as with all things, there’s more to it than that, and it’s not just where you can get the best cloud credit deals.
You wouldn’t think that the primary issue with optimizing cloud computing workloads would be getting good data. Figuring out math problems (hello, integer-constrained programming) worthy of a dissertation, sure. Writing a distributed virtual machine, maybe. Getting good data about a workload to run against good data about what the viable machines to put it on are? Not so much.
Well, you would be wrong. While the majority of the IP is in said math problems, the majority of the WORK is in the data — getting it and cleaning it up. And the data problem alone is enough to make you realize why everyone just picks an instance size and rolls with it until it doesn’t work anymore.
Last week we started the work to expand our platform from AWS-only to Azure. One of the first steps toward that is what we call a “catalog”: a listing of all the possible virtual machine sizes across all possible regions with all of their pricing information (because, of course, pricing and availability vary). You would hope that this sort of catalog would be readily accessible from a cloud service provider (CSP). At the moment, the state of the art is many open-source contributors working together to scrape the documentation of the different CSPs.
For AWS, we love ec2instances.info for this information, though we still had to get all of the region information in less savory ways. Different folks have attempted to do similar things for Azure, but Azure doesn’t make it easy. Pricing is different across Linux and Windows, because of course it is, but the information they give you when trying to look at pricing is missing some bits:
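Concretely, the catalog boils down to rows like the following. This shape is our own sketch (the field names and the Azure price shown are illustrative, not any CSP’s API), and it uses a Java 16+ record for brevity:

```java
import java.util.List;

// Illustrative catalog row: one VM size, in one region, on one OS, at one price.
// Field names and the Azure price are our own sketch, not any CSP's API.
public class Catalog {
    public record Entry(String provider, String region, String instanceType,
                        String os, int vcpus, double memoryGiB, double pricePerHour) {}

    public static void main(String[] args) {
        List<Entry> catalog = List.of(
            new Entry("aws",   "us-east-1", "r5.large",        "linux",   2, 16.0, 0.126),
            new Entry("azure", "eastus",    "Standard_E2s_v3", "windows", 2, 16.0, 0.218));

        // The pain: Windows and Linux price separately, and availability varies by
        // region, so every (provider, region, type, os) tuple has to be scraped
        // and normalized before any optimization math can run against it.
        catalog.forEach(e -> System.out.println(e.instanceType() + " @ $" + e.pricePerHour() + "/hr"));
    }
}
```

Every blank cell in a CSP’s pricing page is a hole in one of these rows, which is why assembling the catalog is most of the work.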