The Big Three: Comparing AWS, Azure and Google Cloud for Computing

If you’ve heard of cloud computing at all, you’ve heard of Amazon Web Services (AWS), Microsoft Azure and Google Cloud. Between the three of them, they’ll be raking in over $50 billion in 2019. If you’re on the cloud, chances are good you’re using at least one of them.

The latest RightScale State of the Cloud Report pegs AWS adoption at 61%, Azure at 52% and Google Cloud at 19% (see the purple above). What’s more, almost all respondents (as denoted in blue) were experimenting with or planned to use one of the top three clouds. Which, if you math that up, means that 84% of respondents are going to be using AWS at some point, 77% will be using Azure and 55% will be using Google Cloud.

AWS, Azure & GCP market share

Multi-cloud strategies are definitively A Thing, contrary to some folks’ opinions and the overwhelming one-cloud-to-rule-them-all desire of AWS. So it’s worth comparing them. On a broad level, AWS rocks and rolls with capabilities set to lock you into their cloud, while Azure’s great for enterprises and Google Cloud’s your go-to if you want to do AI. But, as with all things, there’s more to it than that, and it’s not just where you can get the best cloud credit deals.

Everything is a Data Problem

You wouldn’t think that the primary issue with optimizing cloud computing workloads would be getting good data. Figuring out math problems (hello, integer-constrained programming) worthy of a dissertation, sure. Writing a distributed virtual machine, maybe. Getting good data about a workload to run against good data about what the viable machines to put it on are? Not so much.

Well, you would be wrong. While the majority of the IP is in said math problems, the majority of the WORK is in the data — getting it and cleaning it up. And the data problem alone is enough to make you realize why everyone just picks an instance size and rolls with it until it doesn’t work anymore.

Last week we started the work to expand our platform from AWS-only to Azure. One of the first steps to that is what we call a “catalog”: a listing of all the possible virtual machine sizes across all possible regions with all of their pricing information (because, of course, pricing and availability vary). You would hope that this sort of catalog would be readily accessible from a cloud service provider (CSP). At the moment, the state-of-the-art is the work of many open-source contributors working together to scrape different CSP sets of documentation.
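For the curious, here's roughly the shape a catalog entry takes. This is an illustrative Python sketch (the field names and the B-series prices are stand-ins, not our actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    """One purchasable VM shape in one region, with normalized pricing."""
    provider: str     # e.g. "azure"
    region: str       # e.g. "eastus"
    name: str         # e.g. "B1s"
    vcpus: int
    ram_gib: float
    os: str           # pricing differs across Linux and Windows
    hourly_usd: float

# The full catalog is then just a lookup keyed by (region, name, os):
catalog = {
    ("eastus", "B1s", "linux"): CatalogEntry("azure", "eastus", "B1s", 1, 1.0, "linux", 0.0104),
    ("eastus", "B2s", "linux"): CatalogEntry("azure", "eastus", "B2s", 2, 4.0, "linux", 0.0416),
}

def price(region: str, name: str, os: str) -> float:
    """Hourly price for a given shape, or KeyError if it's not sold there."""
    return catalog[(region, name, os)].hourly_usd
```

The hard part isn't the data model; it's filling in every (region, size, OS) combination when no provider hands you the whole thing at once.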

For AWS, we love ec2instances.info for this information, though we still had to get all of the region information in less savory ways. Different folks have attempted to do similar things for Azure, but Azure doesn’t make it easy. Pricing is different across Linux and Windows, because of course it is, but the information they give you when trying to look at pricing is missing some bits:

Screenshot comparing B-Series instances on Azure

That’s right, you get vCPU, RAM and storage. No notion of IOPS or networking, which might be enough for some folks, but we think you deserve better. But hey, maybe we’ll add the B1S to our estimate and see what that looks like?

Azure B1-Series estimator screenshot

I mean, I guess that’s better? Per some definition of those words? The hidden $50 for storage transaction units makes me want to die a little, though. Pretty impressive how less than two cents an hour can balloon so fast, isn’t it? Notably, we’re still not getting information beyond vCPU, RAM and disk space.

So, how do we get that? We go spelunking through the Azure docs. Again, the docs get split into Linux vs Windows, though as far as I’ve been able to tell thus far, the two are wholly the same. Digging into them for the B-series, we finally start to get something meaty!

B-series screenshot from Azure docs

Behold! IOPS. No information on networking, though, beyond the number of NICs (network interface controllers). Well, that’s a bummer. Is that the case for all of the machines? Ha, no — just scroll down to the D-Series.

D-series screenshot from Microsoft Azure docs

This is where we run into trouble for our page-scraping heroes. Microsoft Azure sort-of-kind-of provides the same information about each of its instance families, but not universally, leaving you to extrapolate expected network bandwidth and so much more.
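If you do end up spelunking the docs yourself, the size tables are at least plain HTML, so a small scraper gets you surprisingly far. Here's a rough sketch using only the standard library; the table below is a trimmed, hypothetical stand-in for a real docs page, which would need far more defensive parsing:

```python
from html.parser import HTMLParser

class TableRows(HTMLParser):
    """Collects the text of each <td>/<th> cell, grouped by <tr>."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")
    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False
    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

# Trimmed, made-up stand-in for a B-series size table:
html = """
<table>
  <tr><th>Size</th><th>vCPU</th><th>Memory: GiB</th><th>Max IOPS</th></tr>
  <tr><td>Standard_B1s</td><td>1</td><td>1</td><td>320</td></tr>
  <tr><td>Standard_B2s</td><td>2</td><td>4</td><td>1280</td></tr>
</table>
"""

parser = TableRows()
parser.feed(html)
header, *body = parser.rows
# Map each size name to its spec columns:
sizes = {row[0]: dict(zip(header[1:], row[1:])) for row in body}
```

The catch, as noted above, is that the columns themselves differ from family to family, so every table needs its own header mapping.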

All that being what it is, we’d like to introduce you to our Azure catalog and the tools to generate it, and to encourage you to fill in any information you can fill in. And soon we’ll be introducing you to our Azure offering itself … after we fix the problem of getting data from Azure Monitor, of course. 😂

Perfectly Provisioned: 22 Random Things That Fit Perfectly Into Each Other

When two random things fit together perfectly, it creates a special kind of magic — like stumbling across a way to bring order to the chaos of everyday life. Maybe that’s what makes these 22 photos so satisfying?

1) Cat crammed into a box

2) A pill and a ruler

Before You Buy a Reserved Instance, Read This

Reserved Instances are an enormous investment.

At first glance, that statement might seem counter-intuitive. Reserved Instances (RIs) are widely advertised as the best way to save big on your Amazon Web Services (AWS) cloud compute bill. And in many cases, they are. With Reserved Instances, companies commit to long-term usage by agreeing to rent virtual machines for a set amount of time (typically 1 to 3 years) in exchange for a significantly lower rate than on-demand pricing. When viewed through this lens, they appear to be a vital part of an AWS cost management strategy.

Cost Savings

Take Amazon EC2 as an example. When compared to on-demand pricing, Amazon EC2 RIs offer customers potentially deep discounts — sometimes as much as 75%, per their marketing. While reserving cloud capacity in advance seems like the smart thing to do because it has the potential to deliver a significant amount of savings, the savings promised by RIs often have a dangerous downside — and any missteps can have substantial costs for your company.
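To make the headline math concrete, here's a quick sketch with made-up numbers (the rates are illustrative, not actual AWS pricing):

```python
HOURS_PER_YEAR = 8760

on_demand_hourly = 0.192   # hypothetical on-demand rate for some instance
ri_upfront = 1008.0        # hypothetical 1-year, all-upfront RI price

# Spread the upfront payment across every hour in the year:
ri_effective_hourly = ri_upfront / HOURS_PER_YEAR
savings_pct = 100 * (1 - ri_effective_hourly / on_demand_hourly)
# With these numbers, roughly a 40% discount vs. on-demand.
```

That discount is real, but it assumes the instance runs (and is needed) all 8,760 hours, which is exactly where the downside creeps in.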

The calculations involved in deciding which RI to purchase can be frustratingly complicated. One-year or three-year contract? What about tenancy? Instance size? Region and zone? New or from the marketplace? And don’t forget about the nuance of offering class — do you want your RI standard, convertible or scheduled?

These calculations are difficult, but absolutely vital when committing to an RI. Rather than signing a contract for exactly what you have now (in terms of size, region, and tenancy) and guessing at a contract length, it’s essential to understand the exact shape of your usage needs. Without that kind of granular insight into your workload, it’s impossible to choose an RI that will be the right fit six months from now, let alone three years in the future.

In the end, many companies buy RI capacity that ends up exceeding their actual needs, because they’re already using capacity that exceeds their needs. Unfortunately, committing to more capacity than you actually need can be very costly over the length of an RI contract. When that happens, the long-term return on investment (ROI) ultimately evaporates.
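Here's why the ROI evaporates, sketched with hypothetical rates: you pay the RI rate whether or not the instance runs, so below a breakeven utilization the "discount" turns into a loss:

```python
def effective_savings_pct(od_hourly: float, ri_hourly: float, utilization: float) -> float:
    """Savings vs. paying on-demand for only the hours you actually used.
    utilization: fraction of the committed hours the instance really ran."""
    on_demand_cost = od_hourly * utilization  # you'd only pay for used hours
    ri_cost = ri_hourly                       # the commitment is paid regardless
    return 100 * (1 - ri_cost / on_demand_cost)

# Hypothetical rates: a 40% discount at full utilization...
full = effective_savings_pct(0.20, 0.12, 1.0)   # 40% savings
# ...but at 50% utilization the "discount" is already a 20% loss:
half = effective_savings_pct(0.20, 0.12, 0.5)
# Below this utilization, the RI costs more than on-demand would have:
breakeven = 0.12 / 0.20
```

With these made-up rates, anything under 60% utilization of the commitment loses money outright, and everything between 60% and 100% quietly erodes the advertised savings.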

In addition to the challenges presented by accurately forecasting usage needs, RI contracts also lock companies into an instance’s older technology as innovative (and often less expensive) new instances speed past on their way to market. This is why long-term RI contracts can be especially devastating to startup companies — when your revenue or funding starts to dry up, this is one area of your burn rate that absolutely cannot shrink because you’re legally obligated by your RI contract to pay.

Convertible RIs don’t fix the problem, either. Even though they’re less restrictive and more flexible than standard RIs, you can’t sell them on the marketplace. Worse, any exchange you make on a Convertible RI has to be of equal or greater value, meaning you can trade sideways or up but never scale your commitment down if your needs shrink over time. In the end, convertible doesn’t mean the same thing as flexible. Your hands are still tied when it comes to upgrading, scaling, and selling.

Rightsizing

Businesses looking to save money on their cloud bill should absolutely make themselves familiar with the cloud cost management opportunities available through RIs — they’re the most significant cost-saving tool that AWS provides. That said, it’s important to rightsize resources before committing to an RI in order to meet capacity needs at the lowest possible cost.

The first step in rightsizing is monitoring and analyzing your current use of services to gain insight into instance performance and usage patterns. Common rightsizing metrics include vCPU utilization, memory utilization, network utilization and ephemeral disk use. We recommend that you monitor performance over a two-week to one-month period to capture the workload and business peak, but that also depends on the seasonality of your business.
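To make that concrete, here's one common way to boil a monitoring window down to a sizing target: size to a high percentile rather than the mean, so short spikes aren't averaged away. (This is an illustrative technique, not a description of any particular vendor's method.)

```python
def percentile(samples, p):
    """Nearest-rank percentile; good enough for sizing sketches."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

# Faked per-minute CPU% samples: mostly idle, with occasional spikes.
samples = [5] * 900 + [20] * 80 + [85] * 20

mean_cpu = sum(samples) / len(samples)  # ~7.8%: looks tiny, misleading
p95_cpu = percentile(samples, 95)       # 20%: a saner sizing target
p99_cpu = percentile(samples, 99)       # 85%: the spikes you must not clip
```

The gap between the mean and the tail is exactly why "eyeballing the average" leads to machines that either thrash at peak or sit 90% idle.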

Amazon EC2 offers a wide selection of instance types and sizes, and it generates comprehensive usage data, giving customers the flexibility to meet capacity needs and rightsize instances to the technical requirements of their specific workloads. Even so, we only recommend taking the plunge and purchasing RIs if you know what your workloads will look like for the next year or three. AWS-recommended guidelines, too, are often more conservative (i.e., less optimized) than what you might find by using a company whose sole mission is to uncover cost-saving optimization strategies (like Sunshower.io!).

Conclusions

While RIs look good in theory, consider how much savings you could be missing out on by trying to forecast your usage needs across a span of 1-3 years. And even if you’re a wizard at forecasting, if Amazon comes out with a new lower-cost instance type, you’re stuck with what you have until your contract expires. (Or are forced to sell your instances on the marketplace for a fraction of what you paid.)

Unused reserved instances only waste your money. The cloud is aimed at helping keep infrastructure costs down, but only if cloud services are used in a smart manner. Fortunately, Sunshower.io can help with that!

Using the strengths of our optimizer, we offer suggestions on the best-fitting instance (or instances) for your different workloads. Sunshower.io’s algorithms can increase your cloud compute savings by analyzing your usage data and generating a plan for cloud cost optimization that leverages all of the available pricing options and instance types. As a result, you never have to worry about the complicated process of buying and managing RIs. Provided you’re not already locked into a long-term RI contract, there’s significant savings to be had. Agility is the key, and although RIs offer deep discounts, it’s at the expense of the flexibility that you need in order to be truly optimized — not just today, but every day. Optimization is an ongoing, dynamic process that requires consistent monitoring and management.

Leave that to us.

How We Optimize Based on Resource Utilization Data

We frequently get asked what makes our AWS cost optimization so good. AWS cost management feels like it should be easy, and we talk to a lot of folks who think they’ve done a good job of it. The fact is, we’ve yet to see anyone who’s not wasting at least 40% of their EC2 bill. Let’s walk through it on our platform, and it’ll make sense why.

screenshot of a virtual machine report within the Sunshower platform

Fitting an Instance

It all starts with knowing what you’re actually using, resource-wise. Figuring this out as a human is surprisingly hard. For Sunshower, we look at the past month of a virtual machine’s life (if we have it — that’s our default) and sample every minute (by default, but it’s adjustable). After smoothing the data, that’s how we discover that, in this case, only 1 CPU (of the 8 they’re paying for) and 10 GB of RAM (of the 30 GB they’re paying for) are actually being used.
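For a flavor of what smoothing buys you, here's a minimal sketch using a trailing moving average (an illustrative choice, not necessarily the exact smoothing we apply):

```python
def moving_average(samples, window=15):
    """Smooth per-minute samples with a trailing window average."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A single one-minute spike in otherwise flat CPU% data:
raw = [10.0] * 30 + [90.0] + [10.0] * 29
smooth = moving_average(raw)

peak_raw = max(raw)        # 90.0: one noisy sample would force a huge fit
peak_smooth = max(smooth)  # ~15.3: after smoothing, the blip barely registers
```

Without a step like this, a single monitoring blip would dictate the size of the machine you pay for all month.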

In the screenshots below, you can see the resulting “shape” of the workload on the virtual machine. First, on the left: current vs utilized. The grey is what they’re currently paying for, and the purple is what they’re utilizing. Frankly, it LOOKS like a pretty good fit.

To compare, let’s look at the screenshot on the right: optimized vs utilized. There’s our purple triangle of utilization again. This time, you’ll see the optimized fit we found in blue. Even though the blue section looks a lot bigger, it actually reflects a substantial cost savings over the original, grey fit on the left.

resource utilization compared to purchased virtual machine

How is that possible? The thing you’re really paying for, in most machines, is CPU and Memory. So, the closer a fit you can get on those, the better. In the image on the left, you can see that the majority of the overprovisioning is taking place in the most expensive areas of cloud spend: CPU and memory. Tightening that fit up in CPU and memory, like you see represented in blue in the image on the right, might look like an incremental change from the image on the left, but in reality it adds up.

But, if the image on the right reflects cost savings and better optimization, why does the optimized fit in blue look so much bigger? The newer generations of AWS machines have 5 Gbps networking out of the box now, meaning newer-generation machines get you a lot more compute power for a lot less money.

Getting the Optimization

So, who’s this blue optimized beauty we recommended? It’s that r5.large that we show in the middle select box. We find the instance size that will get you the greatest cost-savings by default, but also give you our top ten recommendations.

screenshot of the top 10 instance size recommendations

One of the things we always find remarkable is how big a difference there is even within our cloud optimization results. In this case, our top recommendation is $61.50 a month cheaper than our bottom recommendation. (We do month calculations based on AWS’s 750 hours.)
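For the curious, the month math is just the hourly delta times 750:

```python
HOURS_PER_MONTH = 750  # AWS's convention for a month of usage

def monthly(hourly_usd: float) -> float:
    """Convert an hourly rate to AWS's 750-hour month."""
    return hourly_usd * HOURS_PER_MONTH

# A $61.50/month gap between recommendations works out to
# about 8.2 cents per hour:
gap_hourly = 61.50 / HOURS_PER_MONTH
```

Tiny hourly deltas are exactly why per-month comparisons matter: pennies per hour are invisible until you multiply them out.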

When the machines start to add up, you can see how the savings (or the waste) also start to add up. It can be painful to watch. That’s why at Sunshower.io, we demystify cloud computing by giving you tools to easily manage the entire lifecycle of your cloud infrastructure. We automate all the decisions and present you with the best options across different vendors and clouds so you can be sure that you’ve achieved total cloud cost optimization. We want to help companies clear away cloud confusion, and empower them to create the most efficient cloud management system possible.

4 Strategies for Cloud Cost Optimization

We’ve talked about the most common causes of cloud waste, and how they can negatively impact your company’s bottom line.

Whether it’s choosing the wrong instance size, not fully understanding cloud pricing options, leaving unused resources running, or locking yourself into inflexible reserved instance contracts, there are lots of ways to end up with a cloud bill that wreaks havoc on your financials.

What can you do to keep cloud costs down and reduce your part of the $14.1 billion that will be wasted on cloud compute resources in 2019? You can avoid cloud waste by adopting smart cloud cost optimization strategies. Here are a few good places to start.

1. Don’t Over-Provision Your Cloud Infrastructure

Remember when there was that great deal on strawberries so you bought a bunch because you thought you’d surely eat them? And then you never did? Just like it’s hard to figure out what you’re really going to eat in a week, it’s hard to figure out what resources your software really needs in order to run. You can use a monitoring solution to determine what your resource utilization actually looks like in production, determining how much of critical resources like memory, CPU, disk, networking and more you’re using. Then, it’s a matter of aligning that with an instance size (which unfortunately sometimes feels more like blindly buying strawberries in bulk than reaching for a pre-packaged pint, considering the sheer number of options per cloud service provider).

2. Turn Off Idle Cloud Infrastructure

The main cause of idle capacity is leaving non-production machines up and running 24/7. Consider spinning down build, QA, demo and development environments during off hours. You can schedule them to turn off when your night owls leave and turn back on before the early birds come in. On the production side, use auto-scaling groups to help meet peak demand times. And of course, be vigilant that as people and products come and go you’re monitoring what systems are actually being used.
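The check itself can be dead simple. Here's a sketch of the kind of off-hours test a scheduler might run every few minutes (the hours, days, and the stop/start wiring are assumptions you'd tune to your own team):

```python
from datetime import datetime

WORK_START, WORK_END = 7, 20  # keep machines up 7:00-20:00 local time
WORKDAYS = {0, 1, 2, 3, 4}    # Monday through Friday

def should_be_running(now: datetime) -> bool:
    """True if a non-production instance should be up right now."""
    return now.weekday() in WORKDAYS and WORK_START <= now.hour < WORK_END

# A scheduler (cron job, Lambda, etc.) would then act when the answer flips:
# if not should_be_running(datetime.now()): stop_instance(...)  # hypothetical call
```

Thirteen hours a day, five days a week, is about 39% of the week, which is where the "off roughly 75% of the time" class of savings comes from.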

3. Demystify Cloud Pricing

In order to calculate cloud costs, it often feels like you have to learn a different language. Different cloud service providers use different terminology for the exact same things (for example, “instances” vs “virtual machines”). If you’re not well-versed in the lexicon used by different providers, it’s nearly impossible to compare costs. What AWS calls “on-demand instances” Azure calls “pay as you go.” On top of that, there are so many different options to choose from (memory optimized? Disk optimized? Burstable performance instances across multiple generations?), it’s enough to make anyone want to pull their hair out.

Oh, and did we mention most price by the hour, whatever that even means?

If pricing is so confusing, why don’t providers simplify things for their customers? Because when pricing is confusing, it’s easier for the customer to go with vendor-recommended guidelines so they don’t have to deal with complicated decision making. While it’s in the best interest of providers to keep their customers happy, vendors will often upsell to make sure the customer is covered “just in case.” That often means you’re paying for something you really don’t need.

If keeping cloud costs down is a priority, then consider keeping provisioning decisions in your own hands, not the vendor’s. Be patient. Learn the language. Don’t let frustration lead you down the path of least resistance — it’s expensive down there!

4. Understand How to Purchase Instance Resources

You’re going to need to purchase instance resources, which involves a bit of planning and research. Three choices you’re faced with are:

On-demand instances. Arguably the most popular type of instance, because it’s the most flexible. Here, you’re charged a fixed hourly rate with no contract or commitment.

Spot instances. A bit more complicated—Spot instances let you bid for instance capacity, naming the price you’re willing to pay. If it doesn’t matter when your application is running, and it’s not a problem for the application to stop running if the bid price isn’t available, this instance type can save you quite a bit of money.

Reserved instances. Essentially, you’re making a reservation for the future capacity you plan to use. Here, you save money because you’re committing in advance to a long-term instance purchase.

There’s a reason most folks choose on-demand instances — they work for most workloads. Spot instances, for many, require re-architecting to make work, so you pay for development time instead of compute time. On the other hand, while reserved instances CAN save you a lot of money, whenever you agree to a contract, you’re restricting your flexibility. If you buy capacity today, what happens if you don’t need as much tomorrow? Or, if a competitor comes out with a better-priced option? Less flexibility ties your hands and prevents you from optimizing until your contract runs out. That can be costly in the long run.

Still Feeling Overwhelmed?

We get it. It’s a lot.

With all of these complicated concerns to tackle, it’s no wonder that over 50% of RightScale’s State of the Cloud respondents said that cloud cost optimization was their top concern two years in a row. And while the consequences surrounding the rotten strawberries in your refrigerator are negligible, the ramifications for your company when it comes to cloud waste are much more dangerous. The money wasted on unused, idle, or over-provisioned resources is money your company could be putting toward its own growth and development.

How Sunshower.io Can Help

Building and managing cloud infrastructure can be extremely complicated and time consuming, especially when you don’t have a lot of experience on the cloud, or deep pockets to hire a team to handle your cloud management. This gives large companies a huge advantage over the little guys, and can leave folks without a strong tech background feeling behind the curve.

Sunshower.io aims to change all that.

Helping de-mystify these decisions is literally the reason Sunshower.io exists. We’ve experienced first-hand how challenging it is to migrate, manage, and monitor cloud resources. As a result, Sunshower.io acts as an equalizing tool that makes everyone an instant cloud expert. We see a future where companies can build their own cloud infrastructure more easily and efficiently, and feel confident that everything running on the cloud is fully optimized. (Best of all, our algorithms cut AWS EC2 cloud bills by 40-80%.)

At Sunshower.io, we give you a cloud management platform with tools to easily control the entire lifecycle of your cloud infrastructure. There’s no learning different systems, different pricing structures, or different terminology across cloud service providers. We automate all the decisions and present you with the best options across different vendors and clouds so you can be sure that you’ve achieved total cloud optimization. We want to help companies clear away cloud confusion, and empower them to create the most efficient cloud management system possible.

Cloud Waste is Costing You More Than You Think

Cloud over-provisioning is a lot like buying strawberries in bulk.

I know that sounds weird, but hear me out:

I once bought a huge tub of strawberries at Costco. Sure, there were way more strawberries than I could probably eat, but they were such a great deal — just a little more expensive than a small box of berries at the grocery store. The lure of the deal was strong, so I caved.

What happened next? I ate a few berries out of the tub when I got home, promptly put them in the crisper drawer, and completely forgot about them. The next time I opened the drawer it was like a scene out of Avatar — a mysterious new greenish-blue ecosystem, composed of fuzzy, strawberry-shaped blobs.

Bulk strawberries sound like a great deal, but only if you’re going to, you know, actually eat all those strawberries.

We all pay for stuff we don’t end up using. Whether it’s those strawberries slowly creating a new ecosystem in the fridge or your company’s public cloud infrastructure, that waste can seriously add up. The moral of the story? It’s never a good deal if you don’t use what you’re paying for.

Cloud Waste

As cloud computing continues growing in popularity, more and more companies are turning to the cloud for their computing needs. But if you’re spending any money on a cloud service provider, it’s pretty darn likely that some of those funds are being lost on overprovisioned or underutilized cloud infrastructure. (Think: the proverbial untouched, moldy strawberries in the fridge.) It’s human nature to buy more than you need “just in case,” and this applies to cloud infrastructure too.

Cloud waste is an extremely common problem. (When it comes to the challenges of AWS cost optimization, cloud waste is so common that if you run your AWS infrastructure through our optimizer and DON’T have waste, Sunshower.io will pay you instead of the other way around.) Cloud waste can easily happen by:

1) Not choosing an appropriate instance size. It’s not always easy to tell what resources your software will require to run under production loads, but going big so you’re “better safe than sorry” can be hugely expensive in the long run.

2) Not fully understanding cloud pricing options. There are literally hundreds of thousands of options across leading cloud providers, and making the right choice can be impossible.

3) Leaving unused resources running. Not all resources need to be running 24×7, but it’s hard to know what to shut down when.

4) Buying reserved instances. Reserved instances are a great deal on paper, but if you’re not careful they can lock you into a long-term contract that reduces your flexibility and doesn’t scale along with your needs.

With the growth of cloud adoption, cloud waste is a very real hazard. For many organizations, the extent of misspent cloud money can be shocking. So, what kind of money are we really talking about?

Cloud Spending

Spending on infrastructure as a service is forecasted to hit $39.5 billion in 2019, according to Gartner. In reality, this is probably an underestimation to the tune of a few billion, given that Amazon Web Services’ and Microsoft Azure’s revenues were both over $20 billion last year. More organizations are moving to the cloud or starting on the cloud every day. The big three CSPs — AWS, Azure, and Google Cloud — all have roughly the same offerings, but despite the increasing commoditization of the cloud, it’s still surprisingly easy for a cloud bill to balloon from $500 a month to $5,000 a month to $25,000 a month, even as a small company.

In terms of how those numbers connect to one cause of cloud waste, think of it this way:

Resources that aren’t being used can typically be shut down, which means non-production cloud compute resources can be off roughly 75% of the time (assuming a 40-hour work week). Let’s say you’re paying a CSP $200 a month per instance, and are running that instance 24/7. Now you’re overpaying by $150 per instance per month. Sure, that might not sound like a big deal if you only have one instance running. But if you’re running 20 instances? That’s $3,000 a month wasted. All on resources you weren’t even using. Ouch.
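The arithmetic behind those numbers, spelled out:

```python
HOURS_PER_WEEK = 168
work_hours = 40

# A 40-hour work week means the instance is idle ~76% of the time;
# call it 75% for round numbers, as above.
idle_fraction = 1 - work_hours / HOURS_PER_WEEK

monthly_cost = 200.0
wasted_per_instance = monthly_cost * 0.75  # $150/month per instance
wasted_fleet = wasted_per_instance * 20    # $3,000/month across 20 instances
```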

Running unused instances is just one example of the many ways that cloud waste can affect your bottom line. According to RightScale’s latest State of the Cloud report, most companies think they’re wasting 27% of their overall cloud spend. RightScale assesses the actual number to be closer to 35% lost to underutilization.

Between idle and underutilized cloud compute resources, it’s no surprise that the math works out to a projected $14.1 billion wasted on cloud compute resources in 2019.

Conclusion

The good news (or bad news, if you’re a cloud service provider), is that a lot of this cloud spend can be easily reduced with the right cloud management and cloud optimization tools. At Sunshower.io, for example, we’ve found that we can save companies 40 to 80% on their cloud compute bills. (Our cloud optimization engine sifts through your metric utilization data, finding the perfect-priced cloud, the right instance sizes, and the best solution for provisioning around downtime.)

The bottom line is that the cloud is complicated. Understanding the causes, impact, and solutions for cloud waste can help make the decision-making process so much easier. Interested in learning more? Discover how Sunshower.io can demystify cloud computing and manage the entire lifecycle of your cloud infrastructure to help you focus on what matters most — your business.

3 Reasons Your Cloud Bill is So High

At Sunshower.io, we talk to a lot of people about their cloud infrastructure usage. In our professional lives, we’ve dealt with the confusion caused by different cloud vendors, including confounding billing methods, lack of insight into the infrastructure you’ve built, and just throwing hardware and money at the current problem and hoping it’ll fix it. Understandably, the question we’re most frequently asked is the one that’s most mission-critical: How did my cloud bill get like this and how do I get it down?

1) You Forgot About Some Infrastructure

“Cloud sprawl” is extremely common, and happens when you’re running more cloud instances than necessary. It’s easy to see how this can happen — forgotten workloads, along with unused and idle ones, are the key culprits. In a complex cloud ecosystem, it can be tough to keep watch over everything running in the cloud. Monitoring and controlling those workloads is key to making sure you’re not over-spending on the cloud. If your company isn’t using auto-scaling, you might be running instances 24/7 that aren’t always performing a necessary function. Running instances that you’re not using is essentially throwing money away — like going away for the weekend and leaving all of your lights on.

2) You Bought Too Much “Just In Case”

Overprovisioning refers to buying more cloud compute resources than you typically need. It’s important to tailor what you buy to actual usage, because the excess really adds up. The first step is figuring out what you’re actually using; without good cloud monitoring tools, it’s impossible to see what you’re wasting. Only then should you start looking into what to buy instead. If this process is overwhelming, there are vendors you can work with to help you sift through your options and make the best possible choices.

3) You Drank The Vendor Kool-aid

The custom services provided by cloud service providers are tempting, but the cost can really add up. Even worse, it removes your ability to migrate to other cloud providers, so it’s hard to pivot to more cost-effective solutions over time. As you build your cloud strategy, try to avoid locking yourself into a relationship with a single cloud service provider. Don’t tie yourself to a single vendor because it’s convenient—make sure that you’re allowing yourself the flexibility to change providers and adopt new strategies when costs start to increase.

Setting Yourself Up For Future Success

When it comes to cloud costs as a whole, think about it this way: When you build a snowman, you start with a tiny ball. As you roll it around, it picks up more and more snow until the ball is eventually so big you can’t even move it. No way are you picking that guy up—he’s staying right where he is until the inevitable destruction by meltdown. Cloud costs can incrementally build up (and melt down) in much the same way. Not everyone has a full-time IT department or the expertise to be able to game the system and make sure their cloud infrastructure is as optimized as possible.

Cloud optimization is an ongoing challenge, and a vital part of any cloud management system. The good news is, there are tools out there to put you on the path to reducing your cloud costs today. The trick is choosing the right solutions—ones priced for the size of your company that simplify your life on the cloud, rather than complicate it. Choosing the right tools to help avoid sprawl, overprovisioning, and overspending are vital parts of a company’s survival. Make it a priority to understand how you use the cloud today, and you’ll be in a better position to reduce cloud spending tomorrow.

This post was originally published as a guest post for Fort Collins Startup Week: http://bit.ly/2SDr9gl

Memory Consumption in Spring and Guice

[Image: Spring vs. Guice]

The heart of the Sunshower.io cloud management platform is a distributed virtual machine that we call Gyre. Developers can extend Gyre through Sunshower's plugin system, and one of the nicer features that Gyre plugins provide is a dependency-injection context populated with your task's dependencies, as well as extension-point fulfillments exported by your plugin. For example, consider part of our DigitalOcean plugin:
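The original listing didn't survive the move to this page. Here's a hypothetical sketch of what such a task could look like, using names drawn from the tuple elements described next (ComputeTask, Vertex, TaskResolver, TaskName, Credential); the DigitalOcean-specific details are invented for illustration.

```java
// Hypothetical sketch only: the annotation and interface names mirror the
// tuple elements (ComputeTask, Vertex, TaskResolver, TaskName, Credential);
// everything DigitalOcean-specific here is invented for illustration.
@TaskName("digitalocean:create-droplet")
public class CreateDropletTask implements ComputeTask {

  // Resolved from the plugin's dependency-injection context
  @Inject
  private TaskResolver taskResolver;

  // Supplied by the Gyre from the enclosing program's inputs
  @Inject
  private Credential credential;

  @Override
  public Vertex run(ExecutionContext context) {
    // Call the DigitalOcean API with `credential`, then hand the resulting
    // infrastructure node back to the Gyre as a vertex
    return context.createVertex(/* droplet metadata */);
  }
}
```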

The details are beyond the scope of this post, but basically Gyre will accept an intermediate representation (IR) from one of its front-ends and rewrite it into a set of typed tuples whose values are derived from fields with certain annotations. The task above will be rewritten into (ComputeTask, Vertex, TaskResolver, TaskName, Credential). This tuple-format is convenient for expressing type-judgements, especially on the composite and recursive types that constitute Gyre programs.

As an aside, there are Gyre rewrite rules that can infer recursive and co-recursive types (think singly-linked lists) up to a configurable depth. Sunshower.io uses Spring extensively, so it was quick and convenient to rewrite a Gyre program into a set of typed tuples, union those suckers together, and, if the resulting tuple was satisfiable, create a Spring ApplicationContext with all of the dependencies, start it up, and execute the program in that context.

For quite a few tasks that we’ve encountered in the wild, these tuples can grow quite large, and so I was curious as to how memory utilization would grow relative to Gyre program size.

We have used Sunshower.io’s cloud computing optimization capabilities to analyze our own workloads, and for current uses, 4 GB heaps work quite well for us from both a cost and performance perspective. However, we don’t have much experience with thousands of concurrently-executing Gyre programs outside of simulations, so I wanted to figure out how badly using a Spring ApplicationContext per Gyre program would burn us, if at all.

The Setup

I wrote up a quick utility to inspect the size of an object using Java’s Instrumentation mechanism. Basically, we just perform a breadth-first search of the object graph and sum the individual sizes of each field as reported by java.lang.instrument.Instrumentation (feel free to ask me for the code — I haven’t had time to open-source it, but I do intend to).
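That utility isn't open-sourced yet, so below is a minimal sketch of the traversal it describes. In a real agent the per-object shallow size comes from `Instrumentation.getObjectSize`; the `ToLongFunction` indirection here is my own addition so the walk can be exercised without a `-javaagent`.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.Set;
import java.util.function.ToLongFunction;

/**
 * Breadth-first deep-size calculator, sketched from the description above.
 * Production code would pass Instrumentation::getObjectSize as the shallow
 * sizer; it's a parameter here so the traversal is testable without an agent.
 */
final class ObjectSizer {
  private final ToLongFunction<Object> shallowSize;

  ObjectSizer(ToLongFunction<Object> shallowSize) {
    this.shallowSize = shallowSize;
  }

  long deepSize(Object root) {
    // Identity-based visited set so each object is counted exactly once,
    // even in cyclic or diamond-shaped object graphs
    Set<Object> visited = Collections.newSetFromMap(new IdentityHashMap<>());
    Deque<Object> queue = new ArrayDeque<>();
    enqueue(queue, visited, root);
    long total = 0;
    while (!queue.isEmpty()) {
      Object current = queue.poll();
      total += shallowSize.applyAsLong(current);
      Class<?> type = current.getClass();
      if (type.isArray()) {
        // Primitive arrays carry no references; object arrays contribute
        // each distinct element to the frontier
        if (!type.getComponentType().isPrimitive()) {
          for (Object element : (Object[]) current) {
            enqueue(queue, visited, element);
          }
        }
        continue;
      }
      // Walk every non-static reference field, including inherited ones
      for (Class<?> c = type; c != null; c = c.getSuperclass()) {
        for (Field field : c.getDeclaredFields()) {
          if (Modifier.isStatic(field.getModifiers()) || field.getType().isPrimitive()) {
            continue;
          }
          field.setAccessible(true);
          try {
            enqueue(queue, visited, field.get(current));
          } catch (IllegalAccessException e) {
            // Inaccessible fields are simply skipped in this sketch
          }
        }
      }
    }
    return total;
  }

  private static void enqueue(Deque<Object> queue, Set<Object> visited, Object value) {
    if (value != null && visited.add(value)) {
      queue.add(value);
    }
  }
}
```

In a real agent you'd construct it as `new ObjectSizer(instrumentation::getObjectSize)` from the `Instrumentation` instance handed to `premain`.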

Spring

I then used the Gyre to generate a configuration that looks like the following:
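That generated configuration is also missing from this version of the post. Under the assumption that the Gyre emits one `@Bean` method per satisfied tuple element, its shape would be something like this; all class and bean names are illustrative, not actual generated output.

```java
// Hypothetical shape of a Gyre-generated Spring configuration; the class
// and bean names are illustrative, not the actual generated output.
@Configuration
public class GyreGeneratedConfiguration {

  @Bean
  public TaskResolver taskResolver() {
    return new DefaultTaskResolver();
  }

  @Bean
  public Credential credential() {
    return Credential.fromEnvironment();
  }

  @Bean
  public ComputeTask createDropletTask(TaskResolver resolver, Credential credential) {
    return new CreateDropletTask(resolver, credential);
  }
}
```

The context is then stood up with `new AnnotationConfigApplicationContext(GyreGeneratedConfiguration.class)`, and that context's object graph is what gets measured.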

Testing it out yields the following memory utilization for different numbers of beans:

[Line graph: Spring memory utilization, in bytes, by number of beans]

(All sizes in bytes)

Spring has a lot of features, and is correspondingly large. The base size of the object graph of an AnnotationConfigApplicationContext is ~3.2 MB. The size of an empty, loaded AnnotationConfigApplicationContext is ~4.7 MB. There isn't a whole lot of information to be gleaned from this, other than that Spring has a (relatively) large initial memory footprint. The good news is that it doesn't grow very quickly. As an example, with all of our existing plugins loaded into their own application contexts, as well as Sunshower:Stratosphere, Sunshower:Core, and Sunshower:Kernel (about 10 individual application contexts and about 1,400 services, including a JPA context), the total application size is ~78 MB. Is that a lot? Not at all; remember our 4 GB heap. As a percentage of available memory, that amounts to a paltry ~1.9%.

However, the problem is that we have to serve thousands or tens of thousands of these application contexts per minute. An empty Gyre graph will set us back 4.7 MB, which will allow us to have ~850 concurrent Gyre operations/node at best. And that doesn’t allow us to have any user data at all. That’s a bit of a problem for us. So, in an effort to reduce Gyre memory footprint, I decided to look at Google’s Guice.

Guice

The setup is basically the same. I generated a Guice Module, once again using Gyre, that looks like:
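The module listing is likewise missing here; a sketch of the shape, with illustrative binding names standing in for the generated output, might be:

```java
// Hypothetical sketch of a Gyre-generated Guice module; the binding
// names are illustrative, not the actual generated output.
public class GyreGeneratedModule extends AbstractModule {
  @Override
  protected void configure() {
    bind(TaskResolver.class).to(DefaultTaskResolver.class).in(Singleton.class);
    bind(Credential.class).toInstance(Credential.fromEnvironment());
    bind(ComputeTask.class).to(CreateDropletTask.class);
  }
}
```

An injector is then created per Gyre program with `Guice.createInjector(new GyreGeneratedModule())`, and that injector's object graph is what gets measured against Spring's.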

This used a pretty paltry 0.11 MB on average, which remained largely unchanged even as we moved several of the required services from our Spring application context into the Gyre Guice context. That allows us to easily meet our target of 500 concurrent Gyres per node without eating into our user data space.

[Line chart: Guice's memory usage compared to Spring's]

Final Notes

I want to point out that, relatively speaking, very little application time and application memory are typically consumed by application data. We are certainly not moving the entirety of Sunshower.io from Spring to Guice. Guice and Spring are both quite modular, and Sunshower.io uses virtually all of Spring’s modules. This is to say nothing of the heavy dependency we have on Spring Test. We only required dependency injection in the Gyre, and Guice turned out to be a memory-efficient way of providing that.


Looking for Free Cloud Credits? These 7 Discounted Startup Programs Can Help You Move to the Cloud

As a cloud management platform, we understand that it’s tough to get started on the cloud. There are so many options, and confusing pricing structures can make the whole process seem overwhelming. Fortunately, there are lots of great programs dedicated to giving startups access to the resources they need to be successful. (Yes, even free cloud credits!) Before locking yourself into a relationship with a cloud vendor, check out these seven discounted startup programs. Applying can give you a great chance to play around with that cloud’s processes and pricing structure without spending more than your budget can handle.

AWS Activate

The credits and perks you’re offered from AWS Activate depend on which package is the right fit for your startup. The Portfolio package includes $15K in AWS credits over 2 years. The Portfolio Plus package gives you access to $100K in credits over 1 year. (AWS business support and training are some of the other perks of Activate.) The Portfolio and Portfolio Plus packages are available to startups that are associated with an approved accelerator, incubator, or VC Fund.

If your startup doesn’t qualify for AWS Activate but you’d still like to get hands-on experience with AWS, you’re in luck. AWS also offers a free tier, which gives new users free access to over 60 of their tools for 12 months. It’s a great way to learn and experiment with AWS — just be careful to stay within monthly limits!

Create@Alibaba Cloud

Create@Alibaba Cloud offers qualifying startups $2K to $50K in credits. You may be eligible if your startup is registered outside of China, has been in business for less than 5 years, and has less than $500K in annual revenue. And, bonus, you get priority if you’re not on Alibaba Cloud yet.

Google Cloud for Startups

Google Cloud offers new users $300 in credits for a year at https://cloud.google.com/free/, but you may also be interested in their three Google Cloud for Startups programs. If you’ve raised no more than a Series A and are less than five years old, you may qualify for $3K to $100K in credits for the year.

Hatch by DigitalOcean

Build and scale your startup using the Hatch program and receive 12-month access to the DigitalOcean cloud with up to $100K in credits. Hatch also includes technical support and mentoring, along with a host of other tools.

The Hatch program isn’t for you? DigitalOcean also currently offers a $100 credit over a 60-day free trial period. There’s no incubator or accelerator affiliation required for this one, but it’s for new users only.

Microsoft for Startups

There are two options here. The first program is available to all startups, and includes $200 in Azure credits over 30 days. The second program is available to startups that meet Microsoft’s qualification criteria (less than 5 years old, offering a technical solution, less than $10M in annual revenue, under $20M in funding, and associated with an accelerator or incubator), and includes up to $120K of free Azure cloud services over 2 years.

Oracle Global Startup Ecosystem

Oracle’s startup program bills itself as an “acceleration program,” so the application requirements are a little different from some of the other programs. Affiliation with VC firms, incubators, or accelerators isn’t required, but they’re interested in funding companies focused on “transformational technology.” If this describes your company, Oracle Global Startup Ecosystem could be a great opportunity to access free cloud credits, mentorship, training, and support.

Startup with IBM

IBM offers two tiers of support to startups. The Builder program includes $1000 a month in cloud credits over the course of a year, and doesn’t require affiliation with an incubator or accelerator. The Premium program supports companies involved with an approved accelerator or incubator that are focused on developing “innovative technologies.” If you qualify for the Premium program, your startup could receive $10K a month for a full year.

Before You Apply

There are lots of great opportunities out there for startups looking to get started on the cloud. Keep in mind, though, that most of the programs on this list are only available to new users, and some require your startup to be affiliated with an approved accelerator or incubator to qualify. Please check with the individual vendors to verify program details and eligibility before applying.

Once you’ve gotten started on one of these discounted startup programs, you’ll have the tools you need to start experimenting with the cloud. Sunshower.io can be an important part of that plan, too. When you’re just starting out on the cloud, the learning curve can seem pretty steep. Sunshower.io has the right tools to help you easily and intuitively manage the entire lifecycle of your cloud infrastructure. Our drag-and-drop deployments and unbeatable optimization algorithms are all designed for simplicity and ease of use. Whether you want to deploy software on the cloud, track workloads, or view all the spiffy new infrastructure you’ve built, you can master the cloud with Sunshower.io. From cloud resource management to cloud cost optimization, we’re here to help your company succeed by demystifying cloud computing so you can focus on what matters — your business.