Depends for the Deep End

Seven Lessons about Tech and Life

As an artsy, outdoorsy type and a middle school humanities teacher in my mid-thirties, I am the last person you would expect to see in the role of intern at a cloud management platform like Sunshower.io.

Perhaps we can blame Marie Kondo for my transition into web development: I started off cleaning my house and by the time it was decluttered I had a new career plan. More likely we can point our finger at a bottle of wine and a dead-end career in education. Together, they conspired to make me complain to my sister and brother-in-law over dinner about my professional woes. (My sister and brother-in-law are the CEO and CTO, respectively, of Sunshower.io.)  By the end of our meal, he had suggested I go into tech and she had given me an internship.

That was four months ago. The past few months have taught me a few lessons about tech and life:

Lesson #1: You have to start somewhere, and you’re not going to learn it all in a day.

I started with Sunshower.io knowing the bare minimum of the tech it takes to keep a 21st-century classroom running. I certainly did not know about IAM, AWS, EC2 management, or cloud cost optimization, or why anyone would care about any of it.

Lisa and Josiah showed me crazy computer things that I had never seen before: Pug, GitHub, Git, command terminals, Docker, and dev environments. We like to call this steep learning curve “Depends for the deep end” because, sometimes, you end up over your head in code and it’s so scary you want to pee yourself.

Honestly, needing Depends for the deep end is not only okay but expected. Lisa started off her software engineering career with a wee bit of CSS and HTML and a master’s degree in public communication and technology. Josiah was a math major. They both taught themselves everything they needed to know (using life lessons 2 and 3; see below), and they still have days where they have to ask each other questions or take brain breaks to figure out solutions.

Where does this leave me? Having a lot more patience for the learning process. Whether you learn a little bit every day–or, per the Sunshower.io experience, a lot in one day–you will start stockpiling an impressive amount of know-how.

Lesson #2: When in doubt, look it up online.

While Sunshower.io’s people look out for each other, it is also a culture of rugged individualism. When you are working for people who built up their skill set from scratch without help, there is minimal patience or time for hand-holding.

You have to learn to find answers on your own. And you will.

Lisa broke me of the habit of asking too many questions early on. When I would ask her something easily answerable, she would ask what I thought or whether I had looked it up. Pretty soon, I would answer my own questions before she could reply, and finally I stopped needing to ask. Except for unanswerable structural questions, that is, which leads us to lesson 3.

Lesson #3: If you can’t find it online, ask.

For a newb, solutions are easy to find online. However, when you are collaborating on projects like Sunshower.io’s visual modeler, distributed virtual machine, or cloud-optimization machine-learning algorithms, in the end you often HAVE to ask about the factors that might be standing in your way.

Lesson #4: Be patient. Tech is a rollercoaster of successes and failures.

Some days, your code will work perfectly. You will build something awesome and post your progress on social media. Your dev environment will load perfectly, and you can check off your to-do list in an hour.

Other times, you will run commands in your terminal, and it will take hours and still won’t work. You will run tests through your IDE, and it won’t work…and won’t work…and won’t work.

Lisa taught me that a big part of tech is being comfortable with the failures and dead-ends. Your code won’t always work. And that is okay. Maybe tomorrow you will ask the question that leads you to the solution you need, or perhaps the quick-fix to your broken code will dawn on you at a gloriously random moment.

So pour yourself a cup of coffee, sit in front of that computer screen, and buckle your seatbelt. See where today leads you.

Lesson #5: Have a dog close by.

Between the three founding members and me, we have three dogs, and they are always close at hand. For Sunshower’s QA night, we ran the web application with three people clicking buttons simultaneously, while dogs roughhoused in our laps and under our feet.

Lisa and Josiah’s mini aussiedoodle Fran forces them to take breaks. Lisa’s brain breaks involve playing with Fran on the floor, and Josiah takes Fran for long runs before he starts working for the day.

In addition to forced breaks and relieved tension, dogs simply remind you not to take tech or life in general too seriously. Having a dog roll over on your laptop keyboard or drool on your computer screen reminds you that, no matter how important your tech projects are–and they are–they are just tech projects.

When the going gets tough, rub that belly or scratch those ears. You’ll figure it out. We promise!

Lesson #6: I will never, ever work as the chief of anything for a company. Ever.

Despite what I knew about Lisa and Josiah’s work days, I tended to romanticize what they did. Work all night? Sleep until noon? Take breaks when you want? Code your way to your dream life? Yes please! Sign me up!

I did not truly comprehend their daily grind, and I seriously questioned their love of cartoons as entertainment (really guys?), until I started working with them. Their work days start at ten in the morning and go until midnight if they are lucky. Josiah has been known to pull multiple all-nighters a week while tackling infrastructure projects. Lisa had chronic shoulder and forearm pain from typing for 14 hours a day that was only cured by throwing down over $80 for a split keyboard.

However, these epic workday ultramarathons have paid dividends. This week, they came in second place in the pitch competition at Fort Collins Startup Week. They have six clients and more in the wings. It is commendable. Truly.

And I would not wish it on anybody.

Lesson #7: Tech is about people, experiences, and services.

Tech is not just about the code you write or the tests you run. It is about connecting with Lisa and Josiah about their QA needs and with Tif about blog posts like this one. It is the Sunshower.io home office: a north-facing former bedroom in their Fort Collins home, the window opening to farmland. It is the late-night walks that help you work out your code, the new people Lisa adds to LinkedIn, the friends I have made through tech meetups. It is the hundreds of hours spent on a service that can save companies thousands of dollars on their monthly AWS bill. It is the scrappy tech stars working long hours to bring their services to life for future clients. Code is only the beginning.   

This is Julie Gumerman,
Sunshower’s unlikely intern.

Sunshower.io Wins Spot in Startup Week Pitch Competition

Meet the 9 companies competing in Fort Collins Startup Week’s PitchNo.CO

BY APRIL BOHNERT, FEBRUARY 21, 2019

Fort Collins Startup Week kicks off on Monday and, for the second year in a row, LaunchNo.CO — a nonprofit dedicated to supporting startups and entrepreneurship in Northern Colorado — will be hosting its PitchNo.CO pitch competition.

This year’s competition, which takes place Feb. 26 through March 1 at Spaces coworking in Fort Collins, features nine companies from Larimer and Weld counties that will pitch their ideas in front of a panel of CEOs from companies like Boomtown and UpRamp.

We’ve taken a look at the startups competing this year to learn more about their products and how they’re hoping to transform industries from solar to fashion to cannabis — and a whole lot in between.

Read more.

Save 40% or more in 40 Seconds

Illustration of an open storage unit partially filled with cardboard boxes

Sunshower.io’s optimizing algorithms help you save time and money on the cloud. There’s no upfront cost, and our results are better than our competitors’. It’s cloud computing optimization unlike anything else out there. So how do we do the thing?

Imagine you need to rent a storage unit — you have a bunch of boxes, but nowhere to put them. No problem! There are a ton of companies out there that will rent you a storage space. You do a quick search, and find over 30 self-storage companies scattered across town. You don’t have time to talk to everyone to compare prices (who does?), so you call a company whose name you recognize. You’re not exactly sure what size storage unit you need, so they talk you into renting a 10 x 20 unit, “just in case”, at which point you end up with a storage unit that looks like the photo above.

End result: you’re paying for a lot more than what you need. Sure, you could move to another facility, but who wants to negotiate with another company, then give up their whole day to move a truck and switch facilities? Easier to stay put, and keep that extra space.

Buying too much “just in case” is a very common thing for companies on the cloud, too. Why?

  • There are an overwhelming number of cloud service providers to choose from
  • There are an overwhelming number of options on each of those cloud service providers
  • The UIs of cloud service providers are confusing
  • It’s hard to know exactly what you need and what you’re using

That’s where Sunshower.io comes in

When you work with us, we securely run metadata about your resource usage through a proprietary algorithm designed to find the exact right fit for your cloud compute needs. We use machine learning to crunch millions of data points, factor in fluctuations in usage over time, and come up with a cloud plan that ensures you aren’t overpaying “just in case.” We find a fit so good that we can save customers 40% or more on their monthly cloud compute bill.

(Think Cinderella’s glass slippers, with good arch support and just enough wiggle room for your toes.)

Over time, this kind of cloud savings can be game-changing. Just imagine the decisions you could make even with an extra 20% of your monthly cloud spend back in your pocket, like hiring another engineer, or launching a great social media campaign. And that’s what we’re all about at Sunshower.io: helping you focus on what matters — your business.

To that end, we’re excited to announce our just-launched AWS EC2 optimizer

If you’re currently using Amazon Web Services EC2 for your cloud infrastructure, our service (colloquially known as Anvil) has been specifically tailored to analyze your data and come up with a better cloud usage plan. The bottom line: Anvil can help you save money on AWS with just a few clicks.

Not using AWS EC2? We promise you won’t be left out. We’re launching AWS RDS optimization next, and we’ll be releasing optimizations for more public clouds as we go along. (Google Cloud and Azure are next up on the list, but let us know your needs and we can re-prioritize!)

‘Cloud Management Platform’ Undersells It

Silhouette of one person helping another person up a hill

Because we need a snappy way to refer to what we do, we call ourselves a ‘cloud management platform.’ But that undersells it. Why? Because cloud management platforms and solutions are for the big guys — the ones with IT budgets the size of small countries, or at least small counties. Twee though it may sound, we want to be the great equalizer for cloud computing, supporting engineers of all stripes.

Why does this matter?

More and more, engineers aren’t coming from a Computer Science background, they’re coming from code academies or more ad hoc backgrounds (hello, yes, I majored in journalism). And even when they are, cloud computing isn’t really taught in schools. So you wind up in a job, and suddenly you have to figure out how to deploy things. Or you start a company, and you realize “wow, I can’t just have this run on my localhost.” Or maybe your infrastructure is in the cloud already, and you realize you just wasted $200 on an instance you forgot about. Or your infrastructure is in the cloud and costing you the salaries of three good engineers a month, and you have to figure out how to keep the engineers and not the cloud cost.

And so the Googling and Stack Overflowing begins, except half the information is out of date because things are always changing, and the numbers are almost always wrong because cloud service providers are constantly changing their pricing structure. And don’t even get me STARTED on how or if you should containerize your software.

It is in these moments that you wish that there was a cloud management system that could just do the thing for you — help you pick a cloud, help you migrate across clouds, help you deploy with Docker just like you had been with machine images, help you with cloud cost optimization.

And that, THAT is our why. Because no matter how new or how senior you are, there will be a moment when you don’t have the support you need. We didn’t pick the name ‘Sunshower’ because it was cute and fit with the cloud theme (okay, we kind of did). We picked it because dealing with the cloud shouldn’t have to be so dang hard. It should be easy, and rewarding, and maybe even fun.

Need some help on the cloud? Visit us at Sunshower.io.
Spending too much on AWS? Learn more about our Anvil optimizer.

Unit-testing Aurelia with Jest + JSDOM + TypeScript + Pug

All source can be found at: Aurelia-Aire

Testing UI components is one of the more difficult parts of QA. In Stratosphere (one of the key components of our cloud management platform), we’d been using Karma + Jasmine, but the browser component complicated things: providing a DOM to tests in a fast, portable manner, within memory and speed constraints, can be pretty challenging. Sure, initially we did the PhantomJS thing, then Chrome Headless came out, but writing UI tests just never really felt natural.

Then, last week, we decided to open-source our component framework, Aire, built on top of UIKit+Aurelia, and that created an exigent need to fix some of the things we’d been limping along with, most importantly testing. The success of OSS projects depends on quite a few things, but I consider providing a simple way to get contributors up-and-running critical.

Simple set-up

Internally, Aurelia uses an abstraction layer (Aurelia PAL) instead of directly referencing the browser’s DOM, and it will (in principle) run on any reasonable implementation of PAL. Aurelia provides a partial implementation OOTB, Aurelia/pal-nodejs, that will enable you to (mostly) run your application inside of NodeJS.

Project Structure

Our project structure is pretty simple: we keep all our components and tests under a single directory, src:


aire
├── build
│   └── paths.js
├── gulpfile.js
├── index.html
├── jest.config.js
├── jspm.config.js
├── jspm.dev.js
├── package.json
├── package-lock.json
├── src
│   ├── main
│   │   ├── aire.ts
│   │   ├── application
│   │   ├── button
│   │   ├── card
│   │   ├── core
│   │   ├── core.ts
│   │   ├── dropdown
│   │   ├── events.ts
│   │   ├── fab
│   │   ├── form
│   │   ├── icon
│   │   ├── init
│   │   ├── init.ts
│   │   ├── loader
│   │   ├── nav
│   │   ├── navbar
│   │   ├── offcanvas
│   │   ├── page
│   │   ├── search
│   │   ├── table
│   │   ├── tabs
│   │   └── widget
│   └── test
│       ├── button
│       ├── core
│       ├── init
│       ├── render.ts
│       ├── setup.ts
│       └── tabs

...etc

At the top of the tree you’ll notice jest.config.js, the contents of which look like this:

Basically, we tell Jest to look under src for everything. ts-jest will automatically look for your TypeScript compiler configuration, tsconfig.json, in its current directory, so there’s no need to specify that.

Our tsconfig is pretty standard for Aurelia projects:
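A typical Aurelia-flavored tsconfig.json (again, a sketch rather than Aire’s exact file) looks something like:

```json
{
  "compilerOptions": {
    "target": "es2015",
    "module": "commonjs",
    "moduleResolution": "node",
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "sourceMap": true,
    "lib": ["es2017", "dom"],
    "types": ["jest"]
  },
  "include": ["src"]
}
```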

Test

If you just copy and paste our tsconfig.json and jest.config.js files while following the outlined directory structure, everything will Just Work (don’t forget to npm i -D the appropriate Jest and Aurelia packages.)

At this point, you can use aurelia-test to write tests a la:

hello.html

hello.ts

hello.test.ts

Now, you can run your tests with npx jest:

aire@1.0.0 test /home/josiah/dev/src/github.com/sunshower/aurelia-aire/aire
npx jest

PASS src/test/button/button.spec.ts
PASS src/test/tabs/tab-panel.spec.ts
PASS src/test/init/init.spec.ts
PASS src/test/core/dom.spec.ts

Test Suites: 4 passed, 4 total
Tests: 12 passed, 12 total
Snapshots: 0 total
Time: 3.786s
Ran all test suites.

Enabling support for complex DOM operations

That wasn’t too bad, was it? Well, the problem we encountered was that we use the excellent UIKit framework, which obviously depends pretty heavily on the DOM. Any reference in Aire to UIKit’s JavaScript would fail with a ReferenceError. Moreover, if we changed the Jest environment from node to jsdom, we’d encounter a variety of errors along the lines of TypeError: Failed to execute 'appendChild' on 'Node': parameter 1 is not of type 'Node', which I suspect were caused by pal-nodejs creating DOM elements via its own jsdom dependency while Jest performed DOM operations using its own. In any case, the solution turned out to be to define a single, global jsdom by importing jsdom-global. Once we discovered this, we encountered other issues with browser-environment types and operations not being defined, but this setup.js resolved them:
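The setup file boils down to something like the following sketch; the jsdom-global call is the essential piece, and exactly which extra shims you need depends on which browser APIs your components touch:

```javascript
// setup — a sketch; the jsdom-global call is the important part.
// Create one global jsdom instance that Jest, pal-nodejs, and your
// components all share, instead of each pulling in its own jsdom.
require('jsdom-global')();

// Illustrative shim: patch a browser API that jsdom may not define.
global.requestAnimationFrame =
  global.requestAnimationFrame || (cb => setTimeout(cb, 16));
```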

At this point we could successfully test Aurelia + UIKit in NodeJS.

The final piece

All of our component views are developed in Pug, and I didn’t like that we’d be developing in Pug but testing using HTML. The solution turned out to be straightforward: add Pug as a development dependency and create a small helper function:

With that final piece in place, our test looks like:

Conclusion

The benefits of writing tests in this way became apparent the moment we cloned the project into a new environment and just ran them via npm run test or our IDEs. They’re fast, don’t require any environmental dependencies (e.g. browsers), and allow you to run and debug them seamlessly from the comfort of your IDE. But, perhaps most importantly, these are fun to write!

Gratitude as a Hedge

I think everyone says that building a startup is hard, but what they generally fail to convey is how hard it actually is. I left my previous position on March 1st of this year, and since then it seems like it’s been an unpunctuated litany of 14-hour days.  There’s also this notion (that I had) that when you found a company, you’ll mostly be working on projects that are interesting to you. I mean, it is your company, right?  And you do generally get to choose what that entails, right? And that that engagement will be prophylactic against the darkness.

That’s pretty heckin’ far from the truth in every way. The problems we’re solving are incredibly fascinating: AWS cost optimization, cloud resource management, and EC2 management are all issues I’m excited to be working on. But in the daily grind of solving those problems, about 10% of the code in Sunshower is what I would consider to be “strongly interesting”–the rest is fairly standard enterprise Java, Go, and Typescript.  In other words, Sunshower is 10% distributed graph reduction machine and 90% infrastructure built to interact and support that. It’s actually about twice as bad as that, since fully 55% of the code is tests and testing infrastructure.  And this is to say nothing of setting up continuous delivery, writing deployment manifests, replacing failed hard drives, configuring VPNs, signing documents, worrying about corporate and legal stuff…the list goes on.

That proportion of fun-work to work-work largely tracks my experience at companies of all sizes, with the primary differences being that the pay is much worse at your own startup, and that nobody’s really skeptical of a software project that is already profitable.  The cocktail of doubt, overwork, and scarcity is strong and bitter, and there’s only so much of it you can drink.

In October I got pretty sick and have only just really recovered, and that really pushed me to a place that the kids probably call “too extra.”  You can tell when you get there because everything’s distorted and whack–speed-bumps become cliffs and the little nuggets of interesting work you sprinkle throughout your day lose their luster. But the worst part is that it’s actually pretty hard to tell when you’re there.  I only realized I wasn’t in a great spot recently, and had trouble seeing a way out.

Lisa, my amazing wife and co-founder, really helped with the insight that gratitude is what will get you out of dark spots. Gratitude for family, for the people who throw their lot in with yours, for the opportunity to even try, for health, and even for a warm breakfast and a colorful sunrise (fingers crossed). And so I’m trying that, and it’s really working.

Eulogy for the Old-Fashioned: Things We’ve Innovated Out Of Existence

As a kid, I remember having my mom drop me off at the library so I could rummage through the card catalog. I still have warm-and-fuzzy nostalgic feelings about spending hours pulling books off the shelves, checking the indexes, and finding exactly the pieces of information I needed. It’s a nice memory, but would I trade away the Internet to get another chance at flipping through musty drawers of tiny typewritten cards? No way. You’d have to be bonkers to go backwards and do research like that. I’d rather sit at home in my pajamas and access hundreds of sources online in seconds, thanks very much.

Innovation is a funny thing. We get so accustomed to doing something one way that it never occurs to us that it could be different. Better. Think about all of the things we used to consider perfectly fine until we figured out a better way:

  • Driving to the store to rent a movie (RIP, Blockbuster)
  • Placing a personal ad in the newspaper
  • Pulling over to make a call from a pay phone
  • Planning a route using a tri-fold city map
  • Sending a fax
  • Looking numbers up in the Yellow Pages
  • Listening to music on a Walkman
  • Checking out what’s on tonight in the TV Guide
  • Using a typewriter with a bottle of Wite Out handy
  • Checking spelling using a dictionary
  • Looking facts up in an encyclopedia
  • Keeping business contacts in a Rolodex
  • Calling the theater for showtimes
  • Making a mixtape

At one point, someone asked: Why isn’t there a better way? Then they changed the game.

That’s what we did for the cloud at Sunshower.io.

People used to think it was okay to be surprised by their cloud bill, or to spend hours, sometimes days, sometimes months, deploying applications. We used to find it totally acceptable to ask developers to become experts in deploying their software, while still … you know … finding the time to actually write said software. Not anymore. Sunshower.io has all the tools you need to work with the cloud faster, more simply, and more efficiently than ever before. We’re a cloud management platform dedicated to offering better cloud resource management strategies and simple cloud cost optimization tools.

Why take a boat across the ocean when you could fly? You have stuff to do, deadlines to meet. Check out our beta on DigitalOcean to see how we can offer you a more innovative cloud management system.

Github Pages with TLS and a Backend

When you’re spending your days (and nights) developing the best cloud management platform possible, blogging consistently is hard. That said, we’re redoubling our efforts to get content out semi-regularly, even if it’s just to post something simple and (hopefully) helpful. To that end, I’d like to discuss setting up Github pages with a backend.

The Problem

Github.com allows you to host static content via its Pages feature. This is fantastic because it makes it trivial to create, deploy, and update a website. This is how https://sunshower.io is hosted.

But what if you wanted to interact with, say, a database, to track signups? Furthermore, what if you wanted to do it all over TLS? This tutorial presumes that your registrar is AWS and your DNS is configured through Route53.

Set up Github DNS Alias Records For Github Pages

This one’s easy: in your Route53 hosted zone, create an A record that points to GitHub Pages’ IP addresses:

> 185.199.108.153
> 185.199.109.153
> 185.199.110.153
> 185.199.111.153

Then, check in a file called CNAME in your pages repository under /docs, containing your DNS name (e.g. sunshower.io).

Push that sucker to master and you should have a bouncing baby site!

Publish your backend with the correct CORS Preflights

We pretty much just have a plugin for Sunshower.io that registers unactivated users. Create an EC2 webserver/Lambda function/whatever to handle your requests. The thing to note here is that your backend will have to support preflight requests: a preflight request is an OPTIONS request that your server understands and answers with the set of cross-origin resource sharing (CORS) rules your backend supports.

This is because your page, hosted at a GitHub IP, will be making requests to Amazon IPs, even though both are subdomains beneath your top-level domain. For a JAX-RS service at, say, https://mysite.com:8443/myservice, you will need two methods:

@Path("myservice")
@Produces({
  MediaType.APPLICATION_JSON,
  MediaType.APPLICATION_XML,
})
@Consumes({
  MediaType.APPLICATION_JSON,
  MediaType.APPLICATION_XML,
})
public interface SignupEndpoint {

  @POST
  RegistrationConfirmationElement signup(RegistrationRequestElement request);

  @OPTIONS
  Response getOptions();
}

Note that there must be an @OPTIONS method for each actual resource method that you want to interact with (e.g. the @POST method here). What will happen is that a preflight will be made to the same path as the request, and the server will respond with whether that request is allowed. You can widen the scope of @OPTIONS responses, but you should have a pretty good reason for doing so.

The actual @OPTIONS method will look something like:


@Override
public Response getOptions() {
  return Response.status(Response.Status.OK)
      .allow("POST")
      .header("Access-Control-Allow-Origin", "*")
      .header("Access-Control-Allow-Methods", "POST, OPTIONS")
      .header("Access-Control-Max-Age", 1000)
      .header("Access-Control-Allow-Headers", "origin, x-csrftoken, content-type, accept")
      .build();
}

where the allow value and Access-Control-Allow-Methods values align with the request type of the actual backend methods.

Set up an SSL-enabled ELB

The first thing to do here is to create a certificate using AWS’s wonderful Certificate Manager Service.

Request a certificate by navigating to Certificate Manager > Request a Certificate in the AWS Management Console. Request a public certificate, and then request it for the fully-qualified name that you want your backend to appear under (e.g. backend.sunshower.io). If you have access to any of the following e-mail addresses at your domain:

  • administrator@your_domain_name
  • hostmaster@your_domain_name
  • postmaster@your_domain_name
  • webmaster@your_domain_name
  • admin@your_domain_name

then select e-mail verification, otherwise select DNS verification and proceed according to the instructions provided by AWS.

Once you’ve verified your certificate, create an elastic load balancer on whatever availability zones your backend spans. If you only have one instance in one availability zone (shame, shame!), add that availability zone first. Create a listener targeting the port that your actual backend is listening on, and add a security group with an ingress port that corresponds to the port you want your public-facing backend to listen on (for instance, if you want your backend to respond to requests to https://backend.sunshower.io, configure your ingress to be 443).

From there, configure a target that points to your actual backend. If your backend is hosted on EC2 instances, select the instance type for the target, otherwise select ip. Reference the IP/instances of your backend and create the ELB.

Configure a DNS A Record for your ELB

The last thing we need to do is create an A record that points to your ELB. If you previously requested a TLS certificate for backend.mysite.com, you’ll want to create an A record whose name is backend, alias=true, with an alias target that is the ELB you created in the previous step. Save that sucker and you are good to go!

Announcing Sunshower.io Preview!

We’re thrilled to announce a preview launch of Sunshower.io, a cloud management platform that offers cloud resource management and cloud optimization! I’d like to take a little time to explain what we do.

Beautifully Simple Multicloud Management

The first thing we do is provide a simple, unified interface for managing your public clouds. This means that you provide some read-only access to your cloud, which we store securely in our vault, and then we go discover whatever infrastructure is in your cloud(s) and manage and organize it for you.

Let me step through a quick example of our public cloud management system:

Discover your Resources (aka “Systems”)

The first thing we need to do is discover your resources. Upon logging into Sunshower.io, you’ll be presented with the System Discovery Wizard. A System (think: weather system) is the set of all infrastructure associated with a set of cloud accounts. So, if you have:

  • 1 Azure Scale Set with 4 active members that is your development cluster (azure-dev)
  • 1 Azure Scale Set with 20 active members that is your production cluster (azure-prod)
  • 1 AWS Autoscaling Group with 10 active members that is your AWS dev cluster (aws-dev)
  • 1 AWS Autoscaling Group with 30 active members that is your AWS production cluster (aws-prod)

Then you will create a system with at least 2 credentials, one for AWS, the other for Azure.

wizard-accounts.PNG

You can add as many accounts from as many cloud providers as you want. Each of the cloud providers is implemented as a (relatively) simple plugin, and if you add a new plugin (e.g. Google Cloud), you’ll be able to add credentials for that cloud, too.

Once all your accounts have been added, we’ll go through and perform the actual discovery. Once that completes, you’ll be presented with a topological overview of your cloud infrastructure:

topological-overview

The group color is used in the topology view to color the edges between nodes, and in the geography view to color connections between regions:

geography-view.PNG

And yes, you can totally spin the globe!

Use Your Groups

Grouping is pretty fundamental to Sunshower.io:

  • Access control is based on groups
  • Management operations can be performed on entire groups (yes, you can spin down every node in a cluster by stopping its group)
  • Deployments are based on groups

For instance, you can SSH into an individual machine or an entire group: just store your private key in our vault (the actual key is stored in the excellent HashiCorp Vault). It’s pretty typical to have a ton of identical machines, so executing the same series of commands produces identical output from each machine; we de-duplicate the output and provide you with the results. And hey, tailing logs across a bunch of machines has never been so easy!

ssh
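The de-duplication described above is easy to sketch. As an illustration only (not Sunshower’s actual implementation), grouping per-host output might look like:

```typescript
// Illustrative sketch only (not Sunshower's actual implementation):
// group per-host command output so each distinct output appears once,
// tagged with the hosts that produced it.
function dedupe(outputs: Map<string, string>): Map<string, string[]> {
  const byOutput = new Map<string, string[]>();
  for (const [host, out] of outputs) {
    const hosts = byOutput.get(out) || [];
    hosts.push(host);
    byOutput.set(out, hosts);
  }
  return byOutput;
}
```

Run `uptime` across twenty identical nodes and the report collapses to a single entry, with any misbehaving host called out by name next to its divergent output.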

Visual Deployments (aka “Strata”)

One feature that we’re really excited about is visual modeling of deployments. Basically, you start off with a series of commands (e.g. shell commands), and you compose them together:

visual-modeling

From there, you can select a deployer format (e.g. Docker or Packer, or even just the userdata section of an AMI, whatever you have plugins installed for) and voilà! You have a deployment that you can share with coworkers, publish to everyone, or share with a specific group. For instance, you might create a Stratus that

  • Installs Java
  • Installs NodeJS
  • Installs Gulp

And you can generate a Dockerfile for it, a Packer file, an Azure Image, or an AMI without changing a thing!
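To make the idea concrete, here’s a sketch of what the userdata-style output of that Java/NodeJS/Gulp Stratus might look like. The package names and repository URL are assumptions for a CentOS/RHEL image; a Dockerfile target would emit the same steps as RUN instructions instead:

```shell
#!/usr/bin/env bash
# Hypothetical generated provisioning script for the Stratus above.
set -euo pipefail

# Installs Java
yum install -y java-1.8.0-openjdk

# Installs NodeJS (via the NodeSource repository)
curl -sL https://rpm.nodesource.com/setup_8.x | bash -
yum install -y nodejs

# Installs Gulp
npm install -g gulp
```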

Visual Infrastructure Modeling for Systems

But let’s say you don’t have any existing infrastructure and you want to create some. Now, you can log into your various cloud providers’ consoles and spin up whatever you need, but what if you want to create some infrastructure and try it out across clouds? Enter our visual infrastructure modeling for Systems. You can quickly model your infrastructure and deploy it out to any supported cloud (below is the cluster structure for Sunshower.io’s deployment):

system-designer.png

You can also export the model to a variety of provisioners like AWS CloudFormation, Azure Resource Manager, or HashiCorp Terraform. We’re currently figuring out what generating Kubernetes manifests looks like, but you’ll be able to do that soon too.

Finally, Anvil for Cloud Optimization

All of the data that we collect about your infrastructure is used to build a model of your tasks. If you think of purchased infrastructure as a shipping container, it makes sense to buy the smallest shipping container that fits all of your packages. Anvil extends this analogy by letting you define new dimensions for your packages (think 5 or more), and we’ll figure out the smallest possible shipping container with those dimensions across any infrastructure for which there’s an infrastructure plugin. For instance, here’s an example of us spinning up a suboptimal configuration of resources and running Anvil on it:

anvil.png

It yields a much more compact (dense) configuration for your infrastructure; in fact, it typically yields an optimal configuration. You can even configure it to model packages based on their greatest historical dimensions (like peak hours) so that you’ll never under-provision again, even while saving a substantial amount on your infrastructure. The result is total cloud cost optimization.

Conclusion

Thanks for sticking around! I wanted to provide a list of Sunshower.io’s current features to give everyone a better idea as to how it’s used and what it can do for you. Many people will only need one or two of the features, and we’d like to get some feedback as to which might be the most valuable for your organization so that we can get them to you ASAP.

Installing OpenShift on ESXi and CentOS 7

At Sunshower, we’ve been happily using Docker Compose and Docker Swarm for development and deployment respectively. These technologies make it a snap to build and deploy code, and the effort involved in setting them up is quickly offset by their utility.

We’ll continue to use Compose for development, but for better or worse, the industry has spoken and declared Kubernetes the winner of the container orchestration wars. By this I mean that Swarm is not offered as a service by any of the major CSPs, but each of them either has or is working toward a turnkey Kubernetes offering. So, no sense in swimming upstream: we decided to create a Kubernetes deployment for Sunshower. However, a lot of the Kubernetes public cloud offerings are relatively expensive, which is a barrier for a self-funded startup like Sunshower. Fortunately, we received a very generous donation of hardware, to which I deployed OpenShift.

Kubernetes vs. OpenShift

OpenShift is pretty much Kubernetes with some extras that make it especially attractive if you’re going to manage your own infrastructure. We’re using it because it also supplies builds.

Getting Started

Configuring your infrastructure

Configuring your infrastructure is the most tedious and error-prone part of this exercise, but if you don’t get it right, it will bite you.

Infrastructure step 1: Create a base VM image

Download a CentOS ISO (minimal is fine) to ISO_PATH where ISO_PATH is some accessible location on your local hard drive. I was not able to get uploads to work with the ESXi web client, so you’ll need to use the older ESXi thick client. Select the following options:

  1. Hardware Compatibility (only if you’re interacting with ESXi through VMWare Workstation): Workstation 11.x
  2. Installer disc image (iso): ISO_PATH
  3. Virtual Machine Name: centos-base (or whatever)
  4. Processors: (at least 2 are required, whether that’s 2 processors, 1 core/processor or whatever)
  5. 8GB memory
  6. Network type: Bridged !important
  7. Defaults for I/O Controller types, disk type, select a disk
  8. Disk capacity: 50 GB

Once your VM is up and running, run:

yum update -y
reboot

When your VM comes back up, install some basics:

yum install -y open-vm-tools git docker wget
systemctl stop network  # if you're SSH'd in, you'll lose access. Do this through the VMware console
chkconfig network off
chkconfig NetworkManager on
systemctl start NetworkManager
nmcli dev connect ens33  # your bridged interface name may differ; check by running nmcli dev

Infrastructure step 2: Create the VM inventory

This is a bit of a chore since ESXi doesn’t even let you effing clone a VM. If you’re using Workstation, I’d recommend creating all the clones locally, then upload them to the ESXi host.

If you’re not using VMWare Workstation, manually clone each VM:

  1. Create the base VM (previous step)
  2. In the datastore you want to clone into, create a new folder with the VM’s name (e.g. openshift-cluster-manager)
  3. Copy every file from the base VM’s directory to the new folder except for the log files
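The copy step can be scripted on the ESXi host. This is a sketch with hypothetical datastore paths; adjust them to your environment:

```shell
# clone_vm_files SRC DST: copy a base VM's files into a new folder,
# skipping the log files.
clone_vm_files() {
    local src=$1 dst=$2 f
    mkdir -p "$dst"
    for f in "$src"/*; do
        case "$(basename "$f")" in
            *.log) continue ;;          # skip log files
            *)     cp "$f" "$dst"/ ;;
        esac
    done
}

# e.g. (hypothetical datastore paths):
# clone_vm_files /vmfs/volumes/datastore1/centos-base \
#                /vmfs/volumes/datastore1/openshift-cluster-manager
```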

For our installation, we’ll have 1 master and 3 workers. If you need HA, you need 3 or 5 masters and however many workers. If you need a production-ready cluster with dozens or hundreds of workers, I do consult =).

Infrastructure step 3: assign static IPs

If you’re using OpenStack/AWS/vSphere, or are running your own DNS server, this is an optional step. Since ESXi does not have any available mechanism for dynamically deploying new virtual machines, your installation will be pretty static, so pick a naming convention, pick an adequate network size, and assign each cluster-node’s interface MAC to an IP. This obviously doesn’t scale well, but eh, it’s fine for a local cluster. One day I’ll show you how to create something a little less hands-on with OpenStack.
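If your router or DHCP server can’t do MAC reservations, an alternative is to pin a static IP on each node itself via NetworkManager. The connection name, addresses, and DNS servers below are examples; substitute your own:

```shell
# Example: pin a static IP on the bridged interface (values are examples;
# the connection name is often the same as the device, e.g. ens33).
nmcli con mod ens33 \
    ipv4.method manual \
    ipv4.addresses 10.0.0.4/24 \
    ipv4.gateway 10.0.0.1 \
    ipv4.dns "10.0.0.1 8.8.8.8"
nmcli con up ens33
```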

Install OpenShift using the convenient installer script provided by Grant Shipley

Select an initiator node. This node will have the API server role, so select one of your masters (or your only master). Clone [Grant Shipley’s installer script](https://github.com/gshipley/installcentos) with

git clone https://github.com/gshipley/installcentos

Install it with:

echo "<node-ip> console.yourdomain.com" >> /etc/hosts  # hosts entries are "IP hostname"

export DOMAIN=yourdomain.com
export USERNAME=<username> #maybe administrator or something
export PASSWORD=<some super strong password>
cd installcentos && ./install-openshift.sh

This takes a while. Go get some coffee. Take a walk. If your installation node does not have a static IP, it will probably change due to the network reconfiguration this step performs and hork your installation.

If it completes successfully, visit console.yourdomain.com and you should see:

openshift-login

Install your cluster nodes

From your initiator node, copy your ssh key to each of the nodes you want to add to the cluster:

hosts=(os1.sunshower.io
os2.sunshower.io
os3.sunshower.io)

for i in "${hosts[@]}"; do
    ssh-copy-id "root@${i}"
done

Otherwise you’ll be typing a lot of passwords.

On the initiator node, edit /etc/ansible/hosts:

[OSEv3:children]
masters
nodes
new_nodes
etcd

[OSEv3:vars]
openshift_deployment_type=origin  # if you bought enterprise, this would be enterprise
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant  #Important!  The installer defaults to openshift-ovs-subnet for the nodes, but the master is running multitenant.  The openshift node process will fail to start without this
os_firewall_use_firewalld=true  #iptables sucks
osm_cluster_network_cidr=10.0.0.0/24  #change to whatever your network is
openshift_metrics_install_metrics=true  #optional

[masters]
console.sunshower.io  #or whatever your current hostname is

[etcd]
console.sunshower.io  # production installations would have several of these

[new_nodes]
os1.sunshower.io        openshift_schedulable=true  # All the DNS names of your nodes.  Make sure that they're either in your /etc/hosts file or your DNS server is correctly configured
os2.sunshower.io        openshift_schedulable=true
os3.sunshower.io        openshift_schedulable=true


Then, run:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml

These playbooks are installed by atomic-openshift-utils. After about 10 minutes, the installation process should complete and you should be able to run:

oc get nodes
NAME        STATUS    ROLES            AGE       VERSION
10.0.0.10   Ready     compute,master   1h        v1.9.1+a0ce1bc657
10.0.0.4    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.5    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.6    Ready     <none>           47m       v1.9.1+a0ce1bc657
[root@localhost ~]# kubectl get nodes
NAME        STATUS    ROLES            AGE       VERSION
10.0.0.10   Ready     compute,master   1h        v1.9.1+a0ce1bc657
10.0.0.4    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.5    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.6    Ready     <none>           48m       v1.9.1+a0ce1bc657
