Installing OpenShift on ESXi and CentOS 7

At Sunshower, we’ve been happily using Docker Compose and Docker Swarm for development and deployment respectively. These technologies make it a snap to build and deploy code, and the effort involved in setting them up is quickly offset by their utility.

We’ll continue to use Compose for development, but for better or worse, the industry has spoken and declared Kubernetes the winner of the container orchestration wars. By this I mean that Swarm is not offered as a service by any of the major CSPs, while each of them either has or is working toward a turnkey Kubernetes offering. So, no sense in swimming upstream: we decided to create a Kubernetes deployment for Sunshower. However, a lot of the Kubernetes public cloud offerings are relatively expensive, which is a barrier for a self-funded startup like Sunshower. Fortunately, we got a very generous donation of hardware that I deployed OpenShift to.

Kubernetes vs. OpenShift

OpenShift is pretty much Kubernetes with some extras that make it especially attractive if you’re going to manage your own infrastructure. We’re using it because it also supplies builds.

Getting Started

Configuring your infrastructure

Configuring your infrastructure is the most tedious and error-prone part of this exercise, and if you don’t get it right, it will bite you later.

Infrastructure step 1: Create a base VM image

Download a CentOS ISO (minimal is fine) to ISO_PATH where ISO_PATH is some accessible location on your local hard drive. I was not able to get uploads to work with the ESXi web client, so you’ll need to use the older ESXi thick client. Select the following options:

  1. Hardware Compatibility (only if you’re interacting with ESXi through VMWare Workstation): Workstation 11.x
  2. Installer disc image (iso): ISO_PATH
  3. Virtual Machine Name: centos-base (or whatever)
  4. Processors: at least 2 cores are required, whether that’s 2 processors with 1 core each, 1 processor with 2 cores, or whatever
  5. 8GB memory
  6. Network type: Bridged (important!)
  7. Defaults for I/O Controller types, disk type, select a disk
  8. Disk capacity: 50 GB

Once your VM is up and running, run:

yum update -y
reboot

When your VM comes back up, install some basics:

yum install -y open-vm-tools git docker wget 
systemctl stop network    # if you're ssh'd in you'll lose access; do this through the VMWare console
chkconfig network off
chkconfig NetworkManager on
systemctl start NetworkManager
nmcli dev connect ens33 # You might have a different bridged interface name--check by running nmcli

Infrastructure step 2: Create the VM inventory

This is a bit of a chore since ESXi doesn’t even let you effing clone a VM. If you’re using Workstation, I’d recommend creating all the clones locally, then uploading them to the ESXi host.

If you’re not using VMWare Workstation, manually clone each VM by:

  1. Creating the base VM (previous step)
  2. Browsing the datastore you want to clone the VM to and creating a new folder with the VM’s name (e.g. openshift-cluster-manager)
  3. Copying every file from the base VM’s directory to the new folder except for the log files

For our installation, we’ll have 1 master and 3 workers. If you need HA, you need 3 or 5 masters and however many workers. If you need a production-ready cluster with dozens or hundreds of workers, I do consult =).

Infrastructure step 3: assign static IPs

If you’re using OpenStack/AWS/vSphere, or are running your own DNS server, this is an optional step. Since ESXi does not have any available mechanism for dynamically deploying new virtual machines, your installation will be pretty static, so pick a naming convention, pick an adequate network size, and assign each cluster-node’s interface MAC to an IP. This obviously doesn’t scale well, but eh, it’s fine for a local cluster. One day I’ll show you how to create something a little less hands-on with OpenStack.

Install OpenShift using the convenient installer script provided by Grant Shipley

Select an initiator node. This node will have the role of API server, so select one of your masters (or your only master). Clone [Grant Shipley’s installer script](https://github.com/gshipley/installcentos) with

git clone https://github.com/gshipley/installcentos

Install it with:

echo "console.yourdomain.com" >> /etc/hosts

export DOMAIN=yourdomain.com
export USERNAME=<username> #maybe administrator or something
export PASSWORD=<some super strong password>
cd installcentos && ./install-openshift.sh

This takes a while. Go get some coffee. Take a walk. If your installation node does not have a static IP, it will probably change due to the network reconfiguration this step performs and hork your installation.

If it completes successfully, visit console.yourdomain.com and you should see:

[screenshot: the OpenShift web console login page]

Install your cluster nodes

From your initiator node, copy your ssh key to each of the nodes you want to add to the cluster:

hosts=(os1.sunshower.io
os2.sunshower.io
os3.sunshower.io)

for i in "${hosts[@]}" do
    ssh-copy-id root@$(i)
done;

Otherwise you’ll be typing a lot of passwords.

On the initiator node, edit /etc/ansible/hosts:

[OSEv3:children]
masters
nodes
new_nodes
etcd

[OSEv3:vars]
openshift_deployment_type=origin  # if you bought enterprise, this would be enterprise
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant  #Important!  The installer defaults to openshift-ovs-subnet for the nodes, but the master is running multitenant.  The openshift node process will fail to start without this
os_firewall_use_firewalld=true  #iptables sucks
osm_cluster_network_cidr=10.0.0.0/24  #change to whatever your network is
openshift_metrics_install_metrics=true  #optional

[masters]
console.sunshower.io  #or whatever your current hostname is

[etcd]
console.sunshower.io  # production installations would have several of these

[new_nodes]
os1.sunshower.io        openshift_schedulable=true  # All the DNS names of your nodes.  Make sure that they're either in your /etc/hosts file or your DNS server is correctly configured
os2.sunshower.io        openshift_schedulable=true
os3.sunshower.io        openshift_schedulable=true


Then, run:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml

These playbooks are installed by atomic-openshift-utils. After about 10 minutes, the installation process should complete and you should be able to run:

oc get nodes
NAME        STATUS    ROLES            AGE       VERSION
10.0.0.10   Ready     compute,master   1h        v1.9.1+a0ce1bc657
10.0.0.4    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.5    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.6    Ready     <none>           47m       v1.9.1+a0ce1bc657
[root@localhost ~]# kubectl get nodes
NAME        STATUS    ROLES            AGE       VERSION
10.0.0.10   Ready     compute,master   1h        v1.9.1+a0ce1bc657
10.0.0.4    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.5    Ready     <none>           48m       v1.9.1+a0ce1bc657
10.0.0.6    Ready     <none>           48m       v1.9.1+a0ce1bc657


DevOps Without the DevOps Part 4: Finally, Containers!

In previous segments, we discussed how to collect project dependencies and how to use Maven, Gradle, and Spring’s dependency-management Gradle plugin to organize them in a maintainable and traceable fashion. In this post, we’re going to take that setup and create a clean, reproducible build using Docker.

Understanding Containerization

A containerized application’s lifecycle is composed of several steps:

  1. Defining the container
  2. Building the container
  3. Tagging the container
  4. Pushing the container to a registry
  5. Running the container

We’ll be using Docker as our containerization technology, but it’s not the only option. Ensure that it’s installed for your platform before you continue.

Note that these instructions are intended for Linux-based machines. Containers differ from VMs in a very important way, namely that containers share the OS kernel of their container host.

What this means is that if, say, my host operating system is Debian, based on Linux kernel 4.9.0.6-amd64 x86_64, then even if my container uses a different OS base (say, Ubuntu), the container and the host must be kernel-compatible (so, no Linux kernel 3.10.x-ARM, for instance).

These considerations are rarely an issue in practice. We use Debian-based containers because they’re similar to our dev environments and tooling, but Alpine is a really good option for many projects and has some advantages in production (e.g. resource utilization is anecdotally better).

Step 1: Defining the Container

In the same way that we identified the compile and runtime dependencies of our project earlier in this series, let’s think about what we need to build the project. On my development system, I typically build a project by building and publishing the project’s Bill-of-Materials POMs (mvn clean install -f bom), then building the project’s artifacts with Gradle and publishing them to my local Maven repository (gradle clean build publishToMavenLocal, or, with Gradle’s nifty shortcuts, gradle cle b pTML). So I obviously need Gradle and Maven installed in my container.

So, what we’re going to do here is:

  1. Install the correct version of Java
  2. Download Maven from the Maven project site and install it into the container
  3. Download Gradle from the Gradle project site and install it into the container

Let’s see if we can get rid of any of these steps by selecting the correct base container. Searching Docker Hub, I see that there are official OpenJDK images; openjdk:8u141-jdk fits the bill.

My Dockerfile becomes simply:

FROM openjdk:8u141-jdk
ENTRYPOINT /bin/bash

Breaking down these instructions:

FROM openjdk:8u141-jdk says, “ask the local Docker registry to find an image with ID openjdk:8u141-jdk and derive from that. If you can’t find that image locally, reach out to hub.docker.com and see if it’s there.”

Building the container pulls the base image, then executes all of the commands in the Dockerfile, which produces a new container:

Sending build context to Docker daemon  220.7kB
Step 1/2 : FROM openjdk:8u141-jdk
8u141-jdk: Pulling from library/openjdk
3e17c6eae66c: Already exists 
74d44b20f851: Already exists 
a156217f3fa4: Already exists 
4a1ed13b6faa: Already exists 
77980e5d0a6d: Already exists 
5458607a81d3: Already exists 
e34cf8338f42: Already exists 
2f3d3da5c56e: Already exists 
2ade7a861e3f: Already exists 
Digest: sha256:4b0c879909b729d67d13e5004f5564df85a5f9c1c3820c13e41151edf1f1b1c0
Status: Downloaded newer image for openjdk:8u141-jdk
 ---> 74c95c985a85
Step 2/2 : ENTRYPOINT /bin/bash
 ---> Running in a4bd5943abcd
Removing intermediate container a4bd5943abcd
 ---> b1c22add1692
Successfully built b1c22add1692 # <-- IMPORTANT, this is your container ID, referenced as $CID.

We can check this new container out by running docker run -it --rm $CID which will drop you into a shell that looks something like:

[screenshot: a shell prompt inside the newly built container]

Now, most base containers don’t have many programs installed. The JDK base containers do have Java, which is the important one.

[screenshot: the JDK’s java command available inside the container]

Now, we can install Maven and Gradle really quickly.

Installing the Prerequisites

I like wget for simple downloads, though curl would work just as well. Since we need to install Git anyway, add:

RUN apt-get update
RUN apt-get install -y git-core wget

To your Dockerfile and build:

Sending build context to Docker daemon  220.7kB
Step 1/4 : FROM openjdk:8u141-jdk
 ---> 74c95c985a85
Step 2/4 : RUN apt-get update
 ---> Running in 2bfa3d396ac6
Get:1 http://security.debian.org stretch/updates InRelease [94.3 kB]
Ign:2 http://deb.debian.org/debian stretch InRelease
Get:3 http://deb.debian.org/debian stretch-updates InRelease [91.0 kB]
Get:4 http://deb.debian.org/debian stretch Release [118 kB]
Get:5 http://deb.debian.org/debian stretch Release.gpg [2434 B]
Get:6 http://deb.debian.org/debian stretch-updates/main amd64 Packages [12.1 kB]
Get:7 http://deb.debian.org/debian stretch/main amd64 Packages [9530 kB]
Get:8 http://security.debian.org stretch/updates/main amd64 Packages [468 kB]
Fetched 10.3 MB in 1s (5807 kB/s)
Reading package lists...
Removing intermediate container 2bfa3d396ac6
 ---> 4984ada9d0c8
Step 3/4 : RUN apt-get install -y git-core wget
 ---> Running in 6e6a79b1c1ab
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  git-core
The following packages will be upgraded:
  wget
1 upgraded, 1 newly installed, 0 to remove and 64 not upgraded.
Need to get 801 kB of archives.
After this operation, 8192 B of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 wget amd64 1.18-5+deb9u1 [800 kB]
Get:2 http://deb.debian.org/debian stretch/main amd64 git-core all 1:2.11.0-3+deb9u2 [1410 B]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 801 kB in 0s (3242 kB/s)
(Reading database ... 29522 files and directories currently installed.)
Preparing to unpack .../wget_1.18-5+deb9u1_amd64.deb ...
Unpacking wget (1.18-5+deb9u1) over (1.18-5) ...
Selecting previously unselected package git-core.
Preparing to unpack .../git-core_1%3a2.11.0-3+deb9u2_all.deb ...
Unpacking git-core (1:2.11.0-3+deb9u2) ...
Setting up wget (1.18-5+deb9u1) ...
Setting up git-core (1:2.11.0-3+deb9u2) ...
Removing intermediate container 6e6a79b1c1ab
 ---> 67169937ccdd
Step 4/4 : ENTRYPOINT ["/bin/bash"]
 ---> Running in cd4e07152d23
Removing intermediate container cd4e07152d23
 ---> 2c1ab0b17981
Successfully built 2c1ab0b17981

Now, your container will have both wget and git installed.

Setting Environment Variables

If you define an ENV variable in Docker, the value of that ENV can either be passed into the container, or you can specify a default value (or both). We want to be able to reference (and change) both the Gradle version and the Maven version so that if we want to upgrade either, we just pass in new versions when we’re building the container and voilà!

# Environment Variables
ENV PROJECT_NAME workspace
ENV GRADLE_VERSION 4.3.1
ENV MAVEN_VERSION 3.5.2
ENV BASE_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin


Now, when we run the container, we have access to those environment variables:

echo $GRADLE_VERSION
4.3.1

Install Gradle

RUN mkdir -p /opt/build/tools/gradle #Create directory for gradle
RUN wget https://services.gradle.org/distributions/gradle-$GRADLE_VERSION-bin.zip -O /opt/build/tools/gradle.zip  # Download gradle from gradle.org 
RUN unzip -d /opt/build/tools/gradle /opt/build/tools/gradle.zip # Unzip gradle 
ENV GRADLE_HOME=/opt/build/tools/gradle/gradle-$GRADLE_VERSION/bin #Export gradle location as GRADLE_HOME

Now, one really sweet thing about Docker is that each of these commands defines a new layer. If the textual value of the command that builds a layer doesn’t change, then that layer is retrieved from the layer cache on subsequent builds. What this means is that the URL for the Gradle download could change and that still wouldn’t break our container builds, because the cached layer would simply be reused.

Install Maven

RUN mkdir -p /opt/build/tools/maven
RUN wget http://www-eu.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.zip \
-O /opt/build/tools/maven.zip
RUN unzip -d /opt/build/tools/maven /opt/build/tools/maven.zip
ENV MAVEN_HOME=/opt/build/tools/maven/apache-maven-$MAVEN_VERSION/bin

Once these have all executed, we have a container with Maven, Gradle, and Git installed and ready to use! In the next post, I’ll discuss how to store credentials securely and pass them into the container so that we can pull our project from source-control and build it.

Of course, if you don’t want to maintain your own, this base image is available on Docker Hub as sunshower/sunshower-base:latest.

Server-Sent Events with Undertow, Spring 5, and Resteasy 4

Resteasy 3.5 introduced Server-Sent Events (SSE), and there weren’t any good resources for showing how to get it up-and-running, so I thought I’d put together a quick how-to guide.

Add your dependencies

Up your org.jboss.resteasy:resteasy-jaxrs version to 4.0.0.Beta2. This should automatically pull in version 2.1 of the JAX-RS API unless you’ve specified a version of JAX-RS directly; if you have, upgrade it to 2.1. This provides access to the SSE components of JAX-RS 2.1 (SseEventSink, Sse, etc.).

Define an SSE method


@Path("test")
@Produces({MediaType.APPLICATION_JSON})
@Consumes({MediaType.APPLICATION_JSON})
public interface TestService {

  @GET
  @Path("{id}/events")
  @Produces(MediaType.SERVER_SENT_EVENTS)
  void subscribe(@PathParam("id") String id, @Context SseEventSink sink, @Context Sse sse);// It's ok to put JAX-RS annotations on the interface.  Recommended, in fact.

  @POST
  @Path("test")
  TestEntity save(TestEntity testEntity);

  @GET
  @Path("{value}")
  @Produces({MediaType.TEXT_PLAIN})
  String call(@PathParam("value") String input);
}

Implement that biz:

  public void subscribe(String id, SseEventSink sink, Sse sse) {
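    // 'service' is presumably an ExecutorService field on the resource implementation (it isn't
    // shown in the post), and the Thread here is only ever used as a plain Runnable by that executor.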
    service.execute(
        new Thread(
            () -> {
              try {
                sink.send(
                    sse.newEventBuilder()
                        .name("domain-progress")
                        .data(String.class, "starting domain " + id + " ...")
                        .build());
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "50%"));
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "60%"));
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "70%"));
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "99%"));
                Thread.sleep(200);
                sink.send(sse.newEvent("domain-progress", "Done."))
                    .thenAccept(
                        (Object obj) -> {
                          sink.close();
                        });
              } catch (final InterruptedException e) {
                e.printStackTrace();
              }
            }));
  }

Write a test-case:

 @Test
  void ensureSseWorks() throws InterruptedException {

    ResteasyWebTarget path =
        ((ResteasyWebTarget) webTarget).path(TestService.class).path("1/events");
    SseEventSource source =
        SseEventSource.target(path).reconnectingEvery(10, TimeUnit.SECONDS).build();
    try (SseEventSource s = source) {
      System.out.println("a");
      s.register(
          e -> {
            System.out.println("d");
            System.out.println(e.readData(String.class));
            System.out.println("e");
          },
              System.out::println);
      System.out.println("b");
      s.open();
      System.out.println("c");
      Thread.sleep(1000);
    }
  }

Experience failure

Failure 1

If you’ve done all that, it won’t work. The first error you’ll get is that your @Context SseEventSink is null. Unhork yourself by adding

  @Bean
  public SseEventSinkInterceptor sseEventSinkInterceptor() {
    return new SseEventSinkInterceptor();
  }

to your Spring configuration.

Failure 2

The second failure you’ll experience is on the client-side: No MessageBodyReader for “text/event-stream”. This is fixed by adding a

  @Bean
  public SseEventProvider sseEventOutputProvider() {
    return new SseEventProvider();
  }

to your Spring client configuration.

Experience success

Ahh, delicious success

a
b
c
d
starting domain 1 ...
e
d
50%
e
d
60%
e
d
70%
e
d
99%
e
d
Done.
e

Sunshower-Test

Plugging Sunshower again, you can get all this goodness by annotating your test-class with @io.sunshower.test.ws.EnableJAXRS if you have io.sunshower.test:test-ws:1.0.0-SNAPSHOT as a dependency.

Unit-Testing Complex External Dependencies

The question of how to unit-test complex external dependencies arises pretty frequently. This is near-and-dear to our hearts because we have many of them. Mocking out complex responses is tedious and error-prone, so I’ll tell you what we do instead.

1: The setup

Stratosphere has a contract for obtaining instances from a cloud service provider, viz.,


public interface ListInstancesOperation extends ProviderOperation<List<Instance>> {
      @Override
      public List<Instance> perform();

      @Override
      public Provider getProvider();

      @Override
      public Secret getSecret();
//...etc.

}

perform() will typically interact with the provider service endpoint. The body of perform is quite simple:


  public List<Instance> perform() {
    AmazonEC2 client = createClient();

    DescribeInstancesResult result = client.describeInstances();
    return toInstances(result.getReservations());
  }

This doesn’t look like a super-testable method, so what do we do?

2: Get the actual response and write it to a file

Before you write any code that interacts with an external dependency, you have to understand how that dependency behaves. I recommend keeping a set of IAM credentials in ~/.aws/credentials for testing. Once you do that, actually perform the request and see what you get back.

[screenshot: the DescribeInstancesResult returned by the EC2 endpoint]

Now, we serialize the response using Java’s default serialization mechanism to a file that we check into source-control.

DescribeInstancesResult result = client.describeInstances();
Objects.write(result, relativeToRoot("src/test/resources/ec2/list-instances.obj")); // our utilities for writing an object using serialization.
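
The Objects read/write helpers above are Sunshower utilities that aren’t shown in the post; here is a minimal sketch of equivalent helpers, assuming nothing more than Java’s built-in serialization (relativeToRoot is likewise just a path helper):

import java.io.*;

// Hedged sketch of serialization helpers along the lines of the Objects utility used above.
public final class SerializationHelpers {

  // Serialize any Serializable response to a file so it can be checked into source control.
  public static void write(Serializable value, File file) throws IOException {
    try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
      out.writeObject(value);
    }
  }

  // Read the canned response back during tests.
  @SuppressWarnings("unchecked")
  public static <T> T read(File file) throws IOException, ClassNotFoundException {
    try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
      return (T) in.readObject();
    }
  }
}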

3: Mock it real good

Recall that our method under test has three statements: one to create the client, one to make the request against the client, and one to map the results. The client’s describeInstances method is public, so we can mock that on a mocked client. We gave createClient default visibility so that we could stub it while not exposing it as part of our Operation API, and we put all of our actual mapping logic into a private method, whose behavior is what we want to test.
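
A rough sketch of the shape being described follows; the class and method names come from the post, while the client builder call, the internal-model constructor, and the field mapping are assumptions for illustration only:

import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Reservation;

import io.sunshower.stratosphere.core.topology.model.Instance;

public class EC2ListInstancesOperation {

  // Public entry point: the method under test.
  public List<Instance> perform() {
    AmazonEC2 client = createClient();
    return toInstances(client.describeInstances().getReservations());
  }

  // Package-private seam: a spy can stub this in tests without it becoming
  // part of the public Operation API.
  AmazonEC2 createClient() {
    return AmazonEC2ClientBuilder.defaultClient();
  }

  // Private mapping logic whose behavior the tests actually exercise,
  // indirectly, through perform() and the canned DescribeInstancesResult.
  private List<Instance> toInstances(List<Reservation> reservations) {
    List<Instance> instances = new ArrayList<>();
    for (Reservation reservation : reservations) {
      for (com.amazonaws.services.ec2.model.Instance ec2Instance : reservation.getInstances()) {
        instances.add(toInstance(ec2Instance));
      }
    }
    return instances;
  }

  private Instance toInstance(com.amazonaws.services.ec2.model.Instance ec2Instance) {
    // The vendor-to-internal-model field mapping is elided here; a no-arg
    // constructor on Instance is assumed purely for the sketch.
    return new Instance();
  }
}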

We can now set up our tests to mock out the external operation with a real result:


  private Secret secret;
  private AmazonEC2Client client;
  private EC2ListInstancesOperation operation;

  @BeforeEach
  void setUp() {
    secret = new Secret();
    operation = new EC2ListInstancesOperation(secret, "us-west-2", (AWS) AWS.getInstance());
    operation = spy(operation);

    client = mock(AmazonEC2Client.class);
    given(client.describeInstances())
        .willReturn(Objects.read("src/test/resources/ec2/list-instances.obj", true));
    given(operation.createClient()).willReturn(client);
  }


And test it as follows:

 @Test
  void ensureInstanceFirewallsAreCorrect() {
    List<Instance> perform = operation.perform();
    Instance instance = perform.get(0);
    assertThat(instance.getFirewalls().size(), is(1));
    Firewall firewall = instance.getFirewalls().iterator().next();
    assertThat(firewall.getName(), is("launch-wizard-2"));
    assertThat(firewall.getSecured().contains(instance), is(true));
  }

This approach works pretty well for external dependencies that don’t change frequently (like the public APIs of large cloud service providers), but less well for external dependencies that do. What are some approaches you use?

Mapping Many Properties Quickly

One of the challenges in writing abstraction software like Sunshower is that we have to map a ton of vendor-specific properties into our data-model. I had considered writing a mapping language to transform, say, an EC2 Instance or Azure VM into one of our generic Sunshower compute instances, but decided against it because we just dump all the generic vendor properties like spotInstanceRequestId into properties on our internal model analog, and writing a compiler to do that is for big companies.

But how to avoid individually mapping each darn property by hand? Writing code like:

    instance.addProperty(
        new Property(
            Property.Type.String,
            "ami-launch-index",
            "aws.i18n.ami-launch-index",
            String.valueOf(ec2instance.getAmiLaunchIndex())));

for each of exactly 100 hojillion properties across AWS’s, Azure’s, GCE’s, etc. data models is a breeding ground for ennui and bugs.

Attempt 1

Java’s Beans API (java.beans.Introspector) makes introspecting Java beans super easy. There’s also a little-known feature of Matcher.replaceAll/replaceFirst that allows you to reference a regular-expression capture group in the replacement string, so I whipped up:





PropertyDescriptor[] propertyDescriptors =
        Introspector.getBeanInfo(com.amazonaws.services.ec2.model.Instance.class, Object.class)
            .getPropertyDescriptors();

    instance.setProperties(
        Stream.of(propertyDescriptors)
            .flatMap(
                t -> {
                  Property.Type type = resolveType(t.getPropertyType());
                  if (type == null) {
                    return Stream.empty();
                  } else {

                    Object value = ReflectionUtils.invokeMethod(t.getReadMethod(), ec2instance);

                    return Stream.of(
                        new Property(
                            type,
                            t.getDisplayName().replaceAll("(.)(\\p{Upper})", "$1-$2").toLowerCase(),
                            "aws.i18n."
                                + t.getDisplayName()
                                    .replaceAll("(.)(\\p{Upper})", "$1-$2")
                                    .toLowerCase(),
                            value == null ? null : String.valueOf(value)));
                  }
                })
            .collect(Collectors.toList()));

Which got me what I wanted:

    instance
        .getProperties()
        .forEach(
            t -> {
              System.out.println(
                  String.format(
                      "Name: %s, key: %s, value: %s", t.getName(), t.getKey(), t.getValue()));
            });
  }
Name: role, key: role, value: io.sunshower.stratosphere.core.topology.model.Instance
Name: aws.i18n.architecture, key: architecture, value: x86_64
Name: aws.i18n.client-token, key: client-token, value: sunsh-WebSe-1KKFOLRIA853V
Name: aws.i18n.ebs-optimized, key: ebs-optimized, value: false
Name: aws.i18n.ena-support, key: ena-support, value: true
Name: aws.i18n.hypervisor, key: hypervisor, value: xen
Name: aws.i18n.image-id, key: image-id, value: ami-39595240
Name: aws.i18n.instance-id, key: instance-id, value: i-05b1e0984260d51dd
Name: aws.i18n.instance-lifecycle, key: instance-lifecycle, value: null
Name: aws.i18n.instance-type, key: instance-type, value: t2.micro
Name: aws.i18n.kernel-id, key: kernel-id, value: null
Name: aws.i18n.key-name, key: key-name, value: sunshower-io
Name: aws.i18n.platform, key: platform, value: null
Name: aws.i18n.private-dns-name, key: private-dns-name, value: ip-172-31-12-114.us-west-2.compute.internal
Name: aws.i18n.private-ip-address, key: private-ip-address, value: 172.31.12.114
...etc.
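
The kebab-case keys come from that capture-group trick; in isolation it looks like this (a small, self-contained example):

// "$1-$2" re-inserts the two captured groups with a dash between them,
// turning camelCase property names into kebab-case keys.
String key = "amiLaunchIndex"
    .replaceAll("(.)(\\p{Upper})", "$1-$2")
    .toLowerCase();
// key is now "ami-launch-index"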

But it wasn’t very pretty or maintainable. So, refactoring:

Attempt 2



public class Properties {

  public static <T> void map(
      Class<T> type,
      T instance,
      Class<?> bound,
      PropertyAwareObject<?> target,
      PropertyMappingConfiguration cfg)
      throws IntrospectionException {

    PropertyDescriptor[] propertyDescriptors =
        Introspector.getBeanInfo(type, bound).getPropertyDescriptors();

    target.setProperties(
        Stream.of(propertyDescriptors)
            .filter(cfg::accept)
            .flatMap(t -> cfg.map(t, instance))
            .collect(Collectors.toList()));
  }
}

public class ProviderPropertyMappingConfiguration implements PropertyMappingConfiguration {

  private final String prefix;

  public ProviderPropertyMappingConfiguration(String prefix) {
    this.prefix = prefix;
  }

  @Override
  public Property.Type resolveType(PropertyDescriptor descriptor) {

    Class<?> propertyType = descriptor.getPropertyType();

    if (Boolean.class.equals(propertyType)) {
      return Property.Type.Boolean;
    }
    if (isIntegral(propertyType)) {
      return Property.Type.Integer;
    }
    if (String.class.equals(propertyType)) {
      return Property.Type.String;
    }
    return null;
  }

  @Override
  public boolean accept(PropertyDescriptor propertyDescriptor) {
    Class<?> propertyType = propertyDescriptor.getPropertyType();
    return Boolean.class.equals(propertyType)
        || String.class.equals(propertyType)
        || isIntegral(propertyType);
  }

  @Override
  public String mapKeyName(PropertyDescriptor descriptor) {
    return descriptor.getDisplayName().replaceAll("(.)(\\p{Upper})", "$1-$2").toLowerCase();
  }

  @Override
  public String mapName(PropertyDescriptor descriptor) {
    return prefix
        + ".i18n."
        + descriptor.getDisplayName().replaceAll("(.)(\\p{Upper})", "$1-$2").toLowerCase();
  }

  @Override
  public String mapValue(PropertyDescriptor propertyDescriptor, Object instance) {
    Object result = ReflectionUtils.invokeMethod(propertyDescriptor.getReadMethod(), instance);
    return result == null ? null : String.valueOf(result);
  }

  private boolean isIntegral(Class<?> propertyType) {
    return Integer.class.equals(propertyType)
        || int.class.equals(propertyType)
        || long.class.equals(propertyType)
        || Long.class.equals(propertyType);
  }

  @Override
  public <T> Stream<Property> map(PropertyDescriptor propertyDescriptor, T instance) {
    Property.Type type = resolveType(propertyDescriptor);
    return type == null
        ? Stream.empty()
        : Stream.of(
            new Property(
                type,
                mapKeyName(propertyDescriptor),
                mapName(propertyDescriptor),
                mapValue(propertyDescriptor, instance)));
  }
}

Allowing us to easily map any properties:


Name: role, key: role, value: io.sunshower.stratosphere.core.topology.model.Instance
Name: aws.i18n.ami-launch-index, key: ami-launch-index, value: 0
Name: aws.i18n.architecture, key: architecture, value: x86_64
Name: aws.i18n.client-token, key: client-token, value: sunsh-WebSe-1KKFOLRIA853V
Name: aws.i18n.ebs-optimized, key: ebs-optimized, value: false
Name: aws.i18n.ena-support, key: ena-support, value: true
Name: aws.i18n.hypervisor, key: hypervisor, value: xen
...etc.

DevOps without the DevOps part 3: Don’t Buy the Hype

I’ve been asked whether this series really pertains to DevOps. The criticism, as I interpret it, is that DevOps is a conceptual framework for creating high-functioning development teams, whereas the concepts and processes outlined here are related to creating a build with specific tools.

The point at hand seems to be that DevOps, as practiced by its adherents and evangelists, provides a universal template for whatever ails your development organization. In the DevOps utopia, managers, PMs, engineers, and QA all coexist in complete harmony to produce reliably phenomenal software on time and within budget. Only, that’s not right: in this interpretation, engineers and QA are coalesced into a single group. And managers and PMs? Isn’t that a lot of overhead for healthy, DevOps-practicing shops? In this construction, DevOps can become whatever you want it to be, or feel like it should be, because it has no grounding in actual processes.

What I’m getting at is that DevOps can’t be some abstract “template” for “building software factories.” There’s no such thing. What there is is a body of collective knowledge built from decades of trial and error, and everyone who tells you otherwise is trying to sell you something that you probably don’t need. Sure, containers are great. Kubernetes is great. CI/CD software is probably pretty solid. But an uncomfortable truth is that you can get results every bit as good with Make, Cron, and Bash, because the important thing in writing software is the knowledge of how it all fits together.

And that’s what this series is about: taking common tools and sharing our experience with them and what works well for us, and what doesn’t. You can derive some pretty generalizable truths from this process, like “You need a sane dependency-management process. Here’s what one looks like. There are other valid processes. The important thing is to have one.”

Unmarshalling generic properties with custom logic in JAXB.

I’m not entirely sure the idea behind this post will ever be useful to anyone, and certain as only a reformed sinner can be that it is not wise. Originally we had a polymorphic JAXB Property type, defined as:


public class PropertyElement<U, T extends PropertyElement<U, T>> extends AbstractElement<T> {

  @XmlAttribute String key;

  @XmlAttribute String name;

  @XmlAnyElement(lax = true)
  private U value;

...etc.
}

Where we wanted private U value to be overridable by any subclass, provided that the actual runtime type of U exposed a public, static valueOf method that accepted a string and returned the value represented by the string. We did this because it worked for every Java primitive, plus most of the value types that I could think of. Naturally, this didn’t really work well with anything, so we just created an enumeration of possible types and forsook the notion of dynamically registering new property elements.
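
For reference, the convention being relied on is the one the primitive wrappers already follow (Integer.valueOf(String), Boolean.valueOf(String), and so on); a hypothetical custom value type would have had to look something like this to participate:

// Hypothetical value type: it could have participated in the scheme only because
// it exposes a public, static valueOf(String) factory method.
public final class Temperature {

  private final double celsius;

  private Temperature(double celsius) {
    this.celsius = celsius;
  }

  public static Temperature valueOf(String value) {
    return new Temperature(Double.parseDouble(value));
  }

  @Override
  public String toString() {
    return String.valueOf(celsius);
  }
}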

Abandoning this was a slightly bitter pill to swallow as it somewhat constrained the extensibility of Sunshower’s core data-model. But meh I tell you, because I spent hours fruitlessly and foolishly attempting to overcome the profound limitations of Java’s erased generics, only for my folly to haunt me for weeks longer. However, should some soul wish to soldier past, here’s what we did:


@Getter
@Setter
public class PropertyElement<U, T extends PropertyElement<U, T>> extends AbstractElement<T> {


  @XmlAnyElement(lax = true)
  private U value;

  private static transient Method valueOf;

I can hear some distant screaming. Possibly it’s me. That is verily a reference to a java.lang.reflect.Method. It gets worse. The excellent EclipseLink MOXy library, which we prefer over Jackson due to, you know, standards, allows us to define a void afterUnmarshal(Unmarshaller u, Object parent) method, which is invoked after MOXy has finished unmarshalling the object at hand. We figured that we didn’t need to write the actual value since, at runtime, its type is known to MOXy and JAX-RS, and sure enough we didn’t. Reading the object, on the other hand… (hangs head in shame):


  protected void doUnmarshal() {
    try {
      if (valueOf != null) {
        this.value = (U) valueOf.invoke(this, ((XMLRoot) value).getObject());
      }
    } catch (Exception ex) {
      if (ex instanceof RuntimeException) {
        throw (RuntimeException) ex; // rethrow unchecked exceptions unchanged
      }
      throw new RuntimeException(ex);
    }
  }
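
The doUnmarshal above is presumably invoked from the afterUnmarshal callback mentioned a moment ago; a minimal sketch of that wiring, with the delegation assumed:

  // Standard JAXB/MOXy callback (javax.xml.bind.Unmarshaller), invoked once this
  // element has been unmarshalled. Delegating to doUnmarshal() here is an assumption;
  // the post only shows doUnmarshal itself.
  void afterUnmarshal(Unmarshaller unmarshaller, Object parent) {
    doUnmarshal();
  }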

And we detected the actual type of the argument by introspecting the current type:


  public PropertyElement() {
    this.valueOf = configure(getClass());
  }

 protected static Method configure(Class<?> type) {
    if (valueOf != null) {
      return valueOf;
    }
    final ParameterizedType superclass = (ParameterizedType) type.getGenericSuperclass();
    final Object vt = superclass.getActualTypeArguments()[0];

    if (vt.getClass().equals(Class.class)) {
      Class<?> valueType = (Class<?>) vt;
      return Stream.of(valueType.getDeclaredMethods())
          .filter(t -> t.getName().equals("valueOf") && t.getParameterCount() == 1)
          .filter(
              t -> {
                final Class<?>[] ptypes = t.getParameterTypes();
                final Class<?> ptype = ptypes[0];
                return String.class.equals(ptype) || Object.class.equals(ptype);
              })
          .findFirst()
          .orElseThrow(
              () ->
                  new IllegalArgumentException(
                      "Type does not supply a public, static method valueOf() accepting a string"));
    }
    return null;
  }

In a way, I’m sort of proud of this monster. Dr. Frankenstein would probably understand. On the other hand, our sins surely caught us, prompting a reasonable refactor to exorcise this demon.

Fix your indexes, homeslice

Database performance is the principal thing; therefore, undonk your indexes.
~Probably Confucius

Working with databases is one of the worst parts of building applications. There’s the setup, then there’s migrations, then there’s security, then there’s testing and on and on and on until you just can’t anymore. But, eventually, someone cobbles together a solution that encompasses at least a few of these things, and then you never touch it. Ever. It is forbidden to you. You will ruin everything in the application if you change even one thing. Everything will die.

But one day, something goes wrong. You’re getting transaction timeouts. Users are complaining that it’s taking 40 minutes to log into your UI (this happened once). Your application can’t handle all the data your users are feeding it.

So, someone profiles it and finds that it’s not your application, it’s the database. I mean, it’s pretty much always the database unless Keith is overusing hashmaps and locks again, but discovering which part is pretty tricky. Is it indexes? Is it disks? Is it the transaction log? Is it locks? Is it all of them? Who knows?

Fixing the code is at least a multi-month proposition that requires some poor sap go back and look at the code that interacts with your database, potentially donking the whole biz for everyone, not just this customer. The people who can actually fix it are generally not the ones who volunteer for that hell, and the people who volunteer will usually just make it worse.

So, you do the sensible thing and buy a bigger database server with faster disks. It helps for a while, then it just falls over again. You’ve got to fix this.

So, you perform an investigation, and discover that you’re using either a database sequence ID, or a UUID, or a hash function (I have seen this), and here’s the choose-your-own-misadventure part.

Database sequence ID

Your data is insertion-ordered, which is good for indexing. Heck, you can even use BRIN indexing. But you’re making a round-trip to the database for every insertion, and there’s a network hop and either a lock or a single thread somewhere in there. Usually there’s no escaping that.

UUID

Insertions are random. You got page-splits homeskillet. Like, everywhere. And these aren’t cheap. Your average page density is like, 0.0000000001%. Your index is like a million-page phonebook with one number per page. Oh, yeah, and you gotta maintain that B-Tree or whatever on disk because that sucker is way too big to fit into memory. Additional disk hits, baby! No BRINs for you!

Not to mention you’re probably storing the value as a string if my experience with these things is generalizable, which means you’re probably using 36 bytes/ID instead of the 16 actually required. Plus your IDs have a character set that they don’t need, and god help you if that default changes between versions or someone changes it.
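
If you are stuck with UUIDs for the moment, at least store the 16 raw bytes rather than the 36-character string; a minimal sketch:

import java.nio.ByteBuffer;
import java.util.UUID;

public final class UuidBytes {

  // Pack a UUID into the 16 bytes it actually needs.
  public static byte[] toBytes(UUID id) {
    ByteBuffer buffer = ByteBuffer.allocate(16);
    buffer.putLong(id.getMostSignificantBits());
    buffer.putLong(id.getLeastSignificantBits());
    return buffer.array();
  }

  // And unpack it again when reading rows back out.
  public static UUID fromBytes(byte[] bytes) {
    ByteBuffer buffer = ByteBuffer.wrap(bytes);
    return new UUID(buffer.getLong(), buffer.getLong());
  }
}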

Cryptographic hash (content-based addressing)

This one has all the problems of a UUID, plus some. Like, I’ve seen 128-byte (that is truly “byte” because they’re storing it as a string) MD5sum IDs used in large systems.

Ok, ok. So you stored them as a byte array; that only eliminates the size problem.

But here’s the kicker: if you’re using one of these techniques, sooner or later your implementation will probably just be swapped for another of the problematic ones I just mentioned. Hax.

Use Flake! (Just do it)

At Sunshower, being ourselves the suckers who will eventually have to undonk our database, we decided to get ahead of the problem. The first post we ever did here was about Flake IDs, and now we have a well-tested, high-quality, MIT-licensed implementation that you can just use. If you’re already using UUIDs stored as byte arrays, drop this biz right in.

Step 1: Add our common library to your dependencies

  1. Make sure Nexus Snapshots are enabled by adding https://oss.sonatype.org/content/repositories/snapshots to your Maven repositories

  2. Add io.sunshower.persist:persist-api:<version> (version is currently 1.0.0-SNAPSHOT)–we’ll get a release soon (TM), but this API is totes stable.

  3. Create you a Flake Sequence. I’d recommend 1 per table for very high-scale systems since you can only generate 10,000/sequence/second. Pretty simple:


import io.sunshower.common.Identifier;
import io.sunshower.persist.Identifiers;
import io.sunshower.persist.Sequence;

Sequence<Identifier> sequence = Identifiers.newSequence(true); //'false' would have the sequence API throw an exception if you requested more IDs than you could generate in a given timespan, ~10k/sec/sequence.  Otherwise, the API blocks until the counter resets.

Then, use it however. For instance, to use it with JPA/Hibernate:


@Entity
public class MyFlakeEntity {

  @Id
  private byte[] id;

}


In your database schema, just store them as a byte-array. We use bytea in Postgres which incurs 1 additional byte of overhead per row. Meh.

Swank ACLs with Flake IDs

If you really want declarative ACL goodness coupled with delicious DB goodness, and you’re using Spring Security, pull in our service-security library at io.sunshower.service:service-core:<version> (still 1.0.0-SNAPSHOT), then add these to your Spring configuration:

@EnableGlobalMethodSecurity(
    prePostEnabled = true,
    jsr250Enabled = true,
    securedEnabled = true)
public class MySecurityConfiguration { // whatever your configuration is here



  @Bean
  public MutableAclService jdbcAclService(
      JdbcTemplate template, LookupStrategy lookupStrategy, AclCache aclCache) {
    return new IdentifierJdbcMutableAclService(template, lookupStrategy, aclCache, "<SCHEMA>");
  }

 @Bean
  public AclCache aclCache(
      @Named("caches:spring:acl") Cache cache, // replace with your own cache.  This can just be a concurrent hashmap implementation.  We like Ignite
      PermissionGrantingStrategy permissionGrantingStrategy,
      AclAuthorizationStrategy aclAuthorizationStrategy) {
    return new SpringCacheBasedAclCache(
        cache, permissionGrantingStrategy, aclAuthorizationStrategy);
  }

@Bean
  public LookupStrategy aclLookupStrategy(
      DataSource dataSource,
      AclCache aclCache,
      AclAuthorizationStrategy aclAuthorizationStrategy,
      PermissionGrantingStrategy permissionGrantingStrategy) {
    return new IdentifierEnabledLookupStrategy(
        "<SCHEMA>", dataSource, aclCache, aclAuthorizationStrategy, permissionGrantingStrategy);
  }



  @Bean
  public AclAuthorizationStrategy aclAuthorizationStrategy(GrantedAuthority role) {
    return new MultitenantedAclAuthorizationStrategy(role);
  }

  @Bean
  public PermissionGrantingStrategy permissionGrantingStrategy(AuditLogger logger) {
    return new DefaultPermissionGrantingStrategy(logger);
  }
}

Then, drop this schema into your migrations:

CREATE TABLE <SCHEMA>.acl_sid (
  id        BYTEA        NOT NULL PRIMARY KEY,
  principal BOOLEAN      NOT NULL,
  sid       VARCHAR(100) NOT NULL,
  CONSTRAINT unique_uk_1 UNIQUE (sid, principal)
);

CREATE TABLE <SCHEMA>.acl_class (
  id    BYTEA        NOT NULL PRIMARY KEY,
  class VARCHAR(100) NOT NULL,
  CONSTRAINT unique_uk_2 UNIQUE (class)
);

CREATE TABLE <SCHEMA>.acl_object_identity (
  id                 BYTEA PRIMARY KEY,
  object_id_class    BYTEA   NOT NULL,
  object_id_identity BYTEA   NOT NULL,
  parent_object      BYTEA,
  owner_sid          BYTEA,
  entries_inheriting BOOLEAN NOT NULL,
  CONSTRAINT unique_uk_3 UNIQUE (object_id_class, object_id_identity),
  CONSTRAINT foreign_fk_1 FOREIGN KEY (parent_object) REFERENCES <SCHEMA>.acl_object_identity (id),
  CONSTRAINT foreign_fk_2 FOREIGN KEY (object_id_class) REFERENCES <SCHEMA>.acl_class (id),
  CONSTRAINT foreign_fk_3 FOREIGN KEY (owner_sid) REFERENCES <SCHEMA>.acl_sid (id)
);

CREATE TABLE <SCHEMA>.acl_entry (
  id                  BYTEA PRIMARY KEY,
  acl_object_identity BYTEA   NOT NULL,
  ace_order           INT     NOT NULL,
  sid                 BYTEA   NOT NULL,
  mask                INTEGER NOT NULL,
  granting            BOOLEAN NOT NULL,
  audit_success       BOOLEAN NOT NULL,
  audit_failure       BOOLEAN NOT NULL,
  CONSTRAINT unique_uk_4 UNIQUE (acl_object_identity, ace_order),
  CONSTRAINT foreign_fk_4 FOREIGN KEY (acl_object_identity)
  REFERENCES <SCHEMA>.acl_object_identity (id),
  CONSTRAINT foreign_fk_5 FOREIGN KEY (sid) REFERENCES <SCHEMA>.acl_sid (id)
);

And you can totally use Spring Security’s annotation-driven security! For instance:


 @Override
  @PreAuthorize("hasPermission(#id, 'io.sunshower.stratosphere.core.vault.model.Secret', 'DELETE')")
  public Secret delete(Identifier id) {
    Secret s = super.delete(id);
    getEntityManager().flush();
    return s;
  }

Also, if you want to use your ACLs in JPQL (or HQL or whatever), we’ve mapped your entities for you. Pull in io.sunshower.core:core-api:1.0.0-SNAPSHOT and you’ll find the following classes:

  1. io.sunshower.model.core.auth.ObjectIdentity
  2. io.sunshower.model.core.auth.SecuredObject
  3. io.sunshower.model.core.auth.SecurityIdentity

If you need multitenancy and security groups (RBAC), that’s a topic for another post, but we have that, too.

To grant a set of permissions:


  @Override
  public <T extends Persistable> void grantWithCurrentSession(
      Class<T> type, T instance, Permission... permissions) {
    final ObjectIdentity oid = new ObjectIdentityImpl(type, instance.getId());
    Sid sid = new PrincipalSid(session.getUsername());
    MutableAcl acl;
    try {
      acl = (MutableAcl) aclService.readAclById(oid);
    } catch (NotFoundException ex) {
      acl = ((MutableAclService) aclService).createAcl(oid);
    }
    for (Permission permission : permissions) {
      acl.insertAce(acl.getEntries().size(), permission, sid, true);
    }
    ((MutableAclService) aclService).updateAcl(acl);
  }
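
A hypothetical call site, assuming the Secret entity from the earlier @PreAuthorize example satisfies the Persistable bound and using Spring Security ACL’s stock BasePermission constants:

// Grant the current session's principal read and delete on a freshly persisted Secret.
grantWithCurrentSession(
    Secret.class,
    secret,
    BasePermission.READ,
    BasePermission.DELETE);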

To query all of the objects belonging to a user:

select e from Entity e
join e.identity oid
where oid.owner.username = :username;

Which simply requires the mapping:

  @OneToOne(fetch = FetchType.LAZY)
  @JoinColumn(name = "id", insertable = false, updatable = false)
  private ObjectIdentity identity;

  public ObjectIdentity getIdentity() {
    return identity;
  }

In summary, you don’t have to choose between cool features and robust ACL/RBAC support and database performance with Sunshower. We’re happy to do some of that heavy lifting–and it’s all free!

Aurelia @containerless and custom events

One thing that just got me about Aurelia’s @customElement used in conjunction with @containerless is that it doesn’t propagate events.

For instance, I had tried to use:

//view-model
@containerless
@customElement('tag-panel')
export class TagPanel {
     el: HTMLElement;
     //etc
     dispatch() : void {
        let e = createEvent('saved', this.property);
        this.el.dispatchEvent(e);
     }
}

With view markup (Pug)

template
   button(ref="el", click.delegate="dispatch()") 

But I noticed that the event wasn’t propagating. It turns out that the correct thing to do in this case is to inject the DOM element into your view-model, and then dispatch the event from there:

//view-model
@containerless
@inject(Element)
@customElement('tag-panel')
export class TagPanel {

     constructor(private el: Element) {
     }

     dispatch() : void {
        let e = createEvent('saved', this.property);
        this.el.dispatchEvent(e);
     }
}

And don’t reference el from your view:


template
   button(click.delegate="dispatch()") 

DevOps without the DevOps part 2: The Structure

I’m back! We’ve decided to go ahead with Sunshower full time, so expect updates much more regularly here!

Last time, we looked at getting a simple build dockerized. Using the Go platform made this pretty simple for us from a dependency perspective, but a lot of you are using a dependency resolution tool like Gradle, Maven, Crate, or Ivy. This post will detail how to configure Maven and Gradle so that your dependencies are manageable and consolidated–a necessary prerequisite for any sane build/release process.

The Base Project

Sunshower.io has quite a few individual projects, each of which has at least several sub-projects. The first project that we need to build is sunshower-devops. This project contains:

  1. Docker container definitions
  2. Bill-of-material POMs that are used by each of the sunshower.io projects
  3. Various scripts bundled with our Docker images.

Recall that last time, the first thing I recommended was that you aggregate all of your dependencies. We needed that information because it allows us to build a bill-of-materials for our project. This is important because it lets us understand clearly what our project pulls in, which in turn enables us to manage our dependencies in a revisionable and deterministic fashion. Let’s look at what one of our bill-of-materials POMs looks like:


<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                      http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>io.sunshower.env</groupId>
  <artifactId>persist-dependencies</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <parent>
    <groupId>io.sunshower.env</groupId>
    <artifactId>env-aggregator</artifactId>
    <relativePath>../pom.xml</relativePath>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <name>Sunshower Persistence Dependencies</name>

  <scm>
    <url>https://github.com/sunshower-io/sunshower.io</url>
  </scm>

  <properties>
    <hibernate.version>5.1.10.Final</hibernate.version>
... other properties
  </properties>


  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-entitymanager</artifactId>
        <version>${hibernate.version}</version>
      </dependency>
... Other dependencies
    </dependencies>
  </dependencyManagement>


</project>

Basically, this is just a standard Maven POM file with a structure that is convenient for declaring dependencies. The first thing to note is that dependencies are declared within a <dependencyManagement> tag. This means that POM files that inherit from this one, or import it, will not automatically depend on the dependencies declared within; rather, if they explicitly declare a dependency that appears in this POM, they will inherit its configuration as it appears in this declaration. For instance, if I import sunshower-env:persist-dependencies and then declare org.hibernate:hibernate-entitymanager in my importing POM, I will get the ${hibernate.version} version declared in persist-dependencies without having to redeclare it.

Basically, what we’re going for is this:

  1. We create bill-of-material (BOM) POMs for each category of dependency. This is optional, but I like it because these suckers can get huge otherwise.

  2. If we have commonality between our BOM POMs (and we will), we pull it up into an aggregator POM.

  3. We import each of our BOM POMs into our parent pom (sunshower-parent)

  4. Every subproject in our system will have its own BOM POM that derives from sunshower-parent

  5. Each Gradle file for each project uses Spring’s dependency-management Gradle plugin to import its BOM POM
  6. Voilà! If we add a dependency, that addition is recorded in Git. We can see exactly what we’re pulling in for any release (and go back to a previous POM if we need to)

Visually:

[diagram: sunshower-parent-pom.PNG, the sunshower-parent BOM POM hierarchy]

Now, say I want to use hibernate-entitymanager in sunshower-base:persist:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>io.sunshower.base</groupId>
        <artifactId>bom</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../</relativePath>
    </parent>
    <groupId>io.sunshower.base</groupId>
    <artifactId>bom-imported</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <name>Sunshower.io Imported Bill-Of-Materials</name>
    <url>http://www.sunshower.io</url>

    <properties>
        <env.version>1.0.0-SNAPSHOT</env.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-entitymanager</artifactId>
        </dependency>
    </dependencies>

</project>

Then, I simply import that into my build.gradle file for whichever project depends on hibernate-entitymanager:


    dependencyManagement {
        imports {
            mavenBom("io.sunshower.base:bom-imported:${version}")
        }
    }

Now, in that project (or any subproject thereof), I can just add hibernate-entitymanager to the dependencies block:


dependencies {
    implementation 'org.hibernate:hibernate-entitymanager'
}

Conclusion

While this may seem like overkill, I like it because it scales quite well. It’s easy to audit (assuming you enforce the process), maintainable (dependencies are grouped together sensibly), and forces you to think about what you’re bringing in. Sometimes incompatibilities can be prevented simply by looking through the dependency lists and determining whether two versions are compatible. Finally, it gives a consistent view of the world to everyone in the project: if everyone contributing to the project follows the rules, you won’t get one component consuming one version of a dependency, and another component consuming another, which is a common source of bad builds IME.