So you’re ready to start looking at containers but don’t know where to begin? If you’re like most people, you start the adventure like everyone else: you Google around a bit, attend a few relevant shows, go to a few meetup groups, and so on. It’s the first set of steps in a long and familiar journey – one that seemingly repeats itself with every shift in technology. Initially, the DIY route makes sense – you and your team carve out the time and resources needed to tackle it. Pinpointing what to focus on and what to ignore is a real challenge when a technology evolves as rapidly as containers do, but that’s just how these cycles work. Open source’s benefits are immeasurable, but the reality is that documentation is often an afterthought, so you’re left sifting through random blog posts and outdated docs to make sense of it all.

After some heavy reading and research, reality sets in and you realize that the more you know, the less you actually know – and the goalposts are a lot further out than originally scoped. Orchestrators? Overlay networks? Persistent storage? Containers aren’t just there to replace a hypervisor – they represent a whole new IaaS model and introduce a different thought process for deploying applications. At this point, a real understanding of what it’ll take to operationalize containers sets in, and it’s not going to be trivial. From here, you can go in a couple of different directions: DIY or Diamanti.

As experienced Sherpas here in the land of containers, we at Diamanti would like to remind ambitious adventurers what the DIY journey looks like if you decide to go it alone. These challenges aren’t insurmountable, but they are numerous, and remember: there’s a cost at every step of the way – whether it’s your engineers’ time or money out of pocket.

  • Research: Figure out how all the moving pieces fit together – containers, orchestration, management, ecosystem integration, monitoring, and more. You’ll have to put real engineers on the initial homework and, eventually, on implementation.
  • Vendor Engagement: Containers, just like anything else, have to run on something, somewhere. You’ll need to get hold of servers and network equipment, and allocate space and power. Your friendly neighborhood VAR can help with this, and your local colo provider or facilities manager should be able to help with the physical slotting.
  • Install Equipment: The building blocks need to get physically racked and cabled up.
  • Base OS: A basic operating system needs to be loaded onto the servers. For containers, you’ll probably want a container-capable base OS – ideally loaded via some network-booting mechanism. Some people also use VMware to bootstrap so that they can run containers, but this is far from ideal: bootstrapping container services onto hypervisors is extremely inefficient, negating one of the key reasons for going to containers in the first place – bare-metal performance.
  • Configuration Management: Once you bootstrap the servers, you’ll want to be able to bring additional nodes online without any real manual configuration work. Docker/Kubernetes services should be defined and started as part of this process, along with any other management services.
  • Network/Storage: Docker defaults to NAT networking and doesn’t provide persistent storage out of the box. If you plan on keeping your operation small or within the confines of a personal device, that should be just fine. For everyone else, you’ll need to figure out how to make storage endure and how to give containers real network interfaces.
  • Container Orchestration: As you move toward the lightweight microservices model, service and container counts go up, and so do the requirements for tracking and directing them. The old spreadsheet method – or vSphere – isn’t going to cut it anymore, so this is where orchestrators come in. The problem is, there are so many of them: Kubernetes? Docker Swarm? Mesosphere? Something else? Each comes with its own pros and cons.
  • Network Overlay: If you’re running a production environment, odds are your containers will need real network interfaces. There are various overlay options here – Calico? Flannel? Weave? – each, again, with its own pros and cons.
  • Persistent Storage: How do you maintain storage resiliency when Docker, by default, doesn’t provide persistent storage? Again, there are third-party options to evaluate…
  • Clustering: If you’re going to run containers/orchestrators in a production environment, you’ll probably want to cluster for high availability/redundancy. How will you handle the insertion or removal of cluster members? How will you write your CM templates and bootstrapping mechanisms to quickly deploy new nodes with the prerequisite services preloaded?
  • Management: You’ll probably want a UI, performance profiles, and QoS controls if you want to truly manage the infrastructure with SLAs that can be delivered upon. How does one do this within a containers framework? Where are all of Docker or Kubernetes’ operations-focused tools?
  • Access Controls: For true multitenancy and the self-service features your end users will eventually demand, you’ll need to be able to control who has access to what resources.
  • Monitoring: How do you know who is using what resources? Are you able to monitor on a per-container basis vs just the host volume(s) or host network interface?
  • Run Applications: At some point, you’ll need to test whether or not your application will run within a container framework. You’ll also need to do some tuning here to understand how applications fit within the resource constraints of the host device.
  • Burn in Testing: Will your applications function under load, in a scaled-out fashion? Are all the interconnected pieces working properly?
  • Upgrade Trials: Typical build cycles like this often go on for weeks or months. In that time, major releases of Docker, Kubernetes, or other components will ship, and it’ll be a good opportunity to see what upgrading to a new build is like. Fingers crossed!
  • Tribal Knowledge: Once the house of cards has been built, step back and document everything that was done to build it. Praise the engineers that endured and delivered. Hope they never leave and as a failsafe, build a monitoring job that checks their LinkedIn pages for updates.
  • Support/Handoff: Train the people who will be monitoring and running this from day to day – usually the systems administration team and/or service-desk folks. Hopefully they can handle the occasional 3am alert, but keep your implementation engineers on speed dial just in case. Pray that there’s nothing broken in the actual open source code of your particular release. If it comes to that, it can’t hurt to have the operations team learn to read, write, and debug Go – and to keep both the #docker and #kubernetes IRC channels open.
  • Operationalize: Celebrate the end of a long development phase but always be vigilant about the fact that you are now on the hook to support the platform going forward. This doesn’t just mean being on call – but it also means continuously staying on top of requests to have the latest features in the container or orchestrator engines as they evolve. Think of the upgrade trials from earlier – but now out of the safety of a sandbox with actual users and production load mixed in.
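To make the persistent-storage step above concrete, here’s a minimal sketch of what Kubernetes storage configuration looks like once you’ve wired up a backend. All names, the image, and the 10Gi size are hypothetical placeholders:

```yaml
# Hypothetical example: a claim for persistent storage,
# plus a pod that mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx          # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/app
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
```

The claim only does something once a storage provisioner exists behind it – which is exactly the third-party plumbing the DIY route has to supply.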
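Similarly, the access-controls step eventually means writing Kubernetes RBAC policy by hand. A sketch, with the namespace, user, and role names all made up:

```yaml
# Hypothetical RBAC example: let user "jane" read pods
# in the "team-a" namespace, and nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```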
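And for the management/QoS step, stock Kubernetes does give you per-container compute controls via resource requests and limits – though network and storage QoS are not covered out of the box, which is part of the gap described above. A hypothetical pod spec:

```yaml
# Hypothetical example: compute QoS via requests/limits.
# Network and storage QoS have no equivalent in a stock install.
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: db
    image: postgres       # placeholder image
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "2Gi"
```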

Congratulations and welcome to the reality of building out and ultimately owning an infrastructure stack.

Here’s a handy printable diagram of what it takes to operationalize containers, for those who have never felt the pain of bringing a new infrastructure platform online – or who simply may have forgotten. It’s also helpful for anyone asking why the container environment isn’t production-ready yet. Warning: not for the faint of heart:


Of course, there is a better way to do all this.

To really understand what Diamanti does, it helps to go back in time a bit and understand why Docker is so hot right now. You see, Docker wasn’t the first to the containers game. Before it, Solaris had Zones and Linux had OpenVZ and LXC. They all enjoyed modest success, but it wasn’t until Docker appeared that containers really took off. Docker did something that no one had done before it – enable infrastructure as code via Dockerfiles, Docker Hub, and shared repositories. Developers could now make changes to containers via commits and push them into common repositories, complete with tagging and import/export functions that enabled portability. Configuration-management requirements were reduced, and infrastructure suddenly got a huge boost in speed, flexibility, and usability. This was a fantastic evolution of the technology, particularly for developers, who wanted self-service capabilities and didn’t necessarily need network interfaces or storage persistence.
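As a concrete illustration of that infrastructure-as-code shift, a whole container image is defined by a short, versionable text file. A hypothetical Dockerfile (the base image, file names, and commands are all placeholders):

```dockerfile
# Hypothetical Dockerfile: the image is defined as code,
# versioned and reviewed alongside the application it packages.
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Once built and tagged, the image can be pushed to a common repository and pulled anywhere – the portability that made Docker take off.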

Fast-forward to today: much of the core container and orchestrator work is still focused on developers’ needs, and the tools required to operationalize it are still evolving. This is where Diamanti comes in, covering everything from the metal to the container orchestrator and everything in between. All the core components needed to run a containerized environment are included, fully installed, configured, and supported by Diamanti.

Piecing it all together:

  • Docker and Kubernetes preloaded, preconfigured, and supported
  • Layer 2 network interfaces without a network overlay
  • Persistent storage
  • Clustered storage via NVMe over Ethernet (NVMoE)
  • QoS: Granular controls for compute, network, and storage
  • Per-container reporting
  • Role based access control
  • Supported upgrade process

Having been there ourselves, we feel your pain and know what it’s like to build everything from scratch, only to find that some of the pieces don’t quite fit. The Diamanti D10 platform is the only true turnkey, full-stack containers product on the market today. Ready to go out of the box and operations-focused, it’s for people who have better things to do than troubleshoot open source issues at 3am. When it’s time to move past point solutions and step up to a fully built container solution, Diamanti is here to help.