“But it works on my machine” is a phrase that has gained notoriety and become a famous meme in the developer community. With everyone running different operating systems, CPU architectures, and software configurations, sharing applications posed a technical conundrum:
How can we all share the same specs?
Since its launch in early 2013 by dotCloud, a young French startup, Docker has been the go-to platform for digitally packaging software. Docker bundles application files into portable units called containers, and it is often credited with the prevalence of container technology.
And what are containers exactly?
Containers are simply running instances of images, the packaged-up software itself. When an image is built, the dependencies and their exact versions are all bundled together. Engineers no longer have to worry about who is running which operating system or which versions of dependencies are installed.
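As a minimal sketch (the base image, filenames, and entry point here are illustrative assumptions, not tied to any particular project), an image is described by a Dockerfile, and a container is what you get when you run it:

```dockerfile
# Illustrative Dockerfile: pins the runtime and dependency versions
# so every machine builds the exact same environment.
FROM node:18-alpine          # fixed OS + Node.js version
WORKDIR /app
COPY package*.json ./
RUN npm install              # dependencies resolved inside the image
COPY . .
CMD ["node", "server.js"]    # hypothetical entry point
```

Running `docker build -t my-app .` produces the image; `docker run my-app` starts a container from it, identical on every host.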
Now, for many companies these containers are incredibly useful, especially for deploying a client-facing application. However, as the number of containers increases and the product scales with more and more features, keeping track of, overseeing, and managing the containers becomes arduous.
That is where Kubernetes comes in. Kubernetes, its name stemming from the ancient Greek word for “helmsman”, is a container orchestration tool. Funnily enough, Kubernetes does have features quite similar to a captain’s duty on a ship:
Management, navigation, and security.
Through Kubernetes, one can navigate across all of their containers and maintain clusters (groups of nodes running containerized workloads) through updates to the nodes. Management comes into play through creating, removing, and upgrading workloads over their individual life cycles. Kubernetes also secures clusters by supporting encryption at rest, which encrypts Secret resources in its highly available key-value store, etcd.
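As a sketch, encryption at rest is enabled by pointing the API server's `--encryption-provider-config` flag at an `EncryptionConfiguration` file like the one below (the key name and secret are placeholders):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                    # encrypt Secret objects in etcd
    providers:
      - aescbc:
          keys:
            - name: key1           # placeholder key name
              secret: <base64-encoded-32-byte-key>   # placeholder
      - identity: {}               # fallback for reading unencrypted data
```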
Prometheus is the industry standard when it comes to monitoring Kubernetes clusters, or any microservices for that matter. It provides hardware-usage metrics along with other important time-series metrics that are useful for optimization. Unfortunately, Prometheus has one pitfall: the Prometheus Query Language (PromQL). This deceptively simple query language lets users build powerful queries for graphs, but when the syntax looks like:
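A typical per-pod CPU-usage query (an illustrative example with placeholder label values) might read:

```promql
sum(rate(container_cpu_usage_seconds_total{namespace="default", container!=""}[5m])) by (pod)
```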
It is no wonder engineers have such a tough time learning it. With such a steep learning curve, building queries frankly becomes a pain in the ass.
The Prometheus client offers a lot of features. What it doesn’t offer is a robust, customizable dashboard that easily distinguishes and filters applications from one another. Users have to bolt additional complex components onto their PromQL queries just to see their metrics separated out by application. On top of that, there is PromQL’s steep learning curve: forcing someone who just wants to monitor their microservice metrics to learn PromQL just feels mean. So wouldn’t it be nice if there were a way to isolate these problems?
When our product was still in its ideation phase, we spent a lot of time designing First M8 to be simple and accessible for those with minimal DevOps experience. We wanted to build it with organizations looking to stand up DevOps teams in mind.
So what is First M8? Here at OSLabs, my team and I found a way to abstract away the horrors of PromQL. Through an intuitive Electron-based user interface, our application lets users drag and drop popular PromQL queries into input fields. Users start off by inputting the URIs of their respective Prometheus instances. Then, by simply selecting a query and moving it into the designated input field, the need to type out long, exasperating queries disappears.
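Under the hood, a dashboard like this talks to Prometheus over its HTTP API (`/api/v1/query` is Prometheus's documented endpoint for instant queries). The helper below is an illustrative sketch of how such a request URL can be built from a user-supplied instance URI, not First M8's actual code:

```typescript
// Illustrative sketch: build a Prometheus instant-query URL from a
// user-supplied base URI. buildQueryUrl is a hypothetical helper, not
// part of First M8's codebase.
function buildQueryUrl(baseUri: string, promql: string): string {
  // Drop trailing slashes so we never emit "//api/v1/query".
  const base = baseUri.replace(/\/+$/, "");
  // PromQL contains characters like '[' and '{' that must be escaped.
  return `${base}/api/v1/query?query=${encodeURIComponent(promql)}`;
}
```

Fetching that URL returns a JSON body whose `data.result` array holds the time-series samples the charts are drawn from.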
Let’s talk about how First M8 works
Let’s first start off by downloading First M8 onto your local machine. You can do so by forking and cloning our repository on GitHub. Then, in your terminal run:
- npm install
- npm run build
Depending on your OS…
- npm run package-mac-arm (for arm 64 MacOS)
- npm run package-mac-intel (for intel-based Mac)
- npm run package-win-64 (for Windows)
In your First-M8 folder, navigate to the release-builds folder and double-click FirstM8.app (macOS) or FirstM8.exe (Windows). Once the application has been built, please have a Prometheus instance spun up and ready.
The next step is to launch the app! Once the app is opened, click on the Settings tab and fill out the Name, IP Address, and Port fields.
Once the configurations have been submitted:
Open the Prometheus Instances dropdown and select the Prometheus instance you want displayed.
Afterwards, press the Dashboard button, then New Dashboard Chart, to get started with your metric inputs. This is where the fun begins! You no longer have to type out PromQL queries. Instead, drag and drop the queries into their designated input fields.
Next, after configuring your set up, press Save and feast your eyes on the wondrous graphs of your time series metrics from your PromQL query!
These metrics are useful for quantifying CPU, GPU, and other hardware usage to make sure that your clusters and microservices are running optimally and efficiently. And with the ability to navigate between instances, keeping track of each instance’s metrics has never been easier.
So what’s next on the list?
At the moment, First M8 is in its Alpha stage and is being launched by the OSLabs tech accelerator program. We have a lot of completed features, but we are still actively working on making First M8 even more streamlined for the future. If you have any strategies you think would benefit the product, feel free to contribute to the repository or reach out to any of the team members! Also, check out our GitHub repo and make sure to click the star button to follow future iterations!