DevOps – IT Buzzword Or Philosophy That Empowers Businesses To Benefit?

DevOps has been a hot topic among programmers, CTOs and CIOs for several years now. Approaches to DevOps, and the tools that support it, have matured to the point that they’re now commonplace in teams and companies all around the world.

DevOps enables tech companies and companies that design and build digital products to shorten their time to market and improve the robustness and scalability of their products. And DevOps isn’t just relevant to tech companies – with most companies now devoting significant portions of their time and resources to developing proprietary digital products, DevOps is relevant to most organizations in the world. But what is DevOps? Where has it come from and what does it mean for the way companies build software? Marcin Dąbrowski, CIO of ITMAGINATION, explains.

Amazon Web Services (AWS) defines DevOps as:

… the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.

In essence, DevOps puts forward a way of building digital products (Dev being an abbreviated form of Development) that is closely tied to their Operation (Ops). This demands that responsibility for building and then maintaining the product is in the same hands.

In short, the DevOps message to teams that build software is: don’t just build the product, be responsible for its implementation and maintenance.

This is in stark contrast to the culture that prevailed at the end of the 20th century and start of the 21st century, when there were often clear divisions between those who developed software and those who provisioned the infrastructure and were responsible for the maintenance of production environments. Collaboration between the ‘Dev’ and ‘Ops’ teams was typically limited to the communication of technology requirements such as “we’re going to need two machines connected in a cluster with Windows and an IIS server installed”, followed by action from the IT administration or infrastructure team (IT Ops).

The handover from Dev to Ops was similarly siloed, with Developers handing over their ready packages for software deployment to the IT Ops team, who would be responsible for deploying, maintaining and monitoring the tool’s performance on the expensive IT infrastructure that the organization had amassed and was maintaining.

So what’s changed since then? Who knocked down the wall and how has DevOps come to be one of the key success factors for software development and deployment? Based on our experience, ITMAGINATION has identified three main reasons for the rise of DevOps – agile, microservices and cloud.

The rise of DevOps: attributed to the rise of agile, microservices and cloud

The appearance of three of the biggest tech buzzwords of the last decade in the same heading is no coincidence. The emergence of these methodologies and technologies has been crucial to establishing DevOps as the ‘go to’ approach to software development and maintenance.

Agile

Agile is underpinned by the idea of short delivery cycles, incremental delivery and collaboration by multi-disciplinary project teams. One of the key principles is to deliver new functionalities to production within every sprint (typically lasting two weeks). In this way, agility means enabling the product to constantly develop but also adapt to (possibly) changing stakeholder needs and market conditions. It all sounds great, but it requires a highly competent and mature team.

Continually committing new functionality to the production environment also requires a mastery of automation, which is a key aspect of DevOps. Put simply, if you’re expected to be adding new features every day or every week, you’re going to want to be able to do it with minimal effort. And it’s not just about building the features and functions – they’ll need to be tested, deployed, monitored and maintained, all of which would typically take time and divert attention and resources from the task of pushing your product forward. Automating as much as possible enables the focus to be on building the best product possible.
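
To make this concrete, here’s a minimal sketch of the kind of ‘pipeline glue’ a team might script for itself – the pytest test suite and the build.sh and deploy.sh scripts are hypothetical placeholders, not part of any specific toolchain:

    import subprocess
    import sys

    def run(cmd):
        """Run one pipeline step and abort if it fails."""
        print("-->", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"Step failed: {' '.join(cmd)}")

    # A toy build-test-deploy pipeline: each stage runs only if the
    # previous one succeeded, so broken builds never reach an environment.
    run(["pytest", "tests/"])        # automated test suite
    run(["./build.sh"])              # package the application (hypothetical script)
    run(["./deploy.sh", "staging"])  # ship to a staging environment (hypothetical)

In practice a CI/CD server runs a pipeline like this on every commit, which is exactly the ‘minimal effort’ described above.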

Agile also puts focus on accountability. Most of us will recall having witnessed or taken part in a product demo that went wrong because of some kind of ‘behind the scenes’ technical error or a need to restart a machine. With agile, and the need to frequently and regularly demonstrate progress to stakeholders, there is no place for such hiccups. If there is a need to configure and operate servers, operating systems or networks, the agile team should have such skills on board.

It’s from this requirement that the increasingly popular role of DevOps Engineer has emerged. DevOps Engineers are professionals who understand the context of what is being built and are able to make the appropriate technical resources – servers, networks, etc. – available in the quickest, most-efficient way possible, often by relying on a constantly evolving suite of automation tools. Unlike their predecessors, these engineers work side by side with developers as part of the project team and so are able to support the rapid and frequent shipment of new iterations and features.

Microservices

DevOps Engineers make use of many services in their daily work. Microservices have emerged as a result of changes to the way organizations look to build systems and products. When building a multi-layered (e.g. three layers – visual layer, business logic and data) but monolithic architecture, where everything would be delivered as a single package, it would be relatively easy to agree requirements with an IT administrator.

You’d simply specify the hardware you need, how it should be configured and any applicable network settings. After that, the deployment would be relatively simple. You could request a specific type of server for your application (e.g. IIS, JBoss, Apache Tomcat, etc.) and the tools that you’d need installed (e.g. .NET Framework, the appropriate version of Java and any required libraries) and the rest of the responsibility would be with the IT administrator.

In this new way of working, where every system is composed of many individual components – microservices – each of which can be developed and integrated individually, it’s no longer so simple. Unfortunately, an increase in complexity has been the price to pay for increased flexibility, robustness and scalability. Fortunately, technology companies have developed entire suites of tools that help break down this complexity, enable frequently performed tasks to be automated and, importantly, decouple applications from the environments in which they will be deployed.

Chief among these tools are containers and container orchestrators, but there are many other pieces to this puzzle. Databases, caches, API gateways, queuing and message-processing systems … the list goes on. And as it is the development team that takes decisions about which tools and technologies to use and how to deploy them, it’s only natural that they take responsibility for their deployment, configuration and integration. It’s only with this end-to-end accountability that expectations around flexibility, scalability and robustness can be fulfilled.
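
To make ‘individual components’ tangible, here’s a toy sketch of a microservice in Python, using only the standard library – the service name and port are illustrative, and a real team would use a proper framework and package the service as a container:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class OrderServiceHandler(BaseHTTPRequestHandler):
        """A toy 'order service': one small, independently deployable component."""

        def do_GET(self):
            if self.path == "/health":
                # Container orchestrators poll endpoints like this one to
                # decide whether an instance should be restarted or replaced.
                body = json.dumps({"status": "ok"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), OrderServiceHandler).serve_forever()

A real system would run dozens of such services, each built, deployed and scaled independently – which is precisely where the extra complexity comes from.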

Cloud

The cloud – specifically public cloud – adds an additional layer to be considered. Very often, the microservices described above are available to configure ‘out of the box’. AWS, for example, makes 175 (and counting) different services and products available to consumers of its cloud services. Sounds great, right? But building working systems from all of these building blocks requires extensive knowledge and, again, it’s on the Developers to take decisions about how best to build the product.

These days, to create all of the required components from scratch and then deploy them to a procured Infrastructure-as-a-Service setup is inefficient. At the same time, decisions about which components to use can be influenced by a variety of factors, such as cost (different services have – often very – different prices), flexibility (e.g. is a managed service more convenient, or does it mean losing control of key variables?), and robustness and scalability (some cloud services have built-in mechanisms to replicate and deploy instances in different regions, which help ensure uptime and enable rapid expansion).

It’s thus impossible (or at least impractical) to consider Development and Operations as distinct functions – the two must go hand in hand and be dealt with by one team. That team’s responsibility must extend from the build phase, through deployment, all the way to monitoring the product, taking remedial action (e.g. fixing and restarting services in case of failure) and scaling it (e.g. in response to increased demand).

Public cloud provides one key additional attribute – availability. These days, almost any type of technical resource is available via the cloud with just a few clicks (or with a few commands or lines of code). A virtual machine with Windows installed and 8 GB of RAM? Consider it done. A network group with specific addressing and traffic settings? Coming right up. Configuration of a service to send push notifications to mobile devices? Right away. And because all of this is available publicly, on demand and driven by scripts, it lends itself ideally to being automated. This explains the rising popularity of CI/CD (Continuous Integration / Continuous Delivery) and Infrastructure as Code (IaC).
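
As an illustration of the ‘few lines of code’ point, here’s a hedged sketch using AWS’s boto3 SDK to launch the Windows VM described above – the AMI ID and key pair name are placeholders, and AWS credentials are assumed to be configured locally:

    import boto3

    # Assumes credentials are already configured (e.g. via `aws configure`).
    ec2 = boto3.resource("ec2", region_name="eu-west-1")

    # t3.large provides 8 GB of RAM; the AMI ID below is a placeholder -
    # look up a current Windows Server AMI for your region first.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder Windows Server AMI
        InstanceType="t3.large",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",            # placeholder key pair name
    )
    print("Launched:", instances[0].id)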

DevOps Tools

It’s often said that a good programmer should be lazy. It’s that laziness that motivates the programmer to automate all of the repeatable tasks in his or her role. These tasks are present in every sprint and happen – probably several times – each day. Programming time is a hot commodity, so time spent on repeatable, automatable tasks is not time spent well. It’s often worth devoting an hour to configuring a tool that can perform specific tasks at the required intervals so that it saves time in the future.

The number of DevOps tools available has risen dramatically in recent years. Some of them have become established as ‘go to’ or industry standard tools within their specific DevOps domains, of which there are many. Here are some of the most-important domains and the most-popular tools within them:

SCM (Source Control Management) – the code that we write needs to be stored somewhere. Ideally, it should be available to all key stakeholders and should be organized correctly. After all, code represents an investment and a company asset – its safe storage implies stability and security. Git has been widely accepted as the industry-standard version control system and underpins leading tools such as GitLab, GitHub, Bitbucket and Azure DevOps.

CI (Continuous Integration) – When more than one programmer is working on a specific project (which is most of the time), the code must be continuously integrated into the shared codebase so that it can be checked for errors and consistency. Within the CI domain, GitLab CI/CD and Azure Pipelines have emerged as the most-popular tools.

Automated Testing – Regular testing of all key aspects of an application’s functionality – both individual components and how they operate as a whole – is not just good practice, it’s essential. Much of your testing process can be automated with tools that periodically run tests and report back on whether everything is working or not. Popular tools for this purpose are SoapUI and Selenium.
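
To give a flavour of what such a check looks like, here’s a minimal browser smoke test using Selenium’s Python bindings – the URL and the expected page title are placeholder values:

    from selenium import webdriver

    # Requires a local Chrome install; recent Selenium releases resolve
    # the matching ChromeDriver automatically via Selenium Manager.
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # placeholder URL
        # A simple smoke test: did the page we expect actually load?
        assert "Login" in driver.title, f"Unexpected title: {driver.title}"
        print("Smoke test passed")
    finally:
        driver.quit()

A test runner can execute scripts like this on a schedule or after every deployment and report failures back to the team.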

CD (Continuous Deployment) – Requires a set of tools that will take the artefacts from a system (either an external source or a repository of artefacts built using CI) and deploy them to specific environments (such as development, test, production, etc.). Tools such as Ansible, Chef and Puppet can be used for configuration management, while AWS CloudFormation and Terraform are popular go-to tools for defining deployments in code (Infrastructure as Code, aka IaC). Terraform is a good platform-agnostic choice as it enables deployment to any cloud platform or even to on-premise datacenter infrastructure.
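
For a taste of scripted Infrastructure as Code, here’s a hedged sketch that asks AWS CloudFormation to provision a stack from a template via the boto3 SDK – the template file and stack name are placeholders, and credentials are assumed to be configured:

    import boto3

    cloudformation = boto3.client("cloudformation", region_name="eu-west-1")

    # The template is a placeholder; in practice it would declare the
    # servers, networks and services the application needs.
    with open("network-stack.yaml") as f:
        template_body = f.read()

    cloudformation.create_stack(
        StackName="demo-network-stack",  # placeholder name
        TemplateBody=template_body,
    )

    # Block until AWS reports that every resource has been provisioned.
    waiter = cloudformation.get_waiter("stack_create_complete")
    waiter.wait(StackName="demo-network-stack")
    print("Stack created")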

These days, most new applications are deployed as containers, so Kubernetes (K8s) and other tools that enable engineers to work with containers (such as OpenShift) have recently been added to the list of must-haves for DevOps Engineers. And when it comes to public clouds, managed services like Azure Kubernetes Service (AKS) and Amazon Elastic Kubernetes Service (EKS) are increasingly seen as the tools of choice.
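
To show how engineers script against a cluster, here’s a minimal sketch using the official Kubernetes Python client – it assumes a kubeconfig has already been set up locally (e.g. by kubectl):

    from kubernetes import client, config

    # Load cluster credentials from the local kubeconfig file.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # List every pod in the cluster - the kind of quick health overview a
    # DevOps Engineer might fold into a monitoring or clean-up script.
    for pod in v1.list_pod_for_all_namespaces().items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")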

In addition to all of the tools listed above, there’s an increasing number of tools that enable IT professionals to monitor systems, collate and analyze logs, and react to a variety of events. The range of tools is rich, regardless of whether AWS, Azure or Google Cloud is your platform of choice.
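
As a trivial illustration of that reactive side, here’s a toy Python sketch that follows a log file and flags error lines – the file path and the alert action are placeholders for whatever a real monitoring stack would do:

    import time

    LOG_PATH = "/var/log/app/service.log"  # placeholder path

    def alert(line: str) -> None:
        """Placeholder hook - a real setup might page an on-call engineer."""
        print("ALERT:", line.strip())

    # Follow the file like `tail -f`, reacting to new error lines as they appear.
    with open(LOG_PATH) as log:
        log.seek(0, 2)  # jump to the end of the file
        while True:
            line = log.readline()
            if not line:
                time.sleep(1)
                continue
            if "ERROR" in line:
                alert(line)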

What are you doing about DevOps?

DevOps is no longer an IT buzzword. DevOps is about adding speed and confidence to our ability to react to changes in business and technology. Technology continues to advance at lightning speed and businesses need to be able to adjust course, explore new opportunities and scale quickly and easily. They need to be able to do this with confidence. DevOps, and the suite of tools that support it, makes this possible.

DevOps and ITMAGINATION

If you want your technology to keep up with your business ambitions, you need strong DevOps capabilities. Talk to ITMAGINATION to learn how you can boost your organization’s DevOps capabilities.
