The first thing to address here is how to define Application Lifecycle Management. ALM was, in some sense, born from the vendor community. Just as BSM was born at BMC, much of the contribution to ALM came from Borland and Microsoft as part of the development tools and platforms they were creating. As a result, the focus of ALM was on the software code part of the lifecycle: development up to the point of the build and deployment process. From there, the system gets passed over to operations management, governed by ITIL and other disciplines.
This limits ALM and makes it incomplete, creating a lot of potential issues. Why? When we deploy, we don't deploy only the software code; we deploy the environment: the software infrastructure pieces and then the application code on top of them. The application lifecycle has to be looked at in the context of the whole environment lifecycle. In a sense, ALM should be expanded to Environment Lifecycle Management (ELM) so that the process can deliver the quality and speed expected from software engineering.
Look at the Entire Lifecycle
The first aspect to look at is the entire lifecycle. The lifecycle starts with requirements definition, design, and architecture planning, then moves to development, testing, and finally deployment. But it doesn't stop at deployment. From there, you are dealing with the world of operations. ALM should be very tightly connected to this world, so the lifecycle also means the operation of the software system once it is in production.
Today, if you look at most of the tool vendors, ALM has one set of vendors, tools, and teams, while operations has different tools, different teams, and different disciplines. DevOps is now trying to connect development and operations, but ALM tools and techniques also need a closer connection and integration with the operations world. This is one of the biggest gaps in ALM today, and with the increasing pace of change and complexity, it raises the risk of failure for current ALM practice.
Agile Practices Drive ALM Toolsets
It is important to look at the trends in the industry that impact ALM practices. One is the pace of development. Agile development requires changes to the way the organization is structured and to the processes and tools it uses, and it demands a very different ALM approach than a traditional waterfall model, where you release once in a certain period based on well-defined, planned-ahead releases from a roadmap charting several years ahead. When you work in shorter cycles and introduce dynamic changes, many dynamic requirements enter your environment, and the way you handle them has to be very different.
There is a lot of information available on agile development practices and tools. The question remains, however: how do you handle the entire lifecycle under agile conditions, and how do you integrate your process when you start to work in an agile way? It is not just development that goes agile; it is also the operations side. Operations in an agile world is significantly less explored. There is less information about it, fewer tools, fewer practices, and less experience, which creates a conflict.
The pace of change is one thing that definitely impacts ALM. Look at what processes you have in operations: you still have the same ITIL and COBIT, which don't speak about agile. The development side of ALM knows how to go agile, but operations does not. So ALM still gets stuck on the operations side.
Complexity of Environments
The complexity of today's environments also plays a major role. The code is not the same software code we were developing 10 years ago. It's not just C++; now there's Ruby on Rails, Java, PHP, and so on, relying on a lot of components that you need when building such systems, which makes for very complex software infrastructure. Typically the ALM process focuses on the software piece, but the dependency on the infrastructure is now much higher: you depend on the database, the operating system, the messaging platform, and so on. Without understanding all these components and embedding them into the build process, you can't actually deliver a high-quality environment.
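As a rough sketch of what embedding the environment into the build could look like (the manifest format, component names, and versions below are hypothetical, not any specific tool's schema), the application can declare the infrastructure stack it depends on, and the build can validate the target environment against that declaration:

```python
# Hypothetical environment manifest: the application declares the full stack
# it depends on, not only its own code, so the build can validate it.
EXPECTED_ENVIRONMENT = {
    "runtime": {"name": "java", "version": "1.6"},
    "database": {"name": "postgresql", "version": "9.0"},
    "messaging": {"name": "activemq", "version": "5.4"},
    "os": {"name": "rhel", "version": "5.5"},
}

def validate_environment(actual: dict) -> list[str]:
    """Compare the discovered environment against the declared manifest."""
    problems = []
    for component, expected in EXPECTED_ENVIRONMENT.items():
        found = actual.get(component)
        if found is None:
            problems.append(f"missing component: {component}")
        elif found != expected:
            problems.append(f"{component}: expected {expected}, found {found}")
    return problems

if __name__ == "__main__":
    # In a real pipeline, `actual` would come from discovery or inventory tooling.
    actual = {
        "runtime": {"name": "java", "version": "1.6"},
        "database": {"name": "postgresql", "version": "8.4"},  # drifted
        "os": {"name": "rhel", "version": "5.5"},
    }
    for issue in validate_environment(actual):
        print("BUILD WARNING:", issue)
```

The point of the sketch is simply that the infrastructure dependencies become part of the deliverable and can fail the build, rather than being discovered at deployment time.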
Disruptive Technologies
Virtualization and cloud now impact ALM as well. For example, you can leverage these technologies for testing, increasing the diversity and flexibility of your test lab while making developers more independent, so they can manage their own servers rather than wait until a required server is provisioned. Virtualization and cloud also have a significant impact on the way you package your deliverable, and on the design and architecture of the applications.
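As a minimal sketch of that self-service idea (assuming a container runtime such as Docker is available locally; the image and port choices are only illustrative), a developer can stand up a disposable database for an integration test run instead of waiting for a server to be provisioned:

```python
import subprocess
import uuid

def start_test_database() -> str:
    """Start a disposable PostgreSQL container for a test run and return its name."""
    name = f"test-db-{uuid.uuid4().hex[:8]}"
    subprocess.run(
        ["docker", "run", "-d", "--name", name,
         "-e", "POSTGRES_PASSWORD=test",
         "-p", "5433:5432", "postgres:15"],
        check=True,
    )
    return name

def stop_test_database(name: str) -> None:
    """Tear the container down when the test run is finished."""
    subprocess.run(["docker", "rm", "-f", name], check=True)

if __name__ == "__main__":
    db = start_test_database()
    try:
        print(f"disposable test database '{db}' is running")
        # ... run integration tests against localhost:5433 here ...
    finally:
        stop_test_database(db)
```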
At the same time, virtualization and cloud introduce a number of challenges that impact the effectiveness of ALM. The challenges to successfully managing the process on virtual servers come from limited visibility into virtual machine content, dynamic resource allocation that constantly changes the physical topology, and the proliferation of virtual images and virtual machine sprawl.
On top of these virtualization issues, cloud adds a self-service automation layer. This layer exacerbates the management challenges because you need to address the following questions (a minimal verification sketch follows the list):
- how do you verify the correctness of automatic activities?
- how do you get visibility into the results of automatic actions?
- how do you ensure that the right actions are applied?
- how do you integrate automated and manual actions?
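One rough way to address the first two questions (the expected state and the discovery function below are assumptions for illustration, not any particular cloud platform's API) is to follow every automated provisioning step with an explicit verification of its result before the environment is handed over:

```python
# Hypothetical post-provisioning check: after a self-service automation step,
# verify that the action actually produced the expected state.
EXPECTED_STATE = {
    "instances_running": 3,
    "load_balancer_attached": True,
    "open_ports": {80, 443},
}

def query_provisioned_state() -> dict:
    """Placeholder for querying the cloud platform's inventory; values are invented."""
    return {
        "instances_running": 3,
        "load_balancer_attached": False,   # the automation silently skipped this step
        "open_ports": {80, 443, 22},       # an extra port was opened
    }

def verify(expected: dict, actual: dict) -> list[str]:
    """Report every attribute where the automated action diverged from intent."""
    findings = []
    for key, want in expected.items():
        got = actual.get(key)
        if got != want:
            findings.append(f"{key}: expected {want!r}, got {got!r}")
    return findings

if __name__ == "__main__":
    for finding in verify(EXPECTED_STATE, query_provisioned_state()):
        print("VERIFICATION FAILED:", finding)
```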
The introduction of the cloud will separate the development and operations teams even further. Self-provisioning based on the rollout of virtual images hides development activities from the Operations team, because Operations provides just the infrastructure, which the Development team then uses to set up application environments. Meanwhile, the Development team has limited visibility into Operations' services, which are provided as a catalog.
This adds another gap between the two sides, exacerbating the management challenges that come from not sharing a consistent view of the environment.
Transitioning between environments and managing those environments is part of the ALM process, and it becomes even more important when dealing with virtualization and cloud.
ALM Tools for Closing the Development/Operations Gap
Considering these trends and the gaps between development and operations, what can you expect from the ALM tools? First of all, we expect the tools to close these gaps in the process. There are tools for requirements management, project management, development, test management, software configuration management, and so on, but there are no tools that ensure control of the change, analysis of the change, and validation of the change along its entire path: not only over the development process, but also over what happens when the change is actually deployed. What happens to the changes that take place in production and operations, and how do they get reflected back into the pre-production steps of the ALM process? That is a toolset that is definitely missing. The existing tools for ALM, and for operations as well, don't provide integrated control of the change.
To reach this level of control of the change in a complex system of heterogeneous and dynamic environments, there are certain requirements to expect from the tools. The tools must establish control while maintaining agility. For this, they need to be able to deal with all the changes: not only software changes but also environment and software infrastructure changes. Tools should deal with all process changes, formal and informal; changes that are automated and changes that are done manually; changes that are authorized and changes that are unauthorized or unplanned. And tools should cover the entire environment, from the software through all layers of the underlying infrastructure stack.
Tools for the Entire Lifecycle and Environment
Also, for the entire lifecycle, addressing the gap between development and operations, tools should be involved from development to testing, to staging, to production, to DR, to the retirement of the system. What is very important here is that the tools get to the right level of detail. Take the tools available for software configuration management: any SCM tool can give you the exact line and the exact change that was made in the software code. Compared to this, on the environment side, tools like the CMDB (which are supposed to maintain this configuration information) are very limited in providing such a level of detail. You might know which configuration item has changed, but not necessarily which specific attribute changed. The parameter level, the most granular level of configuration, is where most of the risk of issues and incidents hides. These tools should be able to overcome the complexity automatically. Even for one system there is a lot of configuration information; today's systems run multiple applications on very complex infrastructure, possibly affected by millions of parameters and attributes, all of which need to be tracked and analyzed.
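As a simple sketch of what parameter-level visibility means (the configuration snapshots below are invented for illustration), comparing two environments attribute by attribute surfaces exactly which parameter changed, not just which configuration item:

```python
def diff_config(baseline: dict, current: dict, path: str = "") -> list[str]:
    """Recursively compare two configuration snapshots down to individual attributes."""
    changes = []
    for key in sorted(set(baseline) | set(current)):
        where = f"{path}/{key}" if path else key
        old, new = baseline.get(key), current.get(key)
        if isinstance(old, dict) and isinstance(new, dict):
            changes.extend(diff_config(old, new, where))
        elif old != new:
            changes.append(f"{where}: {old!r} -> {new!r}")
    return changes

if __name__ == "__main__":
    staging = {"app_server": {"heap_mb": 2048, "threads": 200},
               "database": {"max_connections": 150}}
    production = {"app_server": {"heap_mb": 2048, "threads": 400},
                  "database": {"max_connections": 150, "ssl": True}}
    for change in diff_config(staging, production):
        print(change)
```

Run against the invented snapshots above, this reports only the two attributes that actually differ, which is the granularity at which incidents tend to originate.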
Cutting Through Noise
These tools need to be able to cut through the noise and identify the information that will be valuable for a specific user and a specific step of the ALM and operations process. They should be able to deal with both the logical system architecture and the physical environment topology: across the transitions of the lifecycle the logical architecture remains the same, but the physical topology could be very different from one environment to the next. These tools also need to support the new disruptive technologies as well as traditional data center environments.
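One way to picture that separation (the topology data below is purely illustrative) is a single logical architecture mapped onto different physical layouts per environment; a tool that understands this mapping can compare environments at the logical level despite the physical differences:

```python
# One logical architecture shared by every environment ...
LOGICAL_ARCHITECTURE = ["web_tier", "app_tier", "database"]

# ... mapped onto very different physical topologies per environment.
PHYSICAL_TOPOLOGY = {
    "dev":        {"web_tier": ["dev-vm-01"],
                   "app_tier": ["dev-vm-01"],
                   "database": ["dev-vm-01"]},
    "production": {"web_tier": ["web-01", "web-02", "web-03"],
                   "app_tier": ["app-01", "app-02"],
                   "database": ["db-primary", "db-replica"]},
}

def hosts_for(environment: str, logical_role: str) -> list[str]:
    """Resolve a logical role to the physical hosts backing it in one environment."""
    return PHYSICAL_TOPOLOGY[environment].get(logical_role, [])

if __name__ == "__main__":
    for role in LOGICAL_ARCHITECTURE:
        print(role, "->", {env: hosts_for(env, role) for env in PHYSICAL_TOPOLOGY})
```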
ALM Tools Must Realize Value Rapidly
It is extremely important, at this agile pace of change, to be able to realize value quickly. If you make changes every day, you cannot wait a year for an implementation of the tools that are supposed to help you optimize this process. The tools you implement should keep the same pace as the changes you make and deliver results just as quickly.
I think the evolution that has happened in software engineering and in IT organizations has led to a new way of working, requiring changes to ALM processes, technologies, and tools. What I believe is that a new generation of tools is required to cope with these new trends. The existing tools are great, but they were designed around an old approach, with old concepts in mind: a well-defined process, a structured organization, a well-planned roadmap, and so on.
So the new set of tools, the ones that support agility, provide flexible control, and deliver the right amount of information, in spite of the complexity, to the user who needs it, are the tools we will see coming to the market in the years ahead.