Moving legacy applications to containers

Introduction

Many organizations have had successful initial public cloud projects, but these are primarily new greenfield applications carefully chosen as good candidates for running in the public cloud. As a result of these successes, IT organizations are drawn to the elasticity, scalability, and speed of deployment that cloud computing offers. By using cloud technology, IT organizations can respond more quickly to developer and line-of-business demands.

Legacy applications are not typically considered for public cloud deployments because of security, regulatory, data-locality, or performance concerns. Many legacy applications were written before cloud computing existed, so it can seem simpler to leave them on existing infrastructure. However, that decision can create bottlenecks for organizations trying to modernize. Efforts to become more responsive while reducing costs cannot succeed without addressing legacy applications, because keeping these applications running often accounts for the majority of IT costs.

Containers are a key technology that makes many of the services offered by public cloud providers possible. The design of containers opens up many possibilities for automation. Containers, combined with a platform that provides cloud-like automation, are an attractive environment for running applications. Migrating legacy applications to containers can remove many of the barriers to modernization.

Reasons for moving legacy applications to containers

Legacy systems and new greenfield development opportunities are often connected. New applications and services typically need data from legacy apps, or might perform a service by executing a transaction in the legacy system. A common approach to modernization is to put new interfaces and services implemented in newer technologies in front of legacy systems.
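
As a minimal illustration of this pattern, the sketch below puts a small REST façade in front of a legacy system, using Python with Flask and Requests; the legacy host, endpoint, and field names are hypothetical placeholders, not a specific product's interface.

    # Minimal modernization facade: a new REST service in front of a legacy app.
    # The legacy host, path, and field names below are hypothetical.
    from flask import Flask, jsonify
    import requests

    app = Flask(__name__)
    LEGACY_BASE = "http://legacy-erp.internal:8080"  # hypothetical legacy endpoint

    @app.route("/api/v1/orders/<order_id>")
    def get_order(order_id):
        # Execute a lookup in the legacy system and reshape the result
        # into a clean, versioned JSON API for new clients.
        resp = requests.get(f"{LEGACY_BASE}/OrderLookup",
                            params={"id": order_id}, timeout=5)
        resp.raise_for_status()
        record = resp.json()
        return jsonify({"orderId": order_id, "status": record.get("ORD_STATUS")})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)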

Connecting new development on public clouds to internally run legacy applications creates additional complexity and security challenges. Problems, especially network-related ones, are more difficult to trace and diagnose. The issue is even more challenging if the legacy application runs on older infrastructure where modern tools are not available.

New applications that depend on legacy systems need to be tested. Modern development methodologies rely on automated testing to improve quality and reliability, so legacy applications will likely need more resources in testing environments. Development teams might also require access to additional, possibly isolated, legacy application test environments to develop and test their new code.
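
Containers make such environments cheap to create and discard. For example, a test harness can spin up an isolated instance of a containerized legacy application for a single test run; the sketch below uses the Docker SDK for Python, with a hypothetical image name.

    # Spin up a throwaway instance of the legacy app for one test run.
    # The image name is hypothetical.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "registry.example.com/legacy-app:test",
        detach=True,
        ports={"8080/tcp": None},  # publish on a random free host port
    )
    try:
        container.reload()  # refresh attributes to read the assigned port
        port = container.attrs["NetworkSettings"]["Ports"]["8080/tcp"][0]["HostPort"]
        print(f"Legacy app test instance listening on host port {port}")
        # ... run the automated test suite against this instance ...
    finally:
        container.remove(force=True)  # tear the environment down afterward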

Deploying legacy applications in containers can remove these barriers to change and provide the flexibility to evolve. The process starts by decoupling applications from old infrastructure; the same platform can then host both legacy applications and new greenfield development, managed with the same tools. Operational efficiency increases once automation and modern management tools can be applied to legacy applications without the constraints of old infrastructure.

Legacy applications are often deployed on infrastructure with fixed and limited resources, and utilization of those resources is often low. Yet when demand increases, scaling up is difficult without long lead times and high costs. The success of Software-as-a-Service (SaaS) applications running in the public cloud has changed user and business expectations for responsiveness and cost, and explaining why internal applications cannot evolve as quickly can be a difficult conversation.

While many legacy applications have seen stable and predictable growth in the past, new user-driven demand means that the resources available to a legacy application might need to be scaled up quickly. This user-driven demand is difficult for an IT organization to predict because:

  • It is now common for mobile and connected applications to require application programming interface (API)-level access to existing applications.
  • The rise of data science and machine learning creates additional demand for data access.
  • Some of the demand, as well as the applications that consume data and APIs, can be external to the IT organization.

Because it is difficult to predict growth and control demand, existing applications need to be repositioned so the organization can respond quickly. Modern cloud-scale applications address this challenge by running in containers on a platform that increases or decreases the number of running containers, and thus the capacity of the application, in response to demand.
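
On Kubernetes-based platforms, this behavior can be declared with a horizontal pod autoscaler. The sketch below uses the Kubernetes Python client; the deployment name, namespace, and thresholds are hypothetical.

    # Declare demand-based scaling for a containerized app.
    # Names, namespace, and thresholds are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    hpa = {
        "apiVersion": "autoscaling/v1",
        "kind": "HorizontalPodAutoscaler",
        "metadata": {"name": "legacy-app"},
        "spec": {
            "scaleTargetRef": {"apiVersion": "apps/v1",
                               "kind": "Deployment", "name": "legacy-app"},
            "minReplicas": 2,
            "maxReplicas": 10,
            "targetCPUUtilizationPercentage": 70,  # add pods above 70% CPU
        },
    }
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="legacy-apps", body=hpa
    )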

Benefits of running legacy applications in containers

Portability:

Ability to decouple applications from infrastructure and run applications on any platform that supports containers

Scalability:

Ability to scale up (or down) as needed to respond to demand and achieve better resource usage

Flexibility:

Ease of deploying containers to create testing environments on demand, without tying up resources when they are not needed

Language and technology versatility:

Support for a choice of languages, databases, frameworks, and tooling to allow legacy technologies to coexist with more modern technologies, whether the code is decades old or newly written

Considerations for moving legacy applications to containers

Applications that are not cloud-native need persistent storage for data, logs, and sometimes configuration. However, containers are designed to exist for short periods of time. Unless other arrangements are made, anything written inside the container is lost when the container is restarted. Legacy applications can be accommodated by arranging for the container to have access to persistent storage. Because containers are typically run on clusters consisting of multiple machines, the storage for persistent data needs to be available on all of the machines in the cluster that the container could run on. The types of storage available largely depend on the container platform and the infrastructure it runs on.
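
As an illustration, the sketch below requests a persistent volume claim through the Kubernetes Python client; the claim name, namespace, and size are hypothetical, and the storage that backs the claim depends on the cluster and its infrastructure.

    # Request cluster-managed persistent storage for a legacy app.
    # Name, namespace, and size are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "legacy-app-data"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": "20Gi"}},
        },
    }
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="legacy-apps", body=pvc
    )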

Most applications consist of containers that need to run at the same time and connect to each other. For example, the components that make up the tiers of a three-tiered application would run in separate containers. The web or app containers benefit from the ability to dynamically scale out to more machines in the cluster as demand increases. The process of scheduling and managing the containers is referred to as container orchestration, a key responsibility of a container platform.
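
At the single-host level, the idea looks like the sketch below, which uses the Docker SDK for Python to start two connected tiers on a private network (the image names are hypothetical); a container orchestration platform performs the same wiring, plus scheduling and scaling, across a whole cluster.

    # Run two tiers of an application as connected containers.
    # Image names are hypothetical.
    import docker

    client = docker.from_env()

    # A private virtual network shared by the application's containers.
    client.networks.create("shop-net", driver="bridge")

    # Database tier: reachable by other containers via the DNS name "db".
    client.containers.run("registry.example.com/shop-db:1.0",
                          name="db", network="shop-net", detach=True)

    # Web/app tier: finds the database by container name.
    client.containers.run("registry.example.com/shop-app:1.0",
                          name="app", network="shop-net", detach=True,
                          environment={"DB_HOST": "db"},
                          ports={"8080/tcp": 8080})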

Applications often have specific networking requirements that are key to how they are deployed. Virtual networks might need to be recreated in the container environment, and in some cases physical networking hardware might need to be virtualized there. As with storage, the virtual network for the application needs to be available on each host the container can run on. The container platform manages the virtual network environment that connects the components of an application running in different containers, and it isolates those components from the other applications running on the platform.
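
On Kubernetes-based platforms, that isolation can be expressed declaratively. The sketch below creates a network policy through the Python client that admits traffic to the legacy application only from its own front end; the labels, names, and namespace are hypothetical.

    # Isolate the legacy app's containers at the network level.
    # Labels, names, and namespace are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    policy = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "legacy-app-isolation"},
        "spec": {
            # Applies to the legacy app's pods...
            "podSelector": {"matchLabels": {"app": "legacy-app"}},
            # ...and admits traffic only from the app's own front-end pods.
            "ingress": [{"from": [{"podSelector":
                {"matchLabels": {"app": "legacy-app-frontend"}}}]}],
        },
    }
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="legacy-apps", body=policy
    )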

Developers need tools for building the application and its dependencies into container images. This process is repeated for every code change and finished release. During rollouts, operators or developers also need the ability to deploy the new images in place of the currently running container images. While low-level tools exist for these tasks, a container platform makes the process much easier.
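
Stripped of the platform conveniences, the core of that loop looks like the sketch below, using the Docker SDK for Python; the source path, registry URL, and tag are hypothetical.

    # Build a new image from source and push it for deployment.
    # The path, registry URL, and tag are hypothetical.
    import docker

    client = docker.from_env()

    # Build from a directory containing the app's Dockerfile.
    image, build_logs = client.images.build(
        path="./legacy-app", tag="registry.example.com/legacy-app:2.4.1"
    )

    # Push so the container platform can pull and roll out the new image.
    for line in client.images.push("registry.example.com/legacy-app",
                                   tag="2.4.1", stream=True, decode=True):
        print(line.get("status", ""))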

Container images often need to include the languages, runtimes, frameworks, and application servers the application depends on. These can be pulled in during the build process with a base container image as a foundation. While there are a number of sources for base images, the challenge is to acquire them from a known and trusted source. Base images need to be secure, up to date, and free of known vulnerabilities, and they must be updated when a vulnerability is discovered. Users also need a way to find out whether containers are based on out-of-date images.
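
A simple staleness check is sketched below with the Docker SDK for Python, comparing a locally cached base image against the copy in a trusted registry; the image reference is only an example of a vendor-provided base image, and a real pipeline would also run a vulnerability scanner.

    # Detect whether a cached base image has been updated upstream.
    # The image reference is an example of a vendor-provided base image.
    import docker

    client = docker.from_env()

    local = client.images.get("registry.access.redhat.com/ubi9/ubi:latest")
    remote = client.images.pull("registry.access.redhat.com/ubi9/ubi", tag="latest")

    if local.id != remote.id:
        print("Base image updated upstream: rebuild dependent images.")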

One of the challenges IT organizations face when adopting the public cloud is that the infrastructure, management, and automation software provided by the public cloud differs from what the IT organization uses in its own datacenters. Many public cloud tools and services are not available to run on-premises, so they cannot be used with applications that run internally.

Many organizations choose to use more than one public cloud for reasons like geographic availability, diversity, and cost. However, each public cloud provider offers vendor-specific interfaces, tools, and services.

Containers and cloud have tremendous potential for improving operational efficiency through automation, and containers are an ideal environment for implementing DevOps practices and culture. However, a cloud strategy that uses a different platform in each place applications are hosted can overload operators and developers with too many tools and interfaces to learn and keep track of.

Moving legacy applications into containers

Once the application’s containers are built, the next steps in deploying the application are configuring storage and networking. To accommodate the need for persistent storage, applications defined in Red Hat OpenShift can be configured to use persistent storage volumes that are automatically attached to the applications’ containers when they run. Developers can manage elastic storage for container-based applications, drawing from storage pools provisioned by operations.
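
The attachment itself is declarative. The sketch below, using the Kubernetes Python client that also works against OpenShift clusters, patches a deployment so a persistent volume claim is mounted into the container wherever it runs; the deployment name, namespace, claim name, and path are hypothetical.

    # Attach a persistent volume claim to the app's containers.
    # Deployment name, namespace, claim name, and path are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    patch = {
        "spec": {"template": {"spec": {
            "containers": [{
                "name": "legacy-app",
                # The app's data directory now lives on the persistent
                # volume instead of the container's ephemeral filesystem.
                "volumeMounts": [{"name": "data", "mountPath": "/var/data"}],
            }],
            "volumes": [{
                "name": "data",
                "persistentVolumeClaim": {"claimName": "legacy-app-data"},
            }],
        }}}
    }
    client.AppsV1Api().patch_namespaced_deployment(
        name="legacy-app", namespace="legacy-apps", body=patch
    )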

Red Hat OpenShift Container Storage provides software-defined persistent storage, offering block, file, or object access methods to applications running on a Red Hat OpenShift cluster. Virtual private networking, routing, and load balancing for applications running in containers are built into the platform provided by Kubernetes and Red Hat OpenShift. Networking is specified declaratively as part of the application’s deployment configuration, so application-specific network configuration can be stored with the source code and become infrastructure as code. Tying application-specific infrastructure configuration to each application improves reliability when moving, adding, or changing application deployments.
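
For instance, a load-balancing service definition can live in the application’s repository and be applied at deployment time; the sketch below uses the Kubernetes Python client with hypothetical names and ports.

    # Application networking declared as code: a service that
    # load-balances across the app's pods. Names and ports are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "legacy-app"},
        "spec": {
            "selector": {"app": "legacy-app"},  # route to pods with this label
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }
    client.CoreV1Api().create_namespaced_service(
        namespace="legacy-apps", body=service
    )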

Software-defined routing and load balancing play a key role in enabling applications to automatically scale up or down. Additionally, applications running on Red Hat OpenShift can take advantage of rolling deployments to reduce risk. With Red Hat OpenShift’s built-in service routing, strategies for rolling deployments can be used to test new code on subsets of the user population. If something goes wrong, rolling back to a previous version is easier with containers on Red Hat OpenShift.
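
In Kubernetes terms, a rolling update reduces to a declarative change like the sketch below (Kubernetes Python client; names and image tag are hypothetical): the platform replaces pods gradually and keeps the previous version’s replica sets available for rollback.

    # Roll out a new image gradually; old replica sets remain for rollback.
    # Names, namespace, and image tag are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    patch = {
        "spec": {
            "strategy": {
                "type": "RollingUpdate",
                # Replace pods incrementally: at most one extra pod and
                # one unavailable pod at any moment during the rollout.
                "rollingUpdate": {"maxUnavailable": 1, "maxSurge": 1},
            },
            "template": {"spec": {"containers": [{
                "name": "legacy-app",
                "image": "registry.example.com/legacy-app:2.4.2",  # new release
            }]}},
        }
    }
    client.AppsV1Api().patch_namespaced_deployment(
        name="legacy-app", namespace="legacy-apps", body=patch
    )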

Finally, Red Hat OpenShift Service Mesh provides increased resilience and performance for distributed applications. OpenShift Service Mesh abstracts the logic of interservice communication into a dedicated infrastructure layer, so communication is more efficient and distributed applications are more resilient. OpenShift Service Mesh incorporates Istio service mesh, Jaeger (for tracing), and Kiali (for visibility) on a security-focused, enterprise platform.