Moving legacy applications to containers

Introduction

Many organizations have completed successful initial public cloud projects, but these are primarily new, greenfield applications carefully chosen as good candidates for running in the public cloud. As a result of these successes, IT organizations are attracted to the elasticity, scalability, and speed of deployment that cloud computing offers. By using cloud technology, IT organizations can respond more quickly to developer and line-of-business demands.

Legacy applications are not typically considered for public cloud deployments because of security, regulatory, data locality, or performance concerns. Many legacy applications were written before cloud computing existed, so it might seem simpler to leave them deployed on existing infrastructure. However, that decision can create bottlenecks for organizations trying to modernize. Efforts to become more responsive while reducing costs cannot succeed without addressing legacy applications, because keeping these applications running often accounts for the majority of IT costs.

Containers are a key technology that makes many of the services offered by public cloud providers possible. The design of containers opens up many possibilities for automation. Containers, combined with a platform that provides cloud-like automation, are an attractive environment for running applications. Migrating legacy applications to containers can remove many of the barriers to modernization.

Reasons for moving legacy applications to containers

Legacy systems and new greenfield development opportunities are often connected. New applications and services typically need data from legacy applications, or might perform a service by executing a transaction in the legacy system. A common approach to modernization is to put new interfaces and services, implemented in newer technologies, in front of legacy systems.
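
For example, a thin façade service can expose a modern REST interface while delegating the actual work to the legacy system behind it. The following is a minimal sketch in Python using Flask; the legacy endpoint, paths, and field names are hypothetical illustrations, not part of any product described here.

    # Minimal façade sketch: a small REST service in front of a hypothetical
    # legacy inventory system. The legacy URL and field names are assumptions.
    from flask import Flask, jsonify
    import requests

    LEGACY_BASE_URL = "http://legacy-inventory.internal:8080"  # hypothetical

    app = Flask(__name__)

    @app.route("/api/v1/items/<item_id>")
    def get_item(item_id):
        # Call the legacy system and translate its response into a clean,
        # versioned JSON contract for new applications and services.
        resp = requests.get(f"{LEGACY_BASE_URL}/ItemLookup",
                            params={"id": item_id}, timeout=5)
        resp.raise_for_status()
        legacy = resp.json()
        return jsonify({
            "id": item_id,
            "name": legacy.get("ITEM_NAME"),
            "quantity": legacy.get("QTY_ON_HAND"),
        })

    if __name__ == "__main__":
        app.run(port=8000)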


Connecting new development on public clouds to internally run legacy applications adds complexity and security challenges. Problems, especially network-related ones, are more difficult to trace and diagnose, and the difficulty grows when the legacy application runs on older infrastructure where modern tools are not available.

New applications that depend on legacy systems need to be tested. Modern development methodologies tend to rely on automated testing to improve quality and reliability, so legacy applications will likely need more resources in testing environments. Development teams might also require access to additional, possibly isolated, legacy application test environments to develop and test their new code.
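
On a container platform, such isolated test environments can be created and discarded on demand. As a rough sketch using the Kubernetes Python client, one throwaway namespace per test run keeps environments isolated; the cluster credentials, image, and names below are illustrative assumptions.

    # Sketch: spin up an isolated test environment for a legacy app.
    # Assumes cluster credentials in the local kubeconfig and an existing
    # container image; all names are hypothetical.
    import uuid
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # One throwaway namespace per test run keeps environments isolated.
    ns = f"legacy-test-{uuid.uuid4().hex[:8]}"
    core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="legacy-app"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "legacy-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "legacy-app"}),
                spec=client.V1PodSpec(containers=[client.V1Container(
                    name="legacy-app",
                    image="registry.example.com/legacy-app:latest",  # hypothetical
                )]),
            ),
        ),
    )
    apps.create_namespaced_deployment(ns, deployment)

    # After the test run, deleting the namespace tears everything down:
    # core.delete_namespace(ns)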


Deploying legacy applications in containers can remove the barriers to change and provide the flexibility to evolve. The process starts by decoupling applications from old infrastructure and then using the same platform to host both legacy applications and new greenfield development. Both can coexist on the same container or cloud platform and can be managed with the same tools. Operational efficiency increases once automation and modern management tools can be applied to legacy applications without the constraints of old infrastructure.

Benefits of running legacy applications in containers

Portability:

Ability to decouple applications from infrastructure and run applications on any platform that supports containers

Scalability:

Ability to scale up (or down) as needed to respond to demand and achieve better resource usage (see the sketch after this list)

Flexibility:

Ease in deploying containers to create testing environments when needed, without tying up resources when they are not needed

Language and technology versatility:

Support for a choice of languages, databases, frameworks, and tooling to allow legacy technologies to coexist with more modern technologies, whether the code is decades old or newly written
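
As a rough illustration of the scalability point above, resizing a containerized application is a one-line operation against the platform API. The following sketch uses the Kubernetes Python client and assumes a Deployment named legacy-app; all names are illustrative.

    # Sketch: scale a containerized legacy app up to five replicas.
    from kubernetes import client, config

    config.load_kube_config()
    client.AppsV1Api().patch_namespaced_deployment_scale(
        name="legacy-app",            # hypothetical Deployment name
        namespace="legacy-app",
        body={"spec": {"replicas": 5}},
    )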

Considerations for moving legacy apps to containers

Applications that are not cloud-native need persistent storage for data, logs, and sometimes configuration. Containers, however, are designed to be ephemeral: unless other arrangements are made, anything written inside a container is lost when the container is restarted. Legacy applications can be accommodated by arranging for their containers to have access to persistent storage. Because containers typically run on clusters of multiple machines, the storage for persistent data must be available on every machine in the cluster where the container could run. The types of storage available largely depend on the container platform and the infrastructure it runs on.
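
To make that arrangement concrete, a pod definition can mount a volume backed by a persistent storage claim into the container's filesystem. A minimal sketch with the Kubernetes Python client follows; it assumes a claim named legacy-app-data already exists, and the image, names, and paths are illustrative.

    # Sketch: give a legacy app's container persistent storage by mounting
    # a volume backed by an existing PersistentVolumeClaim.
    from kubernetes import client, config

    config.load_kube_config()
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="legacy-app"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="legacy-app",
                image="registry.example.com/legacy-app:1.0",  # hypothetical
                volume_mounts=[client.V1VolumeMount(
                    name="app-data",
                    mount_path="/var/lib/appdata",  # writes here survive restarts
                )],
            )],
            volumes=[client.V1Volume(
                name="app-data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="legacy-app-data",
                ),
            )],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="legacy-app", body=pod)
    # Anything the application writes outside /var/lib/appdata is lost
    # when the container is restarted.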

Moving legacy applications into containers

Once the application’s containers are built, the next steps for deploying the application are configuring storage and networking. To accommodate the need for persistent storage, applications defined in Red Hat OpenShift can be configured to use persistent storage volumes that are automatically attached to the applications’ containers when they run. Developers can manage elastic storage for container-based applications, drawing from storage pools provisioned by operations.
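
As a sketch of that workflow with the Kubernetes Python client, a developer’s claim draws from a storage class that operations has provisioned. The storage class and other names below are assumptions for illustration, not fixed product names.

    # Sketch: request 20 GiB of persistent storage from an
    # operations-provisioned storage pool (names are illustrative).
    from kubernetes import client, config

    config.load_kube_config()
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="legacy-app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="managed-storage",  # assumed class from operations
            resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="legacy-app", body=pvc,
    )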


Red Hat OpenShift Container Storage provides software-defined persistent storage, offering block, file, or object access methods to applications running on a Red Hat OpenShift cluster. Virtual private networking, routing, and load balancing for applications running in containers are built into the platform provided by Kubernetes and Red Hat OpenShift. Networking is specified in a declarative manner as part of the application’s deployment configuration, so application-specific network configuration can be stored with the source code and become infrastructure as code. Tying application-specific infrastructure configuration to each application improves reliability when moving, adding, or changing application deployments.
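
For instance, the load balancing for an application’s pods is declared as a service object that can be committed to source control alongside the code. A sketch using the Kubernetes Python client; the names, labels, and ports are illustrative.

    # Sketch: declare a service that load-balances across all pods
    # labeled app=legacy-app (names and ports are illustrative).
    from kubernetes import client, config

    config.load_kube_config()
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="legacy-app"),
        spec=client.V1ServiceSpec(
            selector={"app": "legacy-app"},  # pods matching this label
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    client.CoreV1Api().create_namespaced_service(namespace="legacy-app", body=service)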


Software-defined routing and load balancing play a key role in enabling applications to automatically scale up or down. Additionally, applications running on Red Hat OpenShift can take advantage of rolling deployments to reduce risk. With Red Hat OpenShift’s built-in service routing, strategies for rolling deployments can be used to test new code on subsets of the user population. If something goes wrong, rolling back to a previous version is easier with containers on Red Hat OpenShift.
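
A sketch of such a rolling strategy with the Kubernetes Python client follows, assuming a Deployment named legacy-app; the image tag and parameters are illustrative.

    # Sketch: configure a rolling update, then roll out a new image.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Bring up one new pod at a time, never dropping below full capacity.
    apps.patch_namespaced_deployment(
        name="legacy-app", namespace="legacy-app",
        body={"spec": {"strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0},
        }}},
    )

    # Updating the image triggers the rolling deployment.
    apps.patch_namespaced_deployment(
        name="legacy-app", namespace="legacy-app",
        body={"spec": {"template": {"spec": {"containers": [{
            "name": "legacy-app",
            "image": "registry.example.com/legacy-app:1.1",  # hypothetical
        }]}}}},
    )

    # If the new version misbehaves, "oc rollout undo deployment/legacy-app"
    # returns to the previous revision.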


Finally, Red Hat OpenShift Service Mesh provides increased resilience and performance for distributed applications. OpenShift Service Mesh abstracts the logic of interservice communication into a dedicated infrastructure layer, so communication is more efficient and distributed applications are more resilient. OpenShift Service Mesh incorporates Istio service mesh, Jaeger (for tracing), and Kiali (for visibility) on a security-focused, enterprise platform.
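
As one hedged example of what the mesh enables, an Istio VirtualService can split traffic between versions of a service, sending a small share of requests to new code. The sketch below uses the Kubernetes Python client’s custom objects API; it assumes an Istio DestinationRule already defines the v1 and v2 subsets, and all names are illustrative.

    # Sketch: route 90% of traffic to v1 of a service and 10% to v2.
    # Assumes an Istio DestinationRule defines the v1/v2 subsets.
    from kubernetes import client, config

    config.load_kube_config()
    virtual_service = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": "legacy-app"},
        "spec": {
            "hosts": ["legacy-app"],
            "http": [{"route": [
                {"destination": {"host": "legacy-app", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "legacy-app", "subset": "v2"}, "weight": 10},
            ]}],
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="networking.istio.io", version="v1beta1",
        namespace="legacy-app", plural="virtualservices",
        body=virtual_service,
    )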