Li9 Infrastructure Projects

Li9’s recent infrastructure projects have involved significant changes to traditional IT environments, allowing our customers to take advantage of modern IT capabilities while gaining automation, security, and business value. Contact Li9 to discuss these solutions in more detail.

CI/CD, Automation and Everything as Code

Modern technologies change in real time, and new solutions and methods for solving problems emerge constantly. The market offers new ways to connect with customers and deliver higher-quality products. Keeping up with this pace can be challenging even for small businesses and startups, and it is a constant challenge for market giants such as banks and financial organizations.

Li9 has helped one of the most prominent players in the financial market stay on the cutting edge of technology. Over 40,000 employees work for this Swiss multinational investment bank and financial services company. Headquartered in Zürich, it maintains offices in all major financial centers around the world and is one of the eight global “Bulge Bracket” banks, providing services in investment banking, private banking, asset management, and shared services.

In Q1 2018, Red Hat engaged Li9 to make this company’s IT infrastructure one of the most effective and modern in its industry. Li9 deployed Red Hat’s flagship products, including Red Hat OpenShift, Red Hat Ansible Tower, and Red Hat Satellite, and helped the customer achieve its goals by providing a modern, high-load platform for the development and operation of internal services.

Pain points:

  • High effort required to deliver changes
  • High effort required for infrastructure testing
  • Slow testing processes (weeks to months)
  • Artifacts take several months to become production-ready
  • Low test coverage (~20%)
  • Low level of automation

Solution:

  • Implemented multiple on-premise OpenShift clusters for application delivery
  • Implemented multiple Azure Red Hat OpenShift clusters for application delivery and integration with cloud services
  • Implemented an enterprise-ready installation of Red Hat Ansible Tower, a modern automation solution
  • Introduced and implemented CI/CD to improve the speed and quality of application delivery (see the sketch after this list)
  • Adopted Everything as Code (applications, tests, delivery)
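
As an illustration of how CI/CD and Ansible Tower fit together in this kind of setup, the sketch below shows how a pipeline step could trigger a Tower job template through Tower’s REST API. This is a minimal sketch, not the customer’s actual pipeline: the host name, job template ID, token, and extra variables are placeholder assumptions.

```python
import requests

# Placeholder values -- substitute the real Tower host, job template ID,
# and an OAuth2 token created for the pipeline's service account.
TOWER_HOST = "https://tower.example.com"
JOB_TEMPLATE_ID = 42
TOKEN = "REPLACE_ME"


def launch_job(extra_vars: dict) -> int:
    """Launch an Ansible Tower job template and return the new job's ID."""
    resp = requests.post(
        f"{TOWER_HOST}/api/v2/job_templates/{JOB_TEMPLATE_ID}/launch/",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"extra_vars": extra_vars},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job"]


if __name__ == "__main__":
    # Example: a pipeline stage promoting a freshly built image tag to test.
    job_id = launch_job({"image_tag": "1.2.3", "target_env": "test"})
    print(f"Launched Tower job {job_id}")
```

A pipeline defined “as code” simply versions this kind of step alongside the applications, tests, and delivery logic it drives.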

Benefits:

  • Artifacts (application components) are ready for production in one week
  • Application provisioning and testing times were reduced from weeks to minutes
  • Infrastructure test coverage increased to 90%
  • Application delivery effort decreased significantly
  • All infrastructure is under version control
  • Application delivery via OpenShift (Kubernetes) and CI/CD
  • Service delivery via VMware vRealize Automation
  • Fully automated RHEL VM provisioning on VMware and in Azure
  • Fully automated RHEL and Microsoft Windows patch management

Containers and Infrastructure as Code

One of the many challenges large enterprises face in the era of cloud services is keeping up with an ever-changing technological landscape: highly regulated policies make them relatively slow in reacting to customers’ demands. Li9 worked with the largest for-profit provider of managed health insurance, serving a customer base of more than 40 million members. This environment presents unique challenges that can be addressed with container technologies and the Infrastructure as Code (IaC) approach.

Li9 was engaged by Red Hat in Q1 2019 to help this customer embrace containerization and DevOps practices. Li9’s consulting expertise across a range of DevOps infrastructure components was crucial to implementing multiple OpenShift Container Platform environments within the customer’s planned timeframe.

Pain points:

  • Manual cloud infrastructure provisioning.
  • Slow application delivery.
  • Lack of container-related expertise in the context of application delivery and security.
  • Fragmentation of knowledge leading to over-reliance on one employee.
  • A highly regulated environment that makes delivery time-critical.

Solution:

  • Li9 leveraged Packer, Terraform, and Ansible to automate provisioning virtual resources and deploying OpenShift on both AWS and VMware (see the sketch after the environment listings below).
  • A strategy was developed for containerizing applications, carried out by the customer’s developers.
  • Eight OpenShift clusters were deployed for managing and scanning containerized applications.
  • Working sessions were conducted to train a member of the infrastructure team on how to use the new tools.
  • To date, the following environments have been deployed at the customer:

On-Premise:

  • Development (3 master nodes, 3 infrastructure nodes, and 10 application nodes = 16 nodes total)
  • Load testing (3 master nodes, 3 infrastructure nodes, and 10 application nodes = 16 nodes total)
  • Production (3 master nodes, 3 infrastructure nodes, and 10 application nodes = 16 nodes total)

AWS:

  • Development (3 master nodes, 3 infrastructure nodes, and 10 application nodes = 16 nodes total)
  • Load testing (3 master nodes, 3 infrastructure nodes, and 10 application nodes = 16 nodes total)
  • Production (3 master nodes, 3 infrastructure nodes, and 10 application nodes = 16 nodes total)
  • Disaster recovery (3 master nodes, 3 infrastructure nodes, and 10 application nodes = 16 nodes total)
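
To give a flavor of the Packer/Terraform/Ansible automation mentioned above, the sketch below chains the two main steps — provisioning virtual resources with Terraform, then configuring them with Ansible — from a small Python wrapper. It is a sketch only: the repository layout (infra/, playbooks/deploy-openshift.yml), the Terraform variables, and the generated inventory path are assumptions, not the customer’s actual code.

```python
import subprocess


def run(cmd, cwd=None):
    """Run a command, echoing it first for traceability, and fail loudly."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


def provision_cluster(env: str, node_count: int):
    """Provision VMs with Terraform, then deploy OpenShift with Ansible.

    Assumes a Terraform configuration in ./infra with 'environment' and
    'node_count' variables, a pre-created workspace per environment, and an
    inventory file emitted by Terraform at infra/inventory.ini.
    """
    run(["terraform", "init"], cwd="infra")
    run(["terraform", "workspace", "select", env], cwd="infra")
    run(
        [
            "terraform", "apply", "-auto-approve",
            "-var", f"environment={env}",
            "-var", f"node_count={node_count}",
        ],
        cwd="infra",
    )
    run(
        [
            "ansible-playbook",
            "-i", "infra/inventory.ini",
            "playbooks/deploy-openshift.yml",
        ]
    )


if __name__ == "__main__":
    provision_cluster("development", node_count=10)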

Benefits:

  • The average total time for deploying a fresh OpenShift cluster and adding new nodes to an existing one was reduced from ~8 hours to ~1.5 hours.
  • The learning curve has flattened dramatically, enabling more members of the team to provision new OpenShift environments. Before the solution was implemented, no member of the infrastructure team had the skills across all areas needed to perform an OpenShift installation from A to Z, so installations had to be carried out by a member of the consulting team.
  • With Li9’s automation in place, the OpenShift installation process can be delegated to any one of four members of the customer’s infrastructure team, a 4x improvement in the ability to create OpenShift container environments.
  • Previous OpenShift installations had errors, usually due to a misconfigured inventory parameter or a missing prerequisite. Now that these details are captured in a Terraform configuration, the only point where errors can be introduced is the configuration of Terraform variables at the beginning of the build.
  • The lowered time cost of standing up a new OpenShift cluster opens the possibility of testing configuration changes in a sandbox environment.

Software Defined Network

Juniper Networks, Inc. is a multinational corporation that develops and markets networking products, including routers, switches, network management software, network security products, and software-defined networking technology.

Pain points:

      • High deployment costs
      • High support costs
      • Slow solution deployment in customer infrastructure
      • Low test coverage

Solution:

      • Developed a Network Functions Virtualization (NFV) orchestration platform to control the lifecycle of Juniper virtual devices – provisioning, upgrades, and decommissioning (see the sketch after this list)
      • The control plane resides in Docker containers
      • Leveraged Juniper Contrail as the primary Software Defined Network
      • OpenStack is used as the Virtual Infrastructure Manager (VIM)
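
To illustrate the provisioning and decommissioning legs of that lifecycle against the OpenStack VIM, the sketch below uses the openstacksdk library. It is a simplified sketch: the image, flavor, and network names and the clouds.yaml entry are placeholder assumptions, and the real orchestration platform adds Contrail integration and upgrade handling not shown here.

```python
import openstack  # openstacksdk


def provision_vnf(conn, name: str):
    """Boot a Juniper virtual device on the OpenStack VIM.

    Image, flavor, and network names are illustrative placeholders.
    """
    image = conn.compute.find_image("juniper-vsrx")    # assumed image name
    flavor = conn.compute.find_flavor("m1.large")      # assumed flavor
    network = conn.network.find_network("mgmt-net")    # assumed mgmt network

    server = conn.compute.create_server(
        name=name,
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    # Block until Nova reports the instance ACTIVE.
    return conn.compute.wait_for_server(server)


def decommission_vnf(conn, name: str):
    """Remove a virtual device once it is taken out of service."""
    server = conn.compute.find_server(name)
    if server:
        conn.compute.delete_server(server)


if __name__ == "__main__":
    # Credentials come from a clouds.yaml entry named "nfv-lab" (assumed).
    conn = openstack.connect(cloud="nfv-lab")
    vnf = provision_vnf(conn, "vsrx-edge-01")
    print(f"Provisioned {vnf.name} ({vnf.id})")
```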

Benefits:

      • Reduced support and implementation efforts/costs
      • Improved predictability of Virtual Network Services
      • Ability to orchestrate Virtual Network Functions
      • High automation including easy upgrades
      • 100% test coverage

Fully Automated Infrastructure for a Content Delivery Network

Li9 was involved in developing and implementing a high-performance content delivery network (CDN) for a major US provider of IPTV services. TV content is captured from satellites and delivered through cable and internet networks throughout the US. Before the project, the company was spending over $2.5 million annually on renting CDN capacity from global suppliers.

Li9’s involvement was the development of a completely automated, dynamic infrastructure to support the IPTV services. Li9 used Red Hat, NGINX, and HashiCorp tools, along with other key open-source technologies, to create the fully automated data center.

Pain points:

  • High latency delivering content to user devices such as laptops, tablets, TVs, mobile phones, etc.
  • Significant packet loss on the last mile – video streams were not smooth.
  • Encoders periodically got stuck, adding breaks to the streaming process.
  • No options to tune performance based on video chunk sizes.
  • High CDN costs.
  • High development and support costs.

Solution:

  • Li9 designed a private CDN based on the following requirements:
    • Eliminate breaks in video streaming.
    • Eliminate single points of failure.
    • Dynamically add and remove servers for on-demand scaling.
    • Automatically detect failures and redirect workload to healthy nodes.
    • Automatically distribute workload between nodes based on distance to end users and node throughput.
    • Ability to run entirely from one data center, if necessary.
    • Decrease CDN expenses.
  • Developed automation for fully automated deployment of edge and parent nodes.
  • Tuned the server operating systems (Linux/RHEL and Unix/FreeBSD) for the best network performance.
  • Deployed an OpenStack cloud for flexible management of encoders.
  • Installed multiple GlusterFS clusters to provide redundant storage.
  • Wrapped most infrastructure operations into self-service automation runnable with a single command, such as:
    • Scaling cache servers (power on, install the OS and software, add to the data plane) – using PXE scripts, iDRAC/iLO (IPMI tools), and Ansible.
    • Automated detection of encoder failures (encoders run in the OpenStack cloud), starting new instances and destroying the failed ones (see the sketch after this list).
    • The control plane gathers metrics from the data plane and manages the traffic flows – monitoring with Grafana and Prometheus.
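
As a rough illustration of the encoder self-healing described above, the sketch below polls the OpenStack cloud for encoder instances stuck in an error state and replaces them. The naming convention, image, flavor, network, and polling interval are all illustrative assumptions rather than the production implementation.

```python
import time

import openstack  # openstacksdk

ENCODER_PREFIX = "encoder-"                 # assumed naming convention
IMAGE, FLAVOR, NETWORK = "encoder-image", "m1.xlarge", "stream-net"  # assumed


def replace_failed_encoders(conn):
    """Destroy encoders in ERROR state and boot replacements with the same name."""
    for server in conn.compute.servers():
        if server.name.startswith(ENCODER_PREFIX) and server.status == "ERROR":
            print(f"Replacing failed encoder {server.name}")
            conn.compute.delete_server(server)
            image = conn.compute.find_image(IMAGE)
            flavor = conn.compute.find_flavor(FLAVOR)
            network = conn.network.find_network(NETWORK)
            conn.compute.create_server(
                name=server.name,
                image_id=image.id,
                flavor_id=flavor.id,
                networks=[{"uuid": network.id}],
            )


if __name__ == "__main__":
    conn = openstack.connect(cloud="cdn")   # assumed clouds.yaml entry
    while True:
        replace_failed_encoders(conn)
        time.sleep(60)                      # poll once a minute
```

A production control plane would typically drive this kind of check from its monitoring stack (Prometheus and Grafana in this project) rather than a bare polling loop.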

Benefits:

  • Smooth and stable IP TV streams
  • Network latency improved by up to 150%
  • Overall performance increased by up to 200%, while the optimized network allowed the number of cache servers to be reduced
  • All single points of failure were eliminated, with key components deployed at least N+1
  • The project was implemented in three months, and all clients were migrated to the new solution
  • CDN costs were reduced by over $1.3 million in the first year