The customer is one of Asia’s largest FinTech companies, offering cutting-edge last-mile retail technology. Its payment terminal platform integrates with over two dozen banks and financial and technology partners, and its point-of-sale network serves almost 153,000 merchants across 450,000 merchant network points.
A leading FinTech company offering merchant payment terminals, or point-of-sale (POS) machines, wanted to migrate two of its core applications to the AWS cloud. Comprinno was responsible for designing the solution and migrating the applications and databases from the on-premises datacenter to AWS, meeting PCI DSS compliance, security, reliability, scalability, and performance requirements at optimal cost.
The customer’s payment terminals, or POS machines, offer a range of services to merchants. The customer wanted to migrate its existing applications handling UPI payments and reward-points transactions to the AWS cloud, and had the following critical requirements:
• PCI DSS Compliance: Compliance with the Payment Card Industry Data Security Standard (PCI DSS), ensuring a secure environment for payment transactions, was of utmost importance.
• Securing Personally Identifiable Information (PII): Transactions between the POS machines and the banks contain highly sensitive personal information such as credit and debit card details or bank account numbers.
• Highly available and scalable environment: The number of payment transactions is highly variable and can increase substantially with the number of merchant network points.
• Extensive transaction logging: Transaction logging is required for bank audits and for resolving disputes, if any. Logs must be tamper-proof and satisfy audit requirements.
• Automatic code deployments: Continuous integration and deployment (CI/CD) were desired for a faster development cycle.
Comprinno was responsible for migrating the UPI payments and reward-points applications from the on-premises datacenter to the AWS cloud. The following infrastructure solution was proposed and implemented.
• PCI DSS Compliance:
- Applications were migrated from the self-hosted on-premises environment to containerized microservices running on an Amazon Elastic Kubernetes Service (EKS) cluster deployed across multiple private subnets.
- The cluster was fronted by a Network Load Balancer, and the containerized applications and the database were accessed over AWS PrivateLink. This setup was secure and required no special connectivity or routing configuration, because traffic between the consumer and provider accounts stays on the global AWS backbone and never traverses the public internet.
- The on-premises Microsoft SQL Server database was migrated to Amazon RDS for PostgreSQL. All databases, in-memory caches, etc. were placed in dedicated DB subnets with no outbound internet connectivity, as per compliance requirements.
- AWS WAF, integrated with Amazon API Gateway, provided an additional layer of security against common web exploits and bots that can affect availability, compromise security, or consume excessive resources.
- AWS IAM was used to grant least-privilege access to processes and AWS services.
- AWS Secrets Manager was used to rotate, manage, and retrieve database credentials and API keys.
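At runtime, the applications can fetch database credentials from Secrets Manager instead of storing them in configuration files. A minimal sketch using the boto3 SDK is shown below; the secret id `prod/db-credentials` is a hypothetical example, and the JSON key layout assumed here matches the format Secrets Manager uses for RDS database secrets.

```python
import json


def parse_db_secret(secret_string):
    """Extract connection fields from a Secrets Manager SecretString.

    Database secrets managed by Secrets Manager are JSON documents with
    at least 'username', 'password', 'host', and 'port' keys.
    """
    secret = json.loads(secret_string)
    return {
        "user": secret["username"],
        "password": secret["password"],
        "host": secret["host"],
        "port": int(secret["port"]),
    }


def get_db_credentials(secret_id="prod/db-credentials"):
    """Fetch and parse database credentials at application startup.

    Requires AWS credentials with secretsmanager:GetSecretValue on the
    secret; the secret id above is illustrative only.
    """
    import boto3  # imported lazily so the parser stays testable offline

    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return parse_db_secret(response["SecretString"])
```

Because Secrets Manager rotates the secret automatically, re-reading it on each connection (or on authentication failure) lets the application pick up new credentials without a redeploy.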
• Securing Data:
- AWS Key Management Service (KMS) was used to encrypt data to the AES-256 standard, ensuring a high level of security for transaction data, including sensitive Personally Identifiable Information.
- The following services in the architecture encrypted data in transit:
Amazon API Gateway (with custom domain and ACM integration)
Network Load Balancer (with ACM integration)
• Providing Highly Available and Scalable environment:
- Amazon EKS offers a 99.95% uptime SLA by running the Kubernetes control plane across multiple AWS Availability Zones, automatically detecting and replacing unhealthy control-plane nodes, and providing on-demand, zero-downtime upgrades and patching.
- Amazon EKS was used along with the Kubernetes Cluster Autoscaler, which uses Amazon EC2 Auto Scaling groups to grow or shrink the cluster based on the presence of pending pods and node-utilization metrics.
- Amazon ElastiCache was added in front of the PostgreSQL database to boost relational-database performance, reduce latency, and increase throughput and scalability.
- Amazon API Gateway was used as a highly scalable entry point for API transactions; it scales automatically with the incoming request rate while making efficient use of system resources.
- Transactions initiated from POS machines were channeled through AWS IoT Core, which securely transmits messages with low latency and high throughput. Messages were exchanged over the MQTT protocol, which also reduces network bandwidth requirements.
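The Cluster Autoscaler's core decision rule mentioned above can be approximated as: add nodes while pods are pending and unschedulable, remove nodes whose utilization stays below a threshold. The sketch below is a deliberately simplified illustration of that rule (the real autoscaler weighs many more signals, such as pod resource requests and node-group limits); all parameter names and thresholds are illustrative.

```python
def desired_node_count(current_nodes, pending_pods, pods_per_node,
                       node_utilization, scale_down_threshold=0.5,
                       min_nodes=2, max_nodes=20):
    """Illustrative scaling decision: grow for pending pods, shrink idle nodes.

    node_utilization is a list with one utilization ratio (0..1) per node.
    """
    if pending_pods > 0:
        # Scale out: add just enough nodes to place every pending pod.
        extra = -(-pending_pods // pods_per_node)  # ceiling division
        desired = current_nodes + extra
    else:
        # Scale in: drop nodes sitting below the utilization threshold.
        idle = sum(1 for u in node_utilization if u < scale_down_threshold)
        desired = current_nodes - idle
    # Clamp to the node-group bounds configured on the Auto Scaling group.
    return max(min_nodes, min(max_nodes, desired))
```

In the actual deployment these bounds correspond to the minimum and maximum size configured on the EC2 Auto Scaling groups backing the EKS node groups.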
• Logging and Monitoring:
- AWS CloudTrail was used to monitor and record account activity across AWS infrastructure, giving control over storage, analysis, and remediation actions.
- All AWS service logs were generated and stored in Amazon S3. The Amazon S3 buckets holding AWS CloudTrail logs were configured with the Object Lock feature in Compliance mode, to prevent tampering with stored logs and meet regulatory compliance.
- Application logs were shipped from Amazon EKS to Amazon Kinesis Data Firehose using the Fluent Bit log shipper.
- All AWS Services metrics were aggregated to create a common AWS CloudWatch Dashboard.
- Application metrics were exposed using Kubernetes Dashboard.
- Relevant alarms were configured in AWS CloudWatch Alarms for the infrastructure components.
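The Object Lock setup for the audit-log buckets can be sketched as follows. The helper builds the bucket-level default retention configuration; in COMPLIANCE mode no user, including the account root, can overwrite or delete a locked object version until retention expires. The bucket name and 365-day retention period are illustrative assumptions, and Object Lock must already be enabled on the bucket at creation time.

```python
def object_lock_configuration(retention_days):
    """Bucket-level Object Lock default retention in COMPLIANCE mode.

    COMPLIANCE mode makes locked object versions immutable for the
    retention period, which is what tamper-proof audit logs require.
    """
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "COMPLIANCE",
                "Days": retention_days,
            }
        },
    }


def apply_object_lock(bucket="audit-log-bucket", retention_days=365):
    """Apply the configuration; bucket name and retention are examples."""
    import boto3  # lazy import: the helper above is testable without AWS

    s3 = boto3.client("s3")
    s3.put_object_lock_configuration(
        Bucket=bucket,
        ObjectLockConfiguration=object_lock_configuration(retention_days),
    )
```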
• Disaster Recovery:
- Infrastructure provisioning was automated using Terraform. Terraform was an essential part of the disaster recovery strategy, as it enables new infrastructure to be stood up quickly and reliably.
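A disaster-recovery rebuild from Terraform code can be driven by a small wrapper like the sketch below. The working directory and variable-file name are hypothetical; a real run also needs remote-state backend configuration and AWS credentials for the recovery region.

```python
def terraform_commands(workdir, var_file="dr.tfvars"):
    """Command sequence to rebuild the stack from Terraform code.

    Uses Terraform's -chdir global option (available since 0.14);
    workdir and var_file are illustrative placeholders.
    """
    return [
        ["terraform", f"-chdir={workdir}", "init", "-input=false"],
        ["terraform", f"-chdir={workdir}", "apply", "-auto-approve",
         f"-var-file={var_file}"],
    ]


def rebuild(workdir):
    """Run the rebuild; requires the terraform CLI on PATH."""
    import subprocess  # executes the real CLI; not exercised offline

    for cmd in terraform_commands(workdir):
        subprocess.run(cmd, check=True)
```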
• Automatic deployments:
- Automatic deployment is triggered whenever code is committed to the GitHub repository.
- AWS CodeBuild was used to build the Docker image, which was then pushed to Amazon Elastic Container Registry (ECR). ECR's built-in capability to scan Docker images for known vulnerabilities was leveraged, and the pipeline proceeded to deployment only when ECR reported no Critical or High severity vulnerabilities.
- A notification alert was set up using Amazon SNS to notify developers of pipeline failures.
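The deployment gate described above can be expressed as a small check against the scan results. The severity map mirrors the `findingSeverityCounts` field returned by ECR's `DescribeImageScanFindings` API; the repository and tag names are illustrative.

```python
def pipeline_may_deploy(finding_counts):
    """Gate rule applied after the ECR image scan completes.

    finding_counts is a severity-to-count map such as
    {"HIGH": 1, "LOW": 4}; deployment proceeds only when no
    Critical or High severity findings are present.
    """
    blocked = ("CRITICAL", "HIGH")
    return all(finding_counts.get(severity, 0) == 0 for severity in blocked)


def image_scan_gate(repository="payments-api", image_tag="latest"):
    """Fetch scan results via boto3 and apply the gate rule."""
    import boto3  # lazy import: the gate rule is testable without AWS

    ecr = boto3.client("ecr")
    response = ecr.describe_image_scan_findings(
        repositoryName=repository,
        imageId={"imageTag": image_tag},
    )
    counts = response["imageScanFindings"].get("findingSeverityCounts", {})
    return pipeline_may_deploy(counts)
```

In the pipeline, a failed gate stops the deploy stage and triggers the SNS notification to the developers.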
• The final setup in the AWS cloud was PCI DSS compliant.
• The customer audited the implemented architecture in the AWS cloud to confirm PCI DSS compliance.
• The migrated architecture is backed by a 99.95% uptime SLA and offers high scalability.
• Reduced delivery time owing to automatic deployments via CI/CD.
• Security vulnerabilities are identified during the deployment lifecycle using Amazon ECR image scanning.
• Implemented container-based DevSecOps on AWS.