Top FAQ Answered
Your questions on architecture, security, and scalability are addressed below, together with a detailed technical walkthrough of the platform.
Frequently asked questions
Provide a high-level overview of the platform and its key features.
RemitSo, a brand owned by Prymera Consulting Private Limited, is an enterprise-grade software platform designed to manage money transfer business operations. Key features/core modules include:
1. Risk mitigation
2. Customer onboarding and KYC
3. Ongoing due diligence
4. Anti-Money Laundering / Counter-Financing of Terrorism (AML/CFT)
5. Sanctions screening
6. Rule-based transaction processing
7. Role-based access management
8. Workflow automation
9. Accounting
10. Audit logs
11. Data security and compliance
Model: cloud-based application.
Frontend (Mobile App)
- Built using Flutter, providing a cross-platform experience for Android and iOS.
- Manages user authentication, onboarding, transaction processing, and KYC submissions.
- Communicates with the backend via RESTful APIs and WebSockets for real-time updates.
- Integrates with third-party services like Volume (for payment) and Sumsub (for identity verification).
Backend
- Developed in Laravel (PHP), serving as the core processing engine.
- Implements a modular service structure for handling payments, compliance, transaction workflows, and user management.
- Uses event-driven processing with AWS SQS to handle asynchronous tasks like transaction approvals, risk evaluations, and payout processing.
- Payment Processing: Volume Pay is integrated to process payments and settlements.
Database
- PostgreSQL serves as the primary relational database, ensuring ACID-compliant transactions.
- Uses UUIDs for primary keys to ensure global uniqueness across all records.
- Redis is used for caching frequently accessed data to optimize performance.
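The UUID primary-key choice above can be illustrated in a few lines. This sketch (not the platform's actual model code) shows why UUIDv4 keys are globally unique and can be generated on any application node without coordinating with the database:

```python
import uuid

def new_record_id() -> str:
    """Generate a globally unique primary key (UUIDv4).

    Unlike auto-increment integers, UUIDs need no central counter,
    so any node can mint keys without a database round trip.
    """
    return str(uuid.uuid4())

a, b = new_record_id(), new_record_id()
print(a)          # e.g. '6f1c2b9e-....' (36 characters)
print(a != b)     # True: collisions are astronomically unlikely
```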
Identity Verification & Compliance
- Sumsub handles KYC and AML checks, including:
  - Document verification (passports, ID cards, bank statements).
  - Face match and liveness detection.
  - Risk scoring based on document authenticity and user history.
Rule-Based AML System
- Configurable policies, limits, and risk thresholds based on transaction behavior.
- Automated transaction monitoring to flag suspicious activities.
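A rule engine of this kind can be sketched as a list of named predicates evaluated against each transaction. The rules and field names below are hypothetical illustrations, not RemitSo's actual configuration:

```python
# Hypothetical rule set: each rule flags a transaction when its predicate matches.
RULES = [
    ("over_limit",    lambda tx: tx["amount"] > 10_000),
    ("high_risk_dst", lambda tx: tx["dest_country"] in {"IR", "KP"}),
    ("rapid_repeat",  lambda tx: tx["txns_last_hour"] >= 5),
]

def evaluate(tx: dict) -> list[str]:
    """Return the names of every rule the transaction trips."""
    return [name for name, predicate in RULES if predicate(tx)]

flags = evaluate({"amount": 12_000, "dest_country": "GB", "txns_last_hour": 1})
print(flags)  # ['over_limit']
```

Because the rule list is data rather than code, limits and thresholds can be reconfigured without redeploying the engine.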
Messaging & Orchestration
- AWS SQS (Simple Queue Service) for event-driven processing, ensuring scalable and fault-tolerant transaction handling.
- AWS SNS (Simple Notification Service) for real-time notifications and alerts.
- AWS SES (Simple Email Service) for secure email communication, including transaction confirmations and regulatory notices.
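SQS delivers messages at least once, so consumers in this kind of event-driven pipeline are typically written to be idempotent. A simplified, AWS-free sketch of that pattern (the message shape is hypothetical):

```python
import json

processed: set[str] = set()  # in production this would be a persistent store

def handle(message_body: str) -> bool:
    """Process a queued event safely under at-least-once delivery.

    Returns True if the event was applied, False if it was a duplicate
    delivery (SQS may deliver the same message more than once).
    """
    event = json.loads(message_body)
    if event["id"] in processed:
        return False          # duplicate delivery: skip, acknowledge anyway
    # ... apply the side effect (approve transaction, trigger payout, ...)
    processed.add(event["id"])
    return True

msg = json.dumps({"id": "tx-42", "type": "payout.requested"})
print(handle(msg))  # True  (first delivery applied)
print(handle(msg))  # False (redelivery ignored)
```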
Infrastructure & Deployment
- Fully hosted on AWS, leveraging managed services for scalability and security.
- AWS Lambda is used for processing lightweight tasks asynchronously.
- S3 storage for document uploads (e.g., KYC documents, receipts, compliance reports).
- Laravel Vapor used for infrastructure automation and deployment management.
What is your current production uptime SLA?
99.99%
Backend technology and frameworks used
PHP 8.3, Laravel 12
Frontend technologies
Flutter 3.23.2, Vue.js, React
Database engine and version
PostgreSQL 17
Hosting environment
AWS Lambda
Have you conducted a Penetration Test in the past 12 months?
Yes
Do you use automated vulnerability-scanning tools (Snyk, Dependabot, etc.)?
Yes (Dependabot)
How are security patches and updates managed? (E.g., libraries, PHP, Flutter dependencies)
Monitoring & Tracking:
We continuously monitor security advisories and vulnerability databases for PHP, Laravel, Flutter, and other dependencies. Dependabot tracks outdated or vulnerable packages.
Update Process:
Critical security patches are applied immediately following a review and testing process.
Routine updates (e.g., framework and library updates) are scheduled in maintenance sprints to prevent breaking changes.
We follow semantic versioning guidelines to assess impact before upgrading dependencies.
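The semantic-versioning impact assessment described above boils down to comparing version components; a minimal sketch (assuming plain `MAJOR.MINOR.PATCH` version strings):

```python
def upgrade_impact(current: str, candidate: str) -> str:
    """Classify a dependency upgrade per semantic versioning."""
    cur = [int(p) for p in current.split(".")]
    new = [int(p) for p in candidate.split(".")]
    if new[0] != cur[0]:
        return "major"  # potentially breaking: schedule in a maintenance sprint
    if new[1] != cur[1]:
        return "minor"  # new features, backwards compatible
    return "patch"      # bug/security fix: safe to fast-track

print(upgrade_impact("12.1.3", "12.1.4"))  # patch
print(upgrade_impact("12.1.3", "13.0.0"))  # major
```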
Testing & Deployment:
All updates go through staging environments for testing before being deployed to production.
System-Level Security Updates:
The infrastructure is continuously monitored for OS-level security updates (e.g., RHEL updates on AWS).
Security patches for AWS-managed services (e.g., Lambda, RDS) are handled within AWS maintenance windows.
Describe your access control policies (RBAC, MFA, etc.).
Access Control (RBAC + ABAC):
Uses RBAC (Role-Based Access Control) for standard user roles (e.g., Admin, Agent, Customer).
Implements ABAC (Attribute-Based Access Control) to restrict access based on transaction type, country, and risk level.
How do you manage PII encryption (at rest/in transit)?
PII is encrypted at rest using AES-256 and in transit using TLS 1.2 or 1.3. We employ robust encryption methods for PII, including column-level encryption at rest for sensitive fields like name and phone number.
For data in transit, we utilize secure protocols like TLS/SSL to ensure confidentiality.
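On the in-transit side, enforcing a TLS 1.2 floor is a one-line setting in most stacks. Here is how it looks with Python's standard `ssl` module (illustrative only, not the platform's actual configuration):

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() also validates server certificates by default.
print(ctx.verify_mode == ssl.CERT_REQUIRED)             # True
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)    # True
```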
Describe your incident response process and reporting timelines.
24/7 Monitoring:
Our system is continuously monitored using AWS CloudWatch, Datadog, and UptimeRobot for performance issues, system alerts, and service outages.
In case of any disruption, alerts trigger predefined responses and notifications to the Incident Response Team (IRT), which operates 24/7.
Are there any open-source components with GPL/AGPL licenses?
No, we do not use open-source components with GPL or AGPL licenses to avoid copyleft obligations that could require us to disclose proprietary source code.
We primarily use MIT, Apache 2.0, and BSD licensed components, which allow commercial use without imposing source code distribution requirements.
All third-party dependencies are reviewed for license compliance before integration.
How do you track license compliance for dependencies?
We track license compliance for dependencies through:
Software Bill of Materials (SBOM): We generate an SBOM to list all dependencies and their associated licenses.
Manual Review: The team reviews dependencies when adding new packages or updating existing ones to ensure compliance with our licensing policies.
Policy-Based Restrictions: We avoid dependencies with GPL/AGPL licenses if they conflict with our business needs.
While we do not conduct formal audits or use automated license scanning tools, we maintain awareness of our dependencies and their licensing terms as part of our development workflow.
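The policy-based restriction above amounts to screening an SBOM's license field against a denylist. A minimal sketch under that assumption (package names and license strings are made up for illustration):

```python
DENYLIST = ("GPL", "AGPL")   # copyleft families we avoid

def check_licenses(sbom: dict[str, str]) -> list[str]:
    """Return packages from an SBOM whose license matches the denylist.

    The substring match on 'GPL' also catches AGPL/GPL-3.0 variants.
    """
    return [
        pkg for pkg, lic in sbom.items()
        if any(term in lic.upper() for term in DENYLIST)
    ]

sbom = {"laravel/framework": "MIT", "some-lib": "GPL-3.0", "aws-sdk-php": "Apache-2.0"}
print(check_licenses(sbom))  # ['some-lib']
```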
Describe your CI/CD pipeline (tools, process, approvals).
Validation & Deployment
Laravel Vapor is used for infrastructure automation and deployment management, with CI/CD pipelines handling releases and security patches.
Fixes undergo internal testing and code reviews before deployment.
Security patches are rolled out using CI/CD pipelines for quick implementation.
Post-deployment validation ensures the issue is resolved without regressions.
How is Infrastructure-as-Code (IaC) implemented? (Terraform, CloudFormation)
CI/CD pipeline: Laravel Vapor is used for infrastructure automation and deployment management.
Infrastructure-as-Code (IaC): Implemented using Laravel Vapor.
Are automated backups in place? Describe the process and frequency.
Daily backups of data and configurations are taken across all critical systems, including databases, application data, and server configurations.
These backups are stored across different availability zones (AZs) for redundancy.
Snapshotting:
Snapshots of databases are taken regularly for quick restoration in case of failure.
High Availability (HA) Setup
Redundant Network Architecture:
Our network architecture is designed to avoid single points of failure. This includes multi-AZ deployment for all critical components (e.g., web servers, databases, queues, etc.) to ensure that traffic can be rerouted if one part of the system fails.
AWS Lambda for High Availability:
We use AWS Lambda to ensure high availability by leveraging its ability to run code in response to events across multiple AWS regions.
Lambda functions can be triggered automatically to handle failures or disruptions, ensuring system continuity with minimal human intervention.
How is monitoring and alerting implemented (CloudWatch, X-Ray, etc.)?
Monitoring & Alerts
We use advanced monitoring tools to track access, detect anomalies, and prevent breaches.
Access Logs & Monitoring
All authentication attempts, failed logins, and access events are logged.
Logs are stored securely and reviewed periodically.
AWS CloudTrail & SIEM tools are used for detecting suspicious activity.
Real-time Breach Alerts
Repeated failed login attempts trigger account lockout & alerts.
Anomalous behavior detection (e.g., logins from unusual locations or devices).
Audit Logs & Forensics
Admin and compliance officers have access to audit logs to track who accessed what data.
Immutable logs ensure that access records cannot be tampered with.
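One common way tamper-evident audit trails are built is hash chaining, where each entry commits to the hash of the one before it. A sketch of that general technique (not necessarily the platform's exact mechanism):

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an audit event, chaining it to the previous entry's hash.

    Modifying any earlier entry breaks every hash after it, which is
    how the chain makes tampering detectable.
    """
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list[dict]) -> bool:
    """Recompute the chain and confirm every link still matches."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"user": "admin-1", "action": "viewed_kyc", "record": "tx-42"})
append_entry(log, {"user": "agent-7", "action": "created_tx", "record": "tx-43"})
print(verify(log))                      # True
log[0]["event"]["action"] = "nothing"   # tamper with history
print(verify(log))                      # False
```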
Do you use Multi-Account AWS structure or single account?
Single Account Per Deployment
Describe your commission management process and configuration flexibility.
Our commission management system is designed to be highly configurable, supporting various commission structures based on business needs.
Commission Structure
Agent and Business Split: Commissions can be split between the agent and the business based on predefined rules.
FX Markup Earnings: Agents can earn commissions through foreign exchange (FX) markups in addition to standard transaction fees.
Configuration Flexibility
Percentage-Based or Fixed Fees: The system supports both percentage-based and fixed commissions for agents and the business.
Custom Rules: Businesses can define custom commission rules, including special rates for specific agents or partners.
Real-Time Updates: Changes to commission structures can be applied dynamically without requiring system downtime.
This flexible commission model ensures adaptability to different business models while maintaining transparency and automation in commission calculations.
How is agent onboarding managed? (Manual, automated, APIs)
Agent onboarding is managed manually to ensure proper verification and compliance with business policies.
Onboarding Steps:
Agent Application Submission – Agents provide necessary details, including identification and business credentials.
Manual Verification – The submitted information is reviewed for accuracy and compliance.
Approval and Account Creation – Once verified, the agent is manually added to the system.
Commission & Access Configuration – The agent’s commission structure and permissions are set up based on predefined rules.
Training & Activation – Agents receive onboarding guidance before they can start processing transactions.
This manual approach ensures that only verified agents are onboarded, reducing risk while maintaining compliance with business and regulatory requirements.
What levels of role-based access are implemented for partners and agents?
The system provides a configurable role-based access control (RBAC) framework, allowing businesses to define roles and permissions based on their operational needs.
Businesses can create custom roles for partners and agents, specifying granular access levels.
Permissions can be assigned for actions such as transaction processing, reporting, fee configuration, and compliance reviews.
Access can be restricted to specific functionalities, ensuring data security and operational control.
The system supports hierarchical role structures, enabling businesses to set different levels of authority within their partner and agent network.
This flexibility allows businesses to tailor access control without modifying core system logic, ensuring adaptability to diverse operational models.
What are your Disaster Recovery (DR) and failover processes?
Disaster Recovery Strategy (DRS)
Cloud-Based Infrastructure (AWS):
Our infrastructure is fully hosted on AWS to take advantage of its built-in redundancy, scalability, and fault tolerance. Key elements include:
Multi-Region Deployment:
Critical services are replicated across multiple AWS regions to ensure availability in case of regional outages. This ensures that if one region experiences a disruption, services can failover to another region.
Automated Failover:
Services, databases, and APIs are designed for automated failover in the event of infrastructure failure, ensuring minimal service disruption.
Backups:
Daily backups of data and configurations are taken across all critical systems, including databases, application data, and server configurations. These backups are stored across different availability zones (AZs) for redundancy.
Snapshotting:
Snapshots of databases are taken regularly for quick restoration in case of failure.
High Availability (HA) Setup
Redundant Network Architecture:
Our network architecture is designed to avoid single points of failure. This includes multi-AZ deployment for all critical components (e.g., web servers, databases, queues, etc.) to ensure that traffic can be rerouted if one part of the system fails.
AWS Lambda for High Availability:
We use AWS Lambda to ensure high availability by leveraging its ability to run code in response to events across multiple AWS regions.
Lambda functions can be triggered automatically to handle failures or disruptions, ensuring system continuity with minimal human intervention.
Data Integrity and Backup
Real-Time Data Replication:
Real-time data replication between primary and backup systems ensures that we are always working with the most up-to-date information. This helps minimize the impact of data loss during service disruptions.
Backup Testing:
We regularly test data restoration from backups to ensure the integrity and reliability of our backup systems. This ensures that recovery times are minimized during actual incidents.
Post-Incident Review and Continuous Improvement
Root Cause Analysis (RCA):
After a major disruption or incident, we conduct a Root Cause Analysis to identify the cause of the disruption, evaluate the effectiveness of our response, and implement improvements to prevent recurrence.
Testing and Drills:
Regular disaster recovery drills and business continuity simulations are conducted to ensure that all team members are prepared to respond quickly and effectively to any disruption, minimizing downtime and service impact.
What is your current maximum transaction throughput per Second?
Since we are using AWS Lambda, our maximum transaction throughput per second depends on several factors:
Lambda Concurrency Limits: AWS Lambda scales automatically based on incoming requests. The default concurrency limit per region is 1,000 concurrent executions, but this can be increased upon request.
Execution Duration: If each transaction takes 100ms, we can handle 10 transactions per second per concurrent execution. With 1,000 concurrent executions, that means 10,000 transactions per second at default limits.
Database and External API Limits: Throughput is also constrained by database performance and third-party APIs (e.g., payment gateways).
To optimize throughput, we:
Use asynchronous processing with AWS SQS and SNS.
Implement batch processing where applicable.
Monitor and scale database read/write capacities dynamically.
If required, we can request AWS to increase our concurrency limits to achieve even higher throughput.
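The throughput ceiling quoted above follows from a simple formula: each concurrent execution completes 1000/duration-in-ms requests per second, so the pool's ceiling is concurrency times that rate.

```python
def max_tps(concurrency: int, avg_duration_ms: float) -> float:
    """Upper-bound throughput for a Lambda-style worker pool.

    Real throughput is further capped by the database and by
    third-party payment APIs, so treat this as a ceiling.
    """
    return concurrency * (1000 / avg_duration_ms)

print(max_tps(1_000, 100))  # 10000.0 -> the 10,000 TPS figure at default limits
```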
We've highlighted the most common FAQs above. If you're still looking for a specific answer, request a demo.
