FieldPay CRM: Architecture Case Study
Project Type: Enterprise Mobile Architecture Reference
Technologies: React Native, Expo, Fastify, TypeScript, Salesforce API, Stripe API
Patterns Demonstrated: Backend-for-Frontend, Offline Action Queue, PaymentIntent Flow, Monorepo Architecture
GitHub: ihafkenschiel/fieldpay-crm
Introduction
Enterprise field sales teams often operate in environments that mainstream software design rarely considers: warehouse floors with signal dead zones, rural customer sites, and industrial facilities where connectivity is unreliable or absent.
For a sales representative using a mobile CRM to create invoices and collect payments, network dependency is not a minor inconvenience — it is a workflow blocker.
FieldPay CRM is a reference architecture and demonstration implementation designed to illustrate how these constraints can be resolved in enterprise mobile systems — without sacrificing security, integration reliability, or code maintainability. It simulates a field sales platform that connects to Salesforce CRM and Stripe payment processing through a unified mobile application, deployed from a single codebase across iOS, Android, and Web.
This case study documents the architectural decisions behind the system, the tradeoffs involved, and the patterns that carry this design into production.
Role and Scope
I designed and implemented the FieldPay CRM reference architecture as a demonstration of enterprise mobile architecture patterns used in field-service and sales platforms.
The project addressed several concerns that recur across enterprise mobile engagements:
- integrating mobile clients with external enterprise systems such as Salesforce
- protecting service credentials that cannot be safely stored on a device
- supporting reliable write workflows in environments with intermittent connectivity
- maintaining a shared domain model and typed contracts across the client and server boundary
The implementation includes both the mobile application and the Backend-for-Frontend server that mediates access to external services.
System Architecture
The system is organized into three layers: a cross-platform mobile client, a Backend-for-Frontend (BFF) server, and two external services — Salesforce and Stripe.
```
    Client Applications
   (iOS • Android • Web)
            │
            ▼
Backend-for-Frontend (Fastify)
            │
            ├── Salesforce CRM
            └── Stripe Payments
```

The client is built with React Native and Expo, sharing a single codebase across all three deployment targets. The BFF is a Fastify server written in TypeScript. It owns all secret credentials and serves as the sole intermediary between the mobile client and external services.
The client communicates only with the BFF. It has no direct knowledge of Salesforce endpoints, OAuth flows, or Stripe secret keys.
Backend-for-Frontend Pattern
The Problem with Direct Mobile Integration
Mobile application bundles — both iOS IPA files and Android APKs — can be decompiled and inspected. Any secret embedded in the client bundle is, in practice, publicly accessible. This makes it impossible to securely store the credentials required by Salesforce (OAuth client secret, access tokens) or Stripe (secret API key, webhook signing secret) on the device.
Even with TLS in place, secrets embedded in a client can still be extracted through static analysis or debugging proxies. Enterprise services including Salesforce and Stripe explicitly prohibit embedding their credentials in client applications.
Direct integration would also impose the full complexity of OAuth token management, Stripe PaymentIntent flows, and webhook verification on the mobile client — responsibilities better handled server-side.
The BFF as Security Boundary
The BFF acts as the trust boundary. All secret credentials are loaded from environment variables at server startup and never transmitted to the client. When the mobile client needs to query Salesforce or initiate a Stripe payment, it makes authenticated requests to the BFF, which performs the actual external API calls and returns a shaped response.
This enables a number of production-relevant capabilities without client-side changes:
- Credential rotation can be performed by updating server environment variables and redeploying, with no app release required.
- API response transformation can be applied centrally, reducing client-side data processing.
- Rate limiting and retry logic can be enforced at the server layer.
- Audit logging of external API calls is centralized and consistent.
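The trust boundary can be made concrete with a short sketch. This is illustrative, not the project's actual code: the environment variable names, the raw Salesforce record shape, and the DTO fields are assumptions. The point is that secrets are loaded once at server startup and fail fast if absent, and that external responses are centrally reshaped so only workflow-relevant fields ever reach the client.

```typescript
// Hypothetical sketch of the BFF's secret isolation and response shaping.
// Env var names and record shapes are illustrative assumptions.

interface SalesforceConfig {
  clientSecret: string;
  instanceUrl: string;
}

// Loaded from server-side environment variables at startup;
// never serialized into any response sent to the client.
function loadSalesforceConfig(env: Record<string, string | undefined>): SalesforceConfig {
  const clientSecret = env.SALESFORCE_CLIENT_SECRET;
  const instanceUrl = env.SALESFORCE_INSTANCE_URL;
  if (!clientSecret || !instanceUrl) {
    throw new Error("Missing Salesforce configuration"); // fail fast at boot
  }
  return { clientSecret, instanceUrl };
}

// Raw external record vs. the shaped response the mobile client receives.
interface RawAccount { Id: string; Name: string; OwnerId: string; attributes: unknown }
interface AccountDto { id: string; name: string }

// Central transformation: the client sees only what its workflow needs.
function shapeAccount(raw: RawAccount): AccountDto {
  return { id: raw.Id, name: raw.Name };
}
```

Because shaping happens in one place on the server, field-level changes (renames, redactions) ship without an app release.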
API Surface
The BFF exposes a purpose-built API surface designed for the mobile client's needs:
```
/auth/login               Session token exchange
/auth/refresh             Token refresh
/salesforce/accounts      Account list and search
/salesforce/invoices      Invoice creation and updates
/stripe/payment-intent    PaymentIntent creation
/stripe/webhook           Stripe event receiver
/sync/actions             Offline action replay
```

Each endpoint represents a mobile workflow, not a direct proxy of an external API. This distinction matters: the BFF is not a transparent pass-through. It is a contract defined by the client's requirements, which the server fulfills using whatever external calls are necessary.
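A minimal sketch of that idea: one mobile-workflow endpoint may fan out to several external calls behind a single client-facing contract. The client interface, handler name, and input shape below are illustrative assumptions, with external clients injected so the handler stays testable without live services.

```typescript
// Hypothetical sketch: a workflow endpoint that aggregates external calls.
// Interface and field names are assumptions, not the project's actual API.

interface SalesforceClient {
  createInvoice(input: { accountId: string; amount: number }): Promise<{ Id: string }>;
  getAccount(id: string): Promise<{ Id: string; Name: string }>;
}

interface InvoiceResponse { invoiceId: string; accountName: string; amount: number }

// The handler behind something like /salesforce/invoices, as a plain function:
// it fulfills the client's contract using whatever external calls it needs.
async function createInvoiceWorkflow(
  sf: SalesforceClient,
  input: { accountId: string; amount: number }
): Promise<InvoiceResponse> {
  const account = await sf.getAccount(input.accountId); // enrich for display
  const invoice = await sf.createInvoice(input);        // perform the write
  return { invoiceId: invoice.Id, accountName: account.Name, amount: input.amount };
}
```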
Offline-First Architecture
The Queue Model
Write operations are serialized as QueuedAction objects before being submitted to the server. Each action carries a type discriminator, an action-specific payload, retry tracking, and status:
```typescript
interface QueuedAction {
  id: string;
  type: 'create_invoice' | 'update_invoice' | 'sync_payment';
  payload: Record<string, unknown>;
  status: 'pending' | 'processing' | 'failed';
  attempts: number;
  maxAttempts: number;
  lastError?: string;
  createdAt: string;
}
```

Actions are stored in a Zustand store backed by AsyncStorage on native and localStorage on web. When connectivity is unavailable, the application queues the action locally and continues. The user receives confirmation that their action has been recorded, not that it has been committed to Salesforce.
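The enqueue path can be sketched as a pure function over that shape. This is a simplification: the Zustand/AsyncStorage persistence layer is abstracted to a plain array, and the id source is an illustrative counter where production code would use a UUID.

```typescript
// Minimal sketch of the enqueue path, assuming the QueuedAction shape above.
// Persistence wiring (Zustand + AsyncStorage/localStorage) is omitted.

interface QueuedAction {
  id: string;
  type: "create_invoice" | "update_invoice" | "sync_payment";
  payload: Record<string, unknown>;
  status: "pending" | "processing" | "failed";
  attempts: number;
  maxAttempts: number;
  lastError?: string;
  createdAt: string;
}

let nextId = 0; // illustrative; production code would use a UUID

function enqueue(
  queue: QueuedAction[],
  type: QueuedAction["type"],
  payload: Record<string, unknown>
): QueuedAction[] {
  const action: QueuedAction = {
    id: `action-${++nextId}`,
    type,
    payload,
    status: "pending",
    attempts: 0,
    maxAttempts: 3,
    createdAt: new Date().toISOString(),
  };
  // Appending preserves user intent order for later sequential replay.
  return [...queue, action];
}
```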
Why Action Replay, Not Full Sync
A common alternative is maintaining a local database replica and performing bidirectional synchronization with the server. This approach was intentionally avoided.
Full data synchronization introduces conflict resolution complexity, schema migration concerns, and significant client-side state management. For the primary field workflow — creating invoices and recording payments — these costs outweigh the benefits. The dominant use case is writes, and action replay is designed for exactly that.
Action replay is narrower and more reliable. The client does not attempt to maintain a local replica of truth. It records what the user intended to do, in the order they intended to do it, and submits that record when connectivity permits. The BFF processes actions sequentially and returns a structured result:
```json
{
  "succeeded": ["action-id-1", "action-id-2"],
  "failed": [
    { "id": "action-id-3", "error": "Invoice not found" }
  ]
}
```

Actions that succeed are dequeued. Actions that fail have their attempt count incremented. After three failures, an action is marked as permanently failed and surfaced to the user for manual review.
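A sketch of how the client might fold that result back into its local queue, assuming the QueuedAction fields from the case study (trimmed here to the fields the logic touches). The function names are illustrative, not the project's actual API.

```typescript
// Hypothetical sketch: applying the BFF's replay result to the local queue.
// Succeeded actions are dequeued; failed ones get attempts incremented and
// are marked failed once maxAttempts is reached.

interface QueuedAction {
  id: string;
  status: "pending" | "processing" | "failed";
  attempts: number;
  maxAttempts: number;
  lastError?: string;
}

interface SyncResult {
  succeeded: string[];
  failed: { id: string; error: string }[];
}

function applySyncResult(queue: QueuedAction[], result: SyncResult): QueuedAction[] {
  const errors = new Map(result.failed.map((f) => [f.id, f.error]));
  return queue
    .filter((a) => !result.succeeded.includes(a.id)) // dequeue successes
    .map((a) => {
      const error = errors.get(a.id);
      if (error === undefined) return a; // untouched by this replay
      const attempts = a.attempts + 1;
      return {
        ...a,
        attempts,
        lastError: error,
        // After maxAttempts failures the action is surfaced for manual review.
        status: attempts >= a.maxAttempts ? "failed" : "pending",
      };
    });
}
```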
Failure Classification
Not all failures are equal. The retry strategy distinguishes between transient and permanent failures:
| Error Category | Retryable | Behavior |
|---|---|---|
| Network timeout / 5xx | Yes | Retry with backoff |
| 4xx client error | No | Mark failed immediately |
| 401 Unauthorized | No | Trigger re-authentication |
| 404 Not Found | No | Mark failed; resource deleted |
| 409 Conflict | No | Mark failed; log for review |
This distinction prevents wasted retry attempts on errors that will not resolve with additional requests, while ensuring transient connectivity failures do not permanently discard user data.
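The table above reduces to a small classification function. The function and the backoff parameters below are illustrative sketches, not the project's actual implementation; the base delay, cap, and doubling factor are assumed values.

```typescript
// Sketch of the failure-classification table as code.
// A null status represents a network timeout with no HTTP response.

type FailureAction = "retry" | "fail" | "reauthenticate";

function classifyFailure(httpStatus: number | null): FailureAction {
  if (httpStatus === null) return "retry";          // network timeout: transient
  if (httpStatus >= 500) return "retry";            // server error: transient
  if (httpStatus === 401) return "reauthenticate";  // expired session
  return "fail";                                    // other 4xx: retrying won't help
}

// Exponential backoff for retryable failures (base and cap are assumptions).
function backoffMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```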
Payment Processing
Payments use Stripe's PaymentIntent flow, which ensures that final charge authorization occurs server-side:
- The client requests a PaymentIntent from the BFF, supplying the invoice amount.
- The BFF calls Stripe's API and returns only the client_secret to the mobile client.
- The Stripe payment sheet collects card details and completes the authorization using the client_secret.
- Stripe sends a webhook event to the BFF confirming the charge.
- The BFF updates the invoice status in Salesforce to paid.
The Stripe secret API key is never transmitted to the client. The client receives only what it needs to render the payment UI. Confirmation of payment is driven by Stripe's webhook delivery, not by client-side assertion.
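The webhook-driven confirmation step can be sketched as a mapping from Stripe event types to the invoice status the BFF writes back to Salesforce. The event names follow Stripe's published event types; the status values and function name are illustrative assumptions.

```typescript
// Sketch: Stripe webhook events mapped to invoice status transitions.
// Only a webhook-verified event, never a client-side assertion, drives "paid".

type InvoiceStatus = "paid" | "payment_failed" | null;

function invoiceStatusForEvent(eventType: string): InvoiceStatus {
  switch (eventType) {
    case "payment_intent.succeeded":
      return "paid";            // BFF marks the Salesforce invoice paid
    case "payment_intent.payment_failed":
      return "payment_failed";  // surfaced for follow-up
    default:
      return null;              // event not relevant to invoice state
  }
}
```

In a real handler this would run only after verifying the webhook signature with Stripe's signing secret, which, like the API key, lives only on the server.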
Security Model
Three properties define the security posture of the system:
Secret isolation. Salesforce OAuth credentials and Stripe API keys are loaded from server-side environment variables. They are not present in the client bundle, not transmitted in API responses, and not accessible to device inspection tools.
Token-based session management. After authentication, the BFF issues JWT access tokens and refresh tokens. On native platforms, tokens are stored in the platform keychain via expo-secure-store, which is backed by iOS Keychain and Android Keystore. On web, tokens are stored in memory to avoid XSS exposure from localStorage.
Stateless server. The BFF does not maintain server-side session state. JWT validation is self-contained. This enables horizontal scaling without session affinity requirements.
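The platform-dependent token storage can be expressed behind one interface. The interface and the in-memory web implementation below are an illustrative sketch; the native adapter is only indicated in a comment, since wiring expo-secure-store requires a device runtime.

```typescript
// Sketch of platform-dependent token storage behind a single interface.
// Names are illustrative assumptions.

interface TokenStorage {
  save(accessToken: string, refreshToken: string): Promise<void>;
  getAccessToken(): Promise<string | null>;
  clear(): Promise<void>;
}

// Web: tokens live only in memory, so an XSS payload reading localStorage
// finds nothing; tokens vanish on page reload, forcing re-authentication.
class MemoryTokenStorage implements TokenStorage {
  private access: string | null = null;
  private refresh: string | null = null;
  async save(accessToken: string, refreshToken: string): Promise<void> {
    this.access = accessToken;
    this.refresh = refreshToken;
  }
  async getAccessToken(): Promise<string | null> {
    return this.access;
  }
  async clear(): Promise<void> {
    this.access = null;
    this.refresh = null;
  }
}

// On native, an equivalent class would delegate to expo-secure-store's
// setItemAsync/getItemAsync, backed by iOS Keychain and Android Keystore.
```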
Monorepo Architecture
The project uses a monorepo with npm workspaces:
```
apps/mobile/           Expo application
packages/core/         Domain models and utilities
packages/ui/           Shared React Native components
packages/api-client/   Typed HTTP client
server/                Fastify BFF
```

Shared domain models — invoice types, account shapes, queue action interfaces — are defined once in packages/core and imported by both the mobile application and the server. This eliminates a category of runtime failures that occur when client and server maintain separate type definitions that drift out of alignment.
The typed API client in packages/api-client generates typed request and response contracts from the same domain models, providing compile-time verification that client usage matches server expectations.
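One way such a client can bind paths to response types at compile time is sketched below. The endpoint map, DTO fields, and function names are illustrative assumptions, not the project's actual packages/api-client code.

```typescript
// Hypothetical sketch of a typed API client: each path is bound to its
// response type, so mismatched usage fails type checking, not at runtime.

interface AccountDto { id: string; name: string }
interface InvoiceDto { id: string; accountId: string; amount: number; status: string }

// Response type per endpoint, derived (in the real project) from shared
// domain models in packages/core.
interface EndpointResponses {
  "/salesforce/accounts": AccountDto[];
  "/salesforce/invoices": InvoiceDto;
}

type Fetcher = (path: string) => Promise<unknown>;

function makeApiClient(fetcher: Fetcher) {
  return {
    // A call site requesting "/salesforce/accounts" gets AccountDto[] back.
    async get<P extends keyof EndpointResponses>(path: P): Promise<EndpointResponses[P]> {
      return (await fetcher(path)) as EndpointResponses[P];
    },
  };
}
```

Injecting the fetcher keeps the client testable and lets the same contract run against a live BFF or a mock.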
Quality gates enforce consistency across the monorepo: TypeScript type checking, ESLint, and a Jest test suite are all run as pre-commit hooks and in CI. A single npm run validate command runs all three.
Observability
In field deployments, support engineers rarely have physical access to the user's device. Diagnostic visibility must therefore be built into the application itself.
The mobile client emits structured diagnostic events throughout the application lifecycle — auth transitions, sync attempts, action failures, network state changes. These events are logged to an in-app diagnostics store that can be surfaced without device access or console attachment.
This addresses a recurring problem in mobile production support: the user's device is not available for inspection, the error is not reproducible in development, and the only information available is a user description of what happened. A persistent diagnostic event log provides the next best thing to a server-side audit trail.
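Such a diagnostics store can be sketched as a bounded ring buffer, so the log survives long sessions without unbounded memory growth. The class, event fields, and capacity below are illustrative assumptions.

```typescript
// Sketch of an in-app diagnostics store as a bounded ring buffer.
// Field names and the default capacity are illustrative choices.

interface DiagnosticEvent {
  timestamp: string;
  category: "auth" | "sync" | "network" | "action";
  message: string;
  detail?: Record<string, unknown>;
}

class DiagnosticsStore {
  private events: DiagnosticEvent[] = [];
  constructor(private capacity = 500) {}

  emit(category: DiagnosticEvent["category"], message: string, detail?: Record<string, unknown>): void {
    this.events.push({ timestamp: new Date().toISOString(), category, message, detail });
    if (this.events.length > this.capacity) this.events.shift(); // drop oldest
  }

  // Surfaced on an in-app diagnostics screen; the same structure could be
  // forwarded to an external observability platform.
  recent(limit = 50): DiagnosticEvent[] {
    return this.events.slice(-limit);
  }
}
```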
For production deployments, the same event structure can be forwarded to an external observability platform — Datadog, Sentry, or similar — with minimal modification to the emission layer.
Deployment Model
Because the BFF is stateless, it can be deployed on any container or Node.js hosting platform (Railway, Render, Fly.io, AWS ECS, etc.) and scaled horizontally behind a load balancer. The /health endpoint supports orchestrator health checks. No session affinity is required.
At scale, a caching layer (Redis) can reduce Salesforce API load for account and contact data, which changes infrequently. Invoice data should not be cached, as status can change from server-side webhook delivery at any time. An API gateway in front of the BFF can handle rate limiting and centralized JWT validation without modification to the BFF itself.
Lessons and Tradeoffs
Direct client-to-API integration was rejected for security, not convenience. The credential exposure problem is architectural, not a matter of implementation quality. No amount of obfuscation makes secrets safe in a client bundle. The BFF is the correct boundary.
Action replay was chosen over bidirectional sync for scope and reliability reasons. A full offline data replica would have required conflict resolution logic, schema synchronization, and significantly more storage management. For a write-heavy workflow like invoice creation, it would have delivered marginal read-side value at substantial complexity cost.
The monorepo structure pays forward. Shared types across the client, API client, and server eliminate a class of integration bugs that manifest late and are difficult to trace. The investment in workspace configuration is recovered quickly as the surface area of API contracts grows.
Mock mode is a first-class concern. The BFF supports SALESFORCE_MODE=mock and STRIPE_MODE=mock environment flags that replace external API calls with deterministic local responses. This enables full end-to-end development and demonstration without live credentials. In consulting and enterprise contexts, this matters: demos should not depend on external service availability.
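The switch itself can be as small as the sketch below. The gateway interface and mock response are illustrative assumptions; only the STRIPE_MODE=mock flag comes from the case study.

```typescript
// Sketch of the mock-mode switch: the environment flag selects a
// deterministic local implementation instead of a live client.

interface PaymentGateway {
  createPaymentIntent(amount: number): Promise<{ clientSecret: string }>;
}

const mockGateway: PaymentGateway = {
  // Deterministic response: demos never depend on Stripe availability.
  async createPaymentIntent(amount) {
    return { clientSecret: `mock_secret_${amount}` };
  },
};

// e.g. selectGateway(process.env.STRIPE_MODE, liveStripeGateway)
function selectGateway(mode: string | undefined, live: PaymentGateway): PaymentGateway {
  return mode === "mock" ? mockGateway : live;
}
```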
Conclusion
The patterns demonstrated here — Backend-for-Frontend boundaries, offline action queues, and server-verified payment workflows — recur across enterprise mobile systems in regulated industries.
FieldPay CRM illustrates how these patterns can be combined into a system that remains operational in unreliable network conditions while maintaining clear security and integration boundaries. The design emphasizes architectural clarity, operational visibility, and simplicity in failure recovery — concerns that become critical as mobile systems move from prototype to production deployment.
The approach applies directly to production systems in healthcare, financial services, logistics, and other regulated environments where auditability, security, and resilience under adverse conditions are requirements rather than aspirations.
GitHub: ihafkenschiel/fieldpay-crm