# Integration Pattern

It's important for us to have consistent integration patterns throughout the system. They reduce coupling and allow us to innovate without breaking other parts of the system.

We distinguish integration at the system-to-system level from integration at the component-to-component level. System integrations normally have tighter contracts, with validation and backward compatibility in place, because the systems are released separately. Component integrations, in contrast, can have looser boundaries. There's not much benefit in tighter contracts there, as the components are typically released together and breaking changes won't occur between releases. This lets us save time designing contracts and make use of what's already internal to the system.

Some of the examples outlined here might get outdated over time; the best way to find current examples of our integrations is to look in our technology vision. System integrations are shown in the system landscape diagram, and component integrations are shown in the level 2 diagrams.

There are only two ways to integrate: to call, or to listen (to be called). In our technical world, this translates to calling APIs (not necessarily REST) and to listening to events. We capture our patterns along this distinction.

Note: We may use some jargon from this article too, so read it if you haven't: https://martinfowler.com/articles/201701-event-driven.html

## System integration

### Calling

  • Aggregating GraphQL from REST API

    You must use GraphQL to hit our backend from any webapps we build. This is important for performance reasons. Even though our applications are mostly static pages (and therefore need no API calls), parts of them are dynamic (like Shortlist). Apollo Client does its job really well in caching the data returned by our queries, and reduces the complexity of caching the returned data by hand in Vuex.

    This pattern aggregates most of the REST APIs we already have. It arose because we started out by exposing all of our REST APIs using the OpenAPI specification. A minimal sketch of the aggregation is shown after this list.

    Example: Shortlist page to webapp BFF.

  • REST API

    Use REST APIs when communicating from backend to backend systems. Don't be tempted to use the GraphQL instance from the webapp BFF, because it is only meant to be used by our webapp.

    In the provider portal, it's okay to use REST APIs instead of GraphQL, because most of the operations we do are on a single entity (and, historically, we didn't have GraphQL knowledge yet).

    Example: Provider portal to comms service
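
Below is a minimal sketch of the GraphQL-over-REST aggregation idea, assuming an Apollo Server setup on the webapp BFF. The schema, endpoint URLs, and field names are illustrative assumptions, not our real contracts; the point is that one GraphQL query from the webapp fans out to the existing REST APIs, and Apollo Client caches the combined result.

```typescript
import { ApolloServer } from '@apollo/server';
import { startStandaloneServer } from '@apollo/server/standalone';

// Illustrative schema: a shortlist entry whose provider comes from a second REST API.
const typeDefs = `#graphql
  type Provider {
    id: ID!
    name: String!
  }

  type ShortlistEntry {
    id: ID!
    provider: Provider!
  }

  type Query {
    shortlist(userId: ID!): [ShortlistEntry!]!
  }
`;

const resolvers = {
  Query: {
    // One query from the webapp; the BFF aggregates the REST calls behind it.
    shortlist: async (_: unknown, { userId }: { userId: string }) => {
      const res = await fetch(`https://shortlist.internal/users/${userId}/entries`);
      return res.json();
    },
  },
  ShortlistEntry: {
    // Nested field resolved from another REST API; the REST payload is assumed
    // to carry a providerId for each entry.
    provider: async (entry: { providerId: string }) => {
      const res = await fetch(`https://providers.internal/providers/${entry.providerId}`);
      return res.json();
    },
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
await startStandaloneServer(server, { listen: { port: 4000 } });
```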

### Listening

We are heavily constrained here by the technology that AWS provides. AWS offers many services for listening to events, such as SQS, SNS, Kinesis, CloudWatch, and EventBridge. Although there are many services that let one system listen to events from another, this is an area we haven't explored much, and we'll need to experiment to see what suits us. When evaluating these services, it's important to compare how they differ: some may only allow a limited number of consumers, some may not preserve the order of the events they deliver, and so on.

  • ECST via EventBridge

    This pattern works well when an event is fired from a system that is not the data leader for the transferred state. That means further modifications at the event source no longer need to be propagated to the listening systems.

    Example: Creation of a user in Mailchimp. Auth0 -> EventBridge -> Lambda -> Mailchimp. In this case, user attributes like marketing preferences are maintained in Mailchimp, not Auth0.

  • Event Notification via EventBridge

    Because EventBridge doesn't retain the order of events, we can't use it for ECST when many modifications are made to the event source. Therefore, treat events from EventBridge as event notifications, and hit the event source back to query what you need (both styles are sketched after this list).

    Compared to the other AWS services, we think EventBridge tackles most of the issues we're going to run into, and it is known to be very friendly when working across multiple AWS accounts. We think the other AWS services are more suitable for component integration purposes.

    Example: None!
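
The sketch below contrasts the two consumer shapes for events arriving via EventBridge in a Lambda handler. The event names, detail shapes, and helper functions are illustrative assumptions standing in for the real Mailchimp and source-system clients.

```typescript
import type { EventBridgeEvent } from 'aws-lambda';

// ECST: the event carries the full state the consumer needs, so it never
// calls the source system back. Works for one-off events like user creation,
// where ordering is not a concern.
type UserCreatedDetail = { userId: string; email: string; name: string };

export const onUserCreated = async (
  event: EventBridgeEvent<'user.created', UserCreatedDetail>,
): Promise<void> => {
  const { email, name } = event.detail;
  await addMailchimpMember(email, name); // marketing preferences live in Mailchimp from here on
};

// Event notification: EventBridge doesn't guarantee ordering, so when an
// entity is updated repeatedly the consumer only treats the event as a
// trigger and queries the source for the current state.
type UserUpdatedDetail = { userId: string };

export const onUserUpdated = async (
  event: EventBridgeEvent<'user.updated', UserUpdatedDetail>,
): Promise<void> => {
  const current = await fetchCurrentUser(event.detail.userId); // hit the event source back
  await syncDownstream(current);
};

// Hypothetical helpers; in practice these would call the real APIs.
async function addMailchimpMember(email: string, name: string): Promise<void> {
  console.log('add member', email, name);
}
async function fetchCurrentUser(userId: string): Promise<unknown> {
  return { userId };
}
async function syncDownstream(user: unknown): Promise<void> {
  console.log('sync', user);
}
```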

## Component integration

### Calling

  • DynamoDB direct hit

    Hitting DynamoDB directly, instead of building REST APIs, saves us a good amount of time. We don't need to design the API layer, nor implement the driving layer for it. Whenever more complex queries arise, it's easier to tackle them at the code level too! A repository-style sketch is shown after this list.

    Don't worry about bounded contexts if you're still living inside a single system. When we talk about integration, we should look at the more physical view of our architecture, so when a system comprises multiple bounded contexts, feel free to call each other's databases.

    Example: Many! Look for the use of our Repository patterns.

  • REST API / GraphQL

    Use this as a last resort. If your component is a webapp, you'll unfortunately have to use REST API or GraphQL.

    You'll have components that live externally as well. Data stored in third-party applications like Ghost won't be accessible directly, so you'll have to call their APIs. It's good practice to look for the SDKs/clients that the vendors have built; otherwise, build a thin layer to call their API.

    Example: Calling Auth0 in any part of our system
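
Here is a minimal sketch of the DynamoDB direct-hit pattern above, in the repository style we already use. The table name and attribute names are illustrative assumptions; the point is that another component reads the table through a small repository class instead of going through a REST layer.

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand, PutCommand } from '@aws-sdk/lib-dynamodb';

// Document client shared by the repositories in this component.
const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export interface Provider {
  providerId: string;
  name: string;
}

// Repository-style wrapper: the calling component depends on this class,
// not on an HTTP API, because both components live in the same system.
export class ProviderRepository {
  constructor(private readonly tableName = 'providers') {}

  async get(providerId: string): Promise<Provider | undefined> {
    const result = await doc.send(
      new GetCommand({ TableName: this.tableName, Key: { providerId } }),
    );
    return result.Item as Provider | undefined;
  }

  async put(provider: Provider): Promise<void> {
    await doc.send(new PutCommand({ TableName: this.tableName, Item: provider }));
  }
}
```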

### Listening

  • DynamoDB streams

    This has proven really useful for us, especially since we don't need to provision any middleware to fire events. It preserves the event sequence, and consumers are free to treat the events as event notifications or as ECST. A sketch of a stream consumer is shown after this list.

    Example: Inventory to Algolia
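
Below is a minimal sketch of a DynamoDB Streams consumer in the shape of the Inventory-to-Algolia flow. The item field names and the search-index helpers are illustrative stand-ins for the real Algolia client calls.

```typescript
import type { DynamoDBStreamEvent } from 'aws-lambda';
import { unmarshall } from '@aws-sdk/util-dynamodb';

// Lambda attached to the Inventory table's stream. Stream records arrive in
// order per item and carry old/new images, so consumers can treat them as
// ECST or as plain notifications.
export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    if (record.eventName === 'REMOVE' && record.dynamodb?.OldImage) {
      const oldItem = unmarshall(record.dynamodb.OldImage as Record<string, any>);
      await removeFromSearchIndex(String(oldItem.inventoryId));
    } else if (record.dynamodb?.NewImage) {
      // INSERT and MODIFY both carry the new image of the item (ECST-style).
      const newItem = unmarshall(record.dynamodb.NewImage as Record<string, any>);
      await saveToSearchIndex(newItem);
    }
  }
};

// Hypothetical helpers; in practice these would call the Algolia client.
async function saveToSearchIndex(item: Record<string, unknown>): Promise<void> {
  console.log('index', item);
}

async function removeFromSearchIndex(objectId: string): Promise<void> {
  console.log('remove', objectId);
}
```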

## Considerations

When it comes to executing application logic, always pick the Calling patterns first. They're usually simpler to implement and to understand. When there are concerns around performance or different query needs, that's a good time to think about the Listening patterns and store your own data (replicate it and adapt the structure).

When it comes to data replication needs, always pick the Listening patterns first. For example, if we had to replicate our Inventory data to Algolia with the Calling patterns, we'd have to take a cron-based approach and trigger a batch script. That's not ideal: the replication wouldn't happen in real time, causing data inconsistency issues, and as we crank up the frequency of the batch runs we'd also run into problems tracking by hand which data has or hasn't been processed.