When Good Servers Go Bad: How a Serverless Strategy Saved a Food-ordering App

Senior software engineer Don C Varghese explains how cloud-based microservices offered just the right recipe to cure dropped orders and bloated code.

Customers who order cake want to eat it too. I was part of an AOT Technologies team developing Moduurn Mobility’s food-ordering app, and things were going pretty well—except sometimes orders would get lost, much to the chagrin of both customer and client. We traced the problem to a couple of factors: peak traffic overwhelmed our queues, and some rejected orders weren’t triggering the right messages.

Using microservices and a serverless architecture, we solved these problems and made coding so much easier that we can now integrate a new point-of-sale (POS) system in a quarter of the time! I’d like to tell you how we switched to an AWS serverless model, taking advantage of its built-in queueing, messaging, and step functions to make sure our app would sparkle.

Dealing with the Congestion

Moduurn Mobility’s platform allows any business, large or small, to take orders through its own customized website and mobile app. Moduurn Mobility’s clients use a variety of point-of-sale protocols and payment processors, so integrating those systems with Moduurn’s main “ordering server” was a challenge from the start (read our earlier case study). Originally the main server’s job was not only to receive and record orders, but also to sort out which POS system a particular vendor was using. Then the server had to make the necessary formatting changes so the vendor’s system could read and accept the order. It’s a potentially congested situation, represented in this diagram:

It’s easy to see that a problem in communicating with one POS system could degrade the server’s performance with other POS systems. Running distinct blocks of code side by side on one server to communicate with all those different systems became a development nightmare, and, worst of all, some orders were being lost when POS systems rejected incoming orders during peak traffic. Customers were showing up at food outlets only to discover the business had no record of their order. To say that this was an issue that needed to be escalated would be a profound understatement.

We were glad the app (and related website) worked pretty well, but clearly if some customers were not getting their orders it undermined the very purpose of the app. We needed to come up with a new systems design, and we needed to do it fast. We came up with a wish list for what the solution should look like:

  • Efficient and cost effective by reducing development time and running expenses
  • High performance by being highly reliable and highly scalable
  • Customer friendly by reliably notifying customers about cancellations
  • Compartmentalized by isolating the main server from POS processing

We researched our options and decided to use Amazon Web Services (AWS) for its serverless features including Simple Queue Service (SQS), AWS Lambda, Amazon SQS dead-letter queues, AWS Step Functions, and Amazon Simple Notification Service (SNS).

I should note that the “serverless” term is a bit misleading. There’s still a server, but all the back-end security and configuration concerns are handled by AWS administrators, freeing up developers to concentrate on the operational code. Another advantage is the on-demand pricing model: serverless functions wake up only when triggered by certain events (such as an order arriving), so you don’t pay for idle time or unused capacity, and the platform scales on demand.

The solution we built uses a MEAN stack (MongoDB, Express.js, AngularJS, and Node.js) and now supports more than 230 organizations in North America. Integration with popular point-of-sale (POS) systems and payment processors is working smoothly and is key to the platform’s success. We have also managed to say good riddance to the cumbersome maintenance that a monolithic application requires, with our new architecture allowing quick and nimble modifications, such as when we integrate an additional POS system.

Exploring the New Architecture Step by Step

We’ll go through the ordering process step by step, but first here’s a look at the overall architecture:

Compared to the monolithic model, the serverless model is compartmentalized and designed to lessen the load on the main server. Using AWS serverless features, the POS middleware provides a POS queue and a messaging service if problems develop with the order. And the box within the box contains the routines for communicating ordering details with various POS systems using their specific protocols (TCP, HTTP, and SOAP in the examples).

So that’s the overall structure. Now let’s move through the process step by step.

Step 1: The Customer Places the Order

Moduurn is an online ordering platform, so customers can easily place an order using the mobile application or from a web browser. The order goes through initial processing on the main server, which validates the details and records the order in the central database.

Step 2: Match the Vendor to a POS System

Although Moduurn Mobility offers its clients the use of its own order manager, most clients will want to process orders using their own legacy POS systems. One nuance is that a restaurant chain quite possibly has a separate POS system at each location. The server then pushes the order details to the main SQS queue, which can handle up to 120,000 in-flight orders at a time.
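(For completeness, creating the queue itself is a one-time setup step. Here’s a minimal sketch using the AWS SDK; the queue name and the long-polling attribute are placeholder assumptions, not the production configuration.)

// One-time setup: create the SQS queue that buffers incoming orders.
// "pos-orders-queue" and the attribute values are illustrative placeholders.
const AWS = require('aws-sdk');
AWS.config.update({ region: 'Region' });

const sqs = new AWS.SQS();

sqs.createQueue({
    QueueName: 'pos-orders-queue',
    Attributes: {
        ReceiveMessageWaitTimeSeconds: '20' // enable long polling
    }
}, function(err, data) {
    if (err) {
        console.log('Error creating queue', err);
    } else {
        console.log('Queue URL:', data.QueueUrl); // use this URL when sending messages
    }
});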

Code Snippet: Getting Orders to the SQS Queue

Once we create the SQS queue, we need to send the order details to SQS from the main server. To do this we must import the AWS SDK library, create an SQS client, then send a message to the queue.

const AWS = require('aws-sdk');
AWS.config.update({
    accessKeyId: 'Your AWS access key Id',
    secretAccessKey: 'Your AWS secret key',
    region: 'Region'
});

// The message body must be a string, so serialize the order details as JSON
const params = {
    MessageBody: JSON.stringify({ /* Your order details */ }),
    QueueUrl: 'Queue URL'
};

const sqs = new AWS.SQS();
sqs.sendMessage(params, function(err, data) {
    if (err) {
        console.log("Error", err);
    } else {
        console.log("Success", data.MessageId);
    }
});

Step 3: SQS Invokes the Middleware Proxy

Taking advantage of the AWS Lambda service, we can arrange it so that a message arriving at the main POS SQS queue invokes the Lambda function that acts as our POS middleware proxy. This Lambda function then sends the order to the step function for further processing designed to match the vendor’s POS protocol.

Code Snippet: Event Triggers a Lambda Handler

If you create a Node.js Lambda function using the Lambda console, Lambda will automatically create default code for the function. The Lambda function handler is the method in your function code that processes events. When your function is invoked, Lambda runs the handler method. When the handler exits or returns a response, it becomes available to handle another event.

The function in the following example logs the contents of the event object and returns the location of the logs.

Example: index.js

exports.handler = async function(event, context) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2))
  return context.logStreamName
}

When you configure a function, the value of the handler setting is the file name and the name of the exported handler method, separated by a dot. The default in the console and for examples in this guide is index.handler. This indicates the handler method that’s exported from the index.js file.

The runtime passes three arguments to the handler method. The first argument is the event object, which contains information from the invoker. The second argument is the context object, which contains information about the invocation, function, and execution environment. The third argument, callback, is a function that you can call in non-async handlers to send a response. For async handlers, you return a response, error, or promise to the runtime instead of using callback.

Code Snippet: Linking the Function to the Queue

Once we create the Lambda function, we need to link this Lambda function with the SQS queue we created earlier. AWS lets you automatically invoke a Lambda function when a message arrives in the SQS queue. The Lambda function will then invoke a step function, so we’ll need to write custom code to invoke this step function and then upload this code to AWS Lambda.

Example: index.js

const AWS = require('aws-sdk');
AWS.config.update({
    accessKeyId: 'Your AWS access key Id',
    secretAccessKey: 'Your AWS secret key',
    region: 'Region'
});

exports.handler = async (event) => {
    // Each SQS record becomes one Step Functions execution
    const PromiseArray = event.Records.map( eachMessage => {
        const params = {
            stateMachineArn: 'Your State Machine Arn',
            input: eachMessage.body
        };

        const stepFunctions = new AWS.StepFunctions();
        return stepFunctions.startExecution(params).promise();
    });

    const result = await Promise.all(PromiseArray).catch( err => ({ error: err }) );
    if (result.error)
        console.log('error', result);
    return result;
};
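If you’d rather script the queue-to-function link than set it up in the console, here’s a rough sketch using the SDK’s createEventSourceMapping call; the queue ARN, function name, and batch size are placeholder assumptions.

// Sketch: subscribe the Lambda function to the SQS queue so that new
// messages invoke it automatically. The ARN and function name are placeholders.
const AWS = require('aws-sdk');
AWS.config.update({ region: 'Region' });

const lambda = new AWS.Lambda();

lambda.createEventSourceMapping({
    EventSourceArn: 'arn:aws:sqs:Region:123456789012:pos-orders-queue', // placeholder
    FunctionName: 'posMiddlewareProxy',                                 // placeholder
    BatchSize: 10,      // how many messages Lambda pulls per invocation
    Enabled: true
}, function(err, data) {
    if (err) {
        console.log('Error creating event source mapping', err);
    } else {
        console.log('Mapping UUID:', data.UUID);
    }
});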

Developer Tip: Uploading the Deployment Package

The deployment package is a ZIP file that includes both your custom JavaScript code and any dependencies that your code needs to run. The root level of the ZIP file contains any custom JavaScript code you’ve written, plus a directory called node_modules. Inside the node_modules directory are all the dependencies your code needs. For example, our deployment ZIP looks like this:

index.js
transform.js
package.json
node_modules/json-to-xml
node_modules/xml-to-json

When you compress your Lambda function into a deployment ZIP, if you’ve been storing the code and dependencies in a directory, be sure to compress the contents of the directory, and not the directory itself.

Step 4: Send Order to Correct POS Workflow

This is the most important step: converting the order details into the format the vendor’s particular POS system expects. We designed a step function to handle each POS-specific workflow using Workflow Studio, a low-code drag-and-drop interface for designing AWS Step Functions. Below you can see the basic setup of Workflow Studio:

AWS Step Functions also provides the ability to create state machines that can apply a logical flow to the functions you’ve created. Here’s the state machine we designed to choose and apply the correct POS formatting standards:
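To give a sense of what such a state machine looks like under the hood, here is a stripped-down sketch in Amazon States Language. The state names, the $.posType field, and the Lambda ARNs are illustrative assumptions, not our actual workflow definition.

{
  "StartAt": "Format Order",
  "States": {
    "Format Order": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:Region:123456789012:function:formatOrder",
      "Next": "Choose POS Format"
    },
    "Choose POS Format": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.posType", "StringEquals": "xml-pos",  "Next": "Format XML Order" },
        { "Variable": "$.posType", "StringEquals": "json-pos", "Next": "Format JSON Order" }
      ],
      "Default": "Process Errors"
    },
    "Format XML Order": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:Region:123456789012:function:formatXmlOrder",
      "End": true
    },
    "Format JSON Order": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:Region:123456789012:function:formatJsonOrder",
      "End": true
    },
    "Process Errors": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:Region:123456789012:function:processErrors",
      "End": true
    }
  }
}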

We’ll explore some of these steps in more detail.

Code Snippet: Formatting Order Details

We start off with a Lambda function called Format Order that fetches the order details from the central database and converts the complex order details (menu items, modifiers, tax instructions, and all other related information) into a simplified JSON format. For example, the database may contain this information about the customer:

{
  "customer": {
    "name": "Don",
    "phone": "+0001112223"
  }
}

We then simplify the data structure by flattening the JSON data:

{
  "customerName": "Don",
  "customerPhone": "+0001112223"
}

We then use the npm module node-json-transform for transforming and performing operations on JSON. The simplified, structured format is then used as the input for converting the data to work with the target POS. With the help of the Choice state element, we forward the order details to the corresponding Lambda function. These individual Lambda functions then format the order into different formats, such as JSON or XML, with the appropriate keys or tags according to the POS requirements.

You can read more about node-json-transform on npm.
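As a rough illustration of the kind of mapping node-json-transform performs for us, here is a minimal hand-rolled version of the flattening step shown above. The field names match the example; the real transform maps many more order fields.

// Illustrative only: flatten the nested customer object into the
// simplified structure used as input for the POS-specific formatters.
function flattenCustomer(order) {
  return {
    customerName: order.customer.name,
    customerPhone: order.customer.phone
  };
}

const order = { customer: { name: 'Don', phone: '+0001112223' } };
console.log(flattenCustomer(order));
// => { customerName: 'Don', customerPhone: '+0001112223' }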

Code Snippet: A JSON/XML Example

We start with this data:

{
  "customerName": "Don",
  "customerPhone": "0001112223"
}

Lambda will convert the above data into XML format (the desired format of our example POS):

<xml>
<customerName>Don</customerName>
<customerPhone>0001112223</customerPhone>
</xml>
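For illustration, a bare-bones version of that conversion could be written by hand like this. In the real workflow a JSON-to-XML module (the json-to-xml dependency shown in the deployment package) handles the conversion, along with escaping and nesting.

// Illustrative only: build the XML body expected by our example POS.
// A real implementation would use a JSON-to-XML library and escape values.
function toPosXml(data) {
  return '<xml>' +
    '<customerName>' + data.customerName + '</customerName>' +
    '<customerPhone>' + data.customerPhone + '</customerPhone>' +
    '</xml>';
}

console.log(toPosXml({ customerName: 'Don', customerPhone: '0001112223' }));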

Step 5: Errors and the Dead-letter Queue

After the proper formatting we send the order to the POS’s endpoint. We have a separate Lambda function called Process Errors to handle error conditions. If any formatting or connectivity errors occur, or if the POS rejects the order, we send the order information to this Lambda function, which pushes the order details into the dead-letter queue for further processing. Dead-letter queues are useful for debugging an application or messaging system because they let you isolate unconsumed messages to determine why their processing didn’t succeed.
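The heart of Process Errors is the same sendMessage pattern shown in Step 2, just pointed at the dead-letter queue. Here’s a minimal sketch; the queue URL and the event fields are placeholder assumptions, and the real function records more diagnostic detail.

// Sketch of the Process Errors handler: forward the failed order,
// plus the reason it failed, to the dead-letter queue.
const AWS = require('aws-sdk');
AWS.config.update({ region: 'Region' });

const sqs = new AWS.SQS();

exports.handler = async (event) => {
    const params = {
        QueueUrl: 'Dead-letter queue URL', // placeholder
        MessageBody: JSON.stringify({
            order: event.order,    // the order that could not be processed
            reason: event.error    // formatting, connectivity, or POS rejection
        })
    };
    return sqs.sendMessage(params).promise();
};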

Step 6: Refunds and Cancellations

Unprocessed or failed orders in the dead-letter queue are then consumed by another Lambda function that initiates the refund and cancels the order. With the help of AWS SNS, we can also send notifications to the customer as well as to the store’s authorized recipients.

Code Snippet: Sending a Message to SNS

We’ll then need to upload the code for the Lambda function that sends a message to SNS. This Lambda function is invoked by the dead-letter queue.

Example: notification.js

// Load the AWS SDK for Node.js
const AWS = require('aws-sdk');
// Set region
AWS.config.update({region: 'REGION'});

exports.handler = async function(event, context) {
  // Create publish parameters
  var params = {
    Message: 'MESSAGE_TEXT', /* required */
    TopicArn: 'TOPIC_ARN'
  };

  // Create promise and SNS service object
  var publishTextPromise = new AWS.SNS({apiVersion: '2010-03-31'}).publish(params).promise();

  // Handle promise's fulfilled/rejected states
  return publishTextPromise.then(
    function(data) {
      console.log(`Message ${params.Message} sent to the topic ${params.TopicArn}`);
      console.log("MessageID is " + data.MessageId);
    }).catch(
      function(err) {
        console.error(err, err.stack);
    });
};

You can read more in Getting started with Amazon SNS in the Amazon Simple Notification Service documentation.

Conclusion

In seeking a clean solution to POS integration, the AOT team decided to use microservices on the AWS platform to build a reliable and scalable solution. By implementing this architecture, the team greatly improved operational efficiency: development time for any new POS integration was cut from two months to two weeks or less. The new system also provides extensive monitoring and error handling through the effective use of AWS services such as SQS and SNS. In the end, moving from a monolithic approach to a microservices-based serverless paradigm proved to be a winning strategy.

 

About the Author

Don C Varghese works as a senior software engineer at AOT Technologies. His areas of expertise include Angular, Node.js, AWS, and MongoDB. He specializes in integrations and payment gateways.