Job Queues & CQRS - The patterns you need to scale to a million requests per minute

Santiago Quinteros - CEO & CTO - Software on the road

Job Queues

A job queue is a mechanism for managing and processing tasks or "jobs" asynchronously. It allows applications to offload tasks not immediately necessary for the primary workflow to a separate service or process, which handles these tasks in the background.

This approach is beneficial for tasks that are resource-intensive, time-consuming, or not critical to the system's immediate response, such as sending emails, generating reports, or performing batch data processing.

How It Works

  1. Job Creation: The application creates a job to perform a specific task. This job is then added to the queue, essentially a list of tasks waiting to be processed.

  2. Queue Management: The job queue holds the jobs until they can be processed. Jobs can be prioritized, delayed, or scheduled for future execution based on the application's needs.

  3. Worker Processes: Separate processes or "workers" continuously monitor the job queue for new jobs to process. Once a worker picks a job from the queue, it executes the task associated with that job.

  4. Job Processing: The worker processes the job according to its specified task. This processing happens asynchronously, meaning the application can continue to operate and respond to other requests while the job is being handled in the background.

  5. Completion and Callbacks: Once a job is completed, the worker can notify the application of its completion, update its status, and optionally trigger callbacks or follow-up actions.
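The five steps above can be sketched in a few lines of plain JavaScript. This is an in-memory toy to make the lifecycle concrete, not a production queue: real job queues persist jobs in a datastore such as MongoDB or Redis, and the function and variable names here are illustrative.

```javascript
// In-memory sketch of the job queue lifecycle described above.
const queue = [];
const results = [];

// 1. Job creation: the application enqueues a job instead of running it inline.
function enqueue(name, data, onComplete) {
  queue.push({ name, data, onComplete, status: 'queued' });
}

// 3-5. A worker drains the queue, executes each job, and fires its callback.
function runWorker(handlers) {
  while (queue.length > 0) {
    const job = queue.shift();                    // 2. Queue management: FIFO here
    job.status = 'processing';
    const result = handlers[job.name](job.data);  // 4. Job processing
    job.status = 'completed';
    if (job.onComplete) job.onComplete(result);   // 5. Completion callback
  }
}

// The "send email" work happens only when the worker runs,
// not when the request enqueues it.
enqueue('send-email', { to: 'user@example.com' }, r => results.push(r));
runWorker({
  'send-email': data => `email sent to ${data.to}`,
});
```

In a real system the worker would run in a separate process and the handler would do actual I/O; the shape of the flow, however, is the same.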

Benefits of Using a Job Queue

  1. Improved Application Performance: By offloading heavy or time-consuming tasks to a job queue, the main application thread remains free to handle incoming requests, improving overall performance and responsiveness.

  2. Scalability: Job queues can distribute tasks across multiple workers or servers, making it easier to scale the application horizontally as the workload increases.

  3. Reliability: Job queues can implement features like retry mechanisms for failed jobs, ensuring that all tasks are eventually processed even during temporary failures.

  4. Flexibility: Developers can schedule jobs to run at specific times, manage the priority of tasks, and adjust the number of workers based on the load, offering greater control over how and when jobs are processed.
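To make benefit 4 concrete, here is a minimal sketch of a priority-aware queue: jobs carry a priority, and higher-priority jobs are processed first. Libraries like Agenda or BullMQ provide this (plus delays and scheduling) out of the box; the names below are illustrative.

```javascript
// Toy illustration of job prioritization: the queue is kept sorted so that
// higher-priority jobs are dequeued first.
const jobs = [];

function enqueue(name, { priority = 0 } = {}) {
  jobs.push({ name, priority });
  jobs.sort((a, b) => b.priority - a.priority); // highest priority first
}

enqueue('generate-report', { priority: 1 });
enqueue('send-invoice', { priority: 10 }); // urgent: jumps the line
enqueue('cleanup-temp-files');             // default priority 0

const order = jobs.map(j => j.name);
// order: ['send-invoice', 'generate-report', 'cleanup-temp-files']
```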

Worker

A worker is a dedicated process or thread whose primary purpose is to monitor a job queue and execute the tasks or jobs it finds.

This concept is central to asynchronous processing architectures, where specific tasks are offloaded from the main application flow to be processed in the background, improving efficiency and user experience. Workers are the backbone of this system, ensuring that tasks are executed promptly and in an orderly manner.

Key Characteristics of a Worker

Autonomous Operation: Workers run independently of the main application process, often on separate threads or different servers. This separation allows the main application to remain responsive to user requests while workers handle resource-intensive or time-consuming tasks.

Continuous Polling: Workers continuously poll or listen to the job queue for new tasks. Once a task is identified, a worker picks it up, marks it as being processed, and starts the execution of the associated task.

Task Execution: Workers are responsible for executing tasks they pick from the queue. These tasks can range from sending emails, processing files, and generating reports, to any other asynchronous operation that can be performed independently of the main application workflow.

Error Handling and Retries: Workers often have built-in mechanisms for handling errors or failures. If a task fails, a worker can retry the task according to specified rules or policies, ensuring that temporary issues do not lead to task failure.

Scalability: Multiple workers can operate concurrently, allowing the system to scale by adding more workers based on the volume of tasks in the queue. This scalability ensures that the system can handle increasing workloads efficiently.

How Workers Fit into System Architecture

In a typical job queue architecture, workers play a crucial role in balancing the load and ensuring the system's smooth operation.

Here’s how they fit into the overall architecture:

  1. Task Offloading: The main application offloads tasks to a job queue instead of executing them synchronously, ensuring the application remains responsive.

  2. Monitoring: Workers monitor this queue constantly, waiting for new tasks to be added.

  3. Processing: When a worker finds a new task in the queue, it retrieves the task, processes it, and upon completion, removes it from the queue or marks it as completed.

  4. Feedback Loop: After processing a task, workers can update the system or application about the task's status, facilitating a feedback loop where the outcome of background processes can influence the application's state or user experience.

Workers are designed to be robust and resilient, capable of gracefully handling system failures, network issues, and other unforeseen errors.

This resilience and the ability to scale by adding more workers make the worker and job queue pattern a powerful tool for building scalable, efficient, and responsive applications.
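The retry behavior described above can be sketched as follows. This is a simplified, single-process illustration under assumed policies (an in-memory queue, a fixed max-attempts limit, and a simulated transient failure); production workers add backoff delays and persistence.

```javascript
// Sketch of a worker loop with retries: a job that fails transiently is
// re-enqueued until it succeeds or exhausts its attempts.
const queue = [{ name: 'flaky-task', attempts: 0, maxAttempts: 3 }];
const log = [];

let failuresRemaining = 2; // simulate two transient failures, then success
function runTask(job) {
  if (failuresRemaining > 0) {
    failuresRemaining--;
    throw new Error('temporary failure');
  }
  return 'done';
}

function workerTick() {
  const job = queue.shift();
  if (!job) return;
  try {
    job.attempts++;
    const result = runTask(job);
    log.push(`${job.name}: ${result} after ${job.attempts} attempt(s)`);
  } catch (err) {
    if (job.attempts < job.maxAttempts) {
      queue.push(job); // re-enqueue for a retry
    } else {
      log.push(`${job.name}: failed permanently`);
    }
  }
}

// Drain the queue; the job succeeds on its third attempt.
while (queue.length > 0) workerTick();
```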

Implementation in Node.js using MongoDB and Agenda.js

Step 1: Set Up MongoDB

Before using Agenda.js, you need a MongoDB instance, since Agenda stores its job data in MongoDB. If you don't have MongoDB installed and running yet, set it up first. You can install MongoDB locally or use a cloud-based solution like MongoDB Atlas.

  1. Local MongoDB Installation: Follow the MongoDB installation guide for your operating system on the official MongoDB documentation.

  2. Cloud-based MongoDB: Sign up for MongoDB Atlas and create a cluster. Atlas offers a free tier for development purposes.

Once your MongoDB instance is ready, note your connection string. You'll need it to connect Agenda.js to your database.

Step 2: Initialize Your Node.js Project

If you haven't already, create a new Node.js project and initialize it:

mkdir my-agenda-app
cd my-agenda-app
npm init -y

Step 3: Install Agenda.js

Install Agenda.js and the MongoDB driver by running:

npm install agenda mongodb

Step 4: Set Up Agenda.js

Now, you'll set up Agenda.js in your Node.js application. Create a file named agendaSetup.js and initialize Agenda with your MongoDB connection:

const Agenda = require('agenda');

const connectionOpts = {
  db: { address: 'mongodb://localhost:27017/agendaDb', collection: 'jobs' },
  processEvery: '30 seconds'
};

const agenda = new Agenda(connectionOpts);

module.exports = agenda;

If you're using a different database or host, replace 'mongodb://localhost:27017/agendaDb' with your MongoDB connection string.

Step 5: Define Jobs

With Agenda, you define jobs by specifying a name and a function that gets called when the job is run. In the same or a different file, define a job like so:

const agenda = require('./agendaSetup');

agenda.define('say hello', async job => {
  console.log('Hello, World!');
});

Step 6: Schedule Jobs

To schedule jobs, you need to start Agenda and then schedule your defined jobs according to your needs. You can do this in an app.js file or at the end of your agendaSetup.js file:

const agenda = require('./agendaSetup');
// Make sure the file where 'say hello' is defined has been loaded before scheduling.

(async function() { // IIFE to use async/await
  await agenda.start();

  await agenda.every('1 hour', 'say hello');

  console.log('Job scheduled to say hello every hour.');
})();

Step 7: Running Your Application

Run your application using Node.js:

node app.js

Advanced use case - sending a welcome email after user sign-up

In your sign-up endpoint, just call Agenda and schedule a job:

app.post('/signup', async (req, res) => {
  const { email } = req.body;
  
  // Here you would add logic to save the user to your database

  // Schedule the 'send welcome email' job
  await agenda.schedule('in 2 minutes', 'send welcome email', { email });

  res.status(200).send('User signed up successfully, welcome email scheduled.');
});

And have the job defined:

// Define the 'send welcome email' job
agenda.define('send welcome email', async job => {
  const { email } = job.attrs.data;
  console.log(`Sending welcome email to ${email}`);
  // Here you would integrate with your email service
});

The CQRS Pattern and Integration with Job Queues

Command Query Responsibility Segregation (CQRS) is a software architectural pattern that separates the operations of reading data (queries) from the operations of updating data (commands), allowing them to scale independently and optimize performance, complexity, and security for each operation type.

Integrating job queues with the CQRS pattern can enhance its effectiveness, particularly on the command side of the architecture. This integration brings several benefits, improving the system's scalability, reliability, and responsiveness.

Understanding CQRS

CQRS is based on the principle that the models used to update information do not have to be the same as those used to read information. This separation allows for system design flexibility and can improve performance and scalability. The pattern fits well with event-driven architectures and domain-driven design (DDD), where it can provide clear boundaries and responsibilities within the system.

Benefits of Integrating Job Queues with CQRS

  1. Improved Scalability: By using job queues to handle commands, you can offload the execution of these commands to background workers. This allows the system to handle a high volume of write requests more efficiently by spreading the load across multiple workers and resources, enhancing the scalability of the command model.

  2. Enhanced Performance: Separating commands and queries allows each to be optimized for specific roles. Job queues can further optimize command execution by ensuring that write operations do not block read operations, thus improving the application's overall performance.

  3. Increased Reliability and Fault Tolerance: Job queues can automatically retry failed commands, improving the system's reliability. This is particularly important for operations that must not fail, such as financial transactions or critical data updates. Using job queues ensures that commands can be retried or postponed until they can be completed.

  4. Asynchronous Processing: Integrating job queues allows commands to be processed asynchronously, significantly improving the user experience by making the UI more responsive. Users can receive immediate feedback for their actions, even if the underlying command is processed in the background.

  5. Event Sourcing Compatibility: CQRS often complements Event Sourcing, where changes to the application state are stored as a sequence of events. Job queues can efficiently handle generating and processing these events, facilitating a robust event-driven architecture.

Implementation Considerations

  • Command Handling: In a CQRS-based system integrated with job queues, commands are dispatched to the job queue instead of being executed directly. This decouples the command's issuance from its execution, allowing for more flexible and scalable processing.

  • Consistency: While job queues and CQRS can improve performance and scalability, they also introduce eventual consistency into the system. This means that, for a short window after a command is issued, reads might not yet reflect its results. Designing your system to handle or mitigate the effects of eventual consistency is crucial.

  • Error Handling: Robust error handling and retry mechanisms should be in place to manage failed commands during execution. This ensures that the system can recover gracefully from errors without losing data or corrupting the application state.
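The considerations above can be sketched in plain JavaScript: commands are dispatched to a queue rather than applied directly, the query side reads from a separate read model that lags behind, and failed commands are re-enqueued. This is an illustrative in-memory model with hypothetical names, not a production implementation.

```javascript
// Command side: dispatching enqueues the command instead of writing directly.
const commandQueue = [];
const readModel = {}; // the query side reads only from here

function dispatch(command) {
  commandQueue.push({ ...command, attempts: 0 });
}

// Worker: applies queued commands to the read model, retrying on failure.
function processCommands(maxAttempts = 3) {
  while (commandQueue.length > 0) {
    const cmd = commandQueue.shift();
    try {
      cmd.attempts++;
      if (cmd.type === 'createUser') {
        readModel[cmd.userId] = { userId: cmd.userId, name: cmd.name };
      }
    } catch (err) {
      if (cmd.attempts < maxAttempts) commandQueue.push(cmd); // retry policy
    }
  }
}

// Query side: a plain read, never a write.
const getUser = userId => readModel[userId];

dispatch({ type: 'createUser', userId: '1', name: 'Ada' });
const before = getUser('1'); // undefined: eventual consistency in action
processCommands();
const after = getUser('1');  // the command's effect is now visible
```

The gap between `before` and `after` is exactly the eventual-consistency window the Consistency bullet warns about; clients typically bridge it by polling a status endpoint, as the scraping example later in this article does.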

CQRS Example

To demonstrate a minimal reproducible example of a CQRS architecture using Express and Agenda.js, let's create a simple application. This app will have a command to "create a user" and a query to "get user details". The "create a user" command will be processed asynchronously using Agenda.js.

Setup Your Project

Initialize a new Node.js project (if you haven't already):

mkdir cqrs-agenda-example
cd cqrs-agenda-example
npm init -y

Install necessary packages:

npm install express agenda mongodb body-parser

Implementing the Command Side with Agenda.js

Set up Express and Agenda.js (app.js):

const express = require('express');
const bodyParser = require('body-parser');
const { MongoClient } = require('mongodb');
const Agenda = require('agenda');

const app = express();
const port = 3000;

app.use(bodyParser.json());

const mongoConnectionString = 'mongodb://127.0.0.1/agenda';

// Initialize MongoDB connection and Agenda
const client = new MongoClient(mongoConnectionString);
const agenda = new Agenda({ db: { address: mongoConnectionString } });

// Placeholder for users' data storage
const users = {};

// Define a job for creating a user in Agenda
// Define a job for creating a user in Agenda
agenda.define('create user', async (job) => {
  const { userId, userName } = job.attrs.data;
  // Simulate user creation delay
  await new Promise(resolve => setTimeout(resolve, 1000));
  users[userId] = { userId, userName };
  console.log(`User created: ${userName}`);
});

(async function() { // Self-invoking async function to ensure proper startup
  await client.connect();
  await agenda.start();
  console.log('Agenda started');
})();

// Command API to create a user
app.post('/users', async (req, res) => {
  const { userId, userName } = req.body;
  await agenda.schedule('in 2 seconds', 'create user', { userId, userName });
  res.send({ message: `User creation scheduled for ${userName}` });
});

// Query API to get a user
app.get('/users/:userId', (req, res) => {
  const { userId } = req.params;
  const user = users[userId];
  if (user) {
    res.send(user);
  } else {
    res.status(404).send({ message: 'User not found' });
  }
});

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`);
});

Explanation

  • MongoDB and Agenda Setup: This example connects to MongoDB, initializes Agenda with the connection, and defines a job for creating a user. The users object acts as a simple in-memory store.

  • Command Endpoint: The POST /users endpoint receives a userId and userName, schedules a "create user" job with Agenda, and responds immediately, acknowledging the scheduling.

  • Query Endpoint: The GET /users/:userId endpoint looks up and returns the user's details from the in-memory store. If the user doesn't exist, it returns a 404 error.

  • Asynchronous Job Processing: The "create user" job simulates a delay, mimicking a time-consuming task like sending a welcome email or processing additional data. Once the job runs, it adds the user to the in-memory store.

Running the Example

Make sure MongoDB is running locally.

Start your application with node app.js.

Use a tool like Postman or curl to test the command and query endpoints:

To create a user: POST http://localhost:3000/users with JSON body {"userId": "1", "userName": "John Doe"}.

To get a user: GET http://localhost:3000/users/1.

This example illustrates a basic CQRS pattern with asynchronous command processing using Express and Agenda.js.

It demonstrates how commands can be handled separately from queries, allowing for more scalable and responsive applications.

Advanced example - CQRS for web scraping

For this example, we'll design a simple CQRS-based application that schedules web scraping tasks using Playwright, tracks the status of these jobs, and retrieves their results.

This will involve creating a command to schedule a scraping job, and queries to check job status and get results. We'll use Express.js for the web server, Agenda.js for job queueing, and Playwright for web scraping.

Setup

Initialize a new Node.js project:


mkdir cqrs-scraping
cd cqrs-scraping
npm init -y

Install necessary packages:

npm install express agenda mongodb body-parser playwright

Implementation

Set up Express and Agenda.js (server.js):


const express = require('express');
const bodyParser = require('body-parser');
const { MongoClient } = require('mongodb');
const Agenda = require('agenda');
const { chromium } = require('playwright');

const app = express();
app.use(bodyParser.json());

const mongoConnectionString = 'mongodb://127.0.0.1/agenda';
const agenda = new Agenda({ db: { address: mongoConnectionString } });

const jobsResult = {}; // Store job results keyed by job ID

// Define a job for web scraping
agenda.define('web scraping', async (job) => {
  const { url } = job.attrs.data;
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const content = await page.content(); // Simplified scraping logic
  await browser.close();

  // Store result with job ID for retrieval
  jobsResult[job.attrs._id] = content;
  console.log(`Scraping completed for job ${job.attrs._id}`);
});

(async function() {
  await agenda.start();
  console.log('Agenda started');
})();

// Endpoint to schedule web scraping
app.post('/scrape', async (req, res) => {
  const { url } = req.body;
  const job = await agenda.now('web scraping', { url });
  res.send({ message: 'Scraping job scheduled', jobId: job.attrs._id });
});

// Endpoint to check job status
app.get('/status/:jobId', (req, res) => {
  const { jobId } = req.params;
  if (jobsResult[jobId]) {
    res.send({ status: 'Completed' });
  } else {
    res.send({ status: 'In Progress' });
  }
});

// Endpoint to get job result
app.get('/result/:jobId', (req, res) => {
  const { jobId } = req.params;
  const result = jobsResult[jobId];
  if (result) {
    res.send({ result });
  } else {
    res.status(404).send({ message: 'Result not found' });
  }
});

const port = 3000;
app.listen(port, () => console.log(`Server running on port ${port}`));

How It Works

  • Web Scraping Job: The agenda.define function defines a job for web scraping. It uses Playwright to navigate to the specified URL and store the page content in jobsResult, keyed by the job's unique ID.
  • Scheduling Endpoint (/scrape): This endpoint schedules a new web scraping job for the given URL and returns the job ID. Clients can use this ID to check the job's status and retrieve results.
  • Status Checking Endpoint (/status/:jobId): Clients can use this endpoint to check if the scraping job has been completed.
  • Result Retrieval Endpoint (/result/:jobId): Once a job is completed, clients can retrieve the scraped content through this endpoint using the job ID.

Running the Example

Start MongoDB locally if it's not running already.

Run the server script:

node server.js

Usage

Schedule a web scraping job by sending a POST request to /scrape with a JSON body containing the URL to scrape.

Check the job status by sending a GET request to /status/:jobId using the job ID returned from the previous step.

Retrieve the job result by sending a GET request to /result/:jobId once the job is completed.

Conclusion

Job queues and the Command Query Responsibility Segregation (CQRS) pattern represent powerful architectural choices that can significantly enhance the scalability, performance, and maintainability of software systems, especially in complex, distributed environments like microservices.

When implemented thoughtfully, these patterns facilitate a high degree of decoupling between components, allowing for more granular scaling, improved fault tolerance, and greater flexibility in responding to changing requirements or workloads.
