
Implementing a Clean Architecture with Nest.JS


This article is for enthusiasts who strive to write clean, scalable, and, more importantly, refactorable code. It gives an idea of how Nest.JS can help us write clean code and what underlying architecture it uses.

Implementing a clean architecture with Nest.JS will require us to first comprehend what this framework is and how it works.

What is Nest.JS?

Nest, or Nest.JS, is a framework for building efficient, scalable, server-side Node.js applications with TypeScript. It uses Express or Fastify under the hood and adds a layer of abstraction that lets developers plug in a wide range of third-party modules.

Let’s dig deeper into what this clean architecture is all about.

Well, you might have used, or at least heard of, the MVC architecture. MVC stands for Model, View, Controller, and the idea behind it is to separate our project structure into three different sections.

1. Model: Contains the object definitions that map to relations/documents in the database.

2. Controller: Handles requests and is responsible for implementing the business logic and all the data manipulation.

3. View: Contains the files concerned with displaying the data, either plain HTML files or templating-engine files.

To create a model, we need some kind of ORM/ODM tool, module, or library to build it with. For instance, suppose you use a module such as Sequelize directly, use it to implement login in your controller, and make your core business logic dependent on Sequelize. Years down the line, a better tool appears on the market and you want to switch to it, but as soon as you replace Sequelize you will have to change lots of lines of code to keep things from breaking. You will then have to re-test all the features to check that nothing broke, which wastes valuable time and resources. To overcome this challenge, we can use the last principle of SOLID, the Dependency Inversion Principle, together with a technique called dependency injection.

Still confused? Let me explain in detail.

In simple words, the Dependency Inversion Principle says: build your core business logic first, and then build the dependencies around it. In other words, free your core logic and business rules from every kind of dependency, and shape the outer layers so that they depend on your core logic rather than your logic depending on them. That is what clean architecture is: it pulls the dependencies out of your core business logic and builds the system around it so that the outer layers depend on the core, not the core on the outer layers.
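The idea above can be sketched in plain TypeScript. This is a minimal, hypothetical example (the names `UserRepository`, `LoginService`, and `InMemoryUserRepository` are all made up for illustration): the core logic depends only on an interface, so swapping Sequelize for another ORM means writing a new implementation of that interface, not touching the core.

```typescript
// Hypothetical sketch: the core logic depends on this abstraction,
// not on any concrete ORM such as Sequelize.
interface UserRepository {
  findByEmail(email: string): { email: string; passwordHash: string } | undefined;
}

// Core business logic: knows nothing about which ORM backs the repository.
class LoginService {
  constructor(private readonly users: UserRepository) {}

  login(email: string, passwordHash: string): boolean {
    const user = this.users.findByEmail(email);
    return user !== undefined && user.passwordHash === passwordHash;
  }
}

// One concrete implementation. Replacing the ORM means writing a new
// class like this one; LoginService itself never changes.
class InMemoryUserRepository implements UserRepository {
  private rows = [{ email: "jane@example.com", passwordHash: "x1" }];
  findByEmail(email: string) {
    return this.rows.find((r) => r.email === email);
  }
}

const service = new LoginService(new InMemoryUserRepository());
console.log(service.login("jane@example.com", "x1")); // true
```

The dependency now points inwards: the repository implementation depends on the interface that the core defines, not the other way around.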

Let’s try to understand this with the below diagram.

Source: Clean Architecture Cone 

You can see that we have divided our architecture into 4 layers:

1. Entities: At its core, entities are the models that define your enterprise rules and describe what the application is about. This layer will hardly change over time and is usually abstract and not accessed directly. For example, every application has a ‘user’. The fields the user stores, their types, and their relations with other entities make up an entity.

2. Use cases: This layer tells us how to implement the enterprise rules. Let’s take the example of the user again. We now know what data to operate upon; the use case tells us how to operate upon it: the user has a password that needs to be encrypted, a user needs to be created, the password can be changed at any given point in time, and so on.

3. Controllers/Gateways: These are channels that help us implement the use cases, using external tools and libraries via dependency injection.

4. External Tools: All the tools and libraries we use to build our logic come under this layer, e.g. the ORM, the emailer, encryption, etc.
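The four layers can be sketched as follows. This is a toy illustration under assumed names (`CreateUser`, `FakeRepo`, `FakeEncryptor` are all hypothetical); in a real application the layer-4 classes would wrap an actual ORM and an encryption library such as bcrypt.

```typescript
// Layer 1 (Entity): pure enterprise rules, no dependencies.
interface User { email: string; passwordHash: string; }

// Abstractions that the use case depends on; layer 3 channels
// external tools through these interfaces.
interface Encryptor { hash(plain: string): string; }
interface UserRepo { save(user: User): User; }

// Layer 2 (Use case): how the enterprise rules are applied.
class CreateUser {
  constructor(private readonly repo: UserRepo, private readonly crypto: Encryptor) {}
  execute(email: string, password: string): User {
    return this.repo.save({ email, passwordHash: this.crypto.hash(password) });
  }
}

// Layer 4 (External tools): trivial stand-ins for a real ORM and
// encryption library, used here only for illustration.
class FakeRepo implements UserRepo {
  saved: User[] = [];
  save(user: User): User { this.saved.push(user); return user; }
}
class FakeEncryptor implements Encryptor {
  hash(plain: string): string { return `hashed:${plain}`; }
}

const created = new CreateUser(new FakeRepo(), new FakeEncryptor()).execute("a@b.io", "s3cret");
console.log(created.passwordHash); // -> hashed:s3cret
```

Note that the use case never names a concrete tool; replacing the ORM or the encryption library only means supplying different layer-4 classes.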

The tools we use depend upon how we channel them into the use cases, and the use cases in turn depend upon the entities, which are the core of our business. This way we have inverted the dependency so that it points inwards instead of outwards. That is what the Dependency Inversion Principle of SOLID implies.

Okay, by now you’ve got the gist of Nest.JS and understood how clean architecture works. Now the question arises: how are these two related?

Let’s look at the three building blocks of Nest.JS and what each of them does.

  1. Modules: Nest.JS is structured so that each feature can be treated as a module. For example, everything linked to the user (models, controllers, DTOs, interfaces, etc.) can be separated out as a module. A module has a controller and a set of providers, which are injectable pieces of functionality such as services, ORMs, emailers, etc.
  2. Controllers: Controllers in Nest.JS are the interfaces between the network and your logic. They handle incoming requests and return responses to the client side of the application (for example, a call to the API).
  3. Providers (Services): Providers are injectable services/functionalities that we can inject into controllers and other providers to add flexibility and extra functionality. They abstract away any form of complexity and logic.
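In real Nest.JS code these pieces are declared with decorators (@Module, @Controller, @Injectable) and wired together by Nest’s injector. The plain-TypeScript sketch below (class names hypothetical) shows the shape of that wiring done by hand, so the relationship between the three blocks is visible without the framework:

```typescript
// Provider: an injectable service that hides the logic's complexity.
// In Nest this class would carry the @Injectable() decorator.
class UsersService {
  private users = ["alice", "bob"];
  findAll(): string[] { return [...this.users]; }
}

// Controller: the interface between the network and the logic.
// In Nest this would be decorated with @Controller("users"), and
// the service would arrive via constructor injection.
class UsersController {
  constructor(private readonly service: UsersService) {}
  getUsers(): string[] { return this.service.findAll(); }
}

// Module: in Nest, an @Module({ controllers, providers }) declaration
// tells the injector how to build this graph; here we wire it by hand.
const controller = new UsersController(new UsersService());
console.log(controller.getUsers()); // -> [ 'alice', 'bob' ]
```

The controller never constructs its own dependencies; it only declares what it needs, which is exactly what makes providers swappable.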

To summarize,

  • We have controllers that act as interfaces (3rd layer of clean architecture)
  • We have providers which can be injected to provide functionality (4th layer of clean architecture: DB, Devices, etc.)
  • We can also create services and repositories to define our use case (2nd Layer)
  • We can define our entities using DB providers (1st Layer)

Conclusion:

Nest.JS is a powerful Node.js framework and one of the best-known TypeScript frameworks available today. Now that you’ve got the lowdown on this framework, you must be wondering whether we can use it to build a project structure with a clean architecture. Well, the answer is yes! Absolutely. How? I’ll explain in the next article in this series.

Till then, Stay tuned!

About the Author:

Junaid Bhat is currently working as a Tech Lead at Mantra Labs. He is a tech enthusiast striving to become a better engineer every day by following industry standards, with an inclination towards a more structured approach to problem-solving.




Silent Drains: How Poor Data Observability Costs Enterprises Millions

Let’s rewind the clock for a moment. Thousands of years ago, humans had a simple way of keeping tabs on things—literally. They carved marks into clay tablets to track grain harvests or seal trade agreements. These ancient scribes kickstarted what would later become one of humanity’s greatest pursuits: organizing and understanding data. The journey of data began to take shape.

Now, here’s the kicker: we’ve gone from storing data on clay to storing it in the cloud, but one age-old problem still nags at us. How healthy is that data? Can we trust it?

Think about it. Records from centuries ago survived and still make sense today because someone cared enough to store them and keep them in good shape. That’s essentially what data observability does for our modern world. It’s like having a health monitor for your data systems, ensuring they’re reliable, accurate, and ready for action. Data observability has already had more than a few wins in the real world; first, here’s how it works.

How Data Observability Works

Data observability involves monitoring, analyzing, and ensuring the health of your data systems in real-time. Here’s how it functions:

  1. Data Monitoring: Continuously tracks metrics like data volume, freshness, and schema consistency to spot anomalies early.
  2. Automated Alerts: Notifies teams of irregularities, such as unexpected data spikes or pipeline failures, before they escalate.
  3. Root Cause Analysis: Pinpoints the source of issues using lineage tracking, making problem-solving faster and more efficient.
  4. Proactive Maintenance: Predicts potential failures by analyzing historical trends, helping enterprises stay ahead of disruptions.
  5. Collaboration Tools: Bridges gaps between data engineering, analytics, and operations teams with a shared understanding of system health.
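The monitoring step above can be sketched with two basic checks. This is a minimal, hypothetical illustration (function names and thresholds are assumptions, not any vendor’s API): a volume check that flags a sharp drop against a recent baseline, and a freshness check that flags a table that has not been updated recently enough.

```typescript
// Minimal observability checks: data volume and data freshness.
interface TableStats { rowCount: number; lastUpdatedMs: number; }

// Flags an anomaly when today's count falls below a fraction
// (dropRatio) of the recent average, e.g. a 7-day baseline.
function volumeAnomaly(history: number[], current: number, dropRatio = 0.5): boolean {
  if (history.length === 0) return false; // no baseline, nothing to compare
  const avg = history.reduce((a, b) => a + b, 0) / history.length;
  return current < avg * dropRatio;
}

// Flags staleness when the table has not been updated within maxAgeMs.
function isStale(stats: TableStats, nowMs: number, maxAgeMs: number): boolean {
  return nowMs - stats.lastUpdatedMs > maxAgeMs;
}

// Example: volume fell far below the 7-day baseline, so it is flagged.
console.log(volumeAnomaly([1000, 980, 1020, 990, 1010, 1005, 995], 300)); // true
```

Real platforms layer statistical models, lineage tracking, and alert routing on top, but the core signal is the same: compare current behavior against an expected baseline and raise an alert on deviation.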

Real-World Wins with Data Observability

1. Preventing Retail Chaos

A global retailer was struggling with the complexities of scaling data operations across diverse regions. Faced with a vast and complex system, manual oversight became unsustainable. Rakuten provided a data observability solution: by leveraging real-time monitoring and integrating ITSM solutions with a unified data health dashboard, the retailer was able to prevent costly downtime and ensure seamless data operations. The result? Enhanced data lineage tracking and reduced operational overhead.

2. Fixing Silent Pipeline Failures

Monte Carlo’s data observability solutions have saved organizations from silent data pipeline failures. For example, a Salesforce password expiry caused updates to stop in the salesforce_accounts_created table. Monte Carlo flagged the issue, allowing the team to resolve it before it caught executive attention. Similarly, an authorization issue with Google Ads integrations was detected and fixed, avoiding significant data loss.

3. Forbes Optimizes Performance

To ensure its website performs optimally, Forbes turned to Datadog for data observability. Previously, siloed data and limited access slowed down troubleshooting. With Datadog, Forbes unified observability across teams, reducing homepage load times by 37% and maintaining operational efficiency during high-traffic events like Black Friday.

4. Lenovo Maintains Uptime

Lenovo leveraged observability, provided by Splunk, to monitor its infrastructure during critical periods. Despite a 300% increase in web traffic on Black Friday, Lenovo maintained 100% uptime and reduced mean time to resolution (MTTR) by 83%, ensuring a flawless user experience.

Why Every Enterprise Needs Data Observability Today

1. Prevent Costly Downtime

Data downtime can cost enterprises up to $9,000 per minute. Imagine a retail giant facing data pipeline failures during peak sales—inventory mismatches lead to missed opportunities and unhappy customers. Data observability proactively detects anomalies, like sudden drops in data volume, preventing disruptions before they escalate.

2. Boost Confidence in Data

Poor data quality costs the U.S. economy $3.1 trillion annually. For enterprises, accurate, observable data ensures reliable decision-making and better AI outcomes. For instance, an insurance company can avoid processing errors by identifying schema changes or inconsistencies in real-time.

3. Enhance Collaboration

When data pipelines fail, teams often waste hours diagnosing issues. Data observability simplifies this by providing clear insights into pipeline health, enabling seamless collaboration across data engineering, data analytics, and data operations teams. This reduces finger-pointing and accelerates problem-solving.

4. Stay Agile Amid Complexity

As enterprises scale, data sources multiply, making data pipeline monitoring and management more complex. Data observability acts as a compass, pinpointing where and why issues occur and allowing organizations to adapt quickly without compromising operational efficiency.

The Bigger Picture:

Are you relying on broken roads in your data metropolis, or are you ready to embrace a system that keeps your operations smooth and your outcomes predictable?

Just as humanity evolved from carving records on clay tablets to storing data in the cloud, the way we manage and interpret data must evolve too. Data observability is not just a tool for keeping your data clean; it’s a strategic necessity to future-proof your business in a world where insights are the cornerstone of success. 

At Mantra Labs, we understand this deeply. With our partnership with Rakuten, we empower enterprises with advanced data observability solutions tailored to their unique challenges. Let us help you turn your data into an invaluable asset that ensures smooth operations and drives impactful outcomes.
