
Implementing a Clean Architecture with Nest.JS (Part 2)

7-minute read

It’s been a while since my last article, Implementing Clean Architecture with NestJS, was rolled out, where we went through some high-level concepts of clean architecture and the underlying structure of Nest.js. This article is the next part of the same series, in which I will break down the different layers of clean architecture and share some useful Nest.js tools and concepts that we can use in our implementation. That said, we will not be doing any “real” coding in this part.

So, without any further delay, let’s dive in…

Clean architecture aligns with the objective that the system should be independent of any external agency, be it a framework, a database, or any third-party tool the system uses. It also focuses on testability, which is a major consideration in the modern era of development.

If we dissect the classic clean-architecture diagram, what we find is:

  • Layers: Each circle represents an independent layer in the system.
  • Dependency: The direction is from outside in. Basically, the outer layers depend on the inner layers, while the Entity layer depends on nothing.
  • Entities: Comprises all the entities that construct your application, e.g., a User entity, a Product entity, etc.
  • Use cases: Every entity has specific use cases that comprise your core logic, e.g., generating a password for a user, adding a product, etc.
  • Controllers/Presenters: These are basically the gateways to your system. You can think of them as entry/exit points to the use cases.
  • Frameworks: Contains all the concrete implementations, e.g., the web framework, database, loggers, and exception handling.

What I find really interesting and intriguing about this type of system is that it focuses on business logic instead of the frameworks and tools used to run the logic. 

That means it hardly matters which database you choose or which framework you use. They can change and evolve over time, but what will remain constant and intact is your business logic and the entities of the application.

Let’s try to slowly stack up the layers and try to understand through an example…

Basic application with Users and Products

In our example, we will build a simple application responsible for CRUD operations on the User and Product entities.

I will take Nest.JS as the reference wherever I explain the implementation of any functionality, and will touch on concepts/tools like dependency injection, repositories, class-validator, and DTOs.

Entities and Use Cases: Core of our business

At its core, our business logic is defined by two layers of clean architecture: 

  1. Entity layer: Comprises all the business entities that define our application.
  2. Use-case layer: Comprises all the business scenarios that our entities support.

  1. Entities: The building blocks of our application

Entities form the only independent layer in our system. They comprise information that is not meant to change over time and that shapes our application’s functionality. This also means that this layer will not be affected by any change in the external environment, e.g., services, controllers, or routing.

There will be only two entities in our application:

a) Product: ID, Name, Category, Cost & Quantity

b) User: ID, Name & Email
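As a minimal sketch (framework-free, so the layer stays independent of everything else), these two entities could be plain TypeScript classes:

```typescript
// Entity layer: plain classes with no framework or database imports.
// Fields follow the attributes listed above.

class Product {
  constructor(
    public readonly id: string,
    public name: string,
    public category: string,
    public cost: number,
    public quantity: number,
  ) {}
}

class User {
  constructor(
    public readonly id: string,
    public name: string,
    public email: string,
  ) {}
}
```

Because nothing here imports a framework, a change in the database or web layer can never force a change in these classes.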

  2. Use-Cases: The fundamental part of our business logic

Use cases depend only on entities and orchestrate all the scenarios that concern their counterpart entity. For our two entities, we will have the following use cases:

Product

  • Add a product
  • Get a product by Id
  • Get All products
  • Get Products by cost, category, or quantity
  • Update a product
  • Delete a product

User

  • Add user
  • Get User by Id
  • Get All Users

Now, we need to declare these entities and use cases in our codebase, and we need some form of database service. The database service can be an ORM/SDK that connects to our database, e.g., Postgres or MongoDB.

  • One way is to use this SDK/ORM directly in our use cases, thereby making our use cases dependent on the SDK/ORM we use. Any change in the SDK/ORM will then directly impact our logic, defeating the main motive of clean architecture.
  • The other way, and I would say the best way, is to take advantage of abstraction and dependency injection, with the help of something called a repository (an abstract service/layer) that sits between our use cases and the DB implementation. We create an abstract service that is injected into our use cases, in which we define methods that are independent of the type of database; the implementation of those methods depends on the database used.

This gives us the flexibility to change the DB at will. All we need to do is provide the DB implementation of the repository methods, and we do not touch our logic at all.

Repository: As mentioned above, the repository will contain the implementation of the abstract functions used in our use cases. In our example, those functions can be insertItem, updateItemById, getItemById, getItems, and filterItems. This can be a generic repository. We can also have use-case-specific repositories, which will contain more specific functions like addUser, getUserById, getUsers, addProduct, getProducts, etc.

Our use cases will always call these abstract methods without actually knowing about the DB used.
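To make the idea concrete, here is a minimal, framework-free sketch of this wiring (the names ProductRepository, AddProductUseCase, and InMemoryProductRepository are illustrative, not from the article): the use case receives the abstract repository through its constructor, so swapping the database only means swapping the repository implementation.

```typescript
interface Product {
  id: string;
  name: string;
  cost: number;
}

// Abstract repository: the use case only knows these signatures.
abstract class ProductRepository {
  abstract insertItem(item: Product): Promise<void>;
  abstract getItemById(id: string): Promise<Product | undefined>;
  abstract getItems(): Promise<Product[]>;
}

// Use case: depends on the abstraction, never on a concrete DB.
class AddProductUseCase {
  constructor(private readonly repo: ProductRepository) {}

  async execute(product: Product): Promise<void> {
    if (product.cost < 0) throw new Error("Cost cannot be negative");
    await this.repo.insertItem(product);
  }
}

// One concrete implementation; a Postgres- or Mongo-backed version
// could replace it without touching AddProductUseCase at all.
class InMemoryProductRepository extends ProductRepository {
  private items = new Map<string, Product>();

  async insertItem(item: Product): Promise<void> {
    this.items.set(item.id, item);
  }
  async getItemById(id: string): Promise<Product | undefined> {
    return this.items.get(id);
  }
  async getItems(): Promise<Product[]> {
    return [...this.items.values()];
  }
}
```

In Nest.js, the same wiring is done declaratively: the concrete class is registered as the provider for the abstract class in a module, and the DI container injects it into the use case.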

Controllers and Presenters: Data carriers for our business

Now that the core is set, we have defined our entities and use cases. Something has to act upon our use cases in order for the system to work. Data needs to be passed to the use cases to be stored and updated, and similarly, processed data has to be sent out for end users to be able to use the information. Well, that’s what controllers and presenters are there for.

One can think of them as adapters that bind our use cases to the outer world. In basic terminology, controllers are used to respond to user input, e.g., validation and transformation, while presenters are used to format and prepare data to be sent out.

In clean architecture, the controller’s job is:

  • Receive the user input — some kind of DTO 
  • Sanitization: validate user input
  • Transformation: convert user inputs to the desired type required by entities/use cases
  • Last but not least, call the use case
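Those four steps can be sketched framework-free (the names createUserController and addUserUseCase are hypothetical, not Nest.js APIs):

```typescript
// Hypothetical raw input, e.g. a parsed JSON request body.
interface CreateUserDto {
  name?: unknown;
  email?: unknown;
}

interface User {
  id: string;
  name: string;
  email: string;
}

// Stand-in for the real use case behind the controller.
const addUserUseCase = {
  execute: (name: string, email: string): User => ({
    id: "generated-id", // a real implementation would generate this
    name,
    email,
  }),
};

// Controller: receive -> sanitize/validate -> transform -> call use case.
function createUserController(dto: CreateUserDto): User {
  // Sanitization: validate the user input.
  if (typeof dto.name !== "string" || dto.name.trim() === "") {
    throw new Error("name is required");
  }
  if (typeof dto.email !== "string" || !dto.email.includes("@")) {
    throw new Error("email is invalid");
  }
  // Transformation: convert inputs to the shape the use case expects.
  const name = dto.name.trim();
  const email = dto.email.toLowerCase();
  // Finally, call the use case; no business logic lives here.
  return addUserUseCase.execute(name, email);
}
```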

The controller only handles the formatting of the data; no business logic. The presenter’s job, on the other hand, is to get data from the application repository and then build a formatted response for the client. Its main responsibilities include:

  • Formatting: converting strings and dates.
  • If needed, add presentation data, like flags.
  • Prepare the data to be displayed in the UI.

In NestJS, the functionality of both the controller and the presenter is implemented in controllers; there is no separate presentation component in NestJS. With the help of NestJS, we will validate our user input using validation pipes and transform our DTOs into business objects using transformation pipes.

Under the hood, NestJS uses the class-validator library for all validations. All we need to do is decorate our DTO properties with class-validator decorators and specify our validation rules, and we are good to go.
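For illustration, a DTO for our “Add a product” use case might look like the sketch below, assuming class-validator is installed and decorators are enabled (AddProductDto is my own name; the properties mirror the Product entity above):

```typescript
import { IsInt, IsNotEmpty, IsNumber, IsString, Min } from 'class-validator';

// DTO for the "Add a product" use case. NestJS's built-in
// ValidationPipe runs these class-validator rules on every
// incoming request that carries this DTO.
export class AddProductDto {
  @IsString()
  @IsNotEmpty()
  name: string;

  @IsString()
  category: string;

  @IsNumber()
  @Min(0)
  cost: number;

  @IsInt()
  @Min(0)
  quantity: number;
}
```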

External Interfaces: Our Frameworks

Now our Mission Mangal is set to launch; all we need is the right launchpad. What I mean is that we now have our core ready, and the gateways and channels are ready to take inputs. What we need is to set it all up with the right frameworks, e.g., database, logging, error handling, etc.

For example, we can use TypeORM for our data services and database modelling, and Winston for logging. The web application framework can be either Express or Fastify.

Summary

This article walked through all the layers of clean architecture using a real-world example with the help of Nest.JS. We tried to demonstrate how to build a robust, layer-by-layer structure of services that decouples our core business logic from frameworks.

It gives us the superpower to change our frameworks without worrying about breaking the code. We can easily move from MySQL to PostgreSQL, or from Express to Fastify, without disturbing our business logic. This reduces the cost of transition and makes testing easier.

I believe this article will help those of you who strive to write clean and maintainable code. I will be back with more content soon. Till then, sayonara!

About the Author:

Junaid Bhat is currently working as a Tech Lead at Mantra Labs. He is a tech enthusiast striving to become a better engineer every day by following industry standards, with an inclination towards a more structured approach to problem-solving.


AI Code Assistants: Revolution Unveiled

AI code assistants are revolutionizing software development, with Gartner predicting that 75% of enterprise software engineers will use these tools by 2028, up from less than 10% in early 2023. This rapid adoption reflects the potential of AI to enhance coding efficiency and productivity, but also raises important questions about the maturity, benefits, and challenges of these emerging technologies.

Code Assistance Evolution

The evolution of code assistance has been rapid and transformative, progressing from simple autocomplete features to sophisticated AI-powered tools. GitHub Copilot, launched in 2021, marked a significant milestone by leveraging OpenAI’s Codex to generate entire code snippets. Amazon Q, introduced in 2023, further advanced the field with its deep integration into AWS services and impressive code acceptance rates of up to 50%. GPT (Generative Pre-trained Transformer) models have been instrumental in this evolution, with GPT-3 and its successors enabling more context-aware and nuanced code suggestions.


  • Adoption rates: By 2023, over 40% of developers reported using AI code assistants.
  • Productivity gains: Tools like Amazon Q have demonstrated up to 80% acceleration in coding tasks.
  • Language support: Modern AI assistants support dozens of programming languages, with GitHub Copilot covering over 20 languages and frameworks.
  • Error reduction: AI-powered code assistants have shown potential to reduce bugs by up to 30% in some studies.

These advancements have not only increased coding efficiency but also democratized software development, making it more accessible to novice programmers and non-professionals alike.

Current Adoption and Maturity: Metrics Defining the Landscape

The landscape of AI code assistants is rapidly evolving, with adoption rates and performance metrics showcasing their growing maturity. Here’s a tabular comparison of some popular AI coding tools, including Amazon Q:

Amazon Q stands out with its specialized capabilities for software developers and deep integration with AWS services. It offers a range of features designed to streamline development processes:

  • Highest reported code acceptance rates: Up to 50% for multi-line code suggestions
  • Built-in security: Secure and private by design, with robust data security measures
  • Extensive connectivity: Over 50 built-in, managed, and secure data connectors
  • Task automation: Amazon Q Apps allow users to create generative AI-powered apps for streamlining tasks

The tool’s impact is evident in its adoption and performance metrics. For instance, Amazon Q has helped save over 450,000 hours from manual technical investigations. Its integration with CloudWatch provides valuable insights into developer usage patterns and areas for improvement.

As these AI assistants continue to mature, they are increasingly becoming integral to modern software development workflows. However, it’s important to note that while these tools offer significant benefits, they should be used judiciously, with developers maintaining a critical eye on the generated code and understanding its implications for overall project architecture and security.

AI-Powered Collaborative Coding: Enhancing Team Productivity

AI code assistants are revolutionizing collaborative coding practices, offering real-time suggestions, conflict resolution, and personalized assistance to development teams. These tools integrate seamlessly with popular IDEs and version control systems, facilitating smoother teamwork and code quality improvements.

Key features of AI-enhanced collaborative coding:

  • Real-time code suggestions and auto-completion across team members
  • Automated conflict detection and resolution in merge requests
  • Personalized coding assistance based on individual developer styles
  • AI-driven code reviews and quality checks

Benefits for development teams:

  • Increased productivity: Teams report up to 30-50% faster code completion
  • Improved code consistency: AI ensures adherence to team coding standards
  • Reduced onboarding time: New team members can quickly adapt to project codebases
  • Enhanced knowledge sharing: AI suggestions expose developers to diverse coding patterns

While AI code assistants offer significant advantages, it’s crucial to maintain a balance between AI assistance and human expertise. Teams should establish guidelines for AI tool usage to ensure code quality, security, and maintainability.

Emerging trends in AI-powered collaborative coding:

  • Integration of natural language processing for code explanations and documentation
  • Advanced code refactoring suggestions based on team-wide code patterns
  • AI-assisted pair programming and mob programming sessions
  • Predictive analytics for project timelines and resource allocation

As AI continues to evolve, collaborative coding tools are expected to become more sophisticated, further streamlining team workflows and fostering innovation in software development practices.

Benefits and Risks Analyzed

AI code assistants offer significant benefits but also present notable challenges. Here’s an overview of the advantages driving adoption and the critical downsides:

Core Advantages Driving Adoption:

  1. Enhanced Productivity: AI coding tools can boost developer productivity by 30-50%. Google AI researchers estimate that these tools could save developers up to 30% of their coding time.
  2. Economic Impact: Generative AI, including code assistants, could potentially add $2.6 trillion to $4.4 trillion annually to the global economy across various use cases. In the software engineering sector alone, this technology could deliver substantial value.

Industry          Potential Annual Value
Banking           $200 billion – $340 billion
Retail and CPG    $400 billion – $660 billion

  3. Democratization of Software Development: AI assistants enable individuals with less coding experience to build complex applications, potentially broadening the talent pool and fostering innovation.
  4. Instant Coding Support: AI provides real-time suggestions and generates code snippets, aiding developers in their coding journey.

Critical Downsides and Risks:

  1. Cognitive and Skill-Related Concerns:
    • Over-reliance on AI tools may lead to skill atrophy, especially for junior developers.
    • There’s a risk of developers losing the ability to write or deeply understand code independently.
  2. Technical and Ethical Limitations:
    • Quality of Results: AI-generated code may contain hidden issues, leading to bugs or security vulnerabilities.
    • Security Risks: AI tools might introduce insecure libraries or out-of-date dependencies.
    • Ethical Concerns: AI algorithms lack accountability for errors and may reinforce harmful stereotypes or promote misinformation.
  3. Copyright and Licensing Issues:
    • AI tools heavily rely on open-source code, which may lead to unintentional use of copyrighted material or introduction of insecure libraries.
  4. Limited Contextual Understanding:
    • AI-generated code may not always integrate seamlessly with the broader project context, potentially leading to fragmented code.
  5. Bias in Training Data:
    • AI outputs can reflect biases present in their training data, potentially leading to non-inclusive code practices.

While AI code assistants offer significant productivity gains and economic benefits, they also present challenges that need careful consideration. Developers and organizations must balance the advantages with the potential risks, ensuring responsible use of these powerful tools.

Future of Code Automation

The future of AI code assistants is poised for significant growth and evolution, with technological advancements and changing developer attitudes shaping their trajectory towards potential ubiquity or obsolescence.

Technological Advancements on the Horizon:

  1. Enhanced Contextual Understanding: Future AI assistants are expected to gain deeper comprehension of project structures, coding patterns, and business logic. This will enable more accurate and context-aware code suggestions, reducing the need for extensive human review.
  2. Multi-Modal AI: Integration of natural language processing, computer vision, and code analysis will allow AI assistants to understand and generate code based on diverse inputs, including voice commands, sketches, and high-level descriptions.
  3. Autonomous Code Generation: By 2027, we may see AI agents capable of handling entire segments of a project with minimal oversight, potentially scaffolding entire applications from natural language descriptions.
  4. Self-Improving AI: Machine learning models that continuously learn from developer interactions and feedback will lead to increasingly accurate and personalized code suggestions over time.

Adoption Barriers and Enablers:

Barriers:

  1. Data Privacy Concerns: Organizations remain cautious about sharing proprietary code with cloud-based AI services.
  2. Integration Challenges: Seamless integration with existing development workflows and tools is crucial for widespread adoption.
  3. Skill Erosion Fears: Concerns about over-reliance on AI leading to a decline in fundamental coding skills among developers.

Enablers:

  1. Open-Source Models: The development of powerful open-source AI models may address privacy concerns and increase accessibility.
  2. IDE Integration: Deeper integration with popular integrated development environments will streamline adoption.
  3. Demonstrable ROI: Clear evidence of productivity gains and cost savings will drive enterprise adoption.
Future Capabilities:

  1. AI-Driven Architecture Design: AI assistants may evolve to suggest optimal system architectures based on project requirements and best practices.
  2. Automated Code Refactoring: AI tools will increasingly offer intelligent refactoring suggestions to improve code quality and maintainability.
  3. Predictive Bug Detection: Advanced AI models will predict potential bugs and security vulnerabilities before they manifest in production environments.
  4. Cross-Language Translation: AI assistants will facilitate seamless translation between programming languages, enabling easier migration and interoperability.
  5. AI-Human Pair Programming: More sophisticated AI agents may act as virtual pair programming partners, offering real-time guidance and code reviews.
  6. Ethical AI Coding: Future AI assistants will incorporate ethical considerations, suggesting inclusive and bias-free code practices.

As these trends unfold, the role of human developers is likely to shift towards higher-level problem-solving, creative design, and AI oversight. By 2025, it’s projected that over 70% of professional software developers will regularly collaborate with AI agents in their coding workflows. However, the path to ubiquity will depend on addressing key challenges such as reliability, security, and maintaining a balance between AI assistance and human expertise.

The future outlook for AI code assistants is one of transformative potential, with the technology poised to become an integral part of the software development landscape. As these tools continue to evolve, they will likely reshape team structures, development methodologies, and the very nature of coding itself.

Conclusion: A Tool, Not a Panacea

AI code assistants have irrevocably altered software development, delivering measurable productivity gains but introducing new technical and societal challenges. Current metrics suggest they are transitioning from novel aids to essential utilities—63% of enterprises now mandate their use. However, their ascendancy as the de facto standard hinges on addressing security flaws, mitigating cognitive erosion, and fostering equitable upskilling. For organizations, the optimal path lies in balanced integration: harnessing AI’s speed while preserving human ingenuity. As generative models evolve, developers who master this symbiosis will define the next epoch of software engineering.
