
We often hear it argued that DDoS mitigation based on constant (always-on) traffic filtering is less effective and more expensive than on-demand filtering.

The arguments in these discussions barely change over time: the high cost of constant filtering is weighed against the delay needed to bring a specialist or a piece of equipment into the mitigation process on demand.

Qrator Labs would like to clarify its position by explaining how always-on filtering differs from on-demand filtering and why the former is, in practice, the only option that works.

One of the key reasons is that modern attacks develop very quickly, evolving and becoming more complex in real time. The protected service also changes: the site and the application keep developing, so the "normal" user behavior observed during a previous attack may no longer be relevant.

With manual filtering, the mitigation provider's engineers usually need time to understand what is happening before they can develop the right strategy and a sequence of concrete actions. On top of that, such a specialist also needs to know exactly when and how the attack vector changes in order to neutralize it effectively at the client's request.

Connecting to a mitigation service while already under attack is a particular challenge, mainly because availability is already degraded for every user trying to reach the service. If the attack succeeds and users do not get the requested resource, they try again by simply refreshing the page or restarting the application. This makes the situation worse, because junk traffic becomes harder to distinguish from legitimate traffic.

Where the mitigation service is actually deployed, in the cloud, physically on the client's site or in a partner's data center, is often treated as a key implementation requirement. Any of these placement options allows constant automatic or manual filtering, and therefore detection and mitigation of attacks; what really matters is that automatic filtering is available.

Most often, cloud mitigation services filter all incoming traffic, so it becomes fully available for analysis. Physical equipment installed at the network edge, or receiving a mirrored copy of the traffic, provides almost the same capabilities for monitoring and neutralizing attacks in real time.

Some vendors recommend using NetFlow or other derived metrics for traffic analysis, which is in itself a compromise for the worse, since third-party or derivative metrics carry only part of the information about the data and thus narrow the possibilities for detecting and mitigating an attack. Conversely, cloud services are not required to analyze 100% of incoming traffic, but most often they do, because this approach lets them build models and train algorithms best.

Another disadvantage of using the NetFlow protocol as the main analysis tool is that it provides only certain characteristics of data flows, a description of them, but not the flows themselves. You will, of course, notice an attack from the parameters that NetFlow exposes, but more complex attacks that can only be detected by analyzing the flow's contents will remain invisible. Application-layer (L7) attacks are therefore difficult to repel using NetFlow metrics alone, except when the attack is already 100% obvious at the transport layer (above L4, NetFlow is frankly useless).
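
To make this concrete, here is a minimal Python sketch of what a flow record carries and what detection on top of it looks like. The field names loosely follow a NetFlow v5 record, and the 1 Gbit/s threshold is an arbitrary illustrative value, not a recommendation: there is no payload field, so only volumetric anomalies are visible.

```python
# A minimal sketch: a NetFlow-style record holds flow metadata only, so the
# only thing a detector can do with it is look at volumes and header fields.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_addr: str
    dst_addr: str
    src_port: int
    dst_port: int
    protocol: int      # e.g. 6 = TCP, 17 = UDP
    packets: int       # packet count for the flow
    octets: int        # byte count for the flow
    duration_ms: int   # flow duration
    # Note: there is no payload field, so an HTTP request flood that looks
    # like normal browsing at this level cannot be told apart from it.

def looks_volumetric(flows: list[FlowRecord], window_s: float,
                     gbps_threshold: float = 1.0) -> bool:
    """Flag the window if total traffic exceeds a simple bit-rate threshold."""
    total_bits = sum(f.octets for f in flows) * 8
    return total_bits / window_s / 1e9 > gbps_threshold
```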


General diagram of connection to the filtering network.

1. Why do cloud DDoS mitigation service providers offer “always-on filtering” even if no attack is currently occurring?

The answer is simple: constant filtering is the most effective way to neutralize attacks. It is also worth adding that physical equipment on the client's premises is not very different from cloud filtering in this respect, except that the box is switched on and off physically somewhere in a data center. Either way the same choice has to be made: run it always, or only when needed.

Saying that reverse proxying narrows filtering down to the HTTP and HTTPS (SSL/TLS) protocols is only half the truth. HTTP traffic is an integral and critical part of comprehensive filtering systems, and reverse proxying is one of the most effective ways to collect and analyze it.

2. As we know, distributed denial of service attacks take many forms and are not limited to the HTTP protocol. Why is the cloud better in this case than stand-alone equipment on the client's side?

Overloading individual nodes of a filtering network is just as possible as overloading a box of equipment placed in a rack. No single piece of hardware is powerful enough to cope with every attack on its own; a complex, multi-component system is required.

However, even the largest hardware manufacturers recommend switching to cloud filtering for the most serious attacks, because their clouds consist of the same equipment organized into clusters, each of which is by default more powerful than a single appliance in a data center. Moreover, your box works only for you, while a large filtering network serves tens or hundreds of clients; such a network is designed from the start to process orders of magnitude more data in order to neutralize an attack successfully.

Before an attack, it is impossible to say for sure which will be easier to knock out: stand-alone customer premises equipment (CPE) or a filtering network node. But consider this: the failure of a filtering point is always your vendor's problem, while a piece of equipment that refuses to work as advertised after purchase is yours alone.

3. The network node acting as a proxy server must be able to receive content and data from the resource. Does this mean anyone can bypass a cloud-based attack mitigation solution?

If there is no dedicated physical line between you and the security provider, yes.

It is true that without a dedicated channel between the client and the mitigation provider, attackers can target the service's origin IP address directly. And not all providers of such services offer leased-line connectivity to the client in the first place.

In the general case, switching to cloud filtering means making the appropriate announcements via the BGP protocol. The individual IP addresses of the service under attack are then hidden behind the provider's announced prefixes and cannot be targeted directly.
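
As a rough illustration of why this works, the sketch below models longest-prefix-match forwarding with Python's ipaddress module. The prefixes and next-hop labels are invented for the example, and real BGP involves far more than this; the point is that once the mitigation provider announces a prefix covering the protected service, traffic to any address inside that prefix lands in the scrubbing network rather than at the origin.

```python
# A minimal longest-prefix-match sketch. All prefixes and the next-hop labels
# are illustrative assumptions, not real announcements.
import ipaddress

routing_table = {
    ipaddress.ip_network("198.51.100.0/24"): "mitigation provider (scrubbing)",
    ipaddress.ip_network("0.0.0.0/0"):       "default transit",
}

def next_hop(dst: str) -> str:
    """Pick the most specific matching prefix, as IP forwarding does."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("198.51.100.25"))  # protected service -> scrubbing network
print(next_hop("203.0.113.7"))    # everything else -> default transit
```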

4. Sometimes the ratio between the price of the service and the provider's own costs is used as an argument against cloud filtering. How does this compare with equipment located on the client's side?

It is safe to say that no matter how small a denial of service attack is, the cloud mitigation provider has to handle every one of them, even though the internal cost of building such a network always assumes that each attack will be intense, large, long and smart. On the other hand, this does not mean that the provider loses money by selling clients protection against everything while in reality coping mostly with small and medium-sized attacks. Yes, the filtering network may spend somewhat more resources than in an "ideal state," but after a successfully neutralized attack nobody asks questions: both the client and the provider are satisfied with the partnership and will, in all likelihood, continue it.

Now imagine the same situation with on-premises equipment: it costs orders of magnitude more up front, requires skilled hands for maintenance, and it will still end up handling mostly small and rare attacks. Was that factored in when the purchase of such far-from-cheap equipment was planned?

The thesis that a separate box, together with contracts for installation, technical support and the work of highly qualified engineers, will ultimately be cheaper than a suitable cloud plan is simply not true. The total cost of the equipment and its operation is very high, and that is the main reason DDoS protection and mitigation has become a separate business and formed an industry; otherwise every IT company would have its own attack protection department.

Since an attack is a rare event, the mitigation solution must be designed accordingly: it has to neutralize those rare attacks successfully, while also costing a reasonable amount, because everyone understands that most of the time nothing terrible happens.

Cloud providers design and build their networks efficiently in order to pool their risks and cope with attacks by distributing traffic among filtering points consisting of both hardware and software, two parts of a system built for a single purpose.

This is the law of large numbers, familiar from probability theory, and it is the same reason Internet service providers sell more channel capacity than they actually have. Hypothetically, all clients of an insurance company could run into trouble at the same time, but in practice this has never happened; and even though individual claims can be huge, the insurance business does not go bankrupt every time someone has an accident.

People who neutralize denial of service attacks professionally know that the cheapest, and therefore the most common, attacks are amplification attacks, and they can by no means be described as "small".

At the same time, a one-time payment for equipment installed on site does not buy protection forever: attack methods keep evolving. There is no guarantee that yesterday's equipment will cope with tomorrow's attack; that is only an assumption. A large investment in such equipment therefore starts losing value from the moment of installation, not to mention the need for constant maintenance and updates.

In the business of neutralizing DDoS, it is important to have a highly scalable solution with high connectivity, which is very difficult to achieve by purchasing a separate box of equipment.

When a serious attack begins, stand-alone equipment typically tries to signal to the cloud that an attack has started and to divert traffic to the filtering points. However, when the channel is already clogged with garbage, there is no guarantee that this message will even reach the cloud. And switching the data flow over will, again, take time.

Therefore, the only real price a customer pays for protecting their infrastructure from denial of service attacks, other than money, is latency. But, as we said, well-built clouds reduce latency and improve the global availability of the requested resource.

Keep this in mind when choosing between a hardware box and cloud filtering.

Web filtering, or an Internet filter, is a software or hardware tool that filters web pages by their content, making it possible to restrict user access to specific sites or services on the Internet.

Web filtering systems can be implemented in various variations:

  • utilities;
  • applications;
  • browser add-ons;
  • add-ons for Internet gateways;
  • cloud services.

Web filtering tools prevent visits to dangerous sites, such as those that host malware or are classified as fraudulent. But their main task is to control access to websites of certain categories: with their help a company can easily restrict employees' access to entire categories of websites.

All incoming traffic is analyzed and categorized. Depending on the settings of the Internet filtering tool, access to a certain category of content may be blocked, and a warning will be displayed to the user.

Initially, web filtering systems checked the URLs a user visited against their own databases and blacklists using regular expressions. This method proved less effective than pattern recognition and language analysis, so now not only links are checked: all information posted on a web page is searched for keywords and expressions. From this data, the percentage of content matching each predefined category is calculated, and if it exceeds the permitted level, the filtering system blocks access to the site. An additional function alongside filtering is collecting statistics on web page visits and reporting, which lets system administrators understand how the organization's traffic is spent.
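
A minimal sketch of this keyword-and-percentage approach might look as follows; the categories, keyword lists and the 5% threshold are illustrative assumptions, not values from any real product.

```python
# Count how much of a page's text matches each category's keyword list and
# block the page if a blocked category exceeds the threshold.
import re

CATEGORY_KEYWORDS = {
    "gambling": {"casino", "poker", "betting", "jackpot"},
    "jobsearch": {"vacancy", "resume", "recruiter", "salary"},
}
BLOCKED_CATEGORIES = {"gambling"}
THRESHOLD = 0.05  # at least 5% of the words must match the category

def categorize(page_text: str) -> dict[str, float]:
    words = re.findall(r"\w+", page_text.lower())
    if not words:
        return {}
    return {
        category: sum(w in keywords for w in words) / len(words)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }

def should_block(page_text: str) -> bool:
    scores = categorize(page_text)
    return any(scores.get(cat, 0.0) > THRESHOLD for cat in BLOCKED_CATEGORIES)

print(should_block("Play poker at our online casino, huge jackpot today"))  # True
print(should_block("Weekly engineering status report"))                     # False
```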

In addition to content filtering of HTTP traffic, developers of web filtering systems let their users inspect protected HTTPS traffic and check the validity of SSL/TLS certificates. This matters when attackers use encryption to steal personal data, bank card numbers and PIN codes.
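
As an illustration of the certificate-checking part only, here is a minimal Python sketch that lets the standard library verify a server's certificate chain and reports when the certificate expires. The host name is just an example, and a real HTTPS-inspecting filter does far more than this.

```python
# A minimal sketch (not a full HTTPS-inspection proxy): connect to a host,
# let the default context verify the chain against the system CA store, and
# report how long the certificate remains valid.
import socket
import ssl
import time

def check_certificate(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return {
        "subject": cert.get("subject"),
        "issuer": cert.get("issuer"),
        "seconds_until_expiry": int(not_after - time.time()),
    }

if __name__ == "__main__":
    print(check_certificate("example.com"))
```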

We should not forget that besides the danger of employees visiting fraudulent sites, a leak of confidential information first of all means data appearing in the company's outgoing traffic. Employees may mistakenly or intentionally send customer databases or personal data via instant messengers or email. A correctly configured Internet filter helps avoid such incidents, which damage the company's reputation and lead to serious losses.

When choosing a web filtering tool, you should pay attention to the completeness of the classifier, the speed of classification of new sites, and the percentage of false positives.

Traffic filtering tools

The task of traffic filtering tools is to inspect network traffic (the contents of network packets) and block (filter) traffic that does not meet the specified security rules. Traffic filters monitor and analyze packet contents at the application layer but, unlike firewalls and proxy servers, they do not act as an intermediary between two nodes to prevent their direct interaction. Unlike IDS/IPS tools, they do not detect or prevent network intrusions and attacks.

Traffic filtering tools include:

  • network protocol filters;
  • content filters, including URL filters;
  • spam filters;
  • web traffic filters to protect web applications (Web Security).

These traffic filtering tools are either built into individual security products, such as firewalls, network antiviruses, proxy servers, IDS/IPS, UTM, WAF, e-mail security and HIPS, to solve various problems, or are implemented as separate software or hardware-software products. Traffic filtering is also used in billing, traffic accounting and tariffing systems, and for control, statistics and real-time monitoring of users' network activity and Internet use.


Network protocol filters allow traffic over certain network protocols and block traffic over all others. These tools are installed at the network edge, ensuring that only the necessary traffic, over the permitted protocols, passes into the internal and/or external network, i.e. they enforce network policy.
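
A toy model of such a protocol filter is shown below; the allow-list of protocol numbers and ports is purely illustrative, not a recommended policy.

```python
# A minimal sketch (not a production filter): decide whether a packet should
# be allowed based purely on its IP protocol number and destination port.
ALLOWED = {
    (6, 80),    # TCP / HTTP
    (6, 443),   # TCP / HTTPS
    (17, 53),   # UDP / DNS
}

def allow_packet(ip_protocol: int, dst_port: int) -> bool:
    """Return True if the (protocol, port) pair is on the allow-list."""
    return (ip_protocol, dst_port) in ALLOWED

print(allow_packet(17, 123))  # False: UDP to port 123 (NTP) is dropped
print(allow_packet(6, 443))   # True: HTTPS is permitted
```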


Content filters

Content filters (Content Monitoring and Filtering, CMF) block access to unwanted Internet content. They operate as web traffic filters (HTTP/HTTPS protocols).

Web traffic is filtered by the URLs of blacklisted sites (URL filters), by keyword, signature or file type, and by site content using morphological analysis. Content filters are installed on network gateways (firewalls, proxy servers, etc.) or on workstations within antiviruses (parental control, protection against phishing sites), personal firewalls and so on, and they can also be deployed as stand-alone software tools.



Web traffic filters (Web Security)

Web traffic filters (Web Security) are used to protect web applications from various threats arriving via web traffic, including the injection of malicious code. They operate on HTTP/HTTPS traffic. Web traffic filtering functions are present in security tools such as WAFs, but to protect against threats carried by web traffic it is recommended to use specialized Web Security class solutions.

Main functions of Web Security tools:

  • protecting web traffic from viruses and malware;
  • blocking access to malicious sites;
  • protection against phishing attacks;
  • control of user access to various web resources;
  • URL filtering and website categorization.

Filtering information flows means passing them selectively through the screen, possibly performing certain transformations and notifying the sender that its data has been denied passage. Filtering is carried out according to a set of rules pre-loaded into the screen, which express the network aspects of the adopted security policy. It is therefore convenient to picture a firewall as a sequence of filters (Fig. A.2) processing the information flow. Each filter interprets its individual filtering rules by performing the following steps:

1. Analysis of information according to the criteria specified in the interpreted rules, for example, by the addresses of the recipient and sender or by the type of application for which this information is intended.

2. Making one of the following decisions based on the interpreted rules:

Do not pass the data;

Process the data on behalf of the recipient and return the result to the sender;

Pass the data to the next filter to continue the analysis;

Pass the data on, ignoring the remaining filters.

Fig. A.2. Firewall structure
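
To make the decision sequence above concrete, here is a minimal Python sketch of a firewall modeled as a chain of filters; the rule functions, packet representation and default-deny behavior are illustrative assumptions, not any particular product's logic.

```python
# A firewall as a sequence of filters: each filter either makes a final
# decision or hands the packet to the next filter in the chain.
from enum import Enum, auto
from typing import Callable

class Decision(Enum):
    DROP = auto()      # do not pass the data
    ANSWER = auto()    # process on behalf of the recipient, reply to sender
    CONTINUE = auto()  # hand the data to the next filter
    ACCEPT = auto()    # pass the data on, skipping the remaining filters

Packet = dict  # e.g. {"src": "10.0.0.5", "dst_port": 25}
Filter = Callable[[Packet], Decision]

def block_smtp(packet: Packet) -> Decision:
    return Decision.DROP if packet.get("dst_port") == 25 else Decision.CONTINUE

def allow_web(packet: Packet) -> Decision:
    return Decision.ACCEPT if packet.get("dst_port") in (80, 443) else Decision.CONTINUE

def run_chain(packet: Packet, filters: list[Filter]) -> Decision:
    """Run the packet through the filters until one makes a final decision."""
    for f in filters:
        decision = f(packet)
        if decision is not Decision.CONTINUE:
            return decision
    return Decision.DROP  # default-deny if no filter accepted the packet

print(run_chain({"dst_port": 443}, [block_smtp, allow_web]))  # Decision.ACCEPT
print(run_chain({"dst_port": 25},  [block_smtp, allow_web]))  # Decision.DROP
```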

Filtering rules can also specify additional actions that relate to mediation functions, for example, data transformation, event registration, etc. Accordingly, filtering rules define a list of conditions by which, using the specified analysis criteria, the following is carried out:

    permission or prohibition of further data transfer;

    performance of additional protective functions.

The following parameters can be used as criteria for analyzing information flow:

    service fields of message packets containing network addresses, identifiers, interface addresses, port numbers and other significant data;

    the direct contents of message packets, checked, for example, for the presence of computer viruses;

    external characteristics of the information flow, for example timing and frequency characteristics, data volume, etc.

The analysis criteria used depend on the layers of the OSI model at which filtering is performed. In general, the higher the level of the OSI model at which a firewall filters packets, the higher the level of protection it provides.

Performing mediation functions

The firewall performs mediation functions using special programs called shielding agents or simply intermediary programs. These programs are resident and prohibit the direct transmission of message packets between the external and internal networks.

If access is required from the internal network to the external network or vice versa, a logical connection must first be established with an intermediary program running on the screening computer. The intermediary program checks whether the requested internetwork interaction is allowed and, if it is, itself establishes a separate connection to the required computer. From then on, information between computers on the internal and external networks is exchanged through the software intermediary, which can filter the message flow and perform other protective functions.

It should be understood that a firewall can perform filtering without intermediary programs, maintaining transparent interaction between the internal and external networks; conversely, software intermediaries do not necessarily filter the message flow. In general, screening agents, which block transparent transmission of the message flow, can perform the following functions:

    identification and authentication of users;

    authentication of transmitted data;

    restriction of access to internal network resources;

    restriction of access to external network resources;

    filtering and transforming the message stream, for example, dynamic virus scanning and transparent encryption of information;

    translation of internal network addresses for outgoing message packets;

    registration of events, response to specified events, as well as analysis of registered information and generation of reports;

    caching of data requested from an external network.

For a high degree of security, users must be identified and authenticated not only when they access the internal network from the external one, but also in the opposite direction. Passwords must not be transmitted in clear text over publicly accessible links; this prevents unauthorized access by interception of network packets, which is possible, for example, with standard services such as Telnet. The optimal authentication method is one-time passwords. It is also convenient and reliable to use digital certificates issued by trusted authorities, for example a key distribution center. Most intermediary programs are designed so that the user is authenticated only at the start of a firewall session; after that, no additional authentication is required for a period of time set by the administrator.

Intermediary programs can also verify the authenticity of received and transmitted data. This is relevant not only for authenticating electronic messages but also for mobile code (Java applets, ActiveX controls), which can be forged. Verifying the authenticity of messages and programs amounts to checking their digital signatures; digital certificates can be used here as well.

Identification and authentication of users accessing the firewall make it possible to restrict their access to resources on the internal or external network. The methods for restricting access to internal network resources are no different from those supported at the operating system level. When restricting access to external network resources, one of the following approaches is used most often:

    allowing access only to specified addresses on the external network;

    filtering requests based on updated lists of invalid addresses and blocking searches for information resources using unwanted keywords;

    accumulation and updating by the administrator of authorized information resources of the external network in the firewall disk memory and complete prohibition of access to the external network.

Filtering and transformation of the message flow are performed by the intermediary according to a specified set of rules. Two types of intermediary programs should be distinguished here:

    screening agents focused on analyzing the message flow for certain types of services, for example, FTP, HTTP, Telnet;

    universal screening agents that process the entire message flow, for example agents that search for and neutralize computer viruses or transparently encrypt data.

A software intermediary analyzes the data packets arriving at it, and if an object does not meet the specified criteria, the intermediary either blocks its further progress or performs the appropriate transformation, for example neutralizing a detected computer virus. When analyzing packet contents, it is important that the screening agent can automatically unpack file archives passing through it.

Firewalls with intermediaries also make it possible to build virtual private networks (VPNs), for example to securely combine several local networks connected to the Internet into a single virtual network. A VPN provides a connection between local networks that is transparent to users while preserving the secrecy and integrity of the transmitted information by dynamically encrypting it. During transmission over the Internet, not only the user data can be encrypted but also service information: the end network addresses, port numbers and so on.

Intermediary programs can also perform such an important function as translation of internal network addresses. This is applied to all packets travelling from the internal network to the external one. For these packets, the intermediary automatically translates the IP addresses of the sending computers into a single "trusted" IP address associated with the firewall, from which all outgoing packets are then sent. As a result, every packet originating from the internal network is sent by the firewall, which rules out direct contact between the authorized internal network and the potentially dangerous external network. The firewall's IP address becomes the only active IP address visible to the external network.
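
A minimal sketch of this source address translation might look as follows; the addresses, port range and table structure are illustrative assumptions rather than how any particular firewall implements NAT.

```python
# Source NAT as described above: internal (ip, port) pairs are rewritten to
# the firewall's single external address and a fresh port.
import itertools
from typing import Optional

FIREWALL_IP = "203.0.113.10"         # the only address the external network sees
_port_pool = itertools.count(40000)  # ephemeral ports handed out for translations
_nat_table: dict = {}                # (internal ip, port) -> (external ip, port)

def translate_outgoing(src_ip: str, src_port: int) -> tuple:
    """Map an internal (ip, port) pair to the firewall's external (ip, port)."""
    key = (src_ip, src_port)
    if key not in _nat_table:
        _nat_table[key] = (FIREWALL_IP, next(_port_pool))
    return _nat_table[key]

def translate_incoming(dst_ip: str, dst_port: int) -> Optional[tuple]:
    """Find the internal host a reply belongs to, or None if there is no mapping."""
    for internal, external in _nat_table.items():
        if external == (dst_ip, dst_port):
            return internal
    return None

print(translate_outgoing("192.168.1.15", 51000))  # ('203.0.113.10', 40000)
print(translate_incoming("203.0.113.10", 40000))  # ('192.168.1.15', 51000)
```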

With this approach, the topology of the internal network is hidden from external users, which makes unauthorized access harder. Besides improving security, address translation allows the internal network to use its own addressing scheme, independent of the addressing of the external network, for example the Internet. This effectively solves both the problem of expanding the internal address space and the shortage of external addresses.

Other important functions of intermediary programs are event logging, reacting to specified events, analyzing the recorded information and generating reports. A mandatory reaction to detected attempts at unauthorized actions should be notification of the administrator, i.e. the issuing of warning signals. Any firewall that cannot send warning signals when an attack is detected is not an effective firewall.

Many firewalls contain a powerful system for recording, collecting and analyzing statistics. Accounting can be kept by client and server addresses, user IDs, session and connection times, the amount of data transmitted and received, and administrator and user actions. Accounting systems make it possible to analyze the statistics and provide administrators with detailed reports. By using special protocols, intermediaries can notify about certain events remotely in real time.

Special intermediaries also support caching of data requested from the external network. When users of the internal network access information resources of the external network, all of the information is accumulated on the disk of the firewall, which in this case is called a proxy server. If a subsequent request can be answered from the proxy server, the intermediary serves it without going to the external network, which speeds up access considerably. The administrator only has to take care of periodically updating the proxy server's contents.
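
The caching behavior can be sketched in a few lines; fetch_from_origin here is a hypothetical stand-in for a real outbound request, and the in-memory dictionary stands in for the proxy server's disk cache.

```python
# A minimal caching-proxy sketch: a request is served from the local cache
# when possible and fetched from the external network only on a cache miss.
from typing import Callable

_cache: dict = {}  # url -> cached response body

def fetch_from_origin(url: str) -> bytes:
    """Placeholder for an actual request to the external network."""
    return f"<content of {url}>".encode()

def proxy_get(url: str, fetch: Callable[[str], bytes] = fetch_from_origin) -> bytes:
    """Return the cached copy if present; otherwise fetch, cache and return it."""
    if url not in _cache:
        _cache[url] = fetch(url)   # only this path touches the external network
    return _cache[url]

proxy_get("http://example.com/")   # cache miss: goes to the external network
proxy_get("http://example.com/")   # cache hit: served from the proxy's cache
```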

The caching function can also be used to limit access to information resources on the external network. In this case, all authorized information resources of the external network are accumulated and kept up to date by the administrator on the proxy server; users of the internal network are allowed access only to the information resources of the proxy server, while direct access to external network resources is prohibited.

Screening agents are much more reliable than conventional filters and provide a greater degree of protection. However, they reduce the performance of data exchange between the internal and external networks and lack the transparency for applications and end users that is typical of simple filters.

Features of firewalling at various levels of the OSI model

Firewalls support internetwork security at various layers of the OSI model, and the protection functions performed at different layers of the reference model differ significantly. It is therefore convenient to present a complex firewall as a set of elementary screens, each focused on a single layer of the OSI model. Screening is most commonly performed at the network, session and application layers, which gives rise to such elementary firewalls (Fig. A.3) as the screening router, the screening transport (session-level gateway) and the screening gateway (application-level gateway).

Since the protocol stacks used in real networks (TCP/IP, SPX/IPX) do not map one-to-one onto the OSI model, screens of the listed types may also cover adjacent layers of the reference model while performing their functions. For example, an application-level screen can automatically encrypt messages as they are transmitted to the external network and automatically decrypt cryptographically protected incoming data; in this case it operates not only at the application layer of the OSI model but also at the presentation layer. A session-level gateway covers the transport and network layers in its operation. A screening router, when analyzing message packets, checks their headers at both the network and the transport layer.

Each type of firewall has its own advantages and disadvantages. Many of the firewalls in use are either application gateways or screening routers and do not provide complete internetwork security. Reliable protection is provided only by complex firewalls, each of which combines a screening router, a session-level gateway and an application gateway.

Fig. A.3. Types of firewalls operating at individual layers of the OSI model