This document discusses evaluating the performance of a DMZ (demilitarized zone) network configuration. It begins with an introduction to DMZs and their purpose of adding an additional layer of network security. It then reviews related work that has evaluated firewall performance but not specifically DMZ performance. The document aims to explore evaluating DMZ performance using network simulation software. It provides background on common firewall types - packet filtering, stateful inspection, and application-proxy gateways - before discussing ways to test DMZ configurations and analyze the effects on network performance.
LAN Design and implementation of Shanto Mariam University of Creative Technology
The Campus Area Network is the Local Area Network of Shanto-Mariam University of Creative Technology. As our final year project, we want to build the LAN of the computer labs at the Uttara campus of Shanto-Mariam University of Creative Technology. It will centralize control over all the computer labs throughout the campus. To do this, we make some changes and rebuild the Local Area Network of the university lab system. To establish organized control over the network, we install Windows Server 2012 R2, through which a user can access any lab computer and save work data in the user's own distinct folder.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document discusses using packet filtering as a mechanism for network security. It describes how packet filters examine packet headers to make routing decisions based on rules. Factors like asymmetric access requirements and protocol characteristics can complicate rule implementation. The document provides an example set of rules to allow access between two networks in most cases, but deny it from a specific subnet due to security issues. It notes that correctly specifying complex filter rules is difficult, and reordering rules can unintentionally change the access policy that was intended. Packet filtering shows promise as a network security tool but has limitations that must be understood.
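The rule-ordering pitfall described above can be sketched as a first-match packet filter. The rule and packet shapes here are invented for illustration (the document gives no concrete syntax); the point is that swapping two rules silently changes the effective access policy.

```python
# Minimal first-match packet filter sketch, assuming hypothetical
# rules of the form (action, source network). First matching rule wins.
import ipaddress

RULES = [
    ("deny",  ipaddress.ip_network("10.1.5.0/24")),   # the problem subnet
    ("allow", ipaddress.ip_network("10.1.0.0/16")),   # the rest of the network
]

def filter_packet(src_ip, rules=RULES):
    """Return the action of the first rule matching src_ip (default deny)."""
    addr = ipaddress.ip_address(src_ip)
    for action, net in rules:
        if addr in net:
            return action
    return "deny"  # implicit default-deny

# If the two rules are reordered, the broad "allow" shadows the
# subnet-specific "deny" -- the intended policy is silently lost.
```
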
Investigation, Design and Implementation of a Secure
1) The document outlines a network design project for the University of Tripoli that involves designing the network infrastructure and implementing security policies and protocols.
2) The design includes VLANs, firewalls, VPN access, and wireless access across multiple engineering departments.
3) The implementation phase focuses on secure configuration of network devices, access control lists, firewall rules, encrypted management access, and a captive portal for wireless users.
A Survey Paper on Jamming Attacks and its Countermeasures in Wireless Networks
The document discusses jamming attacks in wireless networks and game theoretic approaches to model the interaction between attackers and networks. It analyzes different types of jamming attacks and various anti-jamming techniques. Furthermore, it formulates the interaction as a game using game theory and analyzes Nash equilibria to determine optimal strategies for both networks and attackers.
Layered Approach for Preprocessing of Data in Intrusion Prevention Systems
Due to the extensive growth of the Internet and the increasing availability of tools and methods for intruding into and attacking networks, intrusion detection has become a critical component of network security. The TCP/IP protocol suite is the de facto standard for communication on the Internet, and the underlying vulnerabilities in its protocols are the root cause of intrusions. An intrusion detection system is therefore an important element of network security; it monitors real-time data, which leads to a high-dimensional problem. Processing large numbers of packets in real time is difficult and costly, so data preprocessing is necessary to remove redundant and unwanted information from packets and clean the network data. Here we focus on two important aspects of intrusion detection: accuracy and performance. The layered approach of the TCP/IP model can be applied to packet preprocessing to achieve earlier and faster intrusion detection. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIPS. The paper demonstrates that high attack detection accuracy can be achieved by using a layered approach to data preprocessing, and proposes a preprocessing framework for intrusion prevention systems that reduces the false positive rate and increases detection efficiency. We experimented with real-time network traffic as well as the KDDCup99 dataset.
Providing security to wireless packet networks by using optimized security...
This document discusses providing security to wireless packet networks using an optimized security method. It proposes encrypting data packets when they are scheduled using the Blowfish encryption algorithm. This would secure the packets at the initial level of scheduling, preventing attackers from modifying packets even if they are delayed. The document outlines the Blowfish algorithm and its use of variable-length keys and data encryption in rounds to encrypt packets. It also describes the system model used and assumptions made, including modeling the wireless channel as a switch and defining packet attributes like arrival time, processing time, security level and deadline. Encrypting packets at the scheduling level with Blowfish aims to securely transmit real-time data over wireless networks.
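The packet model the abstract describes (arrival time, processing time, security level, deadline, with encryption applied at scheduling time) can be sketched as follows. Blowfish itself is not implemented here; `encrypt()` is a stand-in placeholder so the scheduling structure is visible, whereas the paper uses the real Blowfish cipher.

```python
# Sketch of scheduling-level encryption: packets carry the attributes
# the paper names, are encrypted as they are admitted to the scheduler,
# and are released earliest-deadline-first. encrypt() is a placeholder,
# NOT Blowfish -- a real system would use a vetted cipher library.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    deadline: float                              # ordering key (EDF)
    arrival: float = field(compare=False)
    proc_time: float = field(compare=False)
    security_level: int = field(compare=False)
    payload: bytes = field(compare=False, default=b"")

def encrypt(payload: bytes, level: int) -> bytes:
    # Placeholder transformation standing in for Blowfish.
    return bytes(b ^ (level & 0xFF) for b in payload)

def schedule(packets):
    """Encrypt each packet as it is admitted, then release by deadline."""
    heap = []
    for p in packets:
        p.payload = encrypt(p.payload, p.security_level)  # secured at scheduling
        heapq.heappush(heap, p)
    return [heapq.heappop(heap) for _ in range(len(heap))]
```
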
Review on redundancy removal of rules for optimizing firewall
This document summarizes previous work on optimizing firewall performance by removing redundant rules. It discusses how previous approaches identified redundant rules between adjoining firewalls without revealing the firewall policies. However, these approaches required the firewalls to know each other's policies or be administered under one domain. The document also reviews literature on anomaly detection techniques, traffic-aware firewall optimization, and analysis tools for modeling and checking firewall configurations. Overall, it provides context on the challenges of optimizing firewalls through redundancy removal while preserving the privacy of each firewall's policies.
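The core single-firewall case of redundancy removal can be illustrated with a shadowing test: under first-match semantics, a rule fully covered by an earlier rule can never fire and is safely removable. The range-based rule shape is an assumption for the sketch; the privacy-preserving inter-firewall protocols the review covers are considerably more involved.

```python
# Hedged sketch of first-match shadowing removal: rules are
# (action, (lo, hi)) address ranges, an invented representation.

def covers(outer, inner):
    """True if range `outer` (lo, hi) fully contains range `inner`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def remove_shadowed(rules):
    """Drop any rule fully covered by an earlier rule: under
    first-match evaluation such a rule can never fire, so removing
    it cannot change the policy."""
    kept = []
    for action, rng in rules:
        if any(covers(r, rng) for _, r in kept):
            continue  # shadowed -> redundant
        kept.append((action, rng))
    return kept
```
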
The document discusses the need for network security on campus networks and some of the common risks faced at different layers of the TCP/IP model. It proposes using the SAPPDRR dynamic security model, which incorporates risk analysis, security policies, defense systems, real-time monitoring, response, disaster recovery and countermeasures. The model aims to provide comprehensive security and stability for campus networks through active defense against threats.
A review on software defined network security risks and challenges (TELKOMNIKA JOURNAL)
Software defined networking is an emerging network architecture that separates the traditionally integrated control logic and data forwarding functionality into different planes, namely the control plane and the data forwarding plane. The data plane performs the actual end-to-end packet forwarding, while the control plane makes the routing and traffic-steering decisions between different network segments. In a software defined network, the infrastructure layer is where all the networking devices, such as switches and routers, are connected to a separate controller layer with the help of a standard called the OpenFlow protocol. OpenFlow is a standard protocol that allows devices from different vendors, such as Juniper, Cisco and Huawei switches, to be connected to the controller. The centralization of the software defined network (SDN) controller makes the network more flexible, manageable and dynamic compared to the traditional communication network, enabling, for example, bandwidth provisioning and dynamic scale-out and scale-in; however, the centralized SDN controller is more vulnerable to security risks such as DDoS and flow rule poisoning attacks. In this paper, we explore the architecture and principles of software defined networking, the security risks associated with the centralized SDN controller, and possible ways to mitigate these risks.
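The control/data plane split described above can be sketched as a toy flow-table interaction: the switch only matches and forwards, and on a table miss it asks the controller, which holds the global view. Class and field names here are invented for illustration, not real OpenFlow messages.

```python
# Toy SDN split: the controller (control plane) decides and installs
# flow rules; the switch (data plane) only matches its flow table.

class Controller:
    def __init__(self, topology):
        self.topology = topology          # dst -> output port: the global view

    def decide(self, dst):
        return self.topology.get(dst, 0)  # port 0 = drop/flood fallback

class Switch:
    def __init__(self):
        self.flow_table = {}              # dst -> output port

    def handle(self, pkt_dst, controller):
        if pkt_dst not in self.flow_table:         # table miss ->
            port = controller.decide(pkt_dst)      # "packet-in" to controller
            self.flow_table[pkt_dst] = port        # rule installed ("flow-mod")
        return self.flow_table[pkt_dst]            # subsequent packets: data plane only
```

The centralization risk the abstract names is visible here too: every table miss crosses to the one controller, which is why flooding it (DDoS) or poisoning installed rules is so damaging.
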
This document discusses security issues with the Ad Hoc On-Demand Distance Vector (AODV) routing protocol for mobile ad hoc networks. It first provides background on AODV and security challenges in mobile ad hoc networks. It then analyzes specific attacks on AODV such as traffic redirection, replay attacks, and loop formation. The document presents simulation results for a 5-node network showing that insecure AODV has good throughput but higher packet dropping and delay. It concludes that securing AODV is needed to address these issues.
USING A DEEP UNDERSTANDING OF NETWORK ACTIVITIES FOR SECURITY EVENT MANAGEMENT (IJNSA Journal)
With the growing deployment of host-based and network-based intrusion detection systems in increasingly large and complex communication networks, managing low-level alerts from these systems becomes critically important. Probes of multiple distributed firewalls (FWs), intrusion detection systems (IDSs) or intrusion prevention systems (IPSs) are collected throughout a monitored network, such that large series of alerts (alert streams) need to be fused. An alert indicates an abnormal behavior, which could potentially be a sign of an ongoing cyber attack. Unfortunately, in a real data communication network, administrators cannot manage the large number of alerts occurring per second, particularly since most alerts are false positives. Hence, an emerging track of security research has focused on alert correlation to better distinguish true positives from false positives. To achieve this goal we introduce Mission Oriented Network Analysis (MONA). This method builds on data correlation to derive network dependencies and manage security events by linking incoming alerts to network dependencies.
Study of Layering-Based Attacks in Mobile Ad Hoc Networks (IRJET Journal)
This document summarizes research on layering-based attacks in mobile ad hoc networks (MANETs). It begins with an abstract noting that MANETs are commonly used in military and disaster situations, but require high security due to challenges from their characteristics. The document then reviews constraints of MANETs like limited resources and transmission range. It examines security requirements for MANETs and various types of attacks against different network layers, including jamming, denial of service, link spoofing, selective forwarding, sinkhole, Sybil, black hole, and wormhole attacks. Finally, it concludes that no single mechanism can provide full security for MANETs due to their constraints, making security a challenge that requires mapping solutions to different aspects.
Co-operative Wireless Intrusion Detection System Using MIBs From SNMP (IJNSA Journal)
In the emerging Internet, security issues are becoming more challenging. Wired LANs are somewhat under control, but in wireless networks the exponential growth of attacks has made such security loopholes difficult to detect. Wireless network security is being addressed using firewalls, encryption techniques and wired IDS (Intrusion Detection System) methods, but the approaches used in wired networks have not produced effective results for wireless networks. This is because of features of wireless networks such as the open medium, dynamically changing topology, cooperative algorithms, lack of a centralized monitoring and management point, and lack of a clear line of defense. So there is a need for a new approach that efficiently detects intrusion in wireless networks. Efficiency can be achieved by implementing a distributed, cooperative, multi-agent IDS; the proposed system supports all three of these features. It includes mobile agents for intrusion detection that use SNMP (Simple Network Management Protocol) and MIB (Management Information Base) variables for mobile wireless networks.
COMBINING NAIVE BAYES AND DECISION TREE FOR ADAPTIVE INTRUSION DETECTION (IJNSA Journal)
In this paper, a new learning algorithm for adaptive network intrusion detection using a naive Bayesian classifier and a decision tree is presented, which performs balanced detection, keeps false positives at an acceptable level for different types of network attacks, and eliminates redundant attributes as well as contradictory examples from the training data that make the detection model complex. The proposed algorithm also addresses some difficulties of data mining, such as handling continuous attributes, dealing with missing attribute values, and reducing noise in training data. Due to the large volumes of security audit data as well as the complex and dynamic properties of intrusion behaviours, several data mining-based intrusion detection techniques have been applied to network-based traffic data and host-based data in the last decades. However, various issues remain to be examined in current intrusion detection systems (IDS). We compared the performance of our proposed algorithm with existing learning algorithms on the KDD99 benchmark intrusion detection dataset. The experimental results show that the proposed algorithm achieves high detection rates (DR) and significantly reduces false positives (FP) for different types of network intrusions using limited computational resources.
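The naive Bayesian component that such a hybrid builds on can be sketched as a minimal categorical classifier with Laplace smoothing. The paper's decision-tree construction and attribute elimination are not reproduced here; the feature/label values are invented for the example.

```python
# Minimal categorical naive Bayes with Laplace smoothing -- the
# probabilistic half of a naive-Bayes-plus-decision-tree hybrid.
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    classes = Counter(labels)                 # class -> count
    counts = defaultdict(Counter)             # (class, attr idx) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(y, i)][v] += 1
    return classes, counts

def predict_nb(row, classes, counts):
    """Pick the class maximizing log P(class) + sum log P(attr|class)."""
    total = sum(classes.values())
    best, best_lp = None, -math.inf
    for y, ny in classes.items():
        lp = math.log(ny / total)
        for i, v in enumerate(row):
            c = counts[(y, i)]
            lp += math.log((c[v] + 1) / (ny + len(c) + 1))  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```
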
This document summarizes a research paper on a Secure Adaptive Distributed Topology Control Algorithm (SADTCA) for mobile ad hoc networks. The SADTCA aims to organize nodes into clusters, distribute keys, and dynamically determine quarantine regions to mitigate spam attacks. It operates in four phases: 1) detecting malicious nodes, 2) forming clusters headed by cluster leaders, 3) distributing keys to secure communication, and 4) renewing keys periodically. The SADTCA analyzes energy consumption and communication overhead. It also introduces the Elliptic Curve Digital Signature Algorithm to generate highly secure keys with small sizes for authentication. Simulation results show the approach effectively defends against spam attacks while remaining feasible and cost-effective for mobile
This document discusses the Address Resolution Protocol (ARP) and its use in intrusion detection systems. It proposes a standardized 64-byte ARP protocol structure to more easily capture ARP packets from a network. The structure includes fields for frame information, destination and source addresses, ARP type details, and sender/target MAC and IP addresses. This standardized structure could be integrated into network monitoring to help detect intrusions without affecting normal data transfer processes. Overall, the document aims to optimize the ARP sequence for use in intrusion detection systems.
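The field layout described (frame information, destination and source addresses, ARP type details, and sender/target MAC and IP addresses) corresponds to the standard Ethernet ARP frame, which Python's `struct` module can pack directly: a 14-byte Ethernet header plus a 28-byte ARP body, padded to the 60-byte minimum frame (64 bytes once the 4-byte FCS the document's layout counts is appended on the wire). This is a sketch of the standard layout, not the paper's exact proposed structure.

```python
# Packing a standard Ethernet ARP request with struct; field offsets
# match the classic layout (RFC 826 ARP inside an Ethernet II frame).
import struct

def build_arp_request(src_mac: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    eth = struct.pack("!6s6sH",
                      b"\xff" * 6,      # destination: broadcast
                      src_mac,
                      0x0806)           # EtherType: ARP
    arp = struct.pack("!HHBBH6s4s6s4s",
                      1,                # hardware type: Ethernet
                      0x0800,           # protocol type: IPv4
                      6, 4,             # MAC / IPv4 address lengths
                      1,                # opcode: request
                      src_mac, src_ip,
                      b"\x00" * 6,      # target MAC: unknown
                      dst_ip)
    frame = eth + arp                   # 14 + 28 = 42 bytes
    return frame + b"\x00" * (60 - len(frame))  # pad to 60-byte minimum
```

A monitor can unpack captured frames with the same format strings, which is what makes a fixed layout convenient for intrusion detection.
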
A COMBINATION OF TEMPORAL SEQUENCE LEARNING AND DATA DESCRIPTION FOR ANOMALYB... (IJNSA Journal)
Through continuous observation and modelling of normal behavior in networks, an Anomaly-based Network Intrusion Detection System (A-NIDS) offers a way to find possible threats via deviation from the normal model. Analysis of network traffic based on a time series model has the advantage of exploiting the relationships between packets within network traffic and observing trends of behavior over a period of time. It generates new sequences with good features that support anomaly detection in network traffic and provide the ability to detect new attacks. Besides, an anomaly detection technique that focuses on the normal data and aims to build a description of it is an effective technique for anomaly detection in imbalanced data. In this paper, we propose a model combining a Long Short Term Memory (LSTM) architecture for processing time series with Support Vector Data Description (SVDD) for anomaly detection in A-NIDS, to obtain the advantages of both. In this model, the parameters of the LSTM and the SVDD are jointly trained with a joint optimization method. Our experimental results on the KDD99 dataset show that the proposed combined model obtains high performance in intrusion detection, especially for DoS and Probe attacks, with 98.0% and 99.8% detection, respectively.
This document discusses the design and implementation of a network security model using routers and firewalls. It begins by outlining the importance of network security and some common vulnerabilities, threats, and attacks against network devices like routers. It then provides details on specific attacks like session hijacking, spoofing, and denial of service attacks. The document also discusses best practices for router and firewall security policies, including access control, authentication, and traffic filtering. The overall aim is to protect networks from vulnerabilities and security weaknesses by implementing preventative measures, securing devices like routers and firewalls, and establishing proper security policies.
This document discusses firewall vulnerabilities and proposes a new approach to classifying them. It begins by providing background on firewalls and their increasing importance for network security. The document then reviews different types of firewalls and their functions. Next, it categorizes common firewall vulnerabilities according to their nature and the firewall type. Some current approaches for mitigating vulnerabilities are also mentioned. The document concludes by briefly introducing the technique of firewall fingerprinting, which can allow attackers to identify a firewall's properties to exploit known vulnerabilities.
This document discusses firewalls and their types. It begins by explaining that firewalls protect networks by guarding entry points and are becoming more sophisticated. It then defines a firewall as a network security system that controls incoming and outgoing network traffic based on rules. The document outlines different generations of firewalls and describes four main types: packet filtering, stateful packet inspection, application gateways/proxies, and circuit-level gateways. It details the characteristics, strengths, and weaknesses of each type. Finally, it emphasizes that networks are still at risk of attacks and that firewalls have become ubiquitous, so choosing the right solution depends on needs, policies, resources.
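One of the four types described, stateful packet inspection, can be sketched with a connection table: outbound flows are recorded, and inbound packets are admitted only if they are replies to a connection an inside host opened. The addressing scheme is simplified for illustration.

```python
# Sketch of stateful packet inspection: the firewall tracks which
# (inside, outside) endpoint pairs the inside hosts initiated, and
# admits inbound traffic only for those known connections.

class StatefulFirewall:
    def __init__(self):
        self.conn_table = set()   # {(inside endpoint, outside endpoint)}

    def outbound(self, src, dst):
        self.conn_table.add((src, dst))   # remember who initiated
        return True                       # outbound allowed by this toy policy

    def inbound(self, src, dst):
        # Allow only replies to connections the inside host opened.
        return (dst, src) in self.conn_table
```

A plain packet filter has no such table, which is exactly the strength/weakness trade-off the document attributes to the two types.
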
Accessing secured data in cloud computing environment (IJNSA Journal)
The number of businesses using cloud computing has increased dramatically over the last few years due to attractive features such as scalability, flexibility, fast start-up and low costs. Services provided over the web range from using the provider's software and hardware to managing security and other issues. Some of the biggest challenges at this point are providing privacy and data security to subscribers of public cloud servers. An efficient encryption technique presented in this paper can be used for secure access to and storage of data on a public cloud server, and for moving and searching encrypted data through communication channels while protecting data confidentiality. This method ensures data protection against both external and internal intruders. Data can be decrypted only with the key provided by the data owner, while the public cloud server is unable to read encrypted data or queries. Answering a query does not depend on its size and is done in constant time. Data access is managed by the data owner. The proposed scheme allows detection of unauthorized modifications.
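One common way to realize "searching encrypted data without the server reading the query" is a keyword index keyed by HMAC trapdoors: the owner derives opaque tokens from keywords, and the server matches tokens in constant time per lookup without ever seeing plaintext. This is an illustrative construction under that assumption, not the paper's exact scheme.

```python
# Hedged sketch of a searchable index: the data owner holds `key`;
# the server stores only HMAC tokens and doc-id sets.
import hmac
import hashlib

def trapdoor(key: bytes, keyword: str) -> bytes:
    """Owner-side: derive the opaque search token for a keyword."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict) -> dict:
    """Owner-side: docs is {doc_id: [keywords]} -> {token: {doc_id}}."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(trapdoor(key, w), set()).add(doc_id)
    return index

def search(index: dict, token: bytes) -> set:
    """Server-side: constant-time dict lookup on the opaque token."""
    return index.get(token, set())
```
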
A firewall is a device or set of instruments designed to permit or deny network transmissions based on a set of rules, and is frequently used to protect networks from unauthorized access while permitting legitimate communications to pass, including during sensitive data transmission. Distributed firewalls allow enforcement of security policies on a network without restricting its topology to an inside/outside point of view. Using a policy language and centrally delegating its semantics to all members of the network's domain supports the application of firewall technology in organizations whose network devices communicate over insecure channels, while still allowing a logical separation of hosts inside and outside the trusted domain. We introduce the general concepts of such distributed firewalls, their requirements and implications, discuss their suitability against common threats on the Internet, and give a short discussion of contemporary implementations.
A NEW COMMUNICATION PLATFORM FOR DATA TRANSMISSION IN VIRTUAL PRIVATE NETWORK (ijmnct)
Nowadays security is an evident concern in designing networks, and much research has been done in this field. The main purpose of such research is to provide appropriate instructions for data transmission over a reliable platform. One approach to transferring information is to use public networks like the Internet. The main purpose of the present paper is to introduce a communication platform that enables users to reach a new security level. In this paper, VPN, as one of the different approaches to establishing security, is proposed and examined. In this type, the tunneling method of Internet Protocol Security (IPsec) is used. Furthermore, an advanced fingerprint scanning method is applied to establish authentication, and the Diffie-Hellman algorithm, with modifications, is used for encrypting and decrypting data.
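The textbook Diffie-Hellman exchange the abstract invokes (the paper's modified variant is not specified) works out as below. The prime here is deliberately tiny for illustration; real deployments use 2048-bit or larger groups.

```python
# Textbook Diffie-Hellman key agreement. The 64-bit prime is for
# illustration ONLY and is far too small to be secure in practice.
import secrets

P = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, the largest 64-bit prime (toy size)
G = 5                    # generator

def dh_keypair():
    """Return (private, public) where public = G**private mod P."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def dh_shared(priv, other_pub):
    """Both sides compute the same G**(a*b) mod P."""
    return pow(other_pub, priv, P)
```
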
ANALYSIS OF SECURITY ASPECTS FOR DYNAMIC RESOURCE MANAGEMENT IN DISTRIBUTED S... (ijcseit)
Millions of people all over the world are now connected to the Internet for doing business. Therefore, the demand for Internet and web-based services continues to grow, and the required infrastructure needs to be installed to balance the computing load. Despite the success of this new infrastructure, it is susceptible to several critical malfunctions. Therefore, to guarantee secure operations on networks and data, several solutions need to be developed, and researchers are working in this direction to find better security solutions. In a distributed environment, security is the most crucial problem in the management of resources, both computing and networking, including resource allocation and resource utilization. In this paper, an extensive review has been made of the different security aspects, the different types of attacks, and the techniques to sustain and block attacks in the distributed environment.
When your computer is connected to the Internet, you expose it to a variety of potential threats. The Internet is designed in such a way that if you have access to it, all other computers on the Internet can connect to your computer, leaving you vulnerable to various common attacks. This is especially troubling as several popular programs open services on your computer that allow others to view files on it. While this functionality is expected, the difficulty is that security errors are regularly discovered that allow hackers to attack your computer, with the ability to view or destroy sensitive information stored on it. To protect your computer from such attacks you need to "teach" it to ignore or resist external probing attempts. The common name for such a program is a firewall. A firewall is software that creates a secure environment whose function is to block or restrict incoming and outgoing information over a network. Simple firewalls alone are not suitable for business premises that must maintain information security while supporting the free exchange of ideas. Firewalls are becoming more and more sophisticated by the day, and new features are being added all the time, so that, despite criticism and evolving attack methods, they remain a powerful defense. In this paper, we present a network firewall that helps the corporate environment and other networks that want to exchange information over the network. The firewall protects the flow of traffic through the Internet, limits the amount of external and internal information exchanged, and provides the internal user with the illusion of anonymous FTP and WWW communications.
Fragmentation of Data in Large-Scale System For Ideal Performance and Security (Editor IJCATR)
Cloud computing is becoming a prominent trend that offers a number of significant advantages. One of the foundational advantages of cloud computing is pay-as-per-use, where the customer pays according to the use of the services. At present, users' growing storage availability drives data generation, and such large amounts of data need to be farmed out. There is an indefinitely large number of Cloud Service Providers (CSPs). Cloud Service Providers are an increasing trend for many organizations as well as for customers, as they decrease the burden of maintenance and local data storage. In cloud computing, transferring data to the control of a third-party administrator gives rise to security concerns. Within the cloud, data may be compromised by attacks from unauthorized users and nodes. So, in order to protect data in the cloud, stronger security measures are required, along with optimization of data retrieval time. The proposed system addresses both security and performance. In the DROPS methodology, files are first divided into fragments, and the fragmented data is replicated over the cloud nodes. A single fragment of a particular file is stored on each node, which ensures that no meaningful information is revealed to an attacker even after a successful attack. The nodes are separated by T-Coloring in order to prevent an attacker from guessing the fragments' locations. Complete data security is ensured by the DROPS methodology.
This document summarizes a research paper that classifies different types of networks and discusses their associated security issues. It categorizes networks based on size (LAN, MAN, WAN), design (peer-to-peer, client-server, standalone), layering (layered, non-layered), and provides examples such as Ethernet, Wi-Fi, VPNs. It also discusses common security threats for different network types like viruses, denial of service attacks, and evaluates security measures including encryption, firewalls, access control. The paper aims to provide a comprehensive classification of networks and analyze how security needs vary depending on the network and software development stages.
Firewall and VPN investigation on cloud computing performance - IJCSES Journal
The paper presents a way to provide security for one of the recent developments in computing: cloud computing. The main interest is to investigate the impact of using a Virtual Private Network (VPN) together with a firewall on cloud computing performance. Computer modeling and simulation of cloud computing with the OPNET modular simulator has therefore been conducted for cloud computing with and without VPN and firewall. To obtain a clear picture of these impacts, the simulation considers different scenarios and different forms of application traffic. Simulation results showing throughput, delay, and server traffic sent and received have been collected and presented. The results clearly show that the use of VPN and firewall affects both throughput and delay, with a greater impact on throughput than on delay. Furthermore, email traffic is more affected than web traffic.
A firewall is hardware or software that filters network traffic by allowing or denying transmission based on a set of rules to protect networks from unauthorized access. There are two main types - network layer firewalls which filter at the IP address and port level, and application layer firewalls which can filter traffic from specific applications like FTP or HTTP. A DMZ (demilitarized zone) is a physical or logical sub-network exposed to an untrusted network like the internet that contains external-facing services, protected from internal networks by firewalls. Firewalls provide security benefits like restricting access to authorized users and preventing intrusions from untrusted networks.
Security in MANET based on PKI using fuzzy function - IOSR Journals
This document discusses security issues in mobile ad hoc networks (MANETs) and proposes a security model based on public key infrastructure (PKI) using fuzzy logic. Specifically, it first provides background on MANETs and discusses their key characteristics and security challenges due to their dynamic topology and lack of infrastructure. It then introduces the concept of using PKI and asymmetric encryption with public/private key pairs to distribute session keys between nodes. The proposed algorithm uses fuzzy logic to determine the appropriate length of session keys based on discrimination of different attack types on the network. Experimental results show that the fuzzy-based security approach can enhance MANET security.
IRJET- Multimedia Content Security with Random Key Generation Approach in... - IRJET Journal
This document proposes a double stage encryption algorithm to securely store multimedia content like images, audio, and video in the cloud. In the first stage, multimedia content is encrypted into ciphertext using AES symmetric encryption. The ciphertext is then encrypted again in the cloud using a randomly generated symmetric key for added security. This makes it difficult for attackers to extract the encryption key and recover the original multimedia content even if they obtain the ciphertext. The algorithm aims to provide security against side channel attacks in cloud computing through its use of random key generation and double encryption. It is described as having low complexity and wide applicability for safeguarding multimedia content in the cloud.
EFFECTIVE METHOD FOR MANAGING AUTOMATION AND MONITORING IN MULTI-CLOUD COMPUT... - IJNSA Journal
Multi-cloud is an advanced version of cloud computing that allows its users to utilize different cloud systems from several Cloud Service Providers (CSPs) remotely. Although it is a very efficient computing facility, threat detection, data protection, and vendor lock-in are the major security drawbacks of this infrastructure, and these factors act as a catalyst for serious cyber-crime in the virtual world. This research paper gives an overview of the privacy and safety issues of a multi-cloud environment. The objective is to analyze some logical automation and monitoring provisions, such as monitoring Cyber-Physical Systems (CPS), home automation, automation in Big Data Infrastructure (BDI), Disaster Recovery (DR), and secret protection. The results of this investigation indicate that it is possible to avoid the security snags of a multi-cloud interface by adopting these solutions methodically.
4. report (cryptography & computer network) - JIEMS Akkalkuwa
This document discusses network security and cryptography. It begins by defining network security and explaining the key areas of secrecy, authentication, non-repudiation, and integrity control. It then discusses what cryptography is, explaining that it uses mathematics to encrypt and decrypt data to provide security. The document provides an overview of symmetric and asymmetric key encryption techniques as well as hash functions. It also discusses some existing network security systems and their use of symmetric encryption with periodic key distribution and refresh.
DEFENSE IN DEPTH - LinaCovington707
DEFENSE IN DEPTH
Introduction
The objective of this paper is to visually display a defense in depth model and explain features that encourage an overall layered defense tactic to strategically mitigate potential threats. The network comprises a corporate site in Chicago where all servers are located, including a web server, file server, print server, mail server, and FTP server. The connection to the Internet has a speed of 50 Mbps, with 300 employees that have access to the Internet as well as local and corporate resources. There is also one remote site 8 miles away with 20 employees that need access to all resources at corporate, as well as an Internet connection limited to 3 Mbps. The design utilizes all network devices, including routers, switches, hubs, firewalls, VPNs, and proxies. Along with the devices, the diagram shows the interconnections between them, the end-user (client) devices (desktops, laptops), and the Internet cloud, which is shown generically to represent the network's interface to the Internet.
In addition to the design this discussion will review the flow of data throughout the network to reveal security features that create that in depth design to protect any organization with similar requirements. I will first review the network diagram with physical features, locations, and Internet speeds; then discuss in depth, security features from each of the seven network domains (user, workstation, Local Area network (LAN), LAN-to-Wide Area Network (WAN), Remote Access, WAN, and Systems/Applications) and how they will be incorporated throughout the design and infrastructure of the network.
The objective is to implement these features to enforce the confidentiality, integrity, availability, privacy, authenticity, authorization, non-repudiation, and accounting. (Stewart, J. M., 2011).
Network Design, Data Flow, and Security Features
The network design features the corporate headquarters site in Chicago, which includes, within the Information Technology (IT) department, a database server, an FTP server, an application server, a web server, an email server, a print server, and 30 workstations. The database server uses role-based access as well as two-factor authentication for server and user access (Common Access Card and username/password). The FTP server uses the TCP protocol and sits within the internal network with additional firewall rules, routing policies that limit open ports, and internal training on how to spot potential threats for the IT department to monitor. The web server must be held in the DMZ to allow the additional port access needed to utilize the Internet. The email and print servers are also located within the internal network.
Outside of the IT department, this organization has six departments on three floors that include 45 workstations and 5 printers per department. Each department is interconnected to corporate resources ...
An authenticated key management scheme for securing big data environment - IJECEIAES
If data security issues in a big data environment are considered, then the distribution of keys, their management, and the ability to transfer them between server users over a public channel are among the most critical issues that must be addressed; the importance of key management may even outweigh the strength of the encryption algorithm. Therefore, this paper proposes a new scheme called the authenticated key management scheme (AKMS), which works through two levels of security. The first concerns how the user communicates with the server while preventing any attempt to penetrate senders or receivers. The second makes the data sent unintelligible by encrypting it so that it is unreadable by anyone except the intended receiver; the server thus functions only as a passageway for communication between sender and receiver. The presented work discusses concepts related to analysis and evaluation, such as key security, data security, public channel transmission, and security isolation, which demonstrate the value the AKMS scheme carries. The AKMS scheme also achieved very satisfactory results for computation cost, communication cost, and storage overhead, showing that it is appropriate, secure, and practical for protecting users' private data in big data environments.
Evaluation the Performance of DMZ
into two segments. The first segment can contain public-access machines such as an HTTP server, a DNS server, and a mail server; this segment is called the demilitarized zone (DMZ). The second can contain private-access machines such as application servers, database servers, and workstations. A DMZ is a network added between a protected network and an external network in order to provide an additional layer of security [1].
A DMZ is the front line of a network, protecting valuable resources from untrusted environments. A DMZ is an example of the principle of defence in depth. That principle holds that no single thing, and no two things, will ever provide complete security; the only way a system is reasonably protected is to consider every part of the system and ensure that each part is secure. A DMZ adds a security layer beyond a single perimeter [2]. It separates the external network from direct access to the internal network by isolating the machines that are directly reachable from all other machines. Most of the time the external network is the Internet and the web server sits in the DMZ, but this is not the only possible arrangement. A DMZ can also be used to isolate specific machines in the network from other machines, for example for a department that requires both Internet access and access to the corporate network. In DMZ nomenclature, the internal network holds more sensitive information than the external one [2].
Separation is important. Any system should separate its important applications and information. This provides checks and balances, ensuring that an untrusted area cannot corrupt the whole system. The separation principle is well established in government: most governments have three branches, the executive, the legislative, and the judicial. The same design is needed in a computer network. Separation of information is necessary so that an attacker cannot reach all of the systems at once. An attacker might access a web server, but it would be far worse if the attacker could reach the database through that web server. This is exactly the type of problem a DMZ is designed to prevent.
This work discusses a way of evaluating a DMZ with regard to network performance. Different scenarios are investigated and analysed using the OPNET simulator.
2. Related Work
Only a small number of studies have examined the performance of DMZs. Most work has concentrated either on the infrastructure of the Science DMZ for transferring huge amounts of data or on evaluating the performance of firewall types. A large number of researchers have studied the DMZ with regard to security only.
In [3], researchers studied the Science DMZ architecture, configuration, cybersecurity, and performance. They used supercomputing centres and research laboratories to highlight the effectiveness of the Science DMZ model, and concluded that it enhances collaboration and accelerates scientific discovery.
In [4], researchers studied network firewalls with regard to network performance using parallel firewalls. The results showed that network delay and average response time were degraded by the parallel firewall, and that firewall deployment has both advantages and disadvantages for network performance: the firewall improved link utilization and throughput, but the inspection process introduced delay. They concluded that parallel firewalls are cost effective from the network performance point of view.
In [5], researchers evaluated different types of firewall platforms and their effects on network performance. Their analysis considered delay, throughput, jitter, and packet loss, and they also tested the security of the firewalls by applying a set of attacks. The results showed that a network-based firewall performs better than a personal firewall on all metrics, and that using both types of firewall provides layered security.
In [6], researchers studied firewalls with regard to performance, efficiency, and security, examining the relation between a firewall's security and its performance. The results showed that extra processing increased response time and thus degraded system performance; however, filtering unauthorised traffic improved network performance. They concluded that deploying firewalls not only enhances network security but also helps meet service level agreements and quality of service in terms of availability and performance.
To sum up, none of the previous work directly evaluates the performance of a DMZ. Thus, this work aims to study the effect of a DMZ on network performance.
3. Firewall
A firewall is hardware, software, or a combination of both that applies security policies to control network access. The main role of a firewall is to protect a network from unauthorised access. In general, firewalls serve three security aims: confidentiality, integrity, and availability [4]. There are three main firewall types.
Packet filters: Also known as static packet filters. This type works by checking the packets exchanged between computers on a network [7], operating at both the network and transport layers of the OSI model [8]. By inspecting each packet, the packet filter verifies that it conforms to one or more rules set by the network administrator. These rules determine whether the packet is allowed to pass, based on information contained in the packet itself. This type of firewall lets administrators pass or block data streams using the following controls: physical network interfaces, destination and source IP addresses, and destination and source ports.
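The rule-matching logic described above can be sketched as a first-match filter over header fields (a simplified illustration; the addresses and rules are hypothetical, and real packet filters also match on interface and protocol):

```python
from ipaddress import ip_address, ip_network

# each rule: (action, source network, destination network, destination port or None for any)
RULES = [
    ("allow", "0.0.0.0/0", "192.0.2.10/32", 80),    # anyone may reach the web server on port 80
    ("deny",  "0.0.0.0/0", "192.0.2.0/24",  None),  # all other traffic to the subnet is blocked
]

def filter_packet(src_ip, dst_ip, dport):
    """Return the action of the first rule matching the packet header."""
    for action, src_net, dst_net, rule_port in RULES:
        if (ip_address(src_ip) in ip_network(src_net)
                and ip_address(dst_ip) in ip_network(dst_net)
                and (rule_port is None or rule_port == dport)):
            return action
    return "deny"  # implicit default deny when no rule matches
```

Because the first matching rule wins, reordering the rules can silently change the access policy, which is why specifying complex filter rule sets correctly is difficult.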
Stateful inspection: Also known as a dynamic packet filter. It works at layers 3, 4, and 5 of the OSI model [8] and improves on the packet filter by tracking the state of connections and blocking packets that deviate from the expected state [4]. In more detail, this technology not only processes the packet header but also inspects incoming and outgoing packets over a period of time, maintaining connection state information in the operating system kernel and parsing the IP packet [7]. Once this type of firewall has examined a TCP/IP header and permitted the connection, all replies belonging to that connection are automatically permitted. As a result, all ports remain closed; a port is opened only when a permitted connection to it has been requested, which defeats port scanning and other common hacking methods.
Application-proxy gateway: The aim of this second-generation firewall is to go beyond packet filtering at layers 3 and 4 of the OSI model and to assess network packets for valid data at layer 7 before allowing a connection to start [9]. Generally this type is a host running a proxy server that separates the networks, so no traffic passes between them directly. The application firewall uses NAT (network address translation) to hide the traffic that passes from one side to the other behind a different network address. Understanding certain protocols (such as FTP, DNS, and HTTP) is very important to the application-layer firewall, since it helps identify an undesirable protocol trying to bypass the firewall through an open port [7]. Each successful connection attempt actually results in the creation of two separate connections: one between the host and the proxy server, and another between the proxy server and the destination host [4].
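Because the proxy understands the protocol it relays, it can reject traffic that merely uses an open port without speaking the expected protocol. A minimal sketch of such a layer-7 check for HTTP (an illustration only; the allowed method list and the checks are simplified assumptions):

```python
ALLOWED_METHODS = {"GET", "HEAD", "POST"}

def http_request_is_valid(request_line):
    """Layer-7 sanity check: only well-formed HTTP request lines are relayed.

    A real proxy would then open a second connection toward the target host,
    so the client and server never communicate directly.
    """
    parts = request_line.strip().split()
    return (len(parts) == 3
            and parts[0] in ALLOWED_METHODS
            and parts[1].startswith("/")
            and parts[2].startswith("HTTP/"))
```

A packet that carries, say, an SSH banner over port 80 fails this check and is dropped, even though a plain packet filter with port 80 open would have let it through.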
4. OPNET
Simulation is a common way to evaluate the design and performance of computer networks. Building a simulation model is not a trivial task; it requires a deep understanding of simulation, modelling, system properties, and the underlying mathematics [10].
OPNET is a simulator for the activities and performance of computer and communication networks. Its main advantages over other simulators are its power and versatility [11]. OPNET offers a complete development environment to design and configure communication networks and distributed systems, and the performance of a designed system can be analysed using discrete event simulation. The simulator covers the OSI model from layer 7 down to the adjustment of the most crucial physical parameters [11].
OPNET Modeler is the most popular product for network simulation, used in both educational and industrial settings. Several universities use OPNET to teach communication and computer networks, and companies use it for modelling, study, analysis, and performance prediction of network systems. Nowadays, major companies need network professionals who can evaluate the performance of their networks in order to identify and fix problems. OPNET can serve these aims as well as help prevent problems from arising [10].
5. Network Design
As shown in Fig. 1., the DMZ network is neither inside nor outside the firewall; it is accessed from both the
inside and outside networks. Security rules prevent devices on the outside from connecting to inside devices. A
DMZ is more secure than the outside network, but less secure than the inside one [12]. The Internet (outside
network) is connected to the firewall on the outside interface. Users and servers that do not need to be
accessible from the Internet are connected to the inside interface. Servers that are accessible from the Internet
are located in the DMZ. A DMZ has two main goals: the first is to separate the public-access resources from
the rest of the network; the second is to reduce complexity [3].
Fig.1. DMZ network
5.1. Firewall
The firewall is configured as follows: the inside network can establish connections to the outside and DMZ
networks, but neither of them can establish connections to it. The outside network cannot establish connections
to the inside network, but it can establish connections to the DMZ. The DMZ cannot establish connections to
the inside network, but it can establish connections to the outside network [12].
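This policy can be written down as a small rule table on connection initiations. The sketch below is illustrative (the zone labels are ours, not OPNET objects):

```python
# Permitted (source zone, destination zone) connection initiations,
# following the firewall policy described in Section 5.1.
# Zone names "inside", "outside", and "dmz" are illustrative labels.
ALLOWED = {
    ("inside", "outside"),   # inside may open connections to the outside
    ("inside", "dmz"),       # ... and to the DMZ
    ("outside", "dmz"),      # outside may reach the DMZ only
    ("dmz", "outside"),      # the DMZ may reach the outside only
}

def may_connect(src_zone, dst_zone):
    """True if a NEW connection from src_zone to dst_zone is permitted."""
    return (src_zone, dst_zone) in ALLOWED

# No zone may initiate a connection into the inside network:
assert not may_connect("outside", "inside")
assert not may_connect("dmz", "inside")
assert may_connect("inside", "dmz")
```

Note that the table governs who may *initiate* a connection; replies to an established connection flow back in the reverse direction.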
5.2. DMZ
The DMZ is public access network. It contains servers which can be accessed from the outside and the inside
network. It can contains HTTP server, Mail server, DNS, etc. Its location reduces the network complexity and
increase the network security [3]. Local users get credible performance because the latency between DMZ and
them is low.
5.3. Inside network
The inside or protected network contains the organisation’s devices and private access servers such as
database and FTP servers. Isolation of inside network protects the organisation’s data from public access [13].
The users of the protected network can access the outside and the DMZ network [13].
6. Case design and discussions
In this work, the aim is to study the effect of a DMZ on network performance. Three topologies are produced
according to the DMZ network design, built with and without firewalls. These topologies are used to build
three scenarios, which are compared in order to study the effectiveness of the DMZ. The three scenarios are
proposed as follows:
6.1 No DMZ No Firewall scenario
As shown in Fig. 2., the network consists of two main segments:
Outside network: it contains the Internet Lan, Internet Switch, Internet Router, and Internet. The Internet Lan
consists of 500 users trying to access all the servers of the inside network.
Inside network: it consists of the Edge Router, Lan Switch, Employee Lan, FTP server, DB server, Email
server, and HTTP server. The Employee Lan consists of 50 users.
IP addresses are assigned to the connected router interfaces and servers. Network address translation is
implemented on both routers; this allows the inside network to connect to the Internet. The edge router is not
configured to filter any packets coming into or out of the inside network, so it passes all the requests of the
Internet Lan to all the servers.
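The NAT function mentioned above can be sketched as a toy translation table. The paper does not give the routers' actual configuration, so the addresses and port pool below are purely illustrative:

```python
import itertools

class SourceNat:
    """Toy source NAT (illustrative, not the routers' real configuration):
    outbound packets from a private (ip, port) pair are rewritten to the
    router's public IP and a fresh port, and the mapping is remembered so
    that replies can be translated back to the private host."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self._ports = itertools.count(20000)  # illustrative port pool
        self._out = {}    # (priv_ip, priv_port) -> public port
        self._back = {}   # public port -> (priv_ip, priv_port)

    def outbound(self, priv_ip, priv_port):
        key = (priv_ip, priv_port)
        if key not in self._out:
            port = next(self._ports)
            self._out[key] = port
            self._back[port] = key
        return (self.public_ip, self._out[key])

    def inbound(self, public_port):
        return self._back[public_port]

nat = SourceNat("203.0.113.1")                 # RFC 5737 documentation address
mapped = nat.outbound("192.168.1.10", 51000)   # ('203.0.113.1', 20000)
original = nat.inbound(mapped[1])              # ('192.168.1.10', 51000)
```

This is why the inside hosts can reach the Internet even though their private addresses are not routable there: only the router's public address ever appears on the outside.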
Fig.2. No DMZ No Firewall
6.2 No DMZ with Firewall scenario
As shown in Fig. 3., Internet users are not able to access the FTP and database servers. An access control list
is implemented on the edge router to allow outside users to access the HTTP server and Email server only; it is
configured to prevent Internet users from accessing the FTP and DB servers. This means that all database and
FTP requests from the outside network are blocked by the firewall, and all the requests that reach the FTP and
DB servers come from the Employee Lan. All in all, the edge router acts as a firewall.
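The ACL logic described above can be sketched as a single predicate. The port numbers are the standard well-known ones (HTTP 80, SMTP 25, FTP 21); the database port is an assumption, since the paper does not list the exact ACL entries:

```python
# ACL sketch for the "No DMZ with Firewall" scenario. DB_PORT is
# illustrative -- the paper does not state which database port is used.
DB_PORT = 1433

def edge_router_permits(src_is_outside, dst_port):
    """Outside users may reach the HTTP (80) and e-mail (25) servers only;
    traffic originating inside the network is not restricted."""
    if not src_is_outside:
        return True
    return dst_port in (80, 25)

assert edge_router_permits(True, 80)        # web request from the Internet
assert not edge_router_permits(True, 21)    # FTP from the Internet: blocked
assert not edge_router_permits(True, DB_PORT)
assert edge_router_permits(False, 21)       # FTP from the Employee Lan
```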
Fig.3. No DMZ with Firewall
6.3 DMZ scenario
As shown in Fig. 4., the public-access servers are separated from the other devices. The edge router/firewall is
configured to block any request to connect to the inside network, but to pass any outside reply destined for the
inside network. The firewall is configured to allow any request to access the DMZ network, and furthermore to
allow the inside network to access the DMZ. All the configurations are applied using access control lists that
mainly depend on the IP addresses and port numbers of the machines.
Fig.4. DMZ
Tables 1 and 2 show a summary of the network topologies and application configurations in OPNET.
Table 1. Summary of the network topology design in OPNET

Object Name                                               Object Model
DB Server, HTTP Server, FTP Server, and Email Server      ethernet_server node object
External Lan and Employee Lan                             10BaseT_LAN node object
Internet Switch, Lan Switch, and DMZ Switch               ethernet16_switch
Internet Router, Edge Router, and Edge Router/Firewall    ethernet4_slip8_gtwy
Internet                                                  ip32_cloud node object
Servers <-> Switches                                      10BaseT
10BaseT_LAN <-> Switches                                  10BaseT
Switches <-> Routers                                      10BaseT
Routers <-> Internet                                      PPP_DS3
Table 2. Application configuration settings

Application name    Application model attribute    Application model attribute value
Web browsing        HTTP                           Heavy browsing
File transfer       FTP                            Medium load
Database            Database                       High load
E-mail              SMTP                           High load
7. Simulation results and analysis
Relevant simulation statistics were chosen to assess the performance of the DMZ. The results are compared
and presented in the following figures.
Fig.5. TCP delay
Fig. 5. represents the average TCP delay. The TCP delay of the "No DMZ No Firewall" scenario is the largest
because the network traffic is not filtered, which results in high traffic inside the network. The high traffic
increases the probability of congestion and packet loss, which are the main causes of retransmissions and TCP
delay.
Fig.6. Queuing delay.
Fig. 6. represents the queuing delay from the Internet to the edge router. The queuing delay of the "No DMZ
with Firewall" scenario is the largest because the traffic coming to the inside network must be filtered by the
edge router/firewall before the authorized traffic is passed to the local network. On the other hand, the queuing
delay of the DMZ scenario is the lowest because the authorized traffic is passed to the inside network and the
DMZ through two different interfaces. So, it is clear that the DMZ reduces the queuing delay because it divides
the LAN into two segments, which reduces the load on the network machines. Furthermore, the filtering
prevents unauthorised traffic from reaching the Lan switch, which further reduces the queuing delay.
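The intuition that splitting traffic over two interfaces lowers the queuing delay can be checked with a simple M/M/1 approximation. The rates below are invented for illustration; they are not simulation outputs:

```python
# Illustrative M/M/1 queuing model: splitting the filtered traffic over
# two interfaces halves the per-interface arrival rate, and with it the
# mean delay. Rates (packets/s) are assumptions, not the paper's data.

def mm1_mean_delay(arrival_rate, service_rate):
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

one_interface = mm1_mean_delay(800, 1000)   # all traffic on one link: 5 ms
two_interfaces = mm1_mean_delay(400, 1000)  # half the load per link: ~1.7 ms
print(one_interface, two_interfaces)
```

Because W grows sharply as the arrival rate approaches the service rate, even a moderate split of the load yields a disproportionately large reduction in delay, which is consistent with the trend observed in Fig. 6.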
Fig.7. Link utilisation (edge router/firewall to LAN switch).
Fig. 7. shows the outgoing link utilisation from the edge router to the Lan switch. It is clear that the DMZ
scenario has the lowest link utilisation. The utilisation in "No DMZ with Firewall" is greater than in "DMZ"
because the HTTP and email servers are not separated from the inside network, so the traffic requested from
the email and HTTP servers passes through the edge router to the Lan switch. In "DMZ", however, that traffic
passes from the edge router to the DMZ switch. Thus it can be concluded that the DMZ optimises the overall
utilisation of the network.
Fig.8. HTTP page response time.
Fig. 8. represents the HTTP page response time, measured in seconds. The "No DMZ with Firewall" and
"DMZ" scenarios have the fastest page response because the filtering allows a smaller amount of traffic into
the local network: only HTTP and email traffic is allowed inside. A small amount of traffic is processed faster
than a large one, so allowing only the authorised packets to pass decreases the page response time.
Fig. 9. CPU Utilisation of HTTP Server
Fig.10. Performance of HTTP Server
Figs. 9 and 10 show the CPU utilisation and the performance of the HTTP server, respectively. It is clear that
the CPU utilisation and performance in the three scenarios are almost the same, because the two Lans are able
to access the HTTP server in all scenarios. So, switching to the DMZ network design does not degrade the
HTTP server's performance.
Fig.11. DB query response time (sec).
Fig. 11. shows the database query response time, measured in seconds, under the three scenarios. It is clear
that the "No DMZ with Firewall" and "DMZ" scenarios have the fastest DB query response. The edge
router/firewall does not allow Internet users to access the DB server, so the DB server receives requests only
from local users; in both scenarios the packets pass through the Lan switch to the server. The implemented
security prevents a high load on the inside network.
Fig.12. CPU Utilisation of FTP Server
Fig. 12. shows the CPU utilisation of the FTP server. It is clear that the "DMZ" and "No DMZ with Firewall"
scenarios have the lowest CPU utilisation. The explanation of this result is that the edge router/firewall blocks
every FTP request from the Internet, so the FTP server receives requests only from local users.
Fig.13. CPU Utilisation of edge router/firewall
Fig. 13. shows the CPU utilisation of the edge router/firewall. It is clear that the DMZ scenario has the largest
CPU utilisation. The explanation of this result is that the edge router/firewall filters all packets coming from
the Internet and also decides whether to send the filtered packets to the inside network or to the DMZ. These
processes make its CPU utilisation the largest.
8. Conclusions
This work has discussed a case study of evaluating the performance of a DMZ. OPNET simulation has been
used to build three indicative scenarios, and the results have been compared and discussed. The performance
evaluation considered TCP delay, queuing delay, link utilisation, HTTP page response time, CPU utilisation,
server performance, and DB query response time. The results have shown that the DMZ and No DMZ with
Firewall scenarios have the best TCP delay, DB query response time, HTTP page response time, and FTP
server CPU utilisation. Moreover, the DMZ scenario's queuing delay, link utilisation, and server performance
are much better than those of No DMZ with Firewall. The results have shown that a DMZ solves many critical
performance problems. To sum up, a DMZ not only improves network security; it also improves network
performance.
References
[1] C. Barnes et al., Hackproofing Your Wireless Network, USA: Syngress, 2002.
[2] S. Young, "Designing a DMZ," SANS Institute InfoSec Reading Room.
[3] E. Dart, L. Rotman, B. Tierney, M. Hester and J. Zurawski, "The Science DMZ: A Network Design
Pattern for Data-Intensive Science".
[4] S. Nassar et al., Improve the Network Performance by Using Parallel Firewalls.
[5] T. Hayajneh et al., "Performance and Information Security Evaluation with Firewalls," International
Journal of Security and Its Applications, 2013.
[6] O. G. H. Garantla, "Evaluation of Firewall Effects on Network Performance".
[7] J. R. Vacca and S. Ellis, Firewalls: Jumpstart for Network and Systems Administrators, MA, USA:
Elsevier Digital Press, 2005.
[8] A. Sequeira, CCNA Security 640-554 Quick Reference, Cisco Press, 2012.
[9] E. Romanofski, "A Comparison of Packet Filtering vs. Application Level Firewall," Global Information
Assurance Certification Paper.
[10] A. S. Sethi and V. Y. Hnatyshin, The Practical OPNET User Guide for Computer Network Simulation,
2012.
[11] "OPNET Simulator," [Online]. Available:
http://users.salleurl.edu/~zaballos/opnet_interna/pdf/OPNET%20Simulator.pdf. [Accessed 24 06 2017].
[12] G. A. Donahue, Network Warrior, Sebastopol, CA: O'Reilly Media, Inc., 2007.
[13] M. Bishop, Computer Security: Art and Science, Addison Wesley, 2002.
[14] W. Stallings, Cryptography and Network Security: Principles and Practice, 5th ed., Pearson Education,
2011.
[15] J. Webb, Network Demilitarized Zone (DMZ).
[16] J. R. Vacca and S. Ellis, Firewalls: Jumpstart for Network and Systems Administrators.
[17] E. Aboelela, Network Simulation Experiments Manual.
[18] R. J. Shimonski, W. Schmied, T. W. Shinder, V. Chang, D. Simonis and D. Imperatore, Building DMZs
for Enterprise Networks, Syngress Publishing, 2003.
[19] M. A. T. Rojas et al., "Science DMZ: Support for e-science in Brazil," 2016.
[20] K. Salah, K. Elbadawi and R. Boutaba, "Performance Modeling and Analysis of Network Firewalls,"
IEEE, 2012.
Authors’ Profiles
Baha Rababah is a lecturer at the Faculty of Computer and Information Systems, Islamic
University, KSA. He obtained a BSc in Computer Engineering from Al-Balqa Applied
University, Jordan, in 2010, and an MSc in Computer Network Administration and
Management from the University of Portsmouth, UK, in 2015. His research interests are
network performance, network security, and cloud computing.
Shikun Zhou, PhD, is a senior lecturer in Internet applications and formal computing at the
University of Portsmouth, UK. He is also the Intranet and Web Forum Administrator and a
member of the Communications and Networks Engineering Research Group.
Mansour Bader holds an MSc in Computer Engineering and Networks from the University
of Jordan, Jordan (2016), and a BSc in Computer Engineering from Al-Balqa Applied
University, Jordan (2008). He has been a technical support engineer for computer networks
at the computer centre of Al-Balqa Applied University for 8 years. He has three papers in
the cryptography field and has participated in two conferences: SPM 2016 in Sydney,
Australia, and CRIS 2017 in Geneva, Switzerland.
How to cite this paper: Baha Rababah, Shikun Zhou, Mansour Bader, "Evaluation the Performance of DMZ",
International Journal of Wireless and Microwave Technologies (IJWMT), Vol.8, No.1, pp. 1-13, 2018. DOI:
10.5815/ijwmt.2018.01.01