The document discusses network vulnerability assessment and finding critical links and nodes. It proposes using a belief propagation algorithm to calculate the vulnerability of each node and the overall network vulnerability over time. It provides an example network and shows the results of analyzing it to find the critical nodes and links using the proposed algorithm. The algorithm works by having each node calculate the vulnerability of its neighbors and share this information over time to determine the overall network vulnerability.
IRJET- Review on Network Intrusion Detection using Recurrent Neural Network A...
This document presents a review of using recurrent neural networks for network intrusion detection. It begins with an introduction to intrusion detection systems and the types of attacks they aim to detect. It then discusses previous research on machine learning approaches for intrusion detection, including the use of autoencoders, support vector machines, and other classifiers. The proposed approach uses a recurrent neural network for feature selection and classification of network data. The framework involves data collection, preprocessing including feature selection, training the recurrent neural network classifier, and then using the trained model to detect attacks in new data. Experimental results on benchmark intrusion detection datasets are presented and compared to other machine learning methods.
Enhanced Intrusion Detection System using Feature Selection Method and Ensemb...
The main goal of Intrusion Detection Systems (IDSs) is to detect
intrusions. This kind of detection system is a significant tool for
ensuring cyber security in traditional computer-based systems. An IDS
model can run faster and reach higher detection rates by selecting the
most relevant features from the input dataset. Feature selection is an
important stage of any IDS: choosing the optimal subset of features
makes training faster and reduces complexity while preserving or
enhancing the performance of the system. In this paper, we propose a
method based on dividing the input dataset into different subsets
according to each attack. We then perform feature selection on each
subset using an information gain filter, and generate the optimal
feature set by combining the feature lists obtained for each attack.
Experimental results on the NSL-KDD dataset show that the proposed
feature selection method improves system accuracy with fewer features
while decreasing complexity. Moreover, a comparative study evaluates
the efficiency of the feature selection technique under different
classification methods. To enhance overall performance, a further
stage combines Random Forest and PART in a voting learning algorithm.
The results indicate that the best accuracy is achieved when using the
product probability rule.
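The per-attack selection step can be sketched as follows (a toy illustration; the tiny dataset, attack names, and top-k value are invented, and a real run would use NSL-KDD features):

```python
# Rank features by information gain separately on each attack-vs-normal
# subset, then union the top-ranked lists into one feature set.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(column, labels):
    # IG(Y; X) = H(Y) - sum_v P(X = v) * H(Y | X = v)
    n = len(labels)
    cond = 0.0
    for v in set(column):
        sub = [y for x, y in zip(column, labels) if x == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond

def top_features(rows, labels, k):
    gains = [(info_gain([r[j] for r in rows], labels), j)
             for j in range(len(rows[0]))]
    gains.sort(reverse=True)
    return {j for _, j in gains[:k]}

def per_attack_union(rows, labels, attacks, k=1):
    # one binary subset (attack vs. normal) per attack class, then the union
    selected = set()
    for atk in attacks:
        pairs = [(r, y) for r, y in zip(rows, labels) if y in ("normal", atk)]
        selected |= top_features([r for r, _ in pairs], [y for _, y in pairs], k)
    return sorted(selected)

rows = [(0, 0, 0), (0, 0, 1),      # normal traffic
        (1, 0, 0), (1, 0, 1),      # "dos" records differ on feature 0
        (0, 1, 0), (0, 1, 1)]      # "probe" records differ on feature 1
labels = ["normal", "normal", "dos", "dos", "probe", "probe"]
chosen = per_attack_union(rows, labels, ["dos", "probe"])
```

Each attack contributes the feature that best separates it from normal traffic, so the union covers all attacks with few features.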
Secure intrusion detection and attack measure selection
This document proposes NICE, a framework for secure intrusion detection and attack mitigation in virtual network systems. NICE uses distributed agents on cloud servers to monitor traffic, detect vulnerabilities, and generate attack graphs. It profiles virtual machines to identify their state and vulnerabilities. When potential attacks are detected, NICE can quarantine suspicious VMs and inspect their traffic. The attack analyzer correlates alerts, constructs attack graphs, and selects appropriate countermeasures based on the graphs. Evaluations show NICE can effectively detect attacks while minimizing performance overhead for the cloud system.
11.a genetic algorithm based elucidation for improving intrusion detection th... (Alexander Decker)
This document summarizes a research paper that proposes using a genetic algorithm to improve intrusion detection. The paper aims to reduce features from the KDD Cup 99 dataset and generate a rule set using genetic algorithms to detect intrusions. The genetic algorithm evolves rules over generations to maximize fitness. Experiments show this approach can improve detection rates and reduce false alarms compared to existing intrusion detection systems.
1.[1 9]a genetic algorithm based elucidation for improving intrusion detectio... (Alexander Decker)
This document summarizes a research paper that proposes using a genetic algorithm to improve intrusion detection. The paper aims to reduce features from the KDD Cup 99 dataset and generate a rule set using genetic algorithms to detect intrusions with a condensed feature set. The genetic algorithm is used to evolve rules from the reduced training data, with a fitness function evaluating rule quality. Experiments and evaluations are conducted on the KDD Cup 99 dataset to test the proposed method.
A NOVEL INTRUSION DETECTION MODEL FOR MOBILE AD-HOC NETWORKS USING CP-KNN (IJCNCJournal)
Mobile ad-hoc network security problems are the subject of in-depth analysis. A group of mobile nodes is connected to a fixed wired backbone. In a MANET, the nodes themselves implement network management in a cooperative fashion. All nodes are responsible for building a topology that changes dynamically, and there are no clear network boundaries. We propose a novel intrusion detection model for mobile ad-hoc networks using the CP-KNN (Conformal Prediction K-Nearest Neighbor) algorithm to classify audit data for anomaly detection. The non-conformity score value is used to reduce classification time across multiple iterations. The method effectively detects anomalies with a high true positive rate, a low false positive rate, and higher confidence than state-of-the-art anomaly detection methods. Even when interfered
with by "noisy" (unclean) data, the proposed technique is robust and effective, retaining
its good detection performance and avoiding abnormal activity.
Review of Intrusion and Anomaly Detection Techniques (IJMER)
Intrusion detection is the act of detecting actions that attempt to compromise the
confidentiality, integrity or availability of a resource. With the tremendous growth of network-based
services and sensitive information on networks, network security is more important than
ever. Intrusion poses a serious security threat in large network environments, and the increasing use
of the internet has dramatically added to the growing number of threats within it. Intrusion
detection does not, in general, include prevention of intrusions. Nowadays, network intrusion detection
systems have become a standard component of security infrastructure. This review paper
discusses various techniques that are already being used for intrusion detection.
Implementation of Secured Network Based Intrusion Detection System Using SVM ... (IRJET Journal)
This document discusses the implementation of a secured network-based intrusion detection system using the support vector machine (SVM) algorithm. It begins with an abstract outlining how different intrusion detection implementations and proposals can be hardened. The paper then discusses using naive Bayes, a classification method, for intrusion detection to analyze transmitted data for malicious content and block transmissions from corrupted hosts. It also discusses using flow correlation information to improve classification accuracy while minimizing effects on network performance.
CLASSIFICATION PROCEDURES FOR INTRUSION DETECTION BASED ON KDD CUP 99 DATA SET (IJNSA Journal)
In the network security framework, intrusion detection is a benchmark component and a fundamental way to protect computers from many threats. A major issue in intrusion detection is the large number of false alerts; this issue has motivated several researchers to find ways of reducing false alerts using data mining, an analysis procedure applied to large datasets such as KDD CUP 99. This paper reviews various data mining classification procedures for handling false alerts in intrusion detection. Testing many data mining procedures on KDD CUP 99 shows that no single procedure can reveal all attack classes with high accuracy and without false alerts. The best accuracy, 92%, is achieved by the Multilayer Perceptron, while the best training time, 4 seconds, is achieved by a rule-based model. It is concluded that various procedures should be combined to handle the variety of network attacks.
An efficient intrusion detection using relevance vector machine (IAEME Publication)
The document summarizes an efficient intrusion detection system using Relevance Vector Machine (RVM). It begins with an introduction to intrusion detection and types of attacks. Then it discusses related work using data mining techniques like SVM for intrusion detection. The proposed methodology preprocesses data from the KDD Cup 99 dataset, performs normalization, and classifies using RVM. RVM can provide sparse solutions and inferences with low computation. Experimental results on the KDD Cup 99 dataset show the technique achieves higher detection rates than regular SVM algorithms.
The document discusses using machine learning algorithms like Random Forest and k-Nearest Neighbors for intrusion detection. It analyzes the KDD Cup 1999 intrusion detection dataset to classify network traffic as normal or different types of attacks. The proposed model uses Random Forest for feature selection and k-Nearest Neighbors for classification to more accurately detect known and unknown attacks. Experimental results show the combined approach achieves better detection rates than other algorithms alone, especially for novel attacks not present in training data. Further combining the algorithms into a two-stage process may yield even higher accuracy.
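A minimal two-stage sketch of that pipeline, assuming scikit-learn is available (the synthetic data stands in for KDD Cup 1999 records):

```python
# Stage 1: Random Forest ranks features by impurity-based importance.
# Stage 2: k-Nearest Neighbors classifies on the reduced feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
informative = rng.normal(size=(400, 2))       # two genuinely informative features
noise = rng.normal(size=(400, 8))             # eight pure-noise features
X = np.hstack([informative, noise])
y = (informative[:, 0] + informative[:, 1] > 0).astype(int)   # 1 = "attack"

# stage 1: feature selection via Random Forest importances
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top2 = np.argsort(rf.feature_importances_)[::-1][:2]

# stage 2: k-NN on the selected features only
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:, top2], y)
accuracy = knn.score(X[:, top2], y)
```

On this toy data the forest should rank the two informative columns first, and the k-NN stage then works in a much smaller feature space.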
AN IMPLEMENTATION OF INTRUSION DETECTION SYSTEM USING GENETIC ALGORITHM (IJNSA Journal)
Nowadays it is very important to maintain a high level of security to ensure safe and trusted communication of information between various organizations. But secure data communication over the internet and any other network is always under threat of intrusions and misuse, so Intrusion Detection Systems have
become a necessary component of computer and network security. Various approaches are used for intrusion detection, but unfortunately none of the systems so far is completely flawless, so the quest for betterment continues. In this progression, we present an Intrusion
Detection System (IDS) that applies a genetic algorithm (GA) to efficiently detect various types of network intrusions. Parameters and evolution processes for the GA are discussed in detail and implemented. The approach uses evolution theory to filter the traffic data and thus reduce the complexity. To implement and measure the performance of our system we used the KDD99
benchmark dataset and obtained a reasonable detection rate.
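The rule-evolution loop can be sketched as a toy GA (the records, fitness weights, and GA parameters below are invented; a real system would evolve rules over KDD99 features):

```python
# Evolve detection rules: each gene requires a feature value, or is a
# None wildcard. Fitness rewards matched attacks, penalizes false alarms.
import random

random.seed(42)

ATTACKS = [(1, 1, 0, 1), (1, 1, 1, 1), (1, 1, 0, 0)]   # labelled attack records
NORMALS = [(0, 0, 0, 1), (0, 1, 0, 0), (1, 0, 1, 0)]   # labelled normal records

def matches(rule, record):
    return all(g is None or g == f for g, f in zip(rule, record))

def fitness(rule):
    tp = sum(matches(rule, r) for r in ATTACKS)
    fp = sum(matches(rule, r) for r in NORMALS)
    return tp - 2 * fp          # false alarms cost double

def mutate(rule):
    genes = list(rule)
    genes[random.randrange(len(genes))] = random.choice([0, 1, None])
    return tuple(genes)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=60):
    pop = [tuple(random.choice([0, 1, None]) for _ in range(4))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)           # elitist selection
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best_rule = evolve()
```

The surviving rule generalizes over the attack records while rejecting the normal ones; it typically converges to a wildcard-bearing pattern rather than memorizing a single record.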
False positive reduction by combining svm and knn algo (eSAT Journals)
Abstract
With the growth of information technology, many intrusion detection problems have emerged, such as those in cyber security. An intrusion detection system provides the basic infrastructure to detect a number of attacks. This research work focuses on the intrusion detection problem of network security, with the main goal of classifying network behaviour as normal or abnormal. In this work, two different machine learning algorithms have been combined to offset each other's weaknesses and exploit the strengths of both. Experimental results are better than those of other algorithms in terms of performance, accuracy and false positive rate. The combined algorithm has been applied to the KDDCUP99 dataset, improving performance and accuracy while reducing the false positive rate.
Keywords: Intrusion detection system, KDDCUP99 dataset, False positive rate.
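The combination idea can be illustrated with a toy fusion rule: flag an intrusion only when both base classifiers agree, trading a little recall for fewer false alarms (the prediction vectors below stand in for real SVM and KNN outputs):

```python
# Fuse two classifiers' binary predictions (1 = attack) by conjunction,
# then measure the false positive rate against ground truth.
def combine(svm_pred, knn_pred):
    return [int(s == 1 and k == 1) for s, k in zip(svm_pred, knn_pred)]

def false_positive_rate(pred, truth):
    negatives = [i for i, t in enumerate(truth) if t == 0]
    return sum(pred[i] for i in negatives) / len(negatives)

truth    = [0, 0, 0, 0, 1, 1, 1, 1]
svm_pred = [1, 0, 0, 0, 1, 1, 1, 0]   # one false alarm, one miss
knn_pred = [0, 1, 0, 0, 1, 1, 0, 1]   # a different false alarm and miss

fused = combine(svm_pred, knn_pred)
```

Because the two classifiers rarely raise the same false alarm, the fused output keeps the shared true detections while its false positive rate drops to zero on this example.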
Evasion Streamline Intruders Using Graph Based Attacker model Analysis and Co... (Editor IJCATR)
Network Intrusion detection and Countermeasure Election in virtual network systems (NICE) are used to establish a
defense-in-depth intrusion detection framework. For better attack detection, NICE incorporates attack graph analytical procedures into
the intrusion detection processes. We must note that the design of NICE does not intend to improve any of the existing intrusion
detection algorithms; indeed, NICE employs a reconfigurable virtual networking approach to detect and counter the attempts to
compromise VMs, thus preventing zombie VMs. NICE includes two main phases: deploy a lightweight mirroring-based network
intrusion detection agent (NICE-A) on each cloud server to capture and analyze cloud traffic. A NICE-A periodically scans the virtual
system vulnerabilities within a cloud server to establish Scenario Attack Graphs (SAGs); then, based on the severity of identified
vulnerability toward the collaborative attack goals, NICE will decide whether or not to put a VM in network inspection state. Once a
VM enters inspection state, Deep Packet Inspection (DPI) is applied, and/or virtual network reconfigurations can be deployed to the
inspecting VM to make the potential attack behaviors prominent.
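The inspection decision can be sketched roughly as follows (the graph, severity scores, and threshold are invented, not the paper's exact model):

```python
# Scenario attack graph (SAG) as an adjacency map; each edge carries a
# CVSS-like severity in [0, 10]. A VM is moved into inspection state when
# some attack path from it contains a sufficiently severe vulnerability.
SAG = {                       # node -> [(next_node, edge_severity)]
    "vm1": [("vm2", 4.0)],
    "vm2": [("goal", 7.5)],
    "vm3": [("goal", 2.0)],
    "goal": [],
}

def worst_severity(node):
    # highest single-edge severity reachable from this node (SAG is a DAG)
    return max((max(sev, worst_severity(nxt)) for nxt, sev in SAG[node]),
               default=0.0)

def needs_inspection(vm, threshold=7.0):
    return worst_severity(vm) >= threshold
```

Here vm1 is inspected because its path to the goal passes through a 7.5-severity vulnerability on vm2, while vm3's only path stays below the threshold.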
Evaluation of network intrusion detection using Markov chain (IJCI JOURNAL)
Internet threats have increased significantly in day-to-day life, so there is a need to develop models
that maintain system security. Among the most effective techniques are Intrusion Detection Systems (IDS),
whose purpose is to detect intrusions through security devices and deal with them. In this paper, a
mathematical approach is used to predict and detect intrusion in the network. We discuss a
two-algorithm method, 'K-Means + Apriori', which classifies normal and abnormal activities in a
computer network. The K-Means step partitions the training set into K clusters using Euclidean
distance and introduces an outlier factor; the Apriori algorithm then prunes the data by removing
infrequent items from the database. Based on the defined states, the degree of incoming data is evaluated
experimentally using the sample DARPA2000 dataset, achieving high detection performance for staged
attacks.
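The K-Means stage with its outlier factor can be sketched like this (points and threshold are illustrative; the Apriori pruning stage is omitted):

```python
# Partition records with K-Means, then flag as abnormal any record whose
# distance to its nearest centroid (the outlier factor) exceeds a threshold.
import math
import random

random.seed(0)

def kmeans(points, k, iters=20):
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

points = [(0, 0), (0.5, 0), (0, 0.5),          # normal cluster A
          (10, 10), (10.5, 10), (10, 10.5),    # normal cluster B
          (5, 5)]                              # stray record between clusters
centroids = kmeans(points, k=2)

def outlier_factor(p):
    # Euclidean distance to the nearest cluster centroid
    return min(math.dist(p, c) for c in centroids)

abnormal = [p for p in points if outlier_factor(p) > 3.0]
```

Records deep inside a cluster get a small outlier factor; the stray point between the two clusters stands out regardless of which cluster absorbs it.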
IDS IN TELECOMMUNICATION NETWORK USING PCA (IJCNCJournal)
This document summarizes a research paper that proposes using principal component analysis (PCA) as a dimension reduction technique for intrusion detection systems (IDS). The paper applies PCA to reduce the number of features from 41 to either 6 or 10 features for the NSL-KDD dataset. One reduced feature set is used to develop a network IDS with high detection success and rate, while the other is used for a host IDS also with good detection success and very high detection rate. The paper outlines the process of applying PCA for IDS, including performing PCA on training data to identify principal components, then using those components to map new online data and detect intrusions based on deviation thresholds.
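The PCA-based detection described above can be sketched with NumPy (the synthetic 41-feature data and the deviation threshold are invented stand-ins):

```python
# Learn the principal subspace of training traffic, then flag records whose
# reconstruction error (deviation from that subspace) exceeds a threshold.
import numpy as np

rng = np.random.default_rng(1)

# synthetic "training" records: 41 features whose variance lies in 2 directions
latent = rng.normal(size=(500, 2))
mix = rng.normal(size=(2, 41))
X = latent @ mix + 0.01 * rng.normal(size=(500, 41))

mu = X.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(X - mu, rowvar=False))   # ascending order
components = vecs[:, ::-1][:, :2]                           # top-2 principal axes

def reconstruction_error(x):
    # project onto the principal subspace; the leftover norm is the deviation
    centered = x - mu
    projected = components @ (components.T @ centered)
    return float(np.linalg.norm(centered - projected))

threshold = 1.5 * max(reconstruction_error(x) for x in X)
normal_record = latent[0] @ mix                 # fits the learned subspace
attack_record = 5.0 * rng.normal(size=41)       # does not fit it
```

New records that resemble training traffic reconstruct almost perfectly; an off-subspace record leaves a large residual and is flagged.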
IRJET- Windows Log Investigator System for Faster Root Cause Detection of a D... (IRJET Journal)
This document describes a Windows Log Investigator System that was created to help developers more easily detect the root cause of defects. The system uses a log analysis algorithm and backtracking to determine the type of defect and possible solutions. It has a graphical user interface built with C# and WPF to provide an interactive experience for analyzing logs. The system aims to significantly reduce the difficulties faced by developers in solving defects.
IRJET- 3 Juncture based Issuer Driven Pull Out System using Distributed Servers (IRJET Journal)
This document discusses network security visualization and proposes a classification system for network security visualization systems. It begins by introducing the importance of visualizing network security data due to the large quantities of data produced. It then reviews existing network security visualization systems and outlines key aspects they monitor like host/server monitoring, port activity, and intrusion detection. The document proposes a taxonomy to classify network security visualization systems based on their data sources and techniques. It concludes by stating papers were selected for review based on their relevance to network security, novelty of techniques, and inclusion of evaluations.
Proactive Population-Risk Based Defense Against Denial of Cyber-Physical Serv... (IRJET Journal)
This document discusses proactive population-risk based defense against denial of cyber-physical service attacks. It proposes using test packets to test network state and rules across switches to detect faults. The goals are to augment human debugging, reduce downtime, and save money. Related work discussed network tomography using end-to-end measurements to identify lossy links. Striped unicast probes were also explored to infer link-level loss rates. The algorithm aims to generate test packets that exercise every rule on each switch to detect faults with a minimum number of packets.
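Generating a minimum set of test packets that exercises every rule is essentially a set-cover problem; a greedy sketch (packet and rule names are hypothetical):

```python
# Greedy set cover: repeatedly pick the candidate test packet that exercises
# the most still-uncovered forwarding rules, until every rule is covered.
def min_test_packets(candidates, all_rules):
    covered, chosen = set(), []
    while covered != all_rules:
        pkt, rules = max(candidates.items(),
                         key=lambda kv: len(kv[1] - covered))
        if not rules - covered:
            raise ValueError("some rules are not exercised by any candidate")
        chosen.append(pkt)
        covered |= rules
    return chosen

candidates = {
    "p1": {"r1", "r2"},
    "p2": {"r2", "r3", "r4"},
    "p3": {"r4"},
}
packets = min_test_packets(candidates, {"r1", "r2", "r3", "r4"})
```

Greedy set cover is not always optimal in general, but it gives the standard logarithmic approximation and keeps the probe count small in practice.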
Hyperparameters optimization XGBoost for network intrusion detection using CS... (IAESIJAI)
With the introduction of high-speed internet access, the demand for secure and dependable networks has grown. In recent years, network attacks have become more complex and intense, making security a vital component of organizational information systems. Network intrusion detection systems (NIDS) have become an essential detection technology to protect data integrity and system availability against such attacks. NIDS is one of the best-known applications of machine learning in the security field, with machine learning algorithms constantly being developed to improve performance. This research focuses on detecting network infiltration anomalies using the hyperparameter-optimized XGBoost (HO-XGB) algorithm with the Communications Security Establishment and Canadian Institute for Cybersecurity Intrusion Detection System 2018 (CSE-CIC-IDS2018) dataset to obtain the best possible results. When compared to typical machine learning methods published in the literature, HO-XGB outperforms them; the study shows that XGBoost outperforms other detection algorithms. We tuned the HO-XGB model's hyperparameters, which included learning_rate, subsample, max_leaves, max_depth, gamma, colsample_bytree, min_child_weight, n_estimators, and reg_alpha. The experimental findings reveal that HO-XGB1 outperforms multiple parameter settings for intrusion detection, effectively optimizing XGBoost's hyperparameters.
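The hyperparameter search can be sketched as a random search over a small grid (the search space and stand-in objective below are invented; a real run would score an actual XGBoost model):

```python
# Random search over a hyperparameter space: sample configurations, score
# each, keep the best. The evaluate() function is a deterministic stand-in.
import random

random.seed(7)

space = {
    "learning_rate": [0.05, 0.1, 0.3],
    "max_depth": [3, 6, 9],
    "subsample": [0.7, 1.0],
}

def evaluate(params):
    # stand-in for cross-validated detection accuracy; a real run would train
    # xgboost.XGBClassifier(**params) here and return its validation score
    target = {"learning_rate": 0.1, "max_depth": 6, "subsample": 1.0}
    return sum(params[k] == v for k, v in target.items()) / len(target)

def random_search(trials=100):
    best_params, best_score = None, -1.0
    for _ in range(trials):
        candidate = {k: random.choice(vals) for k, vals in space.items()}
        score = evaluate(candidate)
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

best_params, best_score = random_search()
```

Random search scales to larger spaces than an exhaustive grid and is the usual baseline before Bayesian or evolutionary tuners.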
Survey of Clustering Based Detection using IDS Technique (IRJET Journal)
This document discusses intrusion detection systems (IDS) and different techniques used for IDS, including clustering-based detection. It first provides background on IDS, describing their purpose of detecting intruders and protecting systems. It then outlines various IDS types, including mobile agent-based, cluster-based, cryptography-based, and others. The document also summarizes related work from other papers applying data mining techniques like clustering to improve IDS detection rates and reduce false alarms. Finally, it discusses problems with current and traditional IDS, such as threshold detection leading to false positives, and false negatives where attacks are missed.
Analysis of IT Monitoring Using Open Source Software Techniques: A Review (IJERD Editor)
Network administrators usually rely on generic and built-in monitoring tools for network
security. Ideally, the network infrastructure is supposed to have carefully designed strategies to scale up
monitoring tools and techniques as the network grows, over time. Without this, there can be network
performance challenges, downtimes due to failures, and most importantly, penetration attacks. These can lead to
monetary losses as well as loss of reputation. Thus, there is a need for best practices to monitor network
infrastructure in an agile manner. Network security monitoring involves collecting network packet data,
segregating it among all the 7 OSI layers, and applying intelligent algorithms to get answers to security-related
questions. The purpose is to know in real-time what is happening on the network at a detailed level, and
strengthen security by hardening the processes, devices, appliances, software policies, etc. The Multi Router
Traffic Grapher, or just simply MRTG, is free software for monitoring and measuring the traffic load
on network links. It allows the user to see traffic load on a network over time in graphical form.
Response time optimization for vulnerability management system by combining ... (IJECEIAES)
The growth of information and communication technology has given the internet network many users. On the other side, this increases cybercrime and its risks. One of the main attack targets is network weakness. Therefore, cyber security is required, which first performs a network scan to stop the attack. Points of vulnerability on the network can be discovered using scanning techniques, and mitigation or recovery measures can then be implemented. However, scanning needs a short response time and high accuracy to reduce the level of damage caused by cyber-attacks. In this paper, the proposed method improves the performance of a vulnerability management system based on network and port scanning by combining benchmarking and scenario-planning models. On a network scan to discover open ports on a subnet, Masscan can achieve response times of less than 2 seconds, and in scenario-planning detection on a single host, Nmap can reach less than 4 seconds. Combining both models yields an adequately optimized total response time of less than 6 seconds.
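The port-scanning step can be illustrated with a toy TCP connect() scan (a stand-in for the Masscan/Nmap stage, demonstrated against a throwaway local listener):

```python
# Minimal TCP connect() scan: connect_ex returns 0 when the port accepts
# connections, a nonzero errno otherwise.
import socket

def scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# demo: bind a throwaway listener so there is a known-open local port
listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # port 0 lets the OS pick a free port
listener.listen()
open_port = listener.getsockname()[1]
found = scan("127.0.0.1", [open_port])
listener.close()
```

Real scanners gain their speed by sending probes asynchronously instead of completing a full connect() per port, which is exactly the trade-off between Masscan's rate and Nmap's per-host depth.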
Use of network forensic mechanisms to formulate network security (IJMIT JOURNAL)
Network Forensics is a fairly new area of research which would be used after an intrusion in various
organizations ranging from small, mid-size private companies and government corporations to the defence
secretariat of a country. At the point of an investigation valuable information may be mishandled which
leads to difficulties in the examination and time wastage. Additionally the intruder could obliterate tracks
such as intrusion entry, vulnerabilities used in an entry, destruction caused, and most importantly the
identity of the intruder. The aim of this research was to map the correlation between network security and
network forensic mechanisms. There are three sub research questions that had been studied. Those have
identified Network Security issues, Network Forensic investigations used in an incident, and the use of
network forensics mechanisms to eliminate network security issues. Literature review has been the
research strategy used in order to study the sub research questions discussed. Literature such as research
papers published in Journals, PhD Theses, ISO standards, and other official research papers have been
evaluated and have been the base of this research. The deliverables or the output of this research was
produced as a report on how network forensics has assisted in aligning network security in case of an
intrusion. This research has not been specific to an organization but has given a general overview about
the industry. Embedding Digital Forensics Framework, Network Forensic Development Life Cycle, and
Enhanced Network Forensic Cycle could be used to develop a secure network. Through the mentioned
framework, and cycles the author has recommended implementing the 4R Strategy (Resistance,
Recognition, Recovery, Redress) with the assistance of a number of tools. This research would be of
interest to Network Administrators, Network Managers, Network Security personnel, and other personnel interested in obtaining knowledge in securing communication devices/infrastructure. This research provides a framework that can be used in an organization to eliminate digital anomalies through network forensics, helps the above mentioned persons to prepare infrastructure readiness for threats and also enables further research to be carried on in the fields of computer, database, mobile, video, and audio.
The document discusses using machine learning for efficient attack detection in IoT devices without feature engineering. It proposes a feature-engineering-less machine learning (FEL-ML) process that uses raw packet byte streams as input instead of engineered features. This approach is lighter weight and faster than traditional methods. The FEL-ML model is trained directly on unprocessed packet data to perform malware detection on resource-constrained IoT devices. Prior approaches that use engineered features or complex deep learning models are not suitable for IoT due to limitations of memory and processing power. The proposed FEL-ML approach aims to enable effective network traffic security for IoT using minimal resources.
EFFICIENT ATTACK DETECTION IN IOT DEVICES USING FEATURE ENGINEERING-LESS MACH... (ijcsit)
Through the generalization of deep learning, the research community has addressed critical challenges in
the network security domain, like malware identification and anomaly detection. However, they have yet to
discuss deploying them on Internet of Things (IoT) devices for day-to-day operations. IoT devices are often
limited in memory and processing power, rendering the compute-intensive deep learning environment
unusable. This research proposes a way to overcome this barrier by bypassing feature engineering in the
deep learning pipeline and using raw packet data as input. We introduce a feature-engineering-less
machine learning (ML) process to perform malware detection on IoT devices. Our proposed model,
“Feature-engineering-less ML (FEL-ML),” is a lighter-weight detection algorithm that expends no extra
computations on “engineered” features. It effectively accelerates the low-powered IoT edge. It is trained
on unprocessed byte-streams of packets. Aside from providing better results, it is quicker than traditional
feature-based methods. FEL-ML facilitates resource-sensitive network traffic security with the added
benefit of eliminating the significant investment by subject matter experts in feature engineering.
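The feature-engineering-less input step can be sketched as follows (the fixed input width and the sample header bytes are illustrative assumptions):

```python
# Feature-engineering-less input: take the raw packet bytes, truncate or pad
# to a fixed length, scale to [0, 1], and feed that vector to the model.
PACKET_LEN = 64  # fixed input width; an assumption, not the paper's value

def bytes_to_vector(packet: bytes, length: int = PACKET_LEN):
    padded = packet[:length].ljust(length, b"\x00")   # zero-pad short packets
    return [b / 255.0 for b in padded]

# a 20-byte IPv4 header as a sample raw byte stream
pkt = bytes.fromhex("4500003c1c46400040067c6ac0a80001c0a800c7")
vec = bytes_to_vector(pkt)
```

No protocol parsing or expert-chosen features are involved; the model sees the same bytes the wire carried, which is what makes the pipeline cheap enough for IoT edges.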
Secure intrusion detection and countermeasure selection in virtual system usi... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
IRJET- Machine Learning Processing for Intrusion DetectionIRJET Journal
This document evaluates different machine learning algorithms for network intrusion detection using the KDD dataset. It analyzes the accuracy of logistic regression, naive bayes, support vector machine, K-nearest neighbor, and decision tree classifiers based on their confusion matrices and receiver operating characteristic curves. The results show that the decision tree algorithm achieved the highest accuracy rate of 99.83% on the KDD dataset for intrusion detection.
ATTACK DETECTION AVAILING FEATURE DISCRETION USING RANDOM FOREST CLASSIFIERCSEIJJournal
This document discusses using a random forest classifier with feature selection to improve intrusion detection. It begins with background on intrusion detection systems and challenges. It then proposes using genetic algorithms for feature selection to identify the most important features from a dataset. A random forest classifier is used for classification, which combines decision trees to improve accuracy. The methodology involves feature selection, classification with random forest, and detection. Feature weights are calculated and cross-validation is used to analyze detection rates for individual attacks. The goal is to improve accuracy, reduce training time, and better detect minority attacks through this approach.
Attack Detection Availing Feature Discretion using Random Forest ClassifierCSEIJJournal
The widespread use of the Internet has an adverse effect of being vulnerable to cyber attacks. Defensive
mechanisms like firewalls and IDSs have evolved with a lot of research contributions happening in these
areas. Machine learning techniques have been successfully used in these defense mechanisms especially
IDSs. Although they are effective to some extent in identifying new patterns and variants of existing
malicious patterns, many attacks are still left as undetected. The objective is to develop an algorithm for
detecting malicious domains based on passive traffic measurements. In this paper, an anomaly-based
intrusion detection system based on an ensemble based machine learning classifier called Random Forest
with gradient boosting is deployed. NSL-KDD cup dataset is used for analysis and out of 41 features, 32
features were identified as significant using feature discretion.
Trust Metric-Based Anomaly Detection via Deep Deterministic Policy Gradient R...IJCNCJournal
Addressing real-time network security issues is paramount due to the rapidly expanding IoT jargon. The erratic rise in usage of inadequately secured IoT- based sensory devices like wearables of mobile users, autonomous vehicles, smartphones and appliances by a larger user community is fuelling the need for a trustable, super-performant security framework. An efficient anomaly detection system would aim to address the anomaly detection problem by devising a competent attack detection model. This paper delves into the Deep Deterministic Policy Gradient (DDPG) approach, a promising Reinforcement Learning platform to combat noisy sensor samples which are instigated by alarming network attacks. The authors propose an enhanced DDPG approach based on trust metrics and belief networks, referred to as Deep Deterministic Policy Gradient Belief Network (DDPG-BN). This deep-learning-based approach is projected as an algorithm to provide “Deep-Defense” to the plethora of network attacks. Confidence interval is chosen as the trust metric to decide on the termination of sensor sample collection. Once an enlisted attack is detected, the collection of samples from the particular sensor will automatically cease. The evaluations and results of the experiments highlight a better detection accuracy of 98.37% compared to its counterpart conventional DDPG implementation of 97.46%. The paper also covers the work based on a contemporary Deep Reinforcement Learning (DRL) algorithm, the Actor Critic (AC). The proposed deep learning binary classification model is validated using the NSL-KDD dataset and the performance is compared to a few deep learning implementations as well.
Trust Metric-Based Anomaly Detection Via Deep Deterministic Policy Gradient R...IJCNCJournal
Addressing real-time network security issues is paramount due to the rapidly expanding IoT jargon. The erratic rise in usage of inadequately secured IoT- based sensory devices like wearables of mobile users, autonomous vehicles, smartphones and appliances by a larger user community is fuelling the need for a trustable, super-performant security framework. An efficient anomaly detection system would aim to address the anomaly detection problem by devising a competent attack detection model. This paper delves into the Deep Deterministic Policy Gradient (DDPG) approach, a promising Reinforcement Learning platform to combat noisy sensor samples which are instigated by alarming network attacks. The authors propose an enhanced DDPG approach based on trust metrics and belief networks, referred to as Deep Deterministic Policy Gradient Belief Network (DDPG-BN). This deep-learning-based approach is projected as an algorithm to provide “Deep-Defense” to the plethora of network attacks. Confidence interval is chosen as the trust metric to decide on the termination of sensor sample collection. Once an enlisted attack is detected, the collection of samples from the particular sensor will automatically cease. The evaluations and results of the experiments highlight a better detection accuracy of 98.37% compared to its counterpart conventional DDPG implementation of 97.46%. The paper also covers the work based on a contemporary Deep Reinforcement Learning (DRL) algorithm, the Actor Critic (AC). The proposed deep learning binary classification model is validated using the NSL-KDD dataset and the performance is compared to a few deep learning implementations as well.
ON FAULT TOLERANCE OF RESOURCES IN COMPUTATIONAL GRIDSijgca
Grid computing or computational grid is always a vast research field in academic, as well as in industry also. Computational grid provides resource sharing through multi-institutional virtual organizations for dynamic problem solving. Various heterogeneous resources of different administrative domain are virtually distributed through different network in computational grids. Thus any type of failure can occur at any point of time and job running in grid environment might fail. Hence fault tolerance is an important and challenging issue in grid computing as the dependability of individual grid resources may not be guaranteed. In order to make computational grids more effective and reliable fault tolerant system is necessary. The objective of this paper is to review different existing fault tolerance techniques applicable in grid computing. This paper presents state of the art of various fault tolerance technique and comparative study of the existing algorithms.
A PROPOSED MODEL FOR DIMENSIONALITY REDUCTION TO IMPROVE THE CLASSIFICATION C...IJNSA Journal
Over the past few years, intrusion protection systems have drawn a mature research area in the field of computer networks. The problem of excessive features has a significant impact on
intrusion detection performance. The use of machine learning algorithms in many previous researches has been used to identify network traffic, harmful or normal. Therefore, to obtain the accuracy, we must reduce the dimensionality of the data used. A new model design based on a combination of feature selection and machine learning algorithms is proposed in this paper. This model depends on selected genes from every feature to increase the accuracy of intrusion detection systems. We selected from features content only ones which impact in attack detection. The performance has been evaluated based on a comparison of several known algorithms. The NSL-KDD dataset is used for examining classification. The proposed model outperformed the other learning approaches with accuracy 98.8 %.
The purpose of this paper two fold. First and foremost it presents a background narrative on the origins, innovations and applications of novel structural automation technologies and the rarity of experts involved in research, development and practice of this field. The second part of this paper presents a rudimentary framework for a solution addressing this paucity – the creation of an interdisciplinary academic program at PAAET that will be the first ever in the region to address applied information communication technologies ICT in the design, planning, engineering and management of structural automation projects. In doing so, we need also to define the level of implementation. This field, as all fields in ICT, have been loosely defined and most applications carry less weight in its implementation than what should be applied. This paper gives an attempt to define an indexing scheme by which we can easily classify such implementation and generate a ranking by which we can safely define its level of ―Intelligence‖.International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Similar to Finding Critical Link and Critical Node Vulnerability for Network (20)
Efficient Data Mining Of Association Rules in Horizontally Distributed Databasesijircee
This document proposes a protocol to securely mine association rules from horizontally distributed databases in a privacy-preserving manner. The key aspects of the protocol are:
1) It uses a novel secure multi-party protocol to compute the union of private subsets held by different players, improving on prior work by avoiding commutative encryption and oblivious transfer.
2) It includes a protocol to test if an element held by one player is contained within a private subset held by another player.
3) Experimental results show the protocol has significantly lower communication and computation costs than prior work, while still protecting individual player's privacy beyond just the final mining results.
Cyclic Sleep Wake Up Scenario for Wireless Body Area Sensor Networksijircee
This document proposes a cyclic sleep wake up scenario for wireless body area sensor networks to improve energy efficiency and extend network lifetime. The scenario involves having one sensor node in an active monitoring state while other nodes sleep, then cyclically switching the active node so all nodes can save power. Simulation results show this approach increases network lifetime compared to a normal setup without sleep cycling. The scenario is implemented using MATLAB and evaluated based on parameters like transmission range, sensor power consumption, data rate, and number of sensor nodes.
Efficient Of Multi-Hop Relay Algorithm for Efficient Broadcasting In MANETSijircee
The document proposes a multi-hop relay algorithm to improve broadcasting efficiency in mobile ad hoc networks (MANETs). It aims to optimize end-to-end delay, packet delivery ratio (PDR), and energy consumption during broadcasting. The algorithm uses source, broadcast, and relay queues at each node to facilitate multi-hop transmissions. It transmits packets using either single-hop, two-hop, or multi-hop relaying depending on the location of the destination node. The algorithm is shown to reduce average end-to-end delay by 3.37%, increase PDR by 1.36%, and reduce average energy consumption per node by 10% compared to previous techniques through simulation experiments.
Mobile Relay in Data-Intensive using Routing Tree WSNijircee
This document discusses using mobile relay nodes to reduce energy consumption in data-intensive wireless sensor networks (WSNs). It proposes an optimal mobile relay configuration (OMRC) approach where mobile relay nodes periodically relocate to optimize the routing tree based on data transfer amounts. The approach formulates an energy optimization framework to determine the position for each relay node that minimizes total transmission and movement energy. It then presents a tree optimization algorithm that iteratively calculates optimal positions for relay nodes using local optimization and breadth-first labeling and weighting of nodes. The algorithm is shown to converge to an optimal configuration that reduces total energy consumption compared to approaches using mobile base stations or data mules.
A Nobel Approach On Educational Data Miningijircee
This document discusses educational data mining and its applications. It begins with introducing data mining and its goal of extracting useful information from large databases. Educational data mining is then discussed as using data mining techniques to understand how students learn. The objectives of educational data mining are outlined as supporting educational research, effective learning, prediction, and feedback. Common data mining techniques discussed include summarization, cluster analysis, classification and prediction, decision trees, and association. The document concludes with how these techniques can be applied in education for knowledge discovery and improving student success.
This document discusses using MapReduce to calculate rough set approximations in parallel for big data. It begins with an introduction to rough sets and how they are calculated based on lower and upper approximations. It then discusses related work applying rough sets and MapReduce to large datasets. The document proposes a parallel method for computing rough set approximations using MapReduce by parallelizing the computation of equivalence classes, decision classes, and their associations. This allows rough set approximations to be calculated more efficiently for big data as compared to traditional serial methods. The document concludes that MapReduce provides an effective framework for the parallel rough set calculations.
The document proposes an energy efficient routing algorithm for maximizing the lifetime of mobile ad hoc networks (MANETs). It calculates the transmission energy between nodes based on distance and selects routes where each node has sufficient residual energy to transmit packets. The algorithm is evaluated based on total transmission energy of routes and maximum number of hops. Simulation results show the total transmission energy metric prolongs network lifetime and transmits more packets compared to maximum hops. The algorithm performs efficiently but could be improved by considering more nodes and comparing to other energy efficient routing protocols.
The document discusses secure data sharing in cloud storage using a key-aggregate cryptosystem (KAC) which allows efficient delegation of decryption rights for any set of ciphertexts. KAC produces constant size ciphertexts and allows any set of secret keys to be aggregated into a single key encompassing the power of the keys being aggregated. This aggregate key can then be sent to others for decryption of the ciphertext set while keeping files outside the set confidential.
Social media management system project report.pdfKamal Acharya
The project "Social Media Platform in Object-Oriented Modeling" aims to design
and model a robust and scalable social media platform using object-oriented
modeling principles. In the age of digital communication, social media platforms
have become indispensable for connecting people, sharing content, and fostering
online communities. However, their complex nature requires meticulous planning
and organization.This project addresses the challenge of creating a feature-rich and
user-friendly social media platform by applying key object-oriented modeling
concepts. It entails the identification and definition of essential objects such as
"User," "Post," "Comment," and "Notification," each encapsulating specific
attributes and behaviors. Relationships between these objects, such as friendships,
content interactions, and notifications, are meticulously established.The project
emphasizes encapsulation to maintain data integrity, inheritance for shared behaviors
among objects, and polymorphism for flexible content handling. Use case diagrams
depict user interactions, while sequence diagrams showcase the flow of interactions
during critical scenarios. Class diagrams provide an overarching view of the system's
architecture, including classes, attributes, and methods .By undertaking this project,
we aim to create a modular, maintainable, and user-centric social media platform that
adheres to best practices in object-oriented modeling. Such a platform will offer users
a seamless and secure online social experience while facilitating future enhancements
and adaptability to changing user needs.
An Internet Protocol address (IP address) is a logical numeric address that is assigned to every single computer, printer, switch, router, tablets, smartphones or any other device that is part of a TCP/IP-based network.
Types of IP address-
Dynamic means "constantly changing “ .dynamic IP addresses aren't more powerful, but they can change.
Static means staying the same. Static. Stand. Stable. Yes, static IP addresses don't change.
Most IP addresses assigned today by Internet Service Providers are dynamic IP addresses. It's more cost effective for the ISP and you.
A brief introduction to quadcopter (drone) working. It provides an overview of flight stability, dynamics, general control system block diagram, and the electronic hardware.
Understanding Cybersecurity Breaches: Causes, Consequences, and PreventionBert Blevins
Cybersecurity breaches are a growing threat in today’s interconnected digital landscape, affecting individuals, businesses, and governments alike. These breaches compromise sensitive information and erode trust in online services and systems. Understanding the causes, consequences, and prevention strategies of cybersecurity breaches is crucial to protect against these pervasive risks.
Cybersecurity breaches refer to unauthorized access, manipulation, or destruction of digital information or systems. They can occur through various means such as malware, phishing attacks, insider threats, and vulnerabilities in software or hardware. Once a breach happens, cybercriminals can exploit the compromised data for financial gain, espionage, or sabotage. Causes of breaches include software and hardware vulnerabilities, phishing attacks, insider threats, weak passwords, and a lack of security awareness.
The consequences of cybersecurity breaches are severe. Financial loss is a significant impact, as organizations face theft of funds, legal fees, and repair costs. Breaches also damage reputations, leading to a loss of trust among customers, partners, and stakeholders. Regulatory penalties are another consequence, with hefty fines imposed for non-compliance with data protection regulations. Intellectual property theft undermines innovation and competitiveness, while disruptions of critical services like healthcare and utilities impact public safety and well-being.
Development of Chatbot Using AI/ML Technologiesmaisnampibarel
The rapid advancements in artificial intelligence and natural language processing have significantly transformed human-computer interactions. This thesis presents the design, development, and evaluation of an intelligent chatbot capable of engaging in natural and meaningful conversations with users. The chatbot leverages state-of-the-art deep learning techniques, including transformer-based architectures, to understand and generate human-like responses.
Key contributions of this research include the implementation of a context- aware conversational model that can maintain coherent dialogue over extended interactions. The chatbot's performance is evaluated through both automated metrics and user studies, demonstrating its effectiveness in various applications such as customer service, mental health support, and educational assistance. Additionally, ethical considerations and potential biases in chatbot responses are examined to ensure the responsible deployment of this technology.
The findings of this thesis highlight the potential of intelligent chatbots to enhance user experience and provide valuable insights for future developments in conversational AI.
Profiling of Cafe Business in Talavera, Nueva Ecija: A Basis for Development ...IJAEMSJORNAL
This study aimed to profile the coffee shops in Talavera, Nueva Ecija, to develop a standardized checklist for aspiring entrepreneurs. The researchers surveyed 10 coffee shop owners in the municipality of Talavera. Through surveys, the researchers delved into the Owner's Demographic, Business details, Financial Requirements, and other requirements needed to consider starting up a coffee shop. Furthermore, through accurate analysis, the data obtained from the coffee shop owners are arranged to derive key insights. By analyzing this data, the study identifies best practices associated with start-up coffee shops’ profitability in Talavera. These findings were translated into a standardized checklist outlining essential procedures including the lists of equipment needed, financial requirements, and the Traditional and Social Media Marketing techniques. This standardized checklist served as a valuable tool for aspiring and existing coffee shop owners in Talavera, streamlining operations, ensuring consistency, and contributing to business success.
A brand new catalog for the 2024 edition of IWISS. We have enriched our product range and have more innovations in electrician tools, plumbing tools, wire rope tools and banding tools. Let's explore together!
How to Manage Internal Notes in Odoo 17 POSCeline George
In this slide, we'll explore how to leverage internal notes within Odoo 17 POS to enhance communication and streamline operations. Internal notes provide a platform for staff to exchange crucial information regarding orders, customers, or specific tasks, all while remaining invisible to the customer. This fosters improved collaboration and ensures everyone on the team is on the same page.
Unblocking The Main Thread - Solving ANRs and Frozen FramesSinan KOZAK
In the realm of Android development, the main thread is our stage, but too often, it becomes a battleground where performance issues arise, leading to ANRS, frozen frames, and sluggish Uls. As we strive for excellence in user experience, understanding and optimizing the main thread becomes essential to prevent these common perforrmance bottlenecks. We have strategies and best practices for keeping the main thread uncluttered. We'll examine the root causes of performance issues and techniques for monitoring and improving main thread health as wel as app performance. In this talk, participants will walk away with practical knowledge on enhancing app performance by mastering the main thread. We'll share proven approaches to eliminate real-life ANRS and frozen frames to build apps that deliver butter smooth experience.
Finding Critical Link and Critical Node Vulnerability for Network
ISSN(Online): 2395-xxxx
International Journal of Innovative Research in Computer
and Electronics Engineering
Vol. 1, Issue 4, April 2015
Copyright to IJIRCEE www.ijircee.com 14
Finding Critical Link and Critical Node
Vulnerability for Network
Mr. G. Lenin, Mr. D. Ragava Prasad, Ms. R. Tharani
Assistant Professors, Department of CSE, Podhigai College of Engineering & Technology, Tirupattur, Tamilnadu, India
ABSTRACT: Vulnerability assessment is a proactive step towards securing a network. Assessing network vulnerability in terms of critical links and critical nodes is very important in today's world. The critical link disruptor (CLD) and critical node disruptor (CND) problems are NP-complete even on unit disk graphs and power law graphs; here we analyse CLD and CND on general graphs. One way of solving the CLD and CND problems is the HILPR algorithm, a linear programming based algorithm. In this paper we propose a novel belief propagation method for critical link and critical node vulnerability, applied to network vulnerability assessment and to weighting the network.
KEYWORDS: Network vulnerability, critical node, critical link, vulnerability assessment, diversity
I. INTRODUCTION
The study of network security is important in today's world. Firewalls and IDSs are independent layers of security. Firewalls merely examine network packets to determine whether or not to forward them on to their end destination. Firewalls screen data based on domain names or IP addresses and can screen for low-level attacks. They are not designed to protect networks from vulnerabilities and improper system configurations, nor can they protect against malicious internal activity or rogue assets inside the firewall. Vulnerability assessment takes a wide range of network issues into consideration and identifies weaknesses that need correction. Vulnerability assessment solutions test systems and services such as NetBIOS, HTTP, CGI and WinCGI, FTP, DNS, DoS vulnerabilities, POP3, SMTP, LDAP, TCP/IP, UDP, the registry, services, users and accounts, password vulnerabilities, publishing extensions, detection and auditing of wireless networks, and much more.
Vulnerability analysis aims to provide decision support regarding preventive and restorative actions, ideally as an integrated part of the planning process [10]. Vulnerability assessment usually focuses mainly on the technology aspects of vulnerability scanning. A vulnerability scanner takes a proactive approach: it finds vulnerabilities, hopefully, before they have been exploited. There is, however, the possibility that a vulnerability unknown to the public is present in the system. Vulnerability is of two types. Tangible vulnerability is something which can be measured or assessed (real), e.g. computers, books, etc. Intangible vulnerability is something which cannot be measured directly (imaginary). In this paper we study intangible vulnerability. [11] Vulnerability measures the weakness of a system or of a network such as an ad-hoc network, the World Wide Web, or an enterprise network. Network vulnerability assessment studies natural disasters and unexpected failures of network elements, and studies how the performance of the network is reduced in different cases. After studying the vulnerability of critical links and critical nodes, we also study the vulnerability of the network as a whole. [1] We identify the critical links and critical nodes affected by natural disasters and unexpected network failures, because a natural disaster such as an earthquake, or an unexpected failure, can destroy many important power lines and cause a large-area blackout.
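One common way to quantify how much a failure degrades a network is total pairwise connectivity: the number of node pairs that can still reach each other. A minimal sketch of measuring the degradation caused by a single node failure, assuming a toy five-node chain (the function names and topology are ours, purely illustrative):

```python
from itertools import combinations

def pairwise_connectivity(nodes, edges):
    """Total pairwise connectivity: number of unordered node pairs
    that lie in the same connected component."""
    # Union-find over the surviving nodes
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in parent and v in parent:     # ignore links touching removed nodes
            parent[find(u)] = find(v)

    return sum(1 for a, b in combinations(nodes, 2) if find(a) == find(b))

def connectivity_after_failure(nodes, edges, failed):
    """Pairwise connectivity once a failed node and its links are removed."""
    survivors = [n for n in nodes if n != failed]
    return pairwise_connectivity(survivors, edges)

nodes = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]   # a simple chain

print(pairwise_connectivity(nodes, edges))             # 10 pairs while intact
print(connectivity_after_failure(nodes, edges, "C"))   # 2 pairs after C fails
```

Losing the middle node C splits the chain into {A, B} and {D, E}, so only 2 of the original 10 pairs remain connected; a node whose removal causes a large drop in this metric is a natural candidate for a critical node.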
Fig. 1: Detecting a critical node between networks A & B
Vulnerability scanning consists of using a computer program to identify vulnerabilities in networks, computer infrastructure or applications [8]. Vulnerability assessment is the process surrounding vulnerability scanning, also taking into account other aspects such as risk acceptance, remediation, etc. A vulnerability assessment process should be part of an organization's effort to control information security risks. This process allows an organization to obtain a continuous overview of the vulnerabilities in its IT environment and the risks associated with them. [9] Only by identifying and mitigating vulnerabilities in the IT environment can an organization prevent attackers from penetrating its networks and stealing information. Many organizations do not frequently perform vulnerability scans in their environment; they perform scans on a quarterly or annual basis, which only provides a snapshot at that point in time. The figure below shows a possible vulnerability lifecycle with annual scanning in place.
Fig. 2: Annual vulnerability scanning
Any vulnerability not detected when a scheduled scan takes place will only be detected at the next scheduled scan. This could leave systems vulnerable for a long period of time. When implementing a vulnerability management process, regular scans should be scheduled to reduce the exposure time. The above situation will then look like this:
Fig. 3: Continuous vulnerability assessment
Regular scanning ensures that new vulnerabilities are detected in a timely manner, allowing them to be remediated faster. Having this process in place greatly reduces the risks an organization is facing. When building a vulnerability assessment process, the following roles should be identified within the organization:
1) Security Officer
2) Vulnerability Engineer
3) Asset Owner
4) IT System Engineer
Taking a MANET [13] as an example, a critical node is defined as a node whose failure or malicious behaviour disconnects or significantly degrades the performance of the network. Once identified, a critical node can be the focus of more resource-intensive monitoring or other diagnostic measures. If a node is not considered critical, this metric can be used to help decide whether the application or the risk environment warrants the expenditure of the additional resources required to monitor, diagnose, and alert other nodes about the problem. In order to detect a critical node, we look towards a graph-theoretic approach to detect a vertex-cut and an edge-cut. A vertex-cut is a set of vertices whose removal produces a subgraph with more components than the original graph. A cut-vertex, or articulation point, is a vertex-cut consisting of a single vertex. An edge-cut is a set of edges whose removal produces a subgraph with more components than the original graph. A cut-edge, or bridge, is an edge-cut consisting of a single edge. Although the cut-vertices or cut-edges of a graph G can be determined by applying a straightforward algorithm [12], finding a cut-vertex in the graphical representation of an ad hoc network is not as straightforward, since the nodes cannot be assumed to be stationary. A network discovery algorithm can give an approximation of the network topology, but the value of such an approximation in performing any kind of network diagnosis or intrusion detection depends on the degree of mobility of the nodes.
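For static graphs, the straightforward algorithm alluded to above is typically a single depth-first search in the style of Tarjan, which finds all cut-vertices and cut-edges in one pass. A minimal sketch, assuming graphs small enough for Python's default recursion limit (the function and node names are ours, not from the paper):

```python
from collections import defaultdict

def cut_vertices_and_bridges(edges):
    """Find articulation points (cut-vertices) and bridges (cut-edges)
    of an undirected graph with one depth-first search (Tarjan-style)."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)

    disc, low = {}, {}              # discovery time and low-link value per node
    cut_vertices, bridges = set(), []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:                       # back edge: update low-link
                low[u] = min(low[u], disc[v])
            else:                               # tree edge: recurse
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:            # no back edge around (u, v)
                    bridges.append((u, v))
                if parent is not None and low[v] >= disc[u]:
                    cut_vertices.add(u)         # subtree of v cannot bypass u
        # the DFS root is a cut-vertex iff it has two or more DFS children
        if parent is None and children > 1:
            cut_vertices.add(u)

    for node in list(graph):
        if node not in disc:
            dfs(node, None)
    return cut_vertices, bridges

# Two triangles sharing node C: removing C disconnects the graph,
# so C is a cut-vertex; every edge lies on a cycle, so there are no bridges.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("D", "E"), ("E", "C")]
cuts, bridges = cut_vertices_and_bridges(edges)
# cuts == {"C"}; bridges == []
```

As the paragraph above notes, this DFS answers the question only for a fixed snapshot of the topology; in a mobile ad hoc network the result is an approximation whose usefulness degrades with node mobility.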
II. RELATED WORK
We study a framework and its components for measuring vulnerability [5]. Either connectivity or capacity is needed for network reliability analysis; in the area of distributed computing, network reliability is an important issue. The neighbor-scattering number is used for measuring graph vulnerability [6]. Software vulnerabilities are increasing rapidly around the world, and in information security the focus is on software vulnerability. For similarity calculation, the National Vulnerability Database and an ontology of vulnerability management provide the needed information, and similarity measurement can be used in many areas of vulnerability management. In software security, data mining, and software testing, a similarity measurement model of program execution is used; the quality of software testing methods is reduced by the lack of such measurement. The measurements are categorized by data type, and a similarity graph is used for the optimization process. The lack of a categorization scheme for standard vulnerabilities is a problem in information system security assessment; for information system security assessment and for measurement by software tools and services, a standard vulnerability taxonomy is the necessary thing [7].
Quality of service is increasingly important in topology discovery, because real-time Internet
applications are developing rapidly. We therefore use a quality-of-service-aware measurement to assess the
vulnerability of general network topologies. Many existing works on network vulnerability assessment
focus mainly on centrality measurements, including degree, betweenness and closeness centralities, average
shortest-path length [6], and global clustering coefficients. Because these measurements fail to assess network
vulnerability adequately, Sun et al. first proposed total pairwise connectivity as an effective measurement and
empirically evaluated the vulnerability of wireless multihop networks using this metric. Arulselvan et al. [3] showed
the challenge of the critical node detection (CND) problem by proving its NP-completeness. Later, the β-disruptor problem was defined by
Dinh et al. [2] to find a minimum set of links or nodes whose removal degrades the total pairwise connectivity to a
desired degree. They proved the NP-completeness of this problem with respect to both links and nodes, together with the
corresponding inapproximability results. Even for the tree topology, Di Summa et al. [4] found that the discovery of
critical nodes remains NP-complete under this metric. In this paper, we further investigate the theoretical hardness
of both critical link detection (CLD) and CND on unit disk graphs (UDGs) and power-law graphs (PLGs). In addition, there are few effective solutions in the literature for
network vulnerability assessment based on pairwise connectivity. Arulselvan et al. [3] designed a heuristic
(CNLS) to detect critical nodes, which is, however, still far from the optimal solution in large-scale and dense
networks. In [2], Dinh et al. proposed pseudo-approximation algorithms to solve the β-disruptor problem.
However, that problem is defined differently from ours, and its solution is hard to use when we only know the available
cost to destroy or protect these critical links or nodes.
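The total pairwise connectivity metric discussed above counts the node pairs that remain connected; a critical node is one whose removal reduces this count the most. The following is a minimal sketch of the metric, not the cited authors' implementation; the function names and graph representation are assumptions.

```python
# Total pairwise connectivity P(G): the number of connected node pairs,
# computed as the sum of C(|C|, 2) over connected components C.
def pairwise_connectivity(nodes, edges):
    # union-find to group nodes into connected components
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)

    sizes = {}
    for v in nodes:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sum(s * (s - 1) // 2 for s in sizes.values())

def connectivity_drop(nodes, edges, removed):
    # How much P(G) decreases when `removed` and its incident edges vanish;
    # larger drop means a more critical node under this metric.
    kept = [v for v in nodes if v != removed]
    kept_edges = [(u, v) for u, v in edges if removed not in (u, v)]
    return pairwise_connectivity(nodes, edges) - pairwise_connectivity(kept, kept_edges)
```

On the path a–b–c, all three pairs are connected (P = 3); removing the middle node b drops P to 0, while removing an endpoint drops it only to 1, so b is the more critical node.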
III. PROPOSED ALGORITHM
A. Design Considerations:
Initially take nodes and links for drawing the graph.
Select the starting and ending points for finding the shortest path.
Solve to find the critical node and link.
Solve the graph as a general graph, a power-law graph, and a unit disk graph.
Lastly, apply the belief propagation algorithm for network vulnerability.
B. Description of the Proposed Algorithm:
The aim of the proposed algorithm is to find the critical node vulnerability and critical link vulnerability, and to find the
network vulnerability and the weight of the network. The proposed algorithm consists of the following steps.
Belief Propagation Algorithm:
Step 1:
Every node ni computes a vulnerability metric for each neighboring node to which it transmits packets; the node
accumulates this vulnerability belief over a time window of duration Δt.
Step 2:
Node ni periodically recalculates the vulnerability belief of all its neighbors.
Step 3:
Similarly, node ni is itself assessed by its neighboring nodes, which report belief values about it.
Step 4:
The total vulnerability belief of node ni over Δt is the sum of the beliefs reported by its neighbors:
V(ni) = Σ_{j ∈ N(i)} b_j(ni)
Step 5:
The total vulnerability belief of the network is the average of the per-node beliefs over all N nodes:
V(network) = (1/N) Σ_i V(ni)
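The five steps above can be sketched as follows. This is a minimal illustration under the assumption that the pairwise belief values have already been computed in Step 1–3 and are supplied as a dict keyed by (observer, target); the function names are not from the paper.

```python
# Step 4: total vulnerability belief of a node is the sum of the beliefs
# its neighbors report about it over the window Δt.
def node_vulnerability(beliefs, node):
    return sum(b for (observer, target), b in beliefs.items() if target == node)

# Step 5: network vulnerability is the average per-node belief over N nodes.
def network_vulnerability(beliefs, nodes):
    return sum(node_vulnerability(beliefs, n) for n in nodes) / len(nodes)
```

For example, if a and c each report a belief about b (`{('a', 'b'): 0.2, ('c', 'b'): 0.4}`), then V(b) = 0.6, and the network value is that total averaged over all nodes.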
IV. SIMULATION RESULTS
Suppose the following graph G1 shows the network structure of a lab (L).
Fig. 4: The Network of Lab (L)
Fig. 5: Result for the Above Graph (Network of L)
V. CONCLUSION AND FUTURE WORK
The simulation results showed that the proposed algorithm performs well. In this paper, using vulnerability belief
propagation, we studied a novel method for network vulnerability assessment on a given network graph.
We first find the shortest path in the network using Dijkstra's algorithm, and then find the vulnerability
for CLD and CND. To find the network vulnerability, which is expressed as a percentage, we use the belief propagation
algorithm. We also compute the weight of the network.
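The shortest-path stage mentioned above can be implemented with Dijkstra's algorithm. A minimal sketch follows, assuming a weighted directed graph given as a dict of node to (neighbor, weight) pairs; the representation and function name are illustrative.

```python
import heapq

# Dijkstra's algorithm: returns the shortest path and its total weight
# between `start` and `end` in a non-negatively weighted graph.
def dijkstra(graph, start, end):
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == end:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # walk predecessors back from end to start to recover the path
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[end]
```

For the graph s→a (weight 1), s→b (weight 4), a→b (weight 2), the shortest path from s to b is s, a, b with total weight 3.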
REFERENCES
1. R. Albert, I. Albert, and G. L. Nakarado, "Structural vulnerability of the North American power grid," Phys. Rev. E, 69(2), Feb 2004.
2. T. Dinh, Y. Xuan, M. Thai, P. Pardalos, and T. Znati, "On new approaches of assessing network vulnerability: Hardness and approximation," IEEE/ACM Transactions on Networking, 20(2):609–619, April 2012.
3. A. Arulselvan, C. W. Commander, L. Elefteriadou, and P. M. Pardalos, "Detecting critical nodes in sparse graphs," Comput. Oper. Res., 36:2193–2200, July 2009.
4. M. Di Summa, A. Grosso, and M. Locatelli, "Complexity of the critical node problem over trees," Computers & OR, 38(12):1766–1774, 2011.
5. K. Scarfone and T. Grance (Nat. Inst. of Stand. & Technol., Washington, DC), "A framework for measuring the vulnerability of hosts," 1st International Conference on Information Technology (IT 2008), 52(1):1–4, May 2008.
6. Fengwei Li, Qingfang Ye, and Shuhua Wang, "Neighbor-scattering number in regular graphs," International Conference on Multimedia Technology (ICMT), 2209–2214, 2011.
7. Dept. of Inf. Technol., Mahakal Inst. of Technol., Ujjain, India, "Towards standardization of vulnerability taxonomy," 2nd International Conference on Computer Technology and Development (ICCTD), 379–384, 2010.
8. A. Karygiannis, E. Antonakakis, and A. Apostolopoulos, "Detecting Critical Nodes for MANET Intrusion Detection Systems," Proceedings of IEEE Workshop on Security, Privacy and Trust in Pervasive and Ubiquitous Computing, pp. 7–15, 2006.