This document proposes a conceptual trusted incident reaction architecture based on a multi-agent system, designed to react dynamically and flexibly to security incidents across an enterprise network. The architecture comprises three main components: (1) an alert correlation engine that collects and analyzes alerts, (2) a policy instantiation engine that decides on and defines reactions to confirmed alerts, and (3) a policy deployment point that deploys new policies on the targeted networks. A decision support system, built on an ontology, Bayesian networks, and influence diagrams, helps agents make decisions under uncertainty, and the concept of trust is incorporated into the decision-making process for determining and deploying appropriate security responses. The architecture is illustrated with a case study of a medical application distributed across buildings, a campus, and metropolitan area networks.
Towards Automating Security Compliance Value Chain (FSE 2015 submission) – Smita S. Ghaisas
This document proposes an approach to automate key activities in the security compliance value chain. It discusses automating the interpretation of PCI-DSS regulations to identify system requirements, tracing these requirements to CIS security controls, implementing appropriate controls, and verifying and reporting compliance. The approach uses a rule model to interpret regulations and classify them based on rule intents and acts. It applies natural language processing to 209 PCI-DSS regulations and traces 189 technological regulations to over 400 CIS security controls for Windows Server 2008. An evaluation achieves 80-83% precision and recall in automated interpretation.
Control systems and computer science are two distinct and important fields of engineering. The development of cloud computing in computer science has enabled the widely used controllers of control systems to migrate to the cloud, creating a new field of research: cloud-based control systems (CCS). This paper uses a systematic literature review (SLR) to obtain insight into current CCS research, reviewing areas such as demographics, research topics, evaluation methods, and application domains. The study retained 64 primary studies out of 581 articles. CCS has a distinct characteristic: although the cloud-and-network dynamic system, when coupled with the controlled plant, is inherently nonlinear, research efforts have successfully approached it with linear models and optimal control for a limited set of control objectives. Furthermore, the studies consider cloud-centric and cloud-fog network architectures, while the quantitative methods rely mainly on simulation and discussion. Finally, the SLR summarizes open challenges for future CCS research.
Multi-Agent System (MAS) monitoring solutions are designed for a plethora of usage topics. Existing approaches mostly reuse cloned back-end architectures, while the front-end monitoring interface tends to constitute the real specificity of each solution. These interfaces are recurrently structured around three dimensions: access to domain knowledge, agents' behavioural rules, and restitution of the real-time state of a specific system sector. In this paper, we propose the prototype of a sector-agnostic MAS platform (Smart-X) which gathers, in an integrated and independent platform, all the functionalities required to monitor and govern a wide range of sector-specific environments. For illustration and validation purposes, the use of Smart-X is introduced and explained with a smart-mobility case study.
The document discusses the design and implementation process in software engineering. It covers topics like using the Unified Modeling Language (UML) for object-oriented design, design patterns, and implementation issues. It then discusses the design process, including identifying system contexts and interactions, architectural design, identifying object classes, and creating design models like subsystem, sequence, and state diagrams. The example of designing a weather station system is used to illustrate these design concepts and activities.
This document summarizes the internship work conducted by Marta de la Cruz Martos at CITSEM within the GRyS group. The internship focused on developing algorithms to analyze energy consumption for smart grids as part of the I3RES project, which aims to integrate renewable energy sources into distributed networks using artificial intelligence. Specifically, the internship involved studying relevant technologies, participating in software component design, developing and implementing algorithms, and preparing reports. The document provides background on distributed systems and databases, describes the work conducted, and presents results and conclusions.
Auto Finding and Resolving Distributed Firewall Policy – IOSR Journals
This document presents a method for automatically finding and resolving anomalies in distributed firewall policies. It proposes using rule-based segmentation and a grid-based representation to partition firewall rules into disjoint packet spaces to identify policy anomalies like conflicts and redundancies. The paper describes implementing this approach in a tool called FAME that can discover and resolve anomalies by reordering rules. Experimental results show FAME achieved around 92% conflict resolution and improved network security and availability. The method aims to effectively manage anomalies in distributed firewall environments.
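The rule-based segmentation idea can be sketched in a few lines. The following is an illustrative reconstruction, not FAME's actual algorithm: address spaces are simplified to integer ranges, and the rule names and anomaly labels are invented for the example.

```python
# Sketch of firewall anomaly detection via overlapping packet spaces.
# Each rule matches a (src, dst) region; a later rule fully covered by an
# earlier one is redundant (same action) or shadowed (different action);
# partial overlaps with differing actions are potential conflicts.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    src: range      # source address range, simplified to ints
    dst: range      # destination address range
    action: str     # "allow" or "deny"

def overlaps(a: range, b: range) -> bool:
    return a.start < b.stop and b.start < a.stop

def covers(a: range, b: range) -> bool:
    return a.start <= b.start and b.stop <= a.stop

def find_anomalies(rules):
    anomalies = []
    for i, hi in enumerate(rules):          # hi: earlier (higher-priority) rule
        for lo in rules[i + 1:]:            # lo: later rule
            if not (overlaps(hi.src, lo.src) and overlaps(hi.dst, lo.dst)):
                continue
            if covers(hi.src, lo.src) and covers(hi.dst, lo.dst):
                kind = "redundancy" if hi.action == lo.action else "shadowing"
            else:
                kind = "correlation" if hi.action != lo.action else "overlap"
            anomalies.append((kind, hi.name, lo.name))
    return anomalies

rules = [
    Rule("r1", range(0, 100), range(0, 50), "deny"),
    Rule("r2", range(10, 20), range(5, 15), "allow"),   # fully inside r1, action differs
    Rule("r3", range(50, 80), range(10, 30), "deny"),   # fully inside r1, same action
]
print(find_anomalies(rules))
# → [('shadowing', 'r1', 'r2'), ('redundancy', 'r1', 'r3')]
```

Resolution by rule reordering, as FAME reportedly does, would then operate on these detected pairs; that step is omitted here.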
This document summarizes the state of the art in cloud security techniques. It begins by introducing cloud computing and its benefits. It then surveys existing literature on cloud security, summarizing 5 papers on topics like policy reconciliation and attribute-based encryption. It describes several techniques for cloud security like attribute-based encryption and fuzzy identity-based encryption. Finally, it discusses future work on developing a novel sequential rule mining algorithm for market basket data.
This document summarizes four architectural patterns for context-aware systems: WCAM, Event-Control-Action, Action, and architectural pattern for context-based navigation. It discusses examples, problems addressed, solutions, structures, and benefits of each pattern. The patterns are examined to determine which can best overcome complexity and be more extensible for context-aware systems.
A Study: Secure Multi-Authentication Based Data Classification Model in Cloud ... – IJAAS Team
Abstract: Cloud computing is among the most popular terms in enterprises and the news. The concept has become reality thanks to fast internet bandwidth and advanced cooperation technology. Resources on the cloud can be accessed through the internet without self-built infrastructure, and a key concern is effectively managing security in cloud applications. Data classification is a machine learning technique used to predict the class of unclassified data. Data mining uses different tools to discover unknown, valid patterns and relationships in a dataset; these tools include mathematical algorithms, statistical models, and machine learning (ML) algorithms. In this paper, the authors use an improved Bayesian technique to classify the data and encrypt the sensitive data using hybrid steganography. The encrypted and non-encrypted sensitive data are sent to the cloud environment, and the parameters are evaluated against different encryption algorithms.
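The summary does not detail the "improved Bayesian technique", so the following sketch shows only the plain categorical naive Bayes baseline it builds on, applied to labelling records as sensitive or public. The features, labels, and smoothing choice are invented for illustration.

```python
# Minimal categorical naive Bayes: class priors times per-feature value
# likelihoods, with Laplace smoothing so unseen values do not zero out
# the product. Purely illustrative data and feature names.

from collections import Counter, defaultdict

def train(samples):
    """samples: list of (features_dict, label) pairs."""
    priors = Counter(label for _, label in samples)
    likelihood = defaultdict(Counter)   # (feature, label) -> value counts
    for feats, label in samples:
        for feat, value in feats.items():
            likelihood[(feat, label)][value] += 1
    return priors, likelihood

def classify(priors, likelihood, feats):
    total = sum(priors.values())
    best, best_p = None, -1.0
    for label, count in priors.items():
        p = count / total
        for feat, value in feats.items():
            seen = likelihood[(feat, label)]
            p *= (seen[value] + 1) / (sum(seen.values()) + 2)  # Laplace smoothing
        if p > best_p:
            best, best_p = label, p
    return best

data = [
    ({"field": "ssn", "shared": "no"},  "sensitive"),
    ({"field": "ssn", "shared": "yes"}, "sensitive"),
    ({"field": "zip", "shared": "yes"}, "public"),
    ({"field": "zip", "shared": "no"},  "public"),
]
model = train(data)
print(classify(*model, {"field": "ssn", "shared": "no"}))  # → sensitive
```

In the paper's pipeline, records classified as sensitive would then be routed through the steganographic encryption step before upload; that step is not shown here.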
The document proposes an agent-based architecture for multi-level security incident reaction in distributed telecommunication networks. The architecture has three levels: a low level interface with the infrastructure, an intermediate level using multi-agent systems to correlate alerts and deploy reactions across domains, and a high level for global supervision and policy management. The architecture was designed based on requirements like scalability, availability, autonomy, and robust reaction and alert management across distributed systems. It was successfully tested for implementing data access control policies.
Privacy Protection in Distributed Industrial System – iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publication.
This document summarizes a research paper about ensuring privacy protection in distributed industrial systems. It begins with an abstract that discusses how traditional cybersecurity approaches may not be effective for industrial networks due to their unique characteristics. It then provides background on industrial automation control systems and typical network configurations. The main goal of the paper is to assess the current security situation for most industrial distributed systems and discuss key elements like system characteristics, standardization efforts, and effective security controls.
The document summarizes two recent studies on access control. It discusses the authors' contributions in each study, their motivations, and potential additional areas of study. The first study introduced metrics to evaluate access control rule sets and provide a scientific method for comparing rule sets. The second study surveyed access control in fog computing, highlighting security challenges and providing requirements and taxonomies for access control models. It suggests attribute-based encryption as an area for further fog computing access control research.
This document discusses the design and implementation chapter of a lecture. It covers topics like using UML for object-oriented design, design patterns, and implementation issues. It then discusses the weather station case study used to illustrate the design process, including defining system context, use cases, architectural design, identifying object classes, design models, and interface specification.
This document discusses design and implementation topics covered in Chapter 7, including object-oriented design using UML, design patterns, implementation issues, and open source development. It provides an example of designing a weather station system using various UML diagrams to illustrate the object-oriented design process. Key activities covered are identifying objects, developing design models, and specifying object interfaces. Implementation issues discussed include reuse, configuration management, and host-target development.
A Security Decision Reaction Architecture for Heterogeneous Distributed Network – christophefeltus
This document proposes a multi-agent system architecture for reacting to security alerts in heterogeneous distributed networks. The architecture has three layers - low, intermediate, and high - and consists of agents that perform alert correlation, reaction decision making, and policy deployment. The agents communicate by exchanging messages. The architecture is intended to allow for quick and efficient reaction to security attacks while ensuring coordinated configuration changes across network components. It was developed and illustrated using a case study of a medical application distributed across buildings, campuses, and metropolitan areas.
This document proposes a multi-agent system architecture for reacting to security alerts in heterogeneous distributed networks. The architecture has three layers - a low level that interfaces with the target infrastructure, an intermediate level that correlates alerts from different domains and deploys reaction actions, and a high level global view. It uses an ontology and Bayesian network based decision support system to help agents make decisions according to preferences and influence diagrams. The approach is illustrated using a case study of a medical application distributed across buildings, campuses and metropolitan areas.
MODEL-DRIVEN SECURITY ASSESSMENT AND VERIFICATION FOR BUSINESS SERVICES – ijwscjournal
Information security covers many areas within an enterprise. Each area has security vulnerabilities and, hopefully, corresponding countermeasures that raise the security level and provide better protection. Two fundamental concepts in information security are the security policy and the security model. A security policy outlines how data is accessed, what level of security is required, and what actions should be taken when these requirements are not met; it is an abstract term representing the objectives and goals a system must meet and accomplish to be deemed secure and acceptable. A security model is a statement that outlines the requirements necessary to properly support and implement a certain security policy: it is a symbolic representation of the policy, mapping the desires of the policy makers into a set of rules to be followed by a computer system. The security model is an important concept in the design and analysis of secure systems because it incorporates the security policy that should be enforced in the system. For example, if a security policy states that no one from a lower security level should be able to view or modify information at a higher security level, the supporting security model outlines the logic and rules ensuring that under no circumstances can a lower-level subject access a higher-level object in an unauthorized manner. In this paper we propose a model-driven security assessment and verification approach for business services. The Security Assessment and Verification component verifies whether applications and services are secure based on the Service Level Agreement and generates a report on the level of security features. It is designed to help business owners, operators, and staff assess the security of their business: it covers potential areas of vulnerability and provides suggestions for adapting security to reduce the risk of crime against the business.
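The "no read up" policy described above is the classic Bell-LaPadula pattern, and the mapping from policy to model rules can be sketched concretely. The security levels and the check function below are illustrative assumptions, not taken from the paper.

```python
# Bell-LaPadula-style lattice check: "no read up" (simple security
# property) and "no write down" (*-property). Levels are ordered by an
# integer rank; the labels below are invented for the example.

LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def allowed(subject_level: str, object_level: str, op: str) -> bool:
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if op == "read":   # a subject may only read at or below its level
        return s >= o
    if op == "write":  # a subject may only write at or above its level
        return s <= o
    raise ValueError(f"unknown operation: {op}")

# A lower-level subject can never read a higher-level object:
print(allowed("public", "secret", "read"))   # False
# Writing upward is permitted under the *-property:
print(allowed("public", "secret", "write"))  # True
```

A model of this kind is what the paper's assessment component would verify the deployed services against: the policy states the goal, the model supplies the checkable rules.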
Similar to Critical infrastructures governance exploring scada cybernetics through architectured policy semantic (20)
This document provides an agenda and overview for a joint workshop on security modeling hosted by the ArchiMate Forum and Security Forum. The workshop aims to identify opportunities to improve the conceptual and visual modeling of enterprise information security using TOGAF and ArchiMate. The agenda includes introductions, a research spotlight on strengthening role-based access control with responsibility modeling, an open discussion on complementing TOGAF and ArchiMate with enhanced security modeling, and identifying next steps. The workshop purpose is to enable better security architecture decisions and drive usage of TOGAF and ArchiMate for security architecture.
Aligning business operations with the appropriate IT infrastructure is a challenging and critical activity. Without efficient business/IT alignment, companies face the risk of being unable to deliver their business services satisfactorily and of having their image seriously damaged. Among the many challenges of business/IT alignment is access rights management, which should be conducted considering rising governance needs, such as taking into account the business actors' responsibility. Unfortunately, in this domain, we have observed that no solution, model, or method yet fully considers and integrates these new needs. Therefore, the paper proposes, firstly, an expressive Responsibility metamodel, named ReMMo, which allows representing the existing responsibilities at the business layer and, thereby, engineering the access rights required to perform these responsibilities at the application layer. Secondly, the Responsibility metamodel has been integrated with ArchiMate® to enhance its usability and benefit from the enterprise architecture formalism. Finally, a method has been proposed to define the access rights more accurately, considering the alignment of ReMMo and RBAC. The research followed a design science and action design based research method, and the results have been evaluated through an extended case study at the Hospital Center in Luxembourg.
This document proposes an innovative systemic approach to risk management across interconnected sectors. It suggests using enterprise architecture models to manage cross-sector risks in Luxembourg's complex ICT ecosystem. The approach would provide regulators an overview of all players and systems, as well as models of different sectors to analyze collected data and risks at a national level, fostering accurate and reactive risk mitigation across economic domains.
This document proposes extending the HL7 standard with a responsibility perspective to better manage access rights to patient health records. It presents the ReMMo responsibility metamodel, which defines actors' responsibilities and associated access rights. The paper aims to align ReMMo with the HL7-based eSanté healthcare platform model in Luxembourg to semantically enhance access controls based on users' real responsibilities rather than just roles. It will first map concepts between the two models, then evaluate the alignment through a prototype applying inference rules.
This document presents a study that aims to develop and validate a responsibility model to improve IT governance. It analyzes concepts of responsibility from literature and frameworks like COBIT. The researchers developed a responsibility model with key concepts like obligation, accountability, right, and commitment. They then compare this model to COBIT's representation of responsibility to identify areas for potential enhancement, like adding concepts that COBIT lacks. The document illustrates how the responsibility model could be used to refine COBIT's process for identifying system owners and their responsibilities.
This document proposes an innovative approach called SIM (Secure Identity Management) that aims to align access management policies more closely with business objectives. It does this in two ways:
1) By focusing the policy engineering process on business goals and responsibilities defined in processes, using concepts from the ISO/IEC 15504 standard. This links capabilities and accountabilities to process outcomes and work products.
2) By defining a multi-agent system architecture to automate the deployment of policies across heterogeneous IT components and devices. The agents provide autonomy and ability to adapt rapidly according to context.
The approach was prototyped using open source components and aims to improve how access rights are defined according to business needs and deployed across an organization.
This document proposes a methodological approach for specifying services and analyzing service compliance considering the responsibility dimension of stakeholders. The approach includes a product model and process model. The product model has three layers: an informational layer describing service context and concepts, an organizational layer describing business rules and roles, and a responsibility dimension layer linking the two. The process model outlines steps for service architects to identify context, define concepts and rules, specify services, and analyze compliance. The approach is illustrated with an example of managing access rights for sensitive healthcare data exchange between organizations.
This document discusses integrating responsibility aspects into service engineering for e-government. It proposes a multi-layered approach including an ontological layer defining legal concepts, an organizational layer describing roles and stakeholders, an informational layer representing data structures and integrity constraints, and a technical layer representing IT components. A responsibility meta-model is also introduced to align responsibilities across these layers and facilitate interoperability between services that share data. The approach aims to ensure service compliance and manage risks associated with e-government services.
1) The document proposes a dynamic approach for assigning functions and responsibilities to agents in a multi-agent system for critical infrastructure management.
2) The approach uses an agent's reputation, which is based on past performance, to determine which agents receive which responsibilities as crisis situations change over time.
3) Assigning responsibilities dynamically based on reputation allows the system to continue operating effectively if an agent becomes isolated or has reduced capabilities during a crisis.
This document proposes a responsibility modeling language (ReMoLa) to align access rights with business process requirements. ReMoLa is a responsibility-centered meta-model that integrates concepts from the business and technical layers, with the concept of employee responsibility bridging the two. It incorporates four types of obligations from the COBIT framework to refine employee responsibilities and better assign access rights. ReMoLa maps responsibilities to roles in the RBAC model to leverage its advantages for access right management while ensuring responsibilities align with business tasks and employee commitment.
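The responsibility-to-role mapping at the heart of ReMoLa can be sketched as a small data structure. Everything below is hypothetical: the actual metamodel is far richer (obligations, commitments, capabilities), and the names are invented for illustration.

```python
# Sketch of deriving access rights from responsibilities via RBAC roles:
# a responsibility bundles a business task's obligations and maps to a
# role; permissions are attached to roles, never to people directly.

responsibilities = {
    "approve-invoice": {"role": "accountant",
                        "obligations": ["accountability"]},
    "audit-ledger":    {"role": "auditor",
                        "obligations": ["accountability", "capability"]},
}

role_permissions = {
    "accountant": {"invoice:read", "invoice:approve"},
    "auditor":    {"ledger:read"},
}

def rights_for(responsibility: str) -> set[str]:
    """Resolve a responsibility to its access rights through its RBAC role."""
    role = responsibilities[responsibility]["role"]
    return role_permissions[role]

print(sorted(rights_for("approve-invoice")))
# → ['invoice:approve', 'invoice:read']
```

The indirection through roles is what lets ReMoLa keep RBAC's administrative advantages while the responsibility layer ensures each grant traces back to a business task and an employee's commitment.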
The document describes the NOEMI assessment methodology, which was developed as part of a research project to help very small enterprises (VSEs) improve their IT practices. The methodology aims to assess VSEs' IT capabilities in order to facilitate collaborative IT management across organizations. It was designed to be aligned with common IT standards like ISO/IEC 15504 and ITIL, but adapted specifically for VSEs. The methodology has been tested through several case studies with VSEs in Luxembourg, with promising results.
This document provides a preliminary literature review of policy engineering methods related to the concept of responsibility. It summarizes key access control models and discusses how they address concepts like capability, accountability, and commitment. The document also reviews engineering methods and how they incorporate responsibility considerations. The overall goal is to orient further research towards a new policy model and engineering method that more fully addresses stakeholder responsibility.
This document proposes an extension of the ArchiMate enterprise architecture framework to model multi-agent systems for critical infrastructure governance. The authors develop a responsibility-driven policy concept and metamodel layers to represent agent behavior and organizational policies across technical, application, and organizational layers. The approach is illustrated through a case study of a financial transaction processing system.
This document summarizes an experimental prototype of the OpenSST protocol for secured electronic transactions. OpenSST was developed to achieve high security, simplicity in software engineering, and compatibility with existing standards. The prototype uses OpenSST for the authorization portion of electronic payments in an e-business clearing solution. It describes the OpenSST message format and types, and discusses how OpenSST is implemented in the prototype's three-element architecture of an OpenSST proxy, reverse proxy, and server.
This document discusses the NOEMI model, a collaborative management model for ICT processes in SMEs. The model was developed by the Centre Henri Tudor and tested with a cluster of 8 partner SMEs. Key aspects of the model include defining ICT activities across 5 domains, assessing each SME's capabilities, and having an operational team manage activities for the cluster under a coordination committee. The experiment showed improved cost control, management, and partner satisfaction compared to alternatives like outsourcing or hiring individual IT staff. The research is now ready for market transfer, as the model has been adopted long-term by the participating SMEs.
This document proposes a multi-agent architecture for incident reaction in information system security. The architecture has three layers - low level interacts directly with the infrastructure, intermediate level correlates alerts and deploys reaction actions using multi-agent systems, and high level provides supervision and manages business policies. The architecture was tested for data access control and aims to quickly and efficiently react to attacks while ensuring policy compliance. The document discusses requirements like scalability, autonomy, and global supervision. It also describes the key components of alert management, reaction decision making, and policy definition/deployment to implement the architecture using a multi-agent approach.
More from Luxembourg Institute of Science and Technology (20)
The European Space Agency's New Ariane Rocket (Ariane 6 Media Kit, English) – Champs Elysee Roldan
Europe must have autonomous access to space to realise its ambitions on the world stage and promote knowledge and prosperity.

Space is a natural extension of our home planet and forms an integral part of the infrastructure that is vital to daily life on Earth. Europe must assert its rightful place in space to ensure its citizens thrive.

As the world's second-largest economy, Europe must ensure it has secure and autonomous access to space, so it does not depend on the capabilities and priorities of other nations.

Europe's longstanding expertise in launching spacecraft and satellites has been a driving force behind its 60 years of successful space cooperation.

In a world where everyday life – from connectivity to navigation, climate and weather – relies on space, the ability to launch independently is more important than ever before. With the launch of Ariane 6, Europe is not just sending a rocket into the sky, we are asserting our place among the world's spacefaring nations.

ESA's Ariane 6 rocket succeeds Ariane 5, the most dependable and competitive launcher for decades. The first Ariane rocket was launched in 1979 from Europe's Spaceport in French Guiana, and Ariane 6 will continue the adventure.

Putting Europe at the forefront of space transportation for nearly 45 years, Ariane is a triumph of engineering and the prize of great European industrial and political cooperation. Ariane 1 gave way to more powerful versions 2, 3 and 4. Ariane 5 served as one of the world's premier heavy-lift rockets, putting single or multiple payloads – the cargo and instruments being launched – into orbit, and sent a series of iconic scientific missions to deep space.

The decision to start developing Ariane 6 was taken in 2014 to respond to the continued need for independent access to space, while offering efficient commercial launch services in a fast-changing market.

ESA, with its Member States and industrial partners led by ArianeGroup, is developing new technologies for new markets with Ariane 6. The versatility of Ariane 6 adds a whole new dimension to its very successful predecessors.
A mature quasar at cosmic dawn revealed by JWST rest-frame infrared spectroscopy – Sérgio Sacani
The rapid assembly of the first supermassive black holes is an enduring mystery. Until now, it was not known whether quasar ‘feeding’ structures (the ‘hot torus’) could assemble as fast as the smaller-scale quasar structures. We present JWST/MRS (rest-frame infrared) spectroscopic observations of the quasar J1120+0641 at z = 7.0848 (well within the epoch of reionization). The hot torus dust was clearly detected at λ_rest ≃ 1.3 μm, with a black-body temperature of K, slightly elevated compared to similarly luminous quasars at lower redshifts. Importantly, the supermassive black hole mass of J1120+0641 based on the Hα line (accessible only with JWST), M_BH = 1.52 ± 0.17 × 10⁹ M⊙, is in good agreement with previous ground-based rest-frame ultraviolet Mg II measurements. Comparing the ratios of the Hα, Paα and Paβ emission lines to predictions from a simple one-phase Cloudy model, we find that they are consistent with originating from a common broad-line region with physical parameters that are consistent with lower-redshift quasars. Together, this implies that J1120+0641’s accretion structures must have assembled very quickly, as they appear fully ‘mature’ less than 760 Myr after the Big Bang.
Lunar Mobility Drivers and Needs – Artemis – Sérgio Sacani
NASA’s new campaign of lunar exploration will see astronauts visiting sites of scientific or strategic interest across the lunar surface, with a particular focus on the lunar South Pole region.[1] After landing crew and cargo at these destinations, local mobility around landing sites will be key to movement of cargo, logistics, science payloads, and more to maximize exploration returns.

NASA’s Moon to Mars Architecture Definition Document (ADD)[2] articulates the work needed to achieve the agency’s human lunar exploration objectives by decomposing needs into use cases and functions. Ongoing analysis of lunar exploration needs reveals demands that will drive future concepts and elements.

Recent analysis of integrated surface operations has shown that the transportation of cargo on the surface from points of delivery to points of use will be particularly important. Exploration systems will often need to support deployment of cargo in close proximity to other surface infrastructure. This cargo can range from the crew logistics and consumables described in the 2023 “Lunar Logistics Drivers and Needs” white paper,[3] to science and technology demonstrations, to large-scale infrastructure that requires precision relocation.
SCIENTIFIC INVESTIGATIONS – THE IMPORTANCE OF FAIR TESTING.pptxJoanaBanasen1
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
download it
TOPIC: INTRODUCTION TO FORENSIC SCIENCE.pptximansiipandeyy
This presentation, "Introduction to Forensic Science," offers a basic understanding of forensic science, including its history, why it's needed, and its main goals. It covers how forensic science helps solve crimes and its importance in the justice system. By the end, you'll have a clear idea of what forensic science is and why it's essential.
Possible Anthropogenic Contributions to the LAMP-observed Surficial Icy Regol...Sérgio Sacani
This work assesses the potential of midsized and large human landing systems to deliver water from their exhaust
plumes to cold traps within lunar polar craters. It has been estimated that a total of between 2 and 60 T of surficial
water was sensed by the Lunar Reconnaissance Orbiter Lyman Alpha Mapping Project on the floors of the larger
permanently shadowed south polar craters. This intrinsic surficial water sensed in the far-ultraviolet is thought to be
in the form of a 0.3%–2% icy regolith in the top few hundred nanometers of the surface. We find that the six past
Apollo Lunar Module midlatitude landings could contribute no more than 0.36 T of water mass to this existing,
intrinsic surficial water in permanently shadowed regions (PSRs). However, we find that the Starship landing
plume has the potential, in some cases, to deliver over 10 T of water to the PSRs, which is a substantial fraction
(possibly >20%) of the existing intrinsic surficial water mass. This anthropogenic contribution could possibly
overlay and mix with the naturally occurring icy regolith at the uppermost surface. A possible consequence is that
the origin of the intrinsic surficial icy regolith, which is still undetermined, could be lost as it mixes with the
extrinsic anthropogenic contribution. We suggest that existing and future orbital and landed assets be used to
examine the effect of polar landers on the cold traps within PSRs
Molecular biology of abiotic stress tolerence in plantsrushitahakik1
### Molecular Biology of Abiotic Stress Tolerance in Plants
Abiotic stress refers to the non-living environmental factors that can cause significant harm to plants, including drought, salinity, extreme temperatures, heavy metals, and oxidative stress. Understanding the molecular biology underlying abiotic stress tolerance is crucial for developing crops that can withstand these conditions, ensuring food security in the face of climate change and environmental degradation. Here, we explore the key molecular mechanisms, pathways, and genetic strategies plants use to cope with abiotic stress.
#### 1. Signal Perception and Transduction
**1.1. Signal Perception:**
Plants possess various sensors and receptors to detect abiotic stress signals. For instance, membrane-bound receptors such as receptor-like kinases (RLKs) and ion channels play critical roles in sensing changes in environmental conditions.
**1.2. Signal Transduction Pathways:**
Upon sensing abiotic stress, plants activate complex signal transduction pathways that involve:
- **Calcium Signaling:** Changes in cytosolic calcium levels act as secondary messengers. Calcium-binding proteins, such as calmodulins (CaMs) and calcineurin B-like proteins (CBLs), decode these signals and activate downstream responses.
- **Reactive Oxygen Species (ROS) Signaling:** ROS are produced under stress and function as signaling molecules. Controlled ROS production is crucial for activating defense mechanisms, while excessive ROS can cause cellular damage.
- **Mitogen-Activated Protein Kinase (MAPK) Cascades:** These cascades amplify the stress signal and regulate the expression of stress-responsive genes.
#### 2. Transcriptional Regulation
**2.1. Transcription Factors (TFs):**
TFs are pivotal in regulating the expression of genes involved in stress responses. Key TF families include:
- **AP2/ERF (APETALA2/ETHYLENE RESPONSE FACTOR):** Involved in drought and salinity tolerance.
- **NAC (NAM, ATAF, and CUC):** Play roles in responding to dehydration and high salinity.
- **bZIP (Basic Leucine Zipper):** Associated with responses to various stresses, including drought and oxidative stress.
- **WRKY:** Participate in the regulation of genes involved in stress responses and pathogen defense.
**2.2. Epigenetic Regulation:**
Epigenetic modifications, such as DNA methylation, histone modifications, and chromatin remodeling, influence gene expression without altering the DNA sequence. These modifications can lead to the activation or repression of stress-responsive genes.
#### 3. Stress-Responsive Genes and Proteins
**3.1. Osmoprotectants:**
Plants accumulate osmoprotectants like proline, glycine betaine, and sugars (e.g., trehalose) to maintain cellular osmotic balance under stress conditions.
**3.2. Antioxidant Defense:**
To mitigate oxidative stress, plants enhance the production of antioxidants, such as superoxide dismutase (SOD), catalase (CAT), and peroxidases, which scavenge harmful ROS.
A slightly oblate dark matter halo revealed by a retrograde precessing Galact...Sérgio Sacani
The shape of the dark matter (DM) halo is key to understanding the
hierarchical formation of the Galaxy. Despite extensive eforts in recent
decades, however, its shape remains a matter of debate, with suggestions
ranging from strongly oblate to prolate. Here, we present a new constraint
on its present shape by directly measuring the evolution of the Galactic
disk warp with time, as traced by accurate distance estimates and precise
age determinations for about 2,600 classical Cepheids. We show that the
Galactic warp is mildly precessing in a retrograde direction at a rate of
ω = −2.1 ± 0.5 (statistical) ± 0.6 (systematic) km s−1 kpc−1 for the outer disk
over the Galactocentric radius [7.5, 25] kpc, decreasing with radius. This
constrains the shape of the DM halo to be slightly oblate with a fattening
(minor axis to major axis ratio) in the range 0.84 ≤ qΦ ≤ 0.96. Given the
young nature of the disk warp traced by Cepheids (less than 200 Myr), our
approach directly measures the shape of the present-day DM halo. This
measurement, combined with other measurements from older tracers,
could provide vital constraints on the evolution of the DM halo and the
assembly history of the Galaxy.
Collaborative Team Recommendation for Skilled Users: Objectives, Techniques, ...Hossein Fani
Collaborative team recommendation involves selecting users with certain skills to form a team who will, more likely than not, accomplish a complex task successfully. To automate the traditionally tedious and error-prone manual process of team formation, researchers from several scientific spheres have proposed methods to tackle the problem. In this tutorial, while providing a taxonomy of team recommendation works based on their algorithmic approaches to model skilled users in collaborative teams, we perform a comprehensive and hands-on study of the graph-based approaches that comprise the mainstream in this field, then cover the neural team recommenders as the cutting-edge class of approaches. Further, we provide unifying definitions, formulations, and evaluation schema. Last, we introduce details of training strategies, benchmarking datasets, and open-source tools, along with directions for future works.
Dalghren, Thorne and Stebbins System of Classification of AngiospermsGurjant Singh
The Dahlgren, Thorne, and Stebbins system of classification is a modern method for categorizing angiosperms (flowering plants) based on phylogenetic relationships. Developed by botanists Rolf Dahlgren, Robert Thorne, and G. Ledyard Stebbins, this system emphasizes evolutionary relationships and incorporates extensive morphological and molecular data. It aims to provide a more accurate reflection of the genetic and evolutionary connections among angiosperm families and orders, facilitating a better understanding of plant diversity and evolution. This classification system is a valuable tool for botanists, researchers, and horticulturists in studying and organizing the vast diversity of flowering plants.
Search for Dark Matter Ionization on the Night Side of Jupiter with CassiniSérgio Sacani
We present a new search for dark matter (DM) using planetary atmospheres. We point out that
annihilating DM in planets can produce ionizing radiation, which can lead to excess production of
ionospheric Hþ
3 . We apply this search strategy to the night side of Jupiter near the equator. The night side
has zero solar irradiation, and low latitudes are sufficiently far from ionizing auroras, leading to a lowbackground search. We use Cassini data on ionospheric Hþ
3 emission collected three hours either side of
Jovian midnight, during its flyby in 2000, and set novel constraints on the DM-nucleon scattering cross
section down to about 10−38 cm2. We also highlight that DM atmospheric ionization may be detected in
Jovian exoplanets using future high-precision measurements of planetary spectra.
Transmission Spectroscopy of the Habitable Zone Exoplanet LHS 1140 b with JWS...Sérgio Sacani
LHS 1140 b is the second-closest temperate transiting planet to the Earth with an equilibrium temperature low enough to support surface liquid water. At 1.730±0.025 R⊕, LHS 1140 b falls within
the radius valley separating H2-rich mini-Neptunes from rocky super-Earths. Recent mass and radius
revisions indicate a bulk density significantly lower than expected for an Earth-like rocky interior,
suggesting that LHS 1140 b could either be a mini-Neptune with a small envelope of hydrogen (∼0.1%
by mass) or a water world (9–19% water by mass). Atmospheric characterization through transmission
spectroscopy can readily discern between these two scenarios. Here, we present two JWST/NIRISS
transit observations of LHS 1140 b, one of which captures a serendipitous transit of LHS 1140 c. The
combined transmission spectrum of LHS 1140 b shows a telltale spectral signature of unocculted faculae (5.8 σ), covering ∼20% of the visible stellar surface. Besides faculae, our spectral retrieval analysis
reveals tentative evidence of residual spectral features, best-fit by Rayleigh scattering from an N2-
dominated atmosphere (2.3 σ), irrespective of the consideration of atmospheric hazes. We also show
through Global Climate Models (GCM) that H2-rich atmospheres of various compositions (100×, 300×,
1000×solar metallicity) are ruled out to >10 σ. The GCM calculations predict that water clouds form
below the transit photosphere, limiting their impact on transmission data. Our observations suggest
that LHS 1140 b is either airless or, more likely, surrounded by an atmosphere with a high mean molecular weight. Our tentative evidence of an N2-rich atmosphere provides strong motivation for future
transmission spectroscopy observations of LHS 1140 b.
This an presentation about electrostatic force. This topic is from class 8 Force and Pressure lesson from ncert . I think this might be helpful for you. In this presentation there are 4 content they are Introduction, types, examples and demonstration. The demonstration should be done by yourself
Critical Infrastructures Governance
Exploring SCADA Cybernetics through Architectured Policy Semantic
Djamel Khadraoui and Christophe Feltus
Service Science and Innovation, Public Research Centre Henri Tudor
29, avenue John F. Kennedy
L-1855 Luxembourg-Kirchberg, Luxembourg
christophe.feltus@tudor.lu
Abstract — SCADA systems are very complex, sophisticated and
integrated systems which support people in monitoring and
governing the huge volumes of knowledge engendered by critical
infrastructures (industry, energy, transport, and healthcare).
These systems are built upon a colossal range of
prefabricated components which need to interact thoroughly with
each other although, a priori, they all use heterogeneous
technologies and protocols, they behave unevenly through the
SCADA architecture, and they are all located in miscellaneous
corners of the production system. Furthermore, these
components interact, amongst other, by means of policies which
formulate the reasoning for component behaviour in terms of
expecting actions realization or in terms of accessed information.
This paper explores these policies’ semantics through a unified
component modelling approach with the aim of providing a
homogeneous and coherent framework, adapted for the
governance of the system by all SCADA and non-SCADA
operators.
Keywords — Policy management; SCADA system; architecture;
IS security; critical infrastructure.
I. INTRODUCTION
SCADA (Supervisory Control and Data Acquisition)
systems are very complex, sophisticated and
integrated systems which support people in monitoring and
governing the huge volumes of knowledge engendered by
critical infrastructures (CI – in industry, energy, transport, and
healthcare) [1]. In our previous work, we have defined a
metamodel for the components of the SCADA architecture
[2]. This metamodel has been elaborated acknowledging
traditional enterprise architecture metamodel (EAM) and it
allows modelling each component according to a similar
structure. This structure proposes a representation of the latter
based on a three layered perspective, namely: the organization
layer, the application layer and the technical layer [4, 5].
Contrary to traditional EAM, one particular aspect of the
metamodel elaborated for SCADA’s components is that it is
enriched with the concept of policy at the two upper
architecture layers, thereby making it possible to elicit
organizational policies and application policies [2].
Depicting SCADA systems allows acknowledging the wide
range of components which compose them [6], like for instance:
the remote terminal unit (RTU), the intrusion detection system
(IDS), the monitoring tools, the honeypots, and so forth. These
components need to interact with each other and therefore, it is
fundamental to have the system supported by accurate and
perfectly integrated policies. Indeed, when, for instance, a
monitoring system detects an intrusion, it requests the firewall
to adapt the filtering policy to face this intrusion. This
requirement for adaptation is achieved by a filtering policy
modification, induced by the monitoring tool. Another example
is the alert correlation performed by a correlation engine. The
correlation is achieved according to the correlation policy, which
receives alerts from different network sources and which, based
on these alerts, generates a new policy destined for protection
mechanisms such as the access right manager, the server, and so
forth. In this context, the number of generated policies is very
large. Their
elaboration is based on different (meta-) models [7] which have
different semantics (intrusion detection, alert correlation,
reaction, access control, etc. [8]), and that are activated at
different layers of the network (organizational, application or
technical policies).
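The intrusion example above can be sketched as a minimal event-driven policy adaptation. This is an illustrative sketch only: the class names (Firewall, MonitoringTool), the alert fields, and the policy representation are assumptions, not part of the SCADA metamodel described in this paper.

```python
# Illustrative sketch: a monitoring tool detects an intrusion and requests
# that the firewall adapt its filtering policy. All names are assumptions.

class Firewall:
    def __init__(self):
        # The filtering policy is reduced here to a set of blocked sources.
        self.filtering_policy = {"blocked_sources": set()}

    def adapt_policy(self, source):
        # Filtering-policy modification induced by the monitoring tool.
        self.filtering_policy["blocked_sources"].add(source)

class MonitoringTool:
    def __init__(self, firewall):
        self.firewall = firewall

    def on_alert(self, alert):
        # On a confirmed intrusion, request a filtering-policy adaptation.
        if alert["type"] == "intrusion":
            self.firewall.adapt_policy(alert["source"])

fw = Firewall()
monitor = MonitoringTool(fw)
monitor.on_alert({"type": "intrusion", "source": "10.0.0.42"})
print(fw.filtering_policy["blocked_sources"])  # {'10.0.0.42'}
```

The point of the sketch is only the direction of control: the detection component never rewrites the firewall state directly, it triggers a policy modification that the firewall itself applies.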
The objective of this work is to support the SCADA
manager and human operators with an integrated approach for
designing, managing and monitoring SCADA systems policies.
To that end, we propose a solution which consists in modelling
each SCADA component based on the SCADA metamodel, in
elaborating the connections between the SCADA components’
instances, and in defining the policy acknowledging the
input information, the expected format of the output policy,
and the operational rules. This paper is organized as follows. In
the next section, we propose a synthetic review of the SCADA
metamodel; in Section III, we highlight how a policy may be
more easily engineered following components’ instances and how a
policy monitoring solution may concretely benefit the SCADA
operators and managers. Afterwards we provide a case study
extracted from [15] and finally, we describe the main related
work and conclude the paper.
II. SCADA METAMODEL BACKGROUND
This section recalls our previous work in the field of SCADA
system modelling. We first introduce the SCADA metamodel,
followed by the SCADA modelling layers.
A. SCADA metamodelling insights
Our goal in modelling the SCADA system into a layered
architecture metamodel is to provide CI operators with the
tools for governing SCADA systems (monitoring and decision
making). In previous works [2], we have elaborated such a
SCADA metamodel based on the ArchiMate®
language to give
a multiple layered view of a SCADA component using
policies. To generate the latter, we realized a specialization of
the original ArchiMate®
metamodel for SCADA components.
Firstly, we redefined the Core of the metamodel in order to
figure out the concept of the Policy (Fig. 1.). The Core
represents the handling of Passive Structures by Active
Structures during the realization of Behaviours. For the Active
Structures and the Behaviour, the Core differentiates between
external concepts, which represent how the architecture is
being perceived by the external components (as a Service
Provider attainable by an Interface), and the internal concept
which is composed of Structure Elements (Roles,
Components) and linked to a Policy Execution concept.
Passive Structures contains Object (e.g. data and
organizational object) which represents architecture
knowledge. Secondly, the concept of Policy was defined in
accordance to the SCADA metamodel. The proposed
representation is composed of three elements defining the
Policy:
1. “Event” is defined as something done by a Structure
Element which generates the execution of a Policy.
2. “Context” symbolizes a configuration of Passive Structure
that allows the Policy to be executed (e.g. a security level
or the value of an object).
3. “Responsibility” [9, 10] is defined as a state assigned to an
agent (human or software) to specify obligations and rights
in a specific context [2]. Thereby, responsibilities
correspond to a set of behaviours that have to be performed
by Structure Elements. This behaviour can use Object from
Passive Structure or modify values.
With these three elements, we generate an auxiliary Policy
artefact mirroring the execution of a set of Responsibilities in
a specific Context and in response to a determined Event.
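The three policy elements can be captured in a small data structure. The following is a minimal sketch, assuming Python dataclasses; the field names and the applies() check are illustrative, not prescribed by the metamodel.

```python
from dataclasses import dataclass, field

# Sketch of the Policy artefact: a set of Responsibilities executed in a
# specific Context, in response to a determined Event (names illustrative).

@dataclass
class Policy:
    event: str                 # Event: done by a Structure Element, triggers execution
    context: dict              # Context: configuration of the Passive Structure
    responsibilities: list = field(default_factory=list)  # behaviours to perform

    def applies(self, event, state):
        # The policy executes only for its trigger Event and when the
        # current Passive Structure configuration matches its Context.
        return event == self.event and all(
            state.get(k) == v for k, v in self.context.items()
        )

p = Policy(event="alert_received",
           context={"security_level": "high"},
           responsibilities=["notify_operator", "raise_priority"])
assert p.applies("alert_received", {"security_level": "high"})
assert not p.applies("alert_received", {"security_level": "low"})
```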
Concepts and colours were taken from the original
ArchiMate®
metamodel, except for Organizational Function
and the Application Function which were replaced by the
Organizational Policy concept and the Application Policy
concept. Through the Policy Concept, we show that each
operation done by the SCADA components can be transferred
into a Policy Execution. Although there is a semantic
difference in ArchiMate®
between the application and the user
who exploits the application, in the SCADA domain, we
consider that actors and roles are played by components that
we define as being specific Structure Elements acting in CI
context. Hence, three layers structure the metamodel for the
SCADA components:
1. The Organizational Layer offers products and services to
external customers, which are realized in the organization
by organizational processes performed by Organizational
Roles according to Organizational Policies.
2. The Application Layer supports the Organizational Layer
with Application Services which are realized by
Applications according to Application Policies.
3. The Technology Layer offers Infrastructure Services
needed to run applications, realized by computer and
communication hardware and system software.
Based on this analysis, we had defined the Organizational
Policy as the rules which define the organizational
responsibilities and govern the execution of behaviours, at the
organization domain, that serve the product domain in
response to a process domain occurring in a specific context,
which is symbolized by a configuration of the information
domain. And we defined the Application Policy as the rules
that define the application responsibilities and govern the
execution, at the application domain, of behaviours that serve
the data domain to achieve the application strategy.
B. SCADA metamodel layers
The three layers which structure the SCADA metamodel (Fig.
1) are the Organizational, Application and Technical Layers:
The Organizational Layer highlights the organizational
processes and their links to the Application Layer. At first the
Organizational Layer is defined by an Organizational Role
(e.g. Alert Detection Component). This role, accessible from
the outside through an Organizational Interface, performs
behaviour on the basis of the organization's policy
(Organizational Policy) associated with the role. Then, the
component is able (depending on its role) to interact with other
roles to perform behaviour; this is symbolized by the concept
of Role Collaboration [2]. Organizational Policies are
behavioural components of the organization whose goal is to
deliver an Organizational Service to a role in response to Events.
Organizational Services are contained in Products
accompanied by Contracts. Contracts are formal or informal
specifications of the rights and obligations associated with a
Product. Values are defined as an appreciation of a Service or
a Product that the Organization attempts to provide or acquire.
The Organizational Objects define units of information that
relate to an aspect of the organization.
The Application layer is used to represent the Application
Components and their interactions with the Application
Service derived from the Organizational Policy of the
Organizational Layer. The concept of the components in the
metamodel is very similar to the components concept of UML
[11] and allows representing any part of the program.
Components use Data Object which is a modelling concept of
objects and object types of UML. Interconnection between
components is modelled by the Application Interface in order
to represent the availability of a component to the outside [2]
(implementing a part or all of the services defined in the
Application Service). The concept of Collaboration from the
Organizational Layer is present in the Application Layer as
the Application Collaboration and can be used to symbolize
the cooperation (temporary) between components for the
realization of behaviour. Application Policy represents the
behaviour that is carried out by the components.
The Technical Layer is used to represent the structural
aspect of the system and highlights the links between the
Technical Layer and the Application Layer and how physical
pieces of information called Artefacts are produced or used.
The main concept of the Technical layer is the Node which
represents a computational resource on which Artefacts can be
deployed and executed. The Node can be accessed by other
Nodes or by components of the Application Layer. A Node is
composed of a Device and a System Software [4]. Devices are
physical computational resources where Artefacts are
deployed, while the System Software represents a software
environment for types of components and objects.
Communication between the Nodes of the Technology Layer is
defined logically by the Communication Path and physically
by the Network.
The complete SCADA metamodel is the union of the three
layers. As shown below, new connections between the layers
have appeared. For the Passive Structure we observe that
Artefact of the Technical Layer realizes Data Object of the
Application Layer which, itself, realizes Organizational
Object of the Organizational layer.
Figure 1: Three layers of SCADA system metamodel extracted from [2]
The Behaviour element connections show that the
Application Service uses the Organizational Policy to
determine the services which it proposes. In the same way, the
Technical Layer bases its Infrastructure Service upon the
Application Policy of the Application Layer. Concerning the
Active Structure connections, the Role concept determines,
along with the Application Component, the Interface provided
in the Application layer. The Interface of the Technical Layer
is also based on the components of the Application Layer.
C. Policy modelling
In the Organizational Layer, Organizational Policy can
be represented as an UML Use Case [11] where concepts of
Roles represent the Actors which have Responsibilities in
the Use Case, and the Collaboration concepts show the
connections between them. Concepts of Products, Value and
Organizational Service provide the Goal of the Use Case.
Pre- and Post-conditions model the context of the Use Case
and are symbolized in the metamodel by the Event concept
(pre-condition) and the Organizational Object (pre-/post-
condition). In the Application Layer, Application Policy is
defined as the realization of Responsibilities by the
Application Domain in a configuration of the Data Domain.
UML provides support for modelling the behaviour performed
by the Application Domain as Sequence Diagram.
Configuration of the Data Domain can be expressed as Pre-
conditions of the Sequence Diagram and symbolized by the
execution of a test-method on the lifeline of the diagram.
III. POLICY ENGINEERING
To engineer the SCADA policies, two steps are necessary. The
first one concerns the modelling of each SCADA component
according to the metamodel. The second one concerns the
detection and identification of the connections amongst each
composing artefact of the component models.
A. SCADA metamodel instance per component
This first step aims at providing the SCADA operators and
managers with a holistic and integrated view of the SCADA
architecture building blocks. To that end, the SCADA
metamodel is instantiated for each architecture component.
This step is achieved by shaping the component according to
the three abstractions typically advocated by the enterprise
architecture paradigm. This step allows discovering the
building artefacts of the components as well as the
connections amongst the components artefacts. This unified
representation of each component brings significant benefits
for the SCADA operator, since it provides a global functional
insight into each component irrespective of any implementation
or vendor influence.
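The two engineering steps could be supported by a simple registry that records each component's artefacts (step A) and the policy connections discovered amongst them (step B). The sketch below is hypothetical; the ArchitectureModel class and its methods are illustrative assumptions, not part of the metamodel.

```python
# Hypothetical sketch: register per-component artefacts (step A) and
# record the policy connections amongst component artefacts (step B).

class ArchitectureModel:
    def __init__(self):
        self.components = {}   # component name -> set of artefact names
        self.connections = []  # (master comp, master art, slave comp, slave art)

    def add_component(self, name, artefacts):
        # Step A: one metamodel instance per SCADA component.
        self.components[name] = set(artefacts)

    def connect(self, master_comp, master_art, slave_comp, slave_art):
        # Step B: a policy connection is only valid between artefacts that
        # actually belong to the registered component models.
        assert master_art in self.components[master_comp]
        assert slave_art in self.components[slave_comp]
        self.connections.append((master_comp, master_art, slave_comp, slave_art))

model = ArchitectureModel()
model.add_component("MonitoringTool", {"AlertDetector"})
model.add_component("Firewall", {"FilteringPolicyExecutor"})
model.connect("MonitoringTool", "AlertDetector",
              "Firewall", "FilteringPolicyExecutor")
```

Such a registry is what gives the operator the global, vendor-independent view the text describes: every connection is expressed over the same three-layer component abstraction.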
B. Policy semantic investigation
The unitary SCADA component models are used in the second
step to picture the global structure of the SCADA architecture
and of the connections, in terms of policies, amongst the
components of the architecture. Fig. 2 highlights the two types
of policies found in SCADA architecture:
1) Cognitive Policy
Cognitive Policies [12] are represented in blue in Fig. 2. They
represent policies which govern the behaviour of one artefact
of the component architecture. This policy specifies the rule
that the Responsible artefact needs to follow for the execution
of a defined activity in a specific execution context. This rule
is dictated by the artefact which exists in the same component
or in another one. The artefact which generates the policy is
the Master artefact and the one which executes it is the Slave
artefact. The Cognitive Policy morphology is articulated around
the following set of attributes (adapted from [13]): Master
artefact, Slave artefact, Master component, Slave component,
Behaving rule, Trigger item, Usage context, Priority extension
(Table I).
Table I. Cognitive policy attributes’ name and attributes’ ID
Attribute Name Attribute’s ID
Master artefact CP-Ma-art
Slave artefact CP-S-art
Master component CP-Ma-Com
Slave component CP-S-Com
Behaving rule CP-Ru
Trigger item CP-TI
Usage context CP-UC
Priority extension CP-prior
The application schema of a CP, as presented in Fig. 2, obeys
the following two controls: (1) the communication path is
from a Master structural concept to a Slave behavioral concept
or (2) the communication path is from a Master behavioural
artefact to another Slave behavioural artefact.
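Assuming that each artefact is tagged with its metamodel concept kind, a Cognitive Policy and its two path controls can be sketched as follows. The field names merely mirror the attribute IDs of Table I; the concept-kind tags and the path check are illustrative assumptions.

```python
from dataclasses import dataclass

# Sketch of a Cognitive Policy carrying the attributes of Table I.
# path_is_valid() enforces the two CP communication-path controls:
#   (1) Master structural  -> Slave behavioural, or
#   (2) Master behavioural -> Slave behavioural.

@dataclass
class CognitivePolicy:
    master_artefact: str      # CP-Ma-art
    slave_artefact: str       # CP-S-art
    master_component: str     # CP-Ma-Com
    slave_component: str      # CP-S-Com
    behaving_rule: str        # CP-Ru
    trigger_item: str         # CP-TI
    usage_context: str        # CP-UC
    priority_extension: int   # CP-prior
    master_kind: str = "structural"   # "structural" or "behavioural"
    slave_kind: str = "behavioural"

    def path_is_valid(self):
        # A CP always targets a behavioural Slave artefact.
        return (self.slave_kind == "behavioural"
                and self.master_kind in ("structural", "behavioural"))

cp = CognitivePolicy("AlertDetector", "PolicyExecutor", "IDS", "Firewall",
                     "block source on intrusion", "intrusion alert",
                     "high security level", 1)
assert cp.path_is_valid()
```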
Figure 2. Two types of policy for SCADA. CP in blue and PP in red.
2) Permissive Policy
Permissive Policies are represented in red in Fig. 2. They
govern the knowledge acquisition rules from the Master to the
Slave artefact [14]. This knowledge acquisition traditionally
takes the form of SCADA state data that is accessed or
provided in order to grant the Responsible the access (of the
in, out, or in_out types [16]) to successive Cognitive Policies
when events occur. The Permissive Policy morphology is
articulated on the following set of attributes (adapted from
[15]): Master artefact, Slave artefact, Master component,
Slave component, Permission rules, Permission conditions,
Master permission cardinality, Slave permission cardinality,
and Cognitive constraints (sustained by Cognitive Policy
states) (Table II).
Table II. Permissive Policy attributes’ name and attributes’ ID
Attribute Name Attribute ID
Master artefact PP-Ma-art
Slave artefact PP-S-art
Master component PP-Ma-Com
Slave component PP-S-Com
Permission rules PP-Ru
Permission conditions PP-Condi
Master permission cardinality PP-Ma-Car
Slave permission cardinality PP-S-Car
Cognitive constraints PP-Co.con.
The application schema of a PP, as highlighted in Fig. 2,
obeys the two following controls: (1) the communication path
is from a Master structural artefact to a Slave informational
artefact, or (2) the communication path is from a Master
behavioural artefact to a Slave informational artefact.
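In the same illustrative spirit as the CP sketch above, the PP attributes of Table II and their application-schema controls can be rendered as a record with a validity check. The class and field names, the access-mode set, and the kind discriminators are our assumptions, not the paper's notation.

```python
from dataclasses import dataclass, field

# Hypothetical artefact kinds and access modes (the in/out/in_out types
# of [16]); all names here are illustrative assumptions.
STRUCTURAL, BEHAVIOURAL, INFORMATIONAL = "structural", "behavioural", "informational"
ACCESS_MODES = {"in", "out", "in_out"}

@dataclass
class PermissivePolicy:
    """Sketch of a PP record carrying the attributes of Table II."""
    master_artefact: str          # PP-Ma-art
    slave_artefact: str           # PP-S-art
    master_component: str         # PP-Ma-Com
    slave_component: str          # PP-S-Com
    permission_rules: str         # PP-Ru
    permission_conditions: str    # PP-Condi
    master_cardinality: int       # PP-Ma-Car
    slave_cardinality: int        # PP-S-Car
    cognitive_constraints: list = field(default_factory=list)  # PP-Co.con.
    access_mode: str = "in"
    master_kind: str = STRUCTURAL
    slave_kind: str = INFORMATIONAL

    def schema_valid(self) -> bool:
        # Controls (1) and (2): the Slave end is always informational,
        # the Master end is either structural or behavioural, and the
        # access mode must be one of the recognised types.
        return (self.slave_kind == INFORMATIONAL
                and self.master_kind in (STRUCTURAL, BEHAVIOURAL)
                and self.access_mode in ACCESS_MODES)
```

Note how the two policy types differ only in the kind required at the Slave end (behavioural for CP, informational for PP), which is exactly what the two pairs of controls express.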
IV. USE CASE
To illustrate CP and PP in a SCADA architecture and their
visualization impact on CI governance by the SCADA
operator, we exploit the SCADA model from [15]. We first
describe the case study, then elaborate its building blocks, and
finally discuss the issues.
A. Description of the case study
Fig. 3 introduces the generic building blocks of a SCADA
architecture. This architecture receives input from distributed
probes (I/O from RTU and PLC) and SCADA adaptors,
provides outputs (using a View node visualization system) to
the incident team, and is refined according to new findings by
the Cyber Control Room (CCR). Our approach is illustrated by
three blocks selected according to [17]: the Detection
Correlation, the Online Cyber Analysis and the Visualization
System.
Figure 3. Building blocks of the SCADA system extracted from [15]
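The data flow among the generic building blocks just described can be summarised as a small directed graph. The adjacency list below is our reading of Fig. 3 (node names follow the text; the exact edges are an assumption), with a simple reachability helper showing how probe data ultimately reaches the incident team.

```python
# Illustrative adjacency-list sketch of the Fig. 3 data flow; the edge
# set is our assumption from the textual description, not the figure.
scada_blocks = {
    "RTU/PLC probes": ["Detection Correlation"],
    "SCADA adaptors": ["Detection Correlation"],
    "Detection Correlation": ["Online Cyber Analysis"],
    "Online Cyber Analysis": ["Visualization System"],
    "Cyber Control Room": ["Detection Correlation"],  # refinement loop
    "Visualization System": ["Incident Team"],
}

def downstream(block, graph):
    """Return every block reachable from `block` (iterative DFS)."""
    seen, stack = set(), [block]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

For example, `downstream("RTU/PLC probes", scada_blocks)` contains "Incident Team", reflecting the probe-to-operator pipeline the case study walks through.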
1) Detection correlation
Detection and cyber detection (DCD) [19] compose the
skeleton of the detection correlation mechanism which
supports the SCADA environment. Three main security
zones constitute the source of classified knowledge for this
DCD, namely the corporate networks, the CI, and the field of
correlation that constitutes the input from distributed sensors.
The fields are architected on traditional and SCADA-specific
network components, namely: IDS, honeypots, anti-virus, and
the RTU's mirror. DCD performs a correlation that aggregates
low-validated alerts/events (true-positive detections [20]) into
higher-validated alerts. As highlighted in Fig. 3, cyber threat
and security propagation models support the correlation
mechanism by allowing the analysis and detection of
suspicious signatures and suspicious behaviours in order to
issue a set of Detected Cyber Parameters (DCP).
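The promotion of low-validated alerts into higher-validated ones can be sketched as a simple aggregation. The function below is a minimal illustration under assumed inputs: the alert fields, the grouping by signature, and the two-sensor promotion threshold are all our assumptions, not the DCD's actual correlation logic.

```python
from collections import defaultdict

def correlate_alerts(low_alerts, threshold=2):
    """Aggregate low-validated alerts into higher-validated Detected
    Cyber Parameters (DCPs). Here an alert is promoted once `threshold`
    distinct sensors (IDS, honeypot, anti-virus, RTU mirror) report the
    same signature; key and threshold are illustrative assumptions."""
    by_signature = defaultdict(set)
    for alert in low_alerts:
        by_signature[alert["signature"]].add(alert["sensor"])
    return [{"signature": sig, "sensors": sorted(sensors)}
            for sig, sensors in by_signature.items()
            if len(sensors) >= threshold]

alerts = [
    {"sensor": "IDS", "signature": "modbus-flood"},
    {"sensor": "honeypot", "signature": "modbus-flood"},
    {"sensor": "anti-virus", "signature": "worm-x"},
]
dcps = correlate_alerts(alerts)
# only "modbus-flood" is promoted: two distinct sensors corroborate it
```

The single-sensor "worm-x" alert stays low-validated, which is the intuition behind raising confidence through cross-sensor corroboration.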
2) Online cyber analysis
The Online Cyber Analysis (OCA) module [21] acts as the
heart of the SCADA schema, since it supports the definition of
the Refined Cyber Parameters provided to the Cyber
Simulator. OCA receives as input the DCP from the DCD on
the one hand, and the data from the analysis tools on the other
hand. The Cyber-physical Events are correlated between this
DCD and the Operative Level Analysis module (which we
have decided not to model in this paper since it semantically
acts as the OCA at an operative level).
3) Visualization system
The Visualization System is an extensive module, pictured by
a hexagon symbol, which acts as the ultimate SCADA Vector
of Communication supporting the Incident Team [22].
As depicted in Fig. 3, the input of the Visualization System is
extracted from three fundamental building blocks: the Risk
Evaluation, the Automatic Countermeasures Selection, and the
Service Simulator, which concatenate data from the OCA (and
the Operative Level Analysis) by means of the Cyber Simulator.
The Visualization System supports the SCADA Operators and
Incident Manager, and extensions are possible towards any
ICT Operators.
Figure 4. SCADA architecture extracted from [15] focussed on detection correlation, online cyber analysis and visualization system
B. Building blocks modelling
Fig. 4 illustrates the holistic representation of the SCADA
components realized by the two steps of the policy
engineering schema. In this figure, we focus on three blocks
advocated by [17]: cyber detection/correlation, online cyber
analysis and visualization system. Four additional blocks have
been partially incorporated to support the case description:
IDS, Honeypot, Operative Level Analysis, and the CI. In this
model, we have afterwards considered the CP and PP amongst
artefacts, respectively in blue and in red. Finally, these policies
have been reformulated according to the syntax policy
attributes from Tables I and II, as motivated by [2]. The
DCD is modelled on the left side of Fig. 4 and is associated
through two PPs with the IDS and the honeypots, respectively,
by the Detection Module and the Correlation Application. The
Alert Detection DB Slave artefact at the application layer is
determined, in turn, by the Analytical Function Policy Master
artefact (illustrated in Table III using data from [2]) from the
Online Cyber Analysis module. The OCA module is the
central part of Fig. 4 and acts as a facilitator building block
between the DCD and the Visualization System. The
Confirmed Alert data object from its application layer acts as a
Slave artefact associated with the Visualization Policy. Hence,
the latter is logically of a permissive type and is bound to the
Visualization Policy/Service Master artefact. Moreover, three
CPs are associated with this artefact. The first two are related
to the IDS Application and the Honeypot Application (which
are of Slave type, with read access allowed) and the third is
directed towards the Visualization Policy/Service (which is of
Master type, with write access allowed). Finally, the last
module of Fig. 4 is the Visualization System. The latter is
sustained by a Slave-type CP towards the CI Operators, and by
a Master-type PP towards the CI Nodes.
Table III. CP instantiation for the Analytical Function Policy Master artefact
Attribute ID Attribute Value
CP-Ma-art Organization policy/Service (pII2)
CP-S-art Alert/Detection Database (DB a.dd)
CP-Ma-Com ONLINE CYBER ANALYSIS
CP-S-Com CYBER DETECTION AND CORRELATION
CP-Ru Policy:: pII2, DB a.dd, if Act1On=3
then DB upgrade field DB a.dd
correlation list else unchanged
CP-TI Probs’ flg activate (Act3On)
CP-UC Any time, sector covered plant 2.
CP-prior Probs’ priority (From Act1 to Act9)
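The Table III instantiation can be checked mechanically. Below, the row values are carried in a plain dictionary keyed by attribute ID (data from [2]), and the behaving rule CP-Ru is evaluated by a small function; the function name and the paraphrased rule string are our illustrative assumptions, not part of [2].

```python
# Table III row values keyed by attribute ID (data from [2]).
analytical_function_cp = {
    "CP-Ma-art": "Organization policy/Service (pII2)",
    "CP-S-art": "Alert/Detection Database (DB a.dd)",
    "CP-Ma-Com": "ONLINE CYBER ANALYSIS",
    "CP-S-Com": "CYBER DETECTION AND CORRELATION",
    "CP-Ru": "if Act1On=3 then DB upgrade field DB a.dd "
             "correlation list else unchanged",
    "CP-TI": "Probs' flg activate (Act3On)",
    "CP-UC": "Any time, sector covered plant 2.",
    "CP-prior": "Probs' priority (From Act1 to Act9)",
}

def apply_behaving_rule(act1_on: int) -> str:
    """Illustrative evaluation of CP-Ru: when the trigger value Act1On
    equals 3, the correlation-list field of DB a.dd is upgraded;
    otherwise the database is left unchanged."""
    return "upgrade correlation list" if act1_on == 3 else "unchanged"
```

Evaluating the rule for a trigger value of 3 yields the upgrade branch; any other value leaves the Slave artefact unchanged, matching the else clause of CP-Ru.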
V. RELATED WORKS
The main related work is a framework for assessing
maintainability using EAM, proposed by [23], which sustains
flexible system maintenance by supporting SCADA operators
in assessing the management of the architecture. In line with
EAM, [24] provides a solution to develop and adapt the
security analysis framework for the architectural language,
and [25] refines the monitoring paradigm using EAM. [26]
tackles visualization through a real-time SCADA and very
large-scale integration (VLSI) monitoring interface, as
underlined by [27]. Cyber security policy has been reviewed
by [3], who summarizes cyber security problems and the
possible existence of vulnerabilities. This was corroborated by
[17, 19, 20], who strengthen new policy EAM based on
modelling needs as well as on bound exploitation perspectives.
VI. CONCLUSIONS
The huge amount of information managed by CI argues for the
support of a cybernetic SCADA, which behaves as a very
complex and sophisticated system. The latter supports human
operators in monitoring and governing the system security by
elaborating the operational policies amongst the architecture
components. This paper explores the usage of EAM to
construct an integrated SCADA metamodel dedicated to these
components' artefacts and structured according to (1) three
layers of abstraction, namely the Organization, Application,
and Technical layers, and (2) two semantically consistent
types of policies: the Permissive Policy and the Cognitive
Policy. The results of the SCADA modelling and policy
engineering approach constitute, as illustrated by the case
study, a global analytical tool for SCADA operators, who rely
on a rational and unified component-security-based
architecture to continuously monitor and manipulate the
policy attributes while acknowledging their impact on the
whole CI system.
ACKNOWLEDGMENT
The research is funded by the CockpitCI project within the 7th
framework Programme of the European Union (topic SEC-
2011.2.5-1 – Cyber-attacks against critical infrastructures).
REFERENCES
[1] Briesemeister, L., et al. Detection, correlation, and visualization of
attacks against critical infrastructure systems, 8th PST, 2010, Canada.
[2] Blangenois, J., Guemkam, G., Feltus, C., Khadraoui, D., Organizational
Security Architecture for Critical Infrastructure, 8th International
Workshop on Frontiers in Availability, Reliability and Security, ARES
2013, Regensburg, Germany
[3] Tan, S. Electric Power Automation Control System Based on SCADA
Protocols. Proceedings of the International Conference on Information
Engineering and Applications (IEA) 2012. Springer London, 2013.
[4] Lankhorst, M. ArchiMate language primer, 2004.
[5] Zachman, J. A. 2003. The Zachman Framework For Enterprise
Architecture: Primer for Enterprise Engineering and Manufacturing.
Engineering, no. July: 1-11.
[6] Li, D., Serizawa, Y., Kiuchi, M. Concept Design for a Web Based
Supervisory Control and Data-Acquisition (SCADA) System, IEEE PES
Transmission and Distribution Conference, Yokohama, 2002, pp. 32-36.
[7] Baskerville, R., Siponen, M. (2002). An information security meta-policy
for emergent organizations. Logistics of Information Management,
15(5/6) pp. 337-46.
[8] Chang, D., Patra, A., Bagepalli, N., et al. Zone-Based Firewall Policy
Model for a Virtualized Data Center. U.S. Patent No 20,130,019,277.
[9] Guemkam, G., Feltus, C., Bonhomme, C., Schmitt, P., Khadraoui, D.,
Guessoum, Z. Reputation based Dynamic Responsibility to Agent for
Critical Infrastructure, IEEE/WIC/ACM International Conference on
Intelligent Agent Technology, 22-27/8/2011, Lyon, France.
doi>10.1109/WI-IAT.2011.194
[10] Bonhomme, C., Feltus, C., Khadraoui, D. Dynamic Responsibilities
Assignment in Critical Electronic Institutions - A Context-Aware
Solution for in Crisis Access Right Management, ARES 2011, Vienna,
Austria. DOI: 10.1109/ARES.2011.43
[11] UML 2 (http://www.uml.org/)
[12] Xu, W., Zhang, X., and Jahn, G. Towards system integrity protection
with graph-based policy analysis. In : Data and Applications Security
XXIII. Springer Berlin Heidelberg, 2009. p. 65-80.
[13] Doherty et al.. (2009). The information security policy unpacked: A
critical study of the content of university policies. IJIM 29(6)
[14] Barth, A et al. (2009). Securing frame communication in browsers.
Communications of the ACM, 52(6), pp. 83-91.
[15] Nicholson, A. et al. (2012) SCADA security in the light of Cyber-
Warfare. Computers and Security , 31 (4), pp. 418-436.
[16] Maj, S. P., Makasiranondh, W., & Veal, D. (2010). An Evaluation of
Firewall Configuration Methods. IJCSNS, 10(8), 1.
[17] Briesemeister, L., Cheung, S., Lindqvist, U., & Valdes, A. (2010,
August). Detection, correlation, and visualization of attacks against
critical infrastructure systems. In Privacy Security and Trust (PST),
2010 Eighth Annual International Conference on (pp. 15-22). IEEE.
[18] Rakocevic, V., et al. Computational intelligence in a real-time SCADA
system to monitor and control continuous casting of steel billets.
Systems, Man and Cybernetics, 1995. IEEE, Vol. 2.
[19] Daryabar, F., et al., Towards secure model for SCADA systems,
International Conference on Cyber Security, Cyber Warfare and Digital
Forensic (CyberSec), 2012 60, 64, 2012.
[20] Gill, R., Smith, J., & Clark, A. (2006, January). Experiences in passively
detecting session hijacking attacks in IEEE 802.11 networks. In
Proceedings of the 2006 Australasian workshops on Grid computing and
e-research-Volume 54 (pp. 221-230). Australian Computer Society, Inc.
[21] Fink, G. A., et al. Visualizing cyber security: Usable workspaces.
Visualization for Cyber Security, 2009. VizSec 2009. 6th International
Workshop on. IEEE, 2009.
[22] Kitamura, M., Kojima, T., & Nishida, S. (2006). SCADA Data
Visualization Using Equipment Graphs. IEEE Transactions on
Electronics, Information and Systems, p. 126, pp.788-796.
[23] Lagerström, R. Analyzing system maintainability using enterprise
architecture models. Proceedings of the 2nd Workshop on Trends in
Enterprise Architecture Research (TEAR 2007).
[24] Ekstedt, M., Sommestad, T., Enterprise architecture models for cyber
security analysis, PSCE '09. IEEE/PES, 1,6, 15-18 March 2009
[25] Stanescu, A. M. et al. Supervisory control and data acquisition for
virtual enterprise, IJPR, Vol. 40, Iss. 15, 2002.
[26] Qiu, B., et al. "Internet-based SCADA display system." Computer
Applications in Power, IEEE 15.1 (2002): pp. 14-19.
[27] Constantinescu, C. Trends and challenges in VLSI circuit reliability.
Micro, IEEE 23.4 (2003): pp. 14-19.