This document proposes a framework for regulating security policies that integrates business requirements. It describes an architecture for a policy regulation system dedicated to computer network security. The architecture is based on identifying phases to react to failures or attacks. It aims to react quickly to attacks by implementing immediate countermeasures, while also adopting new policies to prevent future attacks, taking business goals into account.
This document discusses the need for project management in information security projects. It explains that most information security projects require a trained project manager or skilled IT manager to oversee implementation. The project manager's role is crucial to the success of complex security projects. The document also outlines technical and non-technical considerations for implementing a project plan, such as conversion strategies, change management processes, and organizational readiness for change.
This document discusses information security policies and their components. It begins by outlining the learning objectives, which are to understand management's role in developing security policies and the differences between general, issue-specific, and system-specific policies. It then defines what policies, standards, and practices are and how they relate to each other. The document outlines the three types of security policies and provides examples of issue-specific and system-specific policies. It emphasizes that policies must be managed and reviewed on a regular basis to remain effective.
SECURE SERVICES: INTEGRATING SECURITY DIMENSION INTO THE SA&D (cscpconf)
Services security is often reduced to a set of software solutions (firewalls, data encryption) that rarely consider organizational security rules as a fundamental part of the services security policy. With the increasing use of new services architectures (open services architectures, distributed databases, multiple web servers, multi-tier application servers), security leaks become critical, and every security problem harms the organization's business continuity. To detect and reduce major security risks at an earlier stage of a services project, our approach is based on knowledge exchanges between the end users, analysts, designers, and developers collaborating on the project. The knowledge is mainly oriented toward the detection of weak signals inside the organization. In this paper, we present the different kinds of knowledge surrounding a services project, together with a knowledge pattern structure that can be used to formalize the exchanges that should be established between the participants during the project.
The document discusses information security system implementation and certification. It explains how an organization's security blueprint becomes a project plan, addressing organizational considerations. A project manager plays a key role in successfully implementing complex security projects using technical strategies and models. Organizations face nontechnical challenges when implementing rapid security changes and must certify systems through processes like NIST and ISO to verify security controls meet requirements.
The document discusses the implementation phase of a security project life cycle. It explains that an organization's security blueprint must be translated into a detailed project plan that addresses leadership, budget, timelines, staffing needs, and organizational considerations. An effective project plan uses a work breakdown structure and considers financial, priority, scheduling, procurement, and change management factors. The project manager plays a key role in planning, supervising, and wrapping up the project successfully.
Attacks on the enterprise are getting increasingly sophisticated. Current solutions do not seem adequate given the innovativeness, precision, and persistence of these attacks, which come in different forms and dimensions. Against this backdrop, organisations want to increase the sophistication of both their employees and the solutions they deploy.
This document introduces the IT Baseline Protection Manual, which provides standard security measures and guidance to help organizations securely configure typical IT systems. It aims to simplify the process of developing an IT security policy by eliminating the need for complex threat and risk analyses. The manual is continuously updated to reflect new IT developments and user feedback. It contains modules on topics like IT security management, infrastructure, networked and non-networked systems, data transmission, and telecommunications.
This document discusses principles of software design for information security. It summarizes key software design principles identified by Saltzer and Schroeder, including least privilege and separation of duties. It also outlines the National Institute of Standards and Technology's (NIST) approach to securing the software development lifecycle (SDLC), which involves integrating security early and conducting activities like risk assessments and testing at each phase. Finally, it describes various security roles in an organization, including the chief information security officer, security project team, data owners and custodians, and communities of interest.
The document discusses the need for nuclear facilities to secure portable media devices due to threats of cyber attacks. It outlines regulations from the Nuclear Regulatory Commission requiring facilities to implement cyber security programs, including controlling portable media. The document recommends designing secure data workflows that incorporate user authentication, file scanning, and use of kiosks to scan all portable media before entering secure areas in order to establish multiple layers of protection against known and unknown threats.
This document provides an introduction to information security. It discusses the history of computer security and how it evolved into information security. Key topics covered include the definition of information security, the systems development life cycle for security, and security professionals' roles in an organization. The document presents information security as both an art and a science due to its complex interactions between users, policies, and technology controls.
This document discusses the design of security architecture and contingency planning. It covers spheres of security and levels of controls that make up a security framework. Defense in depth through multiple layers of controls is described. The importance of security education, training, and awareness programs is emphasized to reduce accidental breaches and build security knowledge. Contingency plans like incident response, disaster recovery, and business continuity plans aim to restore operations during and after incidents. The contingency planning process involves impact analysis, preventive controls, recovery strategies, plan development, testing and more.
WIRELESS SECURITY MEASUREMENT USING DATA VALUE INDEX (IJNSA Journal)
Nowadays, the use of wireless technology in organizations is routine, and the technology has spread into virtually every area. Organizations employing wireless technology need to apply the proper security level, depending on an already defined security policy. A security system that is applied but not required, or required but not provided, is an improper security system. In this paper we show a way to evaluate the significance of data and its appropriate security level. We present a model that evaluates the cost of data from a security point of view by considering parameters such as sensitivity, volume, lifetime, and frequency; this research enables organizations to predict and understand the cost involved in securing their data by measuring the data's value. We used questionnaire and survey methodologies to collect the data, then used SPSS and SAS to calculate and design the model. Regression and bootstrap methods helped us reach accurate results.
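A data-value score of this kind can be sketched in a few lines of Python. This is a hedged illustration only: the attribute names follow the abstract above, but the weights, thresholds, and scale are assumptions, not the paper's fitted regression coefficients.

```python
# Hypothetical sketch of a data-value scoring model in the spirit of the
# abstract above; weights and thresholds are illustrative assumptions.

def data_value_score(sensitivity, volume, lifetime, frequency,
                     weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine normalized data attributes (each in [0, 1]) into one score."""
    attrs = (sensitivity, volume, lifetime, frequency)
    return sum(w * a for w, a in zip(weights, attrs))

def security_level(score):
    """Map the score onto a coarse security level."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

# Example: highly sensitive, frequently accessed customer records.
score = data_value_score(sensitivity=0.9, volume=0.5, lifetime=0.6, frequency=0.8)
print(security_level(score))  # prints "high"
```

The point of the model is the mapping itself: high-value data justifies costlier controls, while low-value data does not, which is exactly the over/under-provisioning mismatch the abstract warns about.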
The document discusses a new paradigm called the Unified Access and Application Delivery Methodology (UAADM) that aims to address shortcomings of traditional network security architectures. The UAADM revolves around how networks connect users to applications, considering access context and security profiles. It proposes a Unified Access and Application Delivery Controller that examines access requests, matches context to resource requirements, and intelligently applies services like caching, compression, and security screening. The methodology is presented as addressing issues with traditional approaches like lack of extensibility, complexity, and separate network and security designs.
Five principles for improving your cyber security (WGroup)
The document discusses cyber security risks for businesses and provides five principles for improving cyber security. It notes that as corporate assets have increasingly become virtual, cyber security risks have also increased. The five principles are: 1) Identifying security risks and determining how to address them, 2) Managing risks through resource allocation and transferring risks, 3) Understanding legal implications of breaches, 4) Obtaining technical expertise on security issues, and 5) Having expectations and oversight of the cyber security program.
This document discusses information security policies, standards, and practices. It explains the different types of security policies an organization may have, including general security policies, issue-specific policies, and system-specific policies. It emphasizes the importance of management support for security policies and outlines the key components of an information security blueprint, including management controls, operational controls, and technical controls. The document also discusses the importance of security education, training, and awareness programs to ensure all employees understand and comply with security policies and procedures.
The technology behind information systems has become embedded in nearly every aspect of our lives, so securing our information systems and computer networks is now paramount. Given the significance of computer networks in transporting the information and knowledge generated by increasingly diverse and sophisticated computational machinery, it is imperative to engage network security professionals to manage the resources that pass through the network's endpoints, so as to achieve maximum reliability of the information carried without creating a discrepancy between the security and usability of the network. This paper examines the techniques involved in securely maintaining the safe states of an active computer network, its resources, and the information it carries. We examine techniques for compromising an information system by breaking in without authorized access (hacking), review the phases of digital analysis of an already compromised system, and investigate the tools and techniques for digitally analyzing a compromised system in order to bring it back to a safe state.
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture Engineering,
Aerospace Engineering.
This document discusses building a responsibility model using modal logic. It begins with a literature review of existing policy models and engineering methods related to concepts of accountability, capability and commitment. It identifies that while some concepts like rights and roles are commonly addressed, models do not fully cover all responsibility components. The document then proposes a preliminary responsibility model and defines the main concepts of capability, accountability and commitment. It suggests a formalization of these concepts using deontic logic to help analyze organizational structures and policies for consistency and problems.
This document provides a preliminary literature review of policy engineering methods related to the concept of responsibility. It summarizes key access control models and discusses how they address concepts like capability, accountability, and commitment. The document also reviews engineering methods and how they incorporate responsibility considerations. The overall goal is to orient further research towards a new policy model and engineering method that more fully addresses stakeholder responsibility.
This document proposes a methodology for aligning business and IT policies using a responsibility model. The methodology is a five-step approach consisting of collecting information, defining capabilities, accountabilities and commitments, linking responsibilities to processes, validating the model, and defining policies. It is illustrated with a case study from an IT company where they define an access control policy using this methodology and responsibility model. The responsibility model defines three components - capabilities, accountabilities, and commitments - to clarify roles and responsibilities for policy definition.
This document proposes enhancements to the Role-Based Access Control (RBAC) model by integrating the concept of responsibility. It summarizes the existing RBAC model and user-role/permission-role assignment processes. It then presents a responsibility model built around three concepts: an employee's obligations derived from responsibilities, the rights required to fulfill obligations, and the employee's commitment to fulfill obligations. The paper argues RBAC could be improved by incorporating acceptance of responsibility within the role assignment process. It proposes integrating the responsibility model with RBAC to address identified weaknesses and modeling the integrated model using the OWL ontology language.
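The core proposal — that role assignment should only take effect once the employee has committed to the responsibility attached to the role — can be sketched as follows. This is a minimal illustration under assumed names (`Role`, `User`, `accept_responsibility` are invented for this sketch); the paper itself formalizes the integrated model in OWL, not in code.

```python
# Minimal sketch of RBAC extended with responsibility acceptance.
# Class and method names are illustrative; the paper models this in OWL.

class Role:
    def __init__(self, name, permissions, obligations):
        self.name = name
        self.permissions = set(permissions)   # rights needed to fulfil obligations
        self.obligations = set(obligations)   # obligations derived from responsibility

class User:
    def __init__(self, name):
        self.name = name
        self.assigned = {}  # role -> has the responsibility been accepted?

    def assign_role(self, role):
        # Assignment alone grants nothing: commitment is still pending.
        self.assigned[role] = False

    def accept_responsibility(self, role):
        # The employee commits to the role's obligations.
        if role in self.assigned:
            self.assigned[role] = True

    def permissions(self):
        # Rights are granted only for roles whose responsibility was accepted.
        perms = set()
        for role, accepted in self.assigned.items():
            if accepted:
                perms |= role.permissions
        return perms

auditor = Role("auditor", {"read_logs"}, {"report_incidents"})
alice = User("alice")
alice.assign_role(auditor)
print(alice.permissions())          # empty: assigned but not yet committed
alice.accept_responsibility(auditor)
print(alice.permissions())          # {'read_logs'}
```

The design choice this illustrates is the paper's central argument: separating assignment from acceptance closes the gap in plain RBAC where rights flow from a role the employee never agreed to be accountable for.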
Social Communities: Don't end up making them virtual ghost towns. (Sanjay Abraham)
Social communities can add a lot of value if they are properly built and nurtured within an enterprise. More than a technology or platform, a social community is about people. The technology may be worth millions, but if people are not engaging, the community is bound to fail.
This document discusses two open-source e-learning platforms developed in Luxembourg: AnaXagora and OpenMCMS. AnaXagora was created through a collaborative process between the CRPHT and other partners. It was developed from the open-source platform Ganesha, adding new functionality. OpenMCMS was created by the CVCE to support their European Navigator knowledge base, providing content management and multimedia capabilities. Both platforms use open-source philosophies and licenses to promote sharing and dissemination of knowledge.
This document describes an Arduino lab experiment to generate an SOS signal using LED blinks of different durations. It defines a pin for the LED, sets it as an output, and uses a for loop in the main loop function to blink the LED three times at 100 ms intervals and three times at 300 ms intervals to create the Morse code pattern for SOS, the pattern long associated with distress signals. The document provides the author's contact information for additional questions.
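The timing logic the lab describes can be sketched outside the Arduino environment. The Python rendering below builds the on/off schedule rather than driving hardware; the 100 ms and 300 ms durations follow the summary above, and the full SOS pattern is assumed to be the standard three short, three long, three short.

```python
# Sketch of the SOS blink schedule as (led_state, milliseconds) pairs.
# On real Arduino hardware each pair would be a digitalWrite plus delay.

SHORT_MS = 100  # dot duration from the lab description
LONG_MS = 300   # dash duration from the lab description

def blinks(duration_ms, count=3):
    """One Morse group: `count` blinks of the given duration."""
    schedule = []
    for _ in range(count):
        schedule.append(("ON", duration_ms))
        schedule.append(("OFF", duration_ms))
    return schedule

def sos_schedule():
    # S = three short, O = three long, S = three short
    return blinks(SHORT_MS) + blinks(LONG_MS) + blinks(SHORT_MS)

schedule = sos_schedule()
print(len(schedule))             # 18 state changes in total
print(schedule[0], schedule[6])  # ('ON', 100) ('ON', 300)
```

Separating the schedule from the output pin is also good Arduino practice: the same list could drive a buzzer or serial log without touching the timing code.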
A talk by Igor Lukic of Zendal Backup, given at the 1st Summer Course on Computer Forensics at the Faculty of Computer Science of the University of A Coruña.
This document proposes an automatic reaction strategy for critical infrastructure SCADA systems. It defines a three-layer metamodel for modeling SCADA components and two types of policies (cognitive and permissive) that govern component behavior. It then presents a two-phase method for identifying these policies from the SCADA architecture and formalizing them to support an automatic reaction strategy. This strategy is modeled as an integral part of the SCADA architecture using the defined metamodel and policy identification method. It includes organizational and application layers with main actors, strategies, and components that realize the reaction policies based on expected automation levels.
I assigned Newsweek's Johnnie Roberts to bring the complicated world of broadcasting mogul Cathy Hughes to light in this business profile for Essence. I give him high marks for shoe-leather reporting.
Blogging can be a way to make money if simple steps are followed such as going to the link provided, which likely details how to start a blog and monetize it through ads or affiliate marketing in order to earn income from the comfort of your own home. The document uses capital letters and exclamation points to emphasize that blogging can be an easy way for anyone to earn money online.
The newsletter provides the following information:
1) Susan Ardrey, a part-time reference librarian, retired in December 2009 after many years of service at Indiana University Kokomo in various roles.
2) A student donated materials from a World War II history class to the library's special collections, including books and an autobiography about the 100th Infantry Battalion.
3) The library has gained online access to over 1,100 Blackwell-published journal titles through a new agreement with Wiley InterScience.
Harbor Research - Designing Security for the Internet of Things & Smart Devices (Harbor Research)
The document discusses the growing security challenges posed by the increasing number of internet-connected devices (the Internet of Things). It notes that while the Internet has enabled widespread connectivity, the underlying architecture is still vulnerable to security issues. The company Mocana has developed a unique approach to networked device security that could provide a foundation for security in an economy powered by trillions of interconnected devices and sensors.
Agile Business Intelligence is taking shape as the way to address the disconnect between Business users and IT developers of BI applications. Find out how Yellowfin is making Agile BI easy.
The document summarizes that TBD Enterprises is committed to being a partner to customers and helping them succeed by providing the highest quality and cost-effective automation solutions and services. It presents TBD as offering complete solutions for robotic and non-robotic automation needs across various industries and processes, as well as related services from engineering to shipping. Customers are encouraged to contact TBD to have their production processes or products reviewed for potential optimizations.
A security decision reaction architecture for heterogeneous distributed network (christophefeltus)
This document proposes a multi-agent system architecture for reacting to security alerts in heterogeneous distributed networks. The architecture has three layers - low, intermediate, and high - and consists of agents that perform alert correlation, reaction decision making, and policy deployment. The agents communicate by exchanging messages. The architecture is intended to allow for quick and efficient reaction to security attacks while ensuring coordinated configuration changes across network components. It was developed and illustrated using a case study of a medical application distributed across buildings, campuses, and metropolitan areas.
This document proposes a multi-agent system architecture for reacting to security alerts in heterogeneous distributed networks. The architecture has three layers - a low level that interfaces with the target infrastructure, an intermediate level that correlates alerts from different domains and deploys reaction actions, and a high level global view. It uses an ontology and Bayesian network based decision support system to help agents make decisions according to preferences and influence diagrams. The approach is illustrated using a case study of a medical application distributed across buildings, campuses and metropolitan areas.
The document proposes a multi-agent system architecture for incident reaction in telecommunication networks. The architecture has three layers - low level at the network interface, intermediate level to correlate alerts, and high level with a global view. Agents represent components like alert correlation, reaction decision-making, and policy deployment. The reaction decision agent receives alerts and decides if a reaction is needed based on policies, organization knowledge, and specified behavior. It defines new policy rules for the reaction. The policy deployment agent instantiates and sends the new policies to policy enforcement points to change the network security state. A decision support system using ontologies, Bayesian networks, and influence diagrams helps the agents make decisions.
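The reaction-decision step described above can be caricatured in a few lines. This is a deliberately toy sketch: the alert fields, policy names, and rule templates are invented for illustration, and the real system decides via ontologies, Bayesian networks, and influence diagrams rather than a lookup table.

```python
# Toy sketch of the reaction-decision agent: an alert arrives, policy
# determines whether a reaction is needed, and if so a new policy rule is
# produced for the deployment agent. All names here are illustrative.

REACTION_POLICIES = {
    # alert type -> (severity threshold, rule template)
    "brute_force": (3, "block_source:{source}"),
    "port_scan": (5, "rate_limit:{source}"),
}

def decide_reaction(alert):
    """Return a new policy rule, or None when no reaction is required."""
    entry = REACTION_POLICIES.get(alert["type"])
    if entry is None:
        return None  # no policy covers this alert type
    threshold, template = entry
    if alert["severity"] < threshold:
        return None  # below the reaction threshold
    return template.format(source=alert["source"])

rule = decide_reaction({"type": "brute_force", "severity": 4, "source": "10.0.0.7"})
print(rule)  # block_source:10.0.0.7 -> handed to the policy deployment agent
```

Even in this reduced form, the split the architecture relies on is visible: deciding *whether and what* to react (the decision agent) is kept separate from *where and how* the rule is enforced (the deployment agent and the enforcement points).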
A multi agent based decision mechanism for incident reaction in telecommunica... (christophefeltus)
The document proposes a multi-agent based decision system for responding to incidents in telecommunications networks. It describes a three-layer distributed architecture with low, intermediate, and high levels to coordinate incident response. The low level interfaces with the network, the intermediate level correlates alerts and deploys response actions, and the high level has a global view for decision making. The architecture uses multi-agent systems for autonomous response capabilities. It also incorporates an OntoBayes model to help agents make decisions based on preferences, ontology, Bayesian networks, and influence diagrams. The approach was tested for data access control and aims to enable timely, adaptive incident response across complex, distributed infrastructure.
Multi agents based architecture for IS security incident reaction (christophefeltus)
This document proposes a multi-agent architecture for responding to security incidents in information systems. The architecture has three layers: a low level that interfaces with the targeted infrastructure, an intermediate level that correlates alerts and deploys response actions using multi-agent systems, and a high level that provides supervision and manages business policies. The architecture was designed based on requirements like scalability, availability, autonomy, and global supervision. It aims to quickly and efficiently respond to attacks while ensuring responses do not violate business policies. The document then discusses using a multi-agent system with JADE to represent nodes in the architecture and facilitate communication and coordination between components for selecting and deploying response policies.
This document proposes a context-aware solution for dynamically assigning responsibilities and access rights to agents in a critical infrastructure security architecture during a crisis. It introduces the concept of agent responsibility, which is assigned based on the crisis type and severity. Responsibilities define an agent's obligations and accountabilities for tasks, as well as the necessary rights and capabilities. The architecture enhances an existing multi-agent reaction system called ReD by integrating a mechanism for dynamically changing responsibility assignments according to the crisis context, and granting access rights based on the agents' responsibilities. This allows the architecture to quickly adapt its response by reallocating functions when agents are compromised during an attack.
Towards an innovative systemic approach of risk management (christophefeltus)
This document proposes an innovative systemic approach to managing risks across interconnected sectors in Luxembourg's digital economy. It discusses how individual sectors are increasingly interdependent, so risks in one sector can impact others. The authors argue a systemic risk management approach is needed to improve accuracy, reactivity and minimize risk propagation across sectors. They describe ongoing work to develop an enterprise architecture model to assess cross-sector risks using proof of concepts with Luxembourg's regulators and ICT providers. The goal is a common risk management framework and interface to facilitate agreement between actors and oversee risks at a national level.
This document proposes an innovative systemic approach to risk management across interconnected sectors. It suggests using enterprise architecture models to manage cross-sector risks in Luxembourg's complex ICT ecosystem. The approach would provide regulators an overview of all players and systems, as well as models of different sectors to analyze collected data and risks at a national level, fostering accurate and reactive risk mitigation across economic domains.
Essay Questions: Answer all questions below in a single document, pr.docx (jenkinsmandie)
Essay Questions
Answer all questions below in a single document, preferably below the corresponding topic.
Responses should be no longer than half a page.
One
1. A security program should address issues from a strategic, tactical, and operational view. The
security program should be integrated at every level of the enterprise’s architecture. List a
security program in each level and provide a list of security activities or controls applied in these
levels. Support your list with real-world application data.
2. The objectives of security are to provide availability, integrity, and confidentiality protection to
data and resources. List examples where an asset could lose these security states when
attacked, compromised, or made vulnerable. Your examples could include fictitious assets
that have undergone some changes.
3. Risk assessment can be completed in a qualitative or quantitative manner. Explain each risk
assessment methodology and provide an example of each.
Two
1. Access controls are security features that are usually considered the first line of defense in
asset protection. They are used to dictate how subjects access objects, and their main goal is to
protect the objects from unauthorized access.
These controls can be administrative, physical, or technical in nature and should be applied in a
layered approach, ensuring that an intruder would have to compromise more than one
countermeasure to access critical assets. Explain each of these controls of administrative,
physical, and technical with examples of real-world applications.
2. Access control defines how users should be identified, authenticated, and authorized. These
issues are carried out differently in different access control models and technologies, and it is up
to the organization to determine which best fits its business and security needs. Explain each of
these access control models with examples of real-world applications.
3. The architecture of a computer system is very important and comprises many topics. The
system has to ensure that memory is properly segregated and protected, ensure that only
authorized subjects access objects, ensure that untrusted processes cannot perform activities
that would put other processes at risk, control the flow of information, and define a domain of
resources for each subject. It also must ensure that if the computer experiences any type of
disruption, it will not result in an insecure state. Many of these issues are dealt with in the
system’s security policy, and the security model is built to support the requirements of this
policy. Given these definitions, provide an example where you could better design computer
architecture to secure the computer system with real-world applications. You may use fictitious
examples to support your argument.
Three
1. Our distributed environments have put much more responsibility on the individual user, facility
management, and administrative procedures and controls than in th.
Blueprint for Cyber Security Zone Modeling (ITIIIndustries)
The increasing need to implement on-line services across all industries has placed greater focus on the security controls deployed to protect the corporate network. The demand for cyber security grows further when IT solutions are built to operate in the cloud. As more business activities are migrated to the on-line channel, the security protection systems must cater for a variety of applications, including access for enterprise users who are mobile, working from home, or situated at business partner locations. One set of key security measures deployed to protect the enterprise perimeter includes firewalls, network routers, and access gateways. In addition, a set of controls is also in place for cloud-enabled IT solutions. Collectively these components make up a set of protection systems referred to as security zones. In this paper, a security zone model that has been deployed in practice in the industry is presented. The zone model serves as a design blueprint to validate existing architectures or to assist in the design of new cyber security zone deployments.
ISE VIII: Information and Network Security [10IS835] solution (Vivek Maurya)
This document contains the question paper solution from VTU for the course Information and Network Security 10IS835. It discusses various topics in system security policies, including:
- How managerial guidelines and technical specifications can be used in system-specific security policies.
- Who is responsible for policy management and how policies are managed.
- The different approaches for creating and managing issue-specific security policies.
- The major steps and components of contingency planning, including the business impact analysis.
- Pipkin's three categories of incident indicators and the ISO/IEC 270xx standard for information security management.
- The importance of incident response planning and testing security response plans.
What is Security? 1.1 Introduction: The central role of co.docx (moggdede)
What is Security? 1.1 Introduction
The central role of computer security for the working of the economy, the defense of the country, and the protection of our individual privacy is universally acknowledged today. This is a relatively recent development; it has resulted from the rapid deployment of Internet technologies in all fields of human endeavor and throughout the world that started at the beginning of the 1990s. Mainframe computers have handled secret military information and personal computers have stored private data from the very beginning of their existence in the mid-1940s and 1980s, respectively. However, security was not a crucial issue in either case: the information could mostly be protected in the old-fashioned way, by physically locking up the computer and checking the trustworthiness of the people who worked on it through background checks and screening procedures. What has radically changed and made the physical and administrative approaches to computer security insufficient is the interconnectedness of computers and information systems. Highly sensitive economic, financial, military, and personal information is stored and processed in a global network that spans countries, governments, businesses, organizations, and individuals. Securing this cyberspace is synonymous with securing the normal functioning of our daily lives.
Secure information systems must work reliably despite random errors, disturbances, and malicious attacks. Mechanisms incorporating security measures are not just hard to design and implement but can also backfire by decreasing efficiency, sometimes to the point of making the system unusable. This is why some programmers used to look at security mechanisms as an unfortunate nuisance; they require more work, do not add new functionality, and slow down the application and thus decrease usability. The situation is similar when adding security at the hardware, network, or organizational level: increased security makes the system clumsier and less fun to use; just think of the current airport security checks and contrast them to the happy (and now so distant) pre–September 11, 2001 memories of buying your ticket right before boarding the plane. Nonetheless, systems must work, and they must be secure; thus, there is a fine balance to maintain between the level of security on one side and the efficiency and usability of the system on the other. One can argue that there are three key attributes of information systems:
Processing capacity—speed
Convenience—user friendliness
Security—reliable operation
The process of securing these systems is finding an acceptable balance of these attributes.
1.2 The Subject of Security
Security is a word used to refer to many things, so its use has become somewhat ambiguous. Here we will try to clarify just what security focuses on. Over the years, the subject of information security has been considered from a number of perspectives, as a concept, a function, and ...
This document outlines a 5-step process for managing organizational ICT security:
1. Identify the organization's business objectives to ensure ICT resources support them.
2. Identify all ICT resources, including network infrastructure, servers, user devices, and hardware.
3. Identify and assess risks to ICT resources, such as theft, damage, and unauthorized access, and prioritize them based on likelihood and cost.
4. Develop activities to mitigate risks through a 7-layered approach involving policies, physical security, perimeter controls, internal access management, host protection, and application hardening.
5. Implement and monitor the security program with roles for the CIO, CISO, ICT
This document provides an overview of information security concepts including: the history of computer security evolving into information security; key terms like availability, integrity, and confidentiality; components of an information system; approaches to implementing security like top-down and bottom-up; and the security systems development life cycle with phases like investigation, analysis, design, implementation, and maintenance. It also outlines security professionals' roles in an organization.
PERFORMANCE EVALUATION OF ENHANCED GREEDY-TWO-PHASE DEPLOYMENT ALGORITHM (IJNSA Journal)
The document discusses and evaluates an enhanced algorithm for deploying firewall policies from an initial to target configuration. It summarizes:
1) The original "Greedy-Two-Phase Deployment" algorithm was found to be incorrect for some cases and could result in an incorrect final policy order.
2) A new "Enhanced-Greedy-Two-Phase Deployment" algorithm is proposed to address this by moving rules to their target position rather than a shifted one.
3) An evaluation of the new algorithm shows it performs the policy deployments faster than the previous best "SANITIZEIT" approach, with improvements for larger policies of up to 10,000+ rules.
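The core idea of point 2), moving each rule directly to its target position, can be illustrated with a minimal, hypothetical sketch. The function and rule names here are invented for illustration; the actual enhanced algorithm additionally guarantees that intermediate configurations remain safe during deployment:

```python
def deploy(initial, target):
    """Compute move operations that reorder `initial` into `target`.

    Each operation (rule, position) moves a rule directly to its final
    index, mirroring the fix in the enhanced algorithm: rules go to
    their target position, not a shifted one.
    """
    current = list(initial)
    ops = []
    for pos, rule in enumerate(target):
        i = current.index(rule)
        if i != pos:
            current.insert(pos, current.pop(i))  # move rule straight to its target slot
            ops.append((rule, pos))
    return ops, current


ops, final = deploy(["r1", "r2", "r3", "r4"], ["r3", "r1", "r4", "r2"])
# `final` now matches the target order; `ops` lists the moves issued
```

Rules already in place generate no operation, which is where the greedy saving comes from.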
Operational technology threats in developing countries and possible solutions (Faysal Ghauri)
My first paper on cybersecurity, focused on operational technology and its challenges in developing countries, although I have found similar challenges in developed countries as well. This paper was published by the International Journal of Computer Science and Information Security (IJCSIS) in April 2021, Vol. 19 No. 4.
20190423 PRiSE model to tackle data protection impact assessments and data pr... (Brussels Legal Hackers)
This document discusses the interdisciplinary nature of data protection by design and data protection impact assessments. It notes that while software engineers and lawyers both aim to ensure compliance, they often take different approaches - with engineers focusing on technical risks and lawyers on legal concepts. The document proposes aligning these perspectives through a shared understanding of a system using a meta-model that incorporates both technical and legal requirements. This would facilitate transparency and demonstration of compliance. It also discusses how design scientists can help develop user-friendly solutions to meet transparency obligations through approaches like interactive interfaces and privacy languages.
Multi-Agent System (MAS) monitoring solutions are designed for a plethora of usage topics. Existing approaches mostly use cloned back-end architectures, while the front-end monitoring interface tends to constitute the real specificity of each solution. These interfaces are recurrently structured around three dimensions: access to informed knowledge, agents' behavioural rules, and restitution of the real-time state of a specific system sector. In this paper, we propose prototyping a sector-agnostic MAS platform (Smart-X) which gathers, in an integrated and independent platform, all the functionalities required to monitor and govern a wide range of sector-specific environments. For illustration and validation purposes, the use of Smart-X is introduced and explained with a smart-mobility case study.
This document provides an agenda and overview for a joint workshop on security modeling hosted by the ArchiMate Forum and Security Forum. The workshop aims to identify opportunities to improve the conceptual and visual modeling of enterprise information security using TOGAF and ArchiMate. The agenda includes introductions, a research spotlight on strengthening role-based access control with responsibility modeling, an open discussion on complementing TOGAF and ArchiMate with enhanced security modeling, and identifying next steps. The workshop purpose is to enable better security architecture decisions and drive usage of TOGAF and ArchiMate for security architecture.
Aligning business operations with the appropriate IT infrastructure is a challenging and critical activity. Without efficient business/IT alignment, companies face the risk of not being able to deliver their business services satisfactorily and of having their image seriously altered and jeopardized. Among the many challenges of business/IT alignment is access rights management, which should be conducted considering rising governance needs, such as taking into account the business actors' responsibility. Unfortunately, in this domain, we have observed that no solution, model, or method fully considers and integrates these new needs yet. Therefore, the paper proposes, firstly, an expressive Responsibility metamodel, named ReMMo, which allows representing the existing responsibilities at the business layer and, thereby, engineering the access rights required to perform these responsibilities at the application layer. Secondly, the Responsibility metamodel has been integrated with ArchiMate® to enhance its usability and benefit from the enterprise architecture formalism. Finally, a method has been proposed to define the access rights more accurately, considering the alignment of ReMMo and RBAC. The research was realized following a design science and action design based research method, and the results have been evaluated through an extended case study at the Hospital Center in Luxembourg.
This document proposes extending the HL7 standard with a responsibility perspective to better manage access rights to patient health records. It presents the ReMMo responsibility metamodel, which defines actors' responsibilities and associated access rights. The paper aims to align ReMMo with the HL7-based eSanté healthcare platform model in Luxembourg to semantically enhance access controls based on users' real responsibilities rather than just roles. It will first map concepts between the two models, then evaluate the alignment through a prototype applying inference rules.
This document presents a study that aims to develop and validate a responsibility model to improve IT governance. It analyzes concepts of responsibility from literature and frameworks like COBIT. The researchers developed a responsibility model with key concepts like obligation, accountability, right, and commitment. They then compare this model to COBIT's representation of responsibility to identify areas for potential enhancement, like adding concepts that COBIT lacks. The document illustrates how the responsibility model could be used to refine COBIT's process for identifying system owners and their responsibilities.
This document proposes an innovative approach called SIM (Secure Identity Management) that aims to align access management policies more closely with business objectives. It does this in two ways:
1) By focusing the policy engineering process on business goals and responsibilities defined in processes, using concepts from the ISO/IEC 15504 standard. This links capabilities and accountabilities to process outcomes and work products.
2) By defining a multi-agent system architecture to automate the deployment of policies across heterogeneous IT components and devices. The agents provide autonomy and ability to adapt rapidly according to context.
The approach was prototyped using open source components and aims to improve how access rights are defined according to business needs and deployed across an organization.
This document proposes a methodological approach for specifying services and analyzing service compliance considering the responsibility dimension of stakeholders. The approach includes a product model and process model. The product model has three layers: an informational layer describing service context and concepts, an organizational layer describing business rules and roles, and a responsibility dimension layer linking the two. The process model outlines steps for service architects to identify context, define concepts and rules, specify services, and analyze compliance. The approach is illustrated with an example of managing access rights for sensitive healthcare data exchange between organizations.
This document discusses integrating responsibility aspects into service engineering for e-government. It proposes a multi-layered approach including an ontological layer defining legal concepts, an organizational layer describing roles and stakeholders, an informational layer representing data structures and integrity constraints, and a technical layer representing IT components. A responsibility meta-model is also introduced to align responsibilities across these layers and facilitate interoperability between services that share data. The approach aims to ensure service compliance and manage risks associated with e-government services.
1) The document proposes a dynamic approach for assigning functions and responsibilities to agents in a multi-agent system for critical infrastructure management.
2) The approach uses an agent's reputation, which is based on past performance, to determine which agents receive which responsibilities as crisis situations change over time.
3) Assigning responsibilities dynamically based on reputation allows the system to continue operating effectively if an agent becomes isolated or has reduced capabilities during a crisis.
This document proposes a responsibility modeling language (ReMoLa) to align access rights with business process requirements. ReMoLa is a responsibility-centered meta-model that integrates concepts from the business and technical layers, with the concept of employee responsibility bridging the two. It incorporates four types of obligations from the COBIT framework to refine employee responsibilities and better assign access rights. ReMoLa maps responsibilities to roles in the RBAC model to leverage its advantages for access right management while ensuring responsibilities align with business tasks and employee commitment.
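The mapping described above, in which responsibilities bridge business tasks and RBAC roles, can be sketched as a toy illustration. The role and permission names below are invented for the example and are not ReMoLa's actual notation:

```python
# Hypothetical role -> permission and responsibility -> role tables.
ROLE_PERMS = {
    "doctor": {"read:record", "write:record"},
    "nurse": {"read:record"},
}
RESP_TO_ROLE = {
    "update_diagnosis": "doctor",   # business responsibility mapped to an RBAC role
    "consult_record": "nurse",
}

def access_rights(responsibilities):
    """Derive access rights from business responsibilities via RBAC roles."""
    rights = set()
    for resp in responsibilities:
        rights |= ROLE_PERMS[RESP_TO_ROLE[resp]]
    return rights
```

The point of the indirection is that access rights are never granted directly: they always follow from a responsibility, so revoking a responsibility revokes the corresponding rights.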
The document describes the NOEMI assessment methodology, which was developed as part of a research project to help very small enterprises (VSEs) improve their IT practices. The methodology aims to assess VSEs' IT capabilities in order to facilitate collaborative IT management across organizations. It was designed to be aligned with common IT standards like ISO/IEC 15504 and ITIL, but adapted specifically for VSEs. The methodology has been tested through several case studies with VSEs in Luxembourg, with promising results.
This document proposes an extension of the ArchiMate enterprise architecture framework to model multi-agent systems for critical infrastructure governance. The authors develop a responsibility-driven policy concept and metamodel layers to represent agent behavior and organizational policies across technical, application, and organizational layers. The approach is illustrated through a case study of a financial transaction processing system.
This document summarizes an experimental prototype of the OpenSST protocol for secured electronic transactions. OpenSST was developed to achieve high security, simplicity in software engineering, and compatibility with existing standards. The prototype uses OpenSST for the authorization portion of electronic payments in an e-business clearing solution. It describes the OpenSST message format and types, and discusses how OpenSST is implemented in the prototype's three-element architecture of an OpenSST proxy, reverse proxy, and server.
This document discusses the NOEMI model, a collaborative management model for ICT processes in SMEs. The model was developed by the Centre Henri Tudor and tested with a cluster of 8 partner SMEs. Key aspects of the model include defining ICT activities across 5 domains, assessing each SME's capabilities, and having an operational team manage activities for the cluster under a coordination committee. The experiment showed improved cost control, management, and partner satisfaction compared to alternatives like outsourcing or hiring individual IT staff. The research is now ready for market transfer as the successful model is adopted long-term by participating SMEs.
This document proposes a metamodel for modeling reputation-based multi-agent systems using an adaptation of the ArchiMate enterprise architecture modeling framework. It describes a case study applying this metamodel to model an electrical distribution critical infrastructure system. Key elements of the metamodel include:
- Representing agents and their behaviors through policies that integrate both behavior and trust components
- Modeling trust relationships between agents using a reputation-based trust model
- Illustrating the metamodel layers and components on a system that detects weather alerts and broadcasts messages to the public through various channels like SMS or social media
The document discusses the information security concerns of industry managers. A survey found that information security is the top concern of managers, ranking even above risks from the economy or natural disasters. While industries invest heavily in information security, most managers still trust their current security systems, even though few organizations are well adapted to new information risks. The complexity of assessing security risks is growing due to new IT capabilities, critical infrastructure developments, cloud services, and increasing cybercrime. Industry and academia must collaborate further on information security research to address these challenges.
Business governance based policy regulation for security incident response
Business Governance based Policy Regulation
for Security Incident Response
Christophe Feltus , Djamel Khadraoui, Benoît de Rémont and André Rifaut
Centre de Recherche Public Henri Tudor – Luxembourg
Email: christophe.feltus@tudor.lu, benoit.deremont@tudor.lu, djamel.khadraoui@tudor.lu and
andre.rifaut@tudor.lu
Abstract—This paper describes the architecture of a policy regulation system, and some of its related concepts, dedicated to the application domain of computer network security. The architecture is based on a methodology that identifies the main phases of the reactions that can be carried out to recover from a failure or an attack on a network.
The policy management domain has already been widely discussed in the scientific literature. A large panoply of works focuses on how to develop a policy framework that takes into account the business goals, the organisational structure, the operational rules, and the links between low-level and high-level policies [13]. Nevertheless, policy regulation remains an area where less work has been done, more specifically policy regulation according to business requirements.
This paper proposes a framework for policy regulation that integrates the business layer during the regulation phase.
Index Terms—Architecture, Policy, Regulation, Computer network security, Reaction.
I. INTRODUCTION
Today, telecommunication and information systems are widely spread and largely heterogeneous. Their openness and interconnection involve ever more complexity and, consequently, a dramatic drawback regarding the threats that can strike such networks through dangerous attacks. These continuously growing attacks rely on ever-new attack techniques and expose operators as well as end users.
Manuscript received March 31, 2007. The National Research Ministry of Luxembourg supported this work in progress under a EUREKA project called RED, which stands for Reaction after Detection.
D. Khadraoui is with the Centre de Recherche Public Henri Tudor, 29, Avenue John F. Kennedy, Kirchberg, Luxembourg (corresponding author; phone: +352 425991286; fax: +352 425991777; e-mail: djamel.khadraoui@tudor.lu).
C. Feltus, B. de Rémont and A. Rifaut are also with the Centre de Recherche Public Henri Tudor, 29, Avenue John F. Kennedy, Kirchberg, Luxembourg.
The realm of security management of information and communication systems currently faces many challenges, very often because it is difficult to:
• Establish central or local permanent decision capabilities;
• Have the necessary level of information;
• Quickly collect the information, which is critical in case of an attack on a critical system node;
• Launch automated countermeasures to quickly block a detected attack;
• Efficiently react against an attack, especially if this requires a change to an equipment configuration, which often necessitates many checks in order to avoid bad side effects (conflict creation, service stability, etc.).
It is therefore crucial to elaborate a strategy of reaction after
detection against these attacks. This is the main subject of the work
presented in this paper, which deals with concepts aiming to optimise
the security and protection of communication and information systems.
The principle is mainly to achieve the following:
• React quickly and efficiently to any simple attack, but also to
complex and distributed ones. The reaction is organised in two steps:
an instantaneous reaction, based on the existing policy, to avoid
leaving vulnerabilities in the system; and a deferred reaction, aiming
at adapting the base policy to avoid new occurrences of the attack.
• Ensure homogeneous and smart communication system configurations,
which are commonly considered the main source of vulnerabilities.
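The two-step reaction described above can be sketched as a minimal example; the attack representation, countermeasure table and rule syntax below are hypothetical illustrations, not part of the RED specification.

```python
def instantaneous_reaction(attack, countermeasures):
    """Step 1: immediately block the attack using the existing policy's
    countermeasure table; fall back to isolating the attacked node."""
    return countermeasures.get(attack["kind"], "isolate " + attack["target"])

def deferred_reaction(attack, base_policy):
    """Step 2: adapt the base policy so the same attack cannot recur."""
    new_rule = "deny {} on {}".format(attack["kind"], attack["target"])
    if new_rule not in base_policy:
        base_policy.append(new_rule)
    return base_policy
```

The instantaneous step never waits for a policy change, while the deferred step is the one that modifies the base policy.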
The different phases of an attack and the associated reaction
processes are shown in Figure 1. This figure is extracted from the
principles of the RED (REaction after Detection) project, a European
CELTIC project [12]. As a partner of this project, our main
contribution concerns the RED architecture as well as the policy
management level. Some of the primary elements of these contributions
are presented in this paper. They relate to the way the RED
architecture is exploited in the context of policy management and,
more specifically, in the perspective of policy regulation based on
business governance. In [13] the authors presented an innovative
mechanism for adapting the security policy of an information system
according to the threats it receives, and hence its behaviour and the
services it offers. This mechanism takes into account not only
threats, but also legal constraints and other objectives of the
organization operating the information system, considering multiple
security objectives and providing several trade-off options between
security objectives, performance objectives, and other operational
constraints.
Our contributions are closely related to [13] in the sense that they
use the context principle of the Or-BAC model valid during the crisis
period (intrusion context). Our approach adopts the same regulation
philosophy as [13], but at the same time it enhances business
involvement during the policy modification mechanism, which lays the
foundation of a new approach for elaborating methodological aspects
that strengthen the regulation perspectives.
(Figure 1 shows a timeline from seconds to hours: local detection of
suspicious behaviour, global detection of suspicious behaviour, local
reaction and containment, automated countermeasures (delude, isolate,
…), and a final reaction (vulnerability solved, configuration changed,
…), moving from nominal conditions before the attack back to a
certified configuration after it, under a management console.)
Figure 1: Different phases of attack and main processes
II. ARCHITECTURE
Since the early days of intrusion detection, the handling of alerts
has been a daunting task for security officers.
The original proposition of intrusion detection is the creation
of a trustworthy audit trail that can show and track threats.
Intrusion prevention has taken over, proposing to automate the
reaction to alerts provided by intrusion detection devices. We
are currently at a stage where intrusion-prevention devices,
network-based or host-based, are capable of blocking
undesired traffic, reconfiguring firewalls or quarantining
undesired code.
Unfortunately, this state of the art leaves much to be desired in
terms of coherent response. First of all, reaction is based on
immediate detection, and is very close to the actual event or
audit trail. As a result, the same action from the attacker is
going to trigger the same local reaction to the perceived threat,
hence possibly overloading the reaction device or the target
information system. The location of the response is not
optimal either, as the detection/prevention device may be deep
into the network; the threat is therefore carried unchecked
within the information system whereas it could be stopped
earlier.
More fundamentally, the configuration of the response
happening on the intrusion prevention device is left to an
operator that may not be aware of the operational constraints
of the information system or network, but is preoccupied by
the protection of its (smaller) territory. Therefore, mistakes
and undesired side effects are often likely to happen, or
reaction is deactivated because of the fear of side effects.
The need for a more coherent approach to reaction is therefore
important to progress in the direction of attack resilience and
obtain more secure information systems. We propose to base
this approach on policies, and more specifically on security
policies and the OrBAC formalism.
More pragmatically, the RED architecture consists of a regulation
based upon policies. Indeed, policy modification is a way to adapt the
security of a global network. The corporate policy is defined once and
for all, and is not modified by the regulation. As described on the
left of Figure 2, the corporate rules represent an input to the system
and to the regulation module. The corporate rules, combined with the
new rules (issued from the policy modification), are mapped into
technical rules, as described in section IV (Policy Based Regulation).
The new technical rules are then instantiated on the network and on
the related objects. It is through this instantiation that the
reaction is actually realized. Thus, at this point, the network
reaches a new security status. New observations are made on the
network and, if necessary (as described in section IV, Methodology),
new business rules are defined. Optionally, before introducing these
new rules into the system, an agreement can be requested from the
owner of the corporate policy. This agreement can be automatic or not,
depending on the context, the extent of application, the level of
abstraction of the policy application area, and the policy owner's
agreement.
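One pass of this regulation loop can be sketched as follows; the rule strings and the `approve`/`to_technical` callbacks are hypothetical placeholders standing in for the agreement step and the business-to-technical mapping, and are not part of the RED architecture itself.

```python
def regulation_step(corporate_rules, new_rules, approve, to_technical):
    """Combine the fixed corporate rules with newly proposed rules
    (each subject to the optional agreement step), then map the result
    into technical rules to be instantiated on the network."""
    accepted = [r for r in new_rules if approve(r)]
    business_rules = list(corporate_rules) + accepted
    # Instantiating these technical rules yields the new security status.
    return [to_technical(r) for r in business_rules]
```

For example, with an owner that rejects any new "allow" rule and a trivial mapping, `regulation_step(["deny telnet"], ["deny ftp", "allow ssh"], lambda r: not r.startswith("allow"), lambda r: "iptables:" + r)` keeps the corporate rule untouched and adds only the approved new rule.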
Figure 2 - RED architecture (basic elements)
Figure 2 represents the main elements of the foreseen RED
architecture. It takes the business rules as input, keeps the policy
regulation in the loop, and produces a new security status as output
of the system. It mainly relies on a policy adapted to a specific
situation (context).
III. POLICY MANAGEMENT
In the literature, a large number of works have already been carried
out on policies and policy deployment. The work around Ponder, a
language for specifying management and security policies for
distributed systems, defined a policy as a set of rules that govern
the choices in the behavior of a system. A security policy defines
which actions are allowed, for what, for whom, and under which
conditions [1, 2]. In [3], Travis Breaux et al. survey policies and
classify them in terms of:
• high-level program policies that address security goals, security
staff and their responsibilities;
• issue policies that address a single legal or technical security
issue, such as properly handling financial or health care information,
contingency planning, or remote connectivity; and
• system policies, i.e. low-level technical policies that describe how
to configure specific systems and applications.
Even if, at the beginning, research about policies largely focused on
low-level (technical) policies, researchers have also devoted
attention to policy specification [9]. Arosha K. Bandara et al. [4]
propose a method for refining high-level goals into operations that
can be mapped to implementable policies. In [5], Rifaut et al. explain
an approach to formalizing Basel II2 and ORM3 with goal models and
ISO/IEC 15504. They present in Figure 3 the idea that low-level
policies are issued from higher-level policies.
2 Basel Committee on Banking Supervision, "International Convergence
of Capital Measurement and Capital Standards"; BIS; Basel, June 2004.
3 Operational Risk Management
Figure 3: Policy refinement from high-level policies to low-level
policies
Figure 3 presents a structured view of a company's abstraction layers,
from the strategic layer down to the technical one. The policy
refinement mechanism from the higher to the lower layers is carried
out strictly in accordance with the corporate objectives, down to the
technical ones. Indicators and strategies are omnipresent in the
refinement process.
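The refinement idea of Figure 3 can be illustrated with a toy recursive expansion; the policy names and the refinement table below are invented for the example and do not come from [5].

```python
# Hypothetical refinement table: high-level policy -> lower-level policies.
REFINEMENT = {
    "ensure-record-confidentiality": ["encrypt-records", "restrict-access"],
    "restrict-access": ["firewall-block-external", "rbac-doctor-only"],
}

def refine(policy):
    """Expand a high-level policy down to its technical-level policies."""
    children = REFINEMENT.get(policy)
    if children is None:          # already a technical policy
        return [policy]
    technical = []
    for child in children:
        technical.extend(refine(child))
    return technical
```

Each layer of the table corresponds to one abstraction layer of Figure 3, and the leaves are the technical policies deployed on the equipment.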
When an attack occurs, it is necessary to change the policy urgently
or to take actions not allowed by the policy. This situation is common
for IT employees, but IT managers most of the time do not define
procedures to inform or consult the business managers. In case of an
attack or an unusual perturbation of the system, a major constraint on
policy adaptation is that it is not allowed to modify a low-level
policy without first referring to the high-level policy. Ignoring this
constraint may be the source of poor business-IT alignment.
The technical policy has to be issued directly from the corporate
policy. The structure between policies, or policy hierarchy, implies
that the low-level policy owner is fully accountable to the owner of
the higher-level policy. This can be illustrated by the following
example:
A healthcare institute has a corporate policy to ensure the
confidentiality of patients' records. This corporate policy (owned by
the board of directors) states that only the patient's doctor has
access to the patient's files. Due to an IT incident (i.e. an attack
or system intrusion), the IT clerk needs the right to access these
records, but the IT policy denies such access. In this case, granting
a new right to the IT clerk means a policy modification (or
regulation) in contradiction with the corporate policy, which
consequently needs to be approved by the board of directors (or by
another procedure agreed by the same board).
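The healthcare example can be sketched as an escalation rule: a modification that contradicts the corporate policy is applied only if the corporate policy owner approves it. The data layout, role names and the `board_approves` callback are hypothetical.

```python
# Corporate policy (owned by the board): roles allowed per (object, action).
CORPORATE_POLICY = {("patient-records", "read"): {"doctor"}}

def request_rule(role, obj, action, board_approves):
    """Grant directly when consistent with the corporate policy;
    otherwise escalate the contradiction to the policy owner."""
    allowed_roles = CORPORATE_POLICY.get((obj, action), set())
    if role in allowed_roles:
        return "granted"
    return "granted-by-board" if board_approves(role, obj, action) else "denied"
```

The low-level policy thus never silently overrides the corporate policy: the contradiction is made explicit and routed to the accountable owner.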
IV. POLICY BASED REGULATION METHODOLOGY
To arrive at a modification of the policy, several steps are
necessary. These steps are identified as specific modules, described
in the following:
1) Measurement
First of all, we need to collect security measurements on the
network's key elements. These elements can be data, services (e.g.
DNS), critical applications, equipment, etc. All the raw measurements
should be gathered in a specific place in order to be processed. This
gathering can be done through a distributed solution or via a classic
client/server application.
2) Detection
The detection module relies on an application able to parse the
measured data representing the security status of the entire network.
By parsing the data coming from the different elements, the
application must be able to combine them with the previous security
states, in order to detect predefined failure or intrusion patterns,
if any.
3) Analysis
Once a pattern is found, it is necessary to define which elements of
the network are involved (e.g. an actor performing an action on one or
several objects). Considering these three elements, the found pattern,
and the current state of the network's security, the policy rule(s) to
be added, removed or modified can be determined. Furthermore, in this
module it is important to take the business policy into account, in
order to respect it and to avoid generating rule conflicts.
4) Interpretation
Despite the analysis module, conflicts in the policy may still appear.
The potential modifications that could be applied to the policy must
therefore be interpreted. By interpreting a modification, it becomes
possible to specify its consequences and thus the possible conflicts
between several rules. If a conflict is discovered, the application
will try to solve it, avoiding a compromised configuration of the
policy.
5) Alert
If a modification can be applied without generating any conflict and
without violating the corporate policy, it becomes necessary to notify
the system (by sending an alert to the concerned actors or by logging
the modification). If the modification concerns high-risk elements, an
approval can be requested.
6) Reaction
To modify the policy, a new rule can be added and/or an older one
removed or modified, at the business rules level (not only at the
technical level). The technical policy corresponding to the new
business policy is then generated and applied to the system. In the
same way, the technical security can be modified in order to reinforce
the network's security. Thus, the entire system becomes dynamic, since
it creates a feedback loop by adapting the measurements to the new
security status.
Figure 4: Methodology of the reaction.
Figure 4 above represents the methodology used to arrive at a policy
modification, based on measurements taken on the technical layer of
the targeted infrastructure.
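The six modules can be chained in a minimal sketch; the measure, pattern and rule representations are invented, and the interpretation step is reduced to discarding any modification whose allow/deny mirror already exists in the policy.

```python
def negate(rule):
    """A rule conflicts with another if it is its allow/deny mirror."""
    verb, rest = rule.split(" ", 1)
    return ("deny " if verb == "allow" else "allow ") + rest

def regulation_pipeline(measures, patterns, policy, alert):
    # 1) Measurement: 'measures' gathers the raw data from the key elements.
    # 2) Detection: match the measures against predefined intrusion patterns.
    detected = [p for p in patterns if p["signature"] in measures]
    # 3) Analysis: one candidate rule modification per detected pattern.
    candidates = [p["new_rule"] for p in detected]
    # 4) Interpretation: discard modifications that conflict with the policy.
    applicable = [r for r in candidates if negate(r) not in policy]
    # 5) Alert: advertise each retained modification.
    for rule in applicable:
        alert(rule)
    # 6) Reaction: apply the modifications, yielding a new security status.
    return policy | set(applicable)
```

A real implementation would of course use richer rule and pattern models, but the control flow mirrors the six modules above.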
V. THE OR-BAC USE CASE
We illustrate the concept of policy regulation in the context of
access control policy, and more precisely based on the Or-
BAC model [6].
As explained by the authors, none of the classical access control
models such as DAC, MAC, RBAC, TBAC or TMAC [10, 11] is fully
satisfactory for modelling security policies that are not restricted
to static permissions but also include contextual rules related to
permissions, prohibitions, obligations and recommendations. In [7],
the context in Or-BAC is defined as follows: "A context is viewed as
an extra condition that must be satisfied to activate a given
privilege". Using the Or-BAC model, the context can be associated with
an emergency situation due to an IT perturbation (attack, intrusion or
other). This kind of context is named the intrusion context [13].
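A contextual privilege in the spirit of Or-BAC can be sketched as a permission tuple that is active only when its context holds. The organization, roles and contexts below are hypothetical, and the sketch deliberately ignores Or-BAC's full abstract/concrete distinction.

```python
# Permission(organization, role, activity, view, context)
PERMISSIONS = {
    ("hospital", "doctor",   "read", "patient-records", "nominal"),
    ("hospital", "it-clerk", "read", "patient-records", "intrusion"),
}

def is_permitted(org, role, activity, view, active_contexts):
    """A privilege is activated only if its context condition is satisfied."""
    return any((org, role, activity, view, ctx) in PERMISSIONS
               for ctx in active_contexts)
```

Under nominal conditions only the doctor may read the records; once the intrusion context is activated, the IT clerk's privilege becomes active as well.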
Figure 5: Policy regulation in the Or-BAC model
In Figure 5, we mainly added a layer (the bottom layer) in order to
illustrate the regulation process proposed in section IV.
In this Or-BAC use case, a basic policy (issued from the abstract
level and validated by the business owner) is running in the company
at the concrete layer. When an attack occurs, the technical IT staff
first take actions to face the problem and then initiate a process to
modify the basic policy if necessary. This modification of the basic
policy needs to be validated or improved by the policy owner before
being introduced into production; this corresponds to the agreement
block of Figure 2. The new validated policy represents the input for
the context element of the Or-BAC model at the abstract level.
Following the example of section III, this means that the new policy
may become operational if and only if the board of directors has
delivered its opinion on the requested modification.
VI. CONCLUSION
In this position paper, we have explained the objectives of the RED
project in terms of reaction after detection. We proposed to improve
the chain of policy regulation and adaptation after the occurrence of
an attack on the network. In our proposed solution, we give major
importance to business approval during the policy adaptation.
Automating policy regulation requires, on the one hand, the existence
of a hierarchy between the rules in case of multiple choices due to
multiple attacks and, on the other hand, an automatic method to
validate the policy modifications. Cuppens et al. explain in [8] that
contexts (and all the concepts of their model, such as organization,
role, activity, view, …) are organized hierarchically. Consequently,
when a conflict occurs, the security rules associated with the higher
context in the hierarchy override the security rules associated with
the lower contexts.
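This override rule can be sketched as choosing, among conflicting rules, the one whose context ranks highest in the hierarchy; the ranking and rule layout below are illustrative, not taken from [8].

```python
# Hypothetical context hierarchy: lower rank = higher in the hierarchy.
CONTEXT_RANK = {"intrusion": 0, "emergency": 1, "nominal": 2}

def resolve_conflict(rules):
    """Keep the rule attached to the highest context in the hierarchy,
    which overrides rules attached to lower contexts."""
    return min(rules, key=lambda rule: CONTEXT_RANK[rule["context"]])
```

With such a ranking, a permission granted in the intrusion context overrides a nominal-context prohibition during the crisis period.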
The next steps of our work will concentrate on the development and
elaboration of the reaction methodology, as well as on experimentally
validating the automated modification of a policy by the business or
by the policy owner, as illustrated in the present paper.
ACKNOWLEDGMENT
The Ministry of Culture, Higher Education and Research of
Luxembourg supports the on-going research work
(EUREKA/CELTIC RED project). Any opinions, findings,
and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect
the views of the funding organizations.
REFERENCES
[1] Charalambides, M., Flegkas, P., Pavlou, G., Bandara, A. K., Lupu, E. C.,
Russo, A., Dulay, N., Sloman, M., and Rubio-Loyola, J. 2005. Policy Conflict
Analysis for Quality of Service Management. In Proceedings of the Sixth
IEEE international Workshop on Policies For Distributed Systems and
Networks (Policy'05) - Volume 00 (June 06 - 08, 2005). POLICY. IEEE
Computer Society, Washington, DC, 99-108. DOI=
http://dx.doi.org/10.1109/POLICY.2005.23
[2] N. Dulay, E. Lupu, M. Sloman, N. Damianou, A Policy Deployment Model
for the Ponder Language, Proc. IEEE/IFIP International Symposium on
Integrated Network Management (IM'2001), Seattle, May 2001.
[3] Travis Breaux, Annie I. Antón, Clare-Marie Karat and John Karat,
Enforceability vs. Accountability in Electronic Policies, IEEE 7th
International Workshop on Policies for Distributed Systems and Networks
(POLICY‘06), London, Ontario, Canada, pp. 227-230, 5-7 June 2006.
[4] Arosha Bandara, Emil Lupu, Jonathan Moffet, and Alessandra Russo, A
Goal-based Approach to Policy Refinement, Proceedings 5th IEEE Workshop
on Policies for Distributed Systems and Networks, New York, USA, 2004
[5] A. Rifaut and C. Feltus, Improving Operational Risk Management Systems
by Formalizing the Basel II Regulation with Goal Models and the ISO/IEC
15504 Approach, Proceeding, REMO2V'2006, International Workshop on
Regulations Modelling and their Validation & Verification, to be held in
conjunction with the 18th Conference on Advanced Information System
Engineering (CAiSE'06), 6 June 2006, Luxembourg
[6] A. Abou El Kalam, R. El Baida, P. Balbiani, S. Benferhat, F. Cuppens, Y.
Deswarte, A. Miège, C. Saurel and G. Trouessin, Organization Based Access
Control. IEEE 4th International Workshop on Policies for Distributed Systems
and Networks (Policy 2003), Lake Como, Italy, June 4-6, 2003.
[7] F.Cuppens and A.Miège, Modelling contexts in the Or-BAC model, 19th
Annual Computer Security Applications Conference, Las Vegas, December,
2003
[8] Cuppens, F., Cuppens-Boulahia, N., Miège, A.: Inheritance hierarchies in
the Or-BAC Model and application in a network environment. In: Second
Foundations of Computer Security Workshop (FCS'04), Turku, Finland (2004)
[9] S. Illner, H. Krumm, A. Pohl, I. Lück, D. Manka, and T. Sparenberg,
Policy Controlled Automated Management of Distributed and Embedded Service
Systems, Parallel and Distributed Computing and Networks, PDCN 2005,
Innsbruck, Austria.
[10] D.F. Ferraiolo and D.R. Kuhn (1992) "Role Based Access Control" 15th
National Computer Security Conference
[11] R. S. Sandhu, E.J. Coyne, H.L. Feinstein, C.E. Youman (1996), "Role-
Based Access Control Models", IEEE Computer 29(2): 38-47, IEEE Press,
1996.
[12] RED (REaction after Detection) – CELTIC Project. http://www.celtic-
initiative.org/red
[13] H. Debar, Y. Thomas, N. Boulahia-Cuppens, F. Cuppens,
Using contextual security policies for threat response. Third GI International
Conference on Detection of Intrusions & Malware, and Vulnerability
Assessment (DIMVA), Germany, July 2006.