This document discusses the intelligent management of electrical systems in industries. It notes that while industrial plants have increasingly automated their processes, electricity distribution networks have not received the same attention, even though disturbances in power supply can be very costly. It outlines the intelligent applications that are needed, including handling large amounts of information, illustrating complex dependencies, and giving operators instructions in fault situations. It describes distribution management functions such as real-time network monitoring, state estimation, topology management, optimization, planning and simulation of operations, and management of disturbances. Key functions include load modeling, reliability management, power quality analysis, voltage dip analysis, and condition monitoring. Advanced distribution automation (ADA) and a future distribution system built on it are also discussed, and the document concludes that although industrial distribution faces different challenges than public distribution, intelligent software methods are promising.
The document provides details about an engineering training program at M/S Toshiba in Japan on their DDCMIS system for NTPC-Kudgi power plant. The training covered the Toshiba TG control system architecture, software, hardware, communication networks, and visits to their manufacturing facilities. It discussed the Toshiba DDCMIS components like the automatic control system, human-machine interface, and engineering station. The training helped provide a better understanding of Toshiba's implementation that could help with Kudgi plant erection and operator training.
A reasonable approach for manufacturing system based on supervisory control 2 (IAEME Publication)
This document summarizes a research paper that proposes a novel approach for manufacturing system control using supervisory control and discrete event systems. It describes a testbed that was developed using this approach with three main hardware components: a personal computer, interface, and programmable logic controller. The paper discusses developing a model for the large, complex testbed manufacturing system by breaking it down into smaller, fundamental and interaction sub-models. It explains how the testbed model was implemented using clocked Moore synchronous state machines in programmable logic controller ladder logic programs.
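As a rough illustration of the clocked Moore machine idea used above (outputs are a function of state alone, and transitions are evaluated once per clock tick, much like a PLC scan cycle), here is a minimal sketch in Python; the states, events, and outputs are invented for a simple fixture and are not taken from the paper's testbed.

```python
# Minimal clocked Moore state machine: outputs depend only on the current
# state, and transitions are evaluated once per tick (scan cycle).
# States and events here are invented for illustration.

TRANSITIONS = {
    ("idle", "part_detected"): "clamping",
    ("clamping", "clamp_done"): "machining",
    ("machining", "cycle_done"): "idle",
}

OUTPUTS = {  # Moore property: output is a function of state alone
    "idle": {"clamp": False, "spindle": False},
    "clamping": {"clamp": True, "spindle": False},
    "machining": {"clamp": True, "spindle": True},
}

def tick(state, event):
    """One synchronous clock tick: compute the next state and its output."""
    next_state = TRANSITIONS.get((state, event), state)  # hold state on no match
    return next_state, OUTPUTS[next_state]

state = "idle"
for ev in ["part_detected", "clamp_done", "cycle_done"]:
    state, out = tick(state, ev)
    print(state, out)
```

In ladder logic the same structure appears as one rung group per state bit plus a clocked transition rung, which is why the decomposition into small sub-models keeps the programs manageable.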
Advance autonomous billing system in the EB meter with GSM technology (IRJET Journal)
This document describes a smart energy metering system that uses sensors to measure voltage and current, an Arduino microcontroller to analyze the inputs, and a GSM module to automatically send electricity bills to customers via text message. It aims to automate the billing process, reduce costs, and prevent power theft. The system calculates usage data and transmits it to the cloud for storage and to the electricity provider for billing. Key benefits include remote monitoring, reduced manual labor, and improved transparency.
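A minimal sketch of the billing arithmetic such a meter performs, integrating sampled voltage and current into energy and formatting the SMS body; the flat tariff and sample data are assumptions for illustration, not values from the paper.

```python
# Illustrative billing calculation for a GSM-based smart meter: sample
# voltage and current, integrate power into kWh, format the SMS text.
# Tariff and readings below are assumed, not from the paper.

def energy_kwh(samples, interval_s):
    """samples: list of (voltage_V, current_A) pairs taken every interval_s seconds."""
    joules = sum(v * i * interval_s for v, i in samples)
    return joules / 3_600_000  # 1 kWh = 3.6e6 J

def bill_sms(units_kwh, rate_per_kwh=6.0):  # assumed flat tariff
    amount = units_kwh * rate_per_kwh
    return f"Units consumed: {units_kwh:.2f} kWh, amount due: {amount:.2f}"

# One hour of readings at 230 V, 4.35 A (roughly a 1 kW load), sampled every 60 s
samples = [(230.0, 4.35)] * 60
units = energy_kwh(samples, 60)
print(bill_sms(units))
```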
SCADA stands for Supervisory Control And Data Acquisition. A SCADA software system is a framework for monitoring and controlling devices. Supervisory control involves taking action and exercising control from remote locations over various control mechanisms and processes. The front-end UI of a mobile app or web dashboard, together with backend business logic, a database, and a gateway, makes up a SCADA solution for controlling and monitoring the devices in an IoT network.
https://www.embitel.com/blog/embedded-blog/what-is-scada-system-and-software-solution
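The gateway-plus-backend acquisition path described above can be sketched in a few lines; the device names, values, and dictionary-as-database are illustrative assumptions, not the blog's implementation.

```python
# Sketch of the SCADA data path: a gateway polls field devices and forwards
# readings to a backend store that the dashboard UI reads; supervisory
# commands flow the other way. All names and values are invented.

class Gateway:
    def __init__(self, devices, backend):
        self.devices = devices   # device tag -> callable returning a reading
        self.backend = backend   # dict standing in for the backend database

    def poll_once(self):
        for tag, read in self.devices.items():
            self.backend[tag] = read()   # data acquisition
        return self.backend

def supervisory_command(backend, device_id, setpoint, actuators):
    """Supervisory control: push a setpoint from the dashboard to a device."""
    actuators[device_id] = setpoint
    backend[f"{device_id}.setpoint"] = setpoint  # record for the UI

backend, actuators = {}, {}
gw = Gateway({"pump1.pressure": lambda: 4.2}, backend)
gw.poll_once()
supervisory_command(backend, "pump1", 5.0, actuators)
print(backend)  # {'pump1.pressure': 4.2, 'pump1.setpoint': 5.0}
```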
This document discusses key enabling technologies for the Internet of Things (IoT). It covers wireless sensor networks, cloud computing, big data analytics, and embedded systems. Wireless sensor networks use small nodes to monitor environments and pass data through the network to a central location. Cloud computing provides on-demand access to applications, storage, and processing over the internet. Big data analytics involves collecting and analyzing large datasets to discover useful patterns. Embedded systems are computer systems designed for specific control tasks that are integrated with other devices.
This document discusses the design and implementation of a SCADA system to control an induction motor. It begins with an introduction to SCADA technology and its applications. It then describes the hardware components used, including the induction motor, PLC, and other electrical components. The document outlines the working of the overall control system, with the PLC controlling the motor based on inputs to the SCADA interface. It also discusses the development of the SCADA interface and screens to monitor and control the motor remotely. Screenshots are provided of the SCADA screens under different operating conditions of the induction motor.
TRAINING REPORT ON INDUSTRIAL AUTOMATION- PLC SCADA, VARIABLE FREQUENCY DRIVE
This document provides an overview of a training report on PLC, SCADA, and automation submitted by Akshay Sachan to the Electrical Engineering Department of the National Institute of Technology in Kurukshetra. The report includes an introduction to automation concepts, the history and introduction of programmable logic controllers, the architecture of PLCs including ladder diagrams, programming PLCs using ladder diagrams, applications of PLCs and SCADA systems, SCADA software and architecture, applications of SCADA, variable frequency drives, and a conclusion. Diagrams are provided to illustrate PLC internal architecture, simplified PLC structure, basic PLC sections, and ladder diagrams.
Embedded systems are specialized computer systems designed for specific tasks, often with strict requirements for performance, power consumption, and cost, and they are commonly used in devices like consumer electronics, vehicles, and industrial equipment. An embedded system combines both hardware and software components to perform dedicated functions in a larger mechanical or electrical system. Real-time operating systems are often used in embedded systems to ensure processes meet strict timing deadlines for functions like braking in a vehicle or medical monitoring equipment.
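One classic way an RTOS designer checks that periodic tasks will meet hard deadlines is the Liu and Layland utilization bound for rate-monotonic scheduling; the task set below is invented for illustration.

```python
# Rate-monotonic schedulability check: a periodic task set is guaranteed to
# meet all deadlines if total CPU utilization <= n * (2^(1/n) - 1),
# the Liu & Layland bound. Task parameters below are illustrative.

def rm_schedulable(tasks):
    """tasks: list of (worst_case_exec_time, period) pairs, same time unit.
    Returns True if the utilization bound guarantees all deadlines are met."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# e.g. a brake-monitor task (2 ms every 10 ms) and a sensor task (10 ms every 50 ms):
# utilization 0.4 against a bound of about 0.828 for two tasks
print(rm_schedulable([(2, 10), (10, 50)]))
```

The bound is sufficient but not necessary; task sets that fail it may still be schedulable and need an exact response-time analysis.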
This document describes an energy management system called e3m that helps organizations analyze and reduce their energy consumption and costs. It can centrally manage meter data, key figures, and benchmarks from multiple properties to identify savings opportunities. A case study shows how e3m helped Migros, a large Swiss retailer, reduce its specific electricity consumption by 14.7% between 2002 and 2008 through centralized monitoring and control. The system is scalable and can integrate with other building management and enterprise systems.
Solving big data challenges for enterprise application
This document discusses the challenges of application performance monitoring (APM) systems that deal with "big data". APM systems instrument enterprise applications to monitor metrics like response times and failures across distributed systems. This generates enormous amounts of monitoring data. The document evaluates six open-source data stores (Cassandra, HBase, Voldemort, Redis, VoltDB, MySQL Cluster) for their ability to handle the throughput of APM workloads in memory-bound and disk-bound cluster setups. It aims to provide performance results, lessons learned on setup complexity, and insights for using these data stores in an industrial APM system context.
This document provides an overview of a training report on programmable logic controllers (PLCs), supervisory control and data acquisition (SCADA) systems, and automation. It includes sections on the history and introduction of PLCs, the architecture of PLCs including the central processing unit and memory, programming PLCs using ladder logic, applications of PLCs and SCADA systems, the architecture of SCADA systems, and applications of automation in various industries. The training report was submitted to the Electrical Engineering department at the National Institute of Technology in Kurukshetra, India by a student as part of an internship on automation.
The document summarizes a student project presentation on developing an effective energy management system using programmable logic controllers (PLC) and supervisory control and data acquisition (SCADA). It includes an agenda, timeline, brief summary of the project, working principle, testing and validation process, results, and scope for commercialization. The system aims to monitor and control industrial processes, manage electrical systems through automation, and automatically generate bills while monitoring load parameters. It presents the benefits of PLC and SCADA for process control and energy management in industries.
A Cloud Computing design with Wireless Sensor Networks For Agricultural Appli... (IJMTER)
1. The document proposes a design for using wireless sensor networks and cloud computing together for agricultural applications. It describes how sensor nodes can collect environmental data and send it to the cloud for storage, analysis and decision making.
2. The proposed system has three main components - a sensing cluster with various sensors to collect data, a cloud service cluster to process and analyze the data, and a mechanism cluster with actuator nodes that can take actions based on the cloud's decisions.
3. Some potential applications discussed are image processing of unhealthy plants, predicting crop diseases based on sensor readings, and automatically controlling the cultivation environment through actuators. The system is aimed to help farmers optimize resources and increase productivity.
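The sensing-to-cloud-to-actuator loop in points 1-3 can be sketched as a simple decision rule; the sensor names and thresholds are assumptions for illustration, not values from the paper.

```python
# Sketch of the three-cluster flow: the sensing cluster reports readings,
# a cloud-side rule decides, and commands are emitted for actuator nodes.
# Sensor names and thresholds are invented for illustration.

def cloud_decision(readings):
    """readings: dict of sensor name -> value from the sensing cluster.
    Returns actuator commands for the mechanism cluster."""
    commands = {}
    if readings.get("soil_moisture_pct", 100) < 30:   # assumed dryness threshold
        commands["irrigation_valve"] = "open"
    if readings.get("air_temp_c", 0) > 35:            # assumed heat threshold
        commands["greenhouse_vent"] = "open"
    return commands

print(cloud_decision({"soil_moisture_pct": 22, "air_temp_c": 28}))
# -> {'irrigation_valve': 'open'}
```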
Tiarrah Computing: The Next Generation of Computing (IJECEIAES)
The evolution of the Internet of Things (IoT) has brought several challenges for existing hardware, network, and application development, including handling real-time streaming and batch big data, real-time event handling, dynamic cluster resource allocation for computation, and wired and wireless networks of things. Many new technologies and strategies are being developed to address these issues. Tiarrah Computing integrates the concepts of cloud computing, fog computing, and edge computing. Its main objectives are to decouple application deployment and achieve high performance, flexible application development, high availability, ease of development, and ease of maintenance. Tiarrah Computing focuses on using existing open-source technologies to overcome the challenges that evolve along with IoT. This paper gives an overview of these technologies, shows how to design your application, and elaborates on how to overcome most of the existing challenges.
Quality Patents: Patents That Stand the Test of Time (Aurora Consulting)
Is your patent a vanity piece of paper for your office wall? Or is it a reliable, defendable, assertable, property right? The difference is often quality.
Is your patent simply a transactional cost and a large pile of legal bills for your startup? Or is it a leverageable asset worthy of attracting precious investment dollars, worth its cost in multiples of valuation? The difference is often quality.
Is your patent application only good enough to get through the examination process? Or has it been crafted to stand the tests of time and varied audiences if you later need to assert that document against an infringer, find yourself litigating with it in an Article III court at the hands of a judge and jury, God forbid end up having to defend its validity at the PTAB, or even need to use it to block pirated imports at the International Trade Commission? The difference is often quality.
Quality will be our focus for a good chunk of the remainder of this season. What goes into a quality patent, and where possible, how do you get it without breaking the bank?
** Episode Overview **
In this first episode of our quality series, Kristen Hansen and the panel discuss:
⦿ What do we mean when we say patent quality?
⦿ Why is patent quality important?
⦿ How to balance quality and budget
⦿ The importance of searching, continuations, and draftsperson domain expertise
⦿ Very practical tips, tricks, examples, and Kristen’s Musts for drafting quality applications
https://www.aurorapatents.com/patently-strategic-podcast.html
BT & Neo4j: Knowledge Graphs for Critical Enterprise Systems (Neo4j)
Presented at Gartner Data & Analytics, London, May 2024. BT Group has used the Neo4j graph database to enable impressive digital transformation programs over the last six years. By re-imagining their operational support systems around self-serve and data-led principles, they have substantially reduced the number of applications and the complexity of their operations. The result has been a substantial reduction in risk and cost while improving time to value, innovation, and process automation. Join this session to hear their story, the lessons they learned along the way, and how their future innovation plans include exploring uses of EKG + generative AI.
Measuring the Impact of Network Latency at Twitter (ScyllaDB)
Widya Salim and Victor Ma will outline the causal impact analysis, framework, and key learnings used to quantify the impact of reducing Twitter's network latency.
Implementing Oracle Utility-Meter Data Management For Power Consumption (IJERD Journal)
ABSTRACT: In today's digital, mobile world, there is a pressing need to streamline business processes and increase their efficiency: effective data collection and measurement, automatic validation, editing and estimation of measurement data, analysis and dashboards for forecasting, and just-in-time end-user accessibility. This paper follows two methodologies. CEMLI is an extensive framework for developing and implementing Oracle customizations, whereas OUM (the Oracle Unified Method) is a business-process- and use-case-driven approach that supports products, tools, technologies, and documentation. The paper focuses on analytical data and system-automation functionality, along with prototype design; analysts and administrators collect and define calculation rules for data collection and measurement, deployment methods, dashboards, and security features. It also gives an understanding of cloud technologies and their features, such as software as a service (SaaS), deployment methods, security, and the ability to reduce overhead costs and downtime and to automate business processes with 360-degree review and analysis. It consolidates volumes of analog and interval data in one system, which facilitates new customer offerings and effective programs, maximizes return on investment, and protects revenue through comprehensive exception management.
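As a concrete example of the validation and estimation (VEE) step the abstract mentions, a common rule is to fill gaps of missing interval reads by linear interpolation between known neighbours; this standalone sketch is illustrative and is not Oracle MDM code.

```python
# Estimation rule for interval meter data: fill interior runs of missing
# reads (None) by linear interpolation between the surrounding known reads.
# Leading/trailing gaps are left for other rules. Illustrative sketch only.

def estimate_gaps(reads):
    """reads: list of kWh interval values, with None for missing intervals."""
    out = list(reads)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                          # find end of the gap
            if 0 < i and j < len(out):          # interior gap: interpolate
                lo, hi = out[i - 1], out[j]
                step = (hi - lo) / (j - i + 1)
                for k in range(i, j):
                    out[k] = lo + step * (k - i + 1)
            i = j
        else:
            i += 1
    return out

print(estimate_gaps([1.0, None, None, 4.0]))  # [1.0, 2.0, 3.0, 4.0]
```

A real VEE pipeline would also validate the estimate (e.g. against historical usage) and flag the intervals as estimated rather than measured.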
This document provides details about a project to create an environment and power monitoring panel using an ARM microcontroller board. It includes an introduction describing the importance of automation and sensor monitoring in industrial systems. It then provides details on the hardware and software used, including a Texas Instruments LM3S9D92 microcontroller board, sensors, and a graphical user interface design. The project aims to remotely monitor and display parameters from an industrial cabinet to improve maintenance and optimization.
Advanced Techniques for Cyber Security Analysis and Anomaly Detection (Bert Blevins)
Cybersecurity is a major concern in today's connected digital world. Threats to organizations are constantly evolving and have the potential to compromise sensitive information, disrupt operations, and lead to significant financial losses. Traditional cybersecurity techniques often fall short against modern attackers. Therefore, advanced techniques for cyber security analysis and anomaly detection are essential for protecting digital assets. This blog explores these cutting-edge methods, providing a comprehensive overview of their application and importance.
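As a concrete example of the simplest end of this spectrum, a baseline z-score detector flags observations that sit far from normal behavior; the traffic figures and 3-sigma threshold below are illustrative assumptions, not from the blog.

```python
# Baseline z-score anomaly detection: flag observations whose distance from
# the baseline mean exceeds `threshold` standard deviations.
# Data and the 3-sigma threshold are illustrative choices.

from statistics import mean, stdev

def anomalies(baseline, observations, threshold=3.0):
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observations if abs(x - mu) > threshold * sigma]

# Login counts per hour: baseline is normal traffic; 420 is a suspicious spike
baseline = [50, 52, 48, 51, 49, 50, 53, 47]
print(anomalies(baseline, [51, 49, 420]))  # [420]
```

Modern systems layer richer models (seasonal baselines, learned detectors) on top, but the flag-what-deviates principle is the same.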
Blockchain technology is transforming industries and reshaping the way we conduct business, manage data, and secure transactions. Whether you're new to blockchain or looking to deepen your knowledge, our guidebook, "Blockchain for Dummies", is your ultimate resource.
Comparison Table of DiskWarrior Alternatives (Andrey Yasko)
To help you choose the best DiskWarrior alternative, we've compiled a comparison table summarizing the features, pros, cons, and pricing of six alternatives.
UiPath Community Day Kraków: Devs4Devs Conference (UiPath Community)
We are honored to launch and host this event for our UiPath Polish Community, with the help of our partners - Proservartner!
We certainly hope we have managed to spike your interest in the subjects to be presented and the incredible networking opportunities at hand, too!
Check out our proposed agenda below 👇👇
08:30 ☕ Welcome coffee (30')
09:00 Opening note/ Intro to UiPath Community (10')
Cristina Vidu, Global Manager, Marketing Community @UiPath
Dawid Kot, Digital Transformation Lead @Proservartner
09:10 Cloud migration - Proservartner & DOVISTA case study (30')
Marcin Drozdowski, Automation CoE Manager @DOVISTA
Pawel Kamiński, RPA developer @DOVISTA
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
09:40 From bottlenecks to breakthroughs: Citizen Development in action (25')
Pawel Poplawski, Director, Improvement and Automation @McCormick & Company
Michał Cieślak, Senior Manager, Automation Programs @McCormick & Company
10:05 Next-level bots: API integration in UiPath Studio (30')
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
10:35 ☕ Coffee Break (15')
10:50 Document Understanding with my RPA Companion (45')
Ewa Gruszka, Enterprise Sales Specialist, AI & ML @UiPath
11:35 Power up your Robots: GenAI and GPT in REFramework (45')
Krzysztof Karaszewski, Global RPA Product Manager
12:20 🍕 Lunch Break (1hr)
13:20 From Concept to Quality: UiPath Test Suite for AI-powered Knowledge Bots (30')
Kamil Miśko, UiPath MVP, Senior RPA Developer @Zurich Insurance
13:50 Communications Mining - focus on AI capabilities (30')
Thomasz Wierzbicki, Business Analyst @Office Samurai
14:20 Polish MVP panel: Insights on MVP award achievements and career profiling
Paradigm Shifts in User Modeling: A Journey from Historical Foundations to Em...Erasmo Purificato
Slide of the tutorial entitled "Paradigm Shifts in User Modeling: A Journey from Historical Foundations to Emerging Trends" held at UMAP'24: 32nd ACM Conference on User Modeling, Adaptation and Personalization (July 1, 2024 | Cagliari, Italy)
Implementations of Fused Deposition Modeling in real worldEmerging Tech
The presentation showcases the diverse real-world applications of Fused Deposition Modeling (FDM) across multiple industries:
1. **Manufacturing**: FDM is utilized in manufacturing for rapid prototyping, creating custom tools and fixtures, and producing functional end-use parts. Companies leverage its cost-effectiveness and flexibility to streamline production processes.
2. **Medical**: In the medical field, FDM is used to create patient-specific anatomical models, surgical guides, and prosthetics. Its ability to produce precise and biocompatible parts supports advancements in personalized healthcare solutions.
3. **Education**: FDM plays a crucial role in education by enabling students to learn about design and engineering through hands-on 3D printing projects. It promotes innovation and practical skill development in STEM disciplines.
4. **Science**: Researchers use FDM to prototype equipment for scientific experiments, build custom laboratory tools, and create models for visualization and testing purposes. It facilitates rapid iteration and customization in scientific endeavors.
5. **Automotive**: Automotive manufacturers employ FDM for prototyping vehicle components, tooling for assembly lines, and customized parts. It speeds up the design validation process and enhances efficiency in automotive engineering.
6. **Consumer Electronics**: FDM is utilized in consumer electronics for designing and prototyping product enclosures, casings, and internal components. It enables rapid iteration and customization to meet evolving consumer demands.
7. **Robotics**: Robotics engineers leverage FDM to prototype robot parts, create lightweight and durable components, and customize robot designs for specific applications. It supports innovation and optimization in robotic systems.
8. **Aerospace**: In aerospace, FDM is used to manufacture lightweight parts, complex geometries, and prototypes of aircraft components. It contributes to cost reduction, faster production cycles, and weight savings in aerospace engineering.
9. **Architecture**: Architects utilize FDM for creating detailed architectural models, prototypes of building components, and intricate designs. It aids in visualizing concepts, testing structural integrity, and communicating design ideas effectively.
Each industry example demonstrates how FDM enhances innovation, accelerates product development, and addresses specific challenges through advanced manufacturing capabilities.
Coordinate Systems in FME 101 - Webinar SlidesSafe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datams and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
RPA In Healthcare Benefits, Use Case, Trend And Challenges 2024.pptxSynapseIndia
Your comprehensive guide to RPA in healthcare for 2024. Explore the benefits, use cases, and emerging trends of robotic process automation. Understand the challenges and prepare for the future of healthcare automation
Support en anglais diffusé lors de l'événement 100% IA organisé dans les locaux parisiens d'Iguane Solutions, le mardi 2 juillet 2024 :
- Présentation de notre plateforme IA plug and play : ses fonctionnalités avancées, telles que son interface utilisateur intuitive, son copilot puissant et des outils de monitoring performants.
- REX client : Cyril Janssens, CTO d’ easybourse, partage son expérience d’utilisation de notre plateforme IA plug & play.
Mitigating the Impact of State Management in Cloud Stream Processing SystemsScyllaDB
Stream processing is a crucial component of modern data infrastructure, but constructing an efficient and scalable stream processing system can be challenging. Decoupling compute and storage architecture has emerged as an effective solution to these challenges, but it can introduce high latency issues, especially when dealing with complex continuous queries that necessitate managing extra-large internal states.
In this talk, we focus on addressing the high latency issues associated with S3 storage in stream processing systems that employ a decoupled compute and storage architecture. We delve into the root causes of latency in this context and explore various techniques to minimize the impact of S3 latency on stream processing performance. Our proposed approach is to implement a tiered storage mechanism that leverages a blend of high-performance and low-cost storage tiers to reduce data movement between the compute and storage layers while maintaining efficient processing.
Throughout the talk, we will present experimental results that demonstrate the effectiveness of our approach in mitigating the impact of S3 latency on stream processing. By the end of the talk, attendees will have gained insights into how to optimize their stream processing systems for reduced latency and improved cost-efficiency.
Fluttercon 2024: Showing that you care about security - OpenSSF Scorecards fo...Chris Swan
Have you noticed the OpenSSF Scorecard badges on the official Dart and Flutter repos? It's Google's way of showing that they care about security. Practices such as pinning dependencies, branch protection, required reviews, continuous integration tests etc. are measured to provide a score and accompanying badge.
You can do the same for your projects, and this presentation will show you how, with an emphasis on the unique challenges that come up when working with Dart and Flutter.
The session will provide a walkthrough of the steps involved in securing a first repository, and then what it takes to repeat that process across an organization with multiple repos. It will also look at the ongoing maintenance involved once scorecards have been implemented, and how aspects of that maintenance can be better automated to minimize toil.
YOUR RELIABLE WEB DESIGN & DEVELOPMENT TEAM — FOR LASTING SUCCESS
WPRiders is a web development company specialized in WordPress and WooCommerce websites and plugins for customers around the world. The company is headquartered in Bucharest, Romania, but our team members are located all over the world. Our customers are primarily from the US and Western Europe, but we have clients from Australia, Canada and other areas as well.
Some facts about WPRiders and why we are one of the best firms around:
More than 700 five-star reviews! You can check them here.
1500 WordPress projects delivered.
We respond 80% faster than other firms! Data provided by Freshdesk.
We’ve been in business since 2015.
We are located in 7 countries and have 22 team members.
With so many projects delivered, our team knows what works and what doesn’t when it comes to WordPress and WooCommerce.
Our team members are:
- highly experienced developers (employees & contractors with 5 -10+ years of experience),
- great designers with an eye for UX/UI with 10+ years of experience
- project managers with development background who speak both tech and non-tech
- QA specialists
- Conversion Rate Optimisation - CRO experts
They are all working together to provide you with the best possible service. We are passionate about WordPress, and we love creating custom solutions that help our clients achieve their goals.
At WPRiders, we are committed to building long-term relationships with our clients. We believe in accountability, in doing the right thing, as well as in transparency and open communication. You can read more about WPRiders on the About us page.
Abstract
The automation of public electricity distribution has developed very rapidly in the past few years. The same basis can be used to develop new intelligent applications for electricity distribution networks in industrial plants. Many new applications have to be introduced because of the different environment and needs of the industrial sector. The paper includes a system description of industrial electric system management. The paper discusses the requirements of new applications and methods that can be used to solve problems in the areas of distribution management and condition monitoring of industrial networks.
CONTENTS
1 Introduction
2 Applications for supporting the public distribution network management
3 Description of the system environment
4 Application functions for distribution management in industrial plants
5 Advanced Distribution Automation
5.1 Distribution System of the Future with ADA
6 Distribution Management Functions
7 Application Functions of Data Management Systems
7.1 Load modeling
7.2 Reliability management
7.3 Voltage dip analyses
7.4 Power quality analyses
7.5 Condition monitoring
8 Conclusion
9 Bibliography
Introduction
Industrial plants have put continuous pressure on advancing process automation. However, there has not been as much focus on the automation of electricity distribution networks, although uninterrupted electricity distribution is a basic requirement for the process. A disturbance in the electricity supply causing a shutdown of the process may cost a huge amount of money. Thus the intelligent management of electricity distribution, including, for example, preventive condition monitoring and on-line reliability analysis, is of great importance. Nowadays these needs have aroused increased interest in the electricity distribution automation of industrial plants. The automation of public electricity distribution has developed very rapidly in the past few years. Very promising results have been gained, for example, in decreasing customer outage times. However, the same concept cannot be applied as such in the field of industrial electricity distribution, although the bases of the automation systems are common. The infrastructures of different industrial plants vary more from each other than those of public electricity distribution, which is a more homogeneous domain. The automation devices, computer systems, and databases are not at the same level, and their integration is more complicated.
Applications for supporting the public distribution network management
It was seen already at the end of the 1980s that a conventional automation system (i.e. SCADA) cannot solve all the problems related to network operation. On the other hand, the various computer systems (e.g. AM/FM/GIS) contain a vast amount of data that is useful in network operation. The operators also had considerable heuristic knowledge to be utilized. Thus new tools for practical problems were called for, to which AI-based methods (e.g. the object-oriented approach, rule-based techniques, uncertainty modeling and fuzzy sets, hypertext techniques, neural networks and genetic algorithms) offer new problem-solving capabilities. As a result, a computer system entity called a distribution management system (DMS) has been developed. The DMS is part of an integrated environment composed of the SCADA, distribution automation (e.g. microprocessor-based protection relays), the network database (i.e. AM/FM/GIS), the geographical database, the customer database, and the automatic telephone answering system. The DMS includes many intelligent applications needed in network operation, for example normal-state monitoring and optimization, real-time network calculations, short-term load forecasting, switching planning, and fault management.
The core of the whole DMS is the dynamic object-oriented network model. The distribution network is modeled as dynamic objects generated from the network data read from the network database. The network model includes the real-time state of the network (e.g. topology and loads). Different network operation tasks call for different kinds of problem-solving methods. The various modules can interact with each other through the network model, which works as a blackboard (e.g. the results of load-flow calculations are stored in the network model, where they are available to all other modules for different purposes). The present DMS is a Windows NT program implemented in Visual C++. The prototyping involved an iteration loop of knowledge acquisition, modeling, implementation, and testing. Prototype versions were tested in a real environment from the very beginning, so feedback on new inference models, external connections, and the user interface was obtained at a very early stage. The aim of a real application in the technical sense was thus achieved. The DMS entity was tested in the pilot company, Koillis-Satakunnan Sähkö Oy, which has about 1000 distribution substations and 1400 km of 20 kV feeders. In the pilot company, different versions of the fault location module have been used in over 300 real faults in the past years. Most of the faults have been located with an accuracy of a few hundred meters, while the distance of a fault from the feeding point has ranged from a few to tens of kilometers. The fault location system, together with other automation, has been one reason for the reduced customer outage times (about 50 % over the past 8 years).
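The blackboard idea described above, independent modules reading and writing shared network-model state rather than calling each other directly, can be sketched as follows. This is a minimal illustration only; the class and method names are invented here and are not the DMS's actual API.

```python
# Minimal blackboard sketch: modules never call each other directly;
# they read and write the shared network model instead.
class NetworkModel:
    """The blackboard: real-time topology plus stored module results."""
    def __init__(self, topology):
        self.topology = topology     # node -> list of connected nodes
        self.results = {}            # e.g. "load_flow" -> {node: value}

class LoadFlowModule:
    def run(self, model):
        # Placeholder calculation: store per-node loading on the blackboard
        # so monitoring, planning, etc. can use it without coupling.
        model.results["load_flow"] = {n: 0.8 + 0.1 * len(adj)
                                      for n, adj in model.topology.items()}

class MonitoringModule:
    def overloaded_nodes(self, model, limit=0.95):
        # Reads the load-flow results straight off the blackboard.
        flows = model.results.get("load_flow", {})
        return sorted(n for n, v in flows.items() if v > limit)

model = NetworkModel({"A": ["B", "C"], "B": ["A"], "C": ["A"]})
LoadFlowModule().run(model)
print(MonitoringModule().overloaded_nodes(model))  # → ['A']
```

The point of the pattern is the decoupling: a new module (say, switching planning) only needs to know the shape of the shared model, not the internals of any other module.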
The experiences as a whole were so encouraging that the DMS was developed into a commercial product. The vendor was at first a small Finnish software company. Since 1997 the DMS has been a worldwide software product of ABB Transmit Oy, integrated into the MicroSCADA platform. At present the DMS is in everyday use in several distribution companies all over the world. Part of the research group behind the development of the DMS now works as employees of ABB, which has confirmed the successful commercialization.
Description of the system environment
A big industrial plant differs from a public distribution company in its organizational structure and system environment. Production is divided into many departments or many companies. These units are responsible for production and maintenance. Very often the maintenance is handled by a service company. An energy department or company is in charge of local energy production and of the distribution network. The above organizations may have some control systems that serve their needs only, but usually the information systems are closely connected together. A process automation system is the most important system in an industrial plant, sometimes including other systems, as illustrated in Fig. 1. For example, all energy production and distribution network control tasks can be done in a process automation system. Normally, for reliability reasons, vital parts of distribution network control are independent of the process automation. Independence from the process automation system vendor has been another reason for separate systems.
Figure 1: Automation and information systems of an industrial plant.
The systems in Fig. 1 utilize many databases, which contain data that can be used in new applications. Process automation systems collect data for process monitoring and optimization tools. The databases contain information on material flows, energy flows and the control data of production machines. Maintenance databases include technical specifications and condition data of production machine components. Similar information on electricity network components is provided by the network database. Production programs are stored in the databases of administrative systems.
Intelligent applications are needed to:
- Handle the large amount of information available. This includes filtering data and producing new information by combining data.
- Illustrate the complex dependencies of electricity distribution and production processes in abnormal situations.
- Give instructions to operators in fault situations. The risk of misoperation in an unusual fault situation is obvious and prevents or delays operators' decision making.
- Automate analysis tasks. Continuous information analysis is not possible manually.
In order to introduce new intelligent applications for the management of electric systems in industrial plants, a basis for implementation is needed. The following requirements should be satisfied:
- Documentation of the electricity distribution network is available to the systems. Network databases can supply this information.
- Network, process and motor measurements are available to the system. This means that data acquisition from multiple sources, with the capability to use various data transfer methods, is needed, as illustrated in Fig. 2.
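One common way to meet the multi-source acquisition requirement is a thin adapter layer in which every source, whatever its transfer method, exposes the same read interface. The sketch below is purely illustrative; the source classes and measurement values are invented, not taken from the paper.

```python
# Illustrative adapter layer: each data source exposes read() returning
# a {(measurement_point, quantity): value} snapshot, so the application
# layer is independent of the underlying transfer method.
class ScadaSource:
    def read(self):
        return {("feeder1", "current_A"): 210.0}   # made-up SCADA value

class ProcessDbSource:
    def read(self):
        return {("line3", "power_MW"): 4.2}        # made-up process value

class MotorMonitorSource:
    def read(self):
        return {("motor7", "temp_C"): 68.5}        # made-up motor value

def acquire(sources):
    """Pull one snapshot from every source into a single merged view."""
    merged = {}
    for s in sources:
        merged.update(s.read())
    return merged

snapshot = acquire([ScadaSource(), ProcessDbSource(), MotorMonitorSource()])
print(len(snapshot))  # → 3
```

A real implementation would wrap actual protocol clients behind the same interface; the application functions only ever see the merged snapshot.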
Application functions for distribution management in industrial plants
As mentioned above, the concept of public distribution automation cannot be applied as such in the management of industrial electricity networks. For example, fast and accurate fault location is of great importance for reducing customer outage times in public electricity distribution, while there is no special need for such a function in industrial networks. Predictive condition monitoring, reliability calculations, and protection relay coordination to prevent disturbances in advance are more important. The features of industrial networks create a need for methods to model dynamic phenomena and harmonics, and to calculate load flow and fault currents in ring-connected networks. An essential need is load modeling, which differs considerably from that of public distribution. The basis of the distribution management system (i.e. the use of the network model as a blackboard) is common to both domains. The network model includes the real-time topology and the network calculation results in the prevailing switching and load conditions. The main functions of the system entity for industrial networks are listed in the following:
* Real-time network monitoring, state estimation and optimization:
- Topology management
- Load flow and fault currents, including dynamic phenomena
- Monitoring and compensation of reactive power
- Monitoring of harmonics and resonances
- Minimization of power losses
* Planning and simulation of operation actions:
- Switching planning
- Automatic load shedding and forming a local island
- Switching the network as a part of the national grid
- Fault situations
* Management of disturbances:
- Event analysis
- Fault location and network restoration
- Preventive condition monitoring
- Protection relay coordination
- Reliability calculations
- Reporting
Distribution automation, which includes feeder automation and distribution management systems (DMS), is an important technique in distribution networks. The distribution management systems are composed of distribution management functions (DMF). The DMF is an entity which incorporates different applications on a single platform over which supervision is performed. It mainly supports the documentation of network data, and the planning, operation and reliability management of distribution networks. The main application functions for distribution management in industrial plants are load modeling, reliability management, power quality analysis, voltage dip analysis and condition monitoring. All of these are incorporated in the domain of distribution management functions.
ADVANCED DISTRIBUTION AUTOMATION
Traditional distribution systems were designed to perform one function: distributing power to end users. The distribution system of the future will be more versatile and multifunctional.
The strategic drivers for ADA are to
• Improve system performance
• Reduce outage times
• Allow the efficient use of distributed energy resources
• Provide the customer with more choices and
• Integrate the customer systems
For ADA to work, the various intelligent devices must be interoperable both in the electric system architecture and in the communication and control architecture.
Figure 3: ADA architecture
ADA will enable the distribution system to be configured in new ways, for such things as looped secondaries or intentional islanding, to facilitate easy recovery from outages and to deal with other emergencies.
Figure 4
The three major components of ADA are
– Flexible electrical system architecture
– Real-time state estimation tools
– Communication and control system based on open architecture standards
The intelligent universal transformer is a prime example of a new electronic device that
will be a cornerstone of ADA. It will provide a variety of functions including
– Voltage stepping
– Voltage regulation
– Power quality enhancement
– New customer service options such as DC power output
– Power electronic replacement for conventional copper and iron transformers
The Flexible Electric Architecture and the Open Communications Architecture synergistically empower each other to create the distribution system of the future. Each is made more valuable by its interaction with the other.
ADA will provide improvements in many areas including
– Reliability
– System performance
– Condition monitoring
– Outage detection and restoration
– Maintenance practices and prioritization
– Automated switching and fault management
– Reactive power and voltage management
– Loss reduction and load management
– Customer service options
DISTRIBUTION MANAGEMENT FUNCTIONS
Distribution management functions form an entity of applications supporting the documentation of network data, and the planning, operation and reliability management of distribution networks in industrial plants. The functions can be included in different computer systems, such as AM/FM/GIS, the Distribution Management System (DMS), and SCADA, or in case-specific customized applications. The main functions of the distribution management entity for industrial networks are listed in the following:
• Documentation of network data
• Graphical user interfaces
• Real-time network monitoring, state estimation and optimization
- Topology management, load flow and fault current calculation, monitoring and compensation of reactive power, monitoring of harmonics and resonance, and minimization of power losses
• Planning and simulation of operation actions
- Switching planning, fault situations, automatic load shedding and forming a local island
• Management of disturbances and reliability
- Preventive condition monitoring, reliability and availability management, protection relay coordination, event analysis, fault location and network restoration, reporting
Because of the features of industrial networks, the importance of the distribution management functions differs from that in public electricity networks. There are also needs for new methods. An essential need is load modeling, which differs considerably from that of public distribution. Predictive condition monitoring, reliability management, and protection relay coordination to prevent disturbances in advance are of great importance. Some functions of the DMS for the management of public distribution networks, e.g. topology management, can be applied almost as such in the management of industrial electricity networks.
APPLICATION FUNCTIONS OF DATA MANAGEMENT SYSTEMS
1) Load modeling
The essential basis for advanced application functions is the modeling of the loads connected to the network. Usually there are only a few measurement points in the network. However, the loading of every load node of the network must be known in the network calculations. For that purpose the loads are estimated by load models.
The essential need for the load models is that they form a basis for the load-flow calculations. The results of load-flow calculations are utilized in different kinds of tasks, such as real-time network monitoring and optimization, and switching planning. Information on loads can also be utilized in preventive condition monitoring and reliability analyses. Although the loads (i.e. the currents) of some nodes can be measured on-line, models are needed because the DMS can also be used in a simulated state, when the system information does not correspond to the current real-time state of the distribution network.
In the domain of public electricity distribution, hourly load curves have been determined for each customer group to be used in load-flow calculation and load forecasting. In industrial plants the load modeling should be based mainly on the process itself and its behavior. Load models can be determined by making enough measurements in different known process conditions. However, industrial plants vary from each other quite a lot, which means that load models determined in one plant may not be usable as such in another. One aim of the research work is to develop tools and methods by which the plant-specific load models can be determined during the installation of the automation system, when enough measurements have been made and certain process-specific parameters are known. Neural networks can be used to learn the correlations between the measurements and the process in order to produce the load model.
Significant features of the load models are speed, simplicity, the capability to utilize measured information, the capability to utilize inaccurate information, and the capability to adapt to alternating and differing conditions. The state monitoring of the DMS operates in real time, which places demands on the speed of the load models. Furthermore, the industrial processes will develop over time, so the load models must be able to adapt to varied situations.
The demands mentioned above can be met using advanced methods and technologies. This means using neural network technology, fuzzy logic and self-adaptive techniques in the further development of the load models of industrial distribution networks.
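As a concrete illustration of the basic idea, estimating every node's load from models when only a few points are metered, one simple approach is to scale the modeled node loads so that their sum matches a measured feeder total. The node names and figures below are invented for illustration; real load models would also depend on the process condition.

```python
# Hypothetical example: three load nodes with modeled loads (kW) for the
# current process condition, reconciled against one metered feeder total.
model_loads = {"M1": 120.0, "M2": 80.0, "M3": 200.0}  # modeled kW per node
measured_feeder_kw = 360.0                            # the only real measurement

# Scale every modeled load by the same factor so the sum matches the meter.
scale = measured_feeder_kw / sum(model_loads.values())
estimates = {node: kw * scale for node, kw in model_loads.items()}
print(estimates)  # each node scaled by 0.9 so the total is 360 kW
```

This proportional reconciliation is the simplest possible estimator; the text's point is that the per-node model values themselves must come from the process behavior, e.g. learned by a neural network from measurement campaigns.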
Figure 5: Network load model determination
Load forecasting in the industrial environment cannot be based on any regularity of behavior. Reliable forecasting requires methods that can utilize production plans over various time horizons, which may differ considerably from each other and may include inaccurate information. The load forecasting of a network feeding some process is based on the known behavior of the process, earlier measured values and the planned production.
Calculation methods for meshed networks
The DMS for public distribution management included load-flow and fault current calculation procedures which worked only in radial networks. The need to calculate meshed networks in industrial distribution is, however, obvious (e.g. there are several fault current sources).
Load-flow calculation for a meshed network leads to a group of non-linear equations. Classic Newton-Raphson iteration is considered to be the most competent method for solving the load-flow equations, and it was selected as the solver. Fault current calculation is performed only for the symmetrical three-phase case. In fact, the calculation can be done simply by inverting a matrix. Calculating the inverse of the matrix with conventional methods is, however, too laborious and was therefore discarded. Instead, an algorithm called the Z-bus algorithm is used for calculating the inverse effectively.
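To make the matrix-inversion remark concrete: the symmetrical three-phase fault current at bus k is the prefault voltage divided by the driving-point impedance Z_kk of the bus impedance matrix Z = Ybus⁻¹. The tiny network below is invented for illustration, and for a network this small direct inversion is fine; the point of the Z-bus building algorithm is to obtain the needed Z_kk entries for large networks without forming the full inverse.

```python
import numpy as np

# Invented 3-bus network: build Ybus from line impedances, then the
# three-phase fault current at bus k is V_prefault / Z[k, k].
lines = [(0, 1, 0.10j), (1, 2, 0.10j), (0, 2, 0.20j)]  # (from, to, z in p.u.)
n = 3
Y = np.zeros((n, n), dtype=complex)
for i, j, z in lines:
    y = 1.0 / z
    Y[i, i] += y; Y[j, j] += y
    Y[i, j] -= y; Y[j, i] -= y
Y[0, 0] += 1.0 / 0.05j        # source impedance to ground at bus 0

Z = np.linalg.inv(Y)          # Z-bus; building algorithms avoid this step
fault_bus = 2
I_fault = 1.0 / Z[fault_bus, fault_bus]   # prefault voltage 1.0 p.u.
print(abs(I_fault))  # ≈ 6.67 p.u. (driving-point impedance 0.15 p.u.)
```

The driving-point impedance at bus 2 can be checked by hand: the 0.2 p.u. line in parallel with the 0.1 + 0.1 p.u. path gives 0.1 p.u., plus the 0.05 p.u. source impedance, i.e. 0.15 p.u.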
The load-flow and fault current algorithms are implemented as part of the DMS so that they can utilize the common network model and topology analysis. The primary information for the load-flow calculation is the loads of the secondary substations and of the motors connected to the medium voltage network. The loading information is read from an Access database containing the load models for different situations. The results of the load-flow and fault current calculations can be studied through the user interface of the DMS by selecting the desired node.
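The Newton-Raphson iteration mentioned above can be sketched on the smallest possible case, a two-bus system with a slack bus and one PQ load bus. All network values here are invented for illustration, and a finite-difference Jacobian is used to keep the sketch short; a real meshed solver builds the analytic Jacobian over every bus.

```python
import numpy as np

# Newton-Raphson load flow for a two-bus system (illustrative values).
# Bus 1: slack, V = 1.0 p.u., angle 0.  Bus 2: PQ bus drawing
# P = 0.5, Q = 0.2 p.u. over a line with admittance y = g + jb.
g, b = 1.0, -10.0            # line conductance / susceptance (p.u.)
P_load, Q_load = 0.5, 0.2    # demand at bus 2 (p.u.)

def mismatch(x):
    """Power mismatch at bus 2; Newton drives this vector to zero."""
    v, th = x                # bus-2 voltage magnitude and angle
    p_inj = g * v**2 + v * (-g * np.cos(th) - b * np.sin(th))
    q_inj = -b * v**2 + v * (-g * np.sin(th) + b * np.cos(th))
    return np.array([p_inj + P_load, q_inj + Q_load])

def newton(f, x0, tol=1e-10, max_iter=20):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx)) < tol:
            break
        # Finite-difference Jacobian keeps the sketch short.
        h = 1e-7
        J = np.empty((len(x), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - fx) / h
        x = x - np.linalg.solve(J, fx)   # Newton update step
    return x

v2, th2 = newton(mismatch, [1.0, 0.0])   # flat start: V = 1.0, angle 0
```

From the flat start the iteration converges in a few steps to a slightly depressed voltage and a small negative angle at the load bus, which is the expected physical behavior.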
2) Reliability management
The functions related to reliability have considerable economic significance in industry. They combine the production losses caused by disturbances with the investments into the systems, including maintenance and operational arrangements.
Reliability can be studied with both qualitative and quantitative methods. In a qualitative analysis, the possible states of the system, and the causes leading to them, are determined with non-numerical methods. Failure mode, effects and criticality analyses are generally counted among the qualitative methods. Failure mode, effects and criticality analysis aims to identify those faults of the devices or subsystems which significantly affect the capabilities of the system. The system is systematically analyzed and the effects of the component faults are evaluated. In a quantitative analysis, indicators describing the capabilities of the system are calculated. For example, availability, fault frequencies, the durations of disturbances and indicators describing the economic cost of interruptions can be evaluated. The functions supporting power distribution reliability management can be included in several different systems, among others AM/FM/GIS, the Distribution Management System (DMS), the SCADA system, maintenance systems, and documentation systems, depending on the total concept.
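A minimal example of the quantitative indicators mentioned above is availability derived from a fault frequency and a mean repair time, combined in series over a feeding route. The component figures below are invented; the formulas are the standard steady-state ones.

```python
# Steady-state availability from fault frequency and mean time to repair:
# A = MTBF / (MTBF + MTTR).  A feeding route of components in series is
# available only if every component is, so availabilities multiply.
HOURS_PER_YEAR = 8760.0

def availability(faults_per_year, mttr_hours):
    mtbf_hours = HOURS_PER_YEAR / faults_per_year
    return mtbf_hours / (mtbf_hours + mttr_hours)

def route_availability(components):
    """components: list of (faults_per_year, mttr_hours) in series."""
    a = 1.0
    for f, r in components:
        a *= availability(f, r)
    return a

# Invented feeding route: transformer, cable section, switchgear bay.
route = [(0.02, 24.0), (0.10, 8.0), (0.05, 4.0)]
print(route_availability(route))  # close to 1, roughly 0.9998
```

From the route availability, an expected annual outage duration follows directly as (1 - A) x 8760 hours, which is the kind of indicator used to price interruptions economically.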
Load-flow and short-circuit calculations are applications of central importance in
reliability analyses. They make it possible to simulate faults and to plan relaying
arrangements and network operations. Switching plans and operational instructions can
furthermore be stored in databases. An essential function supporting reliability
management and analyses is also the management of the various instructions and
documents; many kinds of documents can be used to support reliability management.
The graphical user interface enables the development of sophisticated, user-friendly
functions, for example determination of the feeding routes of the components or
loads to be examined.
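As an illustration of the short-circuit side of these calculations, the initial symmetrical three-phase fault current can be estimated in the IEC 60909 style from the short-circuit impedance seen at the fault node. The network values below are hypothetical; only the formula I_k = c·U_n / (√3·|Z_k|) is taken from the standard.

```python
import math

def three_phase_fault_current(u_n_v, z_k_ohm, c=1.1):
    """Initial symmetrical short-circuit current I_k = c*U_n / (sqrt(3)*|Z_k|),
    where U_n is the nominal line-to-line voltage, Z_k the complex
    short-circuit impedance at the fault location, and c the voltage
    factor (c_max = 1.1 for MV networks per IEC 60909)."""
    return c * u_n_v / (math.sqrt(3) * abs(z_k_ohm))

# Hypothetical 20 kV node with Z_k = 0.5 + j2.0 ohm
i_k = three_phase_fault_current(20e3, complex(0.5, 2.0))  # roughly 6.2 kA
```

A result like this is what the DMS would compare against relay settings when planning relaying arrangements.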
Estimation of the reliability-related technical state and capabilities of the
distribution system, together with real-time condition supervision and maintenance
programmes, plays a central role in anticipating and preventing disturbances and in
minimizing their effects.
The analysis of the reliability-related technical state and capability of the power
distribution network is also closely related to protection coordination. Using fault
current and load-flow calculations, personnel can evaluate how the distribution
system and the primary processes will behave in fault situations of the distribution
network.
3) Voltage dip analyses
A voltage dip is a sudden reduction of the supply voltage to a value between 90 %
and 1 % of the declared voltage, followed by a voltage recovery after a short period of
time. Possible causes of these dips are typically faults in installations or in feeding public
networks and switching of large loads (e.g. motors). In rural areas voltage dips are
generally caused by short circuit faults in the public MV overhead network. The interest
in voltage dips is mainly due to the problems they cause on several types of equipment e.g.
tripping of adjustable-speed drives (both ac and dc drives), process-control equipment,
computers and contactors upstream of some devices. The employment of IUT, with the
support of ADA, is a step towards reducing the impact of these voltage dips.
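The 90 %/1 % thresholds defined above can be applied directly to a sequence of per-cycle RMS values; a minimal sketch follows (the RMS trace is invented, and 20 ms per sample assumes a 50 Hz cycle).

```python
def classify_dips(rms_samples, u_declared, dt_ms=20.0):
    """Scan per-cycle RMS values and report voltage dips: intervals where
    the voltage lies between 1 % and 90 % of the declared value.
    Below 1 % the event would count as an interruption, not a dip."""
    dips = []
    start = None
    for i, u in enumerate(rms_samples):
        in_dip = 0.01 <= (u / u_declared) < 0.90
        if in_dip and start is None:
            start = i                         # dip begins
        elif not in_dip and start is not None:
            residual = min(rms_samples[start:i])
            dips.append({"duration_ms": (i - start) * dt_ms,
                         "residual_pu": residual / u_declared})
            start = None                      # dip ends, record it
    return dips

# Hypothetical trace: a dip to about 0.55 p.u. lasting three cycles
trace = [230.0, 230.0, 127.0, 127.0, 127.0, 230.0, 230.0]
events = classify_dips(trace, u_declared=230.0)
```

Classifying dips by residual voltage and duration in this way is the usual first step before assessing which equipment (drives, contactors, computers) a given dip would trip.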
4) Power quality analyses
The term Power Quality (PQ) is used with slightly different meanings. In its broader
sense it covers any problem with voltage, current or frequency deviations that
results in failure, malfunction or disturbance, i.e. a combination of voltage
quality and current quality. In most cases, however, it is voltage quality that is
addressed. Voltage quality is concerned with deviations of the voltage from the
ideal, and its main characteristics can be described in terms of frequency,
magnitude, waveform, symmetry of the three phase voltages, and interruptions. In
industrial plants, on the one hand a growing number of disturbing devices (e.g.
adjustable drives and power electronics) and on the other hand a growing number of
sensitive devices (computers, process automation, electronic devices and adjustable
drives) have caused increasing concern about power quality. Thus there is also a
growing need to manage and monitor power quality.
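One of the magnitude-related characteristics mentioned above, the symmetry of the three phase voltages, can be quantified from the phase magnitudes alone. The sketch below uses the NEMA-style unbalance definition (maximum deviation from the mean, as a percentage of the mean); the IEC definition based on symmetrical components is stricter, and the phase values here are hypothetical.

```python
def voltage_unbalance_pct(u_a, u_b, u_c):
    """Approximate three-phase voltage unbalance: the maximum deviation
    of a phase magnitude from the mean, as a percentage of the mean
    (NEMA LVUR style)."""
    mean = (u_a + u_b + u_c) / 3.0
    max_dev = max(abs(u_a - mean), abs(u_b - mean), abs(u_c - mean))
    return 100.0 * max_dev / mean

# Hypothetical phase-voltage magnitudes in volts
ub = voltage_unbalance_pct(230.0, 232.0, 225.0)  # about 1.75 %
```

A PQ monitoring function would track an indicator like this over time and raise an alarm when it exceeds a plant-specific limit.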
5) Condition monitoring
There exist many systems for condition monitoring of industrial processes,
especially for rotating machines. Monitoring usually covers electric motors that are
connected to the monitored processes. There are also on-line systems designed mainly
for condition monitoring of electric motors. These systems usually include a
measuring device connected to a processing device, which can either be connected
permanently to a data bus supplying information to an analyzing computer, or from
which data can be collected occasionally. The choice between continuous data
transfer and manual data collection is driven mainly by the costs of instrumentation
and labour. Electric motors are often considered to be very reliable, which means
that such an investment is not always economically justified.
On-line condition monitoring of the components of the electricity distribution
network itself is not commonly used. Protection relays include some functions for
condition monitoring, such as relay self-diagnostics and operation counters.
The applications described above are required to collect data from various sources,
for example from process automation, the electricity grid, and the energy management
system. These systems contain, or are able to collect, data that can be used for
condition monitoring purposes. Process automation and energy management can provide
energy, power, current and temperature measurements of motors, as well as
measurements of the output quantity of a drive, such as the mass flow of a pump.
Electricity grid protection and measuring devices supply quantitative, and sometimes
also qualitative, information on voltage and current. Some useful information on the
condition of components can be obtained simply by collecting and analyzing the
information already available.
Database information is used in condition monitoring and condition planning of
network components as follows:
* Component data from the network database:
- Date of installation, model, and nominal life time
- Plan for service and replacement investments
* Operation counters and operation time of switches and disconnectors:
- Mechanical condition can be estimated
- Test instruction for unused disconnectors to prevent sticking
* Integrated lifetime (estimate of aging)
* Reliability analysis:
- Topology information and estimated reliability of components in a given load situation
* Analysis (reconstruction) of actual faults:
- Simulated network state using topology, load and voltage information of the
previous situation.
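The database items listed above lend themselves to simple rule-based condition checks. The sketch below is a hypothetical illustration of such rules (field names, thresholds and the example disconnector are invented), not a description of any particular DMS.

```python
from datetime import date

def condition_flags(comp, today=date(2024, 1, 1)):
    """Derive condition-monitoring flags from network-database fields:
    age versus nominal lifetime, operation counter versus rated
    operations, and a test reminder for long-unused disconnectors."""
    flags = []
    age_years = (today - comp["installed"]).days / 365.25
    if age_years > comp["nominal_life_y"]:
        flags.append("replacement due")           # past nominal lifetime
    if comp["operations"] > comp["rated_operations"]:
        flags.append("mechanical wear: inspect")  # counter over rating
    if comp["operations"] == 0 and age_years > 2.0:
        flags.append("unused: test to prevent sticking")
    return flags

# Hypothetical disconnector record from the network database
disconnector = {"installed": date(2005, 6, 1), "nominal_life_y": 30,
                "operations": 0, "rated_operations": 2000}
flags = condition_flags(disconnector)
```

For this record the only flag raised is the test reminder for the unused disconnector, matching the test-instruction rule in the list above.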
Conclusion
The requirements of intelligent software applications for supporting the operation
of industrial distribution networks differ from those for public distribution
networks. The domain is more segmented and heterogeneous, and the infrastructure of
automation and computer systems for electricity networks is not as sophisticated and
advanced as that of other process automation.
On the other hand, the prospects for applying intelligent software methods are
promising from the point of view of end-user attitudes, because the same kinds of
methods have been successfully applied in process automation, e.g. in fuzzy control
and in system modeling using neural networks. This paper discusses the requirements
of intelligent methods in this new domain, introduces the system environment and
presents initial results gained in the research work. Intelligent management will
provide improvements in many areas, including reliability, system performance, loss
reduction and load management.
The emergence of intelligent management is a promising step towards efficient
maintenance and complete automation.
BIBLIOGRAPHY
1) Jero, A., "Load modeling for distribution management function of industrial
medium voltage distribution networks", IEEE Transactions on Industry Applications,
Vol. 32, No. 4, January 2001.
2) Frank R. Goodman, Jr., Ph.D., "Advanced Distribution Automation", www.epri.com.
3) Markku Kauppinen, Tampere University of Technology, Finland, "Management of
electrical systems in industrial plants", www.energyline.com.
4) Lijun Qin, "A new principle for system protection in distribution networks",
IEEE Transactions on Power Delivery, Vol. 10, No. 4, June 2001.
5) Monclar, F.R., "Intelligent support system for distribution network management",
International Conference on Intelligent System Application to Power Systems,
Sweden, June 2000.