The increased size of modern power systems
demands faster and more accurate means of security assessment,
so that decisions for reliable and secure operation planning
can be drawn in a systematic manner. Large computational
overhead is the major impediment preventing power
system security assessment (PSSA) from on-line use. To
mitigate this problem, this paper proposes a cluster-computing-based
architecture for power system static security assessment,
utilizing tools from the open-source domain. A variant of the
master/slave pattern is used to deploy the cluster of
workstations (COW), which acts as the computational engine
for on-line PSSA. The security assessment is performed
using the developed composite security index, which can
accurately differentiate secure from non-secure cases and
is defined as a function of bus-voltage and line-flow
limit violations. Owing to the inherently parallel structure of the
security assessment algorithm, and to exploit the potential of
distributed computing, domain decomposition is employed to
parallelize the sequential algorithm. Extensive
experiments were carried out on the IEEE 57-bus and IEEE
145-bus, 50-machine standard test systems to demonstrate
the validity of the proposed architecture.
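A minimal sketch of such a composite index, assuming a simple sum of squared normalized violations (the abstract does not reproduce the paper's exact definition, so the function name and form below are hypothetical):

```python
# Hypothetical composite static security index: a sum of squared normalized
# bus-voltage and line-flow limit violations. The paper's exact definition
# is not given in the abstract; this form is an illustrative assumption.

def composite_security_index(voltages, v_min, v_max, flows, flow_limits):
    """Return a scalar index; 0.0 means no limit violations (secure case)."""
    index = 0.0
    for v in voltages:
        if v < v_min:                      # under-voltage violation
            index += ((v_min - v) / v_min) ** 2
        elif v > v_max:                    # over-voltage violation
            index += ((v - v_max) / v_max) ** 2
    for f, limit in zip(flows, flow_limits):
        overload = abs(f) - limit
        if overload > 0:                   # line-flow limit violation
            index += (overload / limit) ** 2
    return index

# A case with one over-voltage bus and one overloaded line is non-secure:
idx = composite_security_index(
    voltages=[1.0, 1.08, 0.97], v_min=0.95, v_max=1.05,
    flows=[90.0, 130.0], flow_limits=[100.0, 100.0])
print(idx > 0)   # non-secure
```

Because each contingency case can be scored independently by such an index, the per-case evaluations parallelize naturally across cluster nodes.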
A Critical Review on Employed Techniques for Short Term Load Forecasting
This document discusses techniques for short-term load forecasting. It begins by explaining the importance of load forecasting to electric utilities in planning energy purchasing and generation. It then critically reviews short-term load forecasting methods including artificial neural networks, fuzzy logic, genetic algorithms, and time-series approaches. Finally, it provides details on artificial neural networks and their benefits for load forecasting applications.
Study of Reliability Analysis to the Iraqi South Region Network
This document analyzes the reliability of the 400kV power network in southern Iraq using the path tracing method. It identifies 12 power flow paths between generation stations and distribution stations. Using a matrix and the Minitab program, it calculates the failure rate and repair rate of each component. It then determines 21 minimal cut sets that could cause failure and calculates the failure rate of each cut set. The results provide a reliability assessment of the southern Iraq 400kV network.
Mc calley pserc_final_report_s35_special_protection_schemes_dec_2010_nm_nsrc
This document provides a summary of a report on system protection schemes (SPS). It discusses SPS standards, practices, and advancements. It also examines relationships between SPS and other industries like process control and nuclear. The report proposes frameworks to identify risks to SPS from both a process and system view. It contributes methods to assess SPS operational complexity and incorporate this into transmission planning studies. The frameworks and models developed in this report can be applied to real utility systems to evaluate SPS reliability and impacts on the power grid.
An Investigation of Fault Tolerance Techniques in Cloud Computing
Cloud computing, which is built on the Internet, offers a powerful computational architecture that provides users with information-technology capabilities as a service and allows them to access these services without specialized knowledge of, or control over, the infrastructure. The main advantages of fault tolerance, which supplies the techniques needed to maintain availability and reliability in cloud computing, include failure recovery, lower costs, and improved performance criteria. In this paper, we investigate the different techniques that are used for fault tolerance in cloud computing. Ya Min | Khin Myat Nwe Win | Aye Mya Sandar, "An Investigation of Fault Tolerance Techniques in Cloud Computing", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26611.pdf Paper URL: https://www.ijtsrd.com/computer-science/distributed-computing/26611/an-investigation-of-fault-tolerance-techniques-in-cloud-computing/ya-min
Vagueness concern in bulk power system reliability assessment methodology 2-3-4
This document discusses reliability assessment methods for bulk power systems. It begins by defining reliability and describing the hierarchical levels involved in power system reliability assessment. It then discusses the differences between adequacy and security, and describes some traditional deterministic reliability assessment methods as well as probabilistic methods. The document focuses on loss of load probability, loss of load expectation, and expected energy not supplied as key reliability indices used in bulk power system assessment. It also discusses generation and load models used in probabilistic analyses and outlines the conceptual tasks involved in hierarchical level 1 reliability evaluation.
AC drives are employed in process industries for varied applications, resulting in a wide range of ratings. The entire process industry has seen a paradigm shift from manual to automated systems, driven largely by advanced power electronics technology enabling power electronic drives for smooth control of electric motors. Induction motors are the most commonly used motors in industry. Faults in the power electronic circuits may occur periodically; these faults often go unnoticed, as they rarely cause a complete shutdown and the fault levels may not be large enough to lead to a breakdown of the drive. Early detection of these faults is required to prevent their escalation into major faults, and a diagnostic tool for detecting them requires real-time monitoring of the entire drive. In this work, a detailed investigation of the different faults that can occur in the power electronic circuit of an industrial drive is carried out, and the impact of faults on the performance of the induction motor is analyzed. A real-time monitoring platform is proposed to detect and classify faults accurately using machine learning. A diagnostic tool is also developed to display the severity and location of the fault so the operator can take corrective measures.
Facts Devices Placement using Sensitivity Indices Analysis Method (IRJET Journal)
This document discusses using sensitivity analysis to determine the optimal placement of Flexible AC Transmission System (FACTS) devices to improve power system security. It analyzes the IEEE 5-bus test system using a real power flow performance index sensitivity method. The results show that Line 1 has the most negative sensitivity index, indicating it is the most sensitive line and suitable for placing a FACTS device to enhance system security. In conclusion, optimal FACTS device placement determined through sensitivity analysis can help overcome power system security issues.
The document describes power quality analysis using LabVIEW. It discusses designing an accurate measurement system to measure parameters like harmonics, sub-harmonics and inter-harmonics under distorted conditions. The voltage and current waveforms from various loads are sensed using sensors and interfaced with a PC using a data acquisition card. Experimental results for loads like a diode bridge rectifier and thyristor converter feeding resistive, inductive and DC motor loads are obtained and verified theoretically. Simulation results for parameters like RMS voltage, current, THD, power, crest factor etc. for different cases match the mathematically obtained values, proving the effectiveness of the system.
The modern-day power grid aims to provide reliable, high-quality power, which requires careful monitoring of the grid against catastrophic faults.
Therefore, one promising approach is to provide the system with wide-area protection and control, known as a "Wide Area Measurement and Control System" based on phasor measurement units (PMUs).
Predicting Post Outage Transmission Line Flows using Linear Distribution Factors (Dr. Amarjeet Singh)
In order to design and implement preventive
and remedial actions, fast security analysis must be
performed continuously during outages of system
components. Following the contingency of a system
component, state estimation and load flow are
the two popular techniques used to determine
system state variables, leading to estimates of flows,
losses, and violations in nodal voltages and transmission-line
flows. But the dynamic state and complexity of the
system require faster means of estimation, which can
be achieved with linear distribution factors. Distribution
factors in the form of Power Transfer
Distribution Factors (PTDF) and Line Outage
Distribution Factors (LODF), which are transmission-line
sensitivities with respect to active-power exchanges
between buses and to transmission-line outages, offer a
linear, quicker, non-iterative alternative to these two
techniques. The linear distribution factors were estimated
from a reference operating point (base case) and from
contingency cases involving line outages and generator
output variations on a six-bus network using Matlab
programs; the results show that linear distribution
factors yield quick estimates of post-outage line flows
that match the flow results obtained from DC load
flow analysis.
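The LODF-based update described above can be sketched as follows; the base-case flows and the LODF column for the outaged line are illustrative stand-ins, since in practice they are derived from a DC power-flow model of the network:

```python
# Post-outage flow update via Line Outage Distribution Factors:
#   f_l_post = f_l + LODF[l, k] * f_k   for an outage of line k.
# The LODF column below is illustrative, not from a real network.

def post_outage_flows(pre_flows, lodf_col, outaged):
    """Apply the LODF column of the outaged line to every pre-outage flow."""
    f_out = pre_flows[outaged]
    post = [f + lodf * f_out for f, lodf in zip(pre_flows, lodf_col)]
    post[outaged] = 0.0            # the outaged line carries no flow
    return post

pre = [60.0, 40.0, 25.0]           # MW line flows in the base case
lodf_for_line0 = [-1.0, 0.7, 0.3]  # how line 0's flow redistributes
post = post_outage_flows(pre, lodf_for_line0, 0)
print(post)   # line 0 drops to 0; lines 1 and 2 pick up 70% and 30% of 60 MW
```

Because the update is a single linear combination per line, it avoids re-running an iterative power flow for each contingency.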
Efficient decentralized iterative learning tracker for unknown sampled data i... (ISA Interchange)
In this paper, an efficient decentralized iterative learning tracker is proposed to improve the dynamic performance of an unknown controllable and observable sampled-data interconnected large-scale state-delay system, which consists of N multi-input multi-output (MIMO) subsystems, with the closed-loop decoupling property. The off-line observer/Kalman filter identification (OKID) method is used to obtain decentralized linear models for the subsystems in the interconnected large-scale system. To overcome the effect of modeling error on the identified linear model of each subsystem, an improved observer with the high-gain property, based on the digital redesign approach, is developed to replace the observer identified by OKID. The iterative learning control (ILC) scheme is then integrated with the high-gain tracker design for the decentralized models. To significantly reduce the number of iterative learning epochs, a digitally redesigned linear quadratic tracker with the high-gain property is proposed as the initial control input of the ILC. High-gain controllers can suppress uncertain errors such as modeling errors, nonlinear perturbations, and external disturbances (Guo et al., 2000) [18]. Thus, the system output can quickly and accurately track the desired reference in a short time interval after each drastically changing point of the specified reference input, with the closed-loop decoupling property.
Security Constraint Unit Commitment Considering Line and Unit Contingencies-p... (IJAPEJOURNAL)
The document presents a new approach for security constrained unit commitment that considers both generator and transmission line contingencies using an incidence matrix methodology. It formulates the security constrained unit commitment problem and proposes modeling the optimal power flow using an incidence matrix to overcome challenges of admittance matrix based methods. The methodology allows easier modeling of multiple contingencies without changes to the network topology.
Proactive cloud service assurance framework for fault remediation in cloud en... (IJECEIAES)
Cloud resiliency is an important issue in the successful implementation of cloud computing systems. Handling cloud faults proactively, with a suitable remediation technique of minimum cost, is an important requirement for a fault management system. The selection of the best applicable remediation technique is a decision-making problem that considers parameters such as i) impact of the remediation technique, ii) overhead of the remediation technique, iii) severity of the fault, and iv) priority of the application. This manuscript proposes an analytical model to measure the effectiveness of a remediation technique for various categories of faults, and further demonstrates the implementation of an efficient fault remediation system using a rule-based expert system. The expert system is designed to compute a utility value for each remediation technique in a novel way and select the best remediation technique from its knowledgebase. A prototype was developed for experimentation, and the results show improved availability with less overhead compared to a reactive fault management system.
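A minimal sketch of such a utility-value selection, assuming a simple weighted score over the four parameters (the paper's actual scoring rules and weights are not given in this summary, so the form below is a hypothetical illustration):

```python
# Illustrative utility-value computation for remediation selection. The
# weighted form and the weights below are assumptions for demonstration,
# not the paper's actual rules.

def utility(impact, overhead, severity, priority,
            w_impact=0.4, w_overhead=0.3, w_context=0.3):
    # Higher impact (recovery benefit) raises utility; higher overhead
    # lowers it; fault severity and application priority scale urgency.
    context = (severity + priority) / 2.0
    return w_impact * impact - w_overhead * overhead + w_context * context

techniques = {
    "retry":      utility(impact=0.4, overhead=0.1, severity=0.5, priority=0.6),
    "checkpoint": utility(impact=0.8, overhead=0.5, severity=0.5, priority=0.6),
    "migration":  utility(impact=0.9, overhead=0.7, severity=0.5, priority=0.6),
}
best = max(techniques, key=techniques.get)
print(best)   # the technique with the highest utility value
```

In a rule-based expert system, the weights themselves would typically vary by fault category rather than being fixed as here.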
Upsurging Cyber-Kinetic attacks in Mobile Cyber Physical Systems (IRJET Journal)
This document discusses cyber threats and security approaches for cyber-physical systems (CPS). It first reviews studies on CPS security modeling and data management. It then discusses three main approaches for modeling and optimizing secure CPS: model-based design, platform-based design, and contract-based design. Next, it covers four methods for CPS risk assessment: expert elicitation models, attack graphs, game theory, and Petri nets. It concludes by discussing reachability analysis, controller synthesis, and vulnerability analysis techniques for verifying CPS models and properties.
Convergence Problems Of Contingency Analysis In Electrical Power Transmission... (CSCJournals)
Contingency analysis is a tool used by power system engineers for planning and assessing power system reliability. The conventional analytical method, which is mathematical-model based, is not only tedious and time-consuming in view of the large number of components in the network, but always leaves some critical components unassessed due to non-convergence of their power flow analysis; hence the contingency analysis of such systems cannot be said to be complete.

In this work, contingency analysis of the line components of a standard IEEE 30-bus system and the real 330-kV Transmission Company of Nigeria (TCN) 28-bus network was investigated using a Radial Basis Function Neural Network (RBF-NN), an artificial-intelligence-based method. The contingency analysis was carried out by solving the non-linear algebraic equations of the steady-state model for the IEEE 30-bus and TCN 28-bus power networks using the Newton-Raphson (N-R) power flow method. The RBF-NN method was used to compute the reactive and active performance indices (PI_R and PI_A), which were ranked to reveal the criticality of each line outage. Simulation was carried out using MATLAB R2013a. The non-converged lines in both systems were reinforced and re-analysed; the results of the contingency analyses of the reinforced systems show more robust systems with a complete line ranking.
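Ranking line outages by a performance index can be sketched as follows, assuming the common active-power form PI_A = sum over lines of (P_l / P_l_max)^2n; the post-outage flows below are illustrative, since in practice each set comes from a power-flow (or, as in this work, RBF-NN) solution:

```python
# Contingency ranking by the active-power performance index,
#   PI_A = sum_l (P_l / P_l_max)^(2n).
# Post-outage line flows per contingency are illustrative values.

def performance_index(flows, limits, n=1):
    return sum((abs(p) / pmax) ** (2 * n) for p, pmax in zip(flows, limits))

limits = [100.0, 80.0, 60.0]               # MW thermal limits per line
post_outage = {                            # MW flows after each line outage
    "outage L1": [0.0, 95.0, 55.0],
    "outage L2": [70.0, 0.0, 40.0],
    "outage L3": [65.0, 50.0, 0.0],
}
ranking = sorted(post_outage,
                 key=lambda c: performance_index(post_outage[c], limits),
                 reverse=True)
print(ranking)   # most critical outage first
```

A higher PI_A indicates heavier loading relative to limits, so the top of the ranking flags the outages that most stress the network.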
Complex Measurement Systems in Medicine: from Synchronized Monotask Measuring... (ITIIIndustries)
Design problems of flexible computer systems for physiological research are discussed. The widespread case of employing commercial medical devices as parts of the resulting computer system is analyzed. To overcome most of the arising difficulties, we propose using a universal synchronizing device and modular script-based software. The prospects of such computer systems are outlined as an evolution into cyber-physical systems with on-demand plugging-in of required hardware modules.
Cluster computing involves linking multiple computers together to take advantage of their combined processing power. The document discusses cluster computing, including its architecture, history, applications, advantages, and disadvantages. It provides examples of high performance computing clusters used for tasks like genetic algorithm research and describes how cluster computing can improve processor speed and allow computational tasks to be shared among multiple processors.
A cluster computer consists of multiple connected nodes that work together like a single system. It can increase performance over a single computer by distributing work across nodes. There are different types of clusters, including load balancing clusters for high performance computing, visualization clusters with graphics cards, and grids that pool multiple distributed resources. Key advantages of clusters are increased performance through parallel processing, scalability by adding nodes, and lower cost by using commodity hardware. Performance monitoring is important as a cluster's speed depends on its nodes and network connection.
Clusters are groups of tightly coupled computers that work together closely to perform tasks. They are commonly connected through fast local area networks and have evolved to support applications requiring huge databases. Clusters provide a cost-effective way to gain high performance, load balancing, and high availability features. They allow for scalability as more processors and nodes can be added as demand increases.
A cluster is a type of parallel or distributed computer system, which consists of a collection of inter-connected stand-alone computers working together as a single integrated computing resource.
This document discusses Fedora Workstation, an operating system designed for laptops and desktop PCs. It provides reliable and user-friendly software for developers and general users alike. The presentation highlights upcoming features in Fedora 25 like GNOME 3.22, Wayland display system, and Flatpak application support. It also provides information on how to get involved with the Fedora Workstation project community.
This document provides an overview of cluster computing. It defines a cluster as a group of loosely coupled computers that work together closely to function as a single computer. Clusters improve speed and reliability over a single computer and are more cost-effective. Each node has its own operating system, memory, and sometimes file system. Programs use message passing to transfer data and execution between nodes. Clusters can provide low-cost parallel processing for applications that can be distributed. The document discusses cluster architecture, components, applications, and compares clusters to grids and cloud computing.
This document is a project report submitted by Sudhanshu kumar sah to the Computer Society of India on creating an e-commerce website using J2EE, HTML, and MySQL. It acknowledges the guidance provided by Computer Society of India and two individuals. The contents section provides an outline of topics to be covered in the report, including introduction, history, electronic commerce, customers, product selection, payment, delivery, shopping cart systems, design, advantages, and disadvantages. It also includes an introduction to the project and certificates of completion.
This document discusses computer clusters and their architecture. A cluster consists of loosely connected computers that can be viewed as a single system. It includes nodes, a network, an operating system, and cluster middleware to allow programs to run across nodes. Clusters provide benefits like data sharing, parallel processing, and task scheduling. The architecture includes a master node that manages the cluster and computing nodes that process tasks. Beowulf clusters specifically use many connected commodity computers as nodes. The document outlines some example applications and operating systems used in clusters.
This document provides a summary of cluster computing. It discusses that a cluster is a group of linked computers that work together like a single computer. It then describes different types of clusters including high availability clusters for fault tolerance, load balancing clusters for distributing work, and parallel processing clusters for computationally intensive tasks. It also outlines some key cluster components such as nodes, networking, storage and middleware. Finally it provides some examples of cluster applications including Google's search engine, petroleum reservoir simulation, and image rendering.
Clustering involves connecting multiple computers together to appear as a single system for improved reliability and performance. A computer cluster consists of interconnected standalone computers working as a single integrated resource. Clusters can be classified based on their application, ownership, node architecture, operating system, and components. Common cluster types include high availability clusters for mission critical applications, load balancing clusters for distributing work, and parallel processing clusters for scientific computing using multiple processors sharing a single memory and interface.
A computer cluster is a group of tightly coupled computers that work together as a single computer. Clusters provide increased processing power at lower costs compared to single computers. They improve availability by eliminating single points of failure. Additional nodes can be added to a cluster to increase its overall capacity as processing demands grow. Key components of clusters include processors, memory, fast networking components, and specialized cluster software.
A cluster is a type of parallel computing system made up of interconnected standalone computers that work together as a single integrated resource. Clusters provide high-performance computing at a lower cost than specialized machines. As applications requiring large processing power become more common, the need for high-performance computing via clusters is increasing. Programming clusters can be done using message passing libraries like MPI, parallel languages like HPF, or parallel math libraries. Clusters make high-level computing more accessible to groups with modest resources.
Cluster computing involves linking multiple computers together to act as a single system. There are three main types of computer clusters: high availability clusters which maintain redundant backup nodes for reliability, load balancing clusters which distribute workloads efficiently across nodes, and high-performance clusters which exploit parallel processing across nodes. Clusters offer benefits like increased processing power, cost efficiency, expandability, and high availability.
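The master/worker message-passing pattern these summaries describe can be sketched on a single machine; a real cluster would use a message-passing library such as MPI (e.g. mpi4py) across physical nodes, so Python's multiprocessing below is only a stand-in for the pattern:

```python
# Minimal master/worker sketch: the master decomposes the problem into
# tasks and distributes them; workers compute and return results.
# multiprocessing stands in for true inter-node message passing.
from multiprocessing import Pool

def worker_task(case):
    """Each worker processes one subproblem (here: a toy computation)."""
    return case, case * case

if __name__ == "__main__":
    cases = list(range(8))                 # the domain, decomposed into tasks
    with Pool(processes=4) as pool:        # 4 processes play the cluster nodes
        results = dict(pool.map(worker_task, cases))
    print(results[7])                      # prints 49
```

The same structure scales to a load-balancing or high-performance cluster: only the transport (MPI messages instead of local pipes) and the task body change.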
The document discusses a seminar presentation on online shopping systems. It covers the goals of online selling, building community, automating customer service, and generating new leads. The history and process of online shopping are explained through various steps from adding items to a cart, checking out, payment options, and order completion. Advantages include convenience while disadvantages include losing the enjoyment of retail shopping. Tips for secure online shopping are also provided.
This is a presentation on cluster computing that draws on several sources as well as my own research and editing. I hope it will help everyone who needs to learn about this topic.
Project report On MSM (Mobile Shop Management) (Dinesh Jogdand)
This document provides an overview of a proposed mobile store management system for Mahalakshmi Communications. Key points:
- Mahalakshmi Communications is a mobile solution retailer with 2 stores and a vision to expand across India.
- The proposed system will computerize manual processes like inventory, customer, and employee data to increase efficiency and data accuracy over the current paper-based system.
- The system is designed to be easy to use, generate reports, and securely manage the store's data and operations through a database and user-friendly interface.
International Journal of Engineering (IJE) Volume (3) Issue (1) (CSCJournals)
This document discusses the implementation of artificial intelligence techniques for steady state security assessment in deregulated power system markets. It proposes using neural networks, decision trees, and adaptive neuro-fuzzy inference systems to analyze power transactions between generators and customers in deregulated systems. Data from load flow analysis is used to train and test the AI models. The techniques are tested on various standard power system test cases. The results show that neural networks provide more accurate and faster assessments compared to decision trees and neuro-fuzzy systems, but the latter two may be easier to implement for practical applications. The new methods could help improve security in planning and operating deregulated power system markets.
This document discusses electrical energy management and load forecasting in smart grids using artificial neural networks. It presents a study applying backpropagation neural networks to short-term load forecasting for Sudan's National Electric Company. The neural network model was used to forecast load, with error calculated by comparing forecasted and actual load data. The document also discusses generation dispatch, demand forecasting techniques, and designing a neural network for one-day load forecasting. It evaluates network performance and error for different training data sizes, finding that a ten-day training dataset produced the best results with minimum error. The neural network approach was able to reliably predict the nonlinear relationship between historical data and load.
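A toy version of such a backpropagation forecaster, assuming synthetic sinusoidal "historical load" in place of real utility data (the architecture and training settings below are illustrative, not the study's):

```python
# Toy backpropagation network for one-step-ahead load forecasting. The
# sinusoidal "historical load" is synthetic; a real study would use
# utility load data and more careful feature engineering.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(0, 24 * 30)                       # 30 days, hourly
load = 1.0 + 0.3 * np.sin(2 * np.pi * hours / 24)   # daily cycle, per-unit

# Inputs: the previous 24 hourly loads; target: the next hour's load.
X = np.array([load[t - 24:t] for t in range(24, len(load) - 1)])
y = load[25:len(load)]

W1 = rng.normal(0, 0.1, (24, 8)); b1 = np.zeros(8)  # hidden layer
W2 = rng.normal(0, 0.1, (8, 1));  b2 = np.zeros(1)  # output layer
lr = 0.05

for epoch in range(300):                            # plain batch backprop
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mae = np.abs(err).mean()
print(round(float(mae), 3))       # mean absolute error after training
```

The document's observation that training-set size matters would show up here as sensitivity of the final error to how many days of history go into X.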
Risk assessment of power system transient instability incorporating renewabl... (IJECEIAES)
This document presents a risk assessment method for power system transient stability that incorporates renewable energy sources. The method uses Gaussian process regression and feature selection algorithms to build a predictive model for online transient stability assessment. Offline data is collected from simulations at different operating conditions and contingencies. Feature selection algorithms identify the most important features related to critical fault clearing time as the stability index. The predictive model based on the selected features can then assess transient stability online by predicting critical fault clearing times based on new operating conditions. The method was tested on a 66-bus power system model with wind and solar power integrated at various buses.
Study on the performance indicators for smart grids: a comprehensive review (TELKOMNIKA JOURNAL)
This paper presents a detailed review of performance indicators for smart grids (SG), such as voltage stability enhancement, reliability evaluation, vulnerability assessment, Supervisory Control and Data Acquisition (SCADA), and communication systems. Smart grid reliability assessment can be performed either analytically or by simulation. The analytical method utilizes load-point assessment techniques, whereas the simulation approach uses the Monte Carlo simulation (MCS) technique. Reliability index evaluations consider the presence or absence of energy storage elements using simulation technologies such as MCS and analytical methods such as the System Average Interruption Frequency Index (SAIFI) and other load-point indices. The paper also explains the difference between SCADA and substation automation, noting that substation automation, though built on the basic concepts of SCADA, is far more advanced in nature.
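As an example of the load-point indices mentioned, SAIFI is computed as total customer interruptions divided by total customers served; the outage records below are illustrative:

```python
# SAIFI from outage records: total customer interruptions divided by
# total customers served over the study period. Data is illustrative.

def saifi(outages, total_customers):
    """Each entry of `outages` is the number of customers interrupted
    by one interruption event."""
    return sum(outages) / total_customers

events = [500, 1200, 300]        # customers affected per interruption event
print(saifi(events, 10_000))     # average interruptions per customer served
```

In a Monte Carlo assessment, the same formula would be applied to each simulated yearly history and the results averaged across trials.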
Safeguard the Automatic Generation Control using Game Theory Technique – IRJET Journal
This document discusses using game theory techniques to safeguard the automatic generation control (AGC) in smart grids from false data injection attacks. It first provides background on AGC and how false data can affect its performance and potentially cause blackouts. It then discusses using a game theory model to represent the interactions between attackers injecting false data and defenders protecting the system. The risks of different attack events are calculated and fed into the game model. Dynamic programming is used to determine optimal defense strategies based on resource constraints. Simulation results show the approach can minimize risks to the AGC under different attack scenarios.
Overview of State Estimation Technique for Power System Control – IOSR Journals
This document provides an overview of state estimation techniques for power system control. It discusses static, tracking, and dynamic state estimation approaches and how they differ based on whether measurements and system models are time-variant or invariant. The document also describes how state estimation processes redundant measurements to filter noise and estimate true system states like voltage magnitudes and angles at each bus. It further discusses weighted least squares estimation, the use of Jacobian matrices to iteratively estimate states, and how state estimation provides critical real-time data for power system monitoring and control functions.
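The weighted least squares step can be illustrated on the smallest possible case: a single state measured redundantly by several meters. Minimizing the weighted sum of squared residuals gives a closed-form estimate (a toy sketch, not the full Jacobian-based iteration over bus voltages and angles):

```python
def wls_estimate(measurements, weights):
    # Minimizing sum_i w_i * (z_i - x)^2 over x gives x_hat = sum(w*z) / sum(w).
    # Weights are typically the inverse variances of the individual meters.
    return sum(w * z for z, w in zip(measurements, weights)) / sum(weights)

# Three redundant voltage readings (per unit); the third meter is twice as trusted.
v_hat = wls_estimate([1.02, 0.98, 1.01], [1.0, 1.0, 2.0])
```

With many states, the same idea is applied iteratively through the measurement Jacobian, as the review describes.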
Contingency plans based on N-1 and N-2 contingencies are already widely used by utilities. Artificial intelligence methods are the new trend for analysing contingency scenarios, alongside state-of-the-art congestion management, giving extra backup and a boost to reliable operation of the power system under contingency. This paper summarizes those efforts and will help utilities give more thought to recent developments in fast, intelligent computing methods. The paper highlights classical research and modern trends in contingency analysis, such as hybrid artificial intelligence methods. Steady-state stability assessment of a power system pursues a twofold objective: first, to appraise the system's capability to withstand major contingencies, and second, to suggest remedial actions, i.e. means to enhance this capability whenever needed. The first objective is the concern of analysis; the second is a matter of control.
A transition from manual to Intelligent Automated power system operation – A I... – IJECEIAES
This paper reviews the transition of power system operation from the traditional manual mode to a level where automation using the Internet of Things (IoT) and intelligence using Artificial Intelligence (AI) are implemented. To keep the review brief, only indicative papers are chosen, covering multiple power system operation implementations while limiting repetition of similar technologies or applications; the review takes only representative literature so as to bypass scrutinizing multiple papers with similar objectives and methods. A brief review of the slow transition from the traditional to the intelligent, automated way of carrying out power system operations, such as energy audits, load forecasting, fault detection, power quality control, smart grid technology, islanding detection and energy management, is presented. The mechanical engineering perspective on these applications is also noted, although the energy management and power delivery concepts are electrical.
IRJET – Location Identification for FACTS Device – IRJET Journal
This document presents a method for determining the optimal location for placing Flexible AC Transmission System (FACTS) devices on a power grid to improve system security. It evaluates three selection factors: 1) the Contingency Severity Index (CSI), which measures a line's sensitivity to overloads during contingencies, 2) Excess Power Flow (EPF), which calculates overload amounts during contingencies, and 3) the Number of Times Overloaded Line (NOTOL), which counts how often a line is overloaded. These factors are calculated for different contingencies on the IEEE 6-bus and 30-bus test systems, and lines are ranked based on the factors individually and combined. The highest-ranked lines are determined to be the best locations for placing the FACTS devices.
Probabilistic Performance Index based Contingency Screening for Composite Pow... – IJECEIAES
Composite power system reliability involves assessing the adequacy of the generation and transmission system to meet the demand at major system load points. Contingency selection is the most tedious step in the reliability evaluation of large electric systems. A contingency is a possible future event that cannot be predicted with certainty, so uncertainty is inevitable in power system operation, and deterministic indices cannot capture this randomness in reliability assessment. To account for the volatility of contingencies, a new performance index is proposed in the current research; the proposed method incorporates this uncertainty into the computational procedure. Reliability test systems, namely the Roy Billinton Test System (6-bus) and the IEEE 24-bus reliability test system, are used to test the effectiveness of the proposed method.
IRJET – Detection of False Data Injection Attacks using K-Means Clusterin... – IRJET Journal
This document discusses detecting false data injection attacks using k-means clustering. It begins with an abstract that describes implementing detection of inside attacks in a sub-network using cameras. When an outside person pauses the camera for a specific amount of time, the server can detect this as an inside attack and notify the administrator. The document then reviews related work on cyber attacks against power grids and state estimation. It proposes a system using cameras to monitor for inside attackers pausing cameras. When this occurs, the server will detect an inside attack and inform the administrator. The key algorithm discussed is k-means clustering to classify sensor data and detect attacks.
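The k-means step can be sketched with a tiny one-dimensional example: honest sensor readings cluster near the true value, while injected values form a separate, distant cluster (illustrative data, not taken from the paper):

```python
def kmeans_1d(data, iters=20):
    """Two-cluster k-means on scalar sensor measurements."""
    centroids = [min(data), max(data)]          # spread-out initialization
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for x in data:
            # assign each reading to its nearest centroid
            nearest = 0 if abs(x - centroids[0]) <= abs(x - centroids[1]) else 1
            clusters[nearest].append(x)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Honest readings near 1.0 p.u.; two injected measurements near 5.0
readings = [0.98, 1.01, 1.02, 0.99, 4.8, 5.1]
centroids, clusters = kmeans_1d(readings)
```

Readings landing in the small, far-away cluster would be flagged as injected data.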
IRJET – A Secured Method of Data Aggregation for Wireless Sensor Networks in t... – IRJET Journal
This document summarizes a research paper that proposes using iterative filtering (IF) algorithms to securely aggregate sensor data in wireless sensor networks in the presence of collusion attacks. Collusion attacks occur when nodes secretly or illegally agree to corrupt transmitted data, causing a mismatch in data aggregation. The paper aims to implement IF algorithms to avoid collusion attacks. IF algorithms work by repeatedly applying a function, using the output of one iteration as the input for the next. This helps maximize the likelihood of inferring accurate data from partially observed systems. The document outlines the methodology, benefits of IF algorithms, and basic steps of how they are implemented.
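A minimal sketch of the iterative-filtering idea, assuming a reciprocal-of-squared-distance weighting (one common IF variant, not necessarily the paper's exact scheme): sensors far from the current weighted estimate are progressively down-weighted, so a colluding outlier loses influence over successive iterations:

```python
def iterative_filter(readings, iters=10):
    """Aggregate sensor readings while suppressing outliers/colluders."""
    n = len(readings)
    weights = [1.0 / n] * n                       # start from a plain average
    estimate = sum(w * r for w, r in zip(weights, readings))
    for _ in range(iters):
        # Down-weight sensors far from the current estimate
        inv = [1.0 / ((r - estimate) ** 2 + 1e-9) for r in readings]
        total = sum(inv)
        weights = [v / total for v in inv]
        # Feed the output of one iteration in as the input of the next
        estimate = sum(w * r for w, r in zip(weights, readings))
    return estimate

# Four honest sensors near 10.0 and one colluding sensor reporting 20.0
est = iterative_filter([10.0, 10.1, 9.9, 10.05, 20.0])
```

The plain average (12.01) is badly skewed by the attacker; the iteratively filtered estimate settles near the honest cluster.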
This document presents a new approach to determine the risk of transient stability in power systems. It uses rotor trajectory index (RTI) to assess the severity of three-phase faults. RTI is proposed as a quantitative index to represent the severity of transient instability. Risk of transient stability is calculated using a risk formula that considers the probability and consequences of an event. The methodology is implemented on the IEEE 39-bus test system to calculate risks for different three-phase faults at various fault clearing times and load levels. The results show that risk increases with longer fault clearing times and higher loads. Faults at certain lines were found to have higher risks of causing transient instability.
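The risk formula referred to above has the standard probabilistic form, risk = Σ P(event) × consequence. A minimal sketch with illustrative numbers (in the paper, the severity term would come from the rotor trajectory index):

```python
def transient_risk(events):
    """Risk = sum over contingencies of P(event) * severity.

    events: list of (probability, severity) pairs; severity would be
    supplied by an index such as the rotor trajectory index (RTI)."""
    return sum(p * severity for p, severity in events)

# Illustrative contingencies: (fault probability, severity index)
risk = transient_risk([(0.01, 50.0), (0.002, 200.0)])
```

Longer fault clearing times and higher loads raise the severity term, which is why the computed risk grows with both, as the results report.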
Power System Reliability Assessment in a Complex Restructured Power System – IJECEIAES
The basic purpose of an electric power system is to supply its consumers with electric energy as economically as possible and with a reasonable degree of continuity and quality. The application of power system reliability assessment in bulk power systems is expected to continue to increase, especially in the newly deregulated power industry. This paper presents research on three areas: incorporating multi-state generating unit models, evaluating system performance indices, and identifying transmission deficiencies in complex system adequacy assessment. The incentives for electricity market participants to invest in new generation and transmission facilities are highly influenced by market risk in a complex restructured environment. The paper also presents a procedure to identify transmission deficiencies and remedial modifications in the composite generation and transmission system, and focuses on the application of probabilistic techniques in composite system adequacy assessment.
SYNCHROPHASOR DATA BASED INTELLIGENT ALGORITHM FOR REAL TIME EVENT DETECTION ... – IAEME Publication
Wide area measurement systems (WAMS) have been installed at several locations in the power system. Phasor measurement units (PMUs), considered the building blocks of WAMS, are being installed at various locations and send very large volumes of data to the power system control center at sampling rates of 50 or 25 samples per second. Several events occur in the system every day, but the rate at which data arrives and the volume of data to be analyzed pose a big challenge for the power system engineer, so an intelligent system is needed to handle large volumes of synchrophasor data and identify power system events. This paper presents an intelligent algorithm to automatically detect such events using wide area measurements in real time. Synchrophasor measurements received from PMUs are fed to a KNN-based pattern recognition algorithm that identifies the power system events; the severity and type of event can be judged from the change in voltage magnitude and phase angle at various buses. The developed algorithm is tested on the IEEE 14-bus system and the results are verified.
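The KNN pattern-recognition step can be sketched on (Δ|V|, Δθ) features; the training points below are synthetic stand-ins, not the IEEE 14-bus measurements:

```python
import math

def knn_classify(train, query, k=3):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Features: (change in voltage magnitude, p.u.; change in phase angle, degrees)
train = [((0.01, 0.5), "normal"), ((0.02, 1.0), "normal"), ((0.00, 0.2), "normal"),
         ((0.30, 18.0), "fault"), ((0.28, 22.0), "fault"), ((0.35, 20.0), "fault")]
event = knn_classify(train, (0.29, 19.0))
```

In the deployed system, the PMU stream would supply the query features every sample period, and the label set would cover the event types of interest.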
This document describes a condition monitoring system for induction motors that uses both vibration and electrical signals for fault diagnosis. The system includes an embedded device that acquires real-time vibration and electrical data from sensors attached to the motor. It then uses these signals to perform both operating condition monitoring and fault diagnosis analysis. For condition monitoring, it assesses the motor's health based on vibration levels. If an abnormality is detected, it uses a hybrid approach involving both vibration and electrical signals to classify the specific type of fault, such as stator, rotor, bearing, or eccentricity issues. The system is intended to help maintenance workers more efficiently diagnose problems and schedule repairs.
Wide area protection research in the smart grid – Alaa Eladl
This document discusses wide-area protection research in the context of the smart grid. It describes how technologies enabled by the smart grid like synchronized phasor measurement, improved communication networks, and standard protocols allow for the development of wide-area protection systems. These systems provide monitoring, control, and backup protection across large geographical areas. The document outlines some key technologies that wide-area protection relies on like wide-area measurement systems and communication networks. It also discusses trends like adaptive protection schemes that utilize system-wide information in real-time and agent-based control architectures.
This paper presents a novel optimization technique using genetic algorithms to develop an optimized emergency defence plan for power systems. The technique determines the optimal combination of generator tripping, load shedding, and islanding to regain system stability following severe contingencies. It was applied to the Libyan power system using time-domain simulations to evaluate solutions. Results showed the optimized defence plan required less load shedding than the existing Libyan plan and improved system response during a 2003 blackout event.
Protection Scheme in Generation Network – IRJET Journal
This document discusses protection schemes for generation networks. It covers several topics related to protection schemes including adaptive protection strategies, reliability aspects, self-healing mechanisms, cybersecurity challenges and solutions, and advanced relay technologies and innovations. The document aims to comprehensively explore how smart grid concepts can transform protection relay technology and addresses aspects like data management, protection strategies, fault detection optimization techniques, and network reconfiguration.
Similar to Cluster Computing Environment for On-line Static Security Assessment of Large Power Systems
Power System State Estimation – A Review – IDES Editor
This document provides a review of power system state estimation techniques. It discusses both static and dynamic state estimation algorithms. For static state estimation, it covers weighted least squares, decoupled, and robust estimation methods. Weighted least squares is commonly used but can have numerical instability issues. Decoupled state estimation approximates the gain matrix for faster computation. Robust estimation uses M-estimators and other techniques to handle outliers and bad data. Dynamic state estimation applies Kalman filtering, leapfrog algorithms, and other methods to continuously monitor system states over time.
Artificial Intelligence Technique based Reactive Power Planning Incorporating... – IDES Editor
This document summarizes a research paper that proposes using artificial intelligence techniques and FACTS controllers for reactive power planning in real-time power transmission systems. The paper formulates the reactive power planning problem and incorporates flexible AC transmission system (FACTS) devices like static VAR compensators (SVC), thyristor controlled series capacitors (TCSC), and unified power flow controllers (UPFC). Evolutionary algorithms like evolutionary programming (EP) and differential evolution (DE) are applied to find the optimal locations and settings of the FACTS controllers to minimize losses and costs. Simulation results on IEEE 30-bus and 72-bus Indian test systems show that UPFC performs best in reducing losses compared to SVC and TCSC.
Design and Performance Analysis of Genetic based PID-PSS with SVC in a Multi-... – IDES Editor
Damping of power system oscillations with the help of the proposed optimal Proportional Integral Derivative Power System Stabilizer (PID-PSS) and Static Var Compensator (SVC)-based controllers is thoroughly investigated in this paper. The study presents robust tuning of PID-PSS and SVC-based controllers using Genetic Algorithms (GA) in multi-machine power systems, considering a detailed model of the generators (model 1.1). The effectiveness of FACTS-based controllers in general, and the SVC-based controller in particular, depends upon their proper location; modal controllability and observability are used to locate the SVC-based controller. The performance of the proposed controllers is compared with a conventional lead-lag power system stabilizer (CPSS) and demonstrated on the 10-machine, 39-bus New England test system. Simulation studies show that the proposed genetic-based PID-PSS with the SVC-based controller provides better performance.
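The GA tuning loop can be sketched as follows. The fitness function here is a stand-in surrogate (penalizing squared distance from an assumed well-damped gain set); in the paper it would come from eigenvalue or time-domain analysis of the multi-machine model:

```python
import random

TARGET = (12.0, 4.0, 1.5)  # assumed well-damped (Kp, Ki, Kd) -- illustrative only

def fitness(gains):
    # Surrogate cost: squared distance from the assumed optimum (lower is better)
    return sum((g - t) ** 2 for g, t in zip(gains, TARGET))

def tune_pid(pop_size=30, generations=150, seed=7):
    rnd = random.Random(seed)
    # Random initial population of (Kp, Ki, Kd) triples
    pop = [tuple(rnd.uniform(0.0, 20.0) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rnd.sample(parents, 2)        # arithmetic crossover + mutation
            children.append(tuple((x + y) / 2 + rnd.gauss(0.0, 0.3)
                                  for x, y in zip(a, b)))
        pop = parents + children
    return min(pop, key=fitness)

best_gains = tune_pid()
```

Elitism guarantees the best candidate is never lost between generations; the crossover and mutation operators here are one standard real-coded choice among many.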
Optimal Placement of DG for Loss Reduction and Voltage Sag Mitigation in Radi... – IDES Editor
The need to operate the power system economically and with optimum voltage levels has led to an increase in interest in Distributed Generation. In order to reduce power losses and improve voltage in the distribution system, distributed generators (DGs) are connected to load buses; the most important step in reducing total power losses is to identify the proper locations and sizes of the DGs. This paper presents a new methodology using a population-based metaheuristic, the Artificial Bee Colony (ABC) algorithm, for the placement of DGs in radial distribution systems to reduce real power losses, improve the voltage profile and mitigate voltage sags. Power loss reduction is an important factor for utility companies because it is directly proportional to company benefits in a competitive electricity market, while meeting better power quality standards is equally important given its vital effect on customer orientation. In this paper an ABC algorithm is developed to achieve these goals together. To evaluate the sag mitigation capability of the proposed algorithm, the voltage at voltage-sensitive buses is investigated. An existing 20 kV network is chosen as the test network and the results of the proposed method are compared in the radial distribution system.
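The loss-reduction objective that such a search would evaluate can be sketched on a toy radial feeder (flat 1.0 p.u. voltage assumed; all values illustrative): placing a DG near the end of the feeder reduces the power carried, and hence the I²R loss, on every upstream segment:

```python
def feeder_losses(loads_kw, r_seg, dg_bus=None, dg_kw=0.0):
    """Toy I^2*R losses on a radial feeder with a flat voltage profile.

    Segment i (feeding buses i..n-1) carries the sum of all downstream
    loads, minus the DG injection if the DG sits downstream of the segment."""
    n = len(loads_kw)
    loss = 0.0
    for seg in range(n):
        flow = sum(loads_kw[seg:])
        if dg_bus is not None and dg_bus >= seg:
            flow -= dg_kw                 # DG supplies part of the downstream load
        loss += (flow ** 2) * r_seg[seg]
    return loss

loads = [100.0, 80.0, 60.0]        # kW at buses 0..2 (illustrative)
r = [0.00001, 0.00001, 0.00001]    # toy segment resistances
base = feeder_losses(loads, r)
with_dg = feeder_losses(loads, r, dg_bus=2, dg_kw=60.0)
```

A metaheuristic such as ABC would search over `dg_bus` and `dg_kw` (plus voltage-profile terms) instead of evaluating a single hand-picked placement.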
Line Losses in the 14-Bus Power System Network using UPFC – IDES Editor
Controlling power flow in modern power systems can be made more flexible by the use of recent developments in power electronics and computing control technology. The Unified Power Flow Controller (UPFC) is a Flexible AC Transmission System (FACTS) device that can control all three system variables, namely the line reactance and the magnitude and phase angle difference of the voltage across the line, and thus provides a promising means to control power flow in modern power systems. Essentially, the performance depends on proper control settings achievable through a power flow analysis program. This paper presents a reliable method to meet these requirements by developing a Newton-Raphson based load flow calculation through which the control settings of the UPFC can be determined for a pre-specified power flow between the lines. The proposed method keeps the Newton-Raphson Load Flow (NRLF) algorithm intact and needs only a little modification in the Jacobian matrix. A MATLAB program has been developed to calculate the control settings of the UPFC and the power flow between the lines after the load flow has converged. Case studies performed on the IEEE 5-bus and 14-bus systems show that the proposed method is effective and maintains the basic NRLF properties such as fast computational speed, a high degree of accuracy and a good convergence rate.
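The Newton-Raphson core can be illustrated on the smallest possible case: a lossless two-bus line where the only unknown is the angle δ across it, solving P = (V₁V₂/X)·sin δ by iterating δ ← δ − f(δ)/f′(δ). This is a one-variable sketch of the mismatch/Jacobian loop, not the full multi-bus NRLF with UPFC terms:

```python
import math

def solve_angle(p_target, v1=1.0, v2=1.0, x=0.5, tol=1e-10, max_iter=50):
    """Newton-Raphson on the power-angle equation of a lossless two-bus line."""
    delta = 0.1                                   # flat-ish start, radians
    for _ in range(max_iter):
        mismatch = (v1 * v2 / x) * math.sin(delta) - p_target
        if abs(mismatch) < tol:
            break
        jacobian = (v1 * v2 / x) * math.cos(delta)  # d(mismatch)/d(delta)
        delta -= mismatch / jacobian                # Newton update
    return delta

delta = solve_angle(1.0)   # 1.0 p.u. transfer over X = 0.5 p.u.
```

The exact solution here is δ = arcsin(P·X/(V₁V₂)) = arcsin(0.5); the iteration converges to it in a handful of steps, which is the fast quadratic convergence the NRLF method is valued for.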
Study of Structural Behaviour of Gravity Dam with Various Features of Gallery... – IDES Editor
The size and shape of an opening in a dam cause stress concentration as well as stress variation in the rest of the dam cross section. The gravity method of analysis does not consider the size of the opening or the elastic properties of the dam material. The objective of this study is therefore to apply the Finite Element Method, which considers the size of the opening, the elastic properties of the material, and the stress distribution caused by the geometric discontinuity in the cross section of the dam. Stress concentration inside the dam increases with the opening, which can result in failure of the dam; hence it is necessary to analyse large openings inside the dam. Keeping the percentage area of the opening constant and varying its size and shape, the analysis is carried out on a section of the Koyna Dam. The dam is modelled as a plane strain element in FEM, based on its geometry and loading conditions, so a 2D plane strain analysis is performed. The results obtained are then compared with one another to find the most efficient way of providing a large opening in a gravity dam.
Assessing Uncertainty of Pushover Analysis to Geometric Modeling – IDES Editor
Pushover analysis, a popular tool for seismic performance evaluation of existing and new structures, is a nonlinear static procedure in which monotonically increasing loads are applied to the structure until it is unable to resist further load. The strengths of concrete and steel adopted for the analysis may not match those of the real structure once constructed, and pushover results are very sensitive to the material model, the geometric model, the location of plastic hinges and, in general, to the procedure followed by the analyst. In this paper an attempt is made to assess the uncertainty in pushover analysis results by considering user-defined hinges, with the frame modelled both as a bare frame and as a frame with the slab modelled as a rigid diaphragm. The uncertain parameters considered include the strength of concrete, the strength of steel and the cover to the reinforcement, which are randomly generated and incorporated into the analysis. The results are then compared with experimental observations.
Secure Multi-Party Negotiation: An Analysis for Electronic Payments in Mobile... – IDES Editor
This document summarizes and analyzes secure multi-party negotiation protocols for electronic payments in mobile computing. It presents a framework for secure multi-party decision protocols using lightweight implementations. The main focus is on synchronizing security features to avoid agreement manipulation and reduce user traffic. The paper describes negotiation between an auctioneer and bidders, showing multiparty security is better than existing systems. It analyzes the performance of encryption algorithms like ECC, XTR, and RSA for use in the multiparty negotiation protocols.
Selfish Node Isolation & Incentivation using Progressive Thresholds – IDES Editor
The problems associated with selfish nodes in MANETs are addressed by a collaborative watchdog approach, which reduces the detection time for selfish nodes and thereby improves the performance and accuracy of watchdogs [1]. Related works make use of credit-based systems, reputation-based mechanisms, and pathrater and watchdog mechanisms to detect such selfish nodes. In this paper we follow a collaborative watchdog approach that reduces the detection time for selfish nodes and also removes such nodes based on progressively assessed thresholds. The thresholds give a node a chance to stop misbehaving before it is permanently deleted from the network; the node passes through several isolation stages before it is permanently removed. A modified version of the AODV protocol is used, which allows the simulation of selfish nodes in NS2 by adding or modifying log files in the protocol.
Various OSI Layer Attacks and Countermeasure to Enhance the Performance of WS... – IDES Editor
Wireless sensor networks are networks with a non-wired infrastructure and dynamic topology. In the OSI model each layer is prone to various attacks, which degrade the performance of a network. In this paper several attacks on four layers of the OSI model are discussed, and a security mechanism is described to prevent a network layer attack, the wormhole attack. In a wormhole attack, two or more malicious nodes create a covert channel that attracts traffic towards itself by advertising a low latency link, and then start dropping and replaying packets in the multi-path route. This paper proposes a promiscuous mode method to detect and isolate the malicious node during a wormhole attack, using the Ad-hoc On-demand Distance Vector routing protocol (AODV) with an omnidirectional antenna. In the implemented methodology, nodes that are not participating in multi-path routing generate an alarm message during the delay, after which the malicious node is detected and isolated from the network. We also note that not only the same kinds of attacks but also the same kinds of countermeasures can appear in multiple layers; for example, misbehavior detection techniques can be applied to almost all the layers discussed.
Responsive Parameter based an AntiWorm Approach to Prevent Wormhole Attack in... – IDES Editor
Recent advancements in wireless technology and its widespread deployment have brought remarkable efficiency enhancements to the corporate, industrial and military sectors. The increasing popularity and usage of wireless technology is creating a need for more secure wireless ad hoc networks. This paper presents a new protocol that prevents wormhole attacks on an ad hoc network. A few existing protocols detect wormhole attacks, but they require highly specialized equipment not found on most wireless devices. This paper develops a defense against wormhole attacks, an anti-worm protocol based on responsive parameters, that does not require a significant amount of specialized equipment, tight clock synchronization or GPS dependencies.
Cloud Security and Data Integrity with Client Accountability Framework – IDES Editor
This document summarizes a proposed cloud security and data integrity framework that provides client accountability. The framework aims to address issues like lack of user control over cloud data, need for data transparency and tracking, and ensuring data integrity. It proposes using JAR (Java Archive) files for data sharing due to benefits like portability. The framework incorporates client-side verification using MD5 hashing, digital signature-based authentication of JAR files, and use of HMAC to ensure data integrity. It also uses password-based encryption of log files to keep them tamper-proof. The framework is intended to provide both accountability and security for data sharing in cloud environments.
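The HMAC-based integrity check can be sketched with Python's standard library; the key and log content below are illustrative, not the framework's actual code:

```python
import hashlib
import hmac

def sign_log(log_bytes, key):
    """Tag a log record so later tampering is detectable."""
    return hmac.new(key, log_bytes, hashlib.sha256).hexdigest()

def verify_log(log_bytes, key, tag):
    # compare_digest avoids timing side channels when checking the tag
    return hmac.compare_digest(sign_log(log_bytes, key), tag)

key = b"shared-secret"                 # hypothetical key material
record = b"2024-01-01 user=alice action=read"
tag = sign_log(record, key)
ok = verify_log(record, key, tag)
tampered = verify_log(b"2024-01-01 user=alice action=write", key, tag)
```

Any modification to the record (or to the tag) makes verification fail, which is exactly the tamper-evidence property the framework wants for its log files.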
Genetic Algorithm based Layered Detection and Defense of HTTP Botnet – IDES Editor
An HTTP botnet uses the HTTP protocol to create a chain of botnets, thereby compromising other systems. By using the HTTP protocol and port 80, attacks can not only be hidden but can also pass through the firewall without being detected. DPR-based detection leads to better analysis of botnet attacks [3]; however, it provides only probabilistic detection of the attacker and is also time-consuming and error-prone. This paper proposes a genetic algorithm based layered approach for detecting as well as preventing botnet attacks, and reviews a p2p firewall implementation which forms the basis of filtering. Performance evaluation is done based on precision, F-value and probability. The layered approach reduces the computation and overall time requirement [7], and the genetic algorithm promises a low false positive rate.
Enhancing Data Storage Security in Cloud Computing Through Steganography – IDES Editor
This document summarizes a research paper that proposes a method for enhancing data security in cloud computing through steganography. The method hides user data in digital images stored on cloud servers. When data needs to be accessed, it is extracted from the images. The document outlines the cloud architecture and security issues addressed. It then describes the proposed system architecture, security model, and data storage and retrieval process. Data is partitioned and hidden in multiple images to improve security. The goal is to prevent unauthorized access to user data stored on cloud servers.
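The hide-and-extract cycle can be sketched with least-significant-bit (LSB) embedding, one common steganographic scheme, over a raw pixel byte buffer. The paper partitions data across multiple images; this sketch shows a single buffer:

```python
def embed(pixels, payload):
    """Hide payload bytes in the least significant bits of pixel bytes."""
    bits = [(byte >> shift) & 1 for byte in payload for shift in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this cover image")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit     # overwrite only the lowest bit
    return bytes(out)

def extract(pixels, n_bytes):
    """Recover n_bytes previously embedded with embed()."""
    result = bytearray()
    for j in range(n_bytes):
        value = 0
        for p in pixels[j * 8:(j + 1) * 8]:
            value = (value << 1) | (p & 1)  # reassemble bits MSB-first
        result.append(value)
    return bytes(result)

cover = bytes(range(64))                   # toy 64-byte "image"
stego = embed(cover, b"hi")
recovered = extract(stego, 2)
```

Each pixel byte changes by at most 1, so the stego image is visually indistinguishable from the cover, while only someone who knows the scheme (and, in the proposed system, which images hold which partition) can recover the data.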
The main tasks of a Wireless Sensor Network (WSN) are data collection from its nodes and communication of this data to the base station (BS). The protocols used for communication among the WSN nodes and between the WSN and the BS must consider the resource constraints of the nodes: battery energy, computational capability and memory. WSN applications involve unattended operation of the network over an extended period of time, so efficient routing protocols need to be adopted to extend the lifetime of a WSN. The proposed low-power routing protocol, based on a tree-based network structure, reliably forwards the measured data towards the BS using TDMA. An energy consumption analysis of the WSN using this protocol is also carried out; the network is found to be energy efficient, with an average duty cycle of 0.7% for the WSN nodes. The OMNeT++ simulation platform along with the MiXiM framework is used.
Permutation of Pixels within the Shares of Visual Cryptography using KBRP for... – IDES Editor
The security of authentication for internet-based co-banking services should not be exposed to high risks. Passwords are highly vulnerable to virus attacks due to the lack of high-end embedded security methods, and to make passwords more secure, people are generally compelled to select jumbled-up character-based passwords that are not only less memorable but equally prone to insecurity. Multiple use of distributed shares has been studied to solve the authentication problem through algorithms based on thresholding of pixels in image processing and visual cryptography, where a subset of the shares is used to recover the original image for authentication via a correlation function [1][2]. The main disadvantage of that approach is the plain storage of the shares, and the fact that one of the shares is supplied to the customer, which opens the possibility of misuse by a third party. This paper proposes a technique for scrambling the pixels within the shares by key based random permutation (KBRP) before authentication is attempted. The total number of shares to be created depends on the multiplicity of ownership of the account. This method minimizes the customers' uncertainty regarding the security, storage and retrieval of their half of the shares.
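The key-based random permutation step can be sketched by seeding a PRNG from a hash of the key, so the same key always reproduces (and can invert) the same pixel shuffle. This is an assumed construction for illustration; the paper's exact KBRP derivation may differ:

```python
import hashlib
import random

def key_permutation(n, key):
    """Deterministic permutation of range(n) derived from the key."""
    seed = int.from_bytes(hashlib.sha256(key).digest(), "big")
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def scramble(share, key):
    perm = key_permutation(len(share), key)
    return bytes(share[p] for p in perm)

def unscramble(share, key):
    perm = key_permutation(len(share), key)
    out = bytearray(len(share))
    for i, p in enumerate(perm):
        out[p] = share[i]          # invert the permutation
    return bytes(out)

share = bytes(range(32))           # toy share pixels
protected = scramble(share, b"account-key")
restored = unscramble(protected, b"account-key")
```

A stored share is thus useless without the key, which addresses the plain-storage weakness the paper identifies.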
This paper presents a trifocal Rotman Lens design approach. The effects of focal ratio and element spacing on the performance of the Rotman Lens are described. A three-beam prototype feeding a 4-element antenna array working in L-band has been simulated using RLD v1.7 software. Simulated results show that the lens has a return loss of -12.4 dB at 1.8 GHz. Beam-to-array-port phase error variation with change in focal ratio and element spacing has also been investigated.
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral Images – IDES Editor
Hyperspectral images can be efficiently compressed through a linear predictive model, such as the one used in the SLSQ algorithm. In this paper we exploit this predictive model on the AVIRIS images by identifying, through an off-line approach, a common subset of bands that are not spectrally related to any other bands. These bands are not useful as prediction references for the SLSQ 3-D predictive model, and need to be encoded via other prediction strategies that consider only spatial correlation. We obtained this subset by clustering the AVIRIS bands via the clustering-by-compression approach. The main result of this paper is the list of bands, unrelated to the others, for the AVIRIS images. The clustering trees obtained for AVIRIS, and the relationships among bands they depict, are also an interesting starting point for future research.
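Clustering by compression relies on the normalized compression distance (NCD): related data compress much better together than unrelated data. A minimal sketch with zlib as the compressor, using toy byte strings as stand-ins for spectral bands:

```python
import zlib

def ncd(x, y):
    """Normalized compression distance: near 0 for related data, near 1 for unrelated."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

band_a = b"abcabcabcabc" * 40                               # strongly structured "band"
band_b = band_a                                              # a spectrally identical band
band_c = bytes((i * 97 + 31) % 256 for i in range(480))      # unrelated "band"
d_same = ncd(band_a, band_b)
d_diff = ncd(band_a, band_c)
```

Building a distance matrix of pairwise NCDs over all bands, then running hierarchical clustering on it, yields the clustering trees the paper describes.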
Microelectronic Circuit Analogous to Hydrogen Bonding Network in Active Site ... – IDES Editor
A microelectronic circuit of block-elements functionally analogous to two hydrogen bonding networks is investigated. The hydrogen bonding networks are extracted from the β-lactamase protein and are formed in its active site. Each hydrogen bond of the network is described in an equivalent electrical circuit by a three- or four-terminal block-element, and each block-element is coded in Matlab. Static and dynamic analyses are performed. The resultant microelectronic circuit analogous to the hydrogen bonding network operates as a current mirror, sine pulse source, triangular pulse source as well as a signal modulator.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... (IDES Editor)
In this paper a method is proposed to discriminate
real-world scenes into natural and man-made scenes of similar
depth. The global roughness of a scene image varies as a function
of image depth: an increase in image depth leads to an increase in
roughness in man-made scenes, whereas natural scenes
exhibit smoother behavior at greater image depth. This particular
arrangement of pixels in the scene structure can be well explained
by the local texture information in a pixel and its neighborhood.
Our proposed method analyzes the local texture information of a
scene image using a texture unit matrix. For the final classification
we have used both supervised and unsupervised learning, with a
K-Nearest Neighbor (KNN) classifier and a Self-Organizing
Map (SOM) respectively. This technique is suitable for online
classification due to its very low computational complexity.
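The texture unit representation referred to above codes each pixel's 3x3 neighborhood against its center value. A minimal sketch following the standard texture unit definition is shown below (the neighbor ordering and the use of plain NumPy are assumptions; the paper's exact conventions are not given in this summary):

```python
import numpy as np

def texture_unit_number(patch: np.ndarray) -> int:
    """Texture unit number for a 3x3 grayscale patch: each of the
    8 neighbors is coded 0 (less than), 1 (equal to), or 2 (greater
    than) the center pixel, and the ternary codes are combined into
    a single value in [0, 6560]."""
    center = patch[1, 1]
    # Neighbors in a fixed clockwise order starting at the top-left.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i, v in enumerate(neighbors):
        e = 0 if v < center else (1 if v == center else 2)
        code += e * (3 ** i)
    return code

# A perfectly flat patch: every neighbor equals the center, so all
# eight ternary codes are 1 and the result is 3^0 + ... + 3^7 = 3280.
flat = np.full((3, 3), 7)
print(texture_unit_number(flat))  # 3280
```

Sliding this over the image yields the texture unit matrix, whose distribution then feeds the KNN and SOM classifiers.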
How to Store Data on the Odoo 17 Website (Celine George)
Here we are going to discuss how to store data in an Odoo 17 website.
It includes defining a model with a few fields in it, adding demo data to the model using the data directory, and then, using a controller, passing the values into the template while rendering it and displaying them on the website.
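As a rough illustration of the demo-data step, a record for a hypothetical model could be declared in an XML file under the module's data directory (the model name `website.book` and its fields here are assumptions for the sketch, not taken from the slides):

```xml
<odoo>
    <!-- Demo record for a hypothetical website.book model -->
    <record id="book_demo_1" model="website.book">
        <field name="name">Sample Book</field>
        <field name="author">Jane Doe</field>
    </record>
</odoo>
```

The file is then listed under the module manifest's data (or demo) entries so Odoo loads the record at install time, after which a controller can query the model and pass the records to the website template.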
How to Add Colour Kanban Records in Odoo 17 Notebook (Celine George)
In Odoo 17, you can enhance the visual appearance of your Kanban view by adding color-coded records using the Notebook feature. This allows you to categorize and distinguish between different types of records based on specific criteria. By adding colors, you can quickly identify and prioritize tasks or items, improving organization and efficiency within your workflow.
How to Configure Time Off Types in Odoo 17 (Celine George)
Now we can take a look at how to configure time-off types in Odoo 17 through this slide. Time-off types are used to grant or request different types of leave. Only then will the authorities have a clear view and a clear understanding of what kind of leave the employee is taking.
Webinar: Innovative Assessments for Social Emotional Skills (EduSkills OECD)
Presentations by Adriano Linzarini and Daniel Catarino da Silva of the OECD Rethinking Assessment of Social and Emotional Skills project, from the OECD webinar "Innovations in measuring social and emotional skills and what AI will bring next" on 5 July 2024.
Join educators from the US and worldwide at this year’s conference, themed “Strategies for Proficiency & Acquisition,” to learn from top experts in world language teaching.
Principles of Rood's Approach (ibtesaam huma)
Principles of Rood’s Approach
A treatment technique used in physiotherapy for neurological patients, which helps them recover and improve their quality of life. It comprises:
Facilitatory techniques
Inhibitory techniques
Ardra Nakshatra (आर्द्रा): Understanding its Effects and Remedies (Astro Pathshala)
Ardra Nakshatra, the sixth Nakshatra in Vedic astrology, spans from 6°40' to 20° in the Gemini zodiac sign. Governed by Rahu, the north lunar node, Ardra translates to "the moist one" or "the star of sorrow." Symbolized by a teardrop, it represents the transformational power of storms, bringing both destruction and renewal.
About Astro Pathshala
Astro Pathshala is a renowned astrology institute offering comprehensive astrology courses and personalized astrological consultations for over 20 years. Founded by Gurudev Sunil Vashist ji, Astro Pathshala has been a beacon of knowledge and guidance in the field of Vedic astrology. With a team of experienced astrologers, the institute provides in-depth courses that cover various aspects of astrology, including Nakshatras, planetary influences, and remedies. Whether you are a beginner seeking to learn astrology or someone looking for expert astrological advice, Astro Pathshala is dedicated to helping you navigate life's challenges and unlock your full potential through the ancient wisdom of Vedic astrology.
For more information about their courses and consultations, visit Astro Pathshala.
Understanding and Interpreting Teachers' TPACK for Teaching Multimodalities i... (Neny Isharyanti)
Presented as a plenary session in iTELL 2024 in Salatiga on 4 July 2024.
The plenary focuses on understanding and interpreting the TPACK competences teachers need in order to be adept at teaching multimodality in the digital age. It juxtaposes the results of research on multimodality with its contextual implementation in the teaching of the English subject in the Indonesian Emancipated Curriculum.