As education grows more competitive, students increasingly need a clear understanding of the educational fields available to them. Counselors are not always available, and some lack thorough knowledge of particular fields, which can leave students with misconceptions. This makes it difficult for students to decide on a proper educational trajectory, and the guidance they do receive is not always useful. The proposed paper addresses these problems using machine learning algorithms. Various algorithms were considered, and those best suited to the project were selected. Three major problems arise, and they are solved using Random Forest, Linear Regression, and a search algorithm backed by the Google API. First, the search algorithm solves the location problem by segregating colleges by location; then Random Forest produces a list of colleges from the chosen stream and percentage range; finally, Linear Regression predicts the current cutoff from previous years' data. Beyond this, the proposed system also provides information on all fields of education, helping students understand their field of interest better. The idea is entirely new, with no existing projects of a similar kind. This project will help guide students throughout.
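A minimal sketch of the cutoff-prediction step (the Linear Regression component described above) might look like the following; the years and cutoff percentages here are illustrative placeholders, not real admissions data.

```python
# Hypothetical sketch: fit a linear regression on previous years'
# cutoffs and extrapolate to the current year. Data is illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2015], [2016], [2017], [2018], [2019]])
cutoffs = np.array([88.2, 88.9, 89.5, 90.1, 90.8])  # percentage cutoffs

model = LinearRegression().fit(years, cutoffs)
predicted = model.predict([[2020]])[0]  # extrapolated current cutoff
print(f"Predicted 2020 cutoff: {predicted:.1f}%")
```

Because historical cutoffs tend to drift roughly linearly year over year, even this simple model gives a usable first estimate.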
11. Software Modules Clustering: An Effective Approach for Reusability (Alexander Decker)
This document summarizes previous work on using clustering techniques for software module classification and reusability. It discusses hierarchical clustering and non-hierarchical clustering methods. Previous studies have used these techniques for software component classification, identifying reusable software modules, course clustering based on industry needs, mobile phone clustering based on attributes, and customer clustering based on electricity load. The document provides background on clustering analysis and its uses in various domains including software testing, pattern recognition, and software restructuring.
Machine learning is the subfield of computer science that, in Arthur Samuel's 1959 phrasing, gives "computers the ability to learn without being explicitly programmed." Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms overcome strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible; example applications include email filtering, detection of network intruders or malicious insiders working towards a data breach, optical character recognition (OCR), learning to rank, and computer vision.
Neuro-Fuzzy Model for Strategic Intellectual Property Cost Management (Editor IJCATR)
Strategic intellectual property (IP) management requires strategic management of IP creation costs. Ideally, the cost of creating IP could be estimated proactively, which would facilitate aligning IP creation activities with strategic management objectives. This paper proposes a neuro-fuzzy model for strategic IP cost management; the variables for the model are extracted using Activity-Based Costing techniques.
Artificial Intelligence for Automated Decision Support Project (Valerii Klymchuk)
Artificial intelligence can be used to develop automated decision support systems. There are different types of AI systems like expert systems, knowledge-based systems, and neural networks that can learn from data and apply rules to make decisions. One example is IBM's Watson, which uses natural language processing and evidence-based learning to provide personalized medical recommendations. Automated decision systems are rule-based and can make repetitive operational decisions in real-time, like pricing and loan approvals, freeing up human workers for more complex tasks. The key components of these systems are knowledge acquisition from experts, knowledge representation in a structured format like rules, and inference engines that apply the rules to draw new conclusions.
A neural network is a series of algorithms that attempts to identify underlying relationships in a set of data by using a process that mimics the way the human brain operates. Neural networks have the ability to adapt to changing input so the network produces the best possible result without the need to redesign the output criteria.
This document summarizes a seminar presentation on machine learning. It defines machine learning as applications of artificial intelligence that allow computers to learn automatically from data without being explicitly programmed. It discusses three main types of machine learning: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning uses labelled training data, unsupervised learning finds patterns in unlabelled data, and reinforcement learning involves learning through rewards and punishments. Example applications discussed include data mining, natural language processing, image recognition, and expert systems.
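The supervised/unsupervised distinction above can be made concrete with a toy example (the one-dimensional data and the choice of k-nearest-neighbours and k-means here are assumptions for illustration, not from the presentation):

```python
# Supervised vs unsupervised learning on the same toy data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X = np.array([[1.0], [1.2], [0.9], [8.0], [8.3], [7.9]])
y = np.array([0, 0, 0, 1, 1, 1])  # labels available: supervised setting

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)          # uses labels
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # ignores labels

pred_label = clf.predict([[8.1]])[0]    # predicted class from labels
pred_cluster = km.predict([[8.1]])[0]   # discovered grouping, no labels
```

The classifier reproduces the human-assigned labels, while k-means discovers the same two groups on its own; its cluster numbering, however, is arbitrary.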
June 2020: Top Read Articles in Advanced Computational Intelligence (aciijournal)
Advanced Computational Intelligence: An International Journal (ACII) is a quarterly open access peer-reviewed journal that publishes articles which contribute new results in all areas of computational intelligence. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced computational intelligence concepts and establishing new collaborations in these areas.
This document provides an introduction to machine learning techniques presented by Dr. Radhey Shyam. It begins with definitions of machine learning and discusses when machine learning is applicable. The document then covers types of learning problems, designing learning systems, the history of machine learning, function representation techniques, search algorithms, and evaluation parameters. It also introduces several machine learning approaches and discusses common issues in machine learning.
Lecture 1: Introduction to Machine Learning (UmmeSalmaM1)
Machine Learning is a field of computer science which deals with the study of computer algorithms that improve automatically through experience. This presentation discusses the following concepts: prerequisites, a definition and introduction to Machine Learning (ML), fields associated with ML, the need for ML, the differences between Artificial Intelligence, Machine Learning, and Deep Learning, types of learning in ML, applications of ML, and the limitations of machine learning.
Regression, Bayesian Learning and Support Vector Machines (Dr. Radhey Shyam)
The document discusses machine learning techniques including regression, Bayesian learning, and support vector machines. It provides details on linear regression, logistic regression, Bayes' theorem, concept learning, the Bayes optimal classifier, naive Bayes classifier, and Bayesian belief networks. The document is a slide presentation given by Dr. Radhey Shyam on machine learning techniques, outlining these various topics in greater detail over multiple slides.
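The naive Bayes classifier mentioned above applies Bayes' theorem under a strong feature-independence assumption; a minimal Gaussian naive Bayes sketch (with toy fruit data assumed for illustration) is:

```python
# Gaussian naive Bayes: each class models each feature with an
# independent normal distribution; prediction picks the class with
# the highest posterior probability. Toy data, for illustration only.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# features: [weight_g, diameter_cm]
X = np.array([[150, 7.0], [160, 7.5], [120, 6.0],
              [30, 3.0], [35, 3.2], [25, 2.8]])
y = np.array(["apple", "apple", "apple", "plum", "plum", "plum"])

nb = GaussianNB().fit(X, y)
pred = nb.predict([[140, 6.8]])[0]  # classify an unseen fruit
```

Despite its "naive" independence assumption, this classifier is fast, needs little training data, and is often surprisingly competitive.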
Rule-based Information Extraction for Airplane Crashes Reports (CSCJournals)
Over the last two decades, the internet has gained widespread use in various aspects of everyday life. The amount of data generated in both structured and unstructured forms has increased rapidly, posing a number of challenges. Unstructured data are hard to manage, assess, and analyse for decision making. Extracting information from these large volumes of data is time-consuming and requires complex analysis. Information extraction (IE) technology is part of a text-mining framework for extracting useful knowledge for further analysis.
Various competitions, conferences, and research projects have accelerated the development of IE. This project presents the main aspects of the information extraction field in detail, focusing on a specific domain: airplane crash reports. A set of reports from the 1001 Crash website was used to perform extraction tasks such as crash site, crash date and time, departure, destination, etc. The common structures and textual expressions were considered in designing the extraction rules.
The evaluation framework used to examine the system's performance was executed on both working and test texts. It shows that the system extracts entities and relations more accurately than events. Generally, the good results reflect the high quality and good design of the extraction rules. It can be concluded that the rule-based approach has proved its efficiency in delivering reliable results. However, this approach does require intensive work and a cyclical process of rule testing and modification.
IRJET - A Survey on Soft Computing Techniques and Applications (IRJET Journal)
This document provides an overview of soft computing techniques and their applications. It discusses several key techniques including evolutionary algorithms, genetic algorithms, harmony search, fuzzy logic, rough sets, and nonlinear predictors. For each technique, it briefly explains the concept and provides examples of real-world applications. The document concludes that soft computing techniques are becoming increasingly important as computing power increases, and that techniques like evolutionary algorithms, genetic algorithms, fuzzy logic and rough sets are already being used successfully in many industrial, commercial, medical and other applications. This is expected to continue growing significantly in the next decade.
The AML group carries out both theoretical and experimental work on developing and applying new machine learning techniques for solving various application problems.
Artificial Intelligence Future | Impact of Artificial Intelligence on Society (YashShah445)
Artificial intelligence is helping farmers, doctors, and rescue workers improve their positive impact on society. While fear of negative consequences remains, AI is proving it can bring about enormous societal benefits.
This summarizes a research poster presentation on an unsupervised machine learning framework to learn and predict individual daily activity patterns for personal robots. The framework uses a 2-layer LDA model to classify daily activity data from sensors into topics and extract features. It then applies random classification and regression forests to the LDA outputs to learn patterns and predict activities. An experiment applying this framework to one user's 3 months of indoor location and app usage data achieved an average 65.6% F-measure for activity prediction and 83.5% precision for frequent activities.
Machine learning is a scientific discipline that develops algorithms to allow systems to learn from data and improve automatically without being explicitly programmed. The document discusses several key machine learning concepts including supervised learning algorithms like decision trees and Naive Bayes classification. Decision trees use branching to represent classification or regression rules learned from data to make predictions. Naive Bayes classification is a simple probabilistic classifier that applies Bayes' theorem with strong independence assumptions between features.
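The decision-tree branching described above can be sketched in a few lines; the student-performance toy data and the depth limit here are assumptions for illustration:

```python
# A small decision tree: learned if/else splits on the features
# act as classification rules. Toy data, for illustration only.
from sklearn.tree import DecisionTreeClassifier

# features: [hours_studied, attendance_pct]; label: pass (1) / fail (0)
X = [[2, 60], [3, 55], [8, 90], [9, 95], [1, 40], [7, 85]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
pred = tree.predict([[8, 88]])[0]  # follows the learned branches
```

Limiting the depth keeps the learned rules readable and guards against overfitting such a small dataset.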
The document provides a literature review on heuristic based multiobjective optimization problems in crisp and fuzzy environments. It summarizes 15 research papers on topics related to multiobjective optimization using techniques like particle swarm optimization, ant colony optimization, cuckoo search, and simulated annealing. The papers are summarized in a table that lists the title, authors, journal, volume, year, and pages of each paper. The literature review explores multiobjective optimization applications in areas like assembly line balancing, estimating nadir points, task scheduling in cloud computing, and engineering design problems.
A Comprehensive Review of Conversational Agent and its Prediction Algorithm (vivatechijri)
There is an exponential increase in the use of conversational bots, platforms that can chat with people using artificial intelligence. Recent advances have made AI capable of learning from data and producing an output. This learning can be performed using various machine learning algorithms. Machine learning involves constructing algorithms that can learn from data and predict outcomes. This paper reviews the efficiency of the different machine learning algorithms used in conversational bots.
Survey on MapReduce in Big Data Clustering using Machine Learning Algorithms (IRJET Journal)
This document summarizes research on using MapReduce techniques for big data clustering with machine learning algorithms. It discusses how traditional clustering algorithms do not scale well for large datasets. MapReduce allows distributed processing of large datasets in parallel. The document reviews several studies that implemented clustering algorithms like k-means using MapReduce on Hadoop. It found this improved efficiency and reduced complexity compared to traditional approaches. Faster processing of large datasets enables applications in areas like education and healthcare.
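One iteration of k-means fits the MapReduce pattern naturally: the map phase assigns each point to its nearest centroid, and the reduce phase averages each group. The pure-Python sketch below mimics that structure on one machine (function names and data are illustrative, not from the surveyed papers):

```python
# One MapReduce-style k-means iteration on 1-D points.
from collections import defaultdict

def nearest(point, centroids):
    """Index of the centroid closest to `point`."""
    return min(range(len(centroids)), key=lambda i: (point - centroids[i]) ** 2)

def kmeans_step(points, centroids):
    groups = defaultdict(list)
    for p in points:                       # "map": emit (centroid_id, point)
        groups[nearest(p, centroids)].append(p)
    # "reduce": average the points assigned to each centroid
    return [sum(g) / len(g) for _, g in sorted(groups.items())]

points = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
new_centroids = kmeans_step(points, [0.0, 12.0])
```

On a real Hadoop cluster the map and reduce functions run in parallel across data shards, which is what makes the approach scale to datasets that defeat single-machine clustering.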
APPLICATION OF ARTIFICIAL NEURAL NETWORKS IN ESTIMATING PARTICIPATION IN ELEC... (Zac Darcy)
This document discusses using artificial neural networks to estimate voter participation rates in future elections in Iran. Specifically, it describes using a two-layer feed-forward neural network to predict voter turnout in the Kohgiluyeh and Boyer-Ahmad province with 91% accuracy. The neural network was trained on past electoral data from the province. The document also provides background on artificial neural networks and reviews their use in predicting outcomes in various domains, including economics, politics, tourism, the environment, and information technology.
A REVIEW ON PREDICTIVE ANALYTICS IN DATA MINING (ijccmsjournal)
The main process of data mining is to collect, extract, and store valuable information, and many enterprises now do this actively. Within advanced analytics, predictive analytics is the branch mainly used to make predictions about unknown future events. Predictive analytics uses techniques from machine learning, statistics, data mining, modeling, and artificial intelligence to analyze current data and make predictions about the future. Its two main objectives are regression and classification. It comprises various analytical and statistical techniques for developing models that predict future occurrences, probabilities, or events. Predictive analytics deals with both continuous and discontinuous change. It provides a predictive score for each individual (healthcare patient, product SKU, customer, component, machine, or other organizational unit) to determine or influence organizational processes that span huge numbers of individuals, as in fraud detection, manufacturing, credit risk assessment, marketing, and government operations including law enforcement.
The document discusses the role of computers in research. It states that computers have become indispensable research tools that are ideally suited for tasks like large-scale data analysis, storage and retrieval, and processing data using various techniques. Computers expedite research work and reduce human effort while improving quality. They allow vast amounts of data to be accurately and rapidly processed and analyzed. Computers also facilitate tasks like report generation, graphing, and formatting references.
APPLICATION WISE ANNOTATIONS ON INTELLIGENT DATABASE TECHNIQUES (Journal For Research)
Databases are systems used for storing data. With information increasing at a rapid pace, extracting relevant information becomes time-consuming with traditional databases; thus the need for intelligent, or smart, databases arises. An intelligent database is a system in which automation is applied to a conventional database to enhance its functionality and efficiency. Intelligent databases help in making decisions and can respond or act in situations by learning. They have applications in wide areas such as healthcare, automobile, education, information security, and business. This paper presents various intelligent database techniques, which are further used to implement applications in this domain.
Artificial Intelligence (AI) has revolutionized information technology. AI is a subfield of computer science concerned with creating intelligent machines and software that work and react like human beings. AI and its applications are used in many areas of human life; expert systems solve complex problems in fields such as science, engineering, business, medicine, video games, and advertising. But do any traffic lights use artificial intelligence? This paper gives an overview of AI and its applications in human life. It explores current uses of AI technologies in network intrusion detection for protecting computer and communication networks from intruders, in medicine to improve hospital inpatient care and classify medical images, in accounting databases to mitigate their problems, in computer games, and in advertising. It also shows how AI principles are applied in traffic signal control and how they solve real traffic problems. The paper introduces a self-learning system based on an RBF neural network that can simulate a traffic officer's experience, focusing on evaluating the effect of the control as traffic changes and adjusting the signal with different AI techniques.
IRJET - Improved Model for Big Data Analytics using Dynamic Multi-Swarm Op... (IRJET Journal)
The document proposes an improved model for big data analytics using dynamic multi-swarm optimization and unsupervised learning algorithms. It develops an algorithm called DynamicK-reference Clustering that combines dynamic multi-swarm optimization with a k-reference clustering algorithm. The k-reference clustering algorithm uses reference distance weighting, Euclidean distance, and chi-square relative frequency to cluster mixed datasets. It was tested on several datasets from a machine learning repository and was shown to more efficiently cluster large, mixed datasets than other clustering algorithms like k-means and particle swarm optimization. The dynamic multi-swarm optimization helps guide the clustering algorithm to obtain more accurate cluster formations by providing the best initial value of k clusters.
ANALYSIS OF SYSTEM ON CHIP DESIGN USING ARTIFICIAL INTELLIGENCE (ijesajournal)
Automation is everywhere; applications are rarely developed without it. In the semiconductor industry, artificial intelligence plays a vital role in implementing chip-based design through automation. The main advantage of applying machine learning and deep learning techniques is to improve the implementation rate. The main objective of the proposed system is to apply deep learning with a data-driven approach to control the system, leading to improvements in design, delay, speed of operation, and cost. Through this system, the huge volume of data generated by the system is also brought under control.
STOCKSENTIX: A MACHINE LEARNING APPROACH TO STOCK MARKET (IRJET Journal)
This paper presents an approach for analyzing stock market news articles using web scraping, natural language processing, machine learning, and data visualization techniques. Key aspects of the approach include:
1) Web scraping is used to collect stock market news articles from various online sources.
2) Data preprocessing with Pandas cleans and structures the data. Sentiment analysis then categorizes the sentiment of each article as positive, negative, or neutral.
3) Matplotlib and other tools are used to visualize sentiment trends in an easily interpretable way to help identify patterns and aid decision making.
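As a rough illustration of the sentiment-categorisation step in points 2) and 3), the toy Python sketch below scores headlines against small positive/negative word lists. The word lists and headline are invented for illustration; the paper presumably uses a proper NLP sentiment model rather than this lexicon toy.

```python
# Toy lexicon-based sentiment scorer. The word lists below are
# illustrative placeholders, not taken from the paper.
POSITIVE = {"gain", "rally", "surge", "profit", "growth"}
NEGATIVE = {"loss", "crash", "plunge", "decline", "slump"}

def classify_headline(headline: str) -> str:
    """Label a headline positive, negative, or neutral by word counts."""
    words = headline.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_headline("Tech stocks rally on strong profit growth"))  # positive
```

A real pipeline would replace `classify_headline` with a trained sentiment model and feed the per-article labels into the visualization step.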
Predictive geospatial analytics using principal component regression - IJECEIAES
With the exponential growth of geospatial or spatial data all over the globe, geospatial data analytics deserves attention for manipulating the voluminous amounts of geodata arriving in various forms at high velocity. In addition, dimensionality reduction plays a key role in high-dimensional big data sets, including spatial data sets, which are continuously growing not only in observations but also in features or dimensions. In this paper, predictive analytics on geospatial big data using Principal Component Regression (PCR), a traditional Multiple Linear Regression (MLR) model improved with Principal Component Analysis (PCA), is implemented on a distributed, parallel big-data processing platform. The main objective of the system is to improve the predictive power of the MLR model combined with PCA, which removes insignificant and irrelevant variables or dimensions from that model. Moreover, this work presents how data mining and machine learning approaches can be efficiently utilized in predictive geospatial data analytics. For experimentation, OpenStreetMap (OSM) data is applied to develop a one-way road prediction for the city of Yangon, Myanmar. Experimental results show that the hybrid approach of PCA and MLR can be efficiently utilized not only in road prediction using OSM data but also in improving the traditional MLR model.
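The PCR idea this abstract describes, PCA for dimensionality reduction followed by ordinary least squares on the retained components, can be sketched on synthetic data. This is a minimal illustration of the technique, not the paper's implementation or its OSM dataset.

```python
import numpy as np

# Minimal Principal Component Regression sketch: centre the predictors,
# take the top-k principal components via SVD, then fit OLS on the
# component scores. Data is synthetic; y depends only on the first
# three (high-variance) predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * np.array([3.0, 2.0, 1.0, 0.3, 0.1])
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.1, size=200)

Xc = X - X.mean(axis=0)                 # centre the design matrix
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                   # keep the top-k components
Z = Xc @ Vt[:k].T                       # component scores

A = np.c_[np.ones(len(Z)), Z]           # intercept column, then OLS
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 with {k} of 5 components: {r2:.3f}")
```

Because the dropped components carry little variance (and, here, no signal), the reduced model retains nearly all of the full model's predictive power, which is the trade-off PCR exploits.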
A survey on Machine Learning and Artificial Neural Networks - IRJET Journal
This research paper provides an overview of machine learning and artificial neural networks. It discusses various machine learning techniques such as supervised learning, unsupervised learning, reinforcement learning, and deep learning. It also describes artificial neural networks and how they are used to mimic biological neural networks. The paper reviews several related works applying machine learning and neural networks to tasks like hydrological modeling, facial expression recognition, and cattle detection. It highlights advantages such as improved accuracy and automation, as well as limitations such as data and computational requirements. Overall, the paper aims to improve knowledge of machine learning and neural network techniques and their applications.
Similar to SCCAI - A Student Career Counselling Artificial Intelligence
Understanding the Impact and Challenges of Corona Crisis on Education Sector... - vivatechijri
In the second week of March 2020, state governments across the country suddenly declared the temporary closure of all colleges and schools as an immediate measure to stop the spread of the novel coronavirus pandemic. Almost a month has since passed, with no certainty about when they will reopen. A pandemic like this sets alarm bells ringing in the field of education, where a huge impact can be seen on the teaching and learning process as well as on the entire education sector. A disruption of this kind has actually given today's educators time to really think about the sector. Through the present research article, the author highlights the possible impact of coronavirus on the education sector, the future challenges the sector faces, and possible suggestions.
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION TOWARDS IMPROVEMENT AND DEVELOPMENT - vivatechijri
This document discusses the importance of leadership in leading an organization towards improvement and development. It states that leadership is responsible for providing a clear vision and strategy to successfully achieve that vision. Effective leadership can impact the success of an organization by controlling its direction and motivating employees. Leadership is different from traditional management in that it guides employees towards organizational goals through open communication and motivation, rather than simply directing work. The paper concludes that only leadership can lead an organization to change according to its evolving environment, while management may simply follow old rules. Leadership is key to adapting to new market needs and trends.
The assignment problem is a critical problem in mathematics that is also explored in the real physical world. In this paper we implement a replacement method to solve assignment problems, with an algorithm and solution steps. Using the new method and the two existing methods, we analyse a numerical example and compare the optimal solutions obtained by the new method and the two current methods. The proposed method may serve as a standardized technique that is simple to use for solving assignment problems.
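For readers unfamiliar with the assignment problem this abstract refers to, a brute-force baseline makes the objective concrete: choose a one-to-one assignment of workers to tasks that minimises total cost. The cost matrix below is invented, and exhaustive search is only feasible for small n; the paper's method and the classic Hungarian algorithm are the practical approaches.

```python
from itertools import permutations

# Brute-force assignment solver: try every permutation of tasks and
# keep the one with the lowest total cost. O(n!) — illustration only.
def solve_assignment(cost):
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):          # perm[i] = task for worker i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]
print(solve_assignment(cost))  # ((1, 0, 2), 9): costs 2 + 6 + 1 = 9
```

Any proposed assignment method can be checked against this exhaustive baseline on small instances.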
Structural and Morphological Studies of Nano Composite Polymer Gel Electroly... - vivatechijri
The document summarizes research on a nano composite polymer gel electrolyte containing SiO2 nanoparticles. Key points:
1. Polyvinylidene fluoride-co-hexafluoropropylene polymer was used as the base polymer mixed with propylene carbonate, magnesium perchlorate, and SiO2 nanoparticles to synthesize the nano composite polymer gel electrolyte.
2. The electrolyte was characterized using XRD, SEM, and FTIR which confirmed the homogeneous dispersion of SiO2 nanoparticles and increased amorphous nature of the electrolyte, enhancing its ion conductivity.
3. XRD showed decreased crystallinity and disappearance of polymer peaks upon addition of SiO2. SEM revealed
Theoretical study of two dimensional Nano sheet for gas sensing application - vivatechijri
This study focuses on various two-dimensional materials for sensing various gases, with a theoretical view toward new research in gas-sensing applications. In this paper we review various two-dimensional sheets, such as graphene, boron nitride nanosheets, and MXene, and their applications in sensing various gases present in the atmosphere.
METHODS FOR DETECTION OF COMMON ADULTERANTS IN FOOD - vivatechijri
Food is essential for living. Food adulteration deceives consumers and can endanger their health. The purpose of this document is to list methods for detecting common food adulterants found in India. An adulterant is
a substance found in other substances such as food, cosmetics, pharmaceuticals, fuels, or other chemicals that
compromise the safety or effectiveness of that substance. The addition of adulterants is called adulteration. The
most common reason for adulteration is the use of undeclared materials by manufacturers that are cheaper than
the correct and declared ones. The adulterants can be harmful or reduce the effectiveness of the product, or
they can be harmless.
Novel ideas are the key for any entrepreneur to get into the hustle, but developing an idea from its core requires a systematic plan, time management, time investment, and most importantly client attention. The time required for development may vary with the idea and the strength of the team. Leadership, to build a team and manage it throughout the peak of development, is the main quality. Innovation and techniques to clear the hurdles are another aspect of business development and client retention.
Innovation for supporting well-being has long been a focus of numerous disciplines, including computer science, psychology, and human-computer interaction. However, the meaning of well-being is not always clear, and this has implications for how we design and evaluate technologies that aim to foster it. Here, we discuss current definitions of well-being and how it relates to, and sometimes results from, self-transcendence. We then focus on how technologies can support well-being through experiences of self-transcendence, closing with possible future directions.
An Alternative to Hard Drives in the Coming Future: DNA-BASED DATA STORAGE - vivatechijri
Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up; hence there emerges a requirement for a storage medium with high capacity, high storage density, and the ability to withstand extreme environmental conditions. According to research from 2018, every minute Google conducted 3.88 million searches, people posted 49,000 photos on Instagram, sent 159,362,760 e-mails, tweeted 473,000 times, and watched 4.33 million videos on YouTube. For 2020, an estimated 1.7 megabytes of data were created per second per person globally, which translates to about 418 zettabytes in a single year. The magnetic or optical data-storage systems that currently hold this volume of 0s and 1s typically cannot last for more than a century, and running data centres takes vast amounts of energy. In short, we are about to have a substantial data-storage problem that will only become more severe over time. Deoxyribonucleic acid (DNA) can potentially be used for these purposes because it is not much different from the traditional method used in a computer. DNA's information density is notable: 215 petabytes, or 215 million gigabytes, of data can be stored in just one gram of DNA. First we can encode all data at a molecular level and then store it in a medium that will last and not become outdated like floppy disks. Due to improved techniques for reading and writing DNA, a rapid increase is observed in the amount of data that can be stored in DNA.
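The claim that DNA storage "is not much different from the traditional method used in a computer" can be made concrete with a toy mapping from bits to bases. This is an illustrative sketch only; real DNA storage schemes add error correction and avoid problematic base runs.

```python
# Toy DNA encoder/decoder: map each 2-bit pair to one of the four
# bases (00->A, 01->C, 10->G, 11->T), so 4 bases encode one byte.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand string."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BASE_TO_BITS[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"DNA")
print(strand)  # 12 bases: 4 per input byte
```

At two bits per base, the round trip is lossless, which is why the coding side of DNA storage is conceptually close to ordinary binary storage; the hard problems are synthesis, sequencing, and error handling.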
The usage of chatbots has increased tremendously over the past few years. A conversational interface is an interface the user can interact with by means of a conversation; the conversation can occur by speech but also by text input. When a conversational interface uses text, it is also described as a chatbot or a conversational medium. In this study, the user experience factors of these so-called chatbots were investigated. The prime objective is "to identify the state of the art in chatbot usability and applied human-computer interaction methodologies, and to research how to assess chatbot usability". Two types of chatbots were formulated, one with and one without personalisation factors. The design of this research is a two-by-two factorial design. The independent variables are the two chatbots (unpersonalised versus personalised) and the specific task or goal the user can accomplish with the chatbot in the financial field (a simple versus a complex task). The results show that there was no noteworthy interaction effect between personalisation and task on the user experience of chatbots. A significant difference was found between the two tasks with regard to the user experience of chatbots; however, this variation was not due to personalisation.
Smart glasses technology (SGT) in wearable computing aims to bring computing devices into today's world. Smart glasses are wearable computer glasses that add information alongside what the wearer sees; they are also able to change their optical properties at runtime. SGT is one of the modern computing approaches that amalgamate humans and machines with the help of information and communication technology. Smart glasses mainly consist of an optical head-mounted display, or embedded wireless glasses with a transparent heads-up display or augmented reality (AR) overlay. In recent years they have been used in medical and gaming applications, and also in the education sector. This report focuses on smart glasses, a category of wearable computing that is very popular in the media at present and is expected to be a big market in the coming years. It evaluates the differences between smart glasses and other smart devices, introduces many possible applications from different companies for different audiences, and gives an overview of the smart glasses that are available now and those that will become available over the next few years.
Future Applications of Smart IoT Devices - vivatechijri
With the Internet of Things (IoT) gradually emerging as the next stage in the evolution of the Internet, it becomes critical to recognize the various potential domains for IoT applications and the research challenges associated with them, ranging from smart cities to healthcare services, smart agriculture, logistics, and retail. IoT is expected to permeate practically all aspects of our daily life. Even though current IoT-enabling technologies have greatly improved in recent years, there are still numerous issues that require attention. Since the IoT concept arises from heterogeneous technologies, many research challenges will emerge; accordingly, IoT is opening up new dimensions of research. This paper presents the recent development of IoT technologies and discusses future applications.
Cross Platform Development Using Flutter - vivatechijri
Today the development of cross-platform mobile applications is in a state of compromise. Developers are unwilling to choose between building the same app many times for many operating systems and accepting a lowest-common-denominator solution that trades native speed and accuracy for portability. Flutter is an open-source SDK for creating high-performance, high-fidelity mobile apps for iOS and Android. A few significant features of Flutter are just-in-time (JIT) compilation and ahead-of-time (AOT) compilation into native (system-dependent) machine code, so that the resulting binary can execute natively. Flutter's hot-reload functionality helps us quickly and easily experiment, build UIs, add features, and fix bugs; hot reload works by injecting updated source code files into the running Dart Virtual Machine (VM). With Flutter, we believe we have a solution that gives us the best of both worlds: hardware-accelerated graphics and UI, powered by native ARM code, targeting both popular mobile operating systems.
The Internet, today, has become an important part of our lives. The World Wide Web that was once a small and inaccessible data storage service is now large and valuable. Current activities that are partially or completely integrated with the physical world can be raised to a higher standard, and all activities related to our daily life are mapped and linked to other business in the digital world. The world has seen great strides in the Internet and in 3D stereoscopic displays, and the time has come to unite the two to bring a new level of experience to users. 3D Internet is a concept that is yet to be realised and requires browsers to be equipped with in-depth visualization and artificial intelligence. When these capabilities are included, the 3D Internet concept discussed in this paper may become a reality. In this paper we discuss the features, possible implementation methods, applications, and advantages and disadvantages of the 3D Internet. With this paper we aim to provide a clear view of the 3D Internet and the potential benefits associated with it, weighed against the amount of investment needed.
Recommender Systems (RS) have emerged as a significant research interest, aiming to help users find items online by providing suggestions that closely match their interests. A recommender system is an information-filtering technology through which items are presented on internet sites according to the interests of users, and it is implemented in applications such as movies, music, venues, books, research articles, tourism, and social media in general. Recommender-systems research is usually based on comparisons of predictive accuracy: the higher the evaluation scores, the better the recommender. One of the leading approaches has been the use of recommender systems to proactively recommend scholarly papers to individual researchers. In today's world time is valuable, and researchers do not have much time to spend searching for the right articles in their research domain. Recommender systems are designed to suggest to users the items that best fit their needs and preferences, and they typically produce a list of recommendations in one of two ways: through collaborative or content-based filtering. Additionally, both publicly and privately used descriptive metadata are employed. The scope of recommendation is therefore limited to documents which are either publicly available or for which copyright permits have been granted. Recommendation systems support users and developers of various computer and software systems in overcoming information overload and performing information discovery tasks, among others.
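A minimal content-based filtering example, one of the two approaches the abstract names, might look like the following. The item names, tags, and similarity measure are invented for illustration; production systems use richer feature vectors such as TF-IDF over document text.

```python
import math

# Content-based filtering sketch: items carry tag sets, a user profile
# is the union of tags from liked items, and unseen items are ranked by
# cosine similarity between binary tag vectors.
ITEMS = {
    "paper_a": {"machine-learning", "recommender", "survey"},
    "paper_b": {"blockchain", "real-estate"},
    "paper_c": {"recommender", "collaborative-filtering"},
}

def cosine(a: set, b: set) -> float:
    """Cosine similarity of two binary tag vectors given as sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

def recommend(liked: list) -> list:
    """Rank unseen items by similarity to the user's tag profile."""
    profile = set().union(*(ITEMS[i] for i in liked))
    unseen = [i for i in ITEMS if i not in liked]
    return sorted(unseen, key=lambda i: cosine(profile, ITEMS[i]), reverse=True)

print(recommend(["paper_a"]))  # paper_c shares a tag with the profile
```

Collaborative filtering would instead compare users' rating vectors rather than item content, which is the other branch the abstract mentions.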
The study of LiFi (Light Fidelity) demonstrates how this technology can be used as a medium of communication similar to WiFi. It is a recent technology, proposed by Harald Haas in 2011. The paper explains the process of transmitting data with the help of the illumination of an LED bulb, and the speed and intensity with which it transmits data. The author discusses the technology and explains how we can move from WiFi to LiFi: WiFi is generally used for wireless coverage within buildings, while LiFi is capable of high-intensity wireless data coverage in limited areas with no obstacles. This research paper presents an introduction to LiFi technology, its performance, modulation, and challenges, and can be used as a reference for developing LiFi technology.
Social media platforms and our right to privacy - vivatechijri
The advancement of Information Technology has hastened the ability to disseminate information across the globe. In particular, recent trends in 'Social Networking' have led to a spark in personally sensitive information being published on the World Wide Web. While such socially active websites are creative tools for expressing one's personality, they also entail serious privacy concerns; thus, social networking websites could be termed a double-edged sword. It is important for the law to keep abreast of these developments in technology. The purpose of this paper is to demonstrate the limits of extending existing laws to battle privacy intrusions on the Internet, especially in the context of social networking. It is suggested that privacy-specific legislation is the most appropriate means of protecting online privacy. In doing so, it is important to maintain a balance with the competing right of expression, the failure of which may hinder the reaping of the benefits offered by Internet technology.
THE USABILITY METRICS FOR USER EXPERIENCE - vivatechijri
The Google File System (GFS) was innovatively created by Google engineers and was ready for production in record time. The success of Google is attributed to its efficient search algorithm and also to the underlying commodity hardware. As Google runs a large number of applications, Google's goal became to build a vast storage network out of inexpensive commodity hardware, so Google created its own file system, named the Google File System (GFS). GFS is one of the largest file systems in operation. Generally, GFS is a scalable distributed file system for large, distributed, data-intensive applications. In the design of GFS, the stresses considered include component failures, huge files, and files mutated by appending data. The entire file system is organized hierarchically in directories, with files identified by pathnames. The architecture comprises multiple chunk servers, multiple clients, and a single master. Files are divided into chunks, and the chunk size is a key design parameter. GFS also uses leases and mutation order in its design to achieve atomicity and consistency. As for fault tolerance, GFS is highly available, and replicas of the chunk servers and the master exist.
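The chunk-based layout described above implies a simple mapping from a file byte offset to a chunk index plus an offset within that chunk. The sketch below illustrates the arithmetic with a tiny chunk size standing in for GFS's 64 MB chunks; it is an illustration of the idea, not GFS code.

```python
# Offset-to-chunk arithmetic behind a chunked file layout. GFS uses
# 64 MB chunks; a tiny size is used here so the numbers are readable.
CHUNK_SIZE = 64  # bytes, standing in for 64 MB

def locate(offset: int) -> tuple:
    """Return (chunk_index, offset_within_chunk) for a file byte offset."""
    return offset // CHUNK_SIZE, offset % CHUNK_SIZE

print(locate(0))    # first byte of the first chunk
print(locate(130))  # third chunk, byte 2 within it
```

In GFS a client performs exactly this translation, asks the master which chunk servers hold that chunk, and then reads the data directly from a chunk server.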
A Study of Tokenization of Real Estate Using Blockchain Technology - vivatechijri
Real estate is by far one of the most trusted investments that people have preferred; being a lucrative investment, it provides a steady source of income in the form of leases and rents. Although there are numerous advantages, one of the key downsides of real estate investment is its lack of liquidity. Thus, even though global real estate investments amount to about twice the size of investments in stock markets, the number of investors in the real estate market is significantly lower. Blockchain technology has real potential for addressing the issues of liquidity and transparency, opening the market even to retail investors. Owing to the functionality and flexibility of creating security tokens, which are backed by real-world assets, real estate can be made liquid with the help of Special Purpose Vehicles. Tokens of the ERC-777 standard, which represent fractional ownership of the real estate, can be purchased by an investor, and these tokens can also be listed on secondary exchanges. The robustness of smart contracts can enable the efficient transfer of tokens and the seamless distribution of earnings amongst the investors. This work describes Ethereum blockchain-based solutions to make the existing real estate investment system much more efficient.
Best Practices of Clothing Businesses in Talavera, Nueva Ecija, A Foundation ... - IJAEMSJORNAL
This study primarily aimed to determine the best practices of clothing businesses in order to use them as a foundation for strategic business advancements: how frequently the businesses' best practices are tracked, which best practices the apparel firms most target for retention, and how best practices can be used for strategic business advancement. The respondents of the study are the owners of clothing businesses in Talavera, Nueva Ecija. Data were collected and analyzed using a quantitative approach and a descriptive research design, unveiling the best practices of clothing businesses as a foundation for strategic business advancement through statistical analysis (frequency and percentage, and weighted means), identifying the most to the least important performance indicators of the businesses among all of the variables. Based on the survey conducted on clothing businesses in Talavera, Nueva Ecija, several best practices emerge across different areas of business operations. These practices are categorized into three main sections: section one being the Business Profile and Legal Requirements, followed by the tracking of indicators in terms of Product, Place, Promotion, and Price, and Key Performance Indicators (KPIs) covering finance, marketing, production, technical, and distribution aspects. The research study delved into identifying the core best practices of clothing businesses, serving as a strategic guide for their advancement. Through meticulous analysis, several key findings emerged. Firstly, prioritizing product factors, such as maintaining optimal stock levels and maximizing customer satisfaction, was deemed essential for driving sales and fostering loyalty. Additionally, selecting the right store location was crucial for visibility and accessibility, directly impacting footfall and sales.
Vigilance towards competitors and demographic shifts was highlighted as essential for maintaining relevance. Understanding the relationship between marketing spend and customer acquisition proved pivotal for optimizing budgets and achieving a higher ROI. Strategic analysis of profit margins across clothing items emerged as crucial for maximizing profitability and revenue. Creating a positive customer experience, investing in employee training, and implementing effective inventory management practices were also identified as critical success factors. In essence, these findings underscored the holistic approach needed for sustainable growth in the clothing business, emphasizing the importance of product management, marketing strategies, customer experience, and operational efficiency.
How to Manage Internal Notes in Odoo 17 POS - Celine George
In this slide, we'll explore how to leverage internal notes within Odoo 17 POS to enhance communication and streamline operations. Internal notes provide a platform for staff to exchange crucial information regarding orders, customers, or specific tasks, all while remaining invisible to the customer. This fosters improved collaboration and ensures everyone on the team is on the same page.
Social media management system project report.pdf - Kamal Acharya
The project "Social Media Platform in Object-Oriented Modeling" aims to design and model a robust and scalable social media platform using object-oriented modeling principles. In the age of digital communication, social media platforms have become indispensable for connecting people, sharing content, and fostering online communities. However, their complex nature requires meticulous planning and organization. This project addresses the challenge of creating a feature-rich and user-friendly social media platform by applying key object-oriented modeling concepts. It entails the identification and definition of essential objects such as "User," "Post," "Comment," and "Notification," each encapsulating specific attributes and behaviors. Relationships between these objects, such as friendships, content interactions, and notifications, are meticulously established. The project emphasizes encapsulation to maintain data integrity, inheritance for shared behaviors among objects, and polymorphism for flexible content handling. Use case diagrams depict user interactions, while sequence diagrams showcase the flow of interactions during critical scenarios. Class diagrams provide an overarching view of the system's architecture, including classes, attributes, and methods. By undertaking this project, we aim to create a modular, maintainable, and user-centric social media platform that adheres to best practices in object-oriented modeling. Such a platform will offer users a seamless and secure online social experience while facilitating future enhancements and adaptability to changing user needs.
Exploring Deep Learning Models for Image Recognition: A Comparative Review - sipij
Image recognition, which comes under Artificial Intelligence (AI), is a critical aspect of computer vision, enabling computers and other computing devices to identify and categorize objects within images. Among the numerous fields of life, food processing is an important area in which image processing plays a vital role, both for producers and consumers. This study focuses on the binary classification of strawberries, where images are sorted into one of two categories. Utilizing a dataset of strawberry images, we aim to determine the effectiveness of different models in identifying whether an image contains strawberries. This research has practical applications in fields such as agriculture and quality control. We compared various popular deep learning models, including MobileNetV2, Convolutional Neural Networks (CNN), and DenseNet121, for the binary classification of strawberry images. The accuracy achieved by MobileNetV2 is 96.7%, by the CNN 99.8%, and by DenseNet121 93.6%. Through rigorous testing and analysis, our results demonstrate that the CNN outperforms the other models on this task. In the future, the deep learning models can be evaluated on richer and larger datasets for improved results.
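The accuracies the review reports come down to a simple metric: the fraction of test images whose predicted label matches the true label. A minimal sketch, with invented toy predictions rather than the paper's data:

```python
# Binary classification accuracy: matching predictions / total labels.
# The labels and predictions below are invented toy data.
def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # 1 = "contains strawberries"
preds  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]   # two mistakes out of ten
print(f"accuracy = {accuracy(preds, labels):.1%}")  # 80.0%
```

On an imbalanced dataset, accuracy alone can be misleading, which is one reason comparative reviews often also report precision, recall, or F1.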
OCS Training Institute is pleased to co-operate with a global provider of Rig Inspection/Audits, Commissioning, Compliance & Acceptance, as well as Engineering for Offshore Drilling Rigs, to deliver Drilling Rig Inspection Workshops (RIW), which teach the inspection and maintenance procedures required to ensure equipment integrity. Candidates learn to implement the relevant standards and understand industry requirements so that they can verify the condition of a rig's equipment and improve safety, thus reducing the number of accidents and protecting the asset.
Annual meeting of the Splunk community, where we discuss all the news presented at Splunk's annual conference, .conf24, held this June in Las Vegas.
In this video, I cover the key points of the meeting, such as:
- AI Assistant for use together with SPL
- SPL2 for use in Data Pipelines
- Ingest Processor
- Enterprise Security 8.0 (the biggest update of this release)
- Federated Analytics
- Integration with Cisco XDR and Cisco Talos
- And much more.
I also leave some links to interesting reports and content that can help clarify the products and features.
https://www.splunk.com/en_us/campaigns/the-hidden-costs-of-downtime.html
https://www.splunk.com/en_us/pdfs/gated/ebooks/building-a-leading-observability-practice.pdf
https://www.splunk.com/en_us/pdfs/gated/ebooks/building-a-modern-security-program.pdf
Our official Splunk group:
https://usergroups.splunk.com/sao-paulo-splunk-user-group/
A vernier caliper is a precision instrument used to measure dimensions with high accuracy. It can measure internal and external dimensions, as well as depths.
Here is a detailed description of its parts and how to use it.
Conservation of Taksar through Economic Regeneration - PriyankaKarn3
This was our 9th-semester Design Studio Project, introduced as the Conservation of Taksar Bazar, Bhojpur, an ancient city famous for Taksar (coin making). Taksar Bazaar holds a civilization of Newars who shifted from Patan, with huge socio-economic and cultural significance and a settlement of about 300 years. But in the present scenario, Taksar Bazar has lost its charm and importance due to various reasons such as migration, unemployment, the shift of economic activities to Bhojpur, and many more. The scenario was so pitiful that when we went to make inventories, take surveys, and study the site, the people, and the context, we barely found any youth of our age! Many houses were vacant, and the earthquake had devastated and ruined the heritage buildings.
Conservation of those heritage sites, ancient marvels, and history was in dire need, so we proposed the Conservation of Taksar through economic regeneration, because the lack of economy was the main reason people left the settlement and the reason for the overall decline.
Natural Is The Best: Model-Agnostic Code Simplification for Pre-trained Large...YanKing2
Pre-trained Large Language Models (LLM) have achieved remarkable successes in several domains. However, code-oriented LLMs are often heavy in computational complexity, and quadratically with the length of the input code sequence. Toward simplifying the input program of an LLM, the state-of-the-art approach has the strategies to filter the input code tokens based on the attention scores given by the LLM. The decision to simplify the input program should not rely on the attention patterns of an LLM, as these patterns are influenced by both the model architecture and the pre-training dataset. Since the model and dataset are part of the solution domain, not the problem domain where the input program belongs, the outcome may differ when the model is trained on a different dataset. We propose SlimCode, a model-agnostic code simplification solution for LLMs that depends on the nature of input code tokens. As an empirical study on the LLMs including CodeBERT, CodeT5, and GPT-4 for two main tasks: code search and summarization. We reported that 1) the reduction ratio of code has a linear-like relation with the saving ratio on training time, 2) the impact of categorized tokens on code simplification can vary significantly, 3) the impact of categorized tokens on code simplification is task-specific but model-agnostic, and 4) the above findings hold for the paradigm–prompt engineering and interactive in-context learning and this study can save reduce the cost of invoking GPT-4 by 24%per API query. Importantly, SlimCode simplifies the input code with its greedy strategy and can obtain at most 133 times faster than the state-of-the-art technique with a significant improvement. This paper calls for a new direction on code-based, model-agnostic code simplification solutions to further empower LLMs.
SCCAI- A Student Career Counselling Artificial Intelligence
1. VIVA-Tech International Journal for Research and Innovation Volume 1, Issue 2 (2019)
ISSN(Online): 2581-7280 Article No. 1
PP 1-6
1
www.viva-technology.org/New/IJRI
SCCAI- A Student Career Counselling Artificial Intelligence
Aditya M. Pujari1
, Rahul M. Dalvi1
, Kaustubh S. Gawde1
, Tatwadarshi P.
Nagarhalli1
1
(Computer Engineering Department, VIVA Institute of Technology, India)
Abstract: As education grows day by day, competition has created a need for students to understand more about the educational field. A counselor is often not available at all times and sometimes lacks proper knowledge about a particular educational field, which creates misconceptions about that field. This makes it difficult for a student to decide on a proper educational trajectory, and the guidance received is not always useful. The proposed paper overcomes these problems using machine learning algorithms. Various algorithms were considered, and those best suited to the project are used here. Three major problems arise along the way, and they are solved using Random Forest, Linear Regression, and a searching algorithm built on the Google API. First, the searching algorithm solves the problem of location by segregating colleges location-wise; then Random Forest provides the list of colleges using the stream and percentage range; and finally Linear Regression predicts the current cutoff using previous years' data. Apart from this, the proposed system also provides information regarding all fields of education, helping students understand their field of interest better. To the best of the authors' knowledge, no existing project of a similar kind exists. This project will help guide students throughout.
Keywords – Machine learning, Random Forest, Linear Regression, K-means, Chatbot.
1. INTRODUCTION
Artificial Intelligence, also known as machine intelligence, is intelligence demonstrated by machines [14]. Artificial Intelligence is defined as the study of intelligent agents: devices that learn from their environment and take actions that maximize the chances of successfully achieving their goals. Capabilities generally classified as artificial intelligence include successfully understanding human speech, competing at the highest level in strategy games, autonomously operating cars, and intelligent routing in content delivery networks and military simulations. Artificial intelligence research has been divided into subfields that often fail to communicate with each other. These subfields are based on technical considerations, such as particular goals, the use of particular tools, or deep philosophical differences [14]. Artificial Intelligence often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute. A complex algorithm is often built on top of other, simpler algorithms. Artificial Intelligence algorithms are capable of learning from data; they can enhance themselves by learning new heuristics, or can themselves write other algorithms. Some commonly used algorithms are Bayesian networks, decision trees, and nearest-neighbor methods. This learning from data using different algorithms is known as machine learning.
Machine learning is an interdisciplinary field that uses statistical techniques to give computer systems the ability to learn from data without being explicitly programmed. Machine learning explores the study and construction of algorithms that can learn from and make predictions on data; such algorithms overcome strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs [13]. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible. Machine learning is closely related to computational statistics, which also focuses on prediction-making through the use of computers. Within the field of data analytics, machine learning is a method used to devise complex models and algorithms that lend themselves to prediction; in commercial use, this is known as predictive analytics [13]. These analytical models allow researchers, data scientists, engineers, and analysts to "produce reliable, repeatable decisions and results" and uncover "hidden insights" by learning from historical relationships and trends in the data.
2. RELATED WORK
B. R. Ranoliya et al. [18] stated that Artificial Intelligence conversational agents are becoming popular for web services and systems in scientific, entertainment, commercial, and academic settings. More effective human-computer interaction takes place when the system queries the user for missing data in order to provide a satisfactory answer. User inquiries are first handled by an AIML check to determine whether the entered inquiry matches an AIML script. AIML is characterized by general inquiries and greetings, which are answered using AIML templates.
T. R. V. Anandharajan et al. [5] addressed weather prediction based on previous datasets. They intend to develop an intelligent weather-prediction module.
S. Kumar et al. [8] focused on the use of data mining techniques for predicting the rainfall of an area on the basis of dependent features such as precipitation and wet-day frequency.
S. Prabakaran et al. [7] observed that rainfall prediction from historical data is a trending research topic. The existing model uses data mining techniques for predicting the state of the atmosphere at a given time for weather variables such as rainfall, cloud conditions, and temperature.
H. L. Siew et al. [6] examined the theory and practice of regression techniques for the prediction of stock prices using a dataset transformed into ordinal form. The original pre-transformed data source contains heterogeneous data types used for handling currency values and financial ratios.
Y. Liu et al. [2] explained how the problem of traffic congestion is solved using the random forest classification algorithm. The city area is divided into sections, and the areas likely to have heavy traffic are predicted. This is done by considering environmental conditions such as climate, holidays, and road conditions. The results show that the traffic prediction model established using the random forest classification algorithm has a prediction accuracy of 87.5%.
X. Xun et al. [9] argued that the management of land resources requires not only solving the existing problems of the land but also predicting future problems and preventing land misuse, a demand made urgent by urbanization; the proposed system therefore uses the Random Forest algorithm for prediction.
A. Ghosh et al. [1] showed that the problem of mapping urbanization can be addressed using the Random Forest algorithm along with the Landsat archive and ancillary data. The paper proposes a methodology to map urban areas with multi-seasonal Landsat data. The Random Forest classifier and decision-level fusion are applied, and the paper gives a general idea of the random forest algorithm applied to the urban landscape.
Y. C. Shiao et al. [10] noted that, according to statistical data, over one million passengers take the MRT in Taipei each day. The authors predicted MRT passenger flow with random forest, using different factors collected from Taipei Main Station as training input. The system uses only the Taipei Main Station passenger flow to test the method.
H. Zhang et al. [4] examined a clustering approach based on a niche genetic algorithm in which each chromosome is made up of a sequence of gene codings. The number of genes in a chromosome is chosen randomly from a given dataset of n data points, and Canopy is employed to estimate the number of clusters.
M. Lehsaini et al. [3] proposed a cluster-based routing scheme based on an enhanced version of the K-means approach. The improved version of K-means generates balanced clusters in the network, so that no cluster head is overloaded relative to the others, unlike LEACH, where one generated cluster may contain a large number of nodes while another contains only a few members.
S. Ye et al. [11] stated that the K-means algorithm is a partition-based clustering algorithm; because of its simplicity and efficiency, it has become one of the most widely used clustering algorithms. However, K-means easily falls into local optima, and the cuckoo search (CS) algorithm is affected by its step size: the original cuckoo algorithm is governed by the step size A and the discovery probability P, which control the accuracy of the CS algorithm's global and local search and greatly influence the optimization effect of the algorithm.
Z. Ya-Ling et al. [12] presented an agglomerative fuzzy K-means clustering algorithm for numerical data, an extension of the standard fuzzy K-means algorithm that introduces a penalty term into the objective function to make the clustering process insensitive to the initial cluster centres. The paper extends the K-means clustering process to calculate a weight for each dimension in each cluster and uses the weight values to identify the subsets of important dimensions that characterize different clusters.
3. PROPOSED SYSTEM
The proposed system works on machine learning algorithms, which include Linear Regression and Random Forest. Dialogflow is a Google-owned developer of human-computer interaction technologies based on natural language conversations. Dialogflow is best known for creating virtual assistants for Android, iOS, and Windows Phone smartphones that perform tasks and answer users' questions in natural language. Dialogflow has also created a natural language processing engine that incorporates conversation context such as dialogue history, location, and user preferences. The proposed system uses Google Dialogflow (API.ai) for natural language processing (NLP). Google Assistant is the main framework for the system. The dataset was not available on any database or information website, so it was developed manually from available physical data. Different machine learning algorithms use different features from the dataset.
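As a hedged sketch of how such Dialogflow fulfillment might look, the handler below maps a Dialogflow v2-style webhook request to a text response. The intent names and reply texts are assumptions for illustration, not identifiers from the deployed system.

```python
# Hypothetical Dialogflow webhook fulfillment handler. A real deployment
# would receive this JSON over HTTPS from Dialogflow and return the
# response as the webhook reply.

def handle_webhook(request):
    """Map a Dialogflow v2-style request dict to a fulfillment response."""
    intent = request.get("queryResult", {}).get("intent", {}).get("displayName", "")
    if intent == "college.cutoff":
        text = "Fetching the predicted cutoff from previous years' data."
    elif intent == "college.list":
        text = "Listing colleges for your stream and percentage range."
    else:
        text = "Could you tell me more about what you want to know?"
    return {"fulfillmentText": text}

req = {"queryResult": {"intent": {"displayName": "college.cutoff"}}}
print(handle_webhook(req)["fulfillmentText"])
```

The `queryResult.intent.displayName` field and the `fulfillmentText` reply key follow the public Dialogflow v2 webhook format; everything else here is a placeholder.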
Figure 3.1 System Flow Diagram
Figure 3.1 shows the flow of the overall proposed system. When the user gives the command "Talk to hey SCCAI", the bot receives the activation signal and gets activated. The proposed system then asks the user how familiar he/she is with educational fields and what knowledge he/she has about those fields. After analyzing the given information, it decides how to approach the user and solve the user's problem. The users' problems are divided into three main types: information about different streams, a general idea about the fields, and overall information. When the student chooses the first option, he/she is directed straight to the aptitude test; when he/she opts for the second option, the system asks which field they want to know about and then provides the desired information; and finally, when they opt for the third option, they are directly provided with the information. All this information is stored in a database that is updated regularly; in all three cases the solution is fetched from the database. The aptitude test contains questions randomly selected from those stored in the database, and it tells the system the user's field of interest. All this information is then used to answer questions about college names, college cutoff lists, and desired colleges with their locations. Everything is done via voice command in a simple question-and-answer style so that people of all age groups can use it.
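The three-way routing described above can be sketched as a plain dispatch function. The option and action names below are hypothetical labels for the branches of Figure 3.1, not identifiers from the real system.

```python
# Minimal sketch of the three-way routing in the system flow: each user
# option maps to the next action the flow diagram describes.

def route_request(option):
    """Map the user's chosen option to the next action in the flow."""
    if option == "stream_information":
        return "start_aptitude_test"    # option 1: straight to the aptitude test
    if option == "field_overview":
        return "ask_which_field"        # option 2: ask for a field, then inform
    if option == "overall_information":
        return "fetch_from_database"    # option 3: direct information
    return "clarify_request"            # anything else: ask again

print(route_request("field_overview"))
```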
Figure 3.2 Linear Regression
After finishing the informative part, the next step is running the algorithms to solve the various problems and provide the required information to the user. A1 here is the user's request to run the Linear Regression algorithm and provide the desired college cutoff list. Figure 3.2 shows the flow of linear regression. The input is taken from the database, where all previous cutoff data is stored.
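The cutoff-prediction step can be sketched as a least-squares fit over previous years' cutoffs. The years and cutoff percentages below are illustrative assumptions, not values from the paper's manually compiled dataset; a library such as scikit-learn's LinearRegression would serve equally well.

```python
# Sketch: predicting the current year's cutoff from previous years'
# cutoffs with simple linear regression (ordinary least squares).

def fit_linear(xs, ys):
    """Return slope a and intercept b of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

years = [2014, 2015, 2016, 2017, 2018]     # previous years (illustrative)
cutoffs = [82.0, 83.5, 84.0, 85.5, 86.0]   # illustrative cutoff percentages

a, b = fit_linear(years, cutoffs)
predicted_2019 = a * 2019 + b
print(round(predicted_2019, 2))  # -> 87.2 for this toy data
```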
Figure 3.3 Random Forest
Figure 3.3 shows the random forest algorithm. Random forest is used here to classify colleges according to percentage range and stream. B1 is where the user provides input and B2 is the required output. The user provides a percentage range, and the classification algorithm runs accordingly.
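As a rough illustration of this classification step, the sketch below trains scikit-learn's RandomForestClassifier on made-up (stream, percentage) pairs. The streams, percentages, and eligibility labels are all assumptions, since the paper's dataset is not public.

```python
# Hypothetical sketch of the random forest classification step:
# features are [stream_id, student_percentage]; the label marks whether
# a college is a plausible match for that stream/percentage.
from sklearn.ensemble import RandomForestClassifier

STREAMS = {"science": 0, "commerce": 1, "arts": 2}  # illustrative encoding

X = [
    [STREAMS["science"], 92.0],
    [STREAMS["science"], 70.0],
    [STREAMS["commerce"], 88.0],
    [STREAMS["commerce"], 60.0],
    [STREAMS["arts"], 75.0],
    [STREAMS["arts"], 55.0],
]
y = [1, 0, 1, 0, 1, 0]  # illustrative "eligible" labels

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# A science student with 90% should fall in the "eligible" class here.
print(clf.predict([[STREAMS["science"], 90.0]])[0])
```

In the real system the output would be a list of matching colleges rather than a single eligibility flag; the binary label keeps the sketch minimal.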
Figure 4.6 Search API
Figure 4.6 shows the flow of the Google Maps search API. Users can provide a location according to their needs. Here C1 is where the user requests the information and C2 is where the output is generated.
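One plausible way to implement this step is the Google Places text-search endpoint. The query string and the API_KEY placeholder below are assumptions for illustration; the paper does not specify which Google API endpoint is used.

```python
# Sketch: building a Google Places text-search request to locate
# colleges near a user-supplied location. API_KEY is a placeholder
# the caller must replace with a real key.
import urllib.parse

PLACES_URL = "https://maps.googleapis.com/maps/api/place/textsearch/json"

def build_college_query(location, api_key):
    """Build the request URL for 'engineering colleges in <location>'."""
    params = {"query": f"engineering colleges in {location}", "key": api_key}
    return PLACES_URL + "?" + urllib.parse.urlencode(params)

url = build_college_query("Mumbai", "API_KEY")
print(url)
# Fetching this URL (with a valid key) returns JSON results with each
# college's name, formatted address, and coordinates.
```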
4. RESULTS AND ANALYSIS
The system is unique in nature; to the authors' knowledge, no such system exists on the market. The system is created using Google Assistant as the framework, and natural language processing is done by the Google API. The system's voice output is quick and moderately accurate. The proposed system will use various machine learning algorithms.
5. CONCLUSIONS
The education system is growing very rapidly, and with this rapid growth the competition to be the best has also increased. The system is unique and, to the authors' knowledge, has not been implemented before. Here we have defined five main features of the system. The first feature helps the user gain all possible information regarding education and the education system. The second feature is a psychometric test that helps users know where they stand or see a better self in the future. The third feature provides the locations of colleges. The fourth provides the cutoffs of various colleges, with respect to caste as well, and the fifth provides the list of colleges to the user. Clubbed together, these five features act as one artificial brain that serves the user as a counselor.
The proposed system uses various machine learning algorithms to solve various problems. Random forest helps predict the list of colleges the user wants, given his percentage and stream. Linear regression helps with the college cutoff, and the Google Maps API helps with the location of the college the user wants to know about. The communicative part of the system is handled by Dialogflow, which provides the natural language processing.
REFERENCES
[1] A. Ghosh, R. Sharma, P.K. Joshi, “Random forest classification of urban landscape using Landsat archive and ancillary data:
Combining seasonal maps with decision level fusion”, Applied Geography Journal, 2014, pp. 31-41.
[2] Y. Liu, H. Wu, “Prediction of Road Traffic Congestion Based on Random Forest”, 10th International Symposium on
Computational Intelligence and Design, 2017, pp. 361-364.
[3] M. Lehsaini, M.B. Benmahdi, “An improved K-means Cluster-based Routing Scheme for Wireless Sensor Networks”, IEEE,
2018.
[4] H. Zhang, Z. Zhou, “A Novel clustering algorithm combining Niche genetic algorithm with canopy and K-means”, International
Conference on artificial Intelligence and Big Data, 2018, pp. 26-32.
[5] T.R.V. Anandharajan, G.A. Hariharan, K. K. Vignajeth, R. Jitendiran, “Weather Monitoring Using Artificial Intelligence”,
International Conference on Computational Intelligence and Networks, 2016.
[6] H. L. Siew, M.J, Nordin, “Regression Techniques for the Prediction of Stock Price Trend”, International Conference on Statistics
in Science, Business and Engineering (ICSSBE), 2012, pp. 1-5.
[7] S. Prabakaran, P. N. Kumar, P. S. M. Tarun, “Rainfall Prediction Using Modified Linear Regression”, ARPN Journal of
Engineering and Applied Sciences, 2017, pp. 3715-3718
[8] S. Kumar, M. Anamika Upadhyay, C. Gola, “Rainfall prediction based on 100 years of Meteorological data”, IEEE, 2017, pp.
162-166.
[9] X. Xun, L. Mo, Y. Yu, “Discovery and Prediction of the Unused Land for Construction Based on Random Forest”, Fifth
International Conference on Agro-Geoinformatics, 2016.
[10] Y. C. Shiao, L. Liu, Q. Zhao, R. C. Chen, “Predicting Passenger Flow using Different Influence Factors for Taipei MRT
System”, IEEE 8th International Conference on Awareness Science and Technology (iCAST), 2017.
[11] S. Ye, X. Huang, Y. Teng, Y. Li, “K-Means Clustering Algorithm Based on Improved Cuckoo Search Algorithm and Its
Application”, IEEE 8th International Conference on Awareness Science and Technology, 2018, pp. 447-451.
[12] Z. Ya-Ling, W. Ya-nan, Y. Lil, “An Improved Sampling K-means Clustering Algorithm Based on MapReduce”, IEEE 3rd
International Conference on Big Data Analysis,2017.
[13] https://en.wikipedia.org/wiki/Machine_learning , Last Accessed on 05th Sept. 2018.
[14] https://en.wikipedia.org/wiki/Artificial_intelligence , Last Accessed on 05th Sept. 2018.
[15] https://en.wikipedia.org/wiki/Linear_regression , Last Accessed on 04th Sept. 2018.
[16] https://en.wikipedia.org/wiki/Random_forest , Last Accessed on 05th Sept. 2018.
[17] https://en.wikipedia.org/wiki/K-means_clustering , Last Accessed on 05th Sept. 2018.
[18] B. R. Ranoliya, N. Raghuwanshi, S. Singh, “Chatbot for University Related FAQs”, International Conference on Advances in
Computing, Communications and Informatics (ICACCI), 2017.
[19] R. Ravi, “Intelligent Chatbot for Easy Web-Analytics Insights”, International Conference on Advances in Computing,
Communications and Informatics (ICACCI), 2017.