This study used a data mining approach to investigate user preferences in interactive multimedia learning systems without predetermined hypotheses. Eighty participants used two systems that differed in interface design and were clustered based on their preferences. The largest cluster preferred a single color scheme. Computer experience significantly affected preferences: experts preferred multiple windows and dynamic buttons, while novices preferred single windows and static buttons. The findings provide insights into user interface design without restricting the results to predefined hypotheses.
The document discusses different modes of conducting surveys: traditional, interview-based, email-based, web-based, mobile-based, and SMS-based. It finds that SMS-based surveys provide the best balance of time, cost, and response rate compared with the other modes. However, SMS-based surveys also have limitations, such as the restricted number of characters per SMS. The study concludes that existing survey modes need improvement to obtain the maximum number of responses quickly and at the lowest cost. It contributes to understanding factors such as cost, time, and response rate across different survey modes.
This paper presents an evaluation methodology to reveal the relationships between the attributes of software products, the practices applied during the development phase, and the user evaluation of the products. For the case study, the games sector was chosen because user evaluations of this type of software are easy to access. Product attributes and the practices applied during the development phase were collected from the developers via questionnaires. User evaluation results were collected from a group of independent evaluators. Two bipartite networks were created using the gathered data. The first network maps software products to the practices applied during the development phase, and the second maps the products to the product attributes. Based on the links, similarities were determined and subgroups of products were obtained according to selected development-phase practices. In this way, the effect of the development phase on user evaluation was investigated.
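The product-to-practice bipartite network can be projected onto the product side by linking products that share development practices. The following sketch illustrates that idea with Jaccard similarity over practice sets; the product names, practices, and threshold are hypothetical, not from the paper.

```python
# Hypothetical product -> development-practice bipartite network.
practices = {
    "game_a": {"unit_testing", "code_review", "daily_builds"},
    "game_b": {"unit_testing", "code_review"},
    "game_c": {"pair_programming"},
}

def jaccard(s, t):
    """Jaccard similarity between two sets of practices."""
    return len(s & t) / len(s | t) if s | t else 0.0

def similarity_links(network, threshold=0.5):
    """Link two products when they share enough development practices."""
    names = sorted(network)
    links = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sim = jaccard(network[a], network[b])
            if sim >= threshold:
                links.append((a, b, sim))
    return links

# Only game_a and game_b share most of their practices, so only they
# are linked; connected groups of such links form the product subgroups.
print(similarity_links(practices))
```

Subgroups obtained this way can then be compared against the independent user evaluations to relate development practices to perceived quality.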
This document discusses different types of internet surveys and how to choose the appropriate type. It outlines convenience sampling approaches like uncontrolled instrument distribution and volunteer panels that do not allow statistical inference about larger populations. It also describes probability sampling approaches like sampling from closed populations or using prerecruited panels that do allow broader inferences. The key decision is whether the researcher needs a convenience sample or probability sample to meet their study goals.
The purpose of this empirical study was to test specific factors of behavioral intention to use m-learning in a community college setting using a modified technology acceptance model and antecedent factors suggested by the researcher’s review of the literature. In addition, the study’s purpose was to expand understanding of behavioral intention to use m-learning and to contribute to the growing body of research. The research model was based on relevant technology acceptance literature. The study examines the significance of “prior use of e-learning” and its correlation with behavioral intention to use m-learning. Existing models have looked at prior use of e-learning in other domains, but not specifically m-learning. Other models and studies have primarily treated the prior-use-of-e-learning variable as a moderating variable rather than one directly related to attitude and behavioral intention. The study found that there is a relationship between prior use of e-learning and behavioral intention to use m-learning. This research direction was proposed by Lu and Viehland.
In this paper we explore and analyse the heterogeneity within a sample of online consumers that appears homogeneous in terms of demographic profile. The data from a sample of 371 survey respondents is clustered using various distance functions and a clustering algorithm. In doing so, the respondents are clustered based on their response profiles to online-behaviour questions rather than their demographic characteristics or brand preferences. Our results highlight that high levels of heterogeneity exist amongst consumers within the same cluster in terms of the ‘types’ of brand categories they engage with through social media. This finding has implications for marketing strategies and consumer behaviour analysis, as it highlights the importance of investigating consumers’ behavioural profiles in the online brand setting. Our method also provides an empirical guide to examining respondents’ heterogeneity in terms of response profiles rather than ‘traditional’ segmentation strategies based on basic demographic information or brand categories.
This document summarizes key considerations for evaluating collaborative filtering recommender systems. It discusses the user tasks being evaluated, the types of analysis and datasets used, ways to measure prediction quality and other attributes, and how to evaluate the overall system from the user’s perspective. It presents empirical results showing that different accuracy metrics on one dataset collapsed into three groups: metrics within a group were strongly correlated with one another, while metrics from different groups were not. The document aims to help researchers and practitioners properly evaluate and compare recommender system algorithms.
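The metric-grouping idea can be illustrated by correlating each metric's scores across a set of algorithms: metrics whose scores rise and fall together belong to the same group. The scores below are invented for illustration and are not the paper's data.

```python
# Hypothetical per-algorithm scores for three accuracy metrics.
scores = {
    "mae":       [0.70, 0.72, 0.80, 0.90],
    "rmse":      [0.95, 0.97, 1.10, 1.25],
    "precision": [0.40, 0.10, 0.35, 0.20],
}

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# mae and rmse track each other across algorithms, so they would land
# in the same metric group; precision would not.
print(pearson(scores["mae"], scores["rmse"]) > 0.9)       # True
print(pearson(scores["mae"], scores["precision"]) > 0.9)  # False
```

In practice this means that reporting one metric per group, rather than many strongly correlated metrics, already characterizes an algorithm's accuracy.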
This document summarizes a presentation given by Katrien Verbert on explainable artificial intelligence and interactive explanation methods. It discusses Verbert's research group at KU Leuven which focuses on areas like recommender systems, visualization, and intelligent user interfaces. The presentation provides an overview of explainable AI, discussing objectives like explaining model outcomes to increase trust and allowing user interaction with explanations. It describes various recommendation techniques and presents examples of explainable recommendation systems. The presentation discusses how personal user characteristics can impact the effects of explanations and outlines related user studies. Finally, it summarizes several of Verbert's application areas for explainable AI like education, analytics, agriculture, and healthcare, touching on methodologies and results.
This document summarizes a study comparing the usability perceptions and performance of Taiwanese and North American users of an MP3 player. Surveys showed North American users had lower satisfaction and perceptions of effectiveness and efficiency than Taiwanese users. However, performance results were unclear, with similar effectiveness but conflicting results on efficiency between the groups. The study involved surveys and task observations with 23 Taiwanese and North American subjects to measure the impact of culture on usability factors like satisfaction, effectiveness and efficiency.
Bluetooth is one of the most prevalent technologies available on mobile phones. One of the key questions is how to harness this technology in an educational manner in universities and schools. This paper describes a Bluetooth quizzing system to be used to administer quizzes to students at a university. The application consists of a server and a client Android mobile application. It utilizes a queuing system to allow many clients to connect to the server simultaneously. When clients connect, they can register or choose to complete a quiz that the lecturer has selected. Results are automatically sent from the client application when the quiz is completed. Data analysis can then be performed to review the progress of students.
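The server-side queuing idea can be sketched as follows: incoming quiz submissions are placed on a queue and processed in order, so many clients can connect at once without overwhelming the server. This is an illustrative Python sketch of the pattern only, not the actual Android/Bluetooth implementation; the student names and scores are made up.

```python
import queue
import threading

submissions = queue.Queue()  # incoming quiz results from clients
results = []                 # processed results, ready for analysis

def worker():
    """Process queued quiz submissions one at a time."""
    while True:
        item = submissions.get()
        if item is None:      # sentinel: no more submissions
            break
        student, score = item
        results.append((student, score))  # e.g. store for later review

t = threading.Thread(target=worker)
t.start()

# Clients "connect" by enqueueing their finished quizzes.
for sub in [("alice", 8), ("bob", 7)]:
    submissions.put(sub)
submissions.put(None)
t.join()

print(results)  # [('alice', 8), ('bob', 7)]
```

Decoupling receipt from processing this way is what lets the server accept many simultaneous Bluetooth connections while handling submissions sequentially.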
This document outlines a proposed system to filter unwanted messages from online social networks. It discusses the existing problems of misuse on social media platforms. The proposed system would use machine learning techniques like SVM for text categorization and identification of fake profiles to filter content by category (e.g. abusive, vulgar, sexual). It presents the system architecture as a three-tier structure and provides results of testing the filtering mechanism and classifier. The conclusion is that the "Filtered wall" system could address concerns around unwanted content on social media walls.
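The proposed system classifies each message and blocks those falling into unwanted categories. The paper uses SVM for the classification step; as a stand-in, this sketch shows the filtering logic with a simple keyword-lexicon categorizer. The category lexicons, blocked set, and messages are all illustrative assumptions.

```python
# Illustrative category lexicons (stand-in for a trained SVM classifier).
LEXICONS = {
    "vulgar": {"idiot", "stupid"},
    "spam":   {"free", "winner", "click"},
}

# Categories the wall owner has chosen to filter out.
BLOCKED = {"vulgar", "spam"}

def categorize(message):
    """Assign categories whose lexicon overlaps the message's words."""
    words = set(message.lower().split())
    return {cat for cat, lex in LEXICONS.items() if words & lex}

def filtered_wall(messages):
    """Keep only messages whose categories are not blocked."""
    return [m for m in messages if not (categorize(m) & BLOCKED)]

print(filtered_wall(["hello there", "click for free winner prize"]))
# keeps only "hello there"
```

In the actual system the `categorize` step would be the SVM text classifier, and profile features would additionally flag likely fake accounts.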
This document proposes a multidirectional rank prediction algorithm (MDRP) for decision making in the textile industry using collaborative filtering methods. MDRP learns asymmetric similarities between users, items, ratings, and sellers simultaneously through matrix factorization to overcome data sparsity and scalability issues. The algorithm was tested on textile datasets and analyzed product and user preferences. Results showed MDRP provided more accurate recommendations than existing similarity learning and collaborative filtering methods. MDRP allows effective decision making for multiple entities with multiple attributes.
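The factorization core of such an approach can be sketched with a minimal latent-factor model trained by stochastic gradient descent. This is not the MDRP algorithm itself (which additionally learns asymmetric similarities across users, items, ratings, and sellers); it is a generic matrix-factorization sketch on hypothetical textile ratings.

```python
import random

random.seed(0)

# Hypothetical (user, item) -> rating observations.
ratings = {("u1", "fabric_a"): 5, ("u1", "fabric_b"): 3,
           ("u2", "fabric_a"): 4, ("u2", "fabric_c"): 2}

K, LR, REG, EPOCHS = 2, 0.05, 0.02, 500
users = sorted({u for u, _ in ratings})
items = sorted({i for _, i in ratings})
P = {u: [random.random() for _ in range(K)] for u in users}  # user factors
Q = {i: [random.random() for _ in range(K)] for i in items}  # item factors

def predict(u, i):
    """Predicted rating as the dot product of latent factors."""
    return sum(pu * qi for pu, qi in zip(P[u], Q[i]))

# SGD with L2 regularization on the observed ratings only, which is
# what lets the model cope with sparse rating matrices.
for _ in range(EPOCHS):
    for (u, i), r in ratings.items():
        err = r - predict(u, i)
        for k in range(K):
            pu, qi = P[u][k], Q[i][k]
            P[u][k] += LR * (err * qi - REG * pu)
            Q[i][k] += LR * (err * pu - REG * qi)

# After training, predictions approximate the observed ratings, and
# unobserved pairs (e.g. u2 and fabric_b) can be scored the same way.
```

MDRP extends this basic idea by factorizing in multiple directions at once, so that seller-side and rating-side similarities are learned alongside user-item factors.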
Interactive Recommender Systems: Bridging the gap between predictive algorithms and interactive user interfaces. Invited talk at UFMG, Brasil. March 2017. More on this topic: Chen He, Denis Parra, and Katrien Verbert. 2016. Interactive recommender systems. Expert Syst. Appl. 56, C (September 2016), 9-27. DOI=http://dx.doi.org/10.1016/j.eswa.2016.02.013
Founded in 2003, the Information Experience Laboratory (IE Lab) is a usability and user experience lab with the mission to improve learning technologies and information and communication systems. Here we present the IE Lab and its methods.
In collaborative filtering recommender systems, users’ preferences are expressed as ratings for items, and each additional rating extends the knowledge of the system and affects the system’s recommendation accuracy. In general, the more ratings are elicited from the users, the more effective the recommendations are. However, the usefulness of each rating may vary significantly, i.e., different ratings may bring a different amount and type of information about the user’s tastes. Hence, specific techniques, defined as “active learning strategies”, can be used to selectively choose the items to be presented to the user for rating. In fact, an active learning strategy identifies and adopts criteria for obtaining data that better reflect users’ preferences and enable the system to generate better recommendations.
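One well-known elicitation criterion of this kind, shown here as an illustration rather than as the strategy of this particular paper, is to ask the user to rate the items whose existing ratings have the highest variance, since controversial items reveal more about individual taste than consensus items. The item names and ratings are hypothetical.

```python
from statistics import pvariance

# Hypothetical existing ratings per item from the community.
ratings_by_item = {
    "item_a": [5, 5, 5, 5],   # consensus: rating it teaches us little
    "item_b": [1, 5, 2, 5],   # controversial: most informative to ask about
    "item_c": [3, 3, 4, 3],
}

def variance_strategy(item_ratings, n=1):
    """Select the n items with the highest rating variance to present."""
    ranked = sorted(item_ratings,
                    key=lambda i: pvariance(item_ratings[i]),
                    reverse=True)
    return ranked[:n]

print(variance_strategy(ratings_by_item))  # ['item_b']
```

Other strategies trade informativeness against the chance the user can actually rate the item, e.g. weighting variance by item popularity.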
The importance of user-developer interactions during the development of an information system has been a long-running theme in information systems research. This research seeks to highlight a gap in the current literature: the contribution of the developer’s formal educational background to the relationship between developers and users. Using an interpretivist epistemology, the researchers employed qualitative interviews to examine how far developers’ perception of the importance of interacting with the user was influenced by their formal education, or the lack thereof. Interviewing both formally and informally trained developers, eleven categories of interest were identified as pertinent to determining the developers’ beliefs about the importance of user interaction. Three of these categories were explored as promising for future research: academic background, work experience, and developers’ access to user knowledge. This research has implications for the education of information systems developers as well as for industry interested in hiring software developers.