What are discovery layers? What brought about this topic, and how were the five libraries chosen? How did they implement? How have they assessed? What modifications were made? Conclusions.
Linked Data is exploding in the library world, but the biggest problems libraries face are finding the time and money to convert their records, investigate Linked Data programs, and build community support, along with the many other issues that arise when developing new methods. Another major hurdle is that many libraries simply do not know how to get involved. With fewer people and smaller budgets each year, we would like to explore ways libraries can take part in the process without expending an undue amount of their already dwindling resources. To see how linked data can be applied, we will look at the example of the Smithsonian Libraries (SIL). Over the past 18 months, SIL has been preparing for the transition from MARC to linked open data. This session will cover various SIL projects and initiatives (such as the FAST headings project and the introduction of Wikidata and Wikibase); how to incorporate linked data elements into MARC records; and how to develop staff proficiency with new tools and workflows. Heidy Berthoud, Head, Resource Description, Smithsonian Libraries
This work describes the application of semantic wikis in distance learning for Semantic Web courses. The resulting system applies existing and new wiki technology to build a wiki-based interface that demonstrates Semantic Web features. A new layer of wiki technology, called “OWL Wiki Forms,” is introduced to provide this Semantic Web functionality in the wiki interface. This new functionality includes a form-based interface for editing Semantic Web ontologies. The wiki then includes appropriate data from these ontologies to extend the existing wiki RDF export. It also includes ontology-driven creation of data entry and browsing interfaces for the wiki itself. As a wiki, the system gives students an educational tool they can use anywhere while still sharing access with the instructor and, optionally, other students. Lloyd Rutledge and Rineke Oostenrijk. Applying and Extending Semantic Wikis for Semantic Web Courses. In: Proceedings of the 1st International Workshop on eLearning Approaches for the Linked Data Age (Linked Learning 2011) at the 8th Extended Semantic Web Conference (ESWC 2011), Heraklion, Greece, May 29, 2011. http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-717/paper9.pdf
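To make the RDF export idea concrete, here is a minimal, purely illustrative sketch in Python with rdflib; the Course and hasInstructor terms are invented for this example and are not taken from OWL Wiki Forms.

```python
# Illustrative only: the kind of RDF export a semantic wiki might
# produce from form-edited ontology and instance data. All terms
# below are invented for this sketch.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/wiki/")

g = Graph()
g.bind("ex", EX)

# An ontology class and property, as a form-based editor might define them.
g.add((EX.Course, RDF.type, RDFS.Class))
g.add((EX.hasInstructor, RDF.type, RDF.Property))
g.add((EX.hasInstructor, RDFS.domain, EX.Course))

# One wiki page's form data, exported as instance triples.
g.add((EX.SemanticWeb101, RDF.type, EX.Course))
g.add((EX.SemanticWeb101, EX.hasInstructor, Literal("L. Rutledge")))

print(g.serialize(format="turtle"))
```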
SPSS is a widely used statistical analysis program. It was originally developed in 1968 by Norman Nie and C. Hadlai Hull to analyze social science data. SPSS was later acquired by IBM in 2009. The main windows in SPSS are the Data Editor, Output Viewer, Chart Editor, and Syntax Editor. It has menus for File, Edit, View, Data, Transform, Analyze, Graphs, Utilities, and Help. SPSS allows users to manage data files, transform variables, summarize data graphically and numerically, and perform inferential statistics.
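Since this is a code-free overview, the typical workflow the last sentence describes (load a data file, transform a variable, summarize it, run an inferential test) can be sketched in analogous Python rather than SPSS syntax; the file survey.csv and its columns are hypothetical.

```python
# Analogous workflow in Python (not SPSS syntax). "survey.csv" and
# its columns are invented for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")                       # manage data files
df["age_group"] = pd.cut(df["age"],                  # transform a variable
                         bins=[0, 30, 60, 120],
                         labels=["young", "middle", "older"])
print(df["income"].describe())                       # numeric summary

# Inferential statistics: compare mean income across two groups.
men = df.loc[df["sex"] == "M", "income"]
women = df.loc[df["sex"] == "F", "income"]
print(stats.ttest_ind(men, women, equal_var=False))
```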
Thomas Heissenberger is a software engineer with experience in full stack development, big data, project management, and programming languages including Python, Java, JavaScript, SQL, C++, and more. He has a bachelor's degree in software engineering from the Rochester Institute of Technology, where he maintained a 3.4 GPA. His work experience includes positions as a software developer at IntegrationPoint and in web design and administration at the Wisner & Wisner Law Firm. In his free time he enjoys personal coding projects, and he was captain of his high school robotics team.
Key considerations for developing data-driven, actionable insights to reach library stakeholders: improve library services, understand library workflows, target resource acquisitions, and make the library a better place through data analysis!
This document discusses tools for organizing, analyzing, and presenting healthcare data. It describes using databases and relational database management systems to structurally organize multidimensional healthcare data. Key concepts covered include tables, fields, records, primary keys, entity relationship diagrams, cardinality, and structured query language. Statistical software packages like SPSS and SAS are presented as tools for manipulating and analyzing stored data. Microsoft Excel, graphs, tables, and infographics are also discussed for presenting analyzed data.
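As a minimal illustration of the concepts the document names (tables, fields, records, primary keys, and structured query language), here is a sketch using Python's built-in sqlite3 module; the patient schema is invented for the example.

```python
# Sketch of table / field / record / primary key / SQL concepts.
# The patient schema is invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# A table: fields become columns, each record is a row,
# and PatientID serves as the primary key.
cur.execute("""
    CREATE TABLE Patient (
        PatientID  INTEGER PRIMARY KEY,
        LastName   TEXT NOT NULL,
        Diagnosis  TEXT,
        AdmitDate  TEXT
    )""")
cur.executemany(
    "INSERT INTO Patient VALUES (?, ?, ?, ?)",
    [(1, "Okafor", "Asthma", "2016-03-02"),
     (2, "Lindqvist", "Diabetes", "2016-03-05")])

# Structured Query Language retrieves matching records.
for row in cur.execute("SELECT LastName, Diagnosis FROM Patient "
                       "WHERE AdmitDate >= '2016-03-01'"):
    print(row)
```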
CORAL is an open source electronic resource management system that UW-Parkside implemented to better manage their e-resources. They installed CORAL on a Windows server and customized the Resources and Licensing modules to track information about their 200+ e-journals, 10+ database packages, and licensing agreements. While implementation required work, CORAL now centralizes their previously dispersed e-resources data and provides workflows to track acquisitions and access. Future goals include adding more data, training staff, and exploring usage statistics tracking in CORAL.
Databases are useful for storing and organizing large amounts of information. They work well when data has a defined structure and relationships between records. Databases can retrieve information with high accuracy if properly managed. A database contains tables which hold records with the same field structure. Each record contains data fields for a particular item. Fields make up the columns in a table, while records form the rows. Databases also use keys like primary and foreign keys to link records together. Boolean logic operators like AND, OR and NOT can be used to perform operations on data within a database.
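A short sketch of how primary and foreign keys link records and how Boolean operators filter them, again assuming an invented schema in Python's sqlite3:

```python
# Illustrative sketch: keys linking tables, and Boolean AND/OR/NOT
# in queries. The schema and data are invented.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE Author (
        AuthorID INTEGER PRIMARY KEY,
        Name     TEXT
    );
    CREATE TABLE Book (
        BookID   INTEGER PRIMARY KEY,
        Title    TEXT,
        Year     INTEGER,
        AuthorID INTEGER REFERENCES Author(AuthorID)  -- foreign key link
    );
    INSERT INTO Author VALUES (1, 'Le Guin'), (2, 'Borges');
    INSERT INTO Book VALUES
        (10, 'The Dispossessed', 1974, 1),
        (11, 'Ficciones', 1944, 2),
        (12, 'The Left Hand of Darkness', 1969, 1);
""")

# Boolean logic narrows or widens the set of matching records.
rows = cur.execute("""
    SELECT b.Title FROM Book b JOIN Author a ON b.AuthorID = a.AuthorID
    WHERE a.Name = 'Le Guin' AND NOT b.Year < 1970
""").fetchall()
print(rows)  # [('The Dispossessed',)]
```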
Kelly Marie Blanchat, presenter. The need to continually evaluate electronic resources should not be limited to metrics for how resources perform; the reporting tools that monitor and collect e-resource usage need to have their performance evaluated as well. This presentation will cover how vendor-provided systems, designed to aid decision making across the e-resources lifecycle, can be assessed for reporting accuracy. Following this session, participants will understand which data points to review when assessing vendor-provided usage statistics tools, and will have a method to begin evaluating their own systems. In summer 2015, Yale Library implemented ProQuest’s 360 COUNTER Data Retrieval Service (DRS), a service in which COUNTER-compliant usage statistics are uploaded, archived, and normalized into consolidated reports twice per year. To date, 360 COUNTER has freed up a significant amount of time for Yale's E-Resources Group, allowing staff resources to be allocated elsewhere in the e-resources lifecycle. That extra time also made it possible to “kick the tires” of the system, which resulted in an assessment workflow using Microsoft Excel to compare how raw COUNTER data uploaded to the system was affected by title normalization in the knowledgebase. This assessment workflow helped to identify the volume of data available in the system, and also clarified how the 360 COUNTER system works and what steps need to be taken, by both ProQuest and Yale Library, to improve reporting accuracy. Please note that this presentation will touch on issues found within the system, how ProQuest worked with Yale to trace them to title normalization decisions, and how errors were corrected when possible. The primary purpose is to raise awareness of the need for reporting tool assessment, which can be applied to any assessment tool, not just 360 COUNTER.
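The comparison described above was done in Excel; purely as an illustration of the same idea, a script could diff the titles in a raw COUNTER report against the normalized consolidated report. The file names and column headers below are hypothetical.

```python
# Hypothetical sketch of the raw-vs-normalized comparison idea.
# File names and column headers are invented; Yale's actual
# workflow used Microsoft Excel.
import csv

def title_counts(path, title_col):
    """Count occurrences of each (case-folded) title in a CSV report."""
    counts = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = row[title_col].strip().lower()
            counts[t] = counts.get(t, 0) + 1
    return counts

raw = title_counts("raw_counter_jr1.csv", "Journal")
normalized = title_counts("360counter_consolidated.csv", "Normalized Title")

# Titles present in the raw report but absent after normalization
# point at rows to review with the vendor.
missing = sorted(set(raw) - set(normalized))
print(f"{len(missing)} raw titles have no normalized match")
for t in missing[:20]:
    print("  ", t)
```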
Entities represent people, objects, or abstract concepts and have attributes that describe instances of the entity. By convention, the entity name is written in capital letters and its attributes in brackets. Entities and attributes are part of data modeling: entities become database tables and attributes become fields during implementation. Examples provided include a PUPIL entity with attributes like name and DOB, a CAR entity with attributes like make and model, and a DOCTOR APPOINTMENT entity with date and time attributes.
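A minimal sketch of that modeling-to-implementation step, using the PUPIL example from the text and Python's sqlite3 (the field types are assumed):

```python
# Sketch: the PUPIL entity becomes a table, its attributes become
# fields. Field types and sample data are assumed for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE PUPIL (          -- entity -> table
        PupilID INTEGER PRIMARY KEY,
        Name    TEXT NOT NULL,    -- attribute -> field
        DOB     TEXT NOT NULL     -- attribute -> field
    )""")
con.execute("INSERT INTO PUPIL (Name, DOB) VALUES (?, ?)",
            ("Asha Patel", "2004-09-14"))
for record in con.execute("SELECT * FROM PUPIL"):  # each row is one record
    print(record)
```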
This presentation was provided by Ellen Bishop of the Florida Virtual Campus for the NISO webinar, Integrating Library Management Systems, held on June 8, 2016.
Collections metrics have always been an important component of effectively managing libraries. But today they are more important than ever before as user-focused libraries and information centers attempt to adjust their collections to current and future library user needs. Frequently this requires sharp turns, smart traffic control, and even drafting behind other libraries that might be in the lead at any given stretch in order to achieve ultimate success. In this presentation, perspectives from a corporate library context and a liberal arts college library will be presented. What are the key metrics today vs. five years ago? What factors are at work that create changes in metrics value over time? What changes might we expect to see in the future? These and other questions will be addressed. Speakers: Marija Markovic, Independent Consultant; Steve Oberg, Wheaton College (IL)
This document discusses using Viewshare, an open-source visualization platform, to visualize different types of data, including a MODS XML file describing a collection, a scientific dataset ingested as an XLS file, and data about an academic community ingested as an XLS file. It also discusses visualizing a dataset from a cross-sectional study of E. coli bacteria, including the raw data, a human-readable version, and a visualization of the dataset. Finally, it discusses visualizing academic communities, using Texas A&M University's Computer Science and Engineering department as an example, and lessons learned about better data integration through linking data.
Brown, Christopher C. “The Front Face of the ERM: How We Left Our Home-Grown Database Management System and Embraced a More Innovative One.” Presentation given at the Innovative Users Group 2013, 25 April 2013, San Francisco, CA.
A data dictionary contains metadata that describes the entities, attributes, data types, sizes, validation rules, and keys of data stored in a database. It is produced during database modeling and does not store actual data. The example shows a data dictionary for a school database that would store information about pupils and tutor classes, including each pupil's name and tutor class.
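As a rough illustration, a database engine's own catalog can play the data dictionary's role: it describes each field's name, data type, and key status without storing any pupil data. A sketch with Python's sqlite3, with types assumed to match the school example:

```python
# Sketch: a data dictionary holds metadata about fields, not the data
# itself. SQLite's PRAGMA table_info plays that role here; the school
# schema follows the pupil/tutor-class example, with assumed types.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE PUPIL (
        PupilID    INTEGER PRIMARY KEY,
        Name       TEXT NOT NULL,
        TutorClass TEXT NOT NULL
    )""")

# Each row describes one field: name, data type, nullability, key status.
for cid, name, dtype, notnull, _default, pk in con.execute(
        "PRAGMA table_info(PUPIL)"):
    print(f"{name:10} {dtype:8} not-null={bool(notnull)} "
          f"primary-key={bool(pk)}")
```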