Tuesday, August 31, 2010
Metadata Creation Practices in Digital Repositories and Collections: Schemata, Selection Criteria, and Interoperability, by Jung-ran Park and Yuji Tosaka
This study explores the current state of metadata-creation practices across digital repositories and collections, using data collected from a nationwide survey of mostly cataloging and metadata professionals. Results show that MARC, AACR2, and LCSH are the most widely used metadata schema, content standard, and subject controlled vocabulary, respectively. Dublin Core (DC) is the second most widely used metadata schema, followed by EAD, MODS, VRA, and TEI. Qualified DC’s wider use vis-à-vis Unqualified DC (40.6 percent versus 25.4 percent) is noteworthy. The leading criteria in selecting metadata and controlled-vocabulary schemata are collection-specific considerations: the types of resources, the nature of the collection, and the needs of primary users and communities. Existing technological infrastructure and staff expertise are also significant factors shaping the current use of metadata schemata and controlled vocabularies for subject access across distributed digital repositories and collections. Metadata interoperability remains a major challenge: locally created metadata and metadata guidelines are rarely exposed beyond the local environment, and homegrown, locally added metadata elements may further hinder interoperability across digital repositories and collections when no mechanisms exist for sharing locally defined extensions and variants.
Batch Loading Collections into DSpace: Using Perl Scripts for Automation and Quality Control, by Maureen P. Walsh
This paper describes batch loading workflows developed for the Knowledge Bank, The Ohio State University’s institutional repository. In the five years since the repository’s inception, approximately 80 percent of the items added to the Knowledge Bank, a DSpace repository, have been batch loaded. Most of the batch loads used Perl scripts to automate the import of metadata and content files. Custom Perl scripts migrated data from spreadsheets or comma-separated value (CSV) files into the DSpace archive directory format, built collections and tables of contents, and provided data quality control. Two projects are described to illustrate the process and workflows.
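The core transformation the abstract describes — turning rows of spreadsheet or CSV metadata into DSpace's archive directory layout — can be sketched briefly. This is not the authors' code (their scripts were written in Perl and are not reproduced here); it is a minimal Python illustration of the general pattern, assuming DSpace's Simple Archive Format, in which each item is a directory holding a `dublin_core.xml` metadata file and a `contents` file naming the item's bitstreams. The CSV column names and element mapping below are hypothetical.

```python
# Minimal sketch of a CSV-to-DSpace-archive batch-load step (illustrative
# only; the Knowledge Bank's actual workflows used custom Perl scripts).
import csv
import io
import os
import xml.etree.ElementTree as ET

def csv_row_to_dublin_core(row):
    """Build a dublin_core.xml document string from one CSV row."""
    root = ET.Element("dublin_core")
    # Hypothetical mapping of CSV columns to DC element/qualifier pairs.
    mapping = {
        "title": ("title", None),
        "creator": ("contributor", "author"),
        "date": ("date", "issued"),
        "abstract": ("description", "abstract"),
    }
    for column, (element, qualifier) in mapping.items():
        value = row.get(column, "").strip()
        if not value:
            continue  # simple quality control: skip empty fields
        dcvalue = ET.SubElement(root, "dcvalue", element=element)
        if qualifier:
            dcvalue.set("qualifier", qualifier)
        dcvalue.text = value
    return ET.tostring(root, encoding="unicode")

def build_archive(csv_text, archive_dir):
    """Write one Simple Archive Format item directory per CSV row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    for i, row in enumerate(reader, start=1):
        item_dir = os.path.join(archive_dir, f"item_{i:03d}")
        os.makedirs(item_dir, exist_ok=True)
        with open(os.path.join(item_dir, "dublin_core.xml"), "w") as f:
            f.write(csv_row_to_dublin_core(row))
        # The "contents" file lists the content bitstreams for the item.
        with open(os.path.join(item_dir, "contents"), "w") as f:
            f.write(row.get("filename", "").strip() + "\n")
```

A directory built this way would then be handed to DSpace's batch import tooling; the point here is only the shape of the transformation from tabular metadata to per-item directories.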
Authentication and Access: Accommodating Public Users in an Academic World, by Lynne Weber and Peg Lawrence
In the fall of 2004, the Academic Computing Center, a division of the Information Technology Services Department (ITS) at Minnesota State University, Mankato, took over responsibility for the computers in the public areas of Memorial Library. For the first time, affiliated Memorial Library users were required to authenticate using a campus username and password, a change that effectively eliminated computer access for anyone not part of the university community. This posed a dilemma for the librarians. Because of its Federal Depository status, the library had a responsibility to provide general access to both print and online government publications for the general public. Furthermore, the library had a long tradition of providing guest access to most library resources, and there was reluctance to abandon the practice. Therefore, the librarians worked with ITS to retain a small group of six computers that did not require authentication and were clearly marked for community use, along with several stand-up, open-access computers on each floor used primarily for searching the library catalog. The additional need to provide computer access to high school students visiting the library for research and instruction led to more discussions with ITS and resulted in a means of generating temporary usernames and passwords through a Web form. These user accommodations were implemented in the library without creating a written policy governing the use of open-access computers.
The Next Generation Library Catalog: A Comparative Study of the OPACs of Koha, Evergreen, and Voyager, by Sharon Q. Yang and Melissa A. Hofmann
Open source has been the center of attention in the library world for the past several years. Koha and Evergreen are the two major open-source integrated library systems (ILSs), and they continue to grow in maturity and popularity. The question remains how far open-source development has advanced toward the next-generation catalog compared with commercial systems. Little has been written in the library literature to answer this question. This paper addresses it by comparing the next-generation features of the OPACs of two open-source ILSs (Koha and Evergreen) and one proprietary ILS (Voyager’s WebVoyage).
The Internet has greatly changed how library users search for and use library resources. Many of them prefer resources available in electronic format over traditional print materials. While many documents are now born digital, many more are accessible only in print and need to be digitized. This paper focuses on how the Colorado State University Libraries creates and optimizes text-based and digitized PDF documents for easy access, downloading, and printing.