Tuesday, November 1, 2011

New preprints

We've added two preprints.
  • Library Use of Web-based Research Guides by Jimmy Ghaphery and Erin White
  • Reference Information Extraction and Processing using Conditional Random Fields by Tudor Groza, Gunnar AAstrand Grimnes and Siegfried Handschuh
 You'll find these and others at http://www.lita.org/ala/mgrps/divs/lita/publications/ital/prepub/index.cfm.

Wednesday, October 19, 2011

New Preprints

We recently added two preprints.
  • Practical Limits to the Scope of Digital Preservation by Mike Kastellec
  • Eclipse Editor for MARC Records by Bojana Dimic Surla, PhD
 You'll find these and others at http://www.lita.org/ala/mgrps/divs/lita/publications/ital/prepub/index.cfm.

Wednesday, September 21, 2011

New Preprint

We recently added a preprint.
  • Mobile Technologies & Academics: Do Students Use Mobile Technology in their Academic Lives and are Librarians Ready to Meet this New Challenge? by Angela Dresselhaus and Flora Shrode
 You'll find this and others at http://www.lita.org/ala/mgrps/divs/lita/publications/ital/prepub/index.cfm.

Thursday, September 1, 2011

New preprint

We've just added a new preprint.
  • Resource Discovery: Comparative Survey Results on Two Catalog Interfaces by Heather Hessel and Janet Fransen
 This and other preprints can be found at http://www.lita.org/ala/mgrps/divs/lita/publications/ital/prepub/index.cfm.

Wednesday, August 31, 2011

Editorial and Technological Workflow Tools to Promote Website Quality, by Emily G. Morton-Owens

Library websites are an increasingly visible representation of the library as an institution, which makes website quality an important way to communicate competence and trustworthiness to users. A website editorial workflow is one way to enforce a process and ensure quality. In a workflow, users receive roles, like author or editor, and content travels through various stages in which grammar, spelling, tone, and format are checked. One library used a workflow system to involve librarians in the creation of content. This system, implemented in Drupal, an open-source content management system, solved problems of coordination, quality, and comprehensiveness that existed on the library’s earlier, static website.
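
As a rough illustration of the kind of staged, role-based workflow the article describes (a sketch only: the stage names, roles, and page record below are invented, and this is not the article's Drupal configuration), the core logic can be expressed in a few lines of Python:

    # Minimal sketch of a role-based editorial workflow: content moves from
    # draft to review to published, and only certain roles may advance it.
    # Stage names and role permissions are hypothetical.
    TRANSITIONS = {
        "draft":        {"next": "needs_review", "allowed_roles": {"author", "editor"}},
        "needs_review": {"next": "published",    "allowed_roles": {"editor"}},
    }

    def advance(page, user_role):
        """Move a page to its next workflow stage if the user's role permits it."""
        rule = TRANSITIONS.get(page["stage"])
        if rule is None:
            raise ValueError(f"{page['stage']!r} is a terminal stage")
        if user_role not in rule["allowed_roles"]:
            raise PermissionError(f"role {user_role!r} may not advance {page['stage']!r} content")
        page["stage"] = rule["next"]
        return page

    page = {"title": "Hours & Directions", "stage": "draft"}
    advance(page, "author")   # author submits a draft for review
    advance(page, "editor")   # editor checks grammar, tone, and format, then publishes
    print(page["stage"])      # published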

Factors Affecting University Library Website Design, by Yong-Mi Kim

Existing studies have extensively explored factors that affect users’ intentions to use university library website resources (ULWR); yet little attention has been given to factors affecting university library website design. This paper investigates factors that affect university library website design and assesses the success of the university library website from both designers’ and users’ perspectives.

The findings show that when planning a website, university web designers consider university guidelines, review other websites, and consult with experts and other divisions within the library; however, resources and training for the design process are lacking. While website designers assess their websites as highly successful, user evaluations are somewhat lower. Accordingly, use is low, and users rely heavily on commercial websites. Suggestions for enhancing the usage of ULWR are provided.

Adoption of E-Book Readers among College Students: A Survey, by Nancy M. Foasberg

To learn whether e-book readers have become widely popular among college students, this study surveys students at one large, urban, four-year public college. The survey asked whether the students owned e-book readers and if so, how often they used them and for what purposes. Thus far, uptake is slow; a very small proportion of students use e-readers. These students use them primarily for leisure reading and continue to rely on print for much of their reading. Students reported that price is the greatest barrier to e-reader adoption and had little interest in borrowing e-reader compatible e-books from the library.

Librarians and Technology Skill Acquisition: Issues and Perspective, by Debra A. Riley-Huff and Julia M. Rholes

Libraries are increasingly searching for and employing librarians with significant technology skill sets. This article reports on a study conducted to determine how well prepared librarians are for such positions in academic libraries, how they acquired their skills, and how difficult these librarians are to hire and retain. The examination pairs a close look at ALA-accredited LIS program technology course offerings with a dual survey designed to capture experiences and perspectives from practitioners, both library administrators and librarians in significant technology roles.

Click Analytics: Visualizing Website Use Data, by Tabatha A. Farney

Click analytics is a powerful technique that displays what and where users are clicking on a webpage, helping libraries easily identify areas of high and low usage on a page without having to decipher website use data sets. Click analytics is a subset of web analytics, but there is little research that discusses its potential uses for libraries. This paper introduces three click analytics tools, Google Analytics’ In-Page Analytics, ClickHeat, and Crazy Egg, and evaluates their usefulness in the context of redesigning a library’s homepage.
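
The idea common to all three tools is to aggregate raw click coordinates so that heavily and lightly used regions of a page stand out. A toy version of that aggregation might look like the following; the click log and cell size are invented, and this does not reflect how any of the commercial tools are implemented:

    from collections import Counter

    # Hypothetical click log for one page: (x, y) pixel coordinates of clicks.
    clicks = [(102, 48), (110, 52), (640, 300), (98, 45), (645, 310), (320, 900)]

    CELL = 100  # bin clicks into 100x100-pixel regions

    heat = Counter((x // CELL, y // CELL) for x, y in clicks)

    # Report the hottest regions first, the way a heatmap overlay would shade them.
    for (cx, cy), count in heat.most_common():
        print(f"x {cx*CELL}-{cx*CELL+CELL}, y {cy*CELL}-{cy*CELL+CELL}: {count} clicks")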

New preprints

We've recently added three new preprints.
  • "Selecting a Web Content Management System for an Academic Library Web Site" by Elizabeth L. Black
  •  "Cataloging Theory in Search of Graph Theory and Other Ivory Towers. Object: Cultural Heritage Resource Description Networks" by Ronald J. Murray and Barbara B. Tillett
  • "Public Library Computer Waiting Queues: Alternatives to the First-Come-First-Served Strategy" by  Stuart Williamson
You'll find these and other preprints at http://www.lita.org/ala/mgrps/divs/lita/publications/ital/prepub/index.cfm.

Friday, June 17, 2011

Management and Support of Shared Integrated Library Systems, by Jason Vaughan and Kristen Costello

The University of Nevada, Las Vegas (UNLV) University Libraries has hosted and managed a shared integrated library system (ILS) since 1989. The system and the number of partner libraries sharing the system have grown significantly over the past two decades. Spurred by the level of involvement and support contributed by the host institution, the authors administered a comprehensive survey to current Innovative Interfaces libraries. Research findings are combined with a description of UNLV’s local practices to provide substantial insights into the funding, support, and management activities associated with shared systems.

Benign Neglect: Developing Life Rafts for Digital Content, by Jody L. DeRidder

In his keynote speech at the Archiving 2009 Conference in Arlington, Virginia, Clifford Lynch called for the development of a benign neglect model for digital preservation, one in which as much content as possible is stored in whatever manner is available, in the hope that there will someday be enough resources to preserve it properly. This is an acknowledgment of current resource limitations relative to the burgeoning quantities of digital content that need to be preserved.

Seeing the Wood for the Trees: Enhancing Metadata Subject Elements with Weights, by Hong Zhang, Linda C. Smith, Michael Twidale, and Fang Huang Gao

Subject indexing has traditionally been dichotomous: an information object either is or is not primarily about (or of) a given subject, corresponding to the presence or absence of a particular subject term. With more subject terms brought into information systems via social tagging, manual cataloging, or automated indexing, many more partially relevant results can be retrieved. Using examples from digital image collections and online library catalog systems, we explore the problem and advocate adding a weighting mechanism to subject indexing and tagging to make web search and navigation more effective and efficient. We argue that the weighting of subject terms is more important than ever in today’s world of growing collections, more federated searching, and expanding social tagging. Such a weighting mechanism needs to be considered and applied not only by indexers, catalogers, and taggers, but also incorporated into system functionality and metadata schemas.
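
To make the proposal concrete, a small sketch of retrieval with weighted subject terms follows; the records, headings, weights, and scoring scheme are illustrative assumptions, not drawn from the article:

    # Each record carries a weight in [0, 1] per subject heading instead of a
    # simple present/absent flag; a higher weight means the record is more
    # centrally about that subject. Records and weights are illustrative only.
    records = {
        "rec1": {"Solar energy": 0.9, "Architecture": 0.2},
        "rec2": {"Solar energy": 0.3, "Architecture": 0.8},
        "rec3": {"Architecture": 1.0},
    }

    def search(query_terms):
        """Rank records by the summed weights of the matching subject terms."""
        scores = {}
        for rec_id, subjects in records.items():
            score = sum(subjects.get(term, 0.0) for term in query_terms)
            if score > 0:
                scores[rec_id] = score
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    # rec1 outranks rec2 because it is primarily, not peripherally, about solar energy.
    print(search(["Solar energy"]))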

Building an Open Source Institutional Repository at a Small Law School Library: Is it Realistic or Unattainable?, by Fang Wang

Digital preservation activities among law libraries have largely been limited by a lack of funding, staffing, and expertise. Most law school libraries that have already implemented an institutional repository (IR) chose proprietary platforms because they are easy to set up, customize, and maintain with the technical and development support their vendors provide. The Texas Tech University School of Law Digital Repository is one of the few law school repositories in the nation built on the DSpace open-source platform, and it is the law school’s first institutional repository. It was designed to collect, preserve, share, and promote the law school’s digital materials, including the research and scholarship of the law faculty and students, institutional history, and law-related resources. In addition, the repository serves as a dark archive to house internal records.

Saturday, April 2, 2011

New preprint

We've just added a new preprint.
  • "Click Analytics: Visualizing Web Site Use Data" by Tabatha A. Farney

Tuesday, March 29, 2011

New preprints

Two new preprints were recently added.
  • "Adoption of E-Book Readers among College Students: A Survey" by Nancy M. Foasberg
  • "Editorial and technological workflow tools to promote website quality" by Emily G. Morton-Owens
    (Originally presented at the 2010 LITA National Forum)

Wednesday, March 2, 2011

New preprints

Two new preprints have been added to the ITAL Web site today.
http://www.lita.org/ala/mgrps/divs/lita/ital/prepub/index.cfm

"Investigations into Library Web Scale Discovery Services" by Jason Vaughan

 "Graphs in Libraries: A Primer" by James E. Powell, Daniel Alcazar, Matthew Hopkins, Robert Olendorf, Tamara M. McMahon, Amber Wu, Linn Collins

A Simple Scheme for Book Classification Using Wikipedia, by Andromeda Yelton

Editor’s note: This article is the winner of the LITA/Ex Libris Student Writing Award, 2010.

Because the rate at which documents are being generated outstrips librarians’ ability to catalog them, an accurate, automated scheme of subject classification is desirable. However, simplistic word-counting schemes miss many important concepts; librarians must enrich algorithms with background knowledge to escape basic problems such as polysemy and synonymy. I have developed a script that uses Wikipedia as context for analyzing the subjects of nonfiction books. Though a simple method built quickly from freely available parts, it is partially successful, suggesting the promise of such an approach for future research.
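
Purely as an illustration of the general approach (this is not Yelton's script), one could ask Wikipedia's public search API for the articles that best match a book description and treat the top hits as candidate subject labels:

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def suggest_subjects(text, limit=5):
        """Return titles of the Wikipedia articles that best match a description.

        The top search hits serve as rough candidate subject labels, using
        Wikipedia as the background knowledge a bare word count lacks.
        """
        params = {
            "action": "query",
            "list": "search",
            "srsearch": text,
            "srlimit": limit,
            "format": "json",
        }
        response = requests.get(API, params=params, timeout=10)
        response.raise_for_status()
        return [hit["title"] for hit in response.json()["query"]["search"]]

    # Hypothetical back-cover description of a nonfiction book.
    description = ("An account of the 1854 London cholera outbreak and how mapping "
                   "the cases led John Snow to the contaminated water pump.")
    print(suggest_subjects(description))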

The Internet Public Library (IPL): An Exploratory Case Study on User Perceptions, by Monica Maceli, Susan Wiedenbeck, and Eileen Abels

The Internet Public Library (IPL), now known as ipl2, was created in 1995 with the mission of serving the public by providing librarian-recommended Internet resources and reference help. We present an exploratory case study on public perceptions of an “Internet public library,” based on qualitative analysis of interviews with ten college student participants: some current users and others unfamiliar with the IPL. The exploratory interviews revealed some confusion around the IPL’s name and the types of resources and services that would be offered. Participants made many positive comments about the IPL’s resource quality, credibility, and personal help.

Semantic Web for Reliable Citation Analysis in Scholarly Publishing, by Ruben Tous, Manel Guerrero, and Jaime Delgado

Analysis of the impact of scholarly artifacts is constrained by current unreliable practices in cross-referencing, citation discovery, and citation indexing and analysis, which have not kept pace with the technological advances occurring in areas such as knowledge management and security. Because citation analysis has become the primary component in scholarly impact factor calculation, and considering the relevance of this metric within both the scholarly publishing value chain and (especially important) the professional curriculum evaluation of scholarly professionals, we contend that current practices need to be revised. This paper describes a reference architecture that aims to provide openness and reliability to the citation-tracking lifecycle. The solution relies on the use of digitally signed semantic metadata in the different stages of the scholarly publishing workflow, in such a manner that authors, publishers, repositories, and citation-analysis systems will have access to independent, reliable evidence that is resistant to forgery, impersonation, and repudiation. As far as we know, this is the first paper to combine Semantic Web technologies and public-key cryptography to achieve reliable citation analysis in scholarly publishing.
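
A minimal sketch of the signed-metadata idea follows; the citation record, key handling, and library choice are illustrative assumptions and are far simpler than the reference architecture the paper proposes:

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Hypothetical citation assertion a publisher might emit for one reference.
    citation = {
        "citing_doi": "10.1000/example.2011.001",
        "cited_doi": "10.1000/example.2009.042",
        "asserted_by": "Example Publisher",
    }
    payload = json.dumps(citation, sort_keys=True).encode("utf-8")

    # The publisher signs the assertion with its private key...
    private_key = Ed25519PrivateKey.generate()
    signature = private_key.sign(payload)

    # ...and a citation-analysis service verifies it with the public key,
    # so the assertion cannot be forged or repudiated without detection.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, payload)
        print("citation assertion verified")
    except InvalidSignature:
        print("signature check failed")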

Web Accessibility, Libraries, and the Law, by Camilla Fulton

With an abundance of library resources being served on the web, researchers are finding that disabled people often do not have the same level of access to materials as their nondisabled peers. This paper discusses web accessibility in the context of the United States federal laws most often referenced in web accessibility lawsuits. Additionally, it reveals which states have statutes that mirror federal web accessibility guidelines and to what extent. Interestingly, fewer than half of the states have adopted statutes addressing web accessibility, and fewer than half of these reference Section 508 of the Rehabilitation Act or the Web Content Accessibility Guidelines (WCAG) 1.0. Despite the sparse legislation surrounding web accessibility, librarians should consult the appropriate web accessibility resources to ensure that their specialized content reaches all users.

Usability of the VuFind Next-Generation Online Catalog, by Jennifer Emanuel

The VuFind open-source, next-generation catalog system was implemented by the Consortium of Academic and Research Libraries in Illinois as an alternative to the WebVoyage OPAC system. The University of Illinois at Urbana-Champaign began offering VuFind alongside WebVoyage in 2009 as an experiment in next-generation catalogs. Using a faceted search discovery interface, it offered numerous improvements to the UIUC catalog and focused on limiting results after searching rather than limiting searches up front. Library users have praised VuFind for its Web 2.0 feel and features. However, there are issues, particularly with catalog data.

Monday, January 31, 2011

Generating Collaborative Systems for Digital Libraries: a Model-Driven Approach, by Alessio Malizia, Paolo Bottoni, and S. Levialdi

The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, and of the graphical user interfaces providing access to them for the final user. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, while the CRADLE environment has been evaluated using the cognitive dimensions framework.

The Middle Mile: The Role of the Public Library in Ensuring Access to Broadband, by Marijke Visser and Mary Alice Ball

This paper discusses the role of the public library in ensuring access to the broadband communication that is so critical in today’s knowledge-based society. It examines the culture of information in 2010, and then asks what it means if individuals are online or not. The paper also explores current issues surrounding telecommunications and policy, and finally seeks to understand the role of the library in this highly technological, perpetually connected world.

An Evolutive Process to Convert Glossaries into Ontologies, by José R. Hilera, Carmen Pagés, J. Javier Martínez, J. Antonio Gutiérrez, and Luis de-Marcos

This paper describes a method to generate ontologies from glossaries of terms. The proposed method presupposes an evolutionary life cycle based on successive transformations of the original glossary that lead to products of intermediate knowledge representation (dictionary, taxonomy, and thesaurus). These products are characterized by an increase in semantic expressiveness in comparison to the product obtained in the previous transformation, with the ontology as the end product. Although this method has been applied to produce an ontology from the “IEEE Standard Glossary of Software Engineering Terminology,” it could be applied to any glossary of any knowledge domain to generate an ontology that may be used to index or search for information resources and documents stored in libraries or on the Semantic Web.
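
To picture one intermediate step of such a transformation, the sketch below promotes a pair of glossary entries to thesaurus-style SKOS concepts using rdflib; the entries, the broader/narrower relation, and the namespace are illustrative assumptions and are not taken from the IEEE glossary:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    # Glossary entries promoted to concepts, with a broader/narrower relation
    # layered on top (the taxonomy/thesaurus stage). All values are illustrative.
    EX = Namespace("http://example.org/se-glossary#")

    glossary = {
        "testing": "Operating a system under specified conditions to detect defects.",
        "unit testing": "Testing of individual units or groups of related units.",
    }
    broader_of = {"unit testing": "testing"}  # the added taxonomy layer

    g = Graph()
    g.bind("skos", SKOS)
    for term, definition in glossary.items():
        concept = EX[term.replace(" ", "_")]
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(term, lang="en")))
        g.add((concept, SKOS.definition, Literal(definition, lang="en")))
    for narrower, broader in broader_of.items():
        g.add((EX[narrower.replace(" ", "_")], SKOS.broader, EX[broader.replace(" ", "_")]))

    print(g.serialize(format="turtle"))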

Bridging the Gap: Self-Directed Staff Technology Training, by Kayla L. Quinney, Sara D. Smith, and Quinn Galbraith

Undergraduates, as members of the Millennial Generation, are proficient in Web 2.0 technology and expect to apply these technologies to their coursework—including scholarly research. To remain relevant, academic libraries need to provide the technology that student patrons expect, and academic librarians need to learn and use these technologies themselves. Because leaders at the Harold B. Lee Library of Brigham Young University (HBLL) perceived a gap in technology use between students and their staff and faculty, they developed and implemented the Technology Challenge, a self-directed technology training program that rewarded employees for exploring technology daily. The purpose of this paper is to examine the Technology Challenge through an analysis of results of surveys given to participants before and after the Technology Challenge was implemented. The program will also be evaluated in terms of the adult learning theories of andragogy and self-directed learning. HBLL found that a self-directed approach fosters technology skills that librarians need to best serve students. In addition, it promotes lifelong learning habits to keep abreast of emerging technologies. This paper offers some insights and methods that could be applied in other libraries, the most valuable of which is the use of self-directed and andragogical training methods to help academic libraries better integrate modern technologies.

Next-Generation Library Catalogs and the Problem of Slow Response Time, by Margaret Brown-Sica, Jeffrey Beall, and Nina McHale

Response time, as defined for this study, is the time it takes for all of the files that constitute a single webpage to travel across the Internet from a web server to the end user’s browser. In this study, the authors tested response times on queries for identical items in five different library catalogs, one of them a next-generation (NextGen) catalog. The authors also discuss acceptable response time and how it may affect the discovery process. They suggest that librarians and vendors should develop standards for acceptable response time and apply them in the product selection and development processes.
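
As a rough illustration of the measurement (not the authors' instrumentation, which timed all of the files making up each page), one could time a single catalog page request like this; the URL is a placeholder:

    import time
    import requests

    def response_time(url):
        """Return the elapsed seconds to fetch one page (HTML only)."""
        start = time.perf_counter()
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        return time.perf_counter() - start

    # Placeholder catalog search URL; a fuller test would repeat identical queries
    # against each catalog and also fetch the page's images, scripts, and styles.
    url = "https://catalog.example.edu/search?q=moby+dick"
    print(f"{url}: {response_time(url):.2f}s")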