Since writing this article I have moved on to different software that is very user friendly and free to use: Jing, produced by TechSmith. I have created four Flash tutorials in Jing, which can be seen in the ASU Nursing LibGuide at http://libguides.asu.edu/content.php?pid=4690&sid=401513 (under the Advanced Tutorials EBP tab).
The fourth Flash tutorial, which covers moving citations from PubMed to RefWorks (the bibliographic management tool), can be found at the same link under the RefWorks tab.
Unfortunately, the streaming video "How to Order an Article that ASU Does Not Own" has been taken down. The ASU Libraries Online Learning Workgroup has decided to brand all videos on the tutorials web page, so all of my videos that were created with Camtasia have been removed.
Kathleen Carlson
Tuesday, November 24, 2009
Friday, October 9, 2009
From the Authors- Employing Virtualization in Library Computing: Use Cases and Lessons Learned
One of the interesting things we discovered while writing our article is just how popular virtualization technology is in the private and corporate IT environment. However, it is utilized far less in academic and public library computing. We’re interested in hearing thoughts on why that seems to be the case.
Do library IT shops apply a more conservative approach to managing their resources? Does the lack of profit-based competition lead to less risk-taking with regard to new technologies? Or is it simply that virtualization might not be the best fit for library computing needs?
Or are we missing the point entirely?
Thursday, September 17, 2009
September issue now online at ITAL Web site
The index and linked articles for the September issue are at http://www.ala.org/ala/mgrps/divs/lita/ital/282009/2803sep/toc.cfm.
Delivering Information to Students 24/7 with Camtasia, by Kathleen Carlson
This article examines the selection process for and use of Camtasia Studio, a screen video capture program created by TechSmith. Camtasia Studio allows the author to create streaming videos that give students 24-hour access to instruction on topics such as how to order books through interlibrary loan.
The Efficient Storage of Text Documents in Digital Libraries, by Przemysław Skibiński, et al
In this paper we investigate the possibility of improving the efficiency of data compression, and thus reducing storage requirements, for seven widely used text document formats. We propose an open-source text compression software library, featuring an advanced word-substitution scheme with static and semidynamic word dictionaries. The empirical results show an average storage space reduction as high as 78 percent compared to uncompressed documents, and as high as 30 percent compared to documents compressed with the free compression software gzip.
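The word-substitution idea can be illustrated with a short sketch: replace the most frequent words with single-character tokens from a static dictionary before handing the text to a general-purpose compressor. This is a minimal Python illustration of the principle, not the authors' open-source library; the dictionary scheme and the choice of private-use tokens are assumptions for demonstration.

```python
import gzip
from collections import Counter

def build_dictionary(text, size=256):
    """Map the most frequent words to single private-use-area tokens."""
    common = [w for w, _ in Counter(text.split()).most_common(size)]
    return {w: chr(0xE000 + i) for i, w in enumerate(common)}

def encode(text, dictionary):
    # Substitute dictionary words; unknown words pass through unchanged.
    return " ".join(dictionary.get(w, w) for w in text.split())

def decode(encoded, dictionary):
    # The dictionary must travel with the data for lossless recovery.
    inverse = {v: k for k, v in dictionary.items()}
    return " ".join(inverse.get(t, t) for t in encoded.split())

sample = "the efficient storage of text documents in digital libraries " * 300
d = build_dictionary(sample)
substituted = encode(sample, d)
assert decode(substituted, d) == " ".join(sample.split())  # lossless round trip

raw = gzip.compress(sample.encode("utf-8"))
packed = gzip.compress(substituted.encode("utf-8"))
print(len(sample.encode("utf-8")), len(raw), len(packed))
```

In the real library the dictionary is static or semidynamic, so its cost is amortized across many documents; here the round-trip assertion simply shows the substitution is reversible.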
Gender, Technology, and Libraries, by Melissa Lamont
Information technology (IT) is vitally important to many organizations, including libraries. Yet a review of employment statistics and a citation analysis show that men make up the majority of the IT workforce, in libraries and in the broader workforce. Research from sociology, psychology, and women’s studies highlights the organizational and social issues that inhibit women. Understanding why women are less evident in library IT positions will help inform measures to remedy the gender disparity.
Success Factors and Strategic Planning: Rebuilding an Academic Library Digitization Program, by Cory Lampert, et al
This paper discusses a dual approach of case study and research survey to investigate the complex factors in sustaining academic library digitization programs. The case study involves the background of the University of Nevada, Las Vegas (UNLV) Libraries’ digitization program and elaborates on the authors’ efforts to gain staff support for this program. A related survey was administered to all Association of Research Libraries (ARL) members, seeking to collect baseline data on their digital collections, understand their respective administrative frameworks, and to gather feedback on both negative obstacles and positive inputs affecting their success. Results from the survey, combined with the authors’ local experience, point to several potential success factors including staff skill sets, funding, and strategic planning.
Employing Virtualization in Library Computing: Use Cases and Lessons Learned, by Arwen Hutt, et al
This paper provides a broad overview of virtualization technology and describes several examples of its use at the University of California, San Diego Libraries. Libraries can leverage virtualization to address many long-standing library computing challenges, but careful planning is needed to determine if this technology is the right solution for a specific need. This paper outlines both technical and usability considerations, and concludes with a discussion of potential enterprise impacts on the library infrastructure.
Friday, June 5, 2009
Adding Delicious Data to Your Library Website, by Andrew Darby and Ron Gilmour
Social bookmarking services such as Delicious offer a simple way of developing lists of library resources. This paper outlines various methods of incorporating data from a Delicious account into a webpage. We begin with a description of Delicious Linkrolls and Tagrolls, the simplest but least flexible method of displaying Delicious results. We then describe three more advanced methods of manipulating Delicious data using RSS, JSON, and XML. Code samples using PHP and JavaScript are provided.
Labels:
Delicious,
social bookmarking
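As a rough illustration of the JSON approach the authors describe, the sketch below renders bookmark data shaped like a Delicious JSON feed as an HTML linkroll. The article's own samples use PHP and JavaScript; this Python version, and the field names "u" (URL), "d" (title), and "t" (tags), are assumptions for demonstration.

```python
import json
from html import escape

# Sample records shaped like a Delicious JSON feed (field names assumed).
feed = json.loads("""
[{"u": "http://example.org/db", "d": "Example Database", "t": ["database", "reference"]},
 {"u": "http://example.org/guide", "d": "Subject Guide", "t": ["guide"]}]
""")

def linkroll(items, tag=None):
    """Render bookmarks, optionally filtered by tag, as an HTML list."""
    rows = [i for i in items if tag is None or tag in i.get("t", [])]
    lis = "".join(
        f'<li><a href="{escape(i["u"])}">{escape(i["d"])}</a></li>' for i in rows
    )
    return f"<ul>{lis}</ul>"

print(linkroll(feed, tag="database"))
```

Filtering by tag on the library side, rather than requesting a per-tag feed, is one of the design choices the RSS/JSON/XML methods in the article let you make.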
Missing Items: Automating the Replacement Workflow Process, by Cheri Smith, et al
Academic libraries handle missing items in a variety of ways. The Hesburgh Libraries of the University of Notre Dame recently revamped their system for replacing or withdrawing missing items. This article describes the new process that uses a customized database to facilitate efficient and effective communication, tracking, and selector decision making for large numbers of missing items.
Labels:
automation,
missing items,
replacements,
workflow
Public Access Technologies in Public Libraries: Effects and Implications, by John Carlo Bertot
Public libraries were early adopters of Internet-based technologies and have provided public access to the Internet and computers since the early 1990s. The landscape of public-access Internet and computing was substantially different in the 1990s as the World Wide Web was only in its initial development. At that time, public libraries essentially experimented with public access Internet and computer services, largely absorbing this service into existing service and resource provision without substantial consideration of the management, facilities, staffing, and other implications of public-access technology (PAT) services and resources. This article explores the implications of PAT provision for public libraries and reviews the issues and practices associated with providing PAT resources and services. While much research focuses on the amount of public access that public libraries provide, little offers a view of the effect of public access on libraries. This article provides insights into some of the costs, issues, and challenges associated with public access and concludes with recommendations that require continued exploration.
Labels:
public access technology,
public libraries
Can Bibliographic Data be Put Directly onto the Semantic Web? by Martha Yee
This paper is a think piece about the possible future of bibliographic control; it provides a brief introduction to the Semantic Web and defines related terms, and it discusses granularity and structure issues and the lack of standards for the efficient display and indexing of bibliographic data. It is also a report on a work in progress—an experiment in building a Resource Description Framework (RDF) model of more FRBRized cataloging rules than those about to be introduced to the library community (Resource Description and Access) and in creating an RDF data model for the rules. I am now in the process of trying to model my cataloging rules in the form of an RDF model, which can also be inspected at http://myee.bol.ucla.edu/. In the process of doing this, I have discovered a number of areas in which I am not sure that RDF is sophisticated enough yet to deal with our data. This article is an attempt to identify some of those areas and explore whether or not the problems I have encountered are soluble—in other words, whether or not our data might be able to live on the Semantic Web. In this paper, I am focusing on raising the questions about the suitability of RDF to our data that have come up in the course of my work.
[Note: (8/20/2009) Commentary by Karen Coyle on this article, and additional discussion, may be found at http://futurelib.pbworks.com/YeeRDF.]
Labels:
bibliographic data,
FRBR,
RDF,
semantic Web
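To make the triple-based modelling concrete, here is a toy sketch of FRBR-style bibliographic data as subject-predicate-object triples, following a manifestation back to its work. The namespace and predicate names are invented for illustration and are not Yee's actual RDF model.

```python
EX = "http://example.org/"  # hypothetical namespace, not Yee's model

# FRBR-style entities: work -> expression -> manifestation.
triples = {
    (EX + "work/moby-dick", EX + "title", "Moby Dick"),
    (EX + "work/moby-dick", EX + "creator", EX + "person/melville"),
    (EX + "expression/moby-dick-eng", EX + "realizationOf", EX + "work/moby-dick"),
    (EX + "manifestation/1851", EX + "embodimentOf", EX + "expression/moby-dick-eng"),
}

def objects(subject, predicate):
    """Return all objects for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Walk the chain from a manifestation in hand back to the work's title.
expr = objects(EX + "manifestation/1851", EX + "embodimentOf")[0]
work = objects(expr, EX + "realizationOf")[0]
print(objects(work, EX + "title"))
```

The point of the sketch is the one Yee raises: once the data is atomized into triples like these, display and indexing depend entirely on how well the graph can be traversed and regrouped.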
Monday, March 2, 2009
CatQC and Shelf-Ready Material: Speeding Collections to Users While Preserving Data Quality, by Michael Jay, et al
Abstract:
Libraries contract with vendors to provide shelf-ready material, but is it really shelf-ready? It arrives with all the physical processing needed for immediate shelving, then lingers in back offices while staff conduct item-by-item checks against the catalog. CatQC, a console application for Microsoft Windows developed at the University of Florida, builds on OCLC services to get material to the shelves and into the hands of users without delay and without sacrificing data quality. Using standard C programming, CatQC identifies problems in MARC record files, often applying complex conditionals, and generates easy-to-use reports that do not require manual item review.
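A simplified sketch of the kind of conditional record checks described above, using Python dictionaries in place of CatQC's C code and MARC record files. The specific rules shown (a missing 245 title field, an 008 that is not 40 characters, a 490 series statement with no 8xx added entry) are illustrative assumptions, not CatQC's actual rule set.

```python
def check_record(record):
    """Return a list of problems found in one MARC-like record (tag -> value)."""
    problems = []
    if "245" not in record:
        problems.append("missing 245 (title)")
    if "008" in record and len(record["008"]) != 40:
        problems.append("008 field is not 40 characters")
    # A conditional spanning two fields: series statement without added entry.
    if "490" in record and not any(tag.startswith("8") for tag in record):
        problems.append("490 without 8xx series added entry")
    return problems

shipment = [
    {"001": "ocm0001", "245": "$aShelf-ready cataloging", "008": " " * 40},
    {"001": "ocm0002", "008": " " * 39},
]
# Report only the records that need human attention.
report = {r["001"]: check_record(r) for r in shipment if check_record(r)}
print(report)
```

The payoff mirrors CatQC's: clean records go straight to the shelf, and staff review only the exception report.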
LaneConnex: An Integrated Biomedical Digital Library Interface, Debra S. Ketchell, et al
Abstract:
This paper describes one approach to creating a search application that unlocks heterogeneous content stores and incorporates integrative functionality of Web search engines. LaneConnex is a search interface that identifies journals, books, databases, calculators, bioinformatics tools, help information, and search hits from more than three hundred full-text heterogeneous clinical and bioresearch sources. The user interface is a simple query box. Results are ranked by relevance with options for filtering by content type or expanding to the next most likely set. The system is built using component-oriented programming design. The underlying architecture is built on Apache Cocoon, Java Servlets, XML/XSLT, SQL, and JavaScript. The system has proven reliable in production, reduced user time spent finding information on the site, and maximized the institutional investment in licensed resources.
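A toy model of the merge, rank, and filter behavior described above. The actual LaneConnex system is built on Cocoon, servlets, XSLT, and SQL; the Python below and its field names are purely illustrative.

```python
# Hits already gathered from heterogeneous sources, with normalized scores.
results = [
    {"title": "New England Journal of Medicine", "type": "journal", "score": 0.95},
    {"title": "BLAST sequence search", "type": "bioinformatics", "score": 0.80},
    {"title": "Harrison's Online", "type": "book", "score": 0.75},
]

def search(hits, content_type=None):
    """Rank merged hits by relevance, optionally filtering by content type."""
    hits = [h for h in hits if content_type is None or h["type"] == content_type]
    return sorted(hits, key=lambda h: h["score"], reverse=True)

print([h["title"] for h in search(results, content_type="journal")])
```

Keeping the filter optional preserves the single-query-box experience: users see everything ranked by relevance, then narrow by type only if they choose to.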
A Semantic Model of Selective Dissemination of Information for Digital Libraries, by J. M. Morales-del-Castillo, et al
Abstract:
In this paper we present the theoretical and methodological foundations for the development of a multi-agent Selective Dissemination of Information (SDI) service model that applies Semantic Web technologies for specialized digital libraries. These technologies make possible achieving more efficient information management, improving agent–user communication processes, and facilitating accurate access to relevant resources. Other tools used are fuzzy linguistic modelling techniques (which make possible easing the interaction between users and system) and natural language processing (NLP) techniques for semiautomatic thesaurus generation. Also, RSS feeds are used as “current awareness bulletins” to generate personalized bibliographic alerts.
Labels:
digital libraries,
SDI,
semantic Web
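The matching step of such an SDI service can be sketched as scoring incoming feed items against a user's interest profile. The keyword-overlap score below is a crude stand-in for the paper's fuzzy linguistic matching; the threshold and term lists are illustrative assumptions.

```python
def score(item_terms, profile_terms):
    """Fraction of the user's profile terms found in a feed item."""
    item, profile = set(item_terms), set(profile_terms)
    return len(item & profile) / len(profile) if profile else 0.0

feed_items = [
    ("Semantic Web ontologies for digital libraries",
     ["semantic", "web", "ontologies", "digital", "libraries"]),
    ("Baseball scores roundup", ["baseball", "scores"]),
]
profile = ["semantic", "web", "libraries", "rdf"]

# Alert the user only about items above an (assumed) relevance threshold.
alerts = [(title, score(terms, profile))
          for title, terms in feed_items if score(terms, profile) >= 0.5]
print(alerts)
```

In the paper's model this matching would be fuzzier (graded linguistic labels rather than a hard cutoff), but the pipeline shape is the same: feed in, profile match, personalized alert out.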
Classification of Library Resources by Subject on the Library Website, by Mathew J. Miles and Scott J. Bergstrom
Abstract:
The number of labels used to organize resources by subject varies greatly among library websites. Some librarians choose very short lists of labels while others choose much longer lists. We conducted a study with 120 students and staff to try to answer the following question: What is the effect of the number of labels in a list on response time to research questions? What we found is that response time increases gradually as the number of items in the list grows until the list size reaches approximately fifty items. At that point, response time increases significantly. No association between response time and relevance was found.
Labels:
library,
subject classification,
subject labels,
website
One Law with Two Outcomes: Comparing the Implementation of CIPA in Public Libraries and Schools, by Paul T. Jaeger and Zheng Yan
Abstract:
Though the Children’s Internet Protection Act (CIPA) established requirements for both public libraries and public schools to adopt filters on all of their computers when they receive certain federal funding, it has not attracted a great amount of research into the effects on libraries and schools and the users of these social institutions. This paper explores the implications of CIPA in terms of its effects on public libraries and public schools, individually and in tandem. Drawing from both library and education research, the paper examines the legal background and basis of CIPA, the current state of Internet access and levels of filtering in public libraries and public schools, the perceived value of CIPA, the perceived consequences of CIPA, the differences in levels of implementation of CIPA in public libraries and public schools, and the reasons for those dramatic differences. After an analysis of these issues within the greater policy context, the paper suggests research questions to help provide more data about the challenges and questions revealed in this analysis.
Labels:
CIPA,
public libraries,
schools