 JSSM  Vol.14 No.1 , February 2021
Evaluation of a Digital Library: An Experimental Study
Abstract: The development of digital libraries has changed the handling of and access to information. Using such a library involves a human-computer interface as well as commands and search strategies to retrieve information. The purpose of this study was to evaluate a digital library in an institution of higher education that serves approximately 75,000 students. Quantitative and qualitative data were collected from a total of 206 participants (structured interviews, popup questionnaires and transactional log analysis). Descriptive statistics and thematic analysis were used for the quantitative and qualitative data, respectively. Online journals were the most commonly used resources, while reference resources were the least used. The usability and information retrieval capacity of the library were good. However, there was a need to improve the user interface of the digital library, create more awareness and subscribe to more online journals to meet the information requirements of the users.

1. Introduction

The library together with the information sector has undergone tremendous changes in recent years. These changes involve the collection and arrangement of information. Consequently, libraries can offer their services without confinement to geographical borders. Advances in information and communication technologies have revolutionised the type, subject matter, design and perception of libraries (Harris, 2017). Hence, the progression of digital libraries originated from the amalgamation of technology with conventional library tools to satisfy the increasing information needs of users. At present, most libraries all over the world are moving towards the “digital” mode. A digital library can be defined as a collection of digital objects, such as text, images, video or audio, which can be accessed through the internet or CD-ROM (compact disk read-only memory) (Lyman, 2017).

The creation of digital libraries is a big investment considering the financial, technological and human resources required. Institutions and organizations creating digital libraries need to invest heavily in software, databases and information security to facilitate access to scholarly resources. Floratou et al. (2011) examine the cost of creating and managing databases for cloud operations. According to their cost analysis, different software products attract different costs; someone using MySQL may incur a different cost from someone using another SQL database and database management system. According to Floratou et al. (2011), companies such as Oracle, Microsoft and Amazon are market leaders in the provision of databases at various costs. Amazon provides users with databases supported by MySQL at a cost of $2.60 per compute-hour, while the Oracle option is over 31% higher at about $3.40 per compute-hour (Floratou et al., 2011). Institutions providing digital libraries also have to pay for software licences, which are expensive, to keep their services accessible to researchers and students. The average hourly licence ranges between $0.60 and $3.90 depending on the service provider (Floratou et al., 2011). The initial physical costs are hefty because institutions invest in hardware and software components, as well as professional services from IT specialists, database designers and managers. The cheapest server a business can own costs about $1,500, while others, such as the R730 used for MySQL, go for about $7,655 (Floratou et al., 2011). Admittedly, an organization requires millions to set up and manage digital libraries.

Therefore, it is important to conduct regular appraisals to determine whether the intended objectives of the library are attained. Evaluation guides the decision-making process of digital library developers as well as other stakeholders who are directly or indirectly involved with the library. Evaluation is delineated as the methodical procedure of establishing the advantages, importance and usefulness of something (Pinfield, 2017). In the context of a digital library, evaluation can be regarded as the process of finding out whether the original objectives that led to the establishment of the system have been achieved. This process is often confused with an assessment. The key difference is that evaluation concentrates on factors such as systems, services and products while assessment concentrates on consumers, their attitudes, abilities and other related attributes (Wynne et al., 2016). Therefore, performance assessment can be incorporated into the evaluation structure. Nevertheless, both processes enable the making of sound decisions.

Before conducting an evaluation, it is important to consider the following areas during the planning stages. First, the main purpose of the evaluation should be ascertained together with specific attributes to be appraised. The process and timing of the evaluation should also be considered. Digital libraries are multifarious structures. Thus, evaluation approaches and metrics differ based on whether the digital libraries are seen as information systems, institutions, collections, new technologies or new services (Lamb, 2017). The purpose of this paper is to evaluate a digital library in an institution of higher education.

1.1. Forms of Evaluation

Evaluation can be done at different stages of the development of a digital library. Four forms of evaluation are explained herein. They include formative, summative, iterative and comparative evaluations. Formative evaluation is commonly conducted during the initial stages of a project (Stefl-Mabry, 2018). For instance, before starting a digital library project, it is necessary to find out the information needs of the target users to determine whether or not a digital library should be established. Thus, a formative evaluation is akin to conducting a market survey before introducing a commodity. The findings of the evaluation can guide developers to include certain aspects into the structure of the digital library or implement corrective measures in the early phases of a project. Additionally, a formative evaluation provides baseline data that can be used in subsequent stages of evaluation to determine whether the project has achieved some of its intended uses.

Summative evaluation is done at the end of a project to ascertain whether the original targets leading to the initiation of the project have been met or not (Pinfield, 2017). Therefore, the focus of this evaluation is the outcome of an initiative. Iterative evaluations encompass short-term appraisals that are performed in the course of a project. They act as “in-between” assessments that help to ensure that the project is on the right track (Tank, Maradiya, & Bhatt, 2017). These evaluations can be done as many times as necessary in the course of the project.

Comparative evaluations are complete appraisals performed using formats that can be contrasted across similar systems. In other words, they can be used as benchmarking processes to determine the value of a digital library (Campbell, 2018). Similar systems are compared in a comparative evaluation. An example is comparing various digital library platforms across several institutions of higher learning.

1.2. Digital Library Evaluation Framework

Digital library evaluation frameworks enable the evaluator to conduct a detailed and logical assessment. However, before designing such a framework, it is necessary to point out the key constituents that typify the scope of the digital library environment. Three major components in the domain of a digital library are users, content and technology (Agosti, Ferro, & Silvello, 2016). Service is a valuable parameter that can be investigated. However, it often falls under content. These three parameters should be examined in detail to develop an effective framework for the evaluation of a digital library.

Users form the most critical entity in the information chain regardless of the library platform (whether it is a digital or conventional library). It is important to verify the target users and their information needs to appraise a digital library effectively. To understand the users fully, four points are important. The identity of the users should be ascertained, for example, students, researchers or professionals. Their information-seeking behaviour should also be determined. The type of information required needs to be clarified and this has to do with specific subject areas. Lastly, the purpose of the information should be identified. In summary, four questions to ask regarding the users of a digital library are “who”, “when”, “what” and “why”.

Content refers to the kind of information available in a digital library. The collection may differ based on the objectives of the library. For instance, the contents of an academic library may differ from the components of a professional library. The primary objects may be reports, books or journal articles, whereas secondary data could be metadata schemes or bibliographic descriptions. Various formats can be used to present the data, for instance, video, text or audio. When planning an evaluation, the type of content (audio, text or video), metadata schemes (indexing, citation, thesaurus and bibliographic arrangement) and quality of content (pertinence and subject coverage) should be considered.

Technology refers to the aggregate of skills, techniques and processes used in the development of commodities or services. Technological matters that are factored in digital libraries include user interface, management of access, document technology and system structure (Lyman, 2017). The user interface takes care of diverse options that a digital library offers to its users and the ease of content access. A system ought to have efficient triangulation tools and recovery techniques to aid in the access of information. In contrast, system structure entails the structural design of the system, for example, protocols, database and middleware used in developing the platform. Matters concerning the depiction of documents are considered in document technology, including format and model. Model denotes the conjectural features of a document such as semantic content, hyperlinked logical structure and external features. In contrast, format identifies the core document depiction such as rich text format (RTF), PDF and DOC (Fenlon et al., 2016).

2. Research Focus

The digital library at an institution of higher learning was unveiled in December 2018. Its objective was to provide access to various resources offered by the library through a single window, on campus and remotely. The architecture of the digital library included a host of hardware and software. Four different servers supported the main one: a server for the web OPAC, a second for the institutional digital repository, a third for databases based on hard-disk storage and a fourth CD mirror server for audio-visual materials stored on CD or DVD. The library was also linked to a virtual private network to facilitate access for external (remote) users. The key contents of the digital library included information about the library, e-books, online journals, an institutional repository, web, online and offline databases and CD/DVD-based training tools. Access to online resources was authenticated by internet protocol (IP) address, so there was no need to log in to individual resources.

The digital library has about 5000 users, including research scholars, faculty members, undergraduate and postgraduate students. Approximately 80% of users log on to the digital library through the institution’s intranet, while the remaining 20% access the services of the library remotely through a virtual private network (VPN) server. The focus of this study is to evaluate the digital library in terms of patterns of use, usability and information retrieval. These three forms of evaluation are explained further in this section.

2.1. Usability Evaluation

Usability inquiry testing was done. This type of evaluation entails appraising the usability of a digital library when target users are performing normal day to day tasks instead of evaluator-assigned tasks. This mode of evaluation is useful when trying to collect information concerning the needs, likes and dislikes of the users (Sánchez-Gálvez & Fernández-Luna, 2015). Several approaches can be used in this regard, including focus groups, interviews, questionnaires and field observations.

2.2. Information Retrieval

People seek information for various purposes. When evaluating a digital library, information retrieval refers to finding the information that is being sought by the user. The retrieval of information in the context of a single digital library is a multifarious process entailing aspects such as cataloguing, metadata and indexing. The complexity of a digital library is proportional to the number of aspects (indexing and cataloguing) that are in effect at the same time when a user searches for data across various collections that apply diverse metadata systems (Gaona-García, Martin-Moncunill, & Montenegro-Marin, 2017). Nevertheless, details regarding these intricacies are of no use to the library users. Their main concern is being able to find information efficiently and effectively. Thus, information retrieval evaluation is dual: user-focused and systems-oriented. Information retrieval evaluation from a user’s standpoint determines the effectiveness with which a user’s search for information satisfies their interests or needs. In contrast, information retrieval evaluation from a systems perspective ascertains the usefulness and efficiency of the retrieval system, which is a core objective of all digital libraries.

In user-focused evaluation, the emphasis is on the user’s experience with the information recovery tools provided by the digital library (Cabrerizo et al., 2015). A digital library is of little value to users if they cannot find the information they desire effectively, notwithstanding the quality of information or sophistication of its technology. Therefore, when conducting a user-focused evaluation, it is necessary to determine the performance of the information retrieval system with respect to the users’ interests, requirements and anticipations.

The main challenge faced by most digital libraries is the storing, configuration and recovery of its contents (Places et al., 2016). Therefore, they aim to possess information retrieval systems that permit users to find specific items effectively in the shortest time possible. Assessing the information retrieval potential of a digital library provides valuable information that may guide future decisions concerning the hypothetical and practical modules of the library to optimise the efficacy of user searches.

Other aspects to be considered in the evaluation of information retrieval are precision and recall. Precision can be described as the fraction of retrieved documents that satisfy the search requirement (i.e., are pertinent to the information being sought by the user). Conversely, recall is the fraction of all relevant documents in the collection that are actually retrieved. The estimation of recall is more convoluted than that of precision because the cataloguing of most digital libraries does not allow the identification of all potentially relevant documents. This study focused on information retrieval from the user’s standpoint with a focus on precision.
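These two definitions can be illustrated with a short sketch. The snippet below (Python, with made-up document identifiers) computes precision and recall for a single search, assuming the full set of relevant documents is known:

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant to the query."""
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(set(retrieved))

def recall(retrieved, relevant):
    """Fraction of all relevant documents that were actually retrieved."""
    if not relevant:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(set(relevant))

# A search returns four documents; three of them are among the six
# documents known to be relevant (identifiers are illustrative only).
retrieved = ["d1", "d2", "d3", "d4"]
relevant = ["d1", "d2", "d3", "d5", "d6", "d7"]
print(precision(retrieved, relevant))  # 0.75
print(recall(retrieved, relevant))     # 0.5
```

As the paragraph notes, computing recall in practice is harder because the complete set of relevant documents is rarely known.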

3. Research Method

A formal evaluation was done a year after the establishment of the digital library in an institution of higher learning. Qualitative and quantitative data were obtained from the users. Different methods were applied to various categories of users. For example, questionnaires were used to collect information from students, whereas informal interviews were used to make inquiries from faculty. Transaction log analysis was used to collect quantitative data from all users. Through this mixed-method approach, it was possible to determine the usage patterns of different groups as well as the usability and information retrieval potential of the digital library.

Using different methods of data collection was necessary because each approach differs in effectiveness in a given situation. Questionnaires are effective where the target response does not require detailed explanations. They are also cheaper and faster than in-person interviews, which elicit more detailed explanations. In this case, using both interviews and questionnaires allowed the researcher to gather more data and information for analysis from the participants.

3.1. Sample

A sample of 206 participants took part in the study. Out of this number, 200 were students at undergraduate and postgraduate levels, whereas the remaining 6 were faculty. The students were chosen through random sampling as determined by a computer algorithm. The 6 members of faculty were identified through systematic sampling. Faculty members received letters inviting them to take part in the study. The letters contained a brief description of the study, its objectives and the expected duration of the study. The letter also contained an informed consent form that participants were expected to complete to verify their participation in the study. A copy of the invitation letter (Appendix A) and the informed consent form (Appendix B) are included in the appendices.
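The two sampling procedures described above can be sketched as follows. The roster names and population sizes are illustrative assumptions (the study's actual user lists are not published), not data from the study:

```python
import random

# Hypothetical rosters; sizes are assumptions for illustration only.
students = [f"student_{i}" for i in range(1, 5001)]  # ~5000 registered users
faculty = [f"faculty_{i}" for i in range(1, 31)]     # assumed 30 faculty

random.seed(1)  # fixed seed so the draw is reproducible

# Simple random sampling of 200 students, as done by the study's algorithm.
student_sample = random.sample(students, 200)

# Systematic sampling of 6 faculty: every k-th member from a random start.
k = len(faculty) // 6
start = random.randrange(k)
faculty_sample = faculty[start::k][:6]

print(len(student_sample), len(faculty_sample))  # 200 6
```

Systematic sampling spaces selections evenly across the ordered roster, whereas simple random sampling gives every subset of 200 students an equal chance of selection.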

Sample determination was based on numerous factors, including cost, accessibility, willingness to participate in the study and convenience. The cost of typing and printing questionnaires, distributing them to the participants and conducting interviews with individual participants was considered before choosing the sample size above. The study determined that it would be costly to interview and collect data from a sample larger than 206 participants. Furthermore, a larger sample would have meant dealing with a huge volume of data for analysis, which would have increased processing errors and other related inconveniences.

3.2. Informal Interviews with Faculty

The chosen faculty members were interviewed for about 30 minutes each. Individual interviews were conducted using a predetermined set of questions, as indicated in Appendix D. The informal interviews entailed questioning the user, recording their responses and transcribing the interviews before performing data analysis. Structured interviews were chosen for this study to minimise ambiguity and narrow down the responses to the specific areas targeted by the researcher.

3.3. Questionnaires

Surveys are among the longest-established methods that libraries use to collect data. The most common way of carrying out surveys is via questionnaires. When designing questionnaires, questions should be selected carefully to capture the research objectives. Two types of questions can be used: closed-ended or open-ended. Data from closed-ended questions are easier to analyse than information from open-ended ones. Nonetheless, open-ended questions allow respondents to express themselves fully by providing additional explanations, which enriches the quality of the data (Oosterveld, Vorst, & Smits, 2019).

Popup questionnaires are effective when using an online platform to collect data and information from distant participants. Researchers and institutions with websites use them. Furthermore, modern enterprises use them to collect data about customer experiences and satisfaction rates. According to Stoet (2017), popup questionnaires are more effective and efficient than embedded ones because the researcher controls how, when and where they appear on the website.

Questionnaires are the most common instruments used in general evaluations. However, they have limited utility in usability evaluations. Currently, surveys can be conducted using web-based tools, and the resulting data can easily be interpreted and analysed using various software packages (Stoet, 2017). Statistical analyses may also be needed to make inferences from the resultant data. Popup questionnaires, however, are useful in gauging the usability of a digital library. They are programmed to appear when a user does something unexpected in a digital library. In some cases, a short on-screen questionnaire regarding usability matters may be triggered after a specified duration of time or when the user leaves the library. It is also possible to email the questionnaires to users if identification protocols are needed to access the library, because these protocols provide the users’ email addresses. Overall, questionnaires used in usability testing need to be as brief as possible, notwithstanding their mode of presentation (Sánchez-Gálvez & Fernández-Luna, 2015). Brevity encourages users to complete them without feeling that they are wasting their time. The questions should also be clear and unambiguous to yield accurate responses. A researcher can consider offering incentives for prospective users to take part, especially if a large number of responses is required (Stoet, 2017). Possible incentives include entries into prize draws or coupons for online shopping in specified stores. In this study, a popup questionnaire containing a prompt to redirect the user to a longer questionnaire was used (Appendix C).

Using the design principles of questionnaires is another vital aspect that reinforces their effectiveness when used to collect data from participants using different approaches. One needs to decide the questions that they will ask the participants according to the research variables, objectives, and hypotheses (Stoet, 2017). Researchers need to apply design principles such as pretests and revisions to ensure that the questions included meet the primary objective of the studies they are conducting.

3.4. Transactional Log Analysis

Transaction log analysis is a common approach in the evaluation of digital libraries. It was originally designed to collect quantitative data for the appraisal of OPAC libraries (Arshad & Ameen, 2015). It has since been adopted for the evaluation of other types of digital libraries. Transactional log analysis is an effective tool when applied alongside other assessment tools. It provides valuable information such as who uses the digital library, the specific resources used and how long the library is used, among other parameters. Transactional logs can also generate information such as frequency and sequence of feature use, response times of the system, hit rates, location of users, error rates and the number of transactions per use (Siguenza-Guzman et al., 2015). In this study, transactional log analysis was used to determine patterns of use of the digital library. This information was also partly captured by the questionnaires.
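As a minimal sketch of how such logs yield usage statistics, the snippet below tallies resource categories and distinct users from a few fabricated log lines; the log format and category names are assumptions, as the real system's log schema is not documented here:

```python
from collections import Counter

# Fabricated log lines: timestamp, user ID, resource category accessed.
log_lines = [
    "2019-03-01T09:15:00 u101 e-journal",
    "2019-03-01T09:47:00 u101 e-journal",
    "2019-03-01T10:02:00 u214 e-book",
    "2019-03-02T14:30:00 u309 reference",
    "2019-03-02T15:05:00 u101 database",
]

# Frequency of use per resource category.
resource_counts = Counter(line.split()[2] for line in log_lines)

# Number of distinct users appearing in the log.
unique_users = {line.split()[1] for line in log_lines}

# Share of total transactions per category, as a percentage.
total = sum(resource_counts.values())
usage_share = {r: round(100 * n / total, 1) for r, n in resource_counts.items()}
print(usage_share)        # e.g. {'e-journal': 40.0, ...}
print(len(unique_users))  # 3
```

The same tallying approach extends to the other parameters mentioned above (feature sequences, hit rates, error rates) by counting different fields of each log entry.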

3.5. Data Collection

A user questionnaire popped up on the screen after an individual had interacted with the digital library for at least 30 minutes. The popup questionnaire was brief, containing only 2 questions. However, it included a link to redirect the user to a longer questionnaire. The system was set to produce the popup questionnaires for 14 days or until a total of about 200 users had completed the longer survey. Appendix C shows a copy of the questionnaire (popup and longer versions). The evaluation parameters covered in the questionnaires were usability and information retrieval. Web-based questionnaires of this form have previously produced acceptable data supporting their reliability and validity (Tella, 2015).
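The triggering rules described above (a 30-minute interaction threshold, a 14-day window and a target of roughly 200 completed surveys) can be sketched as a simple decision function. This is an illustration of the logic only, not the actual system code, and the parameter names are hypothetical:

```python
def should_show_popup(session_minutes, completed_surveys, campaign_day,
                      threshold=30, target=200, max_days=14):
    """Return True if the popup questionnaire should appear for this session.

    Defaults mirror the rules reported in the study; the function itself
    is an illustrative sketch of the triggering logic.
    """
    if campaign_day > max_days:      # data-collection window has closed
        return False
    if completed_surveys >= target:  # enough long surveys already collected
        return False
    return session_minutes >= threshold

print(should_show_popup(45, 120, 5))   # True: long session, targets not met
print(should_show_popup(10, 120, 5))   # False: session too short
print(should_show_popup(45, 200, 5))   # False: target already reached
```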

3.6. Limitations

The first limitation of the study was the relatively small number of participants. The 206 participants formed about 4.12% of the entire population of digital library users. A larger sample would have been desirable, but it was not possible due to financial, technical and time constraints. The second limitation was related to the various perspectives of the participants in this appraisal. Some of the participants were postgraduate students, while others were undergraduate students. The information needs of these two groups have been shown to differ significantly due to the scope of their studies. The questionnaires did not capture the input of faculty members. These different perspectives should be considered when analysing the outcomes of the evaluation.

3.7. Logistics

The researcher coordinated the execution of this evaluation plan, which consisted of planning, collection of data and data handling. The institution’s assistant librarian, who was in charge of the digital library project, was the primary point of contact. All data were processed, analysed, construed and reported by the author. All reports were conveyed to the digital library’s project manager and members of the development team. The evaluation outcomes were further disseminated to other stakeholders such as the administrators of the learning institution.

3.8. Data Analysis

Quantitative data obtained through questionnaires and transactional log analysis were recorded in an Excel spreadsheet for further analysis. Deductions were made by further categorising the responses into three main groups. For example, findings on “strongly agree” and “agree” were combined to mean “agreeing with the statement”, whereas “strongly disagree” and “disagree” were combined to mean “disagreeing with the statement”. The ensuing data were summarised using descriptive statistics such as means and percentages. Thematic analysis was used to analyse qualitative data from the structured interviews.
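The collapsing of the five-point responses into three groups can be sketched as follows; the response list is fabricated for illustration and does not reproduce the study's data:

```python
from collections import Counter

# Hypothetical responses to one questionnaire statement on a 5-point scale.
responses = ["strongly agree", "agree", "neutral", "disagree",
             "agree", "strongly disagree", "agree", "strongly agree"]

# Map the five response options onto the three analysis groups.
collapse = {
    "strongly agree": "agreeing", "agree": "agreeing",
    "strongly disagree": "disagreeing", "disagree": "disagreeing",
    "neutral": "neutral",
}

counts = Counter(collapse[r] for r in responses)

# Descriptive statistics: percentage of responses in each group.
percentages = {g: round(100 * n / len(responses), 1) for g, n in counts.items()}
print(percentages)  # {'agreeing': 62.5, 'neutral': 12.5, 'disagreeing': 25.0}
```

The resulting percentages correspond to the summary figures reported in the findings (e.g. the share of users "agreeing" with a usability statement).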

4. Findings and Discussions

About 47% of users who interacted with the system for more than 30 minutes completed the popup questionnaire. Out of this number, 32% completed the longer survey. Of the complete responses, 43% were from female students and 57% from male students (Figure 1). The majority of the participants were in their third year of study (25%), whereas the lowest proportion was in their first year (13%). Postgraduate students constituted 22% of the subjects (Figure 2). Figure 3 shows that about half of the students used the digital library to find information about specific research topics. Journals and e-journals were the most used resources at 42%, while the least used facility was the reference resources at less than 5% (Figure 4). This observation was corroborated by the transactional log analysis in Table 1.

Figure 1. Gender of participants (Source: author).

Figure 2. Level of study of participants (Source: author).

Figure 3. Purpose of digital library use (Source: author).

Figure 4. Resources used in the digital library (Source: author).

Table 1. Patterns of use as shown by the transactional log analysis.

The usability evaluation (Figure 5) showed that 75% of users could accomplish all their tasks with ease, 50% could access information quickly and effectively, and 60% did not need guidelines to use the library. Furthermore, 60% of users would recommend the library to their fellow students. However, 60% of the participants agreed that the system needed some improvements.

The information retrieval assessment showed that most users agreed about the system’s ability to yield results based on search criteria, retrieval of relevant data within a short time and agreement between search criteria and users’ expectations (Figure 6). More than 50% of the users had access to materials that were pertinent to their study areas and were satisfied with the system’s ability to retrieve information. However, about 50% of users agreed that the system was inconsistent and had difficulties remembering all information retrieval steps. 60% of the users reported that the system was unreliable in terms of providing information based on their needs.

Figure 5. Usability evaluation (Source: author).

Figure 6. Information retrieval (Source: author).

Based on the information above, digital libraries have positive reviews from most of the students who use the platform to access information and literature. Over 50% of the participants hold that digital libraries make their work easier. However, despite the positive perceptions, about 50% of the participants think that the systems need improvement to maximize the benefits. Furthermore, the results show that most learners (60%) are comfortable using the digital libraries, while about 25% need help. Overall, the results show that digital libraries improve learning experiences even though some challenges still need to be addressed.

Thematic analysis of the interviews with members of faculty revealed three main themes: low awareness, underutilisation and satisfactory usability. The faculty members agreed that there had been substantial changes in the quality of work submitted by their students since the development of the digital library. The usability of the digital library was satisfactory, as it was relatively easy to access information. However, it was apparent that awareness of the different resources available in the digital library was low. Consequently, most users preferred to use the same resources frequently instead of other resources of a similar nature.

Discussion

Usability, in the context of digital libraries, can be described as the efficacy with which people can access and use the resources of a library successfully. A human-computer interface is commonly used in digital libraries. Thus, the interface and functionality are the most crucial aspects of digital libraries that should be evaluated and enhanced. According to Iqbal and Ullah (2016), the usability of computer platforms is determined by five user-perceived attributes: ease of learning and use, fast accomplishment of tasks, low error rates, user satisfaction and user retention.

Therefore, when creating digital libraries, the developer should ensure that accessing resources in the library is intuitive and easy. It should take the shortest time possible to find the needed resources. Additionally, errors of omission and commission should be minimised. Errors of omission entail the failure to find what the user requires, while errors of commission involve retrieving the wrong results. A user should also be able to learn and understand the functions and navigational organisation of the library with the lowest level of cognitive overload. This means that a user should be able to dedicate their thoughts and focus to the task at hand, such as reviewing information to match precise needs, instead of figuring out how to navigate the system.

Challenges that may arise in the usage of digital libraries include general unfamiliarity with computers, inadequate knowledge about search strategies and inexperience with the functionalities of the library (Jabeen et al., 2017). Usability issues are not faced by new users only. Frequent patrons may also face difficulties if new features are introduced without their knowledge. Therefore, these changes should be communicated to users once implemented. Users should also be informed that the modifications are meant to enhance their experience.

5. Conclusion and Recommendations

5.1. Conclusions

Digital libraries have transformed the idea of information services by reaching out to many users without temporal and spatial limitations. The advent of open-source software for online library platforms has also improved digital library technologies. However, new inventions are created each passing day. Therefore, it is necessary to conduct regular evaluations to determine whether appropriate developments are being incorporated. This study showed that the usability and information retrieval capability of the institution’s digital library were good. Nonetheless, the usage patterns showed that third- and fourth-year students, as well as those pursuing postgraduate studies, used the digital library more than those in their initial years of study. Furthermore, users faced certain challenges that need to be addressed to enhance the usability and information retrieval of the library.

5.2. Recommendations

Based on the findings, four main recommendations were made. First, a collection development policy should be created to subscribe to as many online journals as possible, given that this resource was the most heavily used. Second, the user interface of the digital library should be redesigned to simplify it and ease navigation. Third, user education should be provided to enhance the usage of the different available databases. Fourth, more awareness should be created about the existence of the digital library, its available resources and the benefits of the facility.

Creating awareness among users about the existence of digital libraries should be a primary goal for increasing usage. Numerous avenues are available to institutions and service providers. Social media is one way to raise public awareness of digital libraries among potential users. Stakeholder engagement is another critical avenue for informing users about the existence of online digital libraries. The process should target not only learners and researchers but also scholars with the ability to develop research products and publish them on digital platforms. Teachers and school administrators should also be encouraged to adopt digital learning platforms so that students are motivated to use the available online resources.

Appendices

Appendix A: Invitation to Participate in the Study

Dear Participant,

My name is …, a faculty member at … University. I am conducting research on the topic Evaluation of a Digital Library. I am requesting your participation in the study through a short interview that is expected to last approximately 30 minutes. The interview will be recorded to facilitate subsequent analysis. The findings of this study will provide information about the patterns of use, usability and information retrieval of the institution’s digital library. This information will contribute to the improvement of the digital library’s services. The results of the study may be published in a peer-reviewed journal.

Participation in this study is voluntary, and the researcher will protect your anonymity and privacy. Taking part involves no cost or payment to you. You are free to decline to answer any question or to withdraw from the study at any time. Please complete the informed consent form to confirm your understanding of the project and your agreement to take part. Feel free to contact the researcher for additional information or inquiries.

Thank you.

Researcher’s Name

Phone number:

Email:

Appendix B: Informed Consent Form

I, __________________ (participant name), confirm that I have read and understood what the study entails and what is expected of me during participation. I am aware that my participation is voluntary and that my privacy and confidentiality will be upheld throughout the study. I, therefore, agree to take part in the study.

Participant Name…………………………………………

Participant Signature……………………………………..Date…………………

Appendix C: Questionnaire

SECTION A: Popup Questionnaire

Are you enjoying the online library platform?

Are you willing to share your experiences with the online library?

Your feedback will help improve your online experience.

Please let us know more about your experiences by completing a short survey that is available at this link.

SECTION B: Long Questionnaire

Please mark with X where appropriate

Gender

Year of Study

What do you use the online library for?

Which one of the following resources do you use most on the online platform?

Usability Evaluation of the Digital Library

Please mark with X where appropriate

Please indicate the extent to which you agree with the following statements on a scale of 1 to 5 (1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree).

Information Retrieval on the Digital Library Platform

Please indicate the extent to which you agree with the following statements on a scale of 1 to 5 (1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree).

Thank you for your participation.

Appendix D: Interview Schedule for Faculty

Questions

1) What has your experience with the digital platform of the library been like?

2) Have you noted any changes in students’ performance or use of academic resources since the inception of the digital library?

3) Do you think digital resources are utilised adequately by students and faculty? Why?

4) What problems/challenges have you encountered so far?

5) What do you think can be done to improve the user interface of the digital library?

6) What are your suggestions to enhance searching for e-books, e-journals and research papers?

Thank you for your participation.

Cite this paper: Alokluk, J. and Al-Amri, A. (2021) Evaluation of a Digital Library: An Experimental Study. Journal of Service Science and Management, 14, 96-114. doi: 10.4236/jssm.2021.141007.
References

[1]   Agosti, M., Ferro, N., & Silvello, G. (2016). Digital Library Interoperability at High Level of Abstraction. Future Generation Computer Systems, 55, 129-146.
https://doi.org/10.1016/j.future.2015.09.020

[2]   Arshad, A., & Ameen, K. (2015). Usage Patterns of Punjab University Library Website: A Transactional Log Analysis Study. The Electronic Library, 33, 65-74.
https://doi.org/10.1108/EL-12-2012-0161

[3]   Cabrerizo, F. J., Morente-Molinera, J. A., Pérez, I. J., López-Gijón, J., & Herrera-Viedma, E. (2015). A Decision Support System to Develop a Quality Management in Academic Digital Libraries. Information Sciences, 323, 48-58.
https://doi.org/10.1016/j.ins.2015.06.022

[4]   Campbell, H. M. (2018). A Comparative Evaluation of Three Digital Libraries.
https://www.ideals.illinois.edu/bitstream/handle/2142/104716/Campbell_2018_ComparativeEvaluationThreeDigitalLibraries.pdf?sequence=2&isAllowed=y

[5]   Fenlon, K., Wood, L. C., Downie, J. S., Han, R., & Kinnaman, A. O. (2016). Toward Accessible Course Content: Challenges and Opportunities for Libraries and Information Systems. Proceedings of the Association for Information Science and Technology, 53, 1-10.
https://doi.org/10.1002/pra2.2016.14505301027

[6]   Floratou, A., Patel, J. M., Lang, W., & Halverson, A. (2011). When Free Is Not Really Free: What Does It Cost to Run A Database Workload in the Cloud? In R. Nambiar, & M. Poess (Eds.), Technology Conference on Performance Evaluation and Benchmarking (pp. 163-179). Berlin, Heidelberg: Springer.
https://doi.org/10.1007/978-3-642-32627-1_12

[7]   Gaona-García, P. A., Martin-Moncunill, D., & Montenegro-Marin, C. E. (2017). Trends and Challenges of Visual Search Interfaces in Digital Libraries and Repositories. The Electronic Library, 35, 69-98.
https://doi.org/10.1108/EL-03-2015-0046

[8]   Harris, M. W. (2017). What’s a Digital Library?
https://next-nexus.info/writing/infostudies/digital_libraries.php

[9]   Iqbal, M., & Ullah, A. (2016). Qualitative Usability Evaluation of Databases: A Case Study. Library Hi Tech News, 33, 8-10.
https://doi.org/10.1108/LHTN-09-2015-0064

[10]   Jabeen, M., Qinjian, Y., Yihan, Z., Jabeen, M., & Imran, M. (2017). Usability Study of Digital Libraries: An Analysis of User Perception, Satisfaction, Challenges, and Opportunities at University Libraries of Nanjing, China. Library Collections, Acquisitions, & Technical Services, 40, 58-69.
https://doi.org/10.1080/14649055.2017.1331654

[11]   Lamb, A. (2017). Debunking the Librarian “Gene”: Designing Online Information Literacy Instruction for Incoming Library Science Students. Journal of Education for Library and Information Science, 58, 15-26.
https://doi.org/10.3138/jelis.58.1.15

[12]   Lyman, P. (2017). What Is a Digital Library? Technology, Intellectual Property, and the Public Interest. In S. R. Graubard, & P. LeClerc (Eds.), Books, Bricks and Bytes (pp. 1-34). New York, NY: Routledge.
https://doi.org/10.4324/9781315082073-2

[13]   Oosterveld, P., Vorst, H. C., & Smits, N. (2019). Methods for Questionnaire Design: A Taxonomy Linking Procedures to Test Goals. Quality of Life Research, 28, 2501-2512.
https://doi.org/10.1007/s11136-019-02209-6

[14]   Pinfield, S. (2017). eLib in Retrospect: A National Strategy for Digital Library Development in the 1990s. In J. Andrews (Ed.), Digital Libraries (pp. 19-34). New York, NY: Routledge.
https://doi.org/10.4324/9781315257778-3

[15]   Places, Á. S., Fariña, A., Luaces, M. R., Pedreira, Ó., & Seco, D. (2016). A Workflow Management System to Feed Digital Libraries: Proposal and Case Study. Multimedia Tools and Applications, 75, 3843-3877.
https://doi.org/10.1007/s11042-014-2155-3

[16]   Sánchez-Gálvez, L. A., & Fernández-Luna, J. M. (2015). A Usability Evaluation Methodology of Digital Library. Service Computation, 12, 23-28.

[17]   Siguenza-Guzman, L., Saquicela, V., Avila-Ordóñez, E., Vandewalle, J., & Cattrysse, D. (2015). Literature Review of Data Mining Applications in Academic Libraries. The Journal of Academic Librarianship, 41, 499-510.
https://doi.org/10.1016/j.acalib.2015.06.007

[18]   Stefl-Mabry, J. (2018). Documenting Evidence of Practice: The Power of Formative Assessment. Knowledge Quest, 46, 50-57.

[19]   Stoet, G. (2017). PsyToolkit: A Novel Web-Based Method for Running Online Questionnaires and Reaction-Time Experiments. Teaching of Psychology, 44, 24-31.
https://doi.org/10.1177/0098628316677643

[20]   Tank, S. D., Maradiya, M., & Bhatt, R. (2017). Blended Librarianship for Academic Libraries in Digital Era Theory and Practice.

[21]   Tella, A. (2015). Electronic and Paper-Based Data Collection Methods in Library and Information Science Research: A Comparative Analyses. New Library World, 116, 588-609.
https://doi.org/10.1108/NLW-12-2014-0138

[22]   Wynne, B., Dixon, S., Donohue, N., & Rowlands, I. (2016). Changing the Library Brand: A Case Study. New Review of Academic Librarianship, 22, 337-349.
https://doi.org/10.1080/13614533.2016.1156000
