Higher education in health sciences has been the target of several methodological improvement efforts. Formerly focused exclusively on traditional face-to-face lectures and passive laboratory demonstrations, health education is changing, and many innovations are emerging in this field.
It is well established that retrieval-based strategies promote effective long-term learning. Retrieval-based learning, also known as the testing effect (Carrier & Pashler, 1992; Ho et al., 2015; Wheeler & Roediger, 1992; Chan et al., 2006), consists of actively practicing the retrieval of knowledge (Karpicke et al., 2014).
The retrieval-based approach promotes robust and durable learning (Carpenter, 2012; Karpicke, 2012). By actively retrieving knowledge, we increase our capacity to reconstruct it in the future (Yong & Lim, 2016). Subsequent learning becomes easier because the updated learning context can be used to narrow down the search and focus on a specific memory (Karpicke, Lehman, & Aue, 2014). The challenge that arises is how to implement retrieval-based learning in real-world learning contexts.
A new teaching perspective emerges when we consider tests beyond their standard learning assessment role. It is then possible to develop retrieval-based learning tools, such as mobile applications, to improve learning, both inside and outside the classroom.
Currently, there are many mobile applications (apps) relevant for medical education (Briz-Ponce et al., 2016; Fuller & Joynes, 2015). The Mobile Learning paradigm (m-Learning) emerged from the use of mobile computing technologies (e.g., smartphones, tablets, wireless networks) as part of an integrated learning model (Marçal, Andrade, & Rios, 2005). M-Learning applied to medical education is also known as Mobile Medical Education (Davies et al., 2012).
A branch of Medicine that stands out for the variety of existing mobile applications is Anatomy (Chakraborty & Cooperstein, 2017), as it requires a wide range of teaching and learning strategies. In particular, the complex organization of the human brain poses an additional obstacle for educators, who witness students’ difficulty in mastering it (Kennedy, 2013). Other challenges for Neuroanatomy education include the difficulty of obtaining properly preserved cadavers, the high cost of maintaining a neuroanatomy laboratory, and the large number of students who must perform dissections (Neuwirth, Dacius Jr., & Mukherji, 2018; Faria et al., 2014).
This study integrates retrieval-based and mobile learning to improve teaching and learning of Neuroanatomy. We developed a mobile application that enables the systematic use of cumulative tests to improve neuroanatomy learning. Subsequently, usability testing of the mobile application was conducted with thirty undergraduate Health Sciences students.
This section describes the steps taken to build the application and the details of the usability assessment.
2.1. Application Development Process
In this research, we followed the Co-Design methodology (Millard et al., 2010). Our multidisciplinary team included two medical doctors, a computer scientist, a computer systems analyst, a computer programmer and a graphic designer. Figure 1 shows the application development process, which was adapted from the Co-Design methodology (Marçal, Andrade, & Viana, 2017). Below is a step-by-step description of the development process.
1) Scope. In this phase, we defined the learning objectives of the system. The medical doctors specified the anatomical regions to be included in the mobile application.
Figure 1. Mobile application development process.
2) Shared Understanding. In this phase, stakeholders shared their experience and expertise in application scenarios, mobile technologies, and pedagogical methodologies that could serve as a basis for implementation.
3) Brainstorming. In this phase, we outlined the first mobile application interfaces, considering the actors, scenarios, technologies and pedagogical methodologies that were identified in the previous stage. Stakeholders evaluated artifacts and provided specific suggestions for improvement.
4) Refinement. As the application design was getting closer to its final version, the project team completed assessment requirements and project modeling diagrams (e.g., mind maps, use cases, class and activity diagrams).
5) Implementation. After the definition of system requirements, the programming team applied iterative development with incremental delivery. It is important to note that phases 3), 4) and 5) occurred in sprints (iterations), allowing the correction of errors identified in previous steps. After producing a version without apparent errors, we proceeded to the user evaluation phase.
After the development process, the application was tested by thirty students divided into three different groups. Our sample size (N = 30) was in accordance with the minimum suggested sample size for usability testing (Sauro, 2011). All students attended higher education courses in health sciences: ten first-term nursing and dental students at Federal University of Ceará; ten third-term and ten seventh-term medical students at Unichristus University Center. Students were invited to partake in the usability tests. They were not rewarded with course credit or other incentives.
The study protocol was duly approved by the Research Ethics Committee of the Christus University Center (protocol number: CEP 62614816.3.0000.5049). All participants signed an Informed Consent Form (ICF) agreeing to take part in the study.
A questionnaire to evaluate the usability of the mobile application was developed based on existing instruments. The first ten questions were based on the SUS (System Usability Scale) (Brooke, 1996), an easy-to-apply method for investigating system usability. Each question offered five response options on a 5-point Likert scale (from Totally Disagree to Totally Agree). The questionnaire gathers information about ease of use (Usability) and how easily the application can be learned (Learnability). Below are the first ten questions of the questionnaire that we used:
Q1. I think that I would like to use this system frequently.
Q2. I found this system unnecessarily complex.
Q3. I thought the system was easy to use.
Q4. I think that I would need assistance to be able to use this system.
Q5. I found the various functions in this system were well integrated.
Q6. I thought that there was too much inconsistency in this system.
Q7. I would imagine that most people would learn to use this system very quickly.
Q8. I found the system very cumbersome to use.
Q9. I felt very confident using the system.
Q10. I needed to learn a lot of things before I could get going with this system.
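For reference, a SUS questionnaire of this form is scored by rescaling each answer and summing: odd-numbered (positively worded) items contribute (answer − 1), even-numbered (negatively worded) items contribute (5 − answer), and the sum is multiplied by 2.5 to yield a 0-100 score (Brooke, 1996). A minimal Python sketch of this standard computation (illustrative only, not the authors’ analysis code):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) for one
    respondent from ten 1-5 Likert answers (Q1..Q10).

    Odd-numbered items are positively worded (contribution = answer - 1);
    even-numbered items are negatively worded (contribution = 5 - answer).
    The summed contributions (0-40) are scaled by 2.5.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent answering 5 to every positive item and 1 to every
# negative item yields the maximum score of 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

Averaging the per-respondent scores then gives the sample SUS mean reported in the Results section.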
The second part of the evaluation tool (five questions) aimed at identifying students’ perception of the app’s usefulness (Perceived Utility). Four questions were objective, with options on the 5-point Likert scale:
Q11. The application helped me to understand Neuroanatomy better.
Q12. With the app, it will be easier to differentiate the various anatomical regions.
Q13. My expectation is that I will continue to have difficulty learning Neuroanatomy.
Q14. I understand that the application helped me to demystify the subject.
The last part of the questionnaire consisted of an open question in which students could make comments, criticisms, compliments, and improvement suggestions.
Application testing was carried out in the classroom environment, at the end of the neuroanatomy module, which consists of practical and theoretical neuroanatomy classes. After reading and signing the ICF, each student started to use the mobile application. Students received a random device, which could be an iPhone, iPad (iOS platform) or a tablet with Android Operating System. All devices contained identical, pre-tested versions of the application, with shortcuts on the desktop. After a demonstration of how to use the app, students were able to test it once.
The Brain Anatomy App is a mobile learning application that tests neuroanatomy knowledge (see detailed description below). Students were required to identify brain regions. The total application test time was approximately 15 minutes. At the end of the test, each student could see their final performance (total numbers of correct and incorrect answers).
After using the app, each student answered a questionnaire to evaluate usability of the mobile application.
This section describes the application developed to assist Neuroanatomy teaching, and the application usability evaluation by undergraduate health sciences students.
3.1. The App
The developed mobile application, Brain Anatomy App, presents the student with real images of human brains viewed from different directions and angles: lateral, medial, superior and inferior. Before the tests begin, the application provides a tutorial on how to fill out the answers. Subsequently, the student must identify each of the anatomical regions (up to twenty) in each figure, using the on-screen keyboard and observing correct anatomical terminology. Brain Anatomy App can be downloaded for free from the Google Play Store.
The application highlights anatomical structures (Figure 2(a)). Users can pinch their fingers together or spread them apart to adjust the zoom. They have 45 seconds to name each structure (the remaining time is displayed on the screen) and may also skip an answer. In case of a mistake, Brain Anatomy App shows the correct name of the anatomical structure.
Figure 2. (a) Screen for identifying anatomical regions; (b) Screen with student’s tests record.
The application logs users’ responses and, at the end, shows the total number of correct and incorrect answers. Brain structures are presented in random order, so students can take different tests on the same image without repeating the order of the presented structures. The application records the date, time, and score of each test (Figure 2(b)), making it possible to track performance over time.
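The randomized presentation described above can be sketched as follows. This is an illustrative Python sketch, not the app’s actual mobile source code; the function and variable names are assumptions:

```python
import random

def new_test_order(structures):
    """Return a fresh random presentation order for the labelled
    structures of one image, so that repeated tests on the same
    image need not repeat the previous order of structures."""
    order = list(structures)   # copy, so the master list stays intact
    random.shuffle(order)      # in-place uniform shuffle
    return order

# Hypothetical structure labels for one lateral-view image.
regions = ["precentral gyrus", "postcentral gyrus", "central sulcus"]
print(sorted(new_test_order(regions)) == sorted(regions))  # → True
```

Shuffling a copy keeps the canonical structure list (and its answer key) unchanged between tests.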
3.2. Analysis of Results
Table 1 summarizes the Brain Anatomy App usability evaluation. The results demonstrate that the application received a positive evaluation on the System Usability Scale, obtaining a mean SUS score of 85.3 (standard deviation: 9.2). With 95% confidence, the SUS score for this population lies between 86.3 and 94.9 (margin of error: 4.3).
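A confidence interval of this kind is computed from the sample mean, the sample standard deviation, and the Student t critical value: mean ± t·s/√n. A minimal sketch of this standard computation (illustrative only; the hardcoded critical value is the two-tailed t(0.975) for df = 29, matching N = 30):

```python
from math import sqrt
from statistics import mean, stdev

def ci95_n30(scores):
    """95% confidence interval for the population mean of a
    sample of exactly 30 scores: mean +/- t * s / sqrt(n)."""
    assert len(scores) == 30
    t = 2.045                      # t(0.975, df = 29)
    m, s = mean(scores), stdev(scores)
    margin = t * s / sqrt(30)
    return m - margin, m + margin
```

With SciPy available, `scipy.stats.t.ppf(0.975, n - 1)` would replace the hardcoded critical value for arbitrary sample sizes.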
To assess the reliability of the data, we used Cronbach’s alpha (Bonett & Wright, 2015). As can be seen in Table 1, the sample achieved an acceptable level of internal consistency: Cronbach’s alpha of 0.701 (Sauro, 2011).
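Cronbach’s alpha for a k-item scale is α = k/(k−1) · (1 − Σσ²_item / σ²_total), where σ²_item is the variance of each item’s scores across respondents and σ²_total is the variance of the respondents’ total scores. A minimal sketch of this standard formula (illustrative, not the authors’ analysis script):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores: list of k lists, one per questionnaire item, each
    holding that item's score for every respondent (equal lengths).
    """
    k = len(item_scores)
    item_var_sum = sum(pvariance(item) for item in item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Two perfectly correlated items give the maximum alpha of 1.0.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # → 1.0
```

Population variance is used consistently here for both the item variances and the total-score variance; mixing it with sample variance would bias the estimate.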
The answers to the second part of the questionnaire, regarding the usefulness of the application, were analyzed by comparing participants’ scores on each rating-scale question. Figure 3 shows the frequency (%) of students’ answers about the usefulness of the Brain Anatomy App. Overall, study participants agree that the application is useful for learning Neuroanatomy. We highlight the answers to question 12: for 100% of the participants, the app facilitates the differentiation of anatomical regions.
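Frequencies of this kind can be tabulated directly from the Likert codes. An illustrative Python sketch (the labels and function names are assumptions, not taken from the study materials):

```python
from collections import Counter

LIKERT = ["Totally Disagree", "Disagree", "Neutral", "Agree", "Totally Agree"]

def response_frequencies(answers):
    """Map each Likert option to the percentage of respondents who
    chose it. `answers` holds one code (1-5) per respondent."""
    counts = Counter(answers)
    n = len(answers)
    return {LIKERT[v - 1]: 100 * counts.get(v, 0) / n for v in range(1, 6)}

# Hypothetical data: if all 30 respondents choose code 4 or 5,
# the combined agreement frequency is 100%.
freqs = response_frequencies([5] * 18 + [4] * 12)
print(freqs["Totally Agree"] + freqs["Agree"])  # → 100.0
```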
Figure 3. Frequency of students’ answers about the utility of the application. Questions 11 to 14.
Table 1. Summary of the brain anatomy app usability evaluation (N = 30).
Participants were also able to highlight positive and negative aspects of the app and to make improvement suggestions. The answers confirm the good acceptance of the mobile application and emphasize the students’ interest in continuing to use the app, as well as in new versions covering other parts of the human body. Here are some of the opinions collected about the application:
“Application is easy to use and supports learning. The time provided to answer the questions is enough. I like the app and want to use it when it is available.”
“Very good. I think it should address other parts of the body and other systems.”
“The application is of great use to health sciences students, since it brings the content in an interactive way. It also shows the correct answers right after user’s response, helping to pinpoint the correct answer.”
“Accessible, easy to use, didactic. Displays the correct answer after the student makes a mistake and celebrates when the student gives the right answer. It also has a warm interface. It is a very promising tool to study anatomy.”
“I found the application very good. However, I think it would be good to increase the response time a bit.”
“Very creative app. Improve the appearance. Make it for the rest of the body. Congratulations.”
“It would be interesting if we could rotate the images.”
“I found the app very interesting and didactic, facilitating learning even for students who have not studied Neuroanatomy yet. To make it even better, I think that it would be useful to make 3D images available.”
The opportunity to actively retrieve knowledge learned in the past through cumulative tests favors future retention of knowledge (Roediger & Karpicke, 2006). This phenomenon, known as the testing effect or retrieval-based learning, has been extensively studied (Roediger & Karpicke, 2006). Strategies that produce rapid learning tend to generate poor long-term performance (Bjork & Geiselman, 1978). Cumulative tests, on the other hand, introduce a desirable degree of difficulty into learning (Bjork & Geiselman, 1978): they raise cognitive effort in the acquisition phase in order to promote longer-term learning (Poole, Dobson, & Pusch, 2017). A general challenge is to develop ways to implement retrieval-based learning in educational settings.
This study employed retrieval-based learning in the development of an application for mobile devices (such as smartphones and tablets). Our aim was to translate a well-established strategy from learning research to a real-world experience in the classroom. We developed the Brain Anatomy App, a mobile application that enables students to learn Neuroanatomy by taking cumulative tests. Thirty undergraduate Health Sciences students evaluated application usability. The application received a positive evaluation, based on the System Usability Scale, demonstrating that study participants considered the tested application useful for Neuroanatomy learning.
Although the study of human cadavers is still the gold standard for human anatomy learning, there are financial, ethical, and operational constraints that make it difficult to implement (Neuwirth, Dacius Jr., & Mukherji, 2018; Faria et al., 2014). The emergence of systems with virtual reality or augmented reality expands Anatomy learning opportunities (Moro et al., 2017).
Anatomy practical classes traditionally rely on students’ passive exposure to cadaveric material, with the lecturer pointing to and naming anatomical structures (Neuwirth, Dacius Jr., & Mukherji, 2018). In our study, students were stimulated to retrieve anatomical knowledge and to actively identify the structures in anatomical images of a mobile application. Neuroanatomy learning with such a device seems to motivate and actively engage students.
The progressive implementation of computer-based learning methods in Health Sciences education is in line with curricular changes and medical education reforms, providing new challenges and opportunities for Anatomy teaching (Trelease, 2016). There is already ample evidence that Computer Assisted Learning (CAL) is a good alternative to traditional methods of Anatomy teaching, and it is important to develop ways to integrate it into Health Sciences curricula. Previous studies have shown that the use of tablets engages students, achieves learning goals and improves the efficiency of teaching in anatomical dissection (Mayfield, Ohara, & O’Sullivan, 2012).
A limitation of this study is that we did not investigate the impact of application use on student learning. Moreover, we did not analyze study participants’ performance on the application’s Neuroanatomy tests. These and other analyses more directly related to learning outcomes should be contemplated in future work.
Our study showed the development process of a mobile application to aid the learning of anatomical regions. We demonstrated its usability and students’ perception of its usefulness for Neuroanatomy teaching. These results may encourage higher education lecturers in Health Sciences to adopt mobile applications in their classrooms.
 Bonett, D. G., & Wright, T. A. (2015). Cronbach’s Alpha Reliability: Interval Estimation, Hypothesis Testing, and Sample Size Planning. Journal of Organizational Behavior, 36, 3-15.
 Briz-Ponce, L., Juanes-Méndez, J. A., García-Penalvo, F. J., & Pereira, A. (2016). Effects of Mobile Learning in Medical Education: A Counterfactual Evaluation. Journal of Medical Systems, 40, 136.
 Chan, J. C., McDermott, K. B., & Roediger III, H. L. (2006). Retrieval-Induced Facilitation: Initially Nontested Material Can Benefit from Prior Testing of Related Material. Journal of Experimental Psychology: General, 135, 553.
Davies, B. S., Rafique, J., Vincent, T. R., Fairclough, J., Packer, M. H., Vincent, R., & Haq, I. (2012). Mobile Medical Education (MoMEd): How Mobile Information Resources Contribute to Learning for Undergraduate Clinical Students—A Mixed Methods Study. BMC Medical Education, 12, 1.
Faria, J. W. V., Figueiredo, E. G., Brito, D. R., & Teixeira, M. J. (2014). A evolução histórica do ensino da neuroanatomia [The historical evolution of neuroanatomy teaching]. Revista de Medicina, 93, 146-150.
Fuller, R., & Joynes, V. (2015). Should Mobile Learning Be Compulsory for Preparing Students for Learning in the Workplace? British Journal of Educational Technology, 46, 153-158.
 Ho, A. M. H., Critchley, L. A., Leung, J. Y., Kan, P. K., Au, S. S., Ng, S. K., Lee, A. P. et al. (2015). Introducing Final-Year Medical Students to Pocket-Sized Ultrasound Imaging: Teaching Transthoracic Echocardiography on a 2-Week Anesthesia Rotation. Teaching and Learning in Medicine, 27, 307-313.
 Karpicke, J. D. (2012). Retrieval-Based Learning: Active Retrieval Promotes Meaningful Learning. Current Directions in Psychological Science, 21, 157-163.
 Karpicke, J. D., Blunt, J. R., Smith, M. A., & Karpicke, S. S. (2014). Retrieval-Based Learning: The Need for Guided Retrieval in Elementary School Children. Journal of Applied Research in Memory and Cognition, 3, 198-206.
 Karpicke, J. D., Lehman, M., & Aue, W. R. (2014). Retrieval-Based Learning: An Episodic Context Account. In Psychology of Learning and Motivation (Volume 61, Chapter 7, pp. 237-284). Amsterdam: Elsevier Inc.
 Kennedy, S. (2013). Using Case Studies as a Semester-Long Tool to Teach Neuroanatomy and Structure-Function Relationships to Undergraduates. Journal of Undergraduate Neuroscience Education, 12, A18.
Marçal, E., Andrade, R. M. C., & Viana, W. (2017). Development and Evaluation of a Model-Driven System to Support Mobile Learning in Field Trips. Journal of Universal Computer Science, 23, 1147-1171.
 Mayfield, C. H., Ohara, P. T., & O’Sullivan, P. S. (2012). Perceptions of a Mobile Technology on Learning Strategies in the Anatomy Laboratory. Anatomical Sciences Education, 6, 81-89.
 Millard, D., Howard, Y., Gilbert, L., & Wills, G. (2010). Co-Design and Co-Deployment Methodologies for Innovative m-Learning Systems. In Multiplatform e-Learning Systems and Technologies: Mobile Devices for Ubiquitous ICT-Based Education (pp. 147-163). IGI Global.
 Moro, C., Stromberga, Z., Raikos, A., & Stirling, A. (2017). The Effectiveness of Virtual and Augmented Reality in Health Sciences and Medical Anatomy. Anatomical Sciences Education, 10, 549-559.
 Poole, J. C., Dobson, K. S., & Pusch, D. (2017). Anxiety among Adults with a History of Childhood Adversity: Psychological Resilience Moderates the Indirect Effect of Emotion Dysregulation. Journal of Affective Disorders, 217, 144-152.
 Roediger III, H. L., & Karpicke, J. D. (2006). Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention. Psychological Science, 17, 249-255.
 Trelease, R. B. (2016). From Chalkboard, Slides, and Paper to e-Learning: How Computing Technologies Have Transformed Anatomical Sciences Education. Anatomical Sciences Education, 9, 583-602.
 Wheeler, M. A., & Roediger III, H. L. (1992). Disparate Effects of Repeated Testing: Reconciling Ballard’s (1913) and Bartlett’s (1932) Results. Psychological Science, 3, 240-246.