MOOCs first appeared in online distance education in 2008, with the aim of democratizing higher education by offering knowledge to anyone interested, without restrictions or conditions. Their forerunner can be considered the OpenCourseWare program, launched by MIT in 2002, which sparked the Open Educational Resources (OER) movement. MOOCs are online courses developed mainly by well-known higher education institutions and serve as a means of access to higher education for millions of people who want to improve their lives (UNESCO, 2016).
Participants in MOOCs do not pay tuition fees, nor do they have to meet specific criteria to enroll, even if the course creator suggests particular knowledge and skills needed to understand the content. The learning material is offered through short videos, slides, or other digital files (Hoy, 2014) and is hosted on online platforms such as Coursera and edX. Learners are assessed through assignments graded by graduates, teachers, or fellow learners, as well as through short closed-ended quizzes that are graded automatically. Upon successful completion of the program, learners receive either a free, non-formal electronic certificate of completion or, upon payment and participation in formal examinations, a formal certificate (Karnouskos & Holmlund, 2014).
Despite the ease of access and the training opportunities they offer, only a very small percentage of learners manage to complete them; globally, completion rates range from 5% to 15% (Jordan, 2013). The obstacles that learners face during the courses, and that lead them to abandon them, include lack of time (Fini, 2009; Kop, Fournier, & Mak, 2011; Belanger & Thornton, 2013; Cross, 2013; Grainger, 2013; Zutshi, O’Hare, & Rodafinos, 2013; Beaven, Codreanu, & Creuzé, 2014; Cassidy, Breakwell, & Bailey, 2014; Gütl, Rizzardini, Chang, & Morales, 2014; Nawrot & Doucet, 2014; Schulze, 2014; Kizilcec & Halawa, 2015; Skrypnyk, de Vries, & Hennis, 2015; Zheng, Rosson, Shih, & Carroll, 2015; Veletsianos, Reich, & Pasquini, 2016; Kizilcec & Cohen, 2017; Shapiro et al., 2017) and falling behind schedule due to other obligations (Nawrot & Doucet, 2014; Kizilcec & Halawa, 2015), the absence of the background knowledge needed to understand new information (Belanger & Thornton, 2013; Gütl et al., 2014; Park, Jung, & Reeves, 2015; Shapiro et al., 2017), the quality and difficulty of learning material and assessments (Belanger & Thornton, 2013; Gütl et al., 2014; Nawrot & Doucet, 2014; Schulze, 2014; Park et al., 2015; Skrypnyk et al., 2015; Whitehill, Williams, Lopez, Coleman, & Reich, 2015; Zheng et al., 2015; Huang & Hew, 2016; Veletsianos et al., 2016), the course design (Gütl et al., 2014; Nawrot & Doucet, 2014; Park et al., 2015), the awareness that their knowledge will receive no formal recognition (Schulze, 2014; Gamage, Fernando, & Perera, 2015), the absence or poor quality of feedback/assistance from other learners or from teaching and support staff (Gütl et al., 2014; Schulze, 2014; García, Tenorio, & Ramírez, 2015; Tomkin & Charlevoix, 2014; Park et al., 2015), the lack of communication with teaching staff (Kop et al., 2011; Gütl et al., 2014), the lack of motivation from third parties (Gütl et al., 2014), the absence of a sense of community (Gütl et al., 2014; Nawrot & Doucet, 2014; Zheng et al., 2015), and the difficulty of collaborating (Zutshi et al., 2013; Koutsodimou & Tzimogiannis, 2016). However, some learners may leave the program not because they faced any of the above difficulties and obstacles, but because they achieved the goal for which they participated before the end of the program (Nawrot & Doucet, 2014; Schulze, 2014; Kizilcec & Halawa, 2015; Whitehill et al., 2015), or because they realized that the program did not meet their needs (Schulze, 2014; Whitehill et al., 2015).
The purpose of this study is to examine the extent to which helping learners apply the Mental Contrasting with Implementation Intentions (MCII) self-regulatory strategy, in conjunction with a number of other self-regulatory processes from Zimmerman’s model, contributed to increasing the self-regulation, performance, and completion rates of those who participated in a MOOC we created. As the investigation of the contribution of self-regulated learning to MOOCs is still incomplete (Alonso-Mencía et al., 2019), the results of the research will inform proposals for better instructional design, organization, and support of learners’ self-regulation, so that they complete the distance learning programs in which they participate more successfully, achieving better performance and showing lower dropout rates.
2. Theoretical Framework
2.1. Self-Regulated Learning
Self-regulated learning is an “important manifestation” (Kostaridou-Euclides, 2011) of self-regulatory behavior that concerns the academic world (Kostaridou-Euclides, 2011; Cleary, Callan, & Zimmerman, 2012). Although it is difficult to define theoretically and empirically (Boekaerts, Pintrich, & Zeidner, 2000), making a clear definition elusive (Dinsmore, Alexander, & Loughlin, 2008), it is not currently considered “a mental ability or an academic performance skill; rather it is the self-directive process by which learners transform their mental abilities into academic skills” (Zimmerman, 2002: p. 65).
Zimmerman (2011) describes a cyclical model of three repeating phases: Forethought, Performance, and Self-reflection. These phases are not independent of each other but interdependent, with the results of one influencing the processes of the next. Each cycle of repetition is completed when the processes of Self-reflection affect the Forethought phase (Cleary et al., 2012).
Each phase is broken down into classes, while each class includes some self-regulatory subprocesses (Zimmerman, 2011). The Forethought phase refers to the metacognitive processes and proactive emotions that precede the learning process and lay the foundations for its successful completion (Zimmerman, 2000, 2011). It consists of two classes, task analysis and self-motivational beliefs (Zimmerman, 2000, 2011).
The basic processes of task analysis are goal setting and strategic planning. Goal setting is about making decisions about the expected learning outcomes or performance of the learner (Zimmerman, 2000; Auvinen, 2015) and is a very important process, as on the one hand, it influences his motivations and on the other, it functions as an evaluation criterion of his performance and effort (Cleary et al., 2012). Strategic planning is the selection of appropriate personal strategies or methods to achieve the desired goals in the best possible way (Zimmerman, 2000).
A self-regulatory skill is only of value if the learner can motivate himself to use it. For this reason, the second class of self-regulatory processes comprises self-motivation beliefs, which are analyzed into four different self-regulatory processes: self-efficacy, outcome expectations, task interest/value, and learning goal orientation (Zimmerman, 2000, 2011).
While outcome expectations relate to the consequences the learner expects from achieving his goals, self-efficacy refers to his belief in the abilities that will allow him to achieve the goals he has set (Zimmerman, 2000). Task interest/value activates the learner to participate actively in the learning or other process, either because he is interested in it or because he expects some benefit from his participation (Panadero & Alonso Tapia, 2014). Finally, goal orientation concerns the learner’s motivation to continue his learning effort in order to achieve his goals.
The Performance phase concerns the processes that are performed during learning and affect the attention and action of the learner. To date, two basic types have been studied, self-control and self-observation (Zimmerman, 2000, 2011).
Self-control includes processes that allow the learner to focus on achieving his goals, maintaining his motivation and concentration (Auvinen, 2015) and optimizing his effort (Zimmerman, 2000). Such processes are imagery, the mental representation of a task or process aimed at organizing information and enhancing memory (Auvinen, 2015); task strategies, related to the learner’s ability to focus on the most important parts of a process and to reorganize those points by giving them meaning (Zimmerman, 2000); and volition strategies, which allow him to control his actions and emotions (Zimmerman, 2011). Strategies that allow him to motivate himself are also included: self-consequences, i.e., rewarding or even punishing himself; environmental structuring, to make the environment more attractive, less distracting, and more helpful in achieving his goals (Zimmerman, 2011; Auvinen, 2015); interest enhancement, to “see” difficult tasks as challenges (Zimmerman, 2011); help-seeking from classmates and teachers to overcome problems he cannot overcome on his own; proper time management; and self-instruction, i.e., directing himself by giving himself instructions and directions or by asking himself questions aloud (Auvinen, 2015).
Self-observation refers to the learner’s recording of specific aspects of his performance, the conditions under which he performs them, and the results he produces (Zimmerman, 2000). It includes the processes of metacognitive monitoring and self-recording (Zimmerman, 2011). Through self-recording, he can record important information as it happens, structure it so that it makes sense to him, check and maintain its accuracy, and create a database that documents his progress, thus increasing the effectiveness of his self-observation (Zimmerman, 2000). Finally, metacognitive monitoring allows him to compare his activities with external criteria (Auvinen, 2015).
The third phase of Zimmerman’s model (Self-reflection), which includes two classes, self-judgment and self-reaction, concerns the processes that take place after the learning process. These affect the student’s reaction either positively, if he is happy with his learning outcomes, or negatively, if he is not, leading him to modify the first phase of the model (Forethought), i.e., his goals and strategies (Zimmerman, 2000, 2011; Cleary et al., 2012).
Self-judgment includes the process of self-evaluation, in which the learner compares his performance with a standard or a goal using various evaluation criteria. A second process of self-judgment is causal attribution, which concerns the student’s own explanations of the reasons for his performance (Auvinen, 2015).
Self-reaction concerns the way the learner reacts to his self-judgment (Auvinen, 2015). It includes two further self-regulatory processes: self-satisfaction/affect and adaptive/defensive inferences. Self-satisfaction refers to the learner’s perceptions of satisfaction or dissatisfaction with his performance; if it stems from achieving the goals he has set, he will intensify his efforts even more. Adaptive inferences are the conclusions the learner reaches about how to modify his future efforts, leading him to choose a more effective strategy and/or to modify his goals; defensive inferences instead lead him to adopt a defensive stance to protect himself from future failures and dissatisfaction (Zimmerman, 2000, 2011; Auvinen, 2015).
2.2. Self-Regulatory Strategy MCII
Self-regulation can be seen as a process that helps people overcome obstacles in their quest to achieve the desired results, while self-regulatory strategies are the tools that help them turn their motivations and expectations of success into appropriate actions towards this direction (Oettingen & Gollwitzer, 2015).
A self-regulatory strategy that research has shown to have positive results in goal achievement in various areas (Oettingen, Kappes, Guttenberg, & Gollwitzer, 2015), including MOOCs (Kizilcec & Cohen, 2017), is “Mental Contrasting with Implementation Intentions”. This strategy combines two different self-regulatory strategies, Mental Contrasting (MC) and Implementation Intentions (II), and is based on a two-step process. In the first step, goals are set (Goal setting) and the commitment to achieve them is made (Goal orientation); in the second, an implementation plan is drawn up (Strategic planning) and the necessary actions are taken to achieve the goals, overcoming the obstacles and difficulties that are likely to occur (Oettingen & Gollwitzer, 2010). Research has shown that combining these two strategies yields better results than either separately (Oettingen & Gollwitzer, 2015; Kizilcec & Cohen, 2017).
The Mental Contrasting (MC) strategy is a conscious strategy that influences unconscious cognitive and motor processes (Oettingen & Gollwitzer, 2015; Gollwitzer, Mayer, Frick, & Oettingen, 2018). It helps the learner to imagine the positive results that achieving his goals will bring, but also to reflect on the current situation, which can act as an obstacle to achieving them.
This strengthens his commitment to achieving his goals, as he believes that the desired future can be achieved and the negative reality changed, and pushes him to act in this direction, especially when his expectations of success are high. If, however, he simply imagines his desired future or the current negative situation, the likelihood that he will achieve his goals is not affected, or is affected only to a small extent (Oettingen, 2000).
The second strategy, Implementation Intentions (II), is implemented by pre-determining the actions to be taken, with statements such as “If X happens, then I will do Y” (Gollwitzer, 2014; Oettingen & Gollwitzer, 2015), where X represents a critical event or point in time and Y the reaction to it (Gollwitzer et al., 2018). This creates a link between the deterrent event and the action that must be taken to overcome it (Gollwitzer, 2014; Oettingen & Gollwitzer, 2015), determining the time, place, and manner in which the goal will be achieved (Oettingen & Gollwitzer, 2010), and thus increasing the likelihood that the learner will react effectively and automatically if the event occurs (Oettingen & Gollwitzer, 2010; Kappes et al., 2012).
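As an illustration only, the cue-response pairing of an implementation intention can be represented as a simple record; the class and field names below are our own hypothetical sketch, not part of any software described in this study:

```python
from dataclasses import dataclass

@dataclass
class ImplementationIntention:
    """An if-then plan: 'If <cue> happens, then I will <response>'."""
    cue: str       # X: the critical event or point in time
    response: str  # Y: the pre-committed reaction to it

    def render(self) -> str:
        return f"If {self.cue}, then I will {self.response}."

# Example: a learner anticipating a common obstacle in a MOOC
plan = ImplementationIntention(
    cue="I feel too tired to study after work",
    response="watch just one short lecture video before dinner",
)
print(plan.render())
# → If I feel too tired to study after work, then I will watch just one short lecture video before dinner.
```

The point of the fixed if-then template is that the response is decided in advance, so that encountering the cue can trigger the action with little deliberation.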
3. Review of Relevant Research Projects
Learners’ self-regulatory skills become even more necessary in an autonomous learning environment (Barnard, Lan, To, Paton, & Lai, 2009; Barnard-Brak, Lan, & Paton, 2011; Harris, Reinhard, & Pilia, 2011), as the physical absence of the teacher, the absence of immediate feedback (Banerjee & Duflo, 2014; Hew & Cheung, 2014; Zheng et al., 2015; Kizilcec, Pérez-Sanagustín, & Maldonado, 2017) and support (Kizilcec et al., 2017), the absence of consequences for not completing the program (Nawrot & Doucet, 2014), and the lack of external pressure for progress and continuation of studies, of motivation, and of interaction with other members of the program (Harris et al., 2011) can lead learners to drop out.
One way to improve the completion rates of the MOOCs is to improve the self-regulation of the learners by improving the self-regulatory characteristics of the courses offered. Various research efforts have been made in this direction regarding changes in the structure of the MOOC program itself (Crosslin, 2016; Onah & Sinclair, 2017), in the technological enrichment of the course hosting platform (Milligan, Littlejohn, & Margaryan, 2013; Haug, Wodzicki, Cress, & Moskaliuk, 2014; Davis, Chen, Jivet, Hauff, & Houben, 2016a; Diana, Eagle, Stamper, & Koedinger, 2016; Jivet, 2016; Alario-Hoyos, Estévez-Ayres, Pérez-Sanagustín, Kloos, & Fernández-Panadero, 2017; Davis et al., 2017), in enriching the instructional material with various optional activities (Ruipérez-Valiente et al., 2016), or finally, in various interventions aimed at implementing or guiding to implement various self-regulatory strategies (Davis, Chen, Van der Zee, Hauff, & Houben, 2016b; Kizilcec, Pérez-Sanagustín, & Maldonado, 2016; Kizilcec & Cohen, 2017).
One such intervention, which guided learners to implement various self-regulatory strategies, is the effort of Kizilcec et al. (2016). During the pilot run of the xMOOC training program they created, they asked the learners who completed it successfully and with high performance to record the self-regulatory strategies they followed and to write recommendations to future learners, to help them achieve similar performance. During the normal run of the program, the learners who formed the experimental group (331 people) were given the seven (7) strategies recorded during the pilot run (continuous review of the objectives, note-taking and summarizing of the course content for better understanding, application of the new knowledge, planning that starts with setting realistic goals, finding other learners to work with, selecting a suitable study environment) and were asked to rate how useful each would be and to write a short text to help future learners assimilate these strategies. The control group was given a description of the modules and the program and was asked to rate how useful these modules would be for their careers and to write a text to the program designers stating which of them they found more or less interesting. Although most of the learners considered the intervention quite helpful, the results showed that it ultimately had no positive effect on reducing dropout or on their performance. According to the researchers, however, technological support for the same strategies throughout the program could bring better results.
Following the same approach, Davis, Chen, Van der Zee et al. (2016b) also attempted to determine whether the self-regulatory strategies of reflection and strategic planning can benefit learners, without modifying the structure of the MOOCs they created or enriching them in any other way. For this purpose, they implemented their interventions in two different xMOOCs, lasting 13 and 7 weeks, on Functional Programming and Industrial Biotechnology, respectively. In the first MOOC, they implemented the reflection strategy by incorporating, after the last video of each section, a question (prompt) that helped learners process the information they had seen before proceeding to the quiz of the week. The exception was one section in which the learning material was more difficult; there, the prompt was applied to all videos, not just the last one. In the second MOOC, the strategic planning strategy was implemented. Before the beginning of each module, the learners had to record the goals they wanted to achieve and a study plan for achieving them, while at the end of the module they had to reflect on and record how faithfully they had followed their study plan and to what extent they had achieved their goals. The results showed that the intervention in the first MOOC brought no change, either in learners’ participation or in their performance. Partial engagement with recording the study plan and the goal-setting process applied in the second MOOC likewise brought no statistically significant changes. In contrast, those who genuinely engaged in drawing up a study plan and recording their goals showed greater participation and better performance.
Finally, Kizilcec & Cohen (2017) conducted two different studies to determine whether the self-regulatory strategy of Mental Contrasting with Implementation Intentions (MCII) has positive results in MOOCs. The two xMOOCs created for the needs of the two studies lasted 10 and 6 weeks and involved 9619 and 8344 learners, respectively. The analysis of the research data collected through a questionnaire showed an increase in the successful completion rate of the programs of 32% in the first study and 15% in the second when both techniques (MC & II) were implemented, while there was no statistically significant increase in the completion rate when only one of the two techniques was implemented.
4.1. Current Study
The purpose of the study is to investigate the extent to which the implementation of the self-regulatory strategy MCII in combination with various processes of the self-regulatory model of Zimmerman (2011), contributed to increasing self-regulation, completion rates, and performance of participants who attended the eight (8) week MOOC program on “Violence and bullying in schools”.
The research questions that were posed were:
• What are the differences between the two research groups at the beginning, the middle, and the end of the program in terms of their degree of self-regulation?
• What are the differences between the two research groups in terms of program completion rates and their final performance?
4.2. Research Model and Procedure
The experimental design was chosen to conduct the research, in order to determine whether the application we developed, which implements both the self-regulatory strategy of Mental Contrasting with Implementation Intentions (MCII) and various self-regulatory processes of Zimmerman’s model, improves the learners’ degree of self-regulation, the completion rates of the program, and their performance. The research data collected are quantitative.
The learners were automatically divided into two research groups when they activated their accounts on the course hosting platform, an OpenEdx platform that we installed on a server of the University of the Aegean.
The control group attended the course, participating in its activities, and was asked to respond to the Self-regulated Online Learning Questionnaire-Revised (SOL-Q-R) at the beginning, middle, and end of the program. The experimental group attended the same program and was asked to answer the same questionnaires during the same phases. The learners of the experimental group were also invited, at the beginning of the program, to use the MCII+ research application embedded in the OpenEdx platform to implement the MCII self-regulatory strategy: setting one or more goals they wanted to achieve by participating in the program (Goal setting), committing to achieve them (Goal orientation), and developing a plan to do so (Strategic planning). Also, for each goal they set, they stated what they expected from achieving it (Outcome expectations), how important it was to them (Task interest/value), and how capable they felt of achieving it (Self-efficacy).
Then, during each weekly module, the learners of the experimental group were asked, following the instructions given to them, to observe and/or record important aspects of their performance and of their progress toward their goals through the individual graphs of the MCII+ application, together with the conditions under which these occurred and their results (Self-recording), with the ability to compare them with those of other learners (Metacognitive monitoring) through the comparative graphs of the application. This prepared them for their self-reflection at the end of the weekly unit, where they were asked to evaluate their effort (Self-judgment), to explain the reasons for their overall performance (Causal attributions), to state whether they were satisfied or dissatisfied with their effort and its result (Self-satisfaction/affect), and to reach conclusions and decisions about how they would modify their future efforts (Adaptive/defensive inferences).
4.3. MCII+ Research Application
The purpose of the research application we developed was to provide learners with feedback on their progress toward their goals and to assist in the implementation of the self-regulatory strategy of Mental Contrasting with Implementation Intentions (MCII), which, according to research (Oettingen & Gollwitzer, 2015; Gollwitzer, Mayer, Frick, & Oettingen, 2018), helps learners achieve the goals they set. This strategy is related to three (3) different processes of the first phase (Forethought) of Zimmerman’s model (Goal setting, Outcome expectations, Strategic planning). Other features were also integrated into the application to support the Self-efficacy and Task interest/value processes of phase 1, as well as all processes of phase 3 (Self-reflection) of the same model. The application did not cover, at least directly, the processes of the 2nd phase (Performance), as this phase concerns processes that take place during learning, and we did not want to distract the learners’ action and attention from their learning effort. However, the learners were instructed to observe and/or record their learning progress and their progress toward their goals during each weekly unit, the conditions under which these took place, and their results (Self-recording). They were also instructed to compare their progress with that of the other learners (Metacognitive monitoring) using the graphs of the MCII+ application, so as to make use of these 2nd-phase processes.
The application enables each learner to set one or more personal goals (Goal setting) related to his participation in the course and then, for each goal, to state how important it is to him (Task interest/value), how capable he feels of achieving it (Self-efficacy), what is the most likely obstacle that could prevent him from achieving it (MCII self-regulatory strategy), and what actions he will take to overcome it (Strategic planning) (Figure 1).
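As a sketch of the kind of record the goal-setting form collects, one goal could be represented as follows; the field names and scales are our own illustration, not the MCII+ application’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """One personal goal as captured by a goal-setting form (illustrative)."""
    description: str    # Goal setting: what the learner wants to achieve
    importance: int     # Task interest/value, e.g. on a 1-7 scale (assumed)
    self_efficacy: int  # Self-efficacy: how capable the learner feels (1-7, assumed)
    obstacle: str       # MCII: the most likely obstacle to achievement
    plan: str           # Strategic planning: actions to overcome the obstacle

goal = Goal(
    description="Complete all eight weekly modules",
    importance=7,
    self_efficacy=5,
    obstacle="Losing momentum mid-course because of work obligations",
    plan="Reserve two fixed evenings per week for study",
)
```

Storing the obstacle and the counter-plan alongside the goal itself is what distinguishes an MCII-style record from plain goal setting.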
During the courses, and in particular after the completion of each weekly unit, each learner has the opportunity to reflect on the achievement of each goal (Figure 2), stating the degree to which he was able to achieve it (Self-evaluation), how he feels about the degree of achievement of his goal (Self-satisfaction), an explanation for the positive or negative course of his achievement (Causal attribution), and finally, the actions he will take to continue the positive course of his achievement or to improve it (Adaptive/defensive inferences).
Figure 1. Form for adding a new goal.
Figure 2. Reflection form.
To provide feedback to learners and motivate them to participate more, the application presents various general and individual statistics (Figure 3), such as how many goals have been set, the maximum number of goals per learner, the average importance of the goals, the average perceived ability to achieve them, the number of reflections that have taken place, the learners’ satisfaction with achieving their goals, etc.
There are also various graphs showing the individual variation of the learner’s goals (satisfaction, degree of achievement) (Figure 4), as well as graphs that compare the learner’s goals with those of all other learners (Figure 5).
Finally, learners can receive badges and points when using the application, which thus incorporates gamification features (Figure 6).
Figure 3. General and individual statistics.
Figure 4. Individual charts.
Figure 5. Comparative graphs.
Figure 6. Badges and points.
4.4. Research Context
The program, which was the first attempt of the University of the Aegean in the field of MOOCs, was conducted from 3/2/2020 to 29/3/2020.
It consisted of eight weekly sections that were activated every Monday, each of which included:
1) Instructional goals stating what the learners were expected to achieve by attending each module.
2) A short introductory video (up to 2 minutes) that summarized the highlights of the previous week and introduced the topic and goals of the week that was starting.
3) Activities that motivated the learners to share their prior views, knowledge, attitudes, and experiences and to develop a dialogue among themselves.
4) The main instructional material: short videos of up to 6 minutes with built-in slides that highlighted the main points being made or presented other explanatory elements (graphs, sketches, etc.). Videos with facts, testimonies, simulations, and analogies were also used as examples to explain the concepts presented in the main instructional material.
5) A multiple-choice quiz of 5 - 10 questions after each video, testing knowledge, understanding, application, evaluation, analysis, and synthesis of the material. Each answer provided feedback justifying why it was correct or incorrect. Quiz answers could be submitted until the end of the program.
6) One or more optional activities that prompted recall of the knowledge presented and its application to addressing incidents of violence and bullying in schools (case studies).
7) A final assignment of 300 - 500 words at the end of each weekly unit, consisting of open-ended questions aimed at analyzing, synthesizing, and applying knowledge to resolve incidents of violence and bullying in schools. The assignments were evaluated by other learners (peer review), and learners had two weeks to submit them.
8) Additional instructional material for deepening the knowledge presented.
During the program, learners received ongoing support and assistance, either through the discussion forum or through the program’s support e-mail. At the end of each week, the learners received an e-mail informing them of issues that concerned them, urging them to continue the program, summarizing the knowledge of the completed section, and introducing the topic of the next section.
At the end of the program, an official certificate of successful completion was provided to those who met the criteria.
Initial interest in attending the program was expressed by 1952 people (active teachers, students of pedagogy, and other individuals), the majority of whom were active teachers. Some did not activate their account or never showed up once the program started. In total, 1309 people participated in at least one of the activities of the program (Control group: 659; Experimental group: 650). Of these, 80.7% were women and 19.3% men. Regarding their age, most were between 31 - 40 years old (31.1%), followed by 41 - 50 (27.6%), 20 - 30 (24.1%), and 51 - 60 years old (16.9%).
After the program began, another 259 learners (Control group: 131; Experimental group: 128) dropped out at some point, most of them during the first week of the course. In the end, 1050 people completed the program (Control group: 528; Experimental group: 522).
The SOL-Q-R questionnaire was used to investigate the learners’ degree of self-regulation. The questionnaire was developed by Jansen, Van Leeuwen, Janssen, & Kester (2018) by combining questions from four other questionnaires (MSLQ, MAI, OSLQ, LS). Its statements (42 in total) examine self-regulatory practices across the following dimensions:
• Metacognitive activities before learning cover the Forethought phase and include statements about goal setting, learning strategy choices, and overcoming barriers.
• Metacognitive activities during learning relate to the Performance phase and include statements about the learning strategies used, the reasons they were chosen, and the reasons for possibly changing them.
• Metacognitive activities after learning cover the Self-reflection phase and include reflection statements.
• Time management concerns the Performance phase and includes statements about how the learners allocate time in the course and their consistency with their schedule.
• Environment structuring concerns the Performance phase and includes statements about the study environment and the criteria for selecting and changing it.
• Persistence concerns the Performance phase and includes statements about the degree of effort that learners make to continue their study, even if they face difficulties.
• Help-seeking concerns the Performance phase and includes statements about the extent to which learners seek help from other learners or program managers to resolve problems or obtain clarification.
The instrument was translated from English to Greek following the forward-backward translation methodology, which is completed in four distinct stages (Van de Vijver & Leung, 1997; Lee, Chinna, Lim Abdullah, & Zainal Abidin, 2018). The internal consistency index (Cronbach’s alpha) of the scale was found to be above the 0.7 threshold for all factors (Metacognitive activities: 0.955; Time management: 0.749; Environment structuring: 0.898; Persistence: 0.891; Help-seeking: 0.924), as well as overall (0.952).
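Cronbach’s alpha can be computed from the item variances and the variance of the summed scores. A minimal sketch in pure Python follows; it is illustrative only, not the statistical software actually used in the study:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents, each a list of k item scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])
    items = list(zip(*scores))  # transpose: one tuple of responses per item
    item_var_sum = sum(variance(item) for item in items)
    total_var = variance([sum(resp) for resp in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Perfectly consistent items yield alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Values above roughly 0.7, such as those reported for the SOL-Q-R factors, are conventionally taken to indicate acceptable internal consistency.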
Regarding the second research question, the learners’ scores in the quizzes and the final weekly assignments were used.
4.7. Data Analysis
To analyze the responses to the SOL-Q-R questionnaire, the parametric independent-samples t-test was used, as the distribution of the sample was close to normal. This test assesses the statistical significance of differences in mean values between independent samples, that is, samples that are not related to each other.
For the second research question, the non-parametric Mann-Whitney U test was used to check for statistically significant differences between the performance of the two research groups, as the distribution of the sample was not close to normal. The parametric independent-samples t-test was also used to test for statistically significant differences among the participants who reached the threshold for obtaining the certificate of completion of the program (70.0%).
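The two procedures described above can be illustrated with a minimal SciPy sketch on synthetic data; the group sizes and score distributions below are assumptions for illustration only, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical SOL-Q-R factor means: roughly normal, so a t-test applies
control = rng.normal(5.0, 0.8, 120)
experimental = rng.normal(5.2, 0.8, 120)
t, p_t = stats.ttest_ind(control, experimental)

# Hypothetical final grades: skewed toward high scores, so the
# rank-based Mann-Whitney U test is the safer choice
grades_c = rng.beta(8, 2, 120) * 100
grades_e = rng.beta(8, 2, 120) * 100
u, p_u = stats.mannwhitneyu(grades_c, grades_e, alternative="two-sided")
```

`stats.ttest_ind` compares group means under an assumption of approximate normality, while `stats.mannwhitneyu` compares rank distributions and imposes no such assumption.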
5. Research Results
5.1. Research Question 1
Initially, the two research groups show very small differences, ranging from 0.01 to 0.06, in the means of all self-regulatory factors as well as overall; these differences are not statistically significant (Table 1).
As the program progresses, the experimental group displays higher means than the control group in all self-regulatory factors, except for Time management, where the two groups display the same mean (4.64), and Metacognitive activities during learning and Environment structuring, where the control group shows higher means by small, non-statistically significant margins of 0.01 (5.50) and 0.05 (5.82) respectively. The only factor in which both groups markedly reduce their means is Help-seeking, with the experimental group in a better position (control group: 3.02; experimental group: 3.25). This difference between the research groups is statistically significant.
At the end of the program, the experimental group continues to show higher means than the control group, with differences from 0.06 to 0.21. The only factor in which the control group has a higher mean (5.89) is Environment structuring, by a very small margin (+0.04). In the Help-seeking factor, both groups lower their means further, but the experimental group (2.79) remains in a better position than the control group (2.58). At the end of the program, the two groups show statistically significant differences in the factors Metacognitive activities after learning, Metacognitive activities as a whole, Persistence, and Help-seeking.
Table 1. Descriptive statistics of SOL-Q-R.

In the overall degree of self-regulation, the two groups show almost the same picture at the beginning of the program, with a small, non-statistically significant difference (+0.02) in favor of the control group (5.05). In the middle of the program, both groups reduce their degree of self-regulation, owing to the large drop in the Help-seeking factor and smaller drops in the factor Metacognitive activities after learning, with the experimental group maintaining a lead of +0.06 (4.93) over the control group (4.87). At the end of the program, while the control group shows a further decrease of 0.04 in its self-regulation mean (4.83), due to drops in the factors Metacognitive activities during learning and Help-seeking, the experimental group, despite the continuing decline in Help-seeking, shows a small increase of 0.01 (4.94). The differences between the two groups in their overall self-regulation means at the end of the program are statistically significant (Table 2).
Table 2. Independent samples t-test of the SOL-Q-R questionnaire.
5.2. Research Question 2
In total, 1952 people showed interest in attending the program by creating an account on its platform, but in the end 1863 people activated their account and were automatically allocated by the hosting platform into two research groups (Control: N = 932; Experimental: N = 931). Of these individuals, some did not respond to the survey questionnaires and did not participate in the program: 273 people (f = 14.7%) from the control group and 281 (f = 15.1%) from the experimental group. Eventually, 1309 people, 70.3% of those who activated an account, started the program, of whom 659 (f = 35.4%) belonged to the control group and 650 (f = 34.9%) to the experimental group.
During the program, for various reasons, some learners dropped out. We consider that a learner dropped out if, after participating in a program activity (quizzes, final weekly assignments), no further involvement of theirs is identified until the end of the program, including participation in the research. In total, after the start of the course, another 131 participants left the control group (f = 19.9%) and another 128 (f = 19.6%) the experimental group. The highest dropout rate is observed up to the middle of the program (4th week), by which point 119 people had left the control group (f = 18.1%) and 118 (f = 18.2%) the experimental group. After this point, the situation stabilizes, since only 22 more people left (control group: N = 12; experimental group: N = 10). Finally, 1050 people (f = 80.2%) completed the program: 528 (f = 80.1%) from the control group and 522 (f = 80.3%) from the experimental group.
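The enrollment-and-dropout funnel reported above can be verified arithmetically from the group counts:

```python
# Counts taken directly from the two research groups reported above
activated = 932 + 931            # accounts activated and allocated
never_started = 273 + 281        # activated but never participated
started = activated - never_started
dropped_out = 131 + 128          # left after at least one activity
completed = started - dropped_out

print(started, round(100 * started / activated, 1))    # 1309 70.3
print(completed, round(100 * completed / started, 1))  # 1050 80.2
```

Note that the 70.3% figure is taken relative to the 1863 activated accounts, and the 80.2% completion rate relative to the 1309 who started.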
The performance of the learners remained at high levels in both research groups: 62.5% of the control group and 66.5% of the experimental group scored in the highest band (90% - 100%), and 24.6% and 21.1% respectively in the band immediately below (80% - 89%). In the other grade bands there is relative equivalence. Overall, 95.5% of the control group and 95.4% of the experimental group reached the grade threshold for obtaining the certificate (70.0%).
In order to check whether there are statistically significant differences in final performance between the two research groups, a normality test was performed using the skewness and kurtosis measures, which showed that the sample distribution did not approach normality (Skewness: −2.421; Kurtosis: 9.232). The non-parametric Mann-Whitney U test showed that there is no statistically significant difference between the two research groups in terms of their performance (U = 131,460.5, p = 0.196 > 0.05) (Table 3).
Table 3. Mann-Whitney U test of final performance.
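The normality screening and the resulting test choice can be sketched as follows. The score distribution is synthetic, constructed only to reproduce the kind of strong negative skew and heavy tails reported for the final grades; it is not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic final scores for two groups: most learners near the top,
# with a small tail of low scores -> strong negative skew, heavy tails
g1 = np.concatenate([rng.normal(92, 5, 475), rng.uniform(0, 50, 25)])
g2 = np.concatenate([rng.normal(92, 5, 475), rng.uniform(0, 50, 25)])

pooled = np.concatenate([g1, g2])
skew = stats.skew(pooled)
kurt = stats.kurtosis(pooled)  # excess kurtosis; 0 for a normal distribution

# Pronounced skew/kurtosis argues against the t-test, so the two groups
# are compared with the rank-based Mann-Whitney U test instead
u, p = stats.mannwhitneyu(g1, g2, alternative="two-sided")
```

The same screening logic explains the split in the analysis: the near-normal certificate-threshold subsample is handled with a t-test, while the skewed full sample requires the non-parametric test.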
An independent-samples t-test (Skewness: −1.097; Kurtosis: 0.616) conducted on the performance of the learners who had reached the threshold for obtaining the program certificate showed that there is no statistically significant difference between the two research groups (t(1002) = −1.095, p = 0.274 > 0.05) (Table 4 & Table 5).
6.1. Research Question 1
The instructional design of the program and its general organization helped both research groups to improve their self-regulation and to further develop the self-regulatory strategies they used. However, the self-regulation of the learners in the experimental group was strengthened further in all self-regulatory factors, except for the Environment structuring factor, where the control group was superior by a very small, non-statistically significant margin. Even in the Help-seeking factor, in which both groups showed a large drop, whether due to the isolation of the learners and their avoidance of asking for help or exchanging ideas and reflections in the platform’s discussion forum, as has been found to be the case in distance learning and MOOCs (Stonebraker & Hazeltine, 2004; Puzziferro, 2008; Bárcena, Read, Martín-Monje, & Castrillo, 2014; Milligan & Littlejohn, 2014; Engle, Mankoff, & Carbrey, 2015; Goldberg et al., 2015; Yang, Wen, Howley, Kraut, & Rose, 2015; Broadbent, 2017; Kizilcec et al., 2017), or due to the design of the program, the experimental group displays higher, statistically significant means. In particular, statistically significant differences between the groups appear in the factors Metacognitive activities after learning, Persistence, and Help-seeking, as well as overall, in the factor Metacognitive activities as a whole and in the overall degree of self-regulation.
Applying the MCII self-regulatory strategy at the start of the program, in conjunction with the Task interest/value and Self-efficacy processes of the first phase of Zimmerman’s (2011) self-regulatory model, further strengthened the learners’ commitment (Persistence) to achieving their goals and continuing their effort, as research has shown it can (Oettingen, 2000; Oettingen, Pak, & Schnetter, 2001; Oettingen & Gollwitzer, 2010; Gollwitzer, Oettingen, Kirby, Duckworth, & Mayer, 2011).
Table 4. Descriptive final performance statistics ≥ 70%.
Table 5. Independent Samples t-test of final performance ≥ 70%.
Then, applying the self-regulatory processes of Self-recording and Metacognitive monitoring from the 2nd phase (Performance) of Zimmerman’s model during the week, and the four (4) additional processes of the 3rd phase (Self-reflection) of the same model (self-evaluation, causal attribution, self-satisfaction/affect, adaptive/defensive inferences), further strengthened their self-regulation. The positive role of these processes in self-regulation has been highlighted by various empirical studies (Ley & Young, 2001; Whipp & Chiarelli, 2004; Barnard, Paton, & Lan, 2008; Milligan & Littlejohn, 2016; Kizilcec et al., 2017; Callan & Cleary, 2019; Handoko, Gronseth, McNeil, Bonk, & Robin, 2019). Also, providing the learners with feedback and with the ability to monitor their progress toward their goals, through individual graphs that presented their own progress and comparative graphs that compared it with that of all the other learners in the experimental group, had a positive effect on their self-regulation, as in the research of Davis, Chen, Jivet et al. (2016a), who used a different application.
The above results seem to confirm that self-regulation is a complex skill that takes time to build and master (Harris et al., 2011), since statistically significant differences between the research groups appeared only at the end of the program, except for the Help-seeking factor, in which a statistically significant difference appeared in the middle of the program.
Finally, interpreting the movements of individuals between self-regulatory groups, it seems that, in trying to self-regulate, learners either tried new strategies or adapted existing ones to keep them effective; as Zimmerman (2000) states, no strategy is equally effective for everyone, continuously, or in all tasks and circumstances. The way strategies are implemented also plays an important role in their effectiveness: it is not enough for them to be implemented, they must be implemented correctly, as shown in the research of Davis, Chen, Van der Zee et al. (2016b).
6.2. Research Question 2
The participation of the learners follows what Clow (2013) likened to a funnel, representing the continuous decrease in learners from the enrollment period until the completion of MOOC programs.
In our program, a significant percentage showed interest in attending but never participated, while an equally significant percentage left the program during the first weeks, confirming a number of other studies (Dillahunt, Wang, & Teasley, 2014; Gütl et al., 2014; Heutte, Kaplan, Fenouillet, Caron, & Rosselle, 2014; Ho et al., 2014; Perna et al., 2014; Santos, Klerkx, Duval, Gago, & Rodríguez, 2014; Stein & Allione, 2014; Tucker, Dickens, & Divinsky, 2014; Whitmer et al., 2014; Wilkowski, Deutsch, Russell, 2014; Greene, Oswald, & Pomerantz, 2015; Kleiman, Wolf, & Frye, 2015; Koedinger, Kim, Jia, McLaughlin, & Bier, 2015; Lackner, Ebner, & Khalil, 2015; Skrypnyk et al., 2015; Allione & Stein, 2016; Davis, Chen, Jivet et al., 2016a; Evans, Baker, & Dee, 2016; Fidalgo-Blanco, Sein-Echaluce, & García-Peñalvo, 2016; Maldonado et al., 2016; Tseng, Tsao, Yu, Chan, & Lai, 2016; Crosslin, Dellinger, Joksimovic, Kovanovic, & Gaševic, 2017; Tawfik et al., 2017).
Nevertheless, a very high percentage of learners completed the program, and specifically 80.1% (N = 528) of the control group and 80.3% (N = 522) of the experimental group, and a total of 80.2% (N = 1050) of those who started it.
The slightly higher percentage of the experimental group is due to the greater increase in its self-regulation. However, the fact that the two research groups showed very high completion rates without statistically significant differences between them indicates that other factors contributed to this result, such as the good instructional design of the program (Khalil & Ebner, 2013; de Barba, Kennedy, & Ainley, 2016) and its moderate duration (Jordan, 2014; Jordan, 2015), the short duration of the videos (Kim et al., 2014; Thille et al., 2014; Guo, Kim, & Rubin, 2014; Hone & El Said, 2016) and their type (with explanatory slides) (Kim et al., 2014; Guo et al., 2014), the type of evaluations they included (peer review) (Jordan, 2015), the satisfaction of the learners with the program and the instructional material (Whitmer et al., 2014; Alraimi et al., 2015; Hew, 2016; Hone & El Said, 2016), the ongoing support provided to them (Kop et al., 2011; Belanger & Thornton, 2013; Castano-Munoz, Kalz, Kreijns, & Punie, 2016; Hadi & Rawson, 2016; Hew, 2016; Hone & El Said, 2016), the (timely) feedback they received (Fournier et al., 2014; Ramesh, Goldwasser, Huang, Daume III, & Getoor, 2014; Wilkowski, Deutsch et al., 2014; Davis et al., 2017), the connection of theory and practice through the case studies they were called upon to deal with (Hew, 2016), the hints and feedback provided in quizzes and final weekly assignments (Koedinger et al., 2015), the flexible evaluation policy of the program (Li, Kidziński, Jermann, & Dillenbourg, 2015), the moderate workload required by the program, except for the first two weeks when the program was more demanding (Cassidy et al., 2014), and even their interest in obtaining the certificate of completion (Haug et al., 2014; Castano-Munoz et al., 2016; Greene et al., 2015; Pursel, Zhang, Jablokow, Choi, & Velegol, 2016).
Regarding performance, the experimental group shows a higher percentage of individuals in the highest band (90% - 100%) than the control group, but without statistically significant differences between them. Therefore, in addition to the self-regulation of the learners, their performance was supported by the good instructional design of the program (Castaño, Maiz, & Garay, 2015), their cognitive background in the learning object and their relevant everyday experience in schools (DeBoer, Stump, Seaton, & Breslow, 2013; Engle et al., 2015; Phan, McNeil, & Robin, 2016), participation in peer reviews (Admiraal, Huisman, & Van de Ven, 2014), their active participation in course activities (Guo & Reinecke, 2014; Diver & Martinez, 2015; Koedinger et al., 2015; de Barba, Kennedy, & Ainley, 2016; Ruipérez-Valiente et al., 2016; Tseng et al., 2016) and in the forum or in start-up and optional activities (Coetzee, Fox, Hearst, & Hartmann, 2014; Comer, Clark, & Canelas, 2014; Diver & Martinez, 2015; Alario-Hoyos, Muñoz-Merino, Pérez-Sanagustín, Delgado Kloos, & Parada, 2016; Phan et al., 2016), where they exchanged views and/or were informed of the views of others.
The results of the research are consistent with those of Davis, Chen, Van der Zee et al. (2016b) and Kizilcec et al. (2017), who concluded that the implementation of specific self-regulatory strategies (goal setting, strategic planning, environment structuring, help-seeking, reflection) did not affect the performance of learners, but did affect participation and program completion rates. In a second application of their research, Davis, Chen, Van der Zee et al. (2016b) found that those who genuinely engaged in developing a study plan and recording their goals showed greater participation and better performance. It is not enough, then, simply to implement a self-regulatory strategy; it must be implemented correctly in order to bring positive results.
They are also consistent with the study of Jivet (2016), which showed an increase in performance, without statistically significant differences, among learners who used an application similar to MCII+ (the Learning Tracker), which presented in graphs the participation and performance of learners in comparison with the best performance of learners from a previous training period.
7. Conclusion, Limitations and Future Research
The present study investigated the extent to which the implementation of the MCII self-regulatory strategy in combination with the self-regulatory processes of Zimmerman’s self-regulatory model, from all three phases, helped to increase the self-regulation of the learners, the completion rates, and the performance of those who attended the program.
The findings show that supporting the learners in applying self-regulatory strategies and processes helped strengthen their self-regulation in the factors Metacognitive activities after learning and Persistence, as well as in the factor Metacognitive activities as a whole. However, it failed to enhance self-regulation in the Help-seeking factor, for reasons attributable not only to the program itself but also to the learners.
However, the fact that the program showed high completion rates (control group: N = 528, f = 80.1%; experimental group: N = 522, f = 80.3%) despite the large drop in the Help-seeking factor suggests that help-seeking is not a critical factor in the successful completion of the learners, as it does not affect their commitment to achieving their goals, a finding consistent with the research of Kizilcec et al. (2017).
The high completion rates of both research groups, the high performance, and the general increase in their self-regulation (apart from the Help-seeking factor) suggest that several other features of the program played an important role. The main role in these results was played by the instructional design, the good organization of the program, and its quality instructional material, and to a lesser extent by the self-regulation of the learners.
Despite the large sample of our research, some limitations do not allow the results to be generalized. In particular, our research examined only one course, in which a relatively homogeneous sample participated: mainly teachers with at least a higher-education degree and with knowledge and experience of the program’s subject. Nevertheless, the findings show that not only self-regulation but also various other factors related to the instructional design, organization, and instructional material of the program play an important role in performance and in high completion rates. Future designers of similar programs should therefore attend to these features, so that they satisfy larger percentages of learners and do not focus only on strengthening a single factor, e.g., self-regulation or instructional material. Of course, our findings should be confirmed in programs in which a heterogeneous sample participates.
 Alario-Hoyos, C., Estévez-Ayres, I., Pérez-Sanagustín, M., Kloos, C. D., & Fernández Panadero, C. (2017). Understanding Learners’ Motivation and Learning Strategies in MOOCs. The International Review of Research in Open and Distributed Learning, 18.
 Alario-Hoyos, C., Muñoz-Merino, P. J., Pérez-Sanagustín, M., Delgado Kloos, C., & Parada, G. (2016). Who Are the Top Contributors in a MOOC? Relating Participants’ Performance and Contributions. Journal of Computer Assisted Learning, 32, 232-243.
 Allione, G., & Stein, R. M. (2016). Mass Attrition: An Analysis of Drop Out from Principles of Microeconomics MOOC. The Journal of Economic Education, 47, 174-186.
 Alonso-Mencía, M. E., Alario-Hoyos, C., Maldonado-Mahauad, J., Estévez-Ayres, I., Pérez-Sanagustín, M., & Delgado Kloos, C. (2019). Self-Regulated Learning in MOOCs: Lessons Learned from a Literature Review. Educational Review, 72, 319-345.
 Alraimi, K. M., Zo, H., & Ciganek, A. P. (2015). Understanding the MOOCs Continuance: The Role of Openness and Reputation. Computers & Education, 80, 28-38.
 Bárcena, E., Read, T., Martín-Monje, E., & Castrillo, M. D. (2014). Analysing Student Participation in Foreign Language MOOCs: A Case Study. EMOOCs 2014: European MOOCs Stakeholders Summit, 11-17.
 Barnard, L., Lan, W. Y., To, Y. M., Paton, V. O., & Lai, S. L. (2009). Measuring Self-Regulation in Online and Blended Learning Environments. The Internet and Higher Education, 12, 1-6.
 Barnard, L., Paton, V., & Lan, W. (2008). Online Self-Regulatory Learning Behaviors as a Mediator in the Relationship between Online Course Perceptions with Achievement. The International Review of Research in Open and Distributed Learning, 9.
 Barnard-Brak, L., Lan, W. Y., & Paton, V. O. (2011). Measuring and Profiling Self-Regulated Learning in the Online Environment. In G. Dettori, & D. Persico (Eds.), Fostering Self-Regulated Learning through ICT (pp. 27-38). Hershey, PA: IGI Global.
 Beaven, T., Codreanu, T., & Creuzé, A. (2014). Motivation in a Language MOOC: Issues for Course Designers. In E. Martín-Monje (Ed.), Language MOOCs: Providing Learning, Transcending Boundaries (pp. 48-66). Berlin: De Gruyter Open.
 Boekaerts, Μ., Pintrich, R. P., & Zeidner, M. (2000). Self-Regulation: An Introductory Overview. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of Self-Regulation (pp. 1-9). San Diego, CA: Academic Press.
 Broadbent, J. (2017). Comparing Online and Blended Learner’s Self-Regulated Learning Strategies and Academic Performance. Internet and Higher Education, 33, 24-32.
 Callan, G. L., & Cleary, T. J. (2019). Examining Cyclical Phase Relations and Predictive Influences of Self-Regulated Learning Processes on Mathematics Task Performance. Metacognition and Learning, 14, 43-63.
 Cassidy, D., Breakwell, N., & Bailey, J. (2014). Keeping Them Clicking: Promoting Student Engagement in MOOC Design. The All Ireland Journal of Teaching and Learning in Higher Education, 6, 1-15.
 Castano-Munoz, J., Kalz, M., Kreijns, K., & Punie, Y. (2016). Influence of Employer Support for Professional Development on MOOCs Enrolment and Completion: Results from a Cross-Course Survey. Research Track, 251-263.
 Cleary, T. J., Callan, G. L., & Zimmerman, B. J. (2012). Assessing Self-Regulation as a Cyclical, Context-Specific Phenomenon: Overview and Analysis of SRL Microanalytic Protocols. Education Research International, 2012, Article ID: 428639.
 Clow, D. (2013). MOOCs and the Funnel of Participation. In K. Verbert, E. Duval, & X. Ochoa (Eds.), Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 185-189). New York, NY: Association for Computing Machinery.
 Coetzee, D., Fox, A., Hearst, M. A., & Hartmann, B. (2014). Should Your MOOC Forum Use a Reputation System? In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 1176-1187). New York, NY: Association for Computing Machinery.
 Comer, D. K., Clark, C. R., & Canelas, D. A. (2014). Writing to Learn and Learning to Write across the Disciplines: Peer-to-Peer Writing in Introductory-Level MOOCs. The International Review of Research in Open and Distributed Learning, 15, 26.
 Crosslin, M. (2016). Customizable Modality Pathway Learning Design: Exploring Personalized Learning Choices through a Lens of Self-Regulated Learning. Denton, TX: UNT Digital Library.
 Crosslin, M., Dellinger, J. T., Joksimovic, S., Kovanovic, V., & Gaševic, D. (2017). Customizable Modalities for Individualized Learning: Examining Patterns of Engagement in Dual-Layer MOOCs. Online Learning, 22, 19-38.
 Davis, D., Chen, G., Jivet, I., Hauff, C., & Houben, G. J. (2016a). Encouraging Metacognition and Self-Regulation in MOOCs through Increased Learner Feedback. In S. Bull, B. M. Ginon, J. Kay, M. D. Kickmeier-Rust, & M. D. Johnson (Eds.), Proceedings of the LAK 2016 Workshop on Learning Analytics for Learners (pp. 17-22). Aachen, Germany: CEUR-WS.
 Davis, D., Chen, G., Van der Zee, T., Hauff, C., & Houben, G. J. (2016b). Retrieval Practice and Study Planning in MOOCs: Exploring Classroom-Based Self-Regulated Learning Strategies at Scale. In K. Verbert, M. Sharples, & T. Klobučar (Eds.), European Conference on Technology Enhanced Learning (pp. 57-71). Cham: Springer.
 Davis, D., Jivet, I., Kizilcec, R. F., Chen, G., Hauff, C., & Houben, G. J. (2017). Follow the Successful Crowd: Raising MOOC Completion Rates through Social Comparison at Scale. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (pp. 454-463). New York, NY: Association for Computing Machinery.
 De Barba, P. G., Kennedy, G. E., & Ainley, M. D. (2016). The Role of Students’ Motivation and Participation in Predicting Performance in a MOOC. Journal of Computer Assisted Learning, 32, 218-231.
 DeBoer, J., Stump, G. S., Seaton, D., & Breslow, L. (2013). Diversity in MOOC Students’ Backgrounds and Behaviors in Relationship to Performance in 6.002x. In Proceedings of the Sixth Learning International Networks Consortium Conference (Vol. 4).
 Diana, N., Eagle, M., Stamper, J. C., & Koedinger, K. R. (2016). Extracting Measures of Active Learning and Student Self-Regulated Learning Strategies from MOOC Data. In Proceedings of the 9th International Conference on Educational Data Mining (pp. 583-584).
 Dillahunt, T., Wang, B., & Teasley, S. (2014). Democratizing Higher Education: Exploring MOOC Use among Those Who Cannot Afford a Formal Education. The International Review of Research in Open and Distributed Learning, 15, 1-19.
 Dinsmore, D. L., Alexander, P. A., & Louglin, S. M. (2008). Focusing the Conceptual Lens on Metacognition, Self-Regulation, and Self-Regulated Learning. Educational Psychology Review, 20, 391-409.
 Engle, D., Mankoff, C., & Carbrey, J. (2015). Coursera’s Introductory Human Physiology Course: Factors That Characterize Successful Completion of a MOOC. The International Review of Research in Open and Distributed Learning, 16, 46-68.
 Fidalgo-Blanco, Á., Sein-Echaluce, M. L., & García-Peñalvo, F. J. (2016). From Massive Access to Cooperation: Lessons Learned and Proven Results of a Hybrid xMOOC or cMOOC Pedagogical Approach to MOOCs. International Journal of Educational Technology in Higher Education, 13, Article No. 24.
 Fini, A. (2009). The Technological Dimension of A Massive Open Online Course: The Case of the CCK08 Course Tools. The International Review of Research in Open and Distributed Learning, 10.
 Gamage, D., Fernando, S., & Perera, I. (2015). Factors Leading to an Effective MOOC from Participants Perspective. 2015 8th International Conference on Ubi-Media Computing (UMEDIA), 24-26 August 2015, Colombo, Sri Lanka, 230-235.
 García, B. J., Tenorio, G. C., & Ramírez, M. S. (2015). Self-Motivation Challenges for Student Involvement in the Open Educational Movement with MOOC. Universities and Knowledge Society Journal, 12, 91-103.
 Goldberg, L. R., Bell, E., King, C., O’Mara, C., McInerney, F., Robinson, A., & Vickers, J. (2015). Relationship between Participants’ Level of Education and Engagement in Their Completion of the Understanding Dementia Massive Open Online Course. BMC Medical Education, 15, 60.
 Gollwitzer, A., Oettingen, G., Kirby, T. A., Duckworth, A. L., & Mayer, D. (2011). Mental Contrasting Facilitates Academic Performance in School Children. Motivation and Emotion, 35, 403-412.
 Gollwitzer, P. M., Mayer, D., Frick, C., & Oettingen, G. (2018). Promoting the Self-Regulation of Stress in Health Care Providers: An Internet-Based Intervention. Frontiers in Psychology, 9, 838.
 Greene, J. A., Oswald, C. A., & Pomerantz, J. (2015). Predictors of Retention and Achievement in A Massive Open Online Course. American Educational Research Journal, 52, 925-955.
 Guo, P. J., & Reinecke, K. (2014). Demographic Differences in How Students Navigate through MOOCs. In Proceedings of the First ACM Conference on Learning@ Scale Conference (pp. 21-30). New York, NY: Association for Computing Machinery.
 Guo, P. J., Kim, J., & Rubin, R. (2014). How Video Production Affects Student Engagement: An Empirical Study of MOOC Videos. In Proceedings of the First ACM Conference on Learning@ Scale Conference (pp. 41-50). New York, NY: Association for Computing Machinery.
 Gütl, C., Rizzardini, R. H., Chang, V., & Morales, M. (2014). Attrition in MOOC: Lessons Learned from Drop-Out Students. In L. Uden, J. Sinclair, Y. H. Tao, & D. Liberona (Eds.), International Workshop on Learning Technology for Education in Cloud (pp. 37-48). Cham: Springer.
 Hadi, S. M., & Rawson, R. (2016). Driving Learner Engagement and Completion within MOOCs: A Case for Structured Learning Support. Proceedings of the European Stakeholder Summit on Experiences and best Practices in and Around MOOCs (EMOOCS 2016), Graz, Austria, 81.
 Handoko, E., Gronseth, S. L., McNeil, S. G., Bonk, C. J., & Robin, B. R. (2019). Goal Setting and MOOC Completion: A Study on the Role of Self-Regulated Learning in Student Performance in Massive Open Online Courses. International Review of Research in Open and Distributed Learning, 20.
 Harris, B. R., Reinhard, W., & Pilia, A. (2011). Strategies to Promote Self-Regulated Learning in Online Environments. In G. Dettori, & D. Persico (Eds.), Fostering Self-Regulated Learning through ICT (pp. 295-315). Hershey, PA: IGI Global.
 Haug, S., Wodzicki, K., Cress, U., & Moskaliuk, J. (2014). Self-Regulated Learning in MOOCs: Do Open Badges and Certificates of Attendance Motivate Learners to Invest More. In U. Cress, & C. D. Kloos (Eds.), Proceedings of the European MOOC Stakeholder Summit 2014 (pp. 66-72).
 Heutte, J., Kaplan, J., Fenouillet, F., Caron, P. A., & Rosselle, M. (2014). MOOC User Persistence. In L. Uden, J. Sinclair, Y. H. Tao, & D. Liberona (Eds.), International Workshop on Learning Technology for Education in Cloud (pp. 13-24). Cham: Springer.
 Hew, K. F. (2016). Promoting Engagement in Online Courses: What Strategies Can We Learn from Three Highly Rated MOOCS. British Journal of Educational Technology, 47, 320-341.
 Hew, K. F., & Cheung, W. S. (2014). Students’ and Instructors’ Use of Massive Open Online Courses (MOOCs): Motivations and Challenges. Educational Research Review, 12, 45-58.
 Ho, A. D., Reich, J., Nesterko, S., Seaton, D. T., Mullaney, T., Waldo, J., & Chuang, I. (2014). HarvardX and MITx: The First Year of Open Online Courses. HarvardX and MITx Working Paper No. 1.
 Huang, B., & Hew, K. F. T. (2016). Measuring Learners’ Motivation Level in Massive Open Online Courses. International Journal of Information and Education Technology, 6, 759-764.
 Jansen, R. S., Van Leeuwen, A., Janssen, J., & Kester, L. (2018). Validation of the Revised Self-Regulated Online Learning Questionnaire. In V. Pammer-Schindler, M. Pérez-Sanagustín, H. Drachsler, R. Elferink, & M. Scheffel (Eds.), European Conference on Technology Enhanced Learning (pp. 116-121). Cham: Springer.
 Jivet, I. (2016). The Learning Tracker. A Learner Dashboard that Encourages Self-Regulation in MOOC Learners. Master Thesis, Delft: Delft University of Technology.
 Jordan, K. (2014). Initial Trends in Enrolment and Completion of Massive Open Online Courses. The International Review of Research in Open and Distributed Learning, 15, 133-160.
 Jordan, K. (2015). Massive Open Online Course Completion Rates Revisited: Assessment, Length and Attrition. International Review of Research in Open and Distributed Learning, 16, 341-358.
 Kappes, A., Oettingen, G., & Pak, H. (2012). Mental Contrasting and the Self-Regulation of Responding to Negative Feedback. Personality and Social Psychology Bulletin, 38, 845-857.
 Khalil, H., & Ebner, M. (2013). “How Satisfied Are You with Your MOOC?”—A Research Study on Interaction in Huge Online Courses. In EdMedia: World Conference on Educational Media and Technology (pp. 830-839). Association for the Advancement of Computing in Education (AACE).
 Kim, J., Guo, P. J., Seaton, D. T., Mitros, P., Gajos, K. Z., & Miller, R. C. (2014). Understanding In-Video Dropouts and Interaction Peaks in Online Lecture Videos. In Proceedings of the First ACM Conference on Learning@ Scale Conference (pp. 31-40). New York, NY: Association for Computing Machinery.
 Kizilcec, R. F., & Cohen, G. L. (2017). Eight-Minute Self-Regulation Intervention Raises Educational Attainment at Scale in Individualist but Not Collectivist Cultures. Proceedings of the National Academy of Sciences of the United States of America, 114, 4348-4353.
 Kizilcec, R. F., & Halawa, S. (2015). Attrition and Achievement Gaps in Online Learning. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (pp. 57-66). New York, NY: Association for Computing Machinery.
 Kizilcec, R. F., Pérez-Sanagustín, M., & Maldonado, J. J. (2016). Recommending Self-Regulated Learning Strategies Does Not Improve Performance in a MOOC. In Proceedings of the Third (2016) ACM Conference on Learning @ Scale (pp. 101-104). New York, NY: Association for Computing Machinery.
 Kizilcec, R. F., Pérez-Sanagustín, M., & Maldonado, J. J. (2017). Self-Regulated Learning Strategies Predict Learner Behavior and Goal Attainment in Massive Open Online Courses. Computers & Education, 104, 18-33.
 Kleiman, G., Wolf, M. A., & Frye, D. (2015). Educating Educators: Designing MOOCs for Professional Learning. In The MOOC Revolution: Massive Open Online Courses and the Future of Education (pp. 117-146).
 Koedinger, K. R., Kim, J., Jia, J. Z., McLaughlin, E. A., & Bier, N. L. (2015). Learning Is Not a Spectator Sport: Doing Is Better than Watching for Learning from a MOOC. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (pp. 111-120). New York, NY: Association for Computing Machinery.
 Kop, R., Fournier, H., & Mak, J. S. F. (2011). A Pedagogy of Abundance or a Pedagogy to Support Human Beings? Participant Support on Massive Open Online Courses. The International Review of Research in Open and Distributed Learning, 12, 74-93.
 Koutsodimou, K., & Tzimogiannis, A. (2016). Massive Open Online Courses and Teacher Professional Development: Design Issues and a Study of Participants’ Views. In T. A. Mikropoulos, N. Papachristos, A. Tsiara, & P. Chalki (Eds.), Proceedings of the 10th Pan-Hellenic and International Conference “ICT in Education”. Ioannina: HAICTE.
 Lee, W. L., Chinna, K., Lim Abdullah, K., & Zainal Abidin, I. (2018). The Forward-Backward and Dual-Panel Translation Methods Are Comparable in Producing Semantic Equivalent Versions of a Heart Quality of Life Questionnaire. International Journal of Nursing Practice, 25, e12715.
 Li, N., Kidziński, L., Jermann, P., & Dillenbourg, P. (2015). MOOC Video Interaction Patterns: What Do They Tell Us? In G. Conole, T. Klobucar, C. Rensing, J. Konert, & E. Lavoué (Eds.), Design for Teaching and Learning in a Networked World (pp. 197-210). Cham: Springer.
 Maldonado, J. J., Palta, R., Vázquez, J., Bermeo, J. L., Pérez-Sanagustín, M., & Munoz-Gama, J. (2016). Exploring Differences in How Learners Navigate in MOOCs Based on Self-Regulated Learning and Learning Styles: A Process Mining Approach. In 2016 XLII Latin American Computing Conference (CLEI) (pp. 1-12).
 Milligan, C., & Littlejohn, A. (2014). Supporting Professional Learning in A Massive Open Online Course. The International Review of Research in Open and Distributed Learning, 15, 197-213.
 Milligan, C., & Littlejohn, A. (2016). How Health Professionals Regulate Their Learning in Massive Open Online Courses. The Internet and Higher Education, 31, 113-121.
 Nawrot, I., & Doucet, A. (2014). Building Engagement for MOOC Students: Introducing Support for Time Management on Online Learning Platforms. In Proceedings of the 23rd International Conference on World Wide Web (pp. 1077-1082). New York, NY: Association for Computing Machinery.
 Oettingen, G., & Gollwitzer, P. M. (2010). Strategies of Setting and Implementing Goals: Mental Contrasting and Implementation Intentions. In J. E. Maddux, & J. P. Tangney (Eds.), Social Psychological Foundations of Clinical Psychology (pp. 114-135). New York: Guilford Press.
 Oettingen, G., & Gollwitzer, P. M. (2015). Self-Regulation: Principles and Tools. In G. Oettingen, & P. M. Gollwitzer (Eds.), Self-Regulation in Adolescence (pp. 3-29). New York: Cambridge University Press.
 Oettingen, G., Kappes, H. B., Guttenberg, K. B., & Gollwitzer, P. M. (2015). Self-Regulation of Time Management: Mental Contrasting with Implementation Intentions. European Journal of Social Psychology, 45, 218-229.
 Oettingen, G., Pak, H.-J., & Schnetter, K. (2001). Self-Regulation of Goal-Setting: Turning Free Fantasies about the Future into Binding Goals. Journal of Personality and Social Psychology, 80, 736-753.
 Onah, D. F., & Sinclair, J. (2017). Assessing Self-Regulation of Learning Dimensions in a Stand-Alone MOOC Platform. International Journal of Engineering Pedagogy, 7, 4-21.
 Park, Y., Jung, I., & Reeves, T. C. (2015). Learning from MOOCs: A Qualitative Case Study from the Learners’ Perspectives. Educational Media International, 52, 72-87.
 Perna, L. W., Ruby, A., Boruch, R. F., Wang, N., Scull, J., Ahmad, S., & Evans, C. (2014). Moving through MOOCs: Understanding the Progression of Users in Massive Open Online Courses. Educational Researcher, 43, 421-432.
 Phan, T., McNeil, S. G., & Robin, B. R. (2016). Students’ Patterns of Engagement and Course Performance in a Massive Open Online Course. Computers & Education, 95, 36-44.
 Pursel, B. K., Zhang, L., Jablokow, K. W., Choi, G. W., & Velegol, D. (2016). Understanding MOOC Students: Motivations and Behaviours Indicative of MOOC Completion. Journal of Computer Assisted Learning, 32, 202-217.
 Puzziferro, M. (2008). Online Technologies Self-Efficacy and Self-Regulated Learning as Predictors of Final Grade and Satisfaction in College-Level Online Courses. American Journal of Distance Education, 22, 72-89.
 Ramesh, A., Goldwasser, D., Huang, B., Daume III, H., & Getoor, L. (2014). Understanding MOOC Discussion Forums Using Seeded LDA. In J. Tetreault, J. Burstein, & C. Leacock (Eds.), Proceedings of the 9th ACL Workshop on Innovative Use of NLP for Building Educational Applications (pp. 28-33). Stroudsburg, PA: Association for Computational Linguistics.
 Ruipérez-Valiente, J. A., Muñoz-Merino, P. J., Kloos, C. D., Niemann, K., Scheffel, M., & Wolpers, M. (2016). Analyzing the Impact of Using Optional Activities in Self-Regulated Learning. IEEE Transactions on Learning Technologies, 9, 231-243.
 Santos, J. L., Klerkx, J., Duval, E., Gago, D., & Rodríguez, L. (2014). Success, Activity and Drop-Outs in MOOCs: An Exploratory Study on the UNED COMA Courses. In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (pp. 98-102). New York, NY: Association for Computing Machinery.
 Schulze, A. S. (2014). Massive Open Online Courses (MOOCs) and Completion Rates: Are Self-Directed Adult Learners the Most Successful at MOOCs? Doctoral Dissertation, Malibu, CA: Pepperdine University.
 Shapiro, H. B., Lee, C. H., Roth, N. E. W., Li, K., Cetinkaya-Rundel, M., & Canelas, D. A. (2017). Understanding the Massive Open Online Course (MOOC) Student Experience: An Examination of Attitudes, Motivations, and Barriers. Computers & Education, 110, 35-50.
 Skrypnyk, O., de Vries, P., & Hennis, T. (2015). Reconsidering Retention in MOOCs: The Relevance of Formal Assessment and Pedagogy. In Proceedings of the Third European MOOCs Stakeholder Summit (eMOOCs 2015) (pp. 166-172). Mons, Belgium.
 Tawfik, A. A., Reeves, T. D., Stich, A. E., Gill, A., Hong, C., McDade, J. et al. (2017). The Nature and Level of Learner-Learner Interaction in a Chemistry Massive Open Online Course (MOOC). Journal of Computing in Higher Education, 29, 432-433.
 Tomkin, J. H., & Charlevoix, D. (2014). Do Professors Matter? Using an A/B Test to Evaluate the Impact of Instructor Involvement on MOOC Student Outcomes. In Proceedings of the First ACM Conference on Learning @ Scale (pp. 71-78). New York, NY: Association for Computing Machinery.
 Tseng, S. F., Tsao, Y. W., Yu, L. C., Chan, C. L., & Lai, K. R. (2016). Who Will Pass? Analyzing Learner Behaviors in MOOCs. Research and Practice in Technology Enhanced Learning, 11, Article No. 8.
 Tucker, C. S., Dickens, B., & Divinsky, A. (2014). Knowledge Discovery of Student Sentiments in MOOCs and Their Impact on Course Performance. In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (pp. V003T04A028-V003T04A028). American Society of Mechanical Engineers.
 UNESCO (2016). Making Sense of MOOCs: A Guide for Policy-Makers in Developing Countries. France: United Nations Educational, Scientific and Cultural Organization (UNESCO); Canada: Commonwealth of Learning (COL).
 Veletsianos, G., Reich, J., & Pasquini, L. A. (2016). The Life between Big Data Log Events: Learners’ Strategies to Overcome Challenges in MOOCs. AERA Open, 2.
 Whitehill, J., Williams, J., Lopez, G., Coleman, C., & Reich, J. (2015). Beyond Prediction: First Steps toward Automatic Intervention in MOOC Student Stopout. In Proceedings of the 8th International Conference on Educational Data Mining (pp. 171-178).
 Whitmer, J., Schiorring, E., & James, P. (2014). Patterns of Persistence: What Engages Students in a Remedial English Writing MOOC? In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (pp. 279-280). New York, NY: Association for Computing Machinery.
 Wilkowski, J., Deutsch, A., & Russell, D. M. (2014). Student Skill and Goal Achievement in the Mapping with Google MOOC. In Proceedings of the First ACM Conference on Learning @ Scale (pp. 3-10). New York, NY: Association for Computing Machinery.
 Yang, D., Wen, M., Howley, I., Kraut, R., & Rose, C. (2015). Exploring the Effect of Confusion in Discussion Forums of Massive Open Online Courses. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (pp. 121-130). New York, NY: Association for Computing Machinery.
 Zheng, S., Rosson, M. B., Shih, P. C., & Carroll, J. M. (2015). Understanding Student Motivation, Behaviors and Perceptions in MOOCs. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 1882-1895). New York, NY: Association for Computing Machinery.
 Zimmerman, B. J. (2000). Attaining Self-Regulation: A Social Cognitive Perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of Self-Regulation (pp. 13-41). San Diego, CA: Academic Press.
 Zimmerman, B. J. (2011). Motivational Sources and Outcomes of Self-Regulated Learning and Performance. In B. J. Zimmerman, & D. H. Schunk (Eds.), Handbook of Self-Regulation of Learning and Performance (Chap. 4, pp. 49-64). Abingdon-on-Thames: Taylor & Francis.
 Zutshi, S., O’Hare, S., & Rodafinos, A. (2013). Experiences in MOOCs: The Perspective of Students. American Journal of Distance Education, 27, 218-227.