1. Background and Context
Training is often defined as planned and systematic activities designed to stimulate the acquisition of expertise, skills, and attitudes among participants (Wisshak & Hochholdinger, 2018). Training programs prepare their participants to succeed in their respective workplaces through the constant improvement of required values, skills, and knowledge (Bernardino & Curado, 2020; Cheng et al., 2019). For any civil service, regular training is critically important. The Bangladesh Civil Service Administration Academy (BCSAA) is one of the leading public training institutes in Bangladesh. As an attached department of the Ministry of Public Administration, the Academy1 is mainly responsible for imparting training to Bangladesh Civil Service (Administration) cadre officials2. Situated in the capital city of Dhaka, the Academy offers 40-50 training courses every year for civil servants working in the field administration and the central secretariat. These residential training courses are rigorous and comprehensive, comprising academic sessions, extra-academic learning sessions, skill development sessions, physical exercises, exchange programmes, and extracurricular activities. These programmes strongly emphasize preparing disciplined human resources for the government administration. Several well-reputed, experienced, and skilled members of the Administration cadre work as faculty members in the Academy. The management of these courses regularly invites learned speakers from all walks of society, including retired government officials, top-level bureaucrats, renowned academics, reputed civil society members, and accomplished professionals, to conduct sessions and share their knowledge, skills, and values with the trainee participants. As a developing country, Bangladesh needs competent and confident civil servants to face emerging national and international challenges (Hoque, 2018a; Adnan, Ying et al., 2021; Hoque & Tama, 2020; Tama et al., 2018).
They need to be prepared to tackle a wide range of sustainable development issues, including climate change, institutional development, migration, and so forth (Hoque, 2018b; Sarker et al., 2020; Hoque, 2021; Tama et al., 2021). The participants take part in sessions on various national-level issues in different fields of development, including agricultural food production and public policy interventions (Adnan, Sarker et al., 2021; Hoque & Tama, 2021; Tama et al., 2015). Visual communication, sports, and reporting are also taught (Hoque, 2014).
The Academy stresses the quality of both the training and the trainers. Regular evaluation is an integral part of the ongoing quality assurance process (Wiig, 2002). The faculty members minutely evaluate the performance of the participants, while the participants consistently evaluate all segments of the programme. Evaluation of speakers by the participants is an essential part of this entire process. On each working day, the participants receive a prescribed evaluation form in which they put their marks, remarks, observations, and suggestions, and submit it to the management. The course management team (CMT) collects these forms from all participants to assess how the programme is being run. The team also takes account of the comments and suggestions received from the participants as references for quality improvement. The CMT also at times evaluates the training programme on a weekly basis and asks the participants whether they have anything to say about the performance of the speakers and the quality of the training. The evaluation wing of the Academy is responsible for providing an in-depth analysis of the information received through the forms. This research work aimed to understand the evaluation of speakers by the participants, and to explore its functions, limitations, and challenges in relation to the training programme.
2. Statement of the Problem
The main objective of the evaluation of speakers by the participants is to understand how the participants perceive a speaker’s performance during a session, and to decide whether the CMT should reinvite that speaker to the same or similar sessions. The performance, as prescribed by the evaluation form, is rated on five categorical parameters: 1) the speaker’s knowledge of the subject, 2) ability to present the topic clearly, 3) ability to engage the participants, 4) ability to answer questions from the participants, and 5) ability to manage time efficiently. Each of these parameters carries five marks, so a participant can rate the performance of the speaker on a scale of 25. Typically, about 40 participants take part in a training course. The faculty committee of the Academy takes the average score obtained by the speaker into consideration and decides whether the speaker should continue taking that session. The committee also considers the comments (which are optional) provided by the participants. This process of speaker evaluation remains a crucial reference point for the next CMT3 when preparing a list of probable speakers for the respective training modules. The Academy traditionally considers anything above 90% as “very good” and anything below 70% as “poor”.
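To make the arithmetic of the form concrete, the scoring scheme described above can be sketched in a few lines of code. This is purely illustrative (the Academy's process is paper-based): the function names are hypothetical, and since the text names no band between "poor" and "very good", the middle label below is an assumption.

```python
def score_speaker(ratings):
    """Convert one participant's five 0-5 parameter ratings
    (knowledge, clarity, engagement, answering, time management)
    into a percentage on the Academy's 100-point scale."""
    assert len(ratings) == 5 and all(0 <= r <= 5 for r in ratings)
    return sum(ratings) / 25 * 100

def label(percentage):
    """Apply the Academy's traditional bands: above 90% is
    "very good", below 70% is "poor". The in-between label is a
    placeholder, as the text names no middle category."""
    if percentage > 90:
        return "very good"
    if percentage < 70:
        return "poor"
    return "unlabelled"

# One participant rates a speaker 5, 4, 5, 4, 5 across the parameters:
pct = score_speaker([5, 4, 5, 4, 5])   # 92.0
band = label(pct)                      # "very good"
```

The faculty committee would then average such percentages over the roughly 40 participants in a course before deciding whether to reinvite the speaker.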
However, several issues in this evaluation process remain crucially unclear. First, it is often found that a speaker obtains a very good rating on a topic from one group of participants, while another group in another section gives the same speaker poor marks. Second, several speakers who receive poor marks from the participants claim that they have been unjustly evaluated. They say that participants do not think much before putting down a quantitative remark. They also argue that if a speaker does not take the session in a relaxed manner and instead applies strict rules, some participants tend to give him poor marks and remarks. Third, the CMT often identifies inconsistencies in the marks and remarks given to the same speaker by the same group of participants. Fourth, the faculty members are believed to be carefully chosen individuals who perform well in their respective training sessions, so the evaluation method comes under serious question when any of them receives extremely poor marks or remarks from the participants. Therefore, it has been a long-standing debate whether this evaluation of speakers by the participants is effective. Nevertheless, the evaluation results remain the only official record for the CMT to refer to when deciding whether to invite a speaker to the Academy.
This long-standing debate gave birth to the idea for this research: to answer the key questions regarding the evaluation of speakers, and to provide readers with a comprehensive understanding of the pros and cons of evaluation by the participants. Hence, this research aims to generate useful insights about the credibility and effectiveness of the evaluation results, and to offer a set of theoretical and practical recommendations.
3. Research Questions
This study intends to understand how speaker evaluations are carried out by the participants and what learning and insights can pave the way for a sustainable solution to the related issues. It was also deemed critical to uncover the strengths, weaknesses, and loopholes of the current method. As a researcher, I wanted to lead this work with a few answerable questions that could meet the abovementioned objectives. Considering data availability and the viability of the research operation, I led this study with two research questions: 1) Why does the instrument (speaker evaluation by the participants) lack credibility? 2) How can the evaluation be made more useful for the Academy? The first question seeks to explain various aspects of the instrument’s execution and the relevant insights, while the second explores the ways in which the instrument can be made more effective and credible.
This paper addresses these two questions and is organized as follows. The next section summarizes the review of literature, which illustrates the key conceptual foundations of this research work. The subsequent section describes the methodology of the study, followed by the findings and analysis segment. Based on the findings and analysis, I present several recommendations before the conclusion.
4. Literature Review and Conceptual Framework
A wide range of academic literature has looked at the methods and processes of teacher evaluation by students (Greimel-Fuhrmann & Geyer, 2003; Shepherd, 2011). However, little academic attention has been paid to the issues related to speaker or trainer evaluation by mature participants in a training session or programme. Governance and administration of training remain central to it (Hoque, 2016). The Academy has not conducted any such formal studies before, nor could I find any studies on this topic in any of the journals published by training institutes in Bangladesh. However, as part of my background research, I consulted similar studies, reports, and evaluation policies of several prominent Bangladeshi national-level training institutions, as well as non-academic essays on participants’ observations of speakers’ performance. My experience of working as a faculty member of BCSAA also gave me ample opportunity to explore the thoughts of other faculty members, and these, along with the observations of participants, formed the background of this research work.
4.1. Issues and Factors
Almost all national-level training institutes in Bangladesh use nearly identical methods for speaker evaluation. For instance, the 2013 Training Evaluation Policy Guidelines (amended) of the Bangladesh Public Administration Training Centre (BPATC4) prescribe a method of evaluating the performance of a speaker by the participants with almost the same criteria as the Academy (BPATC, 2013: p. 18), although a rating parameter on whether the participants enjoyed the session has been added. Whatever the criteria, evaluation of teachers remains an area of debate. Williams (1989) argues that such classroom observations can often be misleading; even if someone believes that evaluations can lead to better training, s/he must ask whether this is the best way of achieving that objective, especially because different teachers have different ways of teaching in different situations. Similarly, Campbell & Ronfeldt (2018), after reviewing critical evidence, conclude that teacher evaluations can neither be equitable nor address the systematic grouping of teachers. Art remains very important in communication (Moni, 2011). Several other studies have also cast doubt on the appropriateness of having participants evaluate instructors’ performance; bias and manipulation can pollute the integrity of the process. Mirus (1973) identified several implications of such evaluations: 1) student evaluation of teachers may be subject to manipulation, 2) budgetary stringency can affect instructors’ performance in ways participants may not be aware of, 3) certain beliefs about odd hours affecting the ratings can have adverse effects, and 4) an orderly bias may result from the type of material or the nature of the subject matter. Ghosh et al. (2011) explored six key factors that contribute to the evaluation of a training programme: clarity of the trainer, other facilities, venue of the programme, food served, practical application, and communication of the trainer.
This study involves a few of these factors, namely the clarity of the trainer, practical application, and the capacity for communication. Ghosh et al. (2012) examined training effectiveness in relation to trainers’ characteristics in a lecture-based learning environment, and found that trainee satisfaction significantly depends on the trainer’s comfort level with the subject matter and the trainer’s rapport with participants.
4.2. Concepts of Evaluation
Freeman (1982) synthesizes three approaches to the observer-teacher relationship: the supervisory, the alternative, and the non-directive approach. The Academy’s evaluation of speakers’ performance falls into the category of the supervisory approach. While explaining the merit of this approach, Freeman (1982) describes its backbone as the clarity of the set standards of evaluation and its emphasis on enhancing specific teaching skills. The limitation of this approach is the unequal power relationship between the speaker and the participants; in such cases, Freeman adds, the approach can lead to friction between the two parties. Another approach to such evaluation works when the goals and objectives of a particular training session or programme are set in advance. Swanson and Sleezer (1987) developed a model of training effectiveness evaluation that can be used to assess whether a training programme delivered the desired results. They proposed a set of four questions that, in their opinion, should be asked of every training programme: 1) was the training delivered professionally? 2) were the learning objectives met? 3) was the original training need met? and 4) was the training valuable? In the case of the Academy, the process is planned and structured, albeit with a very generic approach that does not put much emphasis on achieving session or course objectives.
Regarding the conduct of speaker evaluation, a few concepts are pertinent. Maxwell (2001) characterizes teacher observations into two broad categories: incidental and planned observations. First, incidental observation is defined as unplanned evaluation that occurs during ongoing (deliberate) interactions between instructors and learners. These observations can be used as a formal assessment if the reports are preserved accordingly. Second, planned observations are deliberate assessments designed in advance that measure outcomes against specific learning objectives. Such planned observations often involve practical sessions. Swanson and Sleezer (1987) also emphasize the importance of planned evaluation and highlight three of its elements: 1) an effectiveness evaluation plan, 2) tools for measuring training effectiveness, and 3) the evaluation report.
The speaker evaluation at the Academy is a regular planned assessment. The portion of this evaluation done by the participants largely depends on trainee satisfaction. As Ghosh et al. (2012) show, this satisfaction level can be a highly complicated matter and can depend on different factors in different environments. Schwartz (2017) describes students’ feedback at a higher education institution as a direct way of weighing the effectiveness of teaching methods, and states that two common methods of collecting such feedback are questionnaires and interviews. Although both methods are employed in the Academy, this research focuses on the questionnaire-based evaluation. Since many of the speakers regularly come from outside the Academy, the concept of trainee satisfaction and the outcome of the questionnaire-based evaluation process may depend on many other factors. This study contributes to the literature by exploring some of these critical factors.
5.1. Research Methods
This study draws on both secondary and primary research data to answer the questions, and several methods were employed for data collection. Firstly, as mentioned earlier, before commencing this research work I had two years of experience working as a faculty member in the Academy. This gave me ample opportunity to get to know the ongoing rhetoric and narratives regarding the effectiveness of speaker evaluation. Coming across arguments both for and against the current evaluation method equipped me with insights into various relevant issues. Secondly, as part of the secondary research, I collected and went through relevant materials including various course guidelines, evaluation guidelines, and reports published by the Academy as well as by other prominent national public training institutes. A review of literature was conducted to grasp existing studies, evidence, and conceptual and pragmatic reflections. Thirdly, since the inception of this project, I kept a diary to record critical observations that I came across from faculty members, participants, and speakers. These notes were helpful as references while conducting the data analysis. Finally, I conducted primary research with the participants and faculty members to obtain their views. A mixed method of qualitative and quantitative data collection was employed for this purpose. Combining both approaches to data collection and analysis was deemed crucial for two reasons. First, it was valuable for forming a comprehensive understanding of a delicate matter like speaker evaluation. Second, one of the objectives of this research was to find alternatives to current evaluation practices, and for that it was crucial to integrate deep insights and reflective observations.
5.2. Data Collection
As mentioned earlier, secondary data and information were collected through desk research. The sources of primary data were experiences, observations and interactions, and a questionnaire-led survey. The following sequence of techniques and events was used for data collection. Firstly, a list of 220 speakers and instructors was collected from the relevant section of the Academy. The list included members of the faculty and regular visiting speakers in the Law and Administration Course (LAC). After collecting the curricula vitae (CVs) of these 220 speakers, information was compiled regarding their 1) years of schooling, 2) length of professional service, 3) relevance of study to the topic of the session, 4) number of sessions taken, and 5) evaluation marks attained on a 100-point scale. Following this, I selected 20 speakers who had conducted at least 15 sessions in the Academy’s core training, the Law and Administration Course, between January 2017 and March 2018. For this purpose, I first identified 36 speakers who had already conducted more than 5 sessions, and then selected the 20 speakers who fulfilled the criteria. The final selection of the sample was purposive, with the conditions that it should include at least five female speakers and at least five current or former faculty members. Secondary data were collected from those 20 selected CV documents.
Secondly, another set of secondary data was obtained from 300 randomly selected filled-in evaluation forms from the LAC, collected from the same period as the speakers’ information. A careful examination of those forms was later conducted to identify the trends in the marks and remarks given by the trainee participants. While conducting the content analysis of these forms, particular attention was paid to whether the participants follow any common patterns in putting high or low marks on specific parameters. This was helpful in recognizing participants’ scoring tendencies.
Thirdly, a questionnaire-led survey was conducted among 36 participants of one Law and Administration Course in 2018. The questionnaire comprised a consent statement followed by both open-ended and close-ended questions. The consent statement explained why the survey was being conducted and how the anonymity and confidentiality of the information would be maintained in publication. The questionnaire was handed over to the participants in the morning and was collected through one representative of the training course. This was done so that the participants had enough time to reflect on the questions, and anonymity was ensured to allow them to give their honest opinions without any fear. The open-ended questions helped in collecting insights and thoughts regarding various aspects of the evaluation process. This survey was a key tool for collecting primary data for this research work.
6. Results and Discussion
6.1. Desk Review Results
The following table (Table 1) presents the evaluation of the 20 speakers who had taken at least 15 sessions in the Law and Administration Training Course at the Academy between January 2017 and March 2018. The abbreviations used in the table are: Educational Qualification (EQ); Length of Study (LoS); Relevance of the Study (ROS: I—irrelevant; R—relevant); Length of Professional Service (LoPS); Morning Sessions (MS); and Evening Sessions (ES). The first column shows that five speakers were faculty members and the other 15 were invited speakers (Faculty and Non-Faculty).
Table 1 conveys several key messages. First, it is evident that the speakers who regularly conduct sessions in the Academy have all received a considerably high level of formal education. The average number of years of schooling is 17.8, which indicates that the speakers have continued formal education and training even after graduation. Second, most of the speakers have completed Bachelor’s, Master’s, or doctoral studies that are relevant to the sessions they conduct in the Academy.
Table 1. Summary of speaker-evaluation by participants.
Third, the regular speakers of the Academy have at least 10 years of professional service experience. As the Academy invites guest speakers mostly from the BCS (Administration) cadre, the speakers in almost all cases have work experience relevant to what they were teaching. Fourth, the Academy invites speakers depending on their performance as well as the evaluation marks they receive from the participants. The respective column shows that the speakers on average obtained very high marks; the average mark across all the speakers is 92.676, which falls in the “very good” category.
Fifth, one ongoing narrative about the evaluation is that the marks vary depending on the time of the session, morning or evening. However, the table shows hardly any consistent difference: the speakers received almost the same marks in morning and evening sessions. The evidence from this research therefore does not support this narrative.
Sixth, the correlation between the Length of Study (LoS) and the obtained marks is −0.12999 (see Table 2), meaning that there is almost no relation between a speaker’s formal
Table 2. Correlation between the length of studies and professional service of the speakers and their obtained marks.
education and the marks obtained in the evaluation by the participants. It also shows that an increased number of higher educational degrees does not guarantee that a speaker will receive higher marks from the participants. One additional year of study is, in fact, associated with a slight reduction in marks.
However, these marks are given by the participants, and this process has its own limitations, which the following sections outline. In short, more educational degrees do not guarantee better performance. Finally, the correlation between the length of professional service and the obtained marks runs along similar lines. The value of 0.285 indicates only a weak positive relationship: variation in the Length of Professional Service makes little difference to the marks overall, although more experienced speakers tend to obtain slightly higher marks.
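The two figures discussed above are plain Pearson correlation coefficients. As an illustration of how such a figure is obtained, the computation can be sketched as follows; the data below are hypothetical, not the study's actual 20-speaker dataset.

```python
import statistics as st

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length
    numeric sequences, e.g. speakers' years of study (LoS) and
    their average evaluation marks."""
    mx, my = st.mean(x), st.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (st.pstdev(x) * st.pstdev(y))

# Hypothetical speakers: years of schooling vs. average evaluation marks.
years_of_study = [16, 17, 18, 18, 19, 20]
average_marks  = [93, 92, 94, 91, 93, 92]
r = pearson(years_of_study, average_marks)  # r is about -0.13: essentially no relationship
```

A value near zero, as in the study's −0.12999 for length of study, means the scatter of marks is almost unrelated to schooling; the 0.285 for length of service likewise indicates only a weak positive association.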
6.2. Survey Outcomes
A descriptive analysis technique was employed to analyze the data collected through the survey instrument. Although the analysis generated a wide range of findings, this discussion only highlights the most pertinent insights that help with answering the research questions.
All the participants regularly fill in the evaluation forms. A small percentage of participants (5.88%) fill in the evaluation immediately after the respective session ends, while the majority do so at the end of the day. A few of them shared that they do it as soon as they get the form in hand, even before the session starts, which shows that some participants do not value the evaluation at all. Participants spend, on average, one minute and 16 seconds filling in the form, meaning they spend only about 20 to 25 seconds thinking about and marking each session speaker. More than half of the participants (64.71%) believe that the most valuable measure for evaluating a speaker is the speaker’s ability to present ideas clearly. They also agree that the ability to involve the participants is critical. About 53% believe that the ability to manage time is the least valuable measure for evaluating the performance of a speaker.
About 41% of the participants believe that the measures prescribed in the form are inadequate. Their suggestions indicate that the form should include further criteria, including the smartness of the speaker, communication skills (especially spoken English and clear pronunciation), the ability to present ideas simply, and the speaker’s reputation in the field. Half of the participants stated that they do not write descriptive comments at the bottom of the form, while about 33% said that they sometimes write comments of only a few words. Most of the participants (82.35%) who write descriptive comments do not write their names, for fear of being identified by the CMT or the higher authority of the Academy. About 85% of participants shared that they often speak to the CMT to convey their evaluation of some regular speakers; most of the time, the CMT takes these into account and acts accordingly.
6.3. Content Analysis
Analyzing the content of the 300 evaluation forms also informed this research work. A few critical trends were identified that are crucial to understanding how the speaker evaluation works. In general, participants were found to award the maximum score on the measure of knowledge of the subject, and the minimum score on the ability to present ideas clearly. This finding reflects that the participants value the experience and knowledge of the speakers, since as trainees they remain keen to learn concepts and skills relevant to their profession. It also indicates that participants often find the lectures ambiguous, and that speakers struggle to convey complicated concepts in simple and clear ways. The analysis also highlights that the participants very seldom put down written comments in the form: only six forms out of 300 had written comments. This, too, suggests that the forms are usually filled in hastily, without much thought.
6.4. Key Learning and Insights
From the analysis of the primary and secondary data collected in this research work, several critical learnings and insights can be highlighted in relation to the research questions. It is evident that there is a lack of conceptual understanding of speaker evaluation in the Academy; recognizing this and carrying out further research can be the first step towards creating a solid conceptual base. The evidence clearly suggests that the speakers coming to the Academy to conduct sessions are highly educated and competent, and that the marks given by the participants do not deviate significantly with speakers’ length of study or professional service. This may have a wide range of implications. However, making the evaluation more effective would require some reforms, and in undertaking them, the conceptual underpinnings and the factors outlined in the literature review section can be consulted. While the evaluation process has many strengths, it also has some limitations to overcome. The outcomes of the survey and the trend analysis have illustrated some of these crucial limitations.
The findings, discussion, and key insights indicate the reforms required in the speaker evaluation process in the Academy. First, the Academy may explore similar exercises followed by internationally recognized research and training institutes. The existing theoretical literature conveys that such evaluations should combine both planned and incidental evaluation processes. Second, the evaluation form needs to be redesigned, and the suggestions from the participants can guide the changes. Some qualitative measures, including colour codes, can be included, and for regular speakers, a measure of whether the speaker has improved since last time can be useful. As part of the planned evaluation, the CMT can discuss some of the issues with the participants in a dedicated weekly session; oral feedback can often be more constructive. The evaluation data can also be collected through online platforms, which would allow the participants extra time to think and rate. Third, the participants must also be held accountable if they abuse the evaluation process. Since the evaluation is done anonymously, they often do not value the importance of the task; therefore, the CMT can, from time to time, remind the participants of the importance of speaker evaluation and motivate them to use it wisely. Finally, further research can explore the ways in which trainee satisfaction can be enhanced inside the classroom. The findings of this study can complement such a study to create a comprehensive understanding of the issue.
7. Conclusion
This study explored the issues related to speaker evaluation by participants in the Academy. Based on a review of the relevant literature and a conceptual understanding, it examined why the evaluation lacks credibility and what can be done to improve its effectiveness for the Academy. The findings indicate that the current evaluation system has several critical loopholes and that the participants do not use it wisely: they do not invest enough time in thinking before rating a speaker, and they often remain reluctant to provide descriptive remarks. Although the participants enjoy considerable power in this process, they lack accountability. It is also evident that the measures are inadequate and sometimes ambiguous. The Academy must acknowledge these limitations and initiate reforms to make the evaluation more effective; the recommendations made in this research piece can be helpful. However, this research had a few key limitations. The perspectives of the guest speakers could have added more value, and since evaluation is a subjective matter, more in-depth interviews could have provided deeper insights. Hopefully, further research can address these limitations. A solid and effective speaker evaluation system remains critical to ensuring the quality delivery of any training programme.
1 In this article, the Academy refers to the Bangladesh Civil Service Administration Academy.
2The Academy is located at Shahbagh, Dhaka. For more information about the Academy and its faculty members, visit www.bcsadminacademy.gov.bd.
3For each training course, a fresh CMT is formed by the Academy. Since numerous courses are run by a handful of faculty members, the Academy regularly re-forms the CMT for each training programme.
4BPATC is an apex public sector training institute in Bangladesh which imparts training for all BCS cadre officials. For more details, please visit: http://www.bpatc.org.bd/index.php?pageid=157.
 Adnan, K. M. M., Sarker, S. A., Zannat Tama, R. A., & Pooja, P. (2021). Profit Efficiency and Influencing Factors for the Inefficiency of Maize Production in Bangladesh. Journal of Agriculture and Food Research, 5, Article ID: 100161.
 Adnan, K. M. M., Ying, L., Sarker, S. A., Yu, M., & Tama, R. A. Z. (2021). Simultaneous Adoption of Diversification and Agricultural Credit to Manage Catastrophic Risk for Maize Production in Bangladesh. Environmental Science and Pollution Research.
Bernardino, G., & Curado, C. (2020). Training Evaluation: A Configurational Analysis of Success and Failure of Trainers and Trainees. European Journal of Training and Development, 44, 531-546.
 Cheng, S., Corrington, A., Dinh, J., Hebl, M., King, E., Ng, L., Reyes, D., Salas, E., & Traylor, A. (2019). Challenging Diversity Training Myths. Organizational Dynamics, 48, Article ID: 100678.
 Ghosh, P., Prasad Joshi, J., Satyawadi, R., Mukherjee, U., & Ranjan, R. (2011). Evaluating Effectiveness of a Training Programme with Trainee Reaction. Industrial and Commercial Training, 43, 247-255.
 Ghosh, P., Satyawadi, R., Prasad Joshi, J., Ranjan, R., & Singh, P. (2012). Towards More Effective Training Programmes: A Study of Trainer Attributes. Industrial and Commercial Training, 44, 194-202.
 Greimel-Fuhrmann, B., & Geyer, A. (2003). Students’ Evaluation of Teachers and Instructional Quality—Analysis of Relevant Factors Based on Empirical Evaluation Research. Assessment & Evaluation in Higher Education, 28, 229-238.
Hoque, M. M. (2018a). Information Institutions and the Political Accountability in Bangladesh. International Journal of Scientific & Engineering Research, 9, 1586-1596.
 Hoque, M. M. (2021). Forced Labour and Access to Education of Rohingya Refugee Children in Bangladesh: Beyond a Humanitarian Crisis. Journal of Modern Slavery, 6, 20-35.
 Hoque, M. M., & Tama, R. A. Z. (2021). Implementation of Tobacco Control Policies in Bangladesh: A Political Economy Analysis. Public Administration Research, 10, 36.
 Moni, M. H. (2011). Voice for Social Changes in Creative Documentary: Better Understanding for Better Feeling. CPMS.
Sarker, S. A., Wang, S., Adnan, K. M. M., Anser, M. K., Ayoub, Z., Ho, T. H., Tama, R. A. Z., Trunina, A., & Hoque, M. M. (2020). Economic Viability and Socio-Environmental Impacts of Solar Home Systems for Off-Grid Rural Electrification in Bangladesh. Energies, 13, 679.
 Tama, R. A. Z., Tama, R., Begum, I., Alam, M., & Islam, S. (2015). Financial Profitability of Aromatic Rice Production in Some Selected Areas of Bangladesh. International Journal of Innovation and Applied Studies, 12, 235-242.
 Tama, R. A. Z., Ying, L., Happy, F. A., & Hoque, M. M. (2018). An Empirical Study on Socio-Economic Status of Women Labor in Rice Husking Mill of Bangladesh. South Asian Journal of Social Studies and Economics, 2, 1-9.
 Tama, R. A. Z., Ying, L., Yu, M., Hoque, M. M., Adnan, K. M., & Sarker, S. A. (2021). Assessing Farmers’ Intention towards Conservation Agriculture by Using the Extended Theory of Planned Behavior. Journal of Environmental Management, 280, Article ID: 111654.
 Wisshak, S., & Hochholdinger, S. (2018). Trainers’ Knowledge and Skills from the Perspective of Trainers, Trainees and Human Resource Development Practitioners. International Journal of Training Research, 16, 218-231.