OJN Vol. 6 No. 9, September 2016
Factor Analytical Examination of the Evidence-Based Practice Beliefs Scale: Indications of a Two-Factor Structure
ABSTRACT
Background: Promotion of Evidence-Based Practice (EBP) in nursing appears to be developing slowly. Research indicates that nurses’ beliefs in EBP may play an even more significant role than knowledge and resources in making implementation feasible. To address this issue, measurement of nurses’ beliefs regarding EBP is paramount. Aims and objectives: This study explores the internal consistency reliability and the construct factor structure of the Norwegian version of the original Evidence-Based Practice Beliefs Scale (EBP-BS). Methods: The study had a non-experimental exploratory survey design. A Norwegian translation of the EBP-BS was tested in a convenience sample of 118 healthcare professionals (95% nurses) attending a continuing education program at a university college in Norway. The response rate was 95% (n = 112). The internal consistency of the scale was measured with Cronbach’s alpha, and an exploratory Principal Component Analysis (PCA) was used to explore the construct structure. Results: The overall internal consistency of the EBP-BS was acceptable. The PCA indicated a four-factor structure, but the psychometric properties of two of the factors were too weak to support a four-factor model. Based on our investigation of the EBP-BS, we suggest a two-factor structure model, with the factors named 1) General knowledge and confidence concerning EBP and 2) Task specific beliefs in EBP. This finding differs from previous results that indicated a unidimensional structure. Conclusion: As a starting point, reliable and valid measurement of nurses’ beliefs about EBP is required in order to identify possible obstacles and to optimize implementation in the individual clinical setting. Our results indicate that the EBP-BS has a two-factor structure. Further exploration of the factor structure is needed, and further empirical research may help resolve controversies concerning basic understandings of the concept of EBP.

1. Introduction

Several studies have reported that evidence-based nursing practices have a positive impact on patient outcomes [2]-[4]. EBP also reduces healthcare costs [5] [6] and improves the quality of care [4] [7]. Even though the positive effects of EBP on patient outcomes and healthcare costs have been described in the literature for years, promotion of EBP in nursing appears to be developing slowly. Nurses’ beliefs, skills, and knowledge related to EBP have an impact on the use of evidence in practice [8]. Strong beliefs in the value of EBP and positive attitudes toward it are associated with nurses’ use of evidence in practice and are seen as important mediators in changing behavior related to EBP [9]. Given the influence of practitioners’ beliefs on the successful implementation of EBP, a first step in its implementation may be to assess the nursing staff’s beliefs regarding this approach. If their perceptions of this approach are positive, the chances of success are good; if not, there is a high risk of failure.

Despite the globalization of EBP, studies reporting the translation and adaption of instruments that measure EBP beliefs are scarce. Reliable and valid measurement of beliefs towards EBP is crucial to further progress.

In 2003, Melnyk and Fineout-Overholt developed the EBP Beliefs Scale (EBP-BS) to examine nurses’ beliefs about EBP and their opportunities to use research results in practice [10] . The self-report scale is based on Prochaska and Velicer’s Transtheoretical Model of Organizational Change [11] , a recognized model for changing health behavior, and the ARCC model (Advancing Research and Clinical Practice through Close Collaboration) for implementation of research into clinical practice [12] . These models demonstrate that organizational culture and climate for EBP may have an impact on clinicians’ beliefs about its value and the extent to which they deliver evidence-based care [13] . EBP is a complex process that may involve changes both in clinicians’ behaviors and in healthcare systems’ operations. Key factors facilitating EBP adoption include strong beliefs that EBP improves patient care and outcomes [7] . Therefore, it is crucial to have instruments that assess nurses’ beliefs, culture, and readiness for EBP.

In a recent publication, Gu, Ha, and Kim [14] reported on the development of an objective instrument for assessment of entry-level EBP knowledge and skills in nurses. They claimed objective measurement to be superior to self-reported perceptions of knowledge and skills concerning EBP.

Still, the psychometric properties of the EBP-BS have been tested in the U.S. [9] [15], China [16], and Iceland [17], and these studies generally report that the EBP-BS has well-established construct validity and acceptable internal consistency reliability. Previous research by both Melnyk et al. [9] and Thorsteinsson [17] concluded that the EBP-BS had a unidimensional factor structure. However, Estrada [15] suggested that the scale described four dimensions of EBP: 1) knowledge beliefs, 2) value beliefs, 3) resource beliefs, and 4) time and difficulty beliefs. A clear construct structure and firm psychometric properties are of paramount importance to the use and relevance of an instrument. The main aim of this research is to explore the internal consistency and the factor structure of the EBP-BS.

2. Methods

2.1. Design

This study has a non-experimental, exploratory, and descriptive study design.

2.2. Sample

We used a convenience sample (N = 118) of students in a continuing education program at a university college in Norway. We included all available students; apart from six dropouts, all completed the study (n = 112). The part-time students all worked in community or specialist health services. The majority of the participants, 106 students (95%), had bachelor’s degrees in nursing; the remaining six participants had health or social work education at the bachelor’s level. One hundred and nine of the participants were women, and three were men.

2.3. Materials and Data Collection

Data were collected in 2011 (n = 56) and in 2015 (n = 56). The EBP-BS is a 16-item self-report instrument for investigating clinicians’ beliefs about the impact of EBP on clinical care, their ability to implement EBP, the knowledge, skills, and behaviors needed for EBP, and their confidence in how EBP can improve clinical practice [10]. Examples of items are “I am sure that evidence-based guidelines can improve clinical practice” and “I believe I can overcome barriers to implementing EBP”. The respondents rated each of the 16 items on a five-level Likert scale, ranging from 1 = strongly disagree to 5 = strongly agree. The items are presented in Table 1.

2.4. Translation of the EBP-Belief Scale

The WHO principles [18] for bidirectional translation and adaption of instruments were followed when translating the EBP-BS to Norwegian. A group of bilingual researchers from the Centre for Evidence-Based Practice, Faculty of Health and Social Sciences, at the University College of Bergen translated the scale. To ensure the original meanings were preserved, a bilingual researcher discussed the Norwegian items with the developers of the scale [10]. The goal of the translation was to establish a semantic equivalent to the original instrument rather than a word-for-word translation. The Centre for Evidence-Based Practice gave permission to use the translated EBP-BS.

Table 1. Descriptive statistics of EBP-BS items (n = 16).

2.5. Statistical Analyses

Data were analyzed with the Statistical Package for the Social Sciences (SPSS), version 22 for Windows. Missing data were screened with the listwise deletion option in SPSS; only nine observations (0.50%) out of a total of 1792 scores were missing, and these missing values were replaced by item mean scores. Following Melnyk et al.’s [9] recommendations, the two negatively phrased items (Items 11 and 13) were reverse-scored to fit the scoring of the other items before the calculations were performed.
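The two preprocessing steps above (mean substitution for the few missing scores and reverse-scoring of Items 11 and 13) can be sketched as follows. The response matrix here is randomly generated stand-in data, not the actual study scores:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical Likert responses (1-5) for 112 respondents x 16 items,
# standing in for the real EBP-BS data set.
scores = rng.integers(1, 6, size=(112, 16)).astype(float)

# Simulate a few missing responses, then replace each with its item mean,
# mirroring the mean-substitution step described above.
scores[3, 5] = np.nan
scores[10, 12] = np.nan
item_means = np.nanmean(scores, axis=0)
missing = np.isnan(scores)
scores[missing] = item_means[np.where(missing)[1]]

# Reverse-score the negatively phrased items (Items 11 and 13;
# zero-based columns 10 and 12) on a 1-5 scale: new = 6 - old.
for col in (10, 12):
    scores[:, col] = 6 - scores[:, col]
```

On a 1-5 scale, reversing with 6 − x maps 1↔5 and 2↔4, so after this step a high score consistently indicates a favourable belief on every item.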

The internal consistency of the scale and of the factors was calculated with Cronbach’s α (alpha). A Cronbach’s α value of 0.70 or more was considered to reflect good internal consistency [19]. To test the appropriateness of factor analysis for this data set, we used the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. The KMO index was >0.72 (p < 0.01), well above the recommended value of 0.50. Gorsuch [37] claimed that five respondents per variable would be sufficient for a reliable factor analysis. Our investigation tested 16 variables, so, according to Gorsuch’s criterion, 80 respondents would have been sufficient for a full factor analysis.
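Cronbach’s α can be computed directly from the item variances and the variance of the total score. The sketch below, with simulated data in place of the real responses, illustrates the calculation:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated data: 112 respondents x 16 items sharing a common factor,
# so the items are positively correlated and alpha comes out high.
rng = np.random.default_rng(1)
common = rng.normal(size=(112, 1))
items = common + rng.normal(size=(112, 16))
alpha = cronbach_alpha(items)
```

The closer the items track a shared underlying dimension, the higher α becomes; fully identical items would yield α = 1.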

A principal component analysis (PCA) was used to explore the meaning structure of the scale. We did not use confirmatory factor analysis because investigations of the factor structure of the EBP-BS are scarce and inconclusive. We conducted an orthogonal Varimax rotation [20]-[22].

Thus, based on several statistical procedures and considerations, our data met the basic criteria for a factor analytic design. Since the significance of a factor loading depends on the sample size, we set the cut-off for factor loadings at 0.51 [23]. The one-sample Kolmogorov-Smirnov test [24] was used to test the mean scores and distributions of the items in the scale. The scores differed significantly from a normal distribution, but according to Jolliffe [25], a PCA does not require normality.
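The core of the PCA step — extracting eigenvalues from the item correlation matrix and applying the eigenvalue-greater-than-1.0 criterion — can be illustrated as follows, with simulated scores standing in for the EBP-BS data (rotation omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated stand-in for the 112 x 16 EBP-BS score matrix.
scores = rng.normal(size=(112, 16))

# PCA on the correlation matrix: each eigenvalue is the variance
# captured by one component; Kaiser's criterion retains components
# with an eigenvalue greater than 1.0.
corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
n_retained = int((eigenvalues > 1.0).sum())

# With standardized items the total variance equals the number of
# items, so eigenvalue / 16 is each component's share of variance.
explained = eigenvalues / corr.shape[0]
```

A scree plot is simply these descending eigenvalues plotted against component number; the eigenvalue criterion and the scree inspection described in the Results both read off this sequence.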

2.6. Ethical Considerations

We received permission from the University College to conduct the study. The sample received oral and written information about the study, and their participation was voluntary. Only students who gave informed consent were included in the study. The sample completed the questionnaire anonymously. The study was reported neither to the Regional Committee for Medical Research Ethics nor to the Norwegian Social Science Data Services because no demographic variables were registered and the respondents could withdraw their consent to participate at any time.

3. Results

The response rate was 95% (n = 112). The overall Cronbach’s alpha value for the 16-item scale was 0.73. Descriptive statistics for the items are presented in Table 1.

We found some very large differences in mean score values between items, with Items 2 (lowest) and 5 (highest) at the extremes. Item 2 (I am clear about the steps of EBP) covers perceived knowledge or competency concerning EBP, whereas Item 5 (I am sure that evidence-based guidelines can improve clinical care) measures perceived belief in EBP. It is not obvious that assessing one’s knowledge of EBP and one’s belief in its effect belong to the same underlying dimension, and there is reason to question the unidimensionality of a scale whose items cover such different features as these two do. As will be seen in the PCA, Items 2 and 5 measure different aspects of beliefs concerning EBP and, in our model, do not belong to the same factor.

3.1. Principal Component Analysis (PCA)

The eigenvalues were inspected to determine the number of factors to be extracted. The eigenvalue criterion (greater than 1.0) suggested extraction of a maximum of four factors (see Table 2). An inspection of the scree plot indicated a five-factor structure, but the fifth factor did not meet the eigenvalue criterion. Four factors explained 55% of the variance in the original items (Table 2).

Table 2. Explained variance in principal component analysis*.

Note: * = Orthogonal Varimax rotation. 1: Component is used synonymously with factor in the text.

The first factor had an eigenvalue of 3.86 and accounted for 24% of the variance in the scale. Three other factors had eigenvalues greater than 1.0 (2.2, 1.5, and 1.2, respectively), accounting for 13.5%, 9.5%, and 8% of the variance.
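These percentages follow directly from the eigenvalues: with 16 standardized items the total variance equals 16, so each component’s share is its eigenvalue divided by 16. A quick check on the reported (rounded) figures:

```python
# With 16 standardized items the total variance is 16, so each
# component's share of variance is its eigenvalue divided by 16.
eigenvalues = [3.86, 2.2, 1.5, 1.2]     # reported (rounded) eigenvalues
reported_pct = [24.0, 13.5, 9.5, 8.0]   # percentages reported in the text

computed_pct = [100 * ev / 16 for ev in eigenvalues]
# The rounded eigenvalues reproduce the reported percentages to within
# about one percentage point, and the four shares together explain
# roughly 55% of the variance, matching the text.
total_reported = sum(reported_pct)      # 55.0
```

The small discrepancies (e.g. 2.2/16 = 13.75% versus the reported 13.5%) are consistent with the eigenvalues themselves having been rounded for presentation.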

The PCA clustered the 16 items into four factors, with loadings ranging from 0.52 to 0.79 (Table 3).

Investigating the Internal Consistency in the Factors

Cronbach’s alpha was used to examine the internal consistency reliability of each factor derived from the PCA. The internal consistency of the factors ranged from 0.53 to 0.72 (Table 4).

Estimates of internal consistency were not satisfactory for all four factors, indicating that the data did not support a four-factor structure. The first two factors appeared to be psychometrically solid with regard to internal consistency (reliability and interpretability). Low internal consistency values for Factors 3 and 4 raised doubts about their validity as distinguishable parts of the EBP-BS factor structure. Based on the item content, the retained factors were preliminarily named as follows: 1) General knowledge and confidence concerning EBP (four items) and 2) Task specific beliefs in EBP (four items). The first factor, General knowledge and confidence, comprised nurses’ own confidence concerning implementation of EBP in their work. The second factor, Task specific beliefs in EBP, covered more practical issues, such as the use of EBP-related resources and finding the time to apply EBP. We refrained from further interpretation of Factors 3 and 4 due to their low reliability scores.

4. Discussion

4.1. Exploration of the EBP-BS

The main aim of this investigation was to explore the factor structure of the underlying construct of the EBP-BS. Results from previous studies have shown that the items of the EBP-BS generally have high construct validity [9] [16] [17] and that the factor structure of the scale is unidimensional. Melnyk and colleagues [9] found a major factor accounting for 40% of the variance. They did detect three other factors, but decided to leave these out because the scree plot indicated a discontinuity between the first and second factors, concluding that “a single-factor solution was the most parsimonious interpretation of the results” [9, p. 212]. They reported, however, that the three other factors accounted for 26% of the variance (11%, 8%, and 7%). We question whether excluding these three factors from the model may have hidden a multifactorial structure of the EBP-BS, such as the two-factor structure found in our study.

Table 3. Principal component matrix with factor loadings for the 16 items*.

Note: * = Principal component analysis with Varimax rotation.

In our view, the finding of two factors in the scale concurs with the complexity that the Beliefs Scale intends to measure. This result may also invite a differentiation into at least two belief foci, general knowledge and confidence and task specific EBP practice, in order to enhance the implementation process of evidence-based practice in nursing.

Evidence as a construct is characterized by a high degree of complexity, and differences in understanding and definition of the concept have led to debate in clinics and academia [26]. As suggested by Estrada [15], nurses may understand EBP as a multifactorial phenomenon. The two main factors found in our study partly overlap with Estrada’s [15] theoretical categories: beliefs related to knowledge and beliefs related to the value of EBP. The finding of a two-factor structure with factor loadings above 0.62 and meaningful, interpretable clusters of items indicates that our two-factor structure has relatively high content validity.

Table 4. Internal consistency values for four components found by principal component analysis of the EBP-BS.

4.2. Reliability and the Construct Factor Structure

According to Nunnally and Bernstein [27], the internal consistency of the scores on the EBP-BS would be acceptable for early stages of testing a research tool; they recommend 0.70 as an acceptable value. The results concur with previous tests of the scale’s internal consistency, which have yielded Cronbach’s alpha values over 0.80 in different contexts [9] [16] [17]. This preliminary finding suggests that the questionnaire is a reliable tool for measuring the strength of beliefs about EBP. A high internal consistency value is essential because it indicates that respondents assess the items in a consistent way. However, some statistical issues must be considered in interpreting internal consistency values. The most important one is that if a tool is expanded with more, similar items, the consistency estimates will automatically increase. This does not mean that the tool has improved. A high internal consistency has no value unless the validity of the tool is good, and redundancy of items is a significant threat because it makes a tool less user-friendly and more time-consuming.
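The mechanical inflation of α with test length is captured by the Spearman-Brown formula, which expresses α as a function of the number of items k and the average inter-item correlation r. The illustration below uses a hypothetical r of 0.2:

```python
# Spearman-Brown prophecy: for a fixed average inter-item correlation r,
# alpha rises mechanically as the number of items k grows.
def alpha_from_r(k: int, r: float) -> float:
    return k * r / (1 + (k - 1) * r)

r = 0.2  # hypothetical modest average inter-item correlation
alphas = {k: round(alpha_from_r(k, r), 2) for k in (8, 16, 32)}
# Doubling the test length raises alpha from 0.67 to 0.80 to 0.89,
# without any item becoming a better indicator of the construct.
```

This is why a rising α after adding near-duplicate items signals redundancy rather than a better instrument.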

The first factor was labeled General knowledge and confidence concerning EBP. Knowledge and confidence affect beliefs [7], and Bandura [28] emphasized that belief in self-efficacy increases motivation, academic performance, and interest in the topic. It is therefore vital that EBP instruments measure participants’ self-efficacy in relation to evidence-based practice. Measures of EBP beliefs that reveal low levels of perceived self-efficacy may indicate a need for more education about what EBP actually is, in order to moderate groundless fear and undervaluation of professional competency.

Nurses generally report favourable views of EBP and believe in its value for quality of care [29] . However, these positive beliefs toward research do not necessarily translate to an increased use of research findings in practice [30] .

The second factor in our model was Task specific beliefs in EBP. According to Bandura [28], positive attitudes and beliefs in one’s own knowledge and skills can increase motivation for engagement in changing one’s practice. If we identify in a nursing staff a discrepancy between very low scores on confidence in specific EBP skills and high scores on positive beliefs regarding EBP application in practice, we may hypothesize that the motivation for EBP is high, whereas confidence in one’s own competency is low. This would be action-triggering information, indicating a need for further education focused on development of the task specific skills required for engaging in EBP. Building knowledge and self-confidence emerges as a vital platform for implementing EBP in clinical units. Thus, identifying factors turns out to have more practical change value than merely interpreting a total score on a unidimensional EBP-BS.

At this stage, Factors 3 and 4 appear as two groups of preliminarily redundant items. Further large-scale studies are required for a more permanent decision on the interpretation of these possible factors, and the following discussion must be read within the limitations set by their weak reliability estimates. Factors 3 and 4 each had one item with a negative correlation. Item 11 (I believe that EBP takes too much time) correlates negatively with the other items in Factor 3. This signifies that nurses who reported that EBP takes too much time tended to score low on beliefs in core values concerning EBP, such as critical appraisal of evidence, EBP resulting in the best clinical care, and being sure that evidence-based guidelines can improve clinical practice. It makes sense that a nurse would underrate the possible positive aspects of EBP if she perceived that it took too much time. In contrast, it does not make much sense that if you believe strongly that your work is evidence-based (Item 16), then you believe that you cannot overcome barriers to implementing EBP (Item 7), you are sure that EBP cannot improve care (Item 9), and you believe that EBP is difficult (Item 13).

4.3. Further Development of the EBP-BS

Based on our investigation of the EBP-BS, we suggest a two-factor structure model. It will be of special interest to investigate whether large-scale studies can confirm the preliminary finding that beliefs concerning EBP constitute a two-factor phenomenon. As Nunnally and Bernstein [27] point out, “Most measures should be kept under constant surveillance to see if they are behaving as they should” (p. 87).

In our opinion, this research provides a conceptual framework and point of departure for further development of the Evidence-Based Practice Beliefs Scale. Nunnally and Bernstein [27] claim that, as a first step in a measurement procedure, a researcher should specify the domain of indicators of a construct. Any attempt to operationalize a theoretical construct such as EBP beliefs at the empirical level may be encumbered with errors; it is therefore essential to specify the domains to prevent the instrument from including irrelevant information or underrepresenting the construct [31]. In the case of EBP, a firm focus on domain specification will increase the likelihood of clarifying EBP in a given study and reduce the chance of confusion about which aspects of EBP are referred to.

In practice, there are good reasons to assume that various mediators and moderators may have an impact on the two factors of our model, such as work experiences and educational background. There may be a discrepancy regarding EBP beliefs between nurses who were recently educated and those who were not educated in EBP. Therefore, nurse leaders and educators play a key role in creating a context to support clinical environments that optimize best practices for patient outcomes [32] . Furthermore, it would also be useful and interesting to investigate how the two factors are influenced by other phenomena such as organizational culture, leadership, and climate for EBP [13] [33] . Our understanding of EBP as a phenomenon, as well as its relationship to other influences and potential outcomes, remains limited.

4.4. Study Limitations

The possibility of making inaccurate predictions or assumptions is normal in small-scale studies [34]. Therefore, we must interpret the results within the limitations set by the small-scale design of our research.

First, the participants were not randomly selected. Second, the principal component analysis involved a relatively small sample, limiting generalizability and making it more difficult to replicate and interpret the results. However, it should be borne in mind that MacCallum [35], among others, claims that common rules for determining sample size in factor analysis may not always be useful. Wetzel [36] posits that factor analysis methods can be used to explore validity in studies with fewer than 100 respondents. With reference to Gorsuch [37], we assume that a sample of 112 respondents is sufficient.

Third, even though we did not plan any comparison of respondents, it may be a limitation that we did not collect and analyze data on respondents’ background variables such as gender, age, seniority, and discipline. However, this limitation is moderated by Squires et al.’s [38] finding of no significant relationships between these variables and beliefs and behaviors related to EBP.

The high response rate in our investigation may also indicate that the content of the Norwegian translation was adequately understood. The absence of significant deficiencies in completing the EBP-BS indicates that the questionnaire was easy to use. The fact that the respondents came from different parts of both municipal and specialist health services increases the likelihood that the participants were representative of other nurses.

5. Conclusion

Due to the limited number of empirical investigations, our main aim was to explore the factor structure of the scale. To our knowledge, this is the first study to systematically examine the reliability and validity of the EBP-BS in a Norwegian context. The results indicate that the EBP-BS has a two-factor structure. However, further exploration of the factor structure is required, especially because there are still controversies regarding the concept and the application of EBP.

Cite this paper
Utheim Grønvik, C., Ødegård, A. and Bjørkly, S. (2016) Factor Analytical Examination of the Evidence-Based Practice Beliefs Scale: Indications of a Two-Factor Structure. Open Journal of Nursing, 6, 699-711. doi: 10.4236/ojn.2016.69072.
References
[1]   Sackett, D., Straus, S., Richardson, W., Rosenberg, W. and Haynes, R. (2000) Evidence-Based Medicine: How to Practice and Teach EBM. Churchill Livingstone, London.

[2]   Barr, J., Hecht, M., Flavin, K., Khotrana, A. and Gould, M. (2004) Outcomes in Critically Ill Patients before and after Implementation of an Evidence-Based Nutritional Management Protocol. Chest, 125, 1446-1457.
http://dx.doi.org/10.1378/chest.125.4.1446

[3]   Leufer, T. and Cleary-Holdforth, J. (2009) Evidence-Based Practice: Improving Patient Outcomes. Nursing Standard, 23, 35-39.
http://dx.doi.org/10.7748/ns.23.32.35.s46

[4]   Melnyk, B.M., Gallagher-Ford, L., Long, L.E. and Fineout-Overholt, E. (2014) The Establishment of Evidence-Based Practice Competencies for Practicing Registered Nurses and Advanced Practice Nurses in Real-World Clinical Settings: Proficiencies to Improve Healthcare Quality, Reliability, Patient Outcomes, and Costs. Worldviews on Evidence-Based Nursing, 11, 5-15.
http://dx.doi.org/10.1111/wvn.12021

[5]   Melnyk, B.M. and Feinstein, N.F. (2009) Reducing Hospital Expenditures with the COPE (Creating Opportunities for Parent Empowerment) Program for Parents and Premature Infants: An Analysis of Direct Healthcare Neonatal Intensive Care Unit Costs and Savings. Nursing Administration Quarterly, 33, 32-37.
http://dx.doi.org/10.1097/01.NAQ.0000343346.47795.13

[6]   Levin, R.F., Fineout-Overholt, E., Melnyk, B.M., Barnes, M. and Vetter, M.J. (2011) Fostering Evidence-Based Practice to Improve Nurse and Cost Outcomes in a Community Health Setting: A Pilot Test of the Advancing Research and Clinical Practice through Close Collaboration Model. Nursing Administration Quarterly, 35, 21-33.
http://dx.doi.org/10.1097/NAQ.0b013e31820320ff

[7]   Melnyk, B.M., Gallagher-Ford, L., Long, L.E. and Fineout-Overholt, E. (2014) The Establishment of Evidence-Based Practice Competencies for Practicing Registered Nurses and Advanced Practice Nurses in Real-World Clinical Settings: Proficiencies to Improve Healthcare Quality, Reliability, Patient Outcomes, and Costs. Worldviews on Evidence-Based Nursing, 11, 5-15.
http://dx.doi.org/10.1111/wvn.12021

[8]   Melnyk, B.M., Fineout-Overholt, E., Fischbeck Feinstein, N., Li, H., Small, L., Wilcox, L., et al. (2004) Nurses’ Perceived Knowledge, Beliefs, Skills, and Needs Regarding Evidence-Based Practice: Implications for Accelerating the Paradigm Shift. Worldviews on Evidence-Based Nursing, 1, 185-193.
http://dx.doi.org/10.1111/j.1524-475X.2004.04024.x

[9]   Melnyk, B.M., Fineout-Overholt, E. and Mays, M.Z. (2008) The Evidence-Based Practice Beliefs and Implementation Scales: Psychometric Properties of Two New Instruments. Worldviews on Evidence-Based Nursing, 5, 208-216.
http://dx.doi.org/10.1111/j.1741-6787.2008.00126.x

[10]   Melnyk, B.M. and Fineout-Overholt, E. (2003) Evidence-Based Practice in Nursing & Healthcare: A Guide to Best Practice. Lippincott Williams & Wilkins, Philadelphia. (Norwegian Translation of EBP Belief Scale by: Olsen, N.R. (2008) University College of Bergen, Bergen).

[11]   Prochaska, J.O. and Velicer, W.F. (1997) The Transtheoretical Model of Health Behavior Change. American Journal of Health Promotion, 12, 38-48.
http://dx.doi.org/10.4278/0890-1171-12.1.38

[12]   Melnyk, B.M. and Fineout-Overholt, E. (2010) ARCC [Advancing Research and Clinical Practice through Close Collaboration]: A Model for System-Wide Implementation and Sustainability of Evidence-Based Practice. In: Rycroft-Malone, J.B.T., Ed., Evidence Based Nursing: Models and Frameworks for Implementing Evidence-Based Practice Linking Evidence to Action, 5th Edition, John Wiley & Sons, West Sussex, 169-183.

[13]   Melnyk, B.M., Fineout-Overholt, E., Giggleman, M. and Cruz, R. (2010) Correlations among Cognitive Beliefs, EBP Implementation, Organizational Culture, Cohesion and Job Satisfaction in Evidence-Based Practice Mentors from a Community Hospital System. Nursing Outlook, 58, 301-308.
http://dx.doi.org/10.1016/j.outlook.2010.06.002

[14]   Gu, M., Ha, Y. and Kim, J. (2015) Development and Validation of an Instrument to Assess Knowledge and Skills of Evidence-Based Nursing. Journal of Clinical Nursing, 24, 1380-1393.
http://dx.doi.org/10.1111/jocn.12754

[15]   Estrada, N. (2009) Exploring Perceptions of a Learning Organization by RNs and Relationship to EBP Beliefs and Implementation in Acute Care Setting. Worldviews on Evidence-Based Nursing, 6, 200-209.
http://dx.doi.org/10.1111/j.1741-6787.2009.00161.x

[16]   Wang, S., Lee, L., Wang, W., Sung, H., Chang, H., Hsu, M., et al. (2012) Psychometric Testing of the Chinese Evidence-Based Practice Scales. Journal of Advanced Nursing, 68, 2570-2577.
http://dx.doi.org/10.1111/j.1365-2648.2012.06011.x

[17]   Thorsteinsson, H. (2012) Translation and Validation of Two Evidence-Based Nursing Practice Instruments. International Nursing Review, 59, 259-265.
http://dx.doi.org/10.1111/j.1466-7657.2011.00969.x

[18]   WHO (2011) Process of Translation and Adaption of Instruments.
http://www.who.int/substance_abuse/research_tools/translation/en/index.html#

[19]   Hair, J., Black, W., Babin, B. and Anderson, R. (2010) Multivariate Data Analysis: Global Edition. 7th Edition, Pearson Higher Education, Upper Saddle River.

[20]   Kline, P. (1994) An Easy Guide to Factor Analysis. Routledge, Cornwall.

[21]   Pett, M.A., Lackey, N.R. and Sullivan, J.J. (2003) Making Sense of Factor Analysis: The Use of Factor Analysis for Instrument Development in Health Care Research. SAGE Publications, Thousand Oaks.
http://dx.doi.org/10.4135/9781412984898

[22]   Watson, R. and Thompson, D. (2005) Integrative Literature Reviews and Meta-Analyses. Use of Factor Analysis in Literature Review. Journal of Advanced Nursing, 55, 330-341.
http://dx.doi.org/10.1111/j.1365-2648.2006.03915.x

[23]   Stevens, J. (2002) Applied Multivariate Statistics for the Social Sciences. 4th Edition, Erlbaum, Hillsdale.

[24]   Field, A. (2009) Discovering Statistics Using SPSS. 3rd Edition, Sage Publications Ltd., London.

[25]   Jolliffe, I. (2002) Principal Component Analysis. 2nd Edition, Springer, New York.

[26]   Scott, K. and McSherry, R. (2008) Evidence-Based Nursing: Clarifying the Concepts for Nurses in Practice. Journal of Clinical Nursing, 18, 1085-1095.
http://dx.doi.org/10.1111/j.1365-2702.2008.02588.x

[27]   Nunnally, J.C. and Bernstein, I.H. (1994) Psychometric Theory. 3rd Edition, McGraw-Hill, New York.

[28]   Bandura, A. (2003) Self Efficacy. The Exercises of Control. WH Freeman and Company, New York.

[29]   Alanen, S., Kaila, M. and Valimaki, M. (2009) Attitudes toward Guidelines in Finnish Primary Nursing: A Questionnaire Survey. Worldviews on Evidence-Based Nursing, 6, 229-236.

[30]   Hutchinson, A. and Johnston, L. (2006) Beyond the BARRIERS Scale: Commonly Reported Barriers to Research Use. Journal of Nursing Administration, 36, 189-199.
http://dx.doi.org/10.1097/00005110-200604000-00008

[31]   Messick, S. (1995) Validity of Psychological Assessment: Validation of Inferences from Persons’ Responses and Performances as Scientific Inquiry into Score Meaning. American Psychologist, 50, 741-749.
http://dx.doi.org/10.1037/0003-066X.50.9.741

[32]   Melnyk, B.M., Gallagher-Ford, L., Fineout-Overholt, E. and Kaplan, L. (2012) The State of Evidence-Based Practice in US Nurses: Critical Implications for Nurse Leaders and Educators. The Journal of Nursing Administration, 42, 410-417.
http://dx.doi.org/10.1097/NNA.0b013e3182664e0a

[33]   Sandström, B., Borglin, G., Nilsson, R. and Willman, A. (2011) Promoting the Implementation of Evidence-Based Practice: A Literature Review Focusing on the Role of Nursing Leadership. Worldviews on Evidence-Based Nursing, 8, 212-223.
http://dx.doi.org/10.1111/j.1741-6787.2011.00216.x

[34]   Van Teijlingen, E. and Hundley, V. (2002) The Importance of Pilot Studies. Nursing Standard, 16, 33-36.
http://dx.doi.org/10.7748/ns.16.40.33.s1

[35]   MacCallum, R., Widaman, K., Zhang, S. and Hong, S. (1999) Sample Size in Factor Analysis. Psychological Methods, 4, 84-99.
http://dx.doi.org/10.1037/1082-989X.4.1.84

[36]   Wetzel, A. (2012) Factor Analysis Methods and Validity Evidence: A Review of Instrument Development across the Medical Education Continuum. Academic Medicine, 87, 1060-1069. http://dx.doi.org/10.1097/ACM.0b013e31825d305d

[37]   Gorsuch, R. (1983) Factor Analysis. Lawrence Erlbaum Associates, Hillsdale.

[38]   Squires, J., Estabrooks, C., Gustavsson, P. and Wallin, L. (2011) Individual Determinants of Research Utilization by Nurses: A Systematic Review Update. Implementation Science, 6, 1.
http://dx.doi.org/10.1186/1748-5908-6-1

 
 