The impact of an enhanced assessment tool on students’ experience of being assessed in clinical practice: a focus group study.

 

 

Jackie Haigh*

School of Health Studies, University of Bradford

[email protected]

Tel. 01274 236337

 

Chris Dearnley

School of Health Studies, University of Bradford

[email protected]

Tel. 01274 236449

 

Fiona Meddings

School of Health Studies, University of Bradford

[email protected]

Tel. 01274 236479

 

 

 

Abstract

 

 

As part of ALPS (Assessment and Learning in Practice Settings), a CETL (Centre for Excellence in Teaching and Learning) funded project (HEFCE 2007, p. 18), 29 student midwives and 9 midwifery link lecturers (academics with responsibility for liaising with a particular placement area) were provided with personal digital assistants (PDAs) and trained in their use. These devices were used to record action planning, review and assessment interviews between the link lecturer, mentor and student during an eight-week clinical practice placement. The change to a PDA format facilitated the design and implementation of an enhanced assessment tool which included learner self-assessment and more transparent links between assessment criteria and learning outcomes.

 

Three focus groups conducted at the end of the placement explored the concept of clinical practice assessment and the impact of the enhanced assessment tools on the students’ experience of clinical practice and its assessment. Data was analysed from an activity theory perspective, in that the assessment tools were viewed as artefacts mediating situated knowing about student assessment in a particular socio-historical context. Findings suggest that students perceive clinical assessment as contested, with different mentors having different understandings of it. However, the enhancements to the assessment tool promoted a shared understanding of the assessment process which was pragmatic and acceptable to students, mentors and link lecturers. The significance of this study is that it highlights the role of assessment tools in creating a shared understanding of the assessment process, rather than simply articulating that understanding.

 

 

Keywords: clinical practice, assessment, activity theory.

 

 

 

Introduction

 

 

The education of Health and Social Care professionals is now predominantly based in Higher Education, though most programmes retain a strong practice-based element. The way this practice-based element is managed and assessed varies significantly between professional groups and between HE institutions. The ALPS (Assessment and Learning in Practice Settings) project, a collaborative programme between five Higher Education Institutions (HEIs) (HEFCE 2007), aims to develop a shared understanding of core competencies relevant to all Health and Social Care practice through the creation of a common assessment tool. A key aim of the project is to exploit the potential of mobile technology as a vehicle for this tool.

 

In this paper clinical assessment is considered as a social practice, situated in a particular time and place. ‘Knowing’ about clinical assessment is provisional, renegotiated as circumstances and relationships change, and mediated through language, systems and artefacts (Blackler 1995). Shared meaning about assessment is embedded in the practical collective activity of using the assessment tool, so changing the tool has the potential to change the shared understanding of this process. Before seeking to radically change the assessment tool for all professional groups, it is prudent to investigate and understand current processes and how the introduction of mobile technology and new tools might affect them.

 

This paper analyses student midwives’ experience of using an electronic assessment tool on a PDA (personal digital assistant) to facilitate self-assessment, feedback and grading in clinical practice placements. It explores the potential of the new tool to mediate a shared understanding of the assessment process and to enhance student learning from experience in practice. The wider significance of the project lies in its potential to illuminate the capability of new technologies and tools to change the shared meaning of clinical practice assessment.

 

 

Context of the Study

 

 

Activity theory provides a useful framework from which to explore the context of this study (Engeström 2001, Fanghanel 2004). The object of the activity is practice based assessment of student midwives as illustrated in figure 1.

 

 

 

Figure 1: Assessment of Practice Activity System

 

The context is a Midwifery programme with a well developed system of practice-based assessment. Students are assigned to work with a qualified clinician (the mentor) for the duration of each placement (usually seven or eight weeks). The assessment process in each placement area is supervised by a link lecturer i.e. a midwifery lecturer employed by the university. This process consists of a series of three tripartite interviews per placement involving the mentor, the placement link lecturer and the student. At the first interview the relevant performance indicators to be achieved in the course of the placement are reviewed and an action plan agreed. The intermediate interview is an opportunity to review progress and adapt the action plan as necessary to focus on areas for development. The final interview is an assessment of the student’s performance in the indicators agreed in the action plan resulting in an overall grade for the placement.

 

Students are graded on their achievement of performance indicators relevant to the module learning outcomes. The grade is determined using a marking tool assessing the following criteria: knowledge base, application of theory to practice, clinical skills, reflection, communication and ethical awareness. For the duration of the project all tripartite interview records and the marking criteria were converted to an electronic format accessed through a PDA.

 

 

Literature Review

 

 

The education of midwives has evolved rapidly since 1990 (Fraser 2000a) from a hospital-based post registration course for nurses, consisting of a period of apprenticeship and a final national examination, to a three year degree programme for students with no previous medical experience. Throughout this period there has been debate both in nursing and midwifery about the relationship between theory and practice, university and clinical practice; the key question being how best to ensure fitness for practice at the point of registration.

 

In particular the Ace Project (Phillips, Bedford and Robinson 1994; Phillips et al. 1993) focussed on the assessment of clinical practice. Although this study used a theoretical framework based on the research process rather than a social practice theory of knowing, it concludes that an understanding of what clinical competence is develops in a particular context through dialogue between the actors involved and through the structures and documents used. In order to develop an appropriate understanding of competence, which includes knowledge, skills, attitudes and understanding rather than just the ability to perform tasks, it is essential that documents and procedures support the examination of evidence for these aspects. This entails dialogue between student, mentor and link lecturer to ensure acceptance and ownership on all sides, and dedicated time prioritised for this discussion.

 

This holistic view of competence, with the potential to change and adapt to new circumstances and requirements, is what is meant by competence in this paper. The many different meanings of competence and competencies have been debated many times (see Cowan, Norman and Coopamah 2005 for a more detailed discussion); suffice it here to highlight a general agreement that a narrow focus on the ability to perform tasks is not a sufficient assessment of competence for a contemporary health worker.

 

Looking more specifically at midwifery education Fraser’s evaluation of Midwifery programmes (Fraser 2000b) concluded that they were producing fit-for-practice midwives but that there were weaknesses in the assessment of clinical practice, particularly in mentoring. The solution she proposed was closer involvement of academic staff in monitoring the assessment decisions and the use of portfolios of evidence to support the decisions made. In the context of the present study both these safeguards have been in place since the three year programme was introduced in 2000.

 

More recently the Nursing and Midwifery Council commissioned a report on pre-registration Midwifery programmes (Moore and Way 2004). This report examined evidence from published research and focus groups. Several points were made about the assessment of clinical competence. There is still a perceived problem with mentorship, and the three factors influencing students’ experience in clinical practice are summarised as climate (a welcoming, enquiring and reflective culture), structure (clarity of learning opportunities) and attention (an interested and skilled mentor). These three were seen as more important to the student’s learning experience than the length of placement per se. Better assessment in clinical practice is seen as key to improving midwifery programmes, and the grading of clinical practice is seen as one way to confirm its importance. Fraser (2000b) also recommended grading, as a safeguard to prevent unsafe practitioners achieving registration, since evidence from pass-fail assessments was not considered robust enough to remove a student from a programme.

 

The midwifery programme in the study has a particularly robust system of assessment in practice. This includes: grading of clinical competence which contributes to the final degree classification; trained mentors and annual updating arrangements; full involvement of link lecturers in all assessment interviews; and clear documentation which is cross-referenced to standards of proficiency (NMC 2004) and benchmark statements (QAAHE 2001). The assessment process at the study site is therefore seen to be meeting expectations of good practice in the preparation of midwives for contemporary practice.

 

Practice, however, is not a static concept. A current tension in Health Professional Education is the need to prepare students for a reformed NHS where midwives will need to work as part of a network of public health workers as well as specialists in maternity care (Wanless 2004). Shared learning across professional groups and effective use of Information Technology in clinical practice are part of the vision of this new health service (DoH 2001, DOH 2004).

 

The use of mobile technology is an increasing part of everyday life. Communicating through mobile phones and text messaging is endemic, particularly in youth culture (Thornton and Houser 2005). Personal digital assistants (PDAs) are becoming increasingly popular as administrative tools for professionals (Garritty and El Emam 2006). Exploitation of this new technology for educational purposes has lagged behind its social use, as has been the case with other innovations in information technology (Haigh 2004). However, a focus on the technology per se is misguided; a more important focus is to increase understanding of the social practices of learning and how these can be enhanced through technology (Roschelle 2003; Roschelle, Sharples and Chan 2005). Smørdal and Gregory (2003) found that this is by no means a simple undertaking. They report on a study which explored the potential of PDAs to provide easy access to net-based information in medical education. Despite careful planning and investment in infrastructure, the uptake of the facilities provided was disappointingly low, because there was insufficient motivation to change established social practices. PDAs, it seems, are not successful as a bolt-on extra but have more impact if they are an integral part of a clearly defined activity (Roschelle 2003).

 

The prospect of shared learning across professional groups creates an opportunity for all those groups to think in new ways about professional competence and the preparation of health workers to meet the public health challenges of the 21st century (Skills for Health 2006). The challenge of ensuring parity in assessment measures across so many diverse groups may demand the creative use of new technologies, yet the impact of using mobile devices in the assessment process is not necessarily benign. Technology can bring problems as well as solutions; it needs to be pragmatic and beneficial to those using it. Pilots such as this one can help to raise awareness of potential pitfalls and result in smoother implementation for projects which follow.

 

Adopting a social practice perspective this paper attempts to describe the social dynamics of the assessment process and to analyse the impact of the new electronic tool on student and link lecturer perceptions of the process. It therefore asks the following questions:

 

What conflicts of understanding are there in the current assessment process?

 

How did the use of the electronic marking tool impact on the shared understanding of the assessment process?

 

 

Research Design

 

 

The IT Project and design of enhanced assessment tools

 

 

Twenty-nine student midwives and their link lecturers were given an electronic version of the assessment documents and portfolio template on a PDA. These were used in their second eight-week clinical placement in the first year of the programme, in spring 2006. The devices were provided as part of an IT pilot project for ALPS (Assessment and Learning in Practice Settings), one of 74 Centres for Excellence in Teaching and Learning (CETLs) funded by the Higher Education Funding Council for England (HEFCE) to promote excellence in HE teaching and learning.

 

The necessity to redesign the clinical portfolio to a format which was workable on the small screen of the PDA led to a revised assessment tool which integrated the learning outcomes for the particular stage of training with the generic criteria for assessment. In the paper version these were presented separately but the use of dropdown boxes in the electronic forms supported a more concise and transparent approach. Previously the mentor had been asked to consider all the assessment criteria and give the student an overall percentage mark whereas on the PDA each criterion was considered and marked separately in the context of the learning outcomes. The final percentage was an average of these marks.
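To make the arithmetic concrete, the sketch below (in Python) shows how an overall placement percentage could be derived as the average of the separate criterion marks, as described above. It is a minimal illustration only: equal weighting of criteria, the function name and the example marks are assumptions for the purpose of illustration, not details taken from the pilot tool.

```python
# Minimal sketch of the grading arithmetic described above, assuming equal
# weighting across criteria. Criterion names are taken from the marking tool;
# the example marks are purely illustrative.

CRITERIA = [
    "knowledge base",
    "application of theory to practice",
    "clinical skills",
    "reflection",
    "communication",
    "ethical awareness",
]

def placement_grade(marks: dict[str, float]) -> float:
    """Return the overall placement percentage as the mean of per-criterion marks."""
    missing = [c for c in CRITERIA if c not in marks]
    if missing:
        raise ValueError(f"No mark recorded for: {missing}")
    return sum(marks[c] for c in CRITERIA) / len(CRITERIA)

# Example: a student marked separately on each criterion in the final interview.
example_marks = {
    "knowledge base": 62,
    "application of theory to practice": 58,
    "clinical skills": 65,
    "reflection": 70,
    "communication": 68,
    "ethical awareness": 66,
}
print(round(placement_grade(example_marks), 1))  # 64.8
```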

 

To promote full student participation in this process, a student self-assessment form was created whereby students estimated their grade banding for each criterion in advance of the final interview. The idea of students self-assessing their performance was implicit in the paper version of the final interview report, but the PDA version made this explicit. A SWOT analysis tool was also added to the intermediate interview form, thus making explicit to the student the requirement to analyse strengths and weaknesses in performance and identify opportunities for improvement. These amendments reflected practices which the link lecturers already sought to encourage, but the introduction of the electronic portfolio facilitated a new format and provided the opportunity to reify these processes in the structure of the interview.

 

Ethics

 

Students and link lecturers gave verbal consent to take part in the pilot, which was confirmed by signing their acceptance of a PDA. Students were informed that they could withdraw from the study at any point by returning the PDA and reverting to the paper-based portfolio. They were assured that withdrawal from the pilot project would have no detrimental effect upon their continuance on the programme or upon their subsequent clinical assessment and grading. Signed informed consent was obtained from students prior to the focus group interviews, and the link lecturers interviewed gave verbal consent.

 

Mentors were informed of the pilot but were not asked to give consent, since they were not required to personally accept a PDA and they were not involved in the research interviews. Mentors were not interviewed because of the difficulty of gaining NHS ethical approval within the time frame of the ALPS IT project (March to July 2006), and it is recognised that this is a limitation of the study.

 

Data Collection

 

At the end of the placement students were invited to attend focus groups to discuss their experience of using the PDAs for practice-based assessments. The researchers, who each facilitated one focus group, were two midwifery lecturers and a lecturer from outside the division. The external lecturer also conducted individual interviews with four midwifery link lecturers after the focus groups.

 

Three focus groups of eight students and one facilitator used the same question guide to explore: the experience of being assessed in clinical practice; a comparison of the paper-based and electronic portfolios; and the convenience and durability of the particular PDA used.

 

 

All groups were tape-recorded and transcribed verbatim. The question guide was used to good effect in all three focus groups, in that the key topics were explored without inhibiting the discussion and interaction between the students. There was evidence in all transcriptions that the facilitator provided opportunities for quieter students to contribute. The interviews with staff were not transcribed, but notes were taken from the audio files and the analysis focussed on the level of agreement with points raised by the students.

 

Analysis

 

Data from the focus groups was analysed using NVivo coding software. The three headings of the question guide, i.e. being assessed in clinical practice, comparison of the paper and electronic portfolios, and the convenience and durability of the particular PDA, became tree nodes from which further analytical categories were developed.

 

The first tree node, ‘being assessed in clinical practice’, was explored to distinguish the different meanings this experience had for the students. The second tree node contained the students’ responses to the question asking them to compare the paper and electronic portfolios. The final node related to the students’ individual use of the PDA portfolio to record their practice experiences as evidence of achievement in the required performance indicators. The issues relating to use of a PDA to support an electronic portfolio are beyond the scope of the current paper and have been reported elsewhere (Dearnley and Haigh 2006). This paper focuses on the students’ experience of being assessed in clinical practice and the impact of the enhanced assessment tool on this experience.

 

The data analysis was an interpretive process attempting to categorise what the students said in response to each question area in terms of the meanings they gave to what they were doing or what they perceived was being done to them. These categories were then critically analysed using a theory of knowing framework (Blackler 1995) to identify conflicts between assessment policy and assessment as practised (Blackler and Regan 2005).

 

The results remain an interpretation of the data by the researchers, but use of the NVivo tool provides a clear audit trail of how the data has been manipulated. Verbatim quotes have been selected to represent a common theme across all focus groups rather than the opinion of one student. Different approaches to the assessment process reported by the students are accepted at face value, whereas interviewing mentors or ethnographic observation might have given a different mentor perspective on the process.

   

Findings

 

Being assessed in clinical placement: shared understanding through dialogue

 

This section summarises the main themes of discussion when students were asked to comment on their experience of being assessed in placement. Each student had two experiences of this: one prior to the introduction of the PDA tools and one using the PDA tools.

There was data in each focus group which suggested that some students saw the assessment process as a learning opportunity. Students valued the assessment process as a means of providing feedback on their performance in practice. This process was described as reassuring and confidence-boosting by the students as well as an opportunity for the mentor to provide them with clear guidance on improvement:

 

Yes ‘cos my mentor actually sat down with me and said you are doing okay but you need to work on this and that and I had 4-5 weeks to work on that and really build it up (focus group 2)

 

They also saw the time dedicated to the assessment interviews as their time, when the focus was on their learning needs. This time was perceived as valuable because, generally, in clinical practice time to focus on the student was limited. The interviews are social practices which allocate scarce resources, i.e. staff time, to the process of assessment. Collaboration in this practice involves a shared acknowledgement that the assessment of students is an important practice.

 

And also I think as well you don’t always get the chance to discuss certain areas because you are so busy – so it gives you to chance to air any issues that you’ve got (focus group 2)

 

Overwhelmingly, however, the data focussed on fair summative evaluation and what enhanced or detracted from this. In particular there was evidence that a clear dialogue between the link lecturer, mentor and student can lead to a shared understanding which enhances the student perception of being fairly assessed.

 

The conflicted object

 

The process of assessing clinical practice described in this paper can be seen as a conflicted object (Blackler and Regan 2005), since the three parties involved may have different understandings of the process, i.e. student – personal learning, link lecturer – educational outcomes, and mentor – clinical performance (Phillips et al. 1993). We have described how a clear dialogue between the three parties enhances the student perception of the fairness of the process. The time dedicated to this discussion is seen as a valuable learning opportunity by students, offering them a clearer understanding of how they could improve their performance to become more valued members of the community.

 

However the effectiveness of this process was constrained in some cases; for example, some students found the process of clinical assessment intimidating. This was only discussed directly in the focus group facilitated by the external lecturer, but students’ sense of powerlessness in the process and the importance of getting on with the mentor were alluded to in all three. The assessment policy and the design of both the paper-based and PDA tools implied the full involvement of the student in the assessment process; however, in practice the student contribution is dependent on the encouragement and support of the other, more powerful, participants. Students sometimes felt excluded from the process:

 

I always feel as if I’m not there because they’re talking about you,

They’re talking about you to your lecturer

As if you’re not there – and I want to say ‘hello – I’m here!’ (focus group 3)

 

The focus group data highlights the difficulties students have in reconciling the university concept of fair assessment with the practice reality of needing to fit in and get on with the clinicians. The performance indicators and assessment criteria tools developed in the university strive to create a robust, transparent and equitable assessment process; yet in daily clinical practice getting on with the mentor and becoming a more accepted member of the community might be more meaningful to the students both in terms of learning and assessment:

 

I mean if you think they are doing something wrong or they are pushing you too hard or, you know sometimes I think it is hard to say it because you are scared of what reaction you will get or will it be used against you when it comes to grading, if you have not been getting on with your mentor. (focus group 1)

 

What students wanted from the assessment process was a fair and transparent evaluation of their performance in the placement. The main barrier to this, as they perceived it, was the idiosyncrasies of individual mentors which at times seemed to take precedence over the standard procedure as taught by the link lecturers. There is a hierarchical relationship between student and mentor which leaves the student in a powerless situation if the mentor grades the student intuitively according to her own criteria without considering how the student has performed in the outcomes set by the university:

 

they all had different ideas what they were as mentors and what they were looking for and how they graded and they all had different ideas and yet they all went and had this yearly update and everything, it’s still the luck of the draw (focus group 2)

 

One particular cause for complaint was the students’ perception that they were being judged according to some ‘ideal type’ student midwife rather than on the performance indicators for the placement and their level of study. This made some mentors very reluctant to give high grades to a first year student:

 

I found that I don’t think they understood that it is at level 1 that you are excellent I think they thought that you had to had an overall excellent based on everything, you know, not just on the outcomes. And they’d say oh you can’t get an A in your first placement. (focus group 1)

 

Link lecturers were seen as having the responsibility to ensure that mentors assessed using the correct criteria. In some cases the link lecturer was seen to steer the mentor through the process of considering each performance indicator in the context of the stage of training:

 

they had this long telephone conversation and the mentor did at one point say oh I can’t give her that higher percentage because she is a first year and she said ‘yes but is she doing this that and the other ?’ and the mentor said ‘well yes she is doing really well’ ‘ Well you can because she is a first year student she is not meant to have the skills of a third year.’ (focus group 1)

 

In the context of this study the tripartite interview process was an established social practice that had the effect of reasserting the university’s claim to manage the student’s learning. The assessment tool can be seen as an instrument of control in that it directs the interaction between student, mentor and link lecturer to focus on the performance indicators and assessment criteria set by the university. However the lecturers and mentors represent different voices within the activity system and use different ways of engaging with students. The link lecturers are familiar with an analytical process which considers student progress in terms of defined criteria. The mentors are not as familiar with this practice and seem to find it easier to assess the student intuitively on their overall performance as a member of the community of practice, thus the tendency to reserve higher grades for students with more experience.

 

Using the new assessment tool

 

There was general agreement across all three groups that the design of the assessment tool as presented on the PDA led to a fairer assessment and more detailed feedback for students in most cases. The new assessment tool made use of the enhanced features of the electronic format to integrate key aspects of the assessment process into one pragmatic tool. Thus, when a particular criterion was being assessed, e.g. clinical skills, the device flagged up a descriptor of the level to be expected at this stage of the programme. This prompted the mentor to look for evidence related to the level of assessment and from this to gauge whether the student’s self-assessment was accurate or in need of adjustment.

 

In fact the principles of the assessment had not changed but the design encouraged a closer and more interactive examination of the relevant criteria, leading to a shared understanding of the process. The tool encouraged dialogue on student performance in each criterion (knowledge base, application of theory to practice, clinical skills, reflection, communication and ethical awareness) leading to a shared understanding of different levels of ability:

 

The way I was graded from my first placement to my second placement was a lot better, ’cos my first one, although I think my mentor knew me and she was really good, when she evaluated me it was ‘that grade’ whereas with the PDA it was ‘what do you think to this? where would you put her within this banding?’ (focus group 2)

 

This suggests that the design of the tool had a positive effect on the way mentors related to the grading process, but this was not unanimous; some students complained that mentors had not used the tool correctly because the link lecturer had failed to give adequate direction.

 

But it has a lot to do with how your link lecturer works with your mentor, you know saying you have to give so many for this etc (focus group 3)

 

There was variation in the ability of mentors to use the device effectively. The clinicians were not supplied with their own PDAs but did need to write comments and sign on the interview record using either the student’s or the link lecturer’s PDA. To do this most needed guidance from the link lecturers. Several were reluctant to engage with this new tool to the extent that they dictated what they wanted to say via the link lecturer or student, confining their input to a signature:

 

She seemed like hassled about ‘how long is it going to take the interview?’ ‘is it going to take a lot longer’ and ‘what have we got to do with this’ (focus group 1)

 

Thus negotiation of meaning was only possible when the link lecturer was present to balance the power dynamic; otherwise the mentor’s understanding predominated. Students reported frustration when trying to influence mentor behaviour when the link lecturer was not present at the interview. This suggests that the tool per se is not enough to change practice, but it does provide a lever for change if used to stimulate dialogue between parties.

 

The reaction of the mentors to the PDA was one aspect where the link lecturers’ impressions differed from those of the students. The link lecturers reported that mentors were generally supportive of the new tool. This may reflect the lecturers’ more empathetic understanding of the mentors’ anxiety when using a new tool, or the tendency of mentors to appear more positive when the link lecturer was present. The short-term nature of this pilot also militated against the mentors’ acceptance of changes to the interview process to promote the effective use of the new tool.

 

From the student perspective it appears that there were advantages in using the new grading tool. Students were encouraged to self-assess their own performance in advance of the interview; this was then used as a basis for discussion of each aspect of the marking criteria. This ensured that the mentor considered each aspect individually and took the student’s self-assessment into consideration. Thus a basis for giving more constructive feedback to the student was created. The use of the new assessment tool was integral to the collaborative activity and so facilitated a learning process that changed the shared meaning of clinical assessment.

 

Discussion

 

The findings from this study suggest that mobile devices may have a place in mediating student assessment processes in clinical placements. The study has demonstrated that assessment tools which are acceptable to clinicians, educators and students can be designed not only to replace current forms but to enhance them. They can encourage a student-focussed approach and careful consideration of all aspects of competent practice. They can also provide discrete prompts to mentors regarding the levels of achievement expected at each stage of the programme.

 

Such a carefully directed approach to the conduct of assessment interviews should help address problems with clinical assessment identified in the literature. For example, Fraser’s concern that inadequate practice should be identified and failed (Fraser 2000b) is more easily addressed if the mentor has a tool to identify exactly what is inadequate. This will not always result in removal from the programme but will give the student clear guidance on how practice must be improved.

 

The interview structure of action-planning, review and evaluation complements the personal development planning approach which is now the right of every student in Higher Education (QAAHE 2000). It also provides a clear framework for a portfolio approach to assessment which has the potential to fully integrate university-based and work-based learning. This would support a more holistic view of competence as envisaged in the Assessment of Competencies in Nursing and Midwifery Education and Training (the Ace Project) (Phillips et al. 1993) and show development over time, which a mechanistic ticking of boxes approach does not.

 

One major limitation of the study was the lack of involvement of clinicians in the design of the new tool, and the lack of adequate training in its use. This was mainly due to the short time frame of the project. However, in view of the new NMC standards to support learning and assessment in practice (NMC 2006), which place the responsibility for safe assessment very clearly on sign-off mentors, it is imperative that mentors are comfortable with the assessment tools. This should involve better educational opportunities for mentors and collaborative design of assessment tools which support student learning and ensure fitness for practice.

 

The design of user-friendly tools is clearly going to be a key aspect of their acceptance in clinical settings. Accessibility and security are further aspects to consider when contemplating the change to electronic records. The forms used for the study were only available via the PDA, whereas students would have preferred to prepare their action plans and other documents on their home computers and then transfer them to their PDA. Students were understandably anxious about losing records, as PDA batteries run down quickly and data can be lost. One answer to this would be a web-based system whereby whatever was recorded on the PDA was instantly and safely stored on a central database. This is an important consideration if clinical practice is graded and contributes to the final award. Some form of safe electronic record keeping will be essential to monitor equity of process across professional groups in the assessment of multi-professional core competencies as envisaged by the ALPS project (HEFCE 2007).
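The sketch below (in Python, using only the standard library) illustrates the kind of web-based record keeping suggested above, in which each record saved on the device is immediately posted to a central store. It is a minimal sketch under stated assumptions: the endpoint URL, the payload fields and the function name are hypothetical, and the pilot itself stored records only on the PDA.

```python
# Minimal sketch of instantly storing a PDA interview record on a central
# database via a web service. The endpoint and record fields are illustrative
# assumptions, not part of the pilot described in this paper.
import json
import urllib.request

CENTRAL_STORE = "https://example.ac.uk/alps/assessments"  # hypothetical endpoint

def sync_record(record: dict) -> int:
    """Post one assessment record to the central store and return the HTTP status."""
    request = urllib.request.Request(
        CENTRAL_STORE,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status

record = {
    "student_id": "anon-001",          # anonymised identifier
    "placement": "spring-2006",
    "interview": "final",
    "criterion_marks": {"clinical skills": 65, "reflection": 70},
}
# sync_record(record)  # would only succeed if the hypothetical endpoint existed
```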

 

The short time span of this project, i.e. one eight-week placement, also impeded familiarity with and full exploitation of these devices. Lecturers in particular commented on their growing expertise in the process after the first two or three interviews. They felt familiarity with the tool allowed them to conduct a more student-focussed interview where the PDA was an effective tool, not the centre of attention.

Further projects to test a similar assessment tool for a core competency, e.g. communication skills, across professional groups may uncover the contested meanings underlying such concepts and may help to develop a shared understanding of best practice which might enhance multi-professional team-working.

 

Summary

 

This study has used a social practice, activity theory perspective to consider the pilot introduction of an enhanced assessment tool, accessible from a PDA, to facilitate learning and assessment in a practice setting. The assessment interview process in this particular context can be seen as a social practice linking student, mentor and link lecturer in the objective of monitoring and assessing the student’s practice performance. This process is rooted in, and made sense of within, the regulatory structures of health professional education, which arise out of a particular socio-historical context.

 

From the student perspective, the PDA assessment tool was seen to improve the standardisation of the assessment process by focussing discussion on specific performance indicators and clear assessment criteria. The design of the tool encouraged dialogue between student, mentor and link lecturer. It involved the student through the inclusion of a student self-assessment form, created a more transparent marking process and had the potential to produce more structured feedback to students.

 

PDAs have the potential to be useful and acceptable devices to facilitate clinical assessment interviews through carefully designed tools. Changing the design of assessment tools has the potential to change the interaction, and thus the shared meaning of assessment, for the actors involved. The significance of this study is that it highlights the role of assessment tools in creating a shared understanding of the assessment process rather than simply articulating that understanding. This needs to be considered when implementing major revisions of practice-based assessment such as that envisioned by the ALPS project (HEFCE 2007).

 

References

 

Blackler, F. (1995) Knowledge, knowledge work and organisations: an overview and interpretation. Organization Studies, 16(6), 1021-1046.

 

Blackler, F. and Regan, S. (2005) The Conflicted Object; Strategy as Organisational Practice. Lancaster University.

 

Cowan, D. T., Norman, I. and Coopamah, V. P. (2005) Competence in nursing practice: A controversial concept - A focused review of literature. Nurse Education Today, 25(5), 355-362.

 

Dearnley, C. and Haigh, J. (2006) Using mobile technology for assessment and learning in practice settings: the Bradford Pilot. In Assessment for Excellence, Northumbria EARLI SIG Assessment Conference Northumbria University.

 

DoH (2001) Working together, learning together: a framework for lifelong learning in the NHS. Department of Health Publications, London.

 

DOH (2004) The NHS Knowledge and Skills Framework (NHS KSF) and the Development Review Process. Department of Health Publications, London.

 

Engeström, Y. (2001) Expansive learning at work: toward an activity theoretical reconceptualization. Journal of Education and Work, 14(1), 133-156.

 

Fanghanel, J. (2004) Capturing dissonance in university teacher education environments. Studies in Higher Education, 29(5), 575- 590.

 

Fraser, D. M. (2000a) Action research to improve the pre-registration midwifery curriculum - Part 1: an appropriate methodology. Midwifery 16(3), 213-223.

 

Fraser, D. M. (2000b) Action research to improve the pre-registration midwifery curriculum Part 3: can fitness for practice be guaranteed? The challenges of designing and implementing an effective assessment in practice scheme. Midwifery 16(4), 287.

 

Garritty, C. and El Emam, K. (2006) Who's Using PDAs? Estimates of PDA Use by Health Care Providers: A Systematic Review of Surveys. Journal of Medical Internet Research, 8, e7.

 

Haigh, J. (2004) Information technology in health professional education: why IT matters. Nurse Education Today, 24(7), 547-552.

 

HEFCE (2007) Complete list of funded CETLs. http://www.hefce.ac.uk/Learning/tinits/cetl/final/cetllist.pdf updated 1st March 2007, accessed 9th April 2007.

 

Moore, D. and Way, S. (2004) A Report Prepared for the Midwifery Committee. Nursing and Midwifery Council, Pre-registration Midwifery Education Review Steering Group. http://www.nmc-uk.org/aFrameDisplay.aspx?DocumentID=1166

 

NMC (2004) Standards of Proficiency for Pre-registration Midwifery Education. Nursing and Midwifery Council, London.

 

NMC (2006) Standards to support learning and assessment in practice. NMC standards for mentors, practice teachers and teachers. Nursing and Midwifery Council, London.

 

Phillips, T., Bedford, H. and Robinson, J. (1994) Education, Dialogue and Assessment: creating partnerships for improving practice (The ACE Report). English National Board for Nursing, Midwifery and Health Visiting, London.

 

Phillips, T., Schostak, J., Bedford, H. and Robinson, J. (1993) Assessment of Competencies in Nursing and Midwifery Education and Training (the Ace Project). In Research Highlights, English National Board for Nursing, Midwifery and Health Visiting, London.

 

QAAHE (2000) Policy statement on a progress file for Higher Education. http://www.qaa.ac.uk/academicinfrastructure/progressFiles/default.asp accessed 08.03.2005.

 

QAAHE (2001) Subject benchmark statements: Health care programmes. The Quality Assurance Agency for Higher Education.

 

Roschelle, J. (2003) Keynote paper: Unlocking the learning value of wireless mobile devices. Journal of Computer Assisted Learning, 19(3), 260-272.

 

Roschelle, J., Sharples, M. and Chan, T. W. (2005) Introduction to the special issue on wireless and mobile technologies in education. Journal of Computer Assisted Learning, 21(3), 159-161.

 

Skills for Health (2006) Ensuring and enhancing the quality of healthcare education, interim standards. Department of Health, London.

 

Smørdal, O. and Gregory, J. (2003) Personal Digital Assistants in medical education and practice. Journal of Computer Assisted Learning, 19(3), 320-329.

 

Thornton, P. and Houser, C. (2005) Using mobile phones in English education in Japan. Journal of Computer Assisted Learning, 21(3), 217-229.

 

Wanless, D. (2004) Securing good health for the whole population: final report. RCGP Summary paper 2004/02. Royal College of General Practitioners.

 

 

*Corresponding author

 

ISSN 1750-8428 (online) www.pestlhe.org.uk

© PESTLHE