Proceedings from the HEFCE/DENI FDTL Geography National Conference

Improving Teaching, Learning and Assessment

Professor Graham Gibbs
Centre for Higher Education Practice, Open University, and FDTL National Co-ordinator

 

Abstract

This paper will argue that the mechanisms which underlie quality in research have exact parallels which show us how to improve the quality of teaching, learning and assessment. These mechanisms operate in two main ways: within institutions, and across institutions through disciplines. Most academic departments in most disciplines work within existing institutional frameworks, without influencing or challenging them, and work across institutions in relation to their research but not in relation to their teaching. Geography works across institutions to share and debate best teaching practice far more than most disciplines do. The Funding Councils in the UK are embarking on national initiatives which will support developments both in institutional strategies to support teaching and learning and in networks which will support collaboration within disciplines across institutions. Possible foci of attention to improve teaching and learning within institutions and across institutions will be outlined. Finally, the paper will suggest a possible focus of attention for attempts to improve assessment.

Improving Research

The Funding Councils in the UK do not seem especially worried about the overall quality or quantity of research. The UK has been the most cost-effective country in the world, for many years, in terms of producing research papers, and our international standing in terms of research is tolerably high. There is a problem of how to allocate finite resources to support research but there seems to be no shortage of good research to support. In contrast there is a perception that there is a problem with the quality of our teaching. Students complain, employers complain, and ever more intrusive teaching quality assurance mechanisms are imposed both inside and outside institutions because there is insufficient confidence in the quality and standards involved. Perhaps teaching has something to learn from research and we should look at what it is about the way the entire research enterprise is conducted which achieves such high standards so cost-effectively. The main things I would pick out are:
  1. Training researchers
  2. Employing (only) well trained researchers
  3. Retaining, rewarding and promoting (only) excellent researchers (through peer review)
  4. Funding facilities and equipment
  5. (Institutional) funding of research development work (peer reviewed)
  6. Funding research (peer reviewed, highly competitive)
  7. Publication of the outcomes of work (peer reviewed, competitive)
  8. Peer review of overall research for future funding
  9. A scholarly approach which builds on work which has gone before (including reading the literature!)
  10. A scholarly approach which involves sharing and discussion (seminars, visits, exchanges...)
  11. An emphasis on 'going public' which shares knowledge, builds on past knowledge, and provides platforms for both debate and peer esteem.
I would single out peer review and funding as central issues. What is judged by peers is valued and what is valued is usually pursued with vigour and intelligence.

It might be argued that good researchers make good teachers. If this were true we would not have a problem with teaching. The research evidence makes it clear that the scholarship of discovery and the scholarship of teaching are such distinct phenomena that the correlations between measures of each are usually zero and, in recent years, negative (for reviews see Feldman, 1987 and Brew and Boud, 1995). The enduring nature of the myth of a strong relationship between research and teaching has been commented on by research reviewers in the US (Terenzini and Pascarella, 1994).

Improving Teaching

If we want to improve teaching we have to invest not in more or better research but in mechanisms such as those which support research so well. In particular:
1) Training teachers
The new Institute for Learning and Teaching (ILT) will help here, but at entry teachers will still have no training. The training they are likely to get will be, at most, a part-time one-year Postgraduate Certificate, compared with a three-year PhD. And the standard of evidence involved in judging the achievement of professional membership of the ILT may be a little lower than that involved in judging a candidate for a doctorate.

2) Employing (only) well trained teachers
Appointment practices are not yet very effective at identifying good, or potentially good, teachers. Mechanisms exist, such as the 'pedagogical colloquium' used in Ivy League institutions in the US, and are being developed in the UK through projects supported by the Fund for the Development of Teaching and Learning. These mechanisms can operate at the department level within existing institutional policy guidelines. ILT 'Associate Membership' is likely to be achieved by most Teaching Assistants before they apply for their first Lectureship, and that would provide a convenient filter, at least in terms of vocation, if not standards.

3) Retaining, rewarding and promoting excellent teachers (through peer review).
A survey I conducted in the early 1990s showed that while over 90% of institutions claimed to include teaching amongst promotion criteria, only about 10% of decisions were actually made on the basis of teaching excellence (Gibbs, 1995). One institution stated that there had been no promotion which involved teaching excellence 'in living memory'. Colleges fared even worse than research institutions. 'Learning from Audit 2' (HEQC, 1996) claimed that the situation was getting worse, not better. The Robbins Report highlighted this phenomenon as a 'potential problem' 35 years ago.

Again there are existing, well documented mechanisms for doing this which can operate at the department level without changing institutional rules. An international survey of the mechanisms academics had most confidence in to improve teaching rated reward and promotion for excellent teaching first in every country surveyed (Wright, 1995). In contrast, the mechanism ranked last in every country was 'speakers on issues of higher education'.

4) Funding facilities and equipment
Compared with research, a much higher proportion of teaching funding is spent on people than on facilities. It is unclear to me that this is sensible given the appalling quality and overcrowded nature of many learning spaces. My previous university was almost entirely dependent on its students learning somewhere else and could only provide chairs for about a quarter at any one time. The notion that they could provide computers when they could not even provide chairs seemed somewhat fanciful.

5) (Institutional) funding of teaching development work
Increasingly institutions have their own internal equivalent of the FDTL, supporting innovation in teaching (even if this sometimes only means using IT) and, less often, disseminating it effectively across the institution. Very occasionally departments do the same thing at a very local level. I know some large departments that have invested substantial sums and entire posts in this. Investment in sabbaticals and 'release from teaching' to develop research is common. 'Release from research' to develop teaching is not a concept I have yet encountered.

6) Funding teaching
Imagine the impact on the quality of course design if no course would be funded to operate unless its plans had been through a competitive peer review exercise in which only one in six were approved.

7) Publication of the outcomes of work
Imagine that the funding mechanism described in (6) above relied on published outcomes from previously funded courses, i.e. that a new application for a course by an individual lecturer would only be considered if rigorous evaluations of previously funded courses had been published and listed on a c.v.

8) Peer review of overall teaching for future funding
Vice Chancellors recently rejected the Funding Council for England proposal that teaching funding should be linked to Teaching Quality Assessment (TQA) ratings. They rejected it not on principle, but because the ratings were not robust enough. The logical thing to do would be to make them more robust, as they have tried to do with Research Assessment Exercise ratings, but of course they have decided to do the opposite.

9) A scholarly approach which builds on work which has gone before (including reading the literature!)
The FDTL has highlighted the extent to which everybody re-invents the wheel when it comes to teaching. We have encountered departments that were praised in their TQA reports for their innovative approach to an aspect of teaching and then succeeded in getting FDTL funding to disseminate their practice. Only then did they discover that they were actually way behind the curve and that many others already had more developed practices. Neither the TQA panel nor the department had bothered to read the literature to discover what was going on. This is a little harsh because the people with the better practices had usually not bothered to write up their practice and publish it in the first place. Research would have trouble progressing if no-one wrote up what they did or read what others did, but that is how we behave in teaching. Geography has its Journal of Geography in Higher Education and now its Geography Discipline Network web site. This is way ahead of most disciplines but, in comparison with the number and specialism of its research journals, it can only be seen as a first step. A recent review of discipline-specific journals on teaching (Weimer, 1993) described them as operating "in a sort of splendid isolation with respect to any writing or research done outside the field" (Healey, 1998).

10) A scholarly approach which involves sharing and discussion (seminars, visits, exchanges...)
The FDTL has again emphasised how little discussion of teaching goes on. GDN workshops have, on several occasions, provided the first opportunity a department has ever offered its staff to talk about their teaching. If you heard that a department was organising its first ever research seminar you would be rightly suspicious of the standard of their entire research enterprise. Many disciplines (other than Geography) have no national forum for discussing teaching at all. And how often do lecturers visit colleagues in other institutions to discuss their teaching or arrange to work in a team at another institution to learn more about how they teach?

11) An emphasis on 'going public' which shares knowledge and provides platforms for both debate and peer esteem.
It is common to advise a young researcher to: 'find an appropriate conference, present a poster or a short paper, and get yourself seen and known'. Going public is crucial to becoming a member of the 'community of scholars'. Most learning is socially organised and constructed through communities of many kinds. Geography has its 'Higher Education Study Group', but is this a 'community of scholars' for the scholarship of teaching in the same mould as your community of scholars for the scholarship of discovery?

Working through institutions and working through disciplines

Most teachers relate to their institution through their department and to their discipline across institutions. Educational development activity has not been organised in this way, but has engaged individuals or groups through short-term projects independently of either departments or disciplines. The FDTL has attempted to work more through disciplines and the GDN is an excellent example of how this can be done. All the FDTL Languages projects have collaborated through the Centre for the Improvement of Language Teaching (CILT), a permanent organisation based in London. But the GDN is a project with a limited life and cannot leave behind it the kind of permanent networking which research is based on or which CILT represents.

At the level of the department, working within institutions you can:

Most of this can be done within existing institutional policies and rules - it is up to Geography Departments to get on with it.

At a national level, across institutions in a Geography discipline network, you can:

There will be an HEFCE initiative, starting in 1999, to support discipline networks. Geography is in a good position to make full use of this initiative. The challenge is to manage this not as a project, but to create a permanent pattern of networking which is integral to the way Geography functions in higher education in the UK and which lives and thrives once project funding is over.

Improving Assessment

The above analysis focuses on how improvement might come about, but not on what might change. In making some comments about improving assessment I'd like to focus on what I think needs changing, rather than how. I am doing this because the current pre-occupation, and one which will dominate for a while yet, is with learning outcomes and the possibility that Geography, either with or without the QAA, may move towards specifying the skills or knowledge which should be assessed. In an attempt to counteract this emphasis I'd like to focus on what assessment is for, and to recommend an increased emphasis on quite different functions from those which obsess the QAA.

When I am working with teachers or departments I analyse assessment systems in terms of five main functions:

  1. capturing student attention and effort
  2. generating appropriate learning activity
  3. providing feedback to the student
  4. allocating marks - to distinguish between students or to distinguish degree classifications
  5. accountability - to demonstrate to outsiders that standards are satisfactory
The pressures of increased marking loads have shifted the emphasis back towards exams and tests, including computer-based assessment: that is, towards function 4. The QAA is almost entirely concerned with function 5. I suspect that we need very little of 4 and 5: a statistical analysis of students' marks in science at the University of Portsmouth showed that 5% of their current volume of marks would produce the same degree classification. You could probably assess 75% of your degree's learning outcomes in one assignment: your final-year project. Other disciplines and other institutions have assessment systems which rely on a very small number of tasks or tests for 4 and 5.
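
The kind of check behind the Portsmouth figure can be illustrated with a small simulation. The sketch below is a hypothetical illustration only, not the Portsmouth analysis itself: the synthetic marks, the 70/60/50/40 class boundaries and the classify and agreement helpers are all assumptions introduced for the example. It simply asks how often a degree class computed from a random 5% of a student's marks matches the class computed from all of them.

```python
# Illustrative sketch only: synthetic marks, typical UK class boundaries and
# the helper names below are assumptions, not the Portsmouth method.
import random

BOUNDARIES = [(70, "First"), (60, "2:1"), (50, "2:2"), (40, "Third")]

def classify(mean_mark):
    """Map a mean percentage mark onto a degree class."""
    for threshold, label in BOUNDARIES:
        if mean_mark >= threshold:
            return label
    return "Fail"

def agreement(n_students=500, n_marks=100, sample_fraction=0.05, seed=1):
    """Proportion of students whose degree class is unchanged when only a
    small random fraction of their marks is used."""
    rng = random.Random(seed)
    same = 0
    for _ in range(n_students):
        ability = rng.gauss(58, 8)  # student's underlying level (assumed)
        marks = [min(100, max(0, rng.gauss(ability, 6))) for _ in range(n_marks)]
        full_class = classify(sum(marks) / len(marks))
        sample = rng.sample(marks, max(1, int(sample_fraction * n_marks)))
        sample_class = classify(sum(sample) / len(sample))
        same += (full_class == sample_class)
    return same / n_students

if __name__ == "__main__":
    print(f"Agreement using 5% of marks: {agreement():.0%}")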

What supports learning is functions 1-3, and we need these functions to be performed all the time. And while we cannot afford to do it the way we used to, we have to find ways to keep these functions alive and well. Some of the innovations described in the case studies on the GDN web site involve using assessment to grab student learning hours, to engage students in activity which inevitably produces learning, or to give them some sense of how well they are doing even if the lecturer does not have time to mark things herself. We need to use course requirements, portfolios, self and peer assessment and a range of other devices which are strong on functions 1-3 but which do not need to address functions 4 and 5 at all.

If I were allowed a single message to improve student learning it would be to manipulate the assessment system so that functions 1-3 were performed as often as possible. Evidence from diary studies suggests that students are almost exclusively oriented to the assessment system, spending as little as 10% of their time by year three on work which is not assessed. If we want to change student learning, that is where we have leverage. Being pre-occupied with function 5 will not impact student learning in helpful ways.

To find out how to emphasise functions 1-3, read the literature. Start with the GDN publication on assessment. Read the case studies on the GDN web site and visit the authors to find out how they do it. Involve others in working with you. Keep talking to each other. Evaluate it. Write it up. Publish it. Hold a seminar about it. Seek funding to develop it further and disseminate it to your colleagues in other institutions. Then apply for promotion within the new promotion mechanisms you have developed.

And the very best of luck to you.

References

Brew, A. and Boud, D. (1995) Research and learning in higher education. In B. Smith and S. Brown (Eds.) Research, Teaching and Learning in Higher Education. London: Kogan Page.

Feldman, K. (1987) Research productivity and scholarly accomplishment of college teachers as related to their instructional effectiveness: a review and exploration. Research in Higher Education, 26, pp. 227-298.

Geography Discipline Network (1998) Assessment in Geography. Cheltenham: Geography Discipline Network.

Gibbs, G. (1995) How can promoting excellent teachers promote excellent teaching? Innovations in Education and Training International, 32, pp. 74-84.

Healey, M. (1998) Developing and disseminating good educational practices: lessons from geography in higher education. Paper presented to the International Consortium of Educational Development 2nd International Conference: Supporting Educational and Faculty Development within Departments and Disciplines. Austin, Texas.

HEQC (1996) Learning from Audit 2. London: Higher Education Quality Council.

Terenzini, P.T. and Pascarella, E.T. (1994) Living with myths: undergraduate education in America. Change, January/February, pp. 28-32.

Weimer, M. (1993) The disciplinary journals of pedagogy. Change, November/December, pp. 44-51.

Wright, A. (1995) Teaching improvement practices: international perspectives. In A. Wright (Ed.) Successful Faculty Development: Strategies to Improve University Teaching. Bolton: Anker.

