Research Policy and Planning: The journal of the Social Services Research Group – Vol 17 (1) 1999
Assessing outcomes in child welfare: a critical review
Martin Stevens, Social Services Research and Information Unit (SSRIU), University of Portsmouth
This article starts by looking at some of the reasons for studying outcomes and at some of the ways in which they have been defined and studied in social care and child welfare. The physical science approach to studying outcomes is described and a critique developed, based on a brief examination of the concept of an outcome. What can be known about the outcomes of child welfare interventions is compared with the concept of causality in everyday physical processes. It is argued that there are fundamental differences between the ways in which the concept has to be applied in these different settings and that this implies an important distinction between the natural and social sciences, crucial for assessing outcomes in child welfare. This argument against the simple adoption of the physical science model is followed by a brief description of two approaches that developed out of such critiques. Firstly, interpretivism/constructionism, which takes as its starting point the interpretation of subjective meanings of experience and, secondly, critical theory, where research is characterised as part of a process of change. An attempt is made to resolve the central tension underlying these different approaches by proposing that the results of research may be thought of as offering explanation at different levels. Links between the different levels may be examined by combining methods and approaches.
Since the 1960s, assessing outcomes has been increasingly important in social work generally, and in child welfare and protection in particular, as a result of concerns about cost, cases of child death such as that of Maria Colwell and the growth of the consumer movement (Parker et al., 1991). These authors make the point that the importance of outcome measures to researchers is ‘self-evident’ (1991:11). Hill et al. (1996) also point to the importance of the topic for researchers. In recent years there has been a move towards finding an evidence base for social work practice. Central Government, via the Department of Health (DoH, 1994), has initiated, in collaboration with a consortium of local authorities in the South West region, the Centre for Evidence-Based Social Services (CEBSS), with the remit of promoting the use of research-based evidence in the development of social work practice. As we shall see, this is based on a particular understanding of what counts as evidence (Macdonald et al., 1992; Sheldon, 1994).
But why study outcomes at all? This may appear self-evident, but a brief look at the reasons reveals much about the way in which they may be studied. One reason is to link actions to objectives to provide a means of evaluating practice (Parker et al., 1991). In other words, to identify the best means of making child care decisions and the most appropriate types of intervention. However, as Gibbons (1997) points out, the objectives of the child care system and the problems it seeks to address are by no means clear, and this introduces an element of controversy. If objectives are not clear it is difficult to evaluate the impact of a service or intervention. Different stakeholders may have distinct aims and expectations about outcomes, and therefore a variety of reasons for studying them. Political motivations may, for example, be linked to the need to save money or make decisions about the best use of resources. Practitioners may want to know the safest approach to child protection so that they can protect themselves from the negative consequences of a child death. Families would probably want to know that the interventions they undergo have a good chance of resolving their problems.
Having multiple perspectives and different objectives creates a difficulty because the different agendas will influence how outcomes are defined and how the relationship between intervention and outcome is postulated (Smith and Cantley, 1988). For all these stakeholders outcomes are a crucial issue. It is thus important to be clear about the level of confidence to be placed in the findings of research and the types of statement which can be made about the outcomes of child welfare or indeed any intervention in social care.
Analysis of the concept of outcomes in research
Some researchers (Whitaker et al., quoted in DoH 1991:66; Parker et al., 1991; Nocon and Qureshi, 1996) have argued that it is more realistic to look for ‘patterns of benefit and loss’ when trying to establish outcomes in both social care generally and child welfare than to try and find a uni-causal model (Frost, 1989). Hill et al. also suggest that in child welfare outcomes: “the progress of children and families who have major difficulties may be complex with improvements and setbacks, depending on which aspects of development are considered” (1996:257).
Much work is therefore needed to establish how an intervention which happens at a particular time in the history of a child and a family, is connected with various states of affairs which occur much later. Several writers make a distinction between outputs and outcomes. Outputs tend to be defined as those aspects of the situation related to the intervention, such as the receipt of services or the production of a plan. Outcomes, on the other hand, tend to be linked more to the impact on the client or, in this case, the family (Nocon and Qureshi, 1996).
For example, Hudson et al. (1996) describe a model of programme outcomes where there is a logical progression from inputs and resources, through tasks and activities, to immediate results/outputs, immediate results/outcomes, intermediate results/outcomes and ultimate results and outcomes. Other ways of analysing the concept of outcome have been suggested which also use the terms immediate, intermediate and ultimate or final, and make the distinction between outcomes and outputs (Nocon and Qureshi, 1996). However, as these authors point out, not all writers use the terms in the same ways. What is needed to establish the link between the programmes or interventions and the various features which are seen as outcomes (for example, ‘child protected from abuse and neglect’, Hudson et al., 1996:15) is often not examined. The impact of accepting the notion that child abuse and child protection are socially constructed, which must affect the nature of the link between interventions and outcomes, has also been ignored (Parton, 1996).
The physical science approach to researching outcomes
One approach to researching outcomes which has gained a high degree of acceptance is the direct adoption of the methods of the physical sciences, in other words treating outcomes of child welfare interventions as if they were the outcomes of a physical intervention. Such researchers take the position that experimental design, with random allocation of people into experimental and control groups and employing quantifiable, behavioural outcome measures, is the best approach to researching outcomes, or in their terms ‘effectiveness’.
There is no doubt that randomly-allocated equivalent group designs provide the most persuasive and potentially irrefutable evidence of effectiveness (or ineffectiveness) and such studies are essential (Macdonald et al., 1992:618)
Newman and Roberts (1997) are proponents of essentially the same view in child care research: that outcomes can only be determined by the use of the methods of physical science in the form of randomised controlled trials or, at the very least, their approximation in the natural setting. Sheldon (who collaborated with Macdonald et al., 1992) admits that before the application of scientific methods can be possible, there is a need for ‘developmental, illuminative and action research that seeks to draw lessons for practice as it proceeds’ (1988:100). He has no doubts, however, that natural scientific methods are the final arbiter of what works in social work, and that it is entirely possible to determine objective outcome measures and thence laws connecting them to interventions.
The same line is taken by Gibbons (1997), who claims that experimental methods would resolve the issue of whether the child protection system was linked to a reduction in the number of episodes of repeated non-accidental child harming. She describes the type of experiment which would need to be carried out, with random allocation of referrals for child harm into groups which received or did not receive child protection plans, and then monitoring the number of episodes of repeated harm. In her view this would count as evidence of the outcomes of the child protection process. Gibbons is not actually advocating this approach (the ethical considerations would probably militate against it); however, the implication is clear that such an experiment, if it could be attempted, would come up with causal explanations of the differences in the numbers of episodes of abuse with a known degree of accuracy and certainty.
Essentially this is to apply the methods of physical science to the problems of social welfare interventions, albeit in the frustratingly complex and ‘messy’ real world environment. On this view, the link between intervention and outcome can be given the status of a scientific law: that the application of this intervention in these types of circumstance will produce this rate of child abuse cases, this number of looked after children and so on. This would be most expedient for political and policy purposes, but has many problems which are described below.
Outcomes and causality
One major criticism of this approach is that it represents an oversimplification of causal explanation in the human sciences. Outcomes have been defined basically as the consequence of an intervention or service (Nocon and Qureshi, 1996; Parker et al., 1991) and as such are linked with the concept of causality. Indeed the ways in which Macdonald et al. (1992) describe outcome research as being focused on the effectiveness of social work makes this link explicit (effectiveness meaning the ability to cause positive effects). This is worthy of some analysis, as the issue of cause and effect in social research is problematic.
By looking at the physical world, especially a simple situation, we can start to build a critique of a dogmatic application of experimental methods or the physical science approach to outcome research. Causality can be seen as a category which distinguishes the regularity with which night follows day (which is not a causal link) from the regularity of the behaviour of billiard balls (which is causally determined). In other words, this is to distinguish predictive success from causal connection (Greenwood, 1986). Intuitively, it is possible to grasp that the change in position and speed of the red billiard ball after it has been hit by the white is the effect or outcome of the collision. Given a thorough knowledge of the speed of the white, the type of surface and the composition of the red ball (the pre-existing conditions), a clear causal chain may be established with an identifiable outcome.
What can be known about this, therefore, is the description of what happened (the red ball moved at this speed, in this direction, for this amount of time) and that this was the result of the red being hit by the white ball (notwithstanding the intention of the player). Thus the situation may be divided into preconditions: white ball moving towards the red at x metres per second, at y angle, on such a surface, etc.; event: white strikes red; outcome: red moves off at z metres per second at q angle. Repeated observations can establish that, given identical preconditions, the identical result will occur, within the limits of experimental error and allowing for microscopic influences, and therefore prediction of the outcome becomes possible in a given case. In more complicated physical systems or ‘multi-causal situations’, such as the weather, the situation is basically the same except that it is much more difficult to apply the concept of causality in practice because of the number of factors involved and the different ways in which they interact (Behi and Nolan, 1996).
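The deterministic structure described above — identical preconditions always yielding the identical outcome — can be sketched in a few lines of code. This is a purely illustrative toy, not real billiard-table physics: the function name, the momentum-transfer rule and the friction factor are all invented for the sketch.

```python
def collision_outcome(speed, angle_deg, friction=0.2):
    """Toy model of the white ball striking the red.

    Identical preconditions (speed, angle, surface friction)
    always yield the identical outcome: the deterministic
    causal chain described in the text. The numbers are
    illustrative, not real physics.
    """
    # Assume most momentum transfers to the red ball,
    # reduced by a surface-friction factor.
    red_speed = speed * (1.0 - friction)
    red_angle = angle_deg  # red departs along the line of impact
    return round(red_speed, 3), red_angle

# Repeated 'observations' under identical preconditions
# produce one and only one outcome:
results = {collision_outcome(2.0, 30.0) for _ in range(100)}
assert len(results) == 1
```

The contrast with the child welfare case is precisely that no such function could be written: the preconditions, the event and the outcome all resist this kind of fixed description.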
Establishing preconditions is not a trivial exercise (Anscombe, 1975) and there are debates about how to characterise even these situations in terms of the logical form of statements about causation and necessary or sufficient conditions. These are, unfortunately, beyond the scope of this article (but see for example Sosa, 1975). However, assessing the cause and effect relationships (what cause will have what effect) in actual events of this kind, within useful practical limits of accuracy, is relatively unproblematic when dealing with objects of everyday size, which are not travelling at very high speed (to avoid the uncertainty of quantum effects and the problems of relativity). It may be that we can use this concept without being able to analyse it (Anscombe, 1975).
In contrast to physical systems, however, causal chains in child welfare, and social action generally, are much more difficult to pinpoint: identification of the problem, the intervention and the outcome are all problematic. Several authors (Thorpe, 1989; Parton, 1991; 1996) have argued that the term ‘child abuse’ is a social construction and that, as such, it is not possible to have a single, objective description of the initial problem. For example, take a case where a child has been excluded from school after behaving aggressively for an extended period, and the parents do not know what to do with her. What is the problem? The behaviour of the child? The situation of the family (perhaps in poverty and isolated)? The unemployment of a parent? The mental illness of a parent? Racial abuse in the neighbourhood? The rules of the school? Clearly these are not mutually exclusive descriptions of the problem, but they do show that there are a number of ways of characterising the situation of families, and the description chosen will be crucial in determining the type of intervention which is thought to be appropriate. Thus not only is it very difficult to describe the preconditions, as there will be different perspectives on the situation, but explanation of the impact of prior events also becomes difficult.
Secondly, having established that preconditions are not easily identifiable, the intervention chosen is also not something which can be distinguished precisely (Smith and Cantley, 1988). For example, one intervention for the family may be regular sessions of Child and Family Therapy. What this actually represents will be influenced by the relationship between the child and the child guidance professional, the attitude of the parents to the intervention, the impact of being away from school and so on. In other words, the experience and meaning of the intervention will vary according to the perspective and values of the people involved. Respite care, as another example, may be perceived very differently by the child going into the children’s home for the weekend and the parents having a welcome break from looking after him or her.
Thirdly, describing the outcome is problematic. For example, changes in the family situation can have many competing descriptions depending on the perspective of the people describing them (Lupton and Stevens, 1997). These authors quote the example of the outcome where the baby of a 16 year old is adopted by the grandparents. They question whether this should be viewed as a successful or unsuccessful outcome, in that it may be described as allowing the 16 year old to live a full life whilst ensuring the safety of the baby, or as the failure to keep a baby with her natural mother. In other words, the values of the person defining the outcome alter its description and evaluation. Professional definitions of outcome have typically taken precedence.
This is not intended to deny the possibility of shared meanings for problems, interventions or outcomes. Indeed these aspects of the social world are created by intersubjective agreement. However billiard balls and the weather have a physical dimension in addition to shared meanings and this affects the ways in which we understand the relationship between prior conditions, events or interventions and outcomes in the two domains.
One reaction to the difficulties associated with the physical science option is to adopt an approach which takes as its starting point the problem of accounting for and understanding individual experience (interpretivism). This is based on different fundamental beliefs about the nature of the social world and the ways in which we can know about it (ontological and epistemological standpoints) from those held by researchers adopting a natural science approach (Guba, 1990). Within interpretivism, no independent existence can be ascribed to the situation (Guba and Lincoln, 1994; Shipman, 1988), and the relationships within it would be characterised differently (Hammersley, 1989). According to this view, the preconditions, the intervention and the outcome can only be seen as interpretations or constructions of interactions or a series of shared understandings; they do not have an independent existence. The aim always is to interpret and understand the subjective experience of others. Taylor (1971) argues that any science of humanity has to be interpretative because of the nature of the subject (humans) as a ‘self-interpreting animal’ and the role of language. Subtle distinctions in definitions of the ways in which we can understand each other make for a whole range of slightly different schools within this broad approach (see Annells, 1996 and Schwandt, 1994 for a more detailed discussion).
In child welfare, problems would be seen to reside in the interactions between all the people involved, as by trying to describe the problem the social worker will be interacting with the family members and will influence the situation. More importantly, the social worker’s and family members’ descriptions of the problem will be mediated by their values and the relationship between the protagonists, which again is something which can be seen not to exist independently of that relationship. In the example quoted above of the child behaving aggressively, a researcher using this approach would be concerned to obtain an understanding of the meanings ascribed to the behaviour and the different perspectives on the situation, as opposed to identifying causal factors in the behaviour. The aim would be to use individual cases to typify universal aspects of a situation (Denzin and Lincoln, 1994).
Critical theory/participative approaches
An alternative response to criticisms of the positivist approach, based on an understanding of social reality as being ‘shaped by congeries of social, political, cultural, economic, ethnic and gender factors’ (Guba and Lincoln, 1994:110), is that of critical theory. Within this approach, research should be a ‘catalyst which leads to the transformation of the social order’ (Fay, 1993:33). For Trinder the essence of this approach is an awareness of the power relations inherent in the research process:
Research is not posited as a neutral fact-finding activity. Instead research, researchers and research participants are located in a world where power is unequally distributed between gender, classes, ethnic groups, professionals. Lives and life-stories are significantly structured by these unequal power relations, with significant implications for uncovering ‘truths’. (1996:238)
Critical theorists, like interpretivists, stress the importance of the relationship between researcher and researched and hold that findings are essentially created by this interaction (Guba and Lincoln, 1994). Also in common with the interpretivists, ‘qualitative’ data gathering techniques are the methods of choice. However, there are difficulties around privileging the subject’s views and accounting for the interaction between the researcher and the subject, which is historically situated and as prone to power imbalances as any other interaction (Trinder, 1996). In the example of the excluded, aggressive child, critical theorists would want to involve the family in the research process and help them to use the findings to influence the institutions (school, social services) in their approach to the situation. There are, of course, many sub-schools and issues within this approach (Kincheloe and McLaren, 1994). However, the important aspect for this article is that we are still left with the central dilemma of how to evaluate different claims to ‘truth’, because of the emphasis on the interaction between a particular researcher and research participant (Trinder, 1996).
Tensions in research
To sum up the debate as outlined so far: large numbers of factors potentially influence outcomes in child welfare, or indeed in social care generally (Frost, 1989), one of which is the influence of agency on the situation. When dealing with outcome in social care, including child welfare, many of the typical events which contribute to outcome are actions by agents (Lupton and Stevens, 1997), and therefore the impact of external inputs has to be considered. For Greenwood, the aim is to provide a theoretical account of how agents are enabled to ‘determine their own actions for the sake of reasons’ (1988:110) rather than to try and ascertain a set of sufficient conditions which make actions inevitable.
In principle physical science can handle complexity, as it does in, say, meteorology or cosmology. However, there are more intractable problems in accounting for agency and subjectivity which I would argue make it impossible simply to apply the experimental approach to understanding the relationships between interventions and ‘outcomes’ thought of in terms of their impact on the actions of others. It is this fundamental issue which lies behind the many critiques of the natural science approach, as identified by, for example, Guba and Lincoln (1994) and Thorpe (1994). Furthermore, Guba and Lincoln claim that:
… the notion that findings are created through the interaction of inquirer and phenomenon (which in the social sciences, is usually people) is often a more plausible description of the inquiry process than is the notion that findings are discovered through objective observation as they really are. (1994:107)
The debate is also discussed by Shipman (1988). He claims that research using the natural scientific approach has highlighted important issues and deficits in public policy, and quotes research into the education system which revealed the poverty of opportunity for children from poor backgrounds and led to some real changes in policy. Furthermore, the research was seen to carry weight because it used the methods of natural science, which enjoyed a powerful influence in society. However, he notes the danger of reducing social research to statistical relationships between variables and the related difficulty of determinism. He also identifies a further problem where ‘the models were deified’, given a ‘concrete, almost god-like nature’ (1988:24). In other words, socially constructed concepts, such as class, were described as having their own aims and defences as if they were real entities.
On the other hand, Guba and Lincoln (1994) and Hammersley (1989) acknowledge that adopting the interpretivist approach (or, as we have seen, critical theory) makes evaluating different interpretations of the world problematic, and it becomes difficult to apply the results beyond the immediate research setting. Guba and Lincoln’s response to this is to propose sets of criteria for the judgement of ‘trustworthiness’ (credibility) or ‘authenticity’ (fairness) of the results of research (1994:114). However, all these authors, and Schwandt (1994), accept that this issue is difficult for interpretivists. Trinder (1996) acknowledges much the same dilemma for critical theorists.
Combining research approaches: the way forward?
One possible way forward is ‘pluralistic evaluation’, in which multiple methods are used to address the links between different data sources and the interests of particular groups in the situation (Smith and Cantley, 1988). When combining methods, the different theoretical underpinnings, the links between methods and particular types of research problem, and any contradictions in the findings produced should be made explicit (Brannen, 1992:16-17). Smith and Cantley put forward some of the criticisms of the natural science approach made by interpretivists and participative researchers. For example, they question the possibility of an objective or neutral approach to evaluation research. However, they do not dismiss the possibility of quantification.
When we were able to quantify precisely, we did so. But a qualitative account which identifies important factors for measurement is a necessary prerequisite to any attempt at the quantification of variables… (1988:126)
Certainly Smith and Cantley saw their work as an ethnography and felt that their aim was to understand, within a context of power relations, a dynamic situation where policy was being implemented and where the setting was changing all the time. Thus they felt able to use locally collected statistics in addition to in-depth methods. Nocon and Qureshi (1996) make essentially the same point, describing how qualitative approaches can play a large role in determining the dimensions for quantification of outcomes in community care. The point is that both groups of methods were considered equally important.
A central question, if we are to combine methods in this way, is the link between the various epistemological positions, the actual methods used in gathering data and how these data are used to answer specific research questions. Two possibilities have been proposed. Firstly, there is the view that epistemological position determines choice of methods. Secondly, there is the view that the researcher chooses what he or she sees as the best tools for the job, regardless of the overarching epistemology (Bryman, 1998). A possible resolution to this dilemma is the idea that different research strategies may produce different levels of explanation. In this way it is possible that the underlying tensions between research strategies may be acknowledged, whilst retaining the possibility of combining methods and approaches.
Those proposing the positivist approach have a clear answer to this question, as has been shown earlier. Their view is that using large samples, control groups and strictly defined, measurable aspects of experience is the only way of providing strong evidence about outcomes or any other aspect of human life. Other methods are subservient. Interpretivist and critical theorist/participative researchers also tend to stick to a particular type of method, namely in-depth techniques, where large amounts of data are obtained from a small number of people, and the results characterised as various levels of understanding (verstehen) or in terms of transforming social structure and redressing power imbalances.
Brannen (1992) alludes to the idea that there are different levels of explanation which may be served by the different approaches to research. Research focusing on the macro world may need to involve the use of statistical and quantitative methods, and it is here that the adoption of some of the approaches of the physical sciences may be valuable. Certainly, as Shipman (1988) has pointed out, such research has proved fruitful (without conceding its superiority). Within this approach research findings apply to large groups of individuals, in terms of tightly defined, abstract concepts or variables. As Layder notes:
As we move up towards the macro or institutional elements of the setting the primacy of qualitative analysis of forms of interaction become less and less important (1993:115)
Taking a more interpretivist slant would mean illuminating micro interactions or subjective experience. In this way the research questions addressed by researchers would be different. However, it is important to remember that the macro-micro distinction is socially constructed, and that macro features are in some way connected to micro interactions and subjective experience. The nature of this connection is the crux of the problem, one to which there is no hard-and-fast solution. However, some light may be shed on the connections between the micro and macro elements in a particular setting by combining the results of different research strategies, possibly those nominally addressing different research questions (Layder, 1993; Brannen, 1992).
To summarise then, it is in the level of explanation required (the focus of the original research problem) and the interpretation of the different forms of data that the differences in fundamental beliefs about the world will be addressed, rather than in the choice of particular type of method or overall approach to research.
To illustrate this idea of combining research strategies I will describe an investigation of the impacts of a new way of supporting family decision making in child protection, Family Group Conferences (FGCs). Due to commence early in 1999, a quasi-experimental study (Lupton and Brown, 1998) will examine differences in respect of certain tightly defined outcome measures between cases randomly allocated to FGC and traditional child protection interventions. Families will be allocated blindly to either traditional child protection conferences or FGC. The researchers will take a series of measures at the time of the intervention and then nine months later. Differences in the changes noted between the two groups will then be ascribed to the different interventions. Thus this is to take a macro slice, illuminating the effect of the different approaches on the groups rather than on the individual families themselves.
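Purely as an illustration of the logic of this design, the allocate-measure-compare structure might be sketched as follows. Every number and name here is invented for the sketch; none of it reflects the actual study's measures, sample size or data.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical referrals; 'baseline' is an invented outcome score
# (e.g. a well-being scale), not data from the actual study.
referrals = [{"id": i, "baseline": random.gauss(50, 10)} for i in range(40)]

# Random allocation to FGC or a traditional child protection conference.
for case in referrals:
    case["group"] = random.choice(["FGC", "traditional"])

# Follow-up measure nine months later (again invented: FGC cases are
# arbitrarily given a slightly larger average shift for illustration).
for case in referrals:
    lift = 5.0 if case["group"] == "FGC" else 2.0
    case["follow_up"] = case["baseline"] + random.gauss(lift, 3.0)

def mean_change(group):
    """Mean baseline-to-follow-up change within one allocation group."""
    return statistics.mean(c["follow_up"] - c["baseline"]
                           for c in referrals if c["group"] == group)

# The design ascribes this difference in mean change to the intervention.
effect = mean_change("FGC") - mean_change("traditional")
```

The sketch makes the macro character of the design visible: everything of interest is a group-level aggregate, and the individual cases enter only as rows to be averaged.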
On the other hand, I am engaged in a study which aims to identify, from the perspective of the family members themselves, the extent to which, and the ways in which, the FGC has had an impact on their lives and the meanings they give to the various changes occurring in their situation over an 18 month period. My approach involves in-depth interviews with family members from a maximum of ten families and unstructured ‘diary’ updates. Family members are telling me the story of the background to the FGC, the experience of the conference and the changes that take place afterwards, and are being asked to make any links they can between the interventions and those changes. The idea is to build up a theoretical understanding of the impact of the FGC on the situations. Essentially I am taking an interpretivist approach, seeing my role as interpreting the family members’ experiences. However, I do expect to abstract from the data to produce a theoretical account of the impact of the FGC rather than merely present the families’ experiences. My role in the process is important and will therefore be explicitly acknowledged. The eventual aim is to present the results of the two projects as different slices of the same picture, the subjective experience combined with the differences identified in the randomly allocated groups. This involves research which has different philosophical underpinnings. Combining the results of the two projects will therefore illuminate the micro and macro aspects of the outcomes of FGCs, and the links between these different domains. Without the in-depth work, the findings of the quasi-experiment would have less validity and would be divorced from the subjective experience of family members. Unless it is accepted that the results of the more in-depth research can be generalised to this extent, there is the danger of a thorough-going relativism, which renders the research pointless (Williams, 1998).
Clearly there will be some sort of link between an intervention (in child welfare or other areas of social care) and salient features of the subsequent situation. The question is how to establish the nature of this relationship, the meanings of the subjective experience and the extent to which regularities can be established across numbers of individual cases.
What I have attempted to argue in this article is that the most powerful and valid research will utilise a wide package of methods and approaches to obtain both micro and macro pictures of the outcomes of child welfare interventions from the perspective of different stakeholders and to examine the links between these levels of explanation. This will inevitably mean combining the results of research strategies which may be linked to different fundamental beliefs about the nature of the world and the ways in which we can know about it.
However, a certain level of humility about the limits of research should be retained, although it is important not to let this paralyse the researcher. Research in the field of human studies can never hope to approach the degree of certainty (however that may be characterised) obtained in mathematical reasoning, or even the accuracy with which we can establish causal relationships in the movement of billiard balls. These issues may seem obscure, but I would argue that it is important to be aware of the complexities of conceptualising the impact of child welfare interventions when considering the relevance of research to the development of policy and practice. It is in this way that these issues have direct relevance to the response needed to such government initiatives as Quality Protects (DoH, 1998). Without thinking through the possible limitations of research, being aware that there are always multiple perspectives to be considered and acknowledging that there are different levels of explanation, it is not possible to use research in a realistic fashion.
References

Annells, M. (1996) ‘Hermeneutic Phenomenology: Philosophical Perspectives and Current Use in Nursing Research’, Journal of Advanced Nursing, Vol 23:705-713.
Anscombe, G.E.M. (1975) ‘Causality and Determination’ in Sosa, E. (Ed) (1975) Causation and Conditionals. Oxford: Oxford University Press.
Barnes, M. (1993) ‘Introducing New Stakeholders – User and Researcher Interests in Evaluative Research: A Discussion of Methods Used to Evaluate the Birmingham Community Care Special Action Project’, Policy and Politics, No. 1:47-58.
Behi, R. and Nolan, M. (1996) ‘Causality and Control: Key to the Experiment’, British Journal of Nursing, Vol 5(4):53-255.
Brannen, J. (1992) ‘Combining Qualitative and Quantitative Approaches: an Overview’, in Brannen, J. (Ed) (1992) Mixing Methods: Qualitative and Quantitative Research. Aldershot: Avebury Ashgate Publishing Limited.
Bryman, A. (1998) ‘Quantitative and Qualitative Research Strategies’, in May, T. and Williams, M. (Eds) (1998) Knowing the Social World. Buckingham: Open University Press.
Denzin, N. K. and Lincoln, Y. S. (1994) ‘Introduction: Entering the Field of Qualitative Research’ in Denzin, N. K. and Lincoln, Y. S. (Eds) (1994) Handbook of Qualitative Research. London: Sage Publications Ltd.
Department of Health (1998) Quality Protects. London: DoH, Local Authority Circular (98) 28.
Department of Health (1994) A Wider Strategy for Research and Development Relating to Personal Social Services. London: HMSO.
Department of Health (1991) Patterns and Outcomes in Child Placement: Messages from Current Research and Their Implications. London: HMSO.
Fay, B. (1993) ‘The Elements of Critical Social Science’ in Hammersley, M. (Ed) (1993) Social Research: Philosophy, Politics and Practice. London: Sage Publications.
Frost, N. (1989) The Politics of Child Welfare: Inequality, Power and Change. London: Harvester Wheatsheaf.
Gibbons, J. (1997) ‘Relating Outcomes to Objectives in Child Protection Policy’ in Parton, N. (Ed) (1997) Child Protection and Family Support: Tensions, Contradictions and Possibilities. London: Routledge.
Gibbons, J., Conroy, S. and Bell, C. (1995) Operating the Child Protection System. London: HMSO.
Greenwood, J.D. (1988) ‘Agency, Causality, and Meaning’, Journal for the Theory of Social Behaviour, Vol 18(1):95-115.
Guba, E.G. (1990) ‘The Alternative Paradigm Dialog’ in Guba, E.G. (Ed) (1990) The Paradigm Dialog. London: Sage Publications.
Guba, E.G. and Lincoln, Y.S. (1994) ‘Competing Paradigms in Qualitative Research’ in Denzin, N.K. and Lincoln, Y.S. (Eds) (1994) Handbook of Qualitative Research. London: Sage Publications.
Hammersley, M. (1989) The Dilemma of Qualitative Method: Herbert Blumer and the Chicago Tradition. London: Routledge.
Hill, M., Triseliotis, J., Borland, M. and Lambert, L. (1996) ‘Outcomes of Social Work Interventions with Young People’ in Hill, M. and Aldgate, J. (Eds) (1996) Child Welfare Services: Developments in Law, Policy, Practice and Research. London: Jessica Kingsley.
Hudson, J., Galaway, B., Morris, A. and Maxwell, G. (1996) ‘Introduction’ in Hudson, J., Morris, A., Maxwell, G. and Galaway, B. (Eds) (1996) Family Group Conferences: Perspectives on Policy and Practice. Annandale, NSW, Australia: The Federation Press/Criminal Justice Press.
Johnson, T., Dandeker, C. and Ashworth, C. (1984) The Structure of Social Theory. Basingstoke: Macmillan Education.
Kincheloe, J.L. and McLaren, P.L. (1994) ‘Rethinking Critical Theory and Qualitative Research’ in Denzin, N.K. and Lincoln, Y.S. (Eds) (1994) Handbook of Qualitative Research. London: Sage Publications.
Layder, D. (1993) New Strategies in Social Research. Oxford: Blackwell.
Lupton, C. and Brown, L. (1998) Do Family Group Conferences Produce Better Outcomes for Children than Traditional Child Protection Conferences? Proposal for Project Grant to Nuffield Institute and the Centre for Evidence-Based Social Services.
Lupton, C. and Stevens, M. (1997) Family Outcomes: Following Through on Family Group Conferences. Portsmouth: University of Portsmouth/ Hampshire County Council, SSRIU Report No 34.
Macdonald, G. and Sheldon, B. with Gillespie, J. (1992) ‘Contemporary Studies of the Effectiveness of Social Work’, British Journal of Social Work, Vol 22(6):615-643.
Newman, T. and Roberts, H. (1997) ‘Assessing Social Work Effectiveness in Child Care Practice: The Contribution of Randomized Controlled Trials’, Child Care, Health and Development, Vol 23(4):287-296.
Nocon, A. and Qureshi, H. (1996) Outcomes of Community Care for Users and Carers. Buckingham: Open University Press.
Parker, R., Ward, H., Jackson, S., Aldgate, J. and Wedge, P. (1991) Assessing Outcomes in Child Care. London: HMSO.
Parton, N. (ed.) (1997) Child Protection and Family Support: Tensions, Contradictions and Possibilities. London: Routledge.
Parton, N. (1996) ‘Child Protection, Family Support and Social Work: A Critical Appraisal of the Department of Health Research Studies in Child Protection’, Child and Family Social Work, Vol 1:3-11.
Parton, N. (1991) Governing the Family: Child Care, Child Protection and the State. Basingstoke: Macmillan Education.
Schwandt, T.A. (1994) ‘Constructivist, Interpretivist Approaches to Human Inquiry’ in Denzin, N.K. and Lincoln, Y.S. (Eds) (1994) Handbook of Qualitative Research. London: Sage Publications.
Sheldon, B. (1994) ‘Social Work Effectiveness Research: Implications for Probation and Juvenile Justice Services’, The Howard Journal, Vol 33(3), Aug 94:218-232.
Sheldon, B. (1988) ‘Group Controlled Experiments in the Evaluation of Social Work Services’ in Lishman, J. (Ed.) (1988) Research Highlights in Social Work 8: Evaluation. Aberdeen: University of Aberdeen Department of Social Work.
Shipman, M. (1988) The Limitations of Social Research. Harlow, Essex: Longman Group.
Smith, G. and Cantley, C. (1988) ‘Pluralistic Evaluation’ in Lishman, J. (Ed.) (1988) Research Highlights in Social Work 8: Evaluation. Aberdeen: University of Aberdeen Department of Social Work.
Sosa, E. (Ed) (1975) Causation and Conditionals. Oxford: Oxford University Press.
Taylor, C. (1971) ‘Interpretation and the Sciences of Man’, The Review of Metaphysics, Vol 25(1); reprinted in Taylor, C. (1985) Philosophy and the Human Sciences. Cambridge: Cambridge University Press.
Thorpe, D. (1994) Evaluating Child Protection. Buckingham and Philadelphia: Open University Press.
Trinder, L. (1996) ‘Social Work Research: the state of the art (or science)’, Child and Family Social Work Vol. 1:233-242.
Williams, M. (1998) ‘The Social World as Knowable’ in May, T. and Williams, M. (Eds) (1998) Knowing the Social World. Buckingham: Open University Press.