What the Lancet study tells us about health isn’t as interesting (to us) as what it tells us about transparency
Jun 30, 2010
A few weeks ago, we ran a blog post about the Lancet’s April 17 study of domestic government spending on health and aid fungibility. That post, and others, raised a series of questions about how we should interpret the study’s findings. But these commentaries have ignored an equally important issue raised by the article, one much closer to our core concerns here at IBP: the transparency of public expenditure.
The core of the Lancet analysis rests on estimating how much governments spend as “agent” (techno-speak for the total they manage, combining money they raise themselves with money given to them by donors or other sources) and then subtracting the amount not raised by governments themselves. What remains is an estimate of how much governments spend as “source” (just the money they raise on their own from domestic sources like taxes).
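The arithmetic behind this approach is simple subtraction, which a minimal sketch can make concrete. The figures below are invented for illustration only; they are not taken from the Lancet study.

```python
# Hypothetical illustration of the agent-vs-source distinction.
# Spending "as agent" is everything a government manages, including donor
# money; subtracting externally financed spending leaves spending "as source"
# (the portion raised domestically, e.g. from taxes).

def spending_as_source(spending_as_agent, external_resources):
    """Estimate domestically financed health spending by subtraction."""
    return spending_as_agent - external_resources

# Suppose a government manages $500m of health spending, $120m of it donor money:
print(spending_as_source(500, 120))  # 380 -> raised from domestic sources
```

The point of the sketch is how indirect this is: the domestically financed figure is never observed, only inferred from two other numbers.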
When you think about it, this is a pretty indirect way to determine how much governments spend on health from their own resources. You might think that it would be easier to just ask governments for the numbers and skip the calculations. The study authors do not speak to this issue in their article, but the fact that they were forced to take such a circuitous route to answering this question tells us something important: governments do not report standardized information on how much they spend on health from their own resources.
Is this a problem? I think so, but it is worth exploring why. Suppose you are a single mother trying to provide for your family, and you get money from your parents, a central government welfare scheme, a local charity, and from working part-time at the library. Do you care how much comes from each source, or just whether the total is enough to pay all of your bills? You probably care mostly about the total. However, if you want to know how big the total is, you need to know how big each component is. And to know how much you need to work at the library, you need to know how much you are receiving from other sources. So, while the main thing you care about is the total, you should also care about the component parts. Returning to the case of governments, this means they, too, should care about how much they are spending on health as “source,” not just as agent. And if they care about this, we can also expect them to make that information publicly available. If they either don’t collect this information or don’t share it, that seems like a problem that needs to be addressed. (As an aside, this example clarifies why governments that have their own target for health spending and then begin to receive a certain amount in aid funding might want to shift resources to other priorities, as the article finds, but also why this is not necessarily such a terrible thing, as we argued in the previous blog. If you are getting more help from your parents, you can work fewer hours at the library and spend your time raising your children…or building your skill set.)
Now, it is true that collecting this information may be complex. Health spending is often fragmented across different ministries and agencies of government, some of which may be autonomous. Donors have further complicated things by demanding that their funding be managed by different parts of government and accounted for in different ways. Tracking all of this spending may require governments to engage in a special exercise, like the National Health Accounts (NHA) method, which goes beyond simply opening one’s books. Conducting an exercise like the NHA is neither cheap nor simple, and may require support from donors.
However difficult it may be, it does seem important to know at least how much governments spend in total on health. But even here, if we, like the study authors, fall back on collecting information on total spending as agent, we still find a dearth of public information. The Lancet authors use two sources of data on government-as-agent spending: the WHO and the IMF. Both datasets contain a fair amount of missing data, which the study authors ultimately have to estimate. Over the twelve years of the study, 35 percent of the observations are missing from the WHO dataset. That means that 35 percent of the time, governments are not providing data even on their total spending on health. In the IMF dataset, the figure is 25 percent. So even if we lower the bar from disaggregated spending figures to aggregate ones (all spending managed by governments, whether raised domestically or provided by donors), a lot of information is missing.
This is definitely a problem. To go back to our analogy of the single mother, there is no way to figure out if you have enough money to pay the bills if you don’t know… how much money you have to pay the bills. And there is no way for citizens to know how much their governments are spending on health if governments don’t report this information.
There is one more important issue raised by the Lancet article only obliquely, but that deserves further discussion. Explaining the sources they relied upon, the authors mention that they did get data from “national sources” for 22 countries. But they actually discarded these figures. Their explanation for discarding this data raises additional questions about how we collect such data at the international level. According to the authors, the data sourced from individual countries “were not as complete as those from WHO and IMF.” Now, consider the possible explanations for this. Since the data that WHO and IMF report originally come from countries themselves, there are only a couple of possibilities. One possibility is that country governments collect high-quality data and pass it on to international bodies, but then destroy or misplace their own copies while the international bodies maintain theirs. An alternative explanation is that these countries are providing low-quality data to the international agencies, which in turn are manipulating the data to improve its quality. This can be done by various methods, all of which involve estimating missing data, usually from the data you do have. The quality of these estimations depends heavily on the data you do have, and on the assumptions you use to make them.
All of this is pretty depressing. In the first case, governments are collecting data, but just being sloppy about maintaining it. If the data is worth collecting, it should be worth maintaining. In the second case, the quality of the data is low and we are being forced to rely on estimations, estimations that should inspire limited confidence at best. Indeed, although this is not the intent of the Lancet article, the authors make this point somewhat indirectly, when they attack the methods the WHO has used to make its datasets whole:
“The user could not distinguish between data reported by countries and the imputations in the public dataset….WHO’s imputation methods are not standardized, and the imputations were often based on the assumption that the ratio of government health spending to general government spending was constant with time…use of these ad-hoc and non-standardised imputations would bias the results.”
Note the rather counter-intuitive estimating assumption used by WHO: in a period when almost everyone thinks spending on health rose relative to other priorities, health spending is assumed to have risen only proportionately to spending on other priorities. This alone is sufficient grounds for doubting the usefulness of the estimates, without knowing anything about underlying data quality.
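A small worked example can show how this constant-ratio assumption biases the imputations. All numbers here are hypothetical, chosen purely to illustrate the mechanism the authors criticize, not drawn from any WHO dataset.

```python
# Hypothetical sketch of constant-ratio imputation: fill in a missing year of
# health spending by holding the health share of total government spending
# fixed at its last observed value.

def impute_constant_ratio(last_health, last_total, new_total):
    """Impute health spending assuming its budget share stays constant."""
    return new_total * (last_health / last_total)

# Last observed year: health spending was 100 out of a 1,000 total (10% share).
# The next year the total budget grows to 1,200, so the imputation gives 120.
imputed = impute_constant_ratio(100, 1000, 1200)

# But if health spending actually outpaced the rest of the budget -- say it
# rose to 150, a 12.5% share -- the imputation understates it by 30.
actual = 150
print(imputed, actual)
```

If the prevailing trend is that health spending is growing faster than the rest of the budget, every missing observation imputed this way is biased downward in the same direction.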
Lurking beneath a contentious article about aid fungibility, then, are some pretty stark findings on the availability of good data on public expenditure. This ought to make us howl at least as much as the possibility that governments don’t spend aid money the way donors would like them to.