Written by Calvin Lakhan, Ph.D, Co-Investigator: “The Waste Wiki” – Faculty of Environmental Studies at York University
Preface: I spent an abnormally long time trying to think of a title for this article – the topic of Extended Producer Responsibility is contentious and complex, and requires nuance… eventually I gave up and figured I would just tell it as it is.
I recently had the pleasure of presenting at the Connecticut Recyclers Coalition, joined by Resa Dimino of RRS and Jim Gordon of the Ontario Association of Municipalities, where we discussed the merits and pitfalls of EPR legislation for packaging waste.
While there are differences of opinion with respect to the efficacy of EPR legislation for packaging waste, I found the conversation enlightening – this type of dialogue is necessary to better understand the impacts of EPR legislation, and whether it is a path worth going down.
During one of the presentations, two slides in particular caught my attention (Source: Resa Dimino, RRS/Signal Fire):
Understanding your baseline
The data in figures 1 and 2 come from a study conducted by RRS on behalf of the state of Oregon, which looked at the relationship between EPR legislation for printed paper and packaging and overall recycling rates. The study found that jurisdictions that implement producer responsibility have, on average, higher recycling rates than those that don’t, and that recycling performance increased following the implementation of EPR. British Columbia was touted as particularly successful, enjoying recycling rates well in excess of 70% for residential packaging waste.
What is the conclusion we can draw from this finding? While most would be inclined to say that EPR leads to higher recycling rates (as was the conclusion of the report), the reality is that nothing can be inferred based on this information alone – context is critical. While I could write at length about the issues associated with this, for the purposes of brevity, contextual factors that need to be considered include:
· What was the recycling rate of each jurisdiction prior to the implementation of EPR?
· Were there any programmatic or infrastructural changes that accompanied the implementation of EPR?
· Available infrastructure of waste management systems both pre/post implementation of EPR, and when comparing EPR and non-EPR jurisdictions
· Demographics (age, income, education, ethnicity etc.) of EPR and non-EPR jurisdictions, and whether demographics have changed over time
· Relative maturity of waste management systems when comparing EPR and non-EPR jurisdictions
· Exogenous factors, including macro-economic conditions, commodity pricing for recyclables, national/international legislation etc.
In order to specifically isolate the effect of EPR legislation on jurisdictional recycling rates, you would have to control for the above factors. While I will avoid delving into the statistical nuances of identifying dependent and independent variables, correcting for collinearity/endogeneity etc., the situation is (as best as I can explain it) the following: not everyone begins from the same starting point – some jurisdictions may have well-developed collection/processing infrastructure in place, while others may have depot systems only. Demography is also an often neglected but significant predictor of recycling participation – as an example, a jurisdiction characterized by a higher density of multi-residential buildings and a larger immigrant population is going to have markedly different recycling performance than one dominated by single-family homes. The same can be said of areas characterized by rural communities and lower population densities.
Comparing any one jurisdiction with another *requires* you to control for these differences in order for meaningful inferences to be made. Referring specifically to the EPR study, I would argue that EPR is not responsible for higher recycling rates. Rather, jurisdictions that are at a point where they have implemented EPR are likely to already have more of the conditions that lead to a successful recycling system, i.e. high levels of access/service coverage, an informed population, robust collection networks etc.
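This confounding problem can be sketched with a toy simulation. In the hypothetical Python example below, every variable and number is invented for illustration: “infrastructure” drives both EPR adoption and recycling rates, so a naive difference in means suggests a large EPR effect even though the simulated true effect is zero, while a regression that controls for infrastructure recovers the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical synthetic data: infrastructure quality drives BOTH
# EPR adoption and recycling rates (a classic confounder).
infrastructure = rng.normal(0, 1, n)
epr = (infrastructure + rng.normal(0, 1, n) > 0).astype(float)  # mature systems adopt EPR
recycling = 50 + 8 * infrastructure + 0 * epr + rng.normal(0, 2, n)  # true EPR effect is ZERO

# Naive comparison: difference in mean recycling rate, EPR vs non-EPR
naive_gap = recycling[epr == 1].mean() - recycling[epr == 0].mean()

# Controlled comparison: OLS with infrastructure included as a covariate
X = np.column_stack([np.ones(n), epr, infrastructure])
beta, *_ = np.linalg.lstsq(X, recycling, rcond=None)
controlled_effect = beta[1]  # coefficient on EPR after controlling

print(f"naive gap: {naive_gap:.2f}, controlled effect: {controlled_effect:.2f}")
```

The naive gap comes out at several percentage points while the controlled estimate sits near zero – the same data, two opposite policy conclusions, depending entirely on whether the baseline conditions are accounted for.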
The relationship between EPR and price of consumer goods
Figure 2 draws on an RRS study that undertook a comprehensive examination of jurisdictions across Canada (both with and without EPR legislation) and compared the prices of various packaged goods to determine whether EPR had an impact on product pricing. The hypothesis was that if EPR were to have an impact on price, then jurisdictions with EPR policy should exhibit higher prices than those without.
As shown in Figure 2, RRS did not observe any statistically significant differences between product prices, and concluded that EPR did not have a discernible effect on the price of packaged goods. This finding was embraced by supporters of EPR policy, who often have to contend with claims that EPR will adversely affect consumers.
Unfortunately, this finding rests on a faulty premise, as there were numerous methodological deficiencies that were not addressed in the study. This ultimately led to conclusions that could not be supported by the data – this isn’t a question of opinion: given the way the study was designed, it is not possible for it to make any statements regarding the effect of EPR policy on packaging prices. Comparing costs across jurisdictions (even for like products and retailers) is not likely to yield any meaningful inferences with respect to the impact of EPR policies.
There are literally hundreds of variables that affect the price of goods across localities (even for the same product and retailer). Demographics, infrastructure, relative purchasing power, proximity to markets, density of competing retailers etc. all affect price. In order for the authors of that study to make the statements they did, they would have to control for all of these factors using statistical techniques to specifically isolate the effect of EPR on packaging prices. Given that many of these explanatory variables are collinear (interrelated), they would also need to establish controls for interdependency among explanatory variables.
It should be noted that the aforementioned statistical controls were outside the scope of the RRS study. They were asked to compare the prices of packaging products across jurisdictions with and without EPR policy and that’s exactly what they did. The issue has less to do with what the authors of the study did, and more to do with the question being asked. The question of “what are we trying to compare and why?” is what is of paramount importance, largely because it is used to inform the next question “what data do I need to answer that question?”.
Too often, the latter part of that question is neglected, which is problematic because it can lead to faulty interpretations and bad policy. We have all heard the expression popularized by Mark Twain – “There are lies, damned lies and statistics” – but this is one of the rare instances where there is a right and wrong answer.
Using the example of the RRS study, there is quite literally no inference you can draw based on the analysis that was conducted (at least with respect to the impact EPR has on packaging prices). What the report showed was that prices for packaged goods vary based on retailer and by location, but not why they differ. Beyond making an educated guess, we are left no more informed about EPR and packaged goods than when we started. While statistics is often used to obfuscate the truth (or some version of it), this isn’t one of those times.
Understanding system maturity
The maturity of a waste management system is rarely discussed when evaluating the effectiveness of waste management policy. As an extension of a point I made earlier, where you are starting from radically affects the potential efficacy of a given program or policy. During a program’s onset, initiatives such as promotion and education, service expansion and increased accessibility are likely to yield significant improvements in the overall recycling rate. However, as a program matures, the impact of these initiatives diminishes – not because the policies no longer work, but because they have already “captured” the people who are likely to participate in recycling. As a system matures, so does the difficulty of diverting the “marginal tonne”. Initial program success is characterized by the recovery of readily recyclable materials (newsprint, OCC/OBB, aluminum etc.) among groups who face low barriers to participation (single-family homes with curbside access). Once a program reaches a “stasis” point, going over and above that particular level of recycling requires increasingly more effort (expressed in terms of time, cost, resources etc.).
Future increases or decreases in diversion rates are unlikely to differ significantly from this stasis point, barring major programmatic changes or systemic disruption (i.e. the financial crisis of 2009/2010, China’s National Sword policy etc.). As an example, Ontario’s “steady state” recycling rate for the Blue Box program is between 60% and 67%. By comparison, British Columbia’s “steady state” appears to be between 75% and 80%. Differences in the steady-state point across jurisdictions are often a function of endemic factors that are specific to a particular area (demography, infrastructural access etc.) and cannot be readily replicated by other cities/provinces/states.
Returning to the discussion of cause and effect, policy makers will often erroneously conflate the effectiveness of a particular policy in one area (at a given point in time) and assume that to be true for all other areas. Recycling promotion and education is a good example of this – the effectiveness of P&E at a program’s onset (where awareness of and attitudes towards recycling are low) is demonstrable. It is a critical tool that yields positive results… at first. As a system matures, the effectiveness of recycling P&E diminishes (my first published paper was on this topic: https://doi.org/10.1016/j.resconrec.2014.07.006).
While the study linked above goes into a more detailed discussion as to why, the simplest way to describe this phenomenon is that at a certain point, the people targeted by P&E will already be recycling. Appeals to environmental altruism, sustainability and collective responsibility will resonate with certain households, who will then make recycling a habitual behavior (hence the uptick in recycling performance at a program’s onset). However, that same messaging over time does very little to encourage recycling among households who either don’t care about the importance of recycling, or more likely, face infrastructural, knowledge-based or cultural barriers to access.
There is a temporal dimension to understanding cause and effect in waste that obscures the relationship between action and outcome. Without taking the time to consider system maturity or a given jurisdiction’s “starting” and “stasis” points, it is incredibly easy to arrive at the wrong conclusions.
The importance of study design
The expression “an ounce of prevention is worth a pound of cure” seems fitting when discussing cause and effect with respect to waste. So many of the issues and erroneous conclusions that stem from confusing correlation with causation can be mitigated through appropriate study design.
Anecdotally, I have encountered several situations where I was unable to answer the question I wanted to, or alternatively, arrived at the wrong conclusion because I didn’t take the time to lay out the study properly. This was particularly true of behavioral research earlier in my career, where my failure to establish controls among study participants led to several months of data collection being rendered useless. While extraordinarily frustrating at the time, it was a valuable lesson that you cannot rush when trying to understand the relationships between outcomes (i.e. recycling rates) and causes (i.e. P&E policy).
While this article was written with waste in mind, it is important to understand that the confusion between correlation and causation is not unique to the sector. A lack of technical proficiency, combined with the sheer difficulty of collecting the right data, has (and will continue to have) a negative impact on policy and decision making. In many ways, it’s the desire to understand cause and effect that ultimately leads to the wrong conclusion – we desperately seek to understand the relationship between two things, and in doing so, risk making inferences and connections that aren’t really there. That is why, specific to waste, it is so imperative to:
1) Understand your baseline – where am I starting from, what data do I have, what are the characteristics of my community? etc.
2) Measure maturity – Is my recycling system mature, or still in its infancy? What percentage of households have service access? What percentage of households are participating in recycling initiatives? Is there any form of supporting legislation to encourage waste diversion?
3) Understand what you are trying to compare and why – What do I want to know? What scenarios do I want to test? What data currently exists? What data will I need to answer these questions, and how do I go about collecting it?
4) Establish controls – What is my dependent variable? What is my basis of comparison? What variables do I need to control for? How do I know which variables are independent? How do I measure the strength of the relationship between my dependent and independent variables?
5) Correct for collinearity – Are any of my variables related to one another, and if so, how do I correct for it? How do I measure the strength of the relationship among my independent variables that are collinear?
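As a sketch of item 5, collinearity among explanatory variables can be screened with a variance inflation factor (VIF), computed here from scratch with NumPy on invented data. All variables (income, education, population density) and coefficients are hypothetical; the point is only that a VIF well above roughly 5–10 flags a variable that is largely predictable from the others.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical demographic predictors: income and education are
# deliberately constructed to be collinear; density is independent.
income = rng.normal(0, 1, n)
education = 0.9 * income + rng.normal(0, 0.3, n)  # strongly tied to income
pop_density = rng.normal(0, 1, n)

def vif(X, j):
    """Variance inflation factor of column j: regress column j on the
    remaining columns (plus an intercept) and return 1 / (1 - R^2)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

X = np.column_stack([income, education, pop_density])
print([round(vif(X, j), 1) for j in range(X.shape[1])])
```

In this toy setup, income and education each show a VIF near 10 while population density sits near 1 – a signal that the first two carry overlapping information and their individual effects cannot be cleanly separated without further treatment (dropping one, combining them, or using a method robust to collinearity).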
While the above is not an exhaustive list, the intent is to better convey the distinction between correlation and causation, and to provide a framework that touches on key elements for consideration. Despite the difficulties of data collection and analysis, the risks associated with poor study design and the resulting policy/legislation can be catastrophic.
About the Author
Calvin Lakhan, Ph.D, is currently co-investigator of the “Waste Wiki” project at York University (with Dr. Mark Winfield), a research project devoted to advancing understanding of waste management research and policy in Canada. He holds a Ph.D from the University of Waterloo/Wilfrid Laurier University joint Geography program, and degrees in economics (BA) and environmental economics (MES) from York University. His research interests and expertise center on evaluating the efficacy of municipal recycling initiatives and identifying determinants of consumer recycling behavior.