Reports from the Modeling Team – 2020
The modeling team would like to thank the Cornell community for the sincerity and energy put into providing comments on our work. This page provides links to our reports and updates.
Modeling Update for Gateway Testing (August 5)
We provide this updated modeling analysis of gateway testing, prompted by three recent developments: COVID-19 prevalence has risen in parts of the U.S. since we wrote our report in June; some Cornell students may lack test access in their home location; and the recently instituted requirement that people coming to New York State from certain high-prevalence areas must self-quarantine upon arrival regardless of test results.
Addendum (July 17)
This provides an in-depth analysis of some of the questions posed in response to the June 15 Report. In particular, the addendum studies (a) alternative methodologies for estimating contacts / day and transmission rate, (b) the effectiveness of increased test frequency in mitigating the effect of higher-than-modeled contacts / day, (c) the effect of non-compliance with testing, and (d) the effect of offering testing to virtual instruction students.
Original Report (June 15)
Comments on the June 15 report (and responses) are given below. Comments are indexed for easy referral.
Issues (last updated July 3):
1. Contacts per day
2. Higher-than-anticipated transmission rate
3. Effectiveness of testing
4. Testing in the virtual instruction setting + testing compliance in the residential setting
5. Off-campus students are tested in the residential scenario
6. Number of students returning in the virtual instruction scenario
7. Modeling fatalities
8. Racial and ethnic disparities
9. Pressure from university leadership
10. Effect of raising transmission rate equally in virtual and residential instruction settings
11. Capacity in local hospitals
12. Framing of uncertainty
13. Crediting of experts in disease modeling
14. Impact on Tompkins County
15. Impact of students arriving early because we would start Sep 2
16. Number of cases missed in gateway testing
Note On Data
Data is central to the scientific approach being taken. Two surveys conducted early in the summer had a large impact on the university’s approach to Fall 2020: a student survey addressing residential vs. online instruction and the likelihood of coming to campus/Ithaca for the fall semester, and a faculty survey addressing in-person vs. online instruction and risk assessment of being on campus.
Modeling Team Replies
1. Contacts per day
A number of comments expressed concern about our assumption of 8.3 contacts per day.
First, it is absolutely true that there is uncertainty about the “right” choice for the number of contacts per day. Figure 11 in the report shows how our results vary as this parameter changes. It clearly plays an important role, so the concerns expressed about its value are valid.
Second, our initial modeling views the transmission probability assumed (2.6%) as appropriate for a close contact. While definitions of a close contact vary, the CDC definition is “6 feet or less for 15 minutes or more” (https://www.cdc.gov/coronavirus/2019-ncov/php/public-health-recommendations.html).
While the literature is uncertain about the importance of transmission through interactions with smaller duration or larger distance, the CDC considers their probability of transmission low enough to treat individuals with these kinds of interactions in the same way as those in the general population.
Given that the C-TRO report calls for spacing out lecture halls so that students will be more than 6 feet apart from each other (bottom of p. 9 in the executive summary), sharing a lecture hall does not constitute close contact according to the CDC definition. In response to F28, we have discussed this with Kim Weeden and Ben Cornwell. Our addendum will discuss the extent to which sharing a lecture hall can lead to transmission when people are separated by more than 6 feet, masks are worn, and individuals do not cough, sneeze, sing, yell, spit, or otherwise propel material into the air at high velocity. The most critical aspect of this is the extent to which SARS-CoV-2 is transmitted via aerosols, a point on which the literature is highly uncertain. Regardless, it is likely a good idea to strongly discourage students from yelling in class, and to be vigilant about coughing.
As another datapoint in specific response to questions about Weeden & Cornwell 2020, Samitha Samaranayake (Civil and Environmental Engineering) and PhD student Matthew Zalesak are undertaking a modeling study in collaboration with Weeden, Cornwell, Nathaniel Hupert (Cornell Weill) and modeling team-member David Shmoys where spread due to sharing classrooms is studied under different course scheduling policies. One aspect being modeled is the fact that transmission diminishes as the distance between individuals grows. Our understanding is that including this important biological detail results in a significantly diminished risk from within-classroom spread.
Third, the nominal value of 8.3 contacts per day is meant to reflect an average over all members of the campus community, including faculty, staff and graduate and undergraduate students. We expect some members of the campus community to have a larger number of contacts per day, while others will have smaller values.
Fourth, we especially appreciate the set of references provided by comment F16. We will discuss these and other related references in our addendum.
These references use data from a period before COVID-19 when measures like reduced classroom density, restricted access to dorms and limits on social gathering size were not in place. Moreover, not all forms of contact discussed in the literature necessarily constitute a risk of transmission as high as what we model.
In our reading of Wallinga et al., 2006, the number of contacts per day is lower than the comment’s stated value of “low 20s”. The paper studies a 1986 survey in the Netherlands that asked respondents for “the number of different persons they conversed with during a typical week, excluding household members.” In our reading of Table 1 in that paper, individuals aged 13-19 converse with an average of 8.2 other individuals per day, those aged 20-39 converse with 7.4, those aged 40-59 converse with 6.2, and those over the age of 60 converse with 4.1. Keeping in mind that our 8.3 contacts / day is meant to represent an average across the entire population, including faculty, staff, and students, the Wallinga et al. paper would seem to be roughly consistent after adding in household members. Their estimate would seem to be higher than ours after adjusting for a reduction in contacts due to social distancing.
Also, while interesting, Del Valle et al. 2007 is based on a simulation (using census data, business directory data, and national household transportation survey data) that lacks real-world information about room sizes and occupancies. The authors write, “Our simulation results (Robbins et al., 2006) show that the average degree (average number of contacts per person) is sensitive to variations in size of the sub-location.” We therefore assign less weight to this paper’s estimate of contacts / day than other papers.
Mossong et al. 2008 finds values of 18 for 15-19 year olds and in the range of 12-14 for each of the age groups between 20 and 60. Some of these contacts are less than 5 minutes in duration.
As stated in the comment, Leung et al. 2017 finds a number of contacts per day of 8.1 in Hong Kong, 13.4 in Europe and 18 in a previous survey in Hong Kong.
In Weeden & Cornwell 2020, as discussed above, the type of interaction studied does not necessarily correspond to a substantial risk of transmission.
We also note that the variation in contact rate across age groups in these papers is less dramatic than some may fear. For example, in both Wallinga et al. 2006 and Mossong et al. 2008 the contact rate in university-aged individuals is roughly 30% larger than that in the general population. This would seem to support our approach of taking a population-level R0 of 2.5, inflating it to account for the fact that our population has more young people than most, and then decreasing it by the same amount to account for the fact that we are enforcing social distancing measures.
In summary, we do not see the literature in F16 as providing clear evidence that 8.3 is an underestimate under social distancing conditions. It does, however, support an important point on which we all agree: the right number may be larger, and we should make sure we are ready for this possibility.
2. Responding to higher-than-anticipated transmission rate
We have said previously that we can respond to a higher-than-modeled transmission rate through increased testing and targeted distancing / cleaning / compliance interventions.
We found particularly compelling the concerns about contact rates in residence halls. This concern matches the fact that high attack rates are found among individuals who live together in the same household, with low attack rates through other kinds of interaction. We will focus in particular on a scenario where contacts among those living together on a dorm floor are high, and in the addendum we will study the extent to which testing targeted to those individuals controls the spread of an epidemic.
We similarly worry about contact rates in high-density off-campus housing, especially in fraternities and sororities.
One piece of intuition to keep in mind is that there is a period of roughly 2 days after exposure when new cases are not infectious. If we test frequently enough then infectious cases found will not have been infectious for long. Detecting a positive within 2 days of its becoming infectious followed by contact tracing and testing targeted to that individual’s dorm floor would allow us to isolate any secondary cases before they become infectious and create new tertiary cases. This could additionally be augmented by temporary extra cleaning and social distancing measures targeted to the dorm floor while additional testing is performed. This would dramatically reduce the spread through residence halls. Our understanding is that this strategy of frequent testing within dorms is also being pursued at Boston University. Restricting access so that only those that live on a dorm floor can gain access would complement this approach.
The paragraph above has been clarified to respond to F32. In this clarification we hope it is more clear that this paragraph does account for the fact that infections are modeled as undetectable in PCR following exposure for a random period of mean length 2 days. The above example does not quantify the effect of false negatives — it is intended only to give intuition and not to present a formal analysis. The addendum will provide this quantification.
3. Effectiveness of testing
F9 writes, “We can respond initially by testing more often, as Peter wrote, but that only goes so far: the curves in Fig. 15 (infection rate vs. testing rate, above the nominal 20%/day) are a whole lot shallower than the ones in Fig. 11 in the relevant range (increasing R0 above 2.5 is equivalent to increasing transmission per contact above the nominal 2.6%).”
These curves in Fig. 15 are shallow because, with nominal parameters, testing does a good job of controlling infections before they spread, and most of the remaining infections are due to our assumed rate of infections from the outside community. (In our nominal setting this outside-infection rate is quite significant, perhaps pessimistically so.) Thus, with the nominal parameters, once you get above a certain test frequency there is nothing more for the testing to do. If we increase the transmission rate, however, then we expect testing more frequently to provide significant value. We are studying this effect in the forthcoming addendum.
4. Testing in the virtual instruction setting + testing compliance in the residential setting
We have analyzed both non-compliance in the residential setting and offering testing in the virtual instruction setting. This was done before the June 15 report was published but was not ready in time to be included in that published report. Some of the results in this analysis are available as a slide in the June 24 faculty senate meeting. We will include a full writeup of this analysis in the addendum.
One way to understand the impact of non-compliance in the residential setting is to look at Figure 15 in the full report, which shows results as a function of the percentage of the population tested each day. If non-compliance is distributed uniformly across the population, then failure to comply is equivalent to 100% compliance with less frequent testing. By this reasoning, 71% compliance with 5-day testing in the residential setting is comparable to 100% compliance with 7-day testing, whose results are pictured in Table 14 (1800 infections, 22 hospitalizations).
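The equivalence above is simple arithmetic; here is a minimal sketch of it (ours, not the report's code):

```python
# Effective test interval under partial compliance, assuming (as in the
# reasoning above) that non-compliance is spread uniformly across the
# population: partial compliance with a given cadence behaves like full
# compliance with a proportionally longer cadence.
def effective_interval(nominal_interval_days, compliance):
    return nominal_interval_days / compliance

# 71% compliance with once-per-5-day testing:
print(effective_interval(5, 0.71))  # ~7 days, i.e., 7-day testing at 100% compliance
```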
The analysis to be included in the addendum shows that if we achieve high compliance with once-per-5-day testing in the virtual instruction setting, then we do see fewer infections than in the residential setting, with the breakeven point around 50% compliance. (A number of details will be discussed in the addendum.)
Although this falls outside of the realm of modeling, we comment briefly here on some practical reasons that testing with high compliance is difficult in the virtual instruction setting. We defer other legally-focused questions about our ability to mandate testing to University Counsel.
In the residential scenario, Cornell’s ability to ensure compliance with testing comes first from the ability to restrict access to physical property for those who miss testing, and second from an RA’s ability to talk directly to students who live in dorms. In the virtual instruction scenario, for students who aren’t using Cornell’s physical property, this first ability goes away.
Cornell can also restrict access to our electronic property (e.g., Cornell email, Canvas) for those who aren’t tested. In the virtual instruction setting, achieving this second kind of control requires knowing where a student is, because we can only expect students in Ithaca to be tested. Students elsewhere will not have access to testing, at least not on the cadence we would require. We do not currently have a mechanism for determining where students are. While we could simply ask students to self-report their location, a student whose email and Canvas access was revoked for not being tested recently would plausibly be tempted to claim they had left town as a way to regain access.
We emphasize that a residential student who has their NetID and Canvas access revoked because they were not tested would have no easy way to get access back, other than to be tested.
5. Off-campus students are tested in the residential scenario
F31 writes, “But in the reopen scenario the off campus students are also not included in the surveillance testing.”
Off-campus students are included in surveillance testing during the residential scenario in which Cornell reopens.
As described in the section above, one reason it is easier to mandate testing for off-campus students in the reopen scenario is that Cornell will know students’ locations. This allows us to enforce testing simply by turning off email and Canvas access for any enrolled student who does not comply. In the virtual instruction scenario, Cornell will not be able to reliably know whether a student has not been tested because they are living elsewhere or because they are living in Ithaca but misrepresenting their location.
In addition, as described in Appendix 4 of the C-TRO report, Cornell’s legal authority to mandate testing (and behavior changes) for off-campus students is broader in the residential scenario than for “students living off campus and taking classes online only”, even though the off-campus students would not reside on campus.
6. Number of students returning in the virtual instruction scenario
The June 15 modeling report studies what happens when 9,000 students come back to Ithaca, unmonitored, in a virtual instruction setting. Although there are no community comments here on this topic, questions in other fora have asked about the sensitivity of our results to this number.
To address this, we studied what happens if fewer than 9,000 students return. We presented this in a Cornell faculty senate meeting on June 24. Slide 6 of this slide deck shows the number of infections and hospitalizations as we vary (1) the number of unmonitored students returning and (2) contacts / day in the virtual instruction setting relative to the residential setting.
If virtual instruction is unable to reduce contacts / day despite reduced density (e.g., because of the reduced legal authority to mandate behavior called out in Appendix 4 of the C-TRO report), then it remains worse than residential instruction down to under 1,000 students.
If virtual instruction is able to reduce contacts / day by roughly 40%, then the breakeven point improves to 2,000 students. In other words, if more than 2,000 students return, then infections and hospitalizations are larger in the virtual instruction scenario.
Thus, if we assume that we would achieve a reduction in contacts / day somewhere between 0% and 40% relative to the residential setting, an appropriate threshold to examine is 2,000 unmonitored students returning. Although we think it is entirely plausible that fewer than 9,000 students would return, especially if Cornell, the City of Ithaca, and Tompkins County all strongly discouraged students from living in Ithaca, student survey results and conversations with local landlords suggest it would be substantially harder to achieve fewer than 2,000 unmonitored students returning.
If virtual instruction is able to reduce contacts / day by substantially more than 40% (i.e., achieve fewer than 3.32 close contacts / day among the unmonitored students), this corresponds to an assumption that the R0 among the unmonitored student population is below 1, or is close enough to 1 that contact tracing alone is sufficient to control epidemic growth, while the R0 in the monitored residential setting is 2.5. Under these assumptions, virtual instruction becomes safer than residential instruction even if more than 2,000 students return.
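As a rough consistency check of the 3.32 figure (our back-of-envelope using the report's nominal parameters and the standard relation R0 = contacts/day × transmission probability × infectious days, not the report's code):

```python
# Back-of-envelope check: the nominal parameters imply an infectious period,
# and the contact rate at which R0 crosses 1 follows from it.
contacts_nominal = 8.3   # nominal contacts / day
p_transmit = 0.026       # transmission probability per close contact
r0_nominal = 2.5         # nominal population-level R0

infectious_days = r0_nominal / (contacts_nominal * p_transmit)  # ~11.6 days
contacts_at_r0_one = 1.0 / (p_transmit * infectious_days)       # = 8.3 / 2.5 = 3.32
print(contacts_at_r0_one)
```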
7. Modeling fatalities
Unfortunately, fatalities may happen in any of the scenarios considered (residential instruction, virtual instruction) and in no way did we mean to imply otherwise.
We did not model them in the June 15 version of the report because they depend in a critical way on factors that are challenging to model, especially access to ventilators and the fraction of the infections in each age group that strike those with underlying health conditions. In particular, access to health care in Ithaca and the health of the population are both better than among the general population, especially for the time period on which most fatality data is based.
For these reasons, we felt that hospitalizations could be modeled more reliably than fatalities and would be a better guide as to the magnitude of the health consequences. In doing this, we meant for the reader to understand that any hospitalization can result in death. We believe that this was clear to members of the C-TRO committee when discussing the results, and we talked many times about measures to reduce mortality and morbidity (for example, pausing programs in which students volunteer in nursing homes).
In retrospect, it would have been better to be more explicit about this in our modeling report. To get a sense of the relationship between hospitalizations and fatalities, refer to the CDC planning scenarios (https://www.cdc.gov/coronavirus/2019-ncov/hcp/planning-scenarios.html). Taking the ratio between the Symptomatic Case Fatality Ratio and the Symptomatic Case Hospitalization Ratio shows that the fatality rate among those hospitalized is modeled by the CDC as being between 3% and 18% in their “best estimate” scenario. Thus, 16 hospitalizations in the nominal residential instruction setting would correspond to an expected number of fatalities between 0.5 and 2.9. But again, please keep in mind that we have a significant amount of uncertainty about this number.
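The arithmetic above can be sketched as follows (a hedged illustration using the 3%-18% CDC planning-scenario range quoted above, not a formal mortality model):

```python
# Back-of-envelope fatality range from projected hospitalizations, using the
# CDC "best estimate" planning-scenario ratio of symptomatic case fatality
# to symptomatic case hospitalization (3%-18% fatality among hospitalized).
def expected_fatalities(hospitalizations, fatality_rate_among_hospitalized):
    return hospitalizations * fatality_rate_among_hospitalized

low = expected_fatalities(16, 0.03)   # ~0.5
high = expected_fatalities(16, 0.18)  # ~2.9
print(low, high)
```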
One of the comments refers to a sentence on page 32 that mistakenly suggested fatalities would be 0. This referred only to our code, not to our beliefs. Our code does not model fatalities because we did not have time to add this feature, and omitting them did not affect other results; this should not be taken to imply that fatalities would not happen in reality.
It is a mischaracterization to claim that fatalities are not top of mind for the modeling team. Like all of those reading this comment, we are extremely concerned for the health and well-being of all members of the Cornell community. We ourselves are a part of this community, as are many of our dearest friends, neighbors, colleagues and family.
8. Racial and Ethnic Disparities
It is unfortunately true that there are significant racial and ethnic disparities in the impact of COVID-19. We will investigate what data is available for modeling and then discuss either in the addendum or a follow-up comment. We also think it is important to assess disparities in quarantine and health impacts in ongoing monitoring that Cornell would do as part of a reactivation.
Racial and ethnic disparities in health outcomes and quarantine among Cornell students are likely to be different than in the overall U.S. population. Indeed, some of the most important drivers of these disparities in the overall population, such as lack of access to healthcare and an over-representation of racial and ethnic minorities among essential workers, are likely to drive outcomes less significantly in our student population.
9. Suggestion of pressure from university leadership
One comment expresses a concern that the contacts / day parameter was changed because of pressure from university leadership. This concern is unfounded.
This comment is referring to section 7 of the June 15 report, which writes, “Based on feedback from President Brown and his team on the May 27 version, we reexamined the literature on asymptomatic rates, which led us to make a number of modifications to other parameters that significantly altered the results.”
President Bob Brown is the President of Boston University. Here, “his team” refers to David Hamer, who is Professor of Global Health and Medicine at BU involved in BU’s plan for COVID-19 response. Thus, the interaction which led to the change to our parameters did not come from Cornell University leadership — it came from BU. Indeed, Cornell University leadership was unaware that we were making this change and only became aware when we provided them with an updated report.
Moreover, the suggestion that Bob and David made was to increase the fraction of cases that are asymptomatic. We had been using a number based on early data from Italy and China which was subject to underreporting bias. They did not ask us to change the number of contacts per day. After reading the literature about asymptomatic rates, especially data from the CDC, we realized that there was an opportunity to improve other aspects of our parameters. We changed the number of contacts per day based on this reading, as explained in Section 7.
10. Effect of raising transmission rate equally in virtual and residential instruction settings
F21 discusses what happens if we raise the transmission rate in both the virtual instruction and residential instruction setting.
First, we should clarify that the people infected in the virtual instruction scenario are not just the 9,000 unmonitored students living in Collegetown but also the graduate students / faculty / staff living on campus who are being tested regularly (with full compliance). They become infected because they are subject to outside infections from Tompkins County, just like in the residential scenario. We do not model the infections from the unmonitored Collegetown students, but they would present a substantial risk.
Despite this clarification, what F21 says is true. If we hold the transmission rate equal in the two scenarios and raise it, then what we’ll see is that the fraction of the population infected grows in both settings. The fraction infected will be smaller in the residential setting because of the testing in place, but if we bring the transmission rate high enough then the fraction infected will approach 100% in both settings. Then, because the overall population size is larger in the residential setting, nearly 100% of the population represents more people.
With this said, we currently believe that the transmission rates we are most likely to face in reality are ones where the interventions available to us in the residential setting will prevent a high fraction of the population from becoming infected. The addendum will investigate this critical question further.
With regard to Provost Kotlikoff’s response regarding this in the town hall, we were not able to find that specific response and so we are unable to comment.
11. Capacity in local hospitals
The comment is mistaken about what the report states. The report estimates that there would be 16 people hospitalized during the fall semester. It does not comment on local hospitals’ capacity. This was top of mind for President Pollack and has been considered elsewhere in the C-TRO process, but the important task of understanding healthcare capacity is not part of the scope of the modeling report.
12. Framing of uncertainty
F9 writes, “So I would hope to see messaging and planning that acknowledge the vast uncertainty. For example, the Executive Summary of the modeling report highlights numbers (percentiles of projected infection rates) that take no account of parameter uncertainty, which is very misleading.”
Regarding the executive summary of the modeling report, we included a discussion of parameter uncertainty in the executive summary. See, for example, the bullet beginning “In all of our modeling results, modifying modeling parameters by only a modest amount from nominal values can result in substantially different numbers of infections and hospitalizations.” About half of the bullets below this describe uncertainty, limitations, or assumptions in one way or another. However, such bullets do not begin until the bottom of page 2 in the executive summary and we should have raised their prominence.
F30 points out that “some of the comments here about the ‘executive summary’ being misleading” may actually refer to the executive summary of the C-TRO report, rather than the modeling report. F9 does specifically refer to the “Executive Summary of the modeling report” and F5 offers general advice and context rather than criticism, but F30’s statement could be true for F2.
Regardless of comments’ intent, we agree that the executive summary of the main C-TRO report could have done more to articulate the full range of uncertainty in our modeling efforts. For example, in the often-cited phrase, “Paradoxically, the model predicts that not opening the campus for residential instruction could result in a greater number of infected individuals affiliated with Cornell”, the uncertainty conveyed by “could” is easy to miss next to the certainty suggested by the typographical emphasis applied to “not”. While we devoted substantial effort toward conveying caveats and nuance in our own modeling report, we could have devoted more effort to ensuring that more of this nuance was retained by translations and summarizations.
13. Crediting of experts in disease modeling
F9 writes, “The report doesn’t credit Yrjo, Ivana, or anyone with prior human disease modeling expertise for any involvement before June.”
The report does in fact credit them by name in Section 6. It also credits them indirectly in the executive summary, writing, “This effort is supported by … a set of reviews provided by experts both within and outside Cornell on a previous version of this report.” However, we should have credited them in a more prominent place.
14. Impact on Tompkins County
This is an excellent point.
Qualitatively, we think that the impact on Tompkins County will be mild in the nominal residential setting: gateway testing is quite effective at identifying cases before they infect others, and then asymptomatic screening is able to isolate the infectious cases we miss quite quickly. A substantial number of the 1254 cases in this setting are actually due to infections originating outside of Cornell.
However, it is also possible that a small number of cases that leak out of Cornell could grow substantially in other communities that lack access to regular testing. This could cause significant negative effects in those communities, and the infections created there could also come back and reinfect Cornell students, staff and faculty.
Quantitatively, we have only recently developed the capability to study this, through a new simulation framework that can simulate multiple groups of people. We plan to use this to study the question you ask. It likely will not be in the first version of the addendum, but we do plan to include it in a follow-up version.
In the nominal virtual instruction scenario, although we do not model it, the negative effect on Tompkins County is likely to be quite substantial.
15. Impact of students arriving early because we would start Sep 2
This is also a good point. Rather than modeling all of the students as arriving on Aug 27, it may be better to model them as gradually arriving. It is also important (critical, even) to model our gateway testing procedure for these students. We will investigate what our plans are here for testing before conducting any modeling work. We will provide an update either as a comment here or in a follow-up to the addendum.
Even without modeling, we think that having a significant number of people returning from afar without having gateway testing in place can be quite dangerous. The sooner we can have the mechanisms in place to include all returning off-campus students in our asymptomatic testing protocol, the better.
16. Number of cases missed in gateway testing
F32 writes that “a back of the envelope computation shows that this 2-day undetectable period will lead to about 10 cases of infected students which will not be detected by the rigorous testing during the move-in days. This number grows to 15-20 if one includes that the test has false negatives.”
Our analysis of gateway testing includes both (1) cases missed because they are too soon after exposure to be detectable in PCR; and (2) cases missed due to false negatives.
This is discussed in detail in the full modeling report in Section 2.7 and 3.1 with the spreadsheet used to do the analysis available for examination from a link provided in Section 3.1. In that spreadsheet, people who are in the undetectable state and thus missed in gateway testing are in cell E28. This is 0.07% of the population of returning students. False negatives contribute an additional 0.02% of the population that are infectious and missed in gateway testing.
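For intuition, these percentages translate into missed cases as follows (a sketch only; the cohort size below is a hypothetical placeholder, not a figure from the report):

```python
# Cases missed in gateway testing, per the spreadsheet fractions quoted above.
# The cohort size used in the example is hypothetical, for illustration only.
undetectable_frac = 0.0007    # too soon after exposure to be PCR-detectable
false_negative_frac = 0.0002  # infectious but missed due to test false negatives

def missed_in_gateway(n_returning_students):
    return n_returning_students * (undetectable_frac + false_negative_frac)

print(missed_in_gateway(10_000))  # ~9 missed cases for a hypothetical 10,000-student cohort
```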
Comments
The “paradoxical” finding that residential instruction will decrease the total number of cases among students is based on scenarios (p. 44 of the full report) where it is assumed that living on campus and mixing with other students in classes, dorms, dining, etc., will not cause any increase in the daily contact rate or expand students’ contact networks. Is that assumption reasonable? Given that assumption, the finding is almost unavoidable. It’s hard to predict how much contact rates might increase among students on campus and in dorms. How much of an increase would it take to offset the benefits of testing and contact tracing, or to overwhelm the system so that the testing/tracing/isolation plans become infeasible?
Experts on infectious disease modeling (several from the Cornell Vet School, and one external expert on human disease modeling) were only consulted after-the-fact (May 31), too late for their inputs to affect the Committee reports or recommendations. Why was it decided to do that, instead of getting some of them involved from the start?
Thanks for your questions.
Experts on infectious disease modeling were consulted, as described in section 6 of the full report. This included Yrjo Grohn and Renata Ivanek. They both provided detailed reviews of the modeling report.
In regard to the question of daily contact rate on campus, Figure 11 on page 38 shows sensitivity of the number of infections and hospitalizations to this parameter, holding the test frequency and social distancing measures fixed. More contacts / day increases infections and hospitalizations, with an inflection point near double the nominal value. This is roughly the daily contact rate at which we would lose control of the epidemic if we took no action, although as discussed below we would see this happening and would respond.
I understand part of your question as asking how much the residential contact rate would need to increase to make that scenario on par or worse than virtual instruction. The median number of infections overall in the virtual instruction scenario is 7200, which is ~20% of the full population of 34K. From Figure 11 under the nominal parameters, we see that this would be predicted to occur if the daily contact rate increased to roughly double what we expect, which would correspond to an R0 of 5.
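As a rough cross-check of that doubling arithmetic, R0 can be written as contacts/day × transmission probability per contact × infectious days. The infectious-period length below is an assumption back-solved so that the nominal 8.3 contacts/day and 2.6% transmission per contact reproduce R0 = 2.5; the report derives R0 differently (Section 2.5), so treat this as illustrative arithmetic only.

```python
# Hedged sketch: the implied R0 under the nominal parameters, and
# under a doubled contact rate. INFECTIOUS_DAYS is an assumption
# chosen so the nominal parameters give R0 = 2.5.
CONTACTS_PER_DAY = 8.3   # nominal contacts/person/day (report p. 19)
P_TRANSMIT = 0.026       # nominal transmission probability per contact
INFECTIOUS_DAYS = 11.6   # assumed; back-solved from R0 = 2.5

def implied_r0(contacts_per_day):
    return contacts_per_day * P_TRANSMIT * INFECTIOUS_DAYS

print(round(implied_r0(CONTACTS_PER_DAY), 1))      # prints 2.5
print(round(implied_r0(2 * CONTACTS_PER_DAY), 1))  # prints 5.0
```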
If the contact rate is much larger than expected in the fall in the residential scenario, we’ll see this in the asymptomatic screening results and also through contact tracing. We would then take action — the larger-than-expected contact rate is likely to have a specific cause into which we would have visibility via contact tracing. If this cause is addressable, we would do that. If not, we would also have the ability to fall back on increased test frequency. Figure 15 shows the effectiveness of increased test frequency.
Although the report does not emphasize it, this visibility, and the opportunity to respond that it provides, is an important capability of asymptomatic screening. In a virtual instruction scenario where we don’t do screening, we risk not knowing until later that an epidemic is underway.
For reference, here is a link to the full report:
https://people.orie.cornell.edu/pfrazier/COVID_19_Modeling_Jun15.pdf
-Peter Frazier
Thank you for posting the full report. Especially important since the executive summary is pretty misleading about the parameters used and therefore conclusions one should draw from the report.
For reference, here is a link to the full report:
https://people.orie.cornell.edu/pfrazier/COVID_19_Modeling_Jun15.pdf
Including this since the C-TRO report only includes the executive summary.
Today, NATURE published an interesting comment on modeling: “Five ways to ensure that mathematical models serve society”
https://www.nature.com/articles/d41586-020-01812-9
The take-home message:
1. Mind the assumptions
2. Mind the hubris
3. Mind the framing
4. Mind the consequences
5. Mind the unknowns
Hi Peter, thanks for all the work you and your team have put into this.
Re: your comment on what residential contact rate would cause greater infection than virtual instruction, you say “this would be predicted to occur if the daily contact rate increased to roughly double what we expect.” Does that mean twice 8.3 = 16.6 contacts per person per day?
You further state that “If the contact rate is much larger than expected in the fall in the residential scenario … the larger-than-expected contact rate is likely to have a specific cause into which we would have visibility via contact tracing.” Perhaps in-person classes would be the specific cause. Even assuming we eliminate in-person instruction for classes in the hundreds, we routinely gather dozens of students and faculty together in classrooms for hours per day. Strangely, this fact doesn’t appear to factor into your model of contagion in a residential semester.
What’s the empirical justification for a model based on 8.3 contacts per day?
I gather Frazier’s team imports this figure from the CDC, but the CDC isn’t modeling what we’re trying to model. They’re looking at broad populations; we’re looking at a residential college. Seems dangerously misleading to take abstract inferences from the former generic analyses and apply them as “facts” to our specific situation. Instead we should attempt to estimate the actual contacts per person per day if we reactivate campus with in-person classes, dorms and dining halls. We should not assume students never socialize or party.
We should model infections and our capacity to control them based on our best estimates of conditions that will actually obtain, then reopen this conversation.
In Figure 11 on page 38, they graph “average contacts per person per day” against “percentage of population infected,” yet do not even consider the possibility of more than ~13 contacts per day, by which point things are already pretty bad. Please explain.
Thanks again to the modeling group for their enormous efforts. Without your work our discussion would be entirely based on intuition and guesses.
Yesterday’s Senate meeting and the “hallway discussion” clarified some things. The report doesn’t credit Yrjo Grohn, Renata Ivanek, or anyone with prior human disease modeling expertise for any involvement before June. It’s good to hear that they played a larger and earlier role.
However, the discussions also confirmed that the model’s projections about the safety of residential instruction, in absolute terms and relative to virtual instruction, rest on the assumption that campus will not be an atypically hospitable environment for disease spread. Peter said that the break-even point for residential vs. virtual in the model is at twice the CDC estimate for typical/average situations, and beyond that the report shows that residential instruction quickly gets a whole lot worse. I, and many others, think that re-opened college campuses are likely to be very atypical hot-spots for viral spread. Peter and the modeling team evidently disagree, and they’re a pretty sharp group.
So I would hope to see messaging and planning that acknowledge the vast uncertainty. For example, the Executive Summary of the modeling report highlights numbers (percentiles of projected infection rates) that take no account of parameter uncertainty, which is very misleading. Re-opening (if it happens) might go very smoothly. But I think the model implies a real potential for uncontrolled outbreak and we need to be planning how to shut down fast if we need to, with minimal spillover to the Ithaca community. We can respond initially by testing more often, as Peter wrote, but that only goes so far: the curves in Fig. 15 (infection rate vs. testing rate, above the nominal 20%/day) are a whole lot shallower than the ones in Fig. 11 in the relevant range (increasing R0 above 2.5 is equivalent to increasing transmission per contact above the nominal 2.6%).
Please produce a table whose columns correspond to class duration (T = 50 min, 75 min, 120 min) and whose rows specify class size (N = 10, 20, 30, 40, 50). Each table entry should give how many contacts that class counts for. E.g., I am in a 50-minute class with 20 classmates and that adds 5 contacts to my daily contact total.
Make reasonable assumptions about the teaching space, e.g., reasonable airflow, a room of N×100 square feet, etc.
To the layperson, 8.3 contacts per day is shockingly low. Help us reason about safe teaching spaces and contacts!
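For illustration only, here is a sketch of the requested table under a purely hypothetical contact rule calibrated to the example above (a 50-minute class with 20 classmates counting as 5 contacts, i.e. 0.25 contacts per classmate per 50 minutes). A real conversion would need the airflow and room-size assumptions mentioned above, which this sketch does not model.

```python
# Hypothetical contact rule: contacts scale linearly with class size
# and duration, calibrated so (N=20, T=50 min) -> 5 contacts.
PER_CLASSMATE_PER_50MIN = 0.25  # assumed calibration constant

def class_contacts(n_classmates, minutes):
    return PER_CLASSMATE_PER_50MIN * n_classmates * (minutes / 50)

durations = [50, 75, 120]   # minutes
sizes = [10, 20, 30, 40, 50]
print("  N " + " ".join(f"{t:>6}" for t in durations))
for n in sizes:
    row = " ".join(f"{class_contacts(n, t):6.1f}" for t in durations)
    print(f"{n:>3} {row}")
```

Under this rule a 120-minute class of 50 would count as 30 contacts, which makes vivid why the nominal 8.3 contacts/day looks low to many commenters.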
The modeling assumes that an online semester would preclude asymptomatic testing of Ithaca-based students; why can’t we test all Cornell students in Ithaca regardless of whether they’re taking online courses?
What does the modeling say about expected fatalities?
Nothing. They chose not to model fatalities. “We assume all patients recover and there are no deaths.” (32)
We understand the need to use CDC numbers. But do those numbers apply to a small college town?
Let’s say we have 9000 off-campus students in town for N days waiting for residential classes to start. Shouldn’t the no-reopen model be applied to predict what happens during those N days? The reason I am asking is that the C-TRO calendar starts Sept 2, 6 days later than the originally planned Aug 27 start date. Considering apartment leases and other pressures for as-soon-as-possible move-in, it seems that we are encouraging a larger N with the delayed start.
Thus, I would like to see what the model says about an Aug 27 start vs. a Sept 2 start.
Thank you for what is obviously a lot of work for this report. However, like others, I have to express dismay and incredulity at the key built-in assumption of 8.3 contacts per day for each non-quarantined/isolated person (p. 19), very especially for a population largely comprising young adults. This assumed contact level may “cause our implied R0 (calculated in Section 2.5) to match the nominal value of 2.5 recommended by the CDC” (p. 19, last 2 lines), but it is below the level identified in a March 27-April 9 Gallup survey, which found that, even under active social distancing during a period when this was being widely pursued and social contacts were dramatically down from ‘normal’, U.S. adults in general (9.9 mean contacts per day) and working adults (13.9 mean contacts per day) both exceeded 8.3 contacts per day. These two categories both describe college students far better than ‘adults not working’, the one category under 8.3 in the survey (https://news.gallup.com/opinion/gallup/308444/americans-social-contacts-during-covid-pandemic.aspx).
Most studies in fact find that young adults are at the busier end of contacts-per-day numbers (e.g. Mossong et al. 2008). Studies from the pre-COVID-19 era suggest much higher contacts per day (versus 8.3) as typical for the college-age group, for example in the low 20s (e.g. Wallinga et al. 2006; Del Valle et al. 2007). Data does of course vary: whereas previous studies found averages of 13.4 contacts per person (all ages) per day in Europe and 18 for Hong Kong, another study did get a figure of 8.1 (see Leung et al. 2017). Regardless, two findings stand out in the Leung et al. (2017) paper: (i) “The contact intensity was highest among school-aged children aged below 20”, and (ii) the data were strongly right-skewed (some of the sample reported large numbers of contacts). Even a small but substantive group of such super-contacters/spreaders undermines modelling based on low contact numbers.
The campus context and the issue of repeated contacts both exacerbate such a situation, as your ref. [37] (Weeden & Cornwell 2020), based on Cornell data, makes apparent.
Student housing (in all forms), common dining facilities (however socially distanced), and campus life (however socially distanced) all aggregate contacts and probabilities, contrary to the lack of such assumptions in this study; physical changes to dorm structures (double to single, the pod model, mentioned on p. 49 but not included in the modelling) could of course be important. There is further the unproven assumption that a large set of 18-21 year olds will all behave as model, disciplined agents. This seems unlikely from any perspective across the range of accommodation scenarios in question; the past period has indicated that many older adults do not. Moreover, however important COVID-19 and social distancing may be as a recognized aim for individuals and society, other fundamental concerns and issues will potentially take precedence, as recent very justified waves of protest around BLM illustrate, potentially dramatically changing vector relations on campus from the modelled scenarios.
For all these reasons I struggle not to find this study fundamentally flawed, and potentially a dangerous basis on which to make decisions for the Cornell community.
References
Del Valle, S.Y. et al. 2007. Social Networks 29: 539-554.
Leung, K. et al. 2017. Scientific Reports 7: 7974.
Mossong, J. et al. 2008. PLoS Medicine 5(3): e74. https://doi.org/10.1371/journal.pmed.0050074
Wallinga, J. et al. 2006. American Journal of Epidemiology 164: 936-944.
Weeden, K. A. & Cornwell, B. 2020. Sociological Science 7: 222-241.
From page 2: “Also note (4) our nominal scenario for residential instruction assumes full compliance with testing, quarantine and isolation.” Does this imply that full compliance is not assumed in the case of virtual instruction?
The Frazier report specifies that an earlier version (as of May 27-28) of the model assumed a higher number of contacts per person per day, a key parameter. But after intervention by university leadership, for reasons undisclosed, the contacts per person per day was lowered to 8.3. What’s the justification for using this lower figure? What was the parameter prior to that intervention and what was the justification for that number?
Page 50 of the report reveals that “the May 31 parameters predict significantly fewer infections than the May 27 and 28 parameters” as well as fewer hospitalizations. This is worrisome if key parameters were modified to less accurately represent reality. Can we see the earlier report?
Here’s a datapoint. After closing campus and moving courses online, my average contacts per person per day fell to 1. Before that, my average was probably 50.
The assumption that reopening campus won’t rapidly increase contacts and therefore secondary infections is absurd.
The model does a good job examining the effects of Covid within the Cornell community, highlighting that there will be fewer cases with residential instruction than with online. I am concerned, however, about the impact of infections within the Cornell community on Ithaca and Tompkins County residents. Under one scenario, for instance, there is a prediction of 1254 infections among the Cornell community. What is the impact of those infections on Ithaca and Tompkins County residents? If we are concerned about the public health impact on the larger community, it would be good to know that impact. Moreover, this is important information for the city, town, and county to have in making their input on Cornell re-opening.
Much of the discussion (here and in Zooms) about the case loads under residential vs. virtual instruction focus on the likelihood of a higher R0 ensuing if the campus re-opens for instruction. I would like to note that a higher R0 than the “typical community” value assumed in the modeling report, even if it is set to *exactly* the same value in both residential and virtual scenarios, could substantially change the balance between residential and virtual instruction.
Here’s why. Under nominal parameters, in the virtual instruction scenario there are 9000 students living off-campus and unsupervised, and 7200 of them get the virus. Whatever you do to the value of R0 in the model, you can’t infect more than all of them. In the residential scenario with supervised students, there’s a lot more room for higher R0 to increase the cumulative case load.
This observation is relevant to a question asked today in the Q&A box at the Town Hall about reopening, and to the answer that was given by Mike Kotlikoff. My answer (above) differs from what Mike said about the effect on projections if R0 were increased equally under both scenarios.
The report says that the local hospitals can hospitalize only 16 patients, meaning the vast majority of the 1000+ infected students will need to fight off COVID-19 on their own. It saddens me to think that this will be the situation for infected students in the fall. Also, does Cornell have exclusive access to these 16 beds? It does not seem likely; there is also Ithaca College and the local community.
First, let me say I appreciate that the university is doing this sort of work ahead of making decisions about reopening. Following the news of other universities that say they will reopen and figure out the details later has been disheartening. Engaging campus in this way, using the expertise on campus, etc. – I really appreciate this.
1. The report considers the following scenarios: fall-reopen, no-reopen-students, no-reopen-faculty/staff. The overview says that:
“Importantly, not reopening actually results in more infections and hospitalizations than reopening. This arises because the students returning to campus in the reopen scenario undergo test-on-return and asymptomatic surveillance, which controls epidemic spread within this population. In contrast, in the no-reopen scenario, returning students are not subject to the University’s
asymptomatic surveillance testing protocols, allowing infections to grow rapidly in this group.” (p. 12)
The residential instruction/no-reopen scenarios seem like two extreme ends of a spectrum of strategies the university could adopt. Why not consider a no-reopen scenario with asymptomatic surveillance for students who return to Ithaca? For example, what does the number of cases look like if the university screens students before their return to campus? Or what does the number of cases look like if the university screens students who have returned to Ithaca every 7 days for the first month of return? I understand this probably depends on student compliance but it seems at least worth exploring. Put another way: it seems ludicrous for the university to say “not my problem” if students choose to come back to Ithaca even if residential instruction doesn’t resume.
2. The other question I have about the report is whether it is possible to consider how infections are likely to affect different student populations. There are racial and ethnic disparities in the impacts of COVID-19 [e.g. https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/racial-ethnic-minorities.html, https://covidtracking.com/race]. How will this translate to students? For example, the report could begin by considering the racial and ethnic makeup of students who live on/off campus and assessing whether this changes the likelihood that particular student populations require quarantine.
I think any reopening plan *MUST* consider this. If the models are ‘blind’ to this issue then I urge them to be explicit about that assumption and include it in the summary.
You raise an important point about the dangers of adopting a “race-blind” health model that will disparately impact minority communities at Cornell. There are medical but also legal risks here.
The second link (covidtracking.com/race) seems to be broken.
You’re right to flag the racial and ethnic disparities. We should be talking about this.
We should also be talking about the fact that many of the reasons the CDC gives for higher risk for COVID-19 among minorities will apply to all college students in a residential semester. These risks (sourced from the CDC website linked above) include:
“more likely to live in densely populated” spaces
“may lack safe and reliable transportation”
“more likely to rely on public transportation, which may make it challenging to practice social distancing”
“where people live, work, eat, study, and recreate within congregate environments … it [is] difficult to slow the spread of COVID-19”
Sounds like college students during any semester in Ithaca.
The report assumes 100% compliance with testing and quarantine in the reopen scenario, and zero effort to test or quarantine non-residential students (compliance not even an option) in the no-reopen scenario. The former is impossible and the latter unnecessary in the real world, so it would be fruitful to compare models a little less like apples and oranges. What conclusions do we draw, for example, if we have 80% compliance in the reopen scenario but make a non-zero effort to test non-residential students in the no-reopen scenario? The model as it stands favors reopening, but seems predetermined to do so.
I feel like the university is engaging in double-speak regarding social messaging. First, they ask us to trust students to engage in safe and responsible social distancing throughout the semester if we reopen. They tell us they’re launching a campaign of social messaging to ensure students behave safely.
Yet in the Frazier model, the assumption is that students living off-campus in a no-reopen scenario will not and cannot be expected to behave safely. The testing and quarantine assumed to be 100% effective if we reopen campus is not even an option; we don’t even try. Kotlikoff justified this abdication of responsibility in the town hall yesterday by saying it would be harder to “enforce” testing among students not living in dorms in the no-reopen scenario. In other words, he pinned the blame on the students for not complying with a program he’s actually declined to offer them, at least in the hypothetical models under discussion.
The notion that we can and should lock students out of their dorms to enforce testing if we reopen is a little strange. So if they miss a test, they sleep in the snow? It’s also breathtakingly naive as an “enforcement” technique imagined to be highly effective. It’s quite easy to get into a locked building subject to heavy foot traffic — like a dorm — by waiting 30 seconds until somebody else opens the door. Or by calling a friend to let you in. Students will not find it difficult to get around this enforcement technique if and when they neglect to get tested on time or, as seems likely in N cases, when they choose to avoid testing, perhaps because they don’t want to be quarantined. The model does not account for any of this, just assuming “enforcement” on-campus through dorm lockouts is 100% effective. In reality, the most efficacious enforcement mechanism will likely be the same for residential as for non-residential students: shutting down their netID.
Even absent a social messaging campaign, surely some students living off campus in a no-reopen scenario will want access to tests. Some — perhaps many — would also have an interest in having their roommates tested and provided a safe space in which to quarantine if needed. Why don’t we try surveying students about their interest? At the very least, we should not take it for granted that off-campus students in a no-reopen scenario would never participate and therefore plan to give them no access to tests or quarantine spaces, as in the model.
A culture of conformity to public health surveillance is not impossible to achieve, although it would surely be less than 100% effective — just as on-campus compliance will be less than 100%.
I’m wondering about the students living off campus in the re-open scenario. Do they simply disappear? Or does their unsafe behavior no longer matter?
“The average student has the potential to share a classroom with about 529 different students by the time they have completed one round of courses on their schedule (5,832,358*2 bi-directed arcs[“edges”]/22,051 students = 529). This does not mean that the average student will share a classroom with 529 other classmates, given attendance is rarely 100 percent. On the other hand, most classes meet more than one time per week, giving each student multiple opportunities to be in the same room…” (228-29).
Kim A. Weeden and Benjamin Cornwell, “The Small-World Network of College Classes: Implications for Epidemic Spread on a University Campus,” Sociological Science Vol. 7 (May 2020): 222-41.
Data like this seems to challenge the Frazier model, which assumes no increase in social contacts or R0 in a reopened campus. However, Weeden and Cornwell were not calculating contacts in a socially distanced classroom. Would be useful to get Weeden, Cornwell, and Frazier, among others, in conversation about social networks and epidemic spread in a reopened campus this fall. It seems no one on the university planning committees was tasked with examining this set of problems. Startling oversight.
While I appreciate all of the hard work that went into this report, like many others, I am deeply concerned about the assumptions of the model. By not modeling 50 or 75% compliance and comparing it to 100% compliance in the residential scenario, this report essentially denies us the opportunity to determine what level of compliance would be necessary to limit risk. Maybe 80% compliance is good enough, maybe it’s not – but we can’t know from what’s here. It is also absurd to assume 0 fatalities, based on everything we know about COVID-19 and our local Cornell community. How many members of the Cornell community, exactly, are the president and provost willing to let die to ensure a residential semester? How might different plans affect that number of deaths? Again, we’re denied the option to even explore the impacts – any discussion of “risk” that pretends fatality is not on the table is unrealistic at best, deliberately misleading at worst. Making decisions based on this model, given these limitations, is deeply concerning.
Huge thanks for Frazier and his team for their detailed and ongoing effort to address concerns raised here and in other fora.
I’m not sure, but I suspect some of the comments here about the “executive summary” being misleading are referring not to the executive summary in Frazier’s report itself, which he (reasonably) defends, but to the C-TRO’s executive summary. The latter states, “Paradoxically, the model predicts that not opening the campus for residential instruction could result in a greater number of infected individuals…. Hence, it appears that in addition to the educational advantages that would arise, opening the campus for residential instruction may be in the best interest of the health of the Cornell community, Ithaca, and surrounding communities.” In that text, all the complexity and uncertainty in this prediction is left out. This simplified and perhaps overly rosy view of the Frazier model is likely to be far more widely read and internalized by the Cornell and Ithaca community than the Frazier report itself.
The no-reopen scenario assumes that there will be a large number of students living off campus who will be sufficiently beyond the control of the university as to preclude surveillance testing. But in the reopen scenario the off-campus students are also not included in the surveillance testing. Why is this a major problem in the no-reopen scenario, but not in the reopen case? It could still be the same number of off-campus students either way, possibly even more in the reopening scenario.
I find the last paragraph in point 2 of the response very confusing. In the paper the authors of the model clearly state that during the first two days after exposure the disease is not detectable by the PCR test (page 17). Therefore one cannot use frequent testing to leverage this period to slow the spread, as suggested by the response.
A back of the envelope computation, based on the current number of new cases, shows that this two-day undetectable period will lead to about 10 cases of infected students which will not be detected by the rigorous testing during the move-in days. This number grows to 15-20 if one includes the test’s false negatives.
I do agree with one of the conclusions of the model, that a significant student population outside campus is likely to have a very significant infection rate absent any testing.
The other main conclusion is a bit more problematic: opening the campus with significant surveillance testing might also lead to significant infection. The chosen parameters avoid this scenario, but increasing the infection rate/number of contacts (to a number which is plausible) leads to significant infection. The author stated multiple times that the testing can detect this at very early stages, but he did not provide any evidence supporting this claim.
The chosen model gives about 10 detected infections per day, but I am afraid that an epidemic can start with something like 15 cases per day for the first two weeks, followed by an explosion that ramps to 50 cases per day over a week or so. However, at that point the epidemic can be stopped.
I understand that the authors had put a lot of effort in this model, but it is clear that it has significant shortcomings and the response does not seem to address most of them.
Most if not all of the proposed academic calendars involve ending in-person/residential instruction in advance of the Thanksgiving break, with the rest of the Fall semester to be completed virtually. In the “no reopen” modeling scenario, however, it is assumed that several thousand students will be physically in Ithaca even if there is no in-person instruction. So it is perhaps not unreasonable to assume that some fraction of students might leave Ithaca for Thanksgiving, only to return after Thanksgiving to complete the rest of the semester virtually. Has that possibility been included in any of the modeling efforts?
To clarify the comment above (F33), the question is whether — in the residential/reopen scenario — any thought has been given to off-campus students who leave Ithaca at Thanksgiving and then return for some period of time before the winter break. (Presumably, in the no-reopen scenario, off-campus students might be coming and going throughout the semester, which might or might not be reflected in parameter choices that have been made.)
Having not yet read the report but having listened to the town hall, I have a question. Since a significant portion of the student body lives off campus, including seniors, graduate students, and professional students, how are you going to control contacts and testing? This is basically the virtual-instruction scenario, but with the students coming back. The term “residential” is misleading. Does it mean living in a dorm or university housing, or being in Ithaca?
This preliminary study implicitly frames the institution and its prerogatives as unquestionable. This is a serious error.
The study cites Tompkins County as “a significant source of infection”. Tompkins County has had one new case in the last three weeks. We sacrificed for months to bring infection to near-zero levels. WE aren’t the source of infection. CORNELL would overwhelmingly be the agent of infection to its surrounding community!
This is a serious error because it is being used as cover for the university to assume prerogative for itself. It is OUR prerogative whether to allow you to conduct your business at the expense of our health! Are you offering to pay the medical expenses for the infections you will be causing? Funeral expenses?
And you cannot cover this mistake with talk about human endeavor and risk, pap about depression and suicide. You are a wealthy university that squanders its resources on 2-and-20 deals and a top-heavy management style. If you cannot support both your employees and the surrounding community until you truly get this sorted out, then the manner in which you conduct your core mission must be brought under question and re-examined at a fundamental level.
Does the modeling team understand how draconian it is to threaten to remove students’ access to their own email to encourage compliance with testing? (This is proposed in section 4 of the Replies to Comments.) Does the modeling team understand the *numerous* equity problems that such a threat creates?
Thank you for all your efforts in the full report. I wonder how the rapid increase in the past two weeks would change the result of your model. In the executive summary, you mentioned that “Toward the goal of quantifying uncertainty, we are continuing efforts to estimate parameters, provide ranges of plausible parameter values against which we should plan, and investigate the impact of modeling assumptions.” Could you provide some insights on how the parameters are changed based on the past two weeks’ data? And more importantly, has such an extreme scenario been tested in the Jun 15 report? If not, I am worried about the fidelity of the report.
This report fails to address what I think is the most important issue – the risk of ruin. There is likely a chance of a widespread, uncontrollable infection on the Cornell campus; this would probably lead to the cancellation of classes and research and put the university in a worse situation than if it had not reopened in the fall. The possibility of ruin would be represented as the tail of the distribution of possible outcomes, and the important questions to ask become: how probable are these tail events? How sensitive is that probability to the model parameters? Would reopening be wise given the risk of ruin? Until these questions can be answered, it seems foolish to commit to reopening the university.
Where can we see the survey used to gather data on how many Cornell students would return to Ithaca regardless of the mode of instruction? Additionally, how many students was this sent to and how many responded? During what time period was it collected?
Whatever was gathered is only a snapshot from a particular moment in time before recent surges, and without complete information regarding what campus life would really look like in a COVID world, and without parental input. Given all this, how useful and truthful is this data?
What is Cornell doing now to know who has returned, whether they have been tested, and to notify those arriving from states currently under NYS quarantine-upon-arrival orders?
Where will temporary hospital wards be set up given the possibility of exceeding what our small medical community has to offer?
Why not keep COVID infected students in Statler rather than spreading them out into the Ithaca community?
What are the ethical obligations that our higher ed institutions have to protect the welfare of the permanent local residents?
I completely agree with this comment. I cannot find the survey results and data to support the Frazier report.
After reading the Frazier report, we need to see the survey data this report’s simulations were based on. The LOWER-bound (“optimistic”) scenario in the Frazier modeling uses 8000/15000. In an article in the WSJ, Pollack stated that this survey found that “as many as 50%” of undergrads reported they would return to campus regardless of the mode of instruction.
So which is it: as many as 50%, or as few as 50%?
We need some academic transparency and a peer review process for these findings. This is a university, after all. I urge all faculty, especially those in quantitative social science fields, to review this study and Pollack’s conclusions.
To what extent, if any, has this model been revised to reflect the alarming and growing surge in cases in large sections of the country?
Reading the report from June, as well as the addendum from July, I have a few concerns about the model.
First, looking at Figure 3 – you’re plotting two scenarios: 1) full re-open with active surveillance and 2) no re-open and no surveillance. Your conclusion is that scenario 1 will be better, but you’re changing not one but two variables across the two scenarios. Could you share with us the two other scenarios – re-opening campus without surveillance, and no re-open with surveillance? This would really help separate the effect of re-opening from the effect of adding testing.
Second, I’m very concerned about the assumption that there are homogeneous contacts between individuals. As an alum, my cursory familiarity with housing options for students suggests to me that there could be wild differences in the exposure risk between students who live in single apartments off-campus and students who live in fraternity houses. My concern here isn’t that the assumption is unrealistic – my concern is that it is well-understood among contact network epidemiologists that contact heterogeneity is a strong driver of outbreak size. Having read your explanations in your report for why you’ve chosen 8.3 contacts per individual, I do not see how you can justify not accounting for contact heterogeneity. I consider this important enough that providing actionable advice to Cornell and Ithaca based on a model that doesn’t account for this would be irresponsible.
It does sound like you’re planning on incorporating heterogeneity into your model for the future – I’d be very excited to see how this changes things.
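To make the heterogeneity point concrete, here is a toy calculation (my own illustrative numbers, not the report’s parameters). In degree-based mean-field theory, early spread scales with E[k²]/E[k] of the contact distribution rather than with the mean E[k] alone, so two populations with the same 8.3 average contacts per day can have very different effective reproduction numbers. The per-contact transmission probability and the 4-vs-30 contact mixture below are hypothetical.

```python
import random

def effective_r0(contacts, transmission_per_contact=0.026):
    # Degree-based mean-field approximation: early spread scales with
    # E[k^2]/E[k], so variance in contact rates inflates transmission
    # even when the mean is held fixed. transmission_per_contact is a
    # made-up illustrative value, not the report's parameter.
    mean_k = sum(contacts) / len(contacts)
    mean_k2 = sum(k * k for k in contacts) / len(contacts)
    return transmission_per_contact * mean_k2 / mean_k

random.seed(0)
n = 10_000

# Homogeneous assumption: everyone has exactly 8.3 contacts/day.
homogeneous = [8.3] * n

# Hypothetical mixture with the same mean: most students at ~4
# contacts/day, a minority (e.g. congregate housing) at ~30.
p_high = (8.3 - 4.0) / (30.0 - 4.0)
heterogeneous = [30.0 if random.random() < p_high else 4.0
                 for _ in range(n)]

print(effective_r0(homogeneous))    # baseline
print(effective_r0(heterogeneous))  # noticeably larger, same mean contacts
```

The second number comes out roughly twice the first, which is the commenter’s worry in miniature: matching the average contact rate does not capture the outbreak-size risk contributed by high-contact subgroups.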
Figure 3 of the addendum suggests that there is no delay between testing and quarantining infected individuals (otherwise there is no way to explain how an epidemic can be controlled when R_0 is as large as in the 40-contacts-per-day case). This is not a very realistic assumption; at best one can hope for a 12-hour delay from taking the test to quarantining, and 24 or even 48 hours is more realistic given the current delays in testing in the US.
If people are tested every 5 days, adding such a delay will not make a huge difference, but if one goes to testing every other day the delay becomes essential. It seems that accounting for such a delay would reduce the effective testing frequency from 50% to about 33%. Thus, even in the pessimistic scenario, controlling the epidemic would require testing every other day (or even every day), which is likely outside the capabilities of the Vet school and Tompkins County health services.
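A back-of-envelope sketch of this point (illustrative only): with screening every d days and a reporting delay of τ days, an infected person circulates for roughly d/2 + τ days on average before isolation, so a fixed delay eats a growing share of the benefit as the test interval shrinks.

```python
def mean_days_circulating(test_interval_days, delay_days):
    # A new infection lands uniformly within the test interval, so it
    # waits test_interval/2 days on average for the next test, then
    # waits out the reporting delay before isolation.
    return test_interval_days / 2 + delay_days

# Compare test intervals (days) against reporting delays (days).
for interval in (5, 2, 1):
    for delay in (0.0, 0.5, 1.0, 2.0):
        print(interval, delay, mean_days_circulating(interval, delay))
```

For example, every-other-day testing with a 1-day delay gives 2/2 + 1 = 2.0 circulating days, the same as testing every 4 days with instant results, which is the sense in which the delay degrades the effective testing frequency.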
One very important point not addressed in the addendum is how to detect whether the spread factor is higher. That is, if the university starts with testing every 5 days and the number of contacts turns out to be, say, at the pessimistic level of about 11 per day, how long is the window during which switching to testing every other day will still control the spread? Clearly, if one waits a month it will be too late; however, one needs to wait a few weeks to see that the number of infections is higher than predicted by the model before switching to more frequent testing.
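A toy calculation of the cost of this waiting window (my own hypothetical numbers, not the report’s): if undetected infections grow exponentially, waiting w days before tightening the testing cadence multiplies the outbreak by 2^(w / doubling time).

```python
def cases_after_wait(initial_cases, doubling_time_days, wait_days):
    # Simple exponential growth: the cost of waiting wait_days before
    # switching to more frequent testing is a factor of
    # 2 ** (wait_days / doubling_time_days).
    return initial_cases * 2 ** (wait_days / doubling_time_days)

# Hypothetical numbers: 10 undetected cases, 4-day doubling time.
for wait in (7, 14, 28):
    print(wait, round(cases_after_wait(10, 4.0, wait)))
```

With a 4-day doubling time, a two-week wait grows 10 cases to over 100, and a month-long wait to over 1000, which is why the width of this detection window matters so much.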
In short, the addendum does not address many of the shortcomings of the model, and it is likely that the model’s predictions are overly optimistic.
If the work on this is ongoing, as suggested by a Github repository that seems associated with the model [https://github.com/peter-i-frazier/group-testing], will the modeling team continue to provide updates to the university community during the fall semester?
The most recent Python notebook [https://github.com/peter-i-frazier/group-testing/blob/master/notebooks/NYS_lockdown_threshold_analysis_Aug_29.ipynb] looks like it presents simulations related to the new NYS guidelines for “locking down” universities. Why are Vet and/or Law school students included in or excluded from some simulations? Is the modeling still considering the overall impact on the community, or is it now focused only on those who would be part of the ‘threshold count’?
I encourage the team doing this work to be transparent and to continue communicating. If the models are being changed, or the outputs of the models are being used in ways different from those communicated to the public (i.e., the Cornell community) so far, it seems important to let us know both here and on the Cornell COVID page [https://covid.cornell.edu/testing/modeling/].