Brian of course reminded me that while media releases are part of the job, they won’t get me my next job. I’d roll my eyes and complain about how he’s spoiling my fun, except that he’s right. Time to get that next paper out.
So I won’t spend too much time going over stuff that is already discussed in quite a bit of detail in the media releases (which I helped put together). I’ll just give some extra details and a slightly more personal angle on it.
The models I used aren’t really new; my analysis code implements them pretty much straight up. Nor is the idea of using light curves to infer the progenitor mass observationally. Even the theoretical idea that the white dwarf’s mass could explain the width-luminosity relationship used in cosmology isn’t new! From the conclusions of this 14-year-old paper (my boldface below):
…we have explored the effect of changes in a variety of parameters on the resulting light curve. These include the opacity, explosion energy, 56Ni mass and distribution, and total mass. Of these, there is only one parameter that by itself can explain the observed correlation of peak width and luminosity: **the total mass**. All others, when varied individually, lead to anticorrelations. This does not necessarily imply that the mass of the explosion is the controlling parameter; there may be various combinations of parameters that, when altered in concert, lead to the same behavior. For example, if the opacity can be shown to be a strong function of the 56Ni mass, then the behavior of models at a single mass may be able to reproduce the [Phillips relation]. The fact that variations in so fundamental a property of the explosion as the total mass can explain the observed behavior is suggestive.
The new twist I’ve introduced is to embed these relatively simple analytic models, which everyone would agree have their shortcomings, into a Bayesian inference framework capable of treating the systematic errors and theoretical approximations using probability theory, rather than trying to make the models perfectly self-consistent. My thinking was that as long as you’ve got the essential model features coded in, and are honest about the error bars, you can probably find some interesting things.
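To make the idea concrete, here is a toy sketch of that kind of inference. This is emphatically not the paper’s actual model: the mass-width scaling, the normalisation, and all the numbers below are invented for illustration. The point is the structure: a simple analytic forward model, a single error term that lumps measurement noise together with the model’s own theoretical sloppiness, and a posterior over the physical parameter computed honestly from those ingredients.

```python
import numpy as np

# Invented toy scaling (NOT the paper's model): light-curve width
# grows as the square root of the total ejected mass M (solar masses),
# with an arbitrary normalisation tau0 in days.
def model_width(mass, tau0=15.0):
    """Predicted light-curve width (days) for a given total mass."""
    return tau0 * np.sqrt(mass)

def posterior_mass(obs_width, sigma, masses):
    """Grid posterior over mass: flat prior times Gaussian likelihood.

    sigma deliberately lumps together the measurement error AND an
    extra budget for the analytic model's theoretical approximations,
    in the spirit of the text above.
    """
    pred = model_width(masses)
    log_like = -0.5 * ((obs_width - pred) / sigma) ** 2
    post = np.exp(log_like - log_like.max())   # avoid underflow
    dm = masses[1] - masses[0]
    return post / (post.sum() * dm)            # normalise on the grid

# Made-up observation: an 18-day width, known to +/- 1.5 days.
masses = np.linspace(0.5, 2.5, 1000)
post = posterior_mass(obs_width=18.0, sigma=1.5, masses=masses)
dm = masses[1] - masses[0]
mean_mass = (masses * post).sum() * dm
```

In a real analysis the grid would be replaced by a sampler and the single Gaussian by a proper treatment of each systematic, but even this skeleton shows why the approach is forgiving: an imperfect model with an honest error term still yields a usable posterior.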
Bayesian methods have gained traction at varying rates in different parts of astronomy. The cosmologists understand that they are basically indispensable for getting the most honest possible uncertainties on the fundamental parameters that describe the universe, so Bayesian inference is part and parcel of what they do. In traditional astronomy, on the other hand, the most complicated thing you’re usually calculating is a least-squares fit to a line or Pearson’s r applied to a cloud of data, which leads to grumbles from those with more sophisticated methods at their disposal.
Anything else you want to know, just ask!