news: “Critical Mass Not Needed for Supernova Explosions”

So that paper I mentioned yesterday?  It’s in the news:

Mainstream media release from caastro.org
Science media release from lbl.gov

Brian of course reminded me that while media releases are part of the job, they won’t get me my next job.  I’d roll my eyes and complain about how he’s spoiling my fun, except that he’s right.  Time to get that next paper out.

So I won’t spend too much time going over stuff that is already discussed in quite a bit of detail in the media releases (which I helped put together).  I’ll just give some extra details and a slightly more personal angle on it.

The models I used aren’t really new (my analysis code uses their findings pretty much straight up), and neither is the idea of using light curves to infer the progenitor mass observationally.  The theoretical idea that the white dwarf’s mass could explain the width-luminosity relationship used in cosmology isn’t new either!  From the conclusions of this 14-year-old paper (my boldface below):

…we have explored the effect of changes in a variety of parameters on the resulting light curve.  These include the opacity, explosion energy, 56Ni mass and distribution, and total mass.  Of these, there is only one parameter that by itself can explain the observed correlation of peak width and luminosity:  the total mass.  All others, when varied individually, lead to anticorrelations.  This does not necessarily imply that the mass of the explosion is the controlling parameter; there may be various combinations of parameters that, when altered in concert, lead to the same behavior.  For example, if the opacity can be shown to be a strong function of the 56Ni mass, then the behavior of models at a single mass may be able to reproduce the [Phillips relation].  The fact that variations in so fundamental a property of the explosion as the total mass can explain the observed behavior is suggestive.

The new twist I’ve introduced is to embed these relatively simple analytic models, which everyone would agree have their shortcomings, into a Bayesian inference framework capable of treating all the systematic errors and theoretical approximations with probability theory, rather than trying to be perfectly self-consistent.  My thinking was that as long as you’ve got the essential model features coded in, and are honest about the error bars, you can probably find some interesting stuff.
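To make that concrete, here is a minimal sketch of the approach (this is not the actual analysis code; the toy light-curve model, scalings, and numbers below are invented purely for illustration).  A simple analytic model goes into a likelihood that carries an explicit “model error” nuisance parameter, which gets marginalized over so that the analytic approximations inflate the error bars instead of being ignored:

```python
# Minimal sketch (illustrative only, not the paper's code): wrap a toy analytic
# light-curve model in a Bayesian likelihood with a nuisance parameter that
# absorbs systematic/model error, then sample the posterior with plain Metropolis.

import numpy as np

rng = np.random.default_rng(42)

def toy_light_curve(t, m_ni, m_ej):
    """Illustrative analytic model: peak luminosity scales with the 56Ni mass,
    and the light-curve width grows with the ejecta mass (toy scalings)."""
    t_peak = 19.0 * np.sqrt(m_ej)        # days; made-up normalization
    width = 12.0 * np.sqrt(m_ej)
    l_peak = 2.0e43 * m_ni               # erg/s; made-up normalization
    return l_peak * np.exp(-0.5 * ((t - t_peak) / width) ** 2)

# Fake "observed" bolometric light curve, standing in for real data.
t_obs = np.linspace(5, 60, 20)
true_mni, true_mej = 0.6, 1.1
sigma_obs = 1.5e42
l_obs = toy_light_curve(t_obs, true_mni, true_mej) + rng.normal(0, sigma_obs, t_obs.size)

def log_posterior(theta):
    """Flat priors within broad bounds; Gaussian likelihood with an extra
    fractional 'model error' term added in quadrature, so the analytic
    model's shortcomings widen the error bars instead of being ignored."""
    m_ni, m_ej, sig_model = theta
    if not (0.0 < m_ni < 2.0 and 0.1 < m_ej < 3.0 and 0.0 < sig_model < 1.0):
        return -np.inf
    model = toy_light_curve(t_obs, m_ni, m_ej)
    var = sigma_obs**2 + (sig_model * model) ** 2
    return -0.5 * np.sum((l_obs - model) ** 2 / var + np.log(2 * np.pi * var))

# Plain Metropolis sampler -- enough to marginalize over the nuisance parameter.
theta = np.array([0.5, 1.0, 0.1])
step = np.array([0.05, 0.05, 0.02])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0, step)
    logp_new = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_new - logp:
        theta, logp = proposal, logp_new
    samples.append(theta.copy())
samples = np.array(samples[5000:])       # discard burn-in

print("M_Ni = %.2f +/- %.2f (toy units)" % (samples[:, 0].mean(), samples[:, 0].std()))
print("M_ej = %.2f +/- %.2f (toy units)" % (samples[:, 1].mean(), samples[:, 1].std()))
```

The nuisance parameter is exactly the “honest about the error bars” part: the posterior widths on the physical parameters automatically reflect how much slack the data demand for the model’s shortcomings.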

Bayesian methods have gained traction at varying rates in different parts of astronomy.  The cosmologists understand that they are basically indispensable for getting the most honest possible uncertainties on the fundamental parameters that describe the universe, so Bayesian inference is part and parcel of what they do.  In traditional astronomy, on the other hand, the most complicated thing you’re usually calculating is a least-squares fit to a line or Pearson’s r applied to a cloud of data, which leads to grumbles from those with more sophisticated methods at their disposal.
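For contrast, the “traditional” calculation I have in mind really is just a couple of lines (toy data below, only to make the point):

```python
# Toy example of the traditional analysis: fit a straight line by least squares
# and quote Pearson's r, with no attempt to propagate systematic uncertainties.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 2.0, x.size)   # fake data cloud

slope, intercept = np.polyfit(x, y, 1)           # least-squares straight line
r = np.corrcoef(x, y)[0, 1]                      # Pearson's r

print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.2f}")
```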

Anything else you want to know, just ask!
