Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2019-12-06 12:30 to 2019-12-10 11:30 | Next meeting is Tuesday Nov 26th, 10:30 am.
Macroscopic dark matter -- "macros" -- refers to a broad class of alternative candidates to particle dark matter with still unprobed regions of parameter space. These candidates would transfer energy primarily through elastic scattering with approximately their geometric cross-section. For sufficiently large cross-sections, the linear energy deposition could produce observable signals, in the form of thermonuclear runaway, if a macro were to pass through a compact object such as a white dwarf or neutron star, leading to a type Ia supernova or a superburst respectively. We update the constraints from white dwarfs. These are weaker than previously inferred in important respects because of a more careful treatment of the passage of a macro through the white dwarf and greater conservatism regarding the size of the region that must be heated to initiate runaway. On the other hand, we place more stringent constraints on macros at low cross-section, using new data from the Montreal White Dwarf Database. New constraints are inferred from the low-mass X-ray binary 4U 1820-30, in which more than a decade passed between successive superbursts. Updated microlensing constraints are also reported.
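For orientation, a schematic sketch of the energy-deposition argument (illustrative symbols, not the paper's detailed criterion): a macro of geometric cross-section $\sigma_x$ moving at speed $v_x$ through stellar material of density $\rho$ deposits energy along its track at roughly
$$ \frac{dE}{dx} \sim \sigma_x\, \rho\, v_x^{2}, $$
and ignition is plausible only if the heated region along the track is at least comparable to the trigger size $\lambda_T$ for thermonuclear runaway, roughly $\sqrt{\sigma_x/\pi} \gtrsim \lambda_T$. The abstract's point is that a more conservative choice of this trigger region weakens the white-dwarf constraints.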
We have shown (Colin et al. 2019) that the acceleration of the Hubble expansion rate inferred from Type Ia supernovae is essentially a dipole with 3.9$\sigma$ significance, approximately aligned with the CMB dipole, while its monopole component, which may be interpreted as due to a Cosmological Constant (or more generally dark energy), is consistent with zero at 1.4$\sigma$. This is challenged by Rubin & Heitlauf (2019), who assert that we incorrectly assumed the supernova light-curve parameters to be independent of redshift, and erred further in considering their measured redshifts (in the heliocentric frame) rather than transforming them to the CMB frame (in which the universe supposedly looks isotropic). We emphasize that our procedure is justified and that their criticism serves only to highlight the rather "arbitrary corrections" that are made to the data in order to infer isotropic cosmic acceleration. This is a vivid illustration of the 'Cosmological Fitting Problem' faced by observers who live in an inhomogeneous universe but still use the maximally symmetric FLRW cosmology to interpret observations.
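For reference, a minimal sketch of the decomposition at issue (schematic, following the parametrization of Colin et al. 2019): the low-redshift deceleration parameter is expanded as a monopole plus a dipole along a sky direction $\hat{n}$, with some redshift-dependent falloff $F(z)$,
$$ q(z,\hat{n}) \simeq q_m + \vec{q}_d\cdot\hat{n}\,F(z), \qquad \text{e.g. } F(z)=\exp(-z/S), $$
so the quoted 3.9$\sigma$ refers to the significance of $\vec{q}_d \neq 0$ (with its direction close to the CMB dipole) and the 1.4$\sigma$ to the monopole $q_m$ being consistent with zero.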
We develop new tools for isolating CFTs using the numerical bootstrap. A ``cutting surface'' algorithm for scanning OPE coefficients makes it possible to find islands in high-dimensional spaces. Together with recent progress in large-scale semidefinite programming, this enables bootstrap studies of much larger systems of correlation functions than was previously practical. We apply these methods to correlation functions of charge-0, 1, and 2 scalars in the 3d $O(2)$ model, computing new precise values for scaling dimensions and OPE coefficients in this theory. Our new determinations of scaling dimensions are consistent with and improve upon existing Monte Carlo simulations, sharpening the existing decades-old $8\sigma$ discrepancy between theory and experiment.
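For context, a generic sketch of the mixed-correlator bootstrap setup (not the specific crossing system solved in this paper): crossing symmetry for a set of external scalars can be written as a sum rule over exchanged operators $\mathcal{O}$, with the OPE-coefficient vectors $\vec{\lambda}_{\mathcal{O}}$ entering quadratically through matrices of crossing functions $V_{\Delta,\ell}$,
$$ 0 = V_{0}(u,v) + \sum_{\mathcal{O}} \vec{\lambda}_{\mathcal{O}}^{\,T}\, V_{\Delta_{\mathcal{O}},\ell_{\mathcal{O}}}(u,v)\, \vec{\lambda}_{\mathcal{O}} . $$
A putative spectrum is excluded if a linear functional $\alpha$ exists with $\alpha[V_0] > 0$ and $\alpha[V_{\Delta,\ell}] \succeq 0$ for all allowed $(\Delta,\ell)$; the search for such an $\alpha$ is the semidefinite program, while the cutting-surface algorithm mentioned above organizes the scan over the OPE-coefficient vectors.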