The adaptive ridge for model selection and applications

We present a recent penalized likelihood method called the adaptive ridge. By iteratively solving weighted ridge problems, this method efficiently approximates L0-penalized model selection problems. The method is similar to the (multi-step) adaptive lasso, with the notable advantage that the penalized likelihood remains smooth, so that classical optimization algorithms (e.g. gradient descent, Newton-Raphson, Marquardt) can be applied directly. The usefulness of the adaptive ridge is illustrated by several applications: irregular histograms, piecewise constant hazard estimation, spline regression with automatic knot selection, and graph/image/map segmentation.
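
To make the reweighted iteration concrete, here is a minimal sketch for the Gaussian linear model, assuming the commonly used weight update w_j = 1/(beta_j^2 + delta^2) with a small constant delta, so that w_j * beta_j^2 approximates the L0 indicator of beta_j being nonzero. The function and parameter names (adaptive_ridge, lam, delta) and the selection threshold are illustrative choices, not taken from the paper.

```python
import numpy as np

def adaptive_ridge(X, y, lam=1.0, delta=1e-5, n_iter=50, tol=1e-8):
    """Adaptive-ridge sketch for linear regression.

    Iteratively solves weighted ridge problems
        beta = argmin ||y - X beta||^2 + lam * sum_j w_j * beta_j^2,
    with the (assumed) weight update w_j = 1 / (beta_j^2 + delta^2),
    so that w_j * beta_j^2 approximates the L0 penalty 1{beta_j != 0}.
    """
    n, p = X.shape
    XtX = X.T @ X
    Xty = X.T @ y
    w = np.ones(p)                      # initial ridge weights
    beta = np.zeros(p)
    for _ in range(n_iter):
        # weighted ridge solution in closed form
        beta_new = np.linalg.solve(XtX + lam * np.diag(w), Xty)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
        # reweighting: large coefficients get small penalty weights
        w = 1.0 / (beta**2 + delta**2)
    # coefficients with w_j * beta_j^2 near 1 are kept, near 0 are dropped
    selected = (w * beta**2) > 0.5      # illustrative threshold
    return beta, selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 200, 10
    X = rng.standard_normal((n, p))
    true_beta = np.zeros(p)
    true_beta[:3] = [2.0, -1.5, 1.0]    # only 3 active coefficients
    y = X @ true_beta + 0.5 * rng.standard_normal(n)
    beta, selected = adaptive_ridge(X, y, lam=2.0)
    print("selected variables:", np.flatnonzero(selected))
```

Because the penalized objective stays smooth at every iteration, each step reduces to a standard (weighted) ridge solve, which is why generic smooth optimizers can be used in the non-Gaussian likelihood settings mentioned above.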