July 2007 (updated May 2008)
We consider the problem of learning gradients in the supervised setting where there are multiple, related tasks. Gradients provide a natural interpretation of the geometric structure of data and can assist in problems requiring variable selection and dimension reduction. By extending this idea to the multi-task learning (MTL) environment, we present methods for simultaneously learning the structure within each task and the structure shared across all tasks. Our methods are placed within the framework of Tikhonov regularization, providing (a) robustness to high-dimensional data, and (b) a mechanism for incorporating a priori knowledge of task (dis)similarity. We provide an implementation of multi-task gradient learning for classification and regression, and demonstrate the utility of our algorithms on simulated and real data.
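To give a concrete sense of the single-task building block, the following is a minimal sketch of gradient learning via locally weighted linear fits with Tikhonov (ridge) regularization, and of using the resulting gradient outer product matrix for variable selection. This is an illustrative simplification, not the paper's RKHS-based multi-task algorithm; the bandwidth `sigma`, regularization weight `lam`, and the toy data are assumptions for the example.

```python
import numpy as np

def estimate_gradients(X, y, sigma=1.0, lam=0.1):
    """Estimate gradients of the regression function at each sample
    via locally weighted linear fits with Tikhonov regularization.
    (Illustrative stand-in for the paper's RKHS gradient estimator.)"""
    n, d = X.shape
    grads = np.zeros((n, d))
    for i in range(n):
        diffs = X - X[i]                                      # (n, d) displacements
        w = np.exp(-np.sum(diffs**2, axis=1) / (2 * sigma**2))  # Gaussian weights
        # Tikhonov-regularized normal equations for the local linear slope:
        # (D^T W D + lam I) g = D^T W (y - y_i)
        G = diffs.T @ (diffs * w[:, None]) + lam * np.eye(d)
        b = diffs.T @ (w * (y - y[i]))
        grads[i] = np.linalg.solve(G, b)
    return grads

# Toy data: the response depends only on the first coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] + 0.01 * rng.normal(size=200)

grads = estimate_gradients(X, y)
# Gradient outer product matrix; its dominant directions identify
# the coordinates (or subspace) relevant for dimension reduction.
M = grads.T @ grads / len(X)
importance = np.sqrt(np.diag(M))
```

On the toy data, `importance` is largest for the first coordinate, reflecting that the gradient estimates concentrate on the single relevant variable; the multi-task extension in the paper couples such estimates across related tasks through the regularizer.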
Keywords: multi-task learning, dimension reduction, covariance estimation, inverse regression
The manuscript is available in PDF format.