## Page Not Found

Page not found. Your pixels are in another canvas.

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

This is a page not in the main menu.

This post will show up by default. To disable scheduling of future posts, edit `config.yml` and set `future: false`.

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.


Short description of portfolio item number 1

Short description of portfolio item number 2

**S. Gaïffas**

*Mathematical Methods of Statistics*, 2005

We consider nonparametric regression with random design. We want to recover the regression function at a point $x_0$ where the design density vanishes or explodes.

**S. Gaïffas**

*Statistics and Probability Letters*, 2007

In this paper, we study the estimation of a function based on noisy inhomogeneous data (the amount of data can vary on the estimation domain). We consider the model of regression with random design, where the design density is unknown.

**S. Gaïffas and G. Lecué**

*Electronic Journal of Statistics*, 2007

We want to recover the regression function in the single-index model. Using an aggregation algorithm with local polynomial estimators, we answer in particular the second part of Question 2 from Stone (1982) on the optimal convergence rate. The procedure constructed here has strong adaptation properties: it adapts both to the smoothness of the link function and to the unknown index.

**S. Gaïffas**

*ESAIM: Probability and Statistics*, 2007

We want to recover a signal based on noisy inhomogeneous data (the amount of data can vary strongly on the estimation domain). We model the data using nonparametric regression with random design, and we focus on the estimation of the regression at a fixed point $x_0$ with little or much data.

**S. Gaïffas and G. Lecué**

*arXiv preprint*, 2008

We give a general result concerning the rates of convergence of penalized empirical risk minimizers (PERM) in the regression model. Then, we consider the problem of agnostic learning of the regression, and give in this context an oracle inequality and a lower bound for PERM over a finite class.

**S. Gaïffas**

*Statistica Sinica*, 2009

The aim of this paper is to recover a signal based on inhomogeneous noisy data (the amount of data can vary strongly from one point to another). In particular, we focus on understanding the consequences of the inhomogeneity of the data on the accuracy of estimation. For that purpose, we consider the model of regression with a random design, and we work in the minimax framework.

**S. Gaïffas and A. Guilloux**

*arXiv preprint*, 2009

We consider the problem of statistical learning for the intensity of a counting process with covariates. In this context, we introduce an empirical risk, and prove risk bounds for the corresponding empirical risk minimizers. Then, we give an oracle inequality for the popular algorithm of aggregation with exponential weights.

**F. Comte, S. Gaïffas and A. Guilloux**

*Annales de l’Institut Henri Poincaré - Probabilités et Statistiques*, 2010

We propose in this work an original estimator of the conditional intensity of a marker-dependent counting process, that is, a counting process with covariates. We use model selection methods and provide a nonasymptotic bound for the risk of our estimator on a compact set.

**S. Gaïffas and G. Lecué**

*arXiv preprint*, 2011

This paper is about iteratively reweighted basis-pursuit algorithms for compressed sensing and matrix completion problems. In a first part, we give a theoretical explanation of the fact that reweighted basis pursuit can substantially improve upon basis pursuit for exact recovery in compressed sensing. We exhibit a condition linking the accuracy of the weights to the RIP and incoherency constants, which ensures exact recovery.

**S. Delattre and S. Gaïffas**

*Stochastic Processes and their Applications*, 2011

We consider the problem of adaptive estimation of the regression function in a framework where we replace ergodicity assumptions (such as independence or mixing) by another structural assumption on the model. Namely, we propose adaptive upper bounds for kernel estimators with data-driven bandwidth (Lepski’s selection rule) in a regression model where the noise is a martingale increment.

**S. Gaïffas and G. Lecué**

*IEEE Transactions on Information Theory*, 2011

We observe $(X_i, Y_i)$ where the $Y_i$ are real valued outputs and the $X_i$ are random matrices. We observe a new entry $X$ and we want to predict the output $Y$ associated with it.

**S. Gaïffas and G. Lecué**

*Journal of Machine Learning Research*, 2011

Given a finite set $F$ of functions and a learning sample, the aim of an aggregation procedure is to achieve a risk as close as possible to the risk of the best function in $F$. Up to now, optimal aggregation procedures have been convex combinations of all elements of $F$. In this paper, we prove that optimal aggregation procedures combining only two functions in $F$ exist.

**S. Gaïffas and A. Guilloux**

*Electronic Journal of Statistics*, 2012

We consider a general high-dimensional additive hazards model in a non-asymptotic setting, including regression for censored data. In this context, we consider a Lasso estimator with a fully data-driven $\ell_1$-penalization, which is tuned for the estimation problem at hand. We prove sharp oracle inequalities for this estimator.

**E. Richard, S. Gaïffas and N. Vayatis**

*NIPS*, 2012

In this paper, we consider the problem of link prediction in time-evolving graphs. We assume that certain graph features, such as the node degree, follow a vector autoregressive (VAR) model, and we propose to use this information to improve the accuracy of prediction. Our strategy involves a joint optimization procedure over the space of adjacency matrices and VAR matrices which takes into account both sparsity and low rank properties of the matrices.

**S. Gaïffas and B. Michel**

*arXiv preprint*, 2014

This paper is about variable selection, clustering and estimation in an unsupervised high-dimensional setting. Our approach is based on fitting constrained Gaussian mixture models, where we learn the number of clusters $K$ and the set of relevant variables $S$ using a generalized Bayesian posterior with a sparsity inducing prior.

**E. Richard, S. Gaïffas and N. Vayatis**

*Journal of Machine Learning Research*, 2014

In this paper, we consider the problem of link prediction in time-evolving graphs. We assume that certain graph features, such as the node degree, follow a vector autoregressive (VAR) model, and we propose to use this information to improve the accuracy of prediction. Our strategy involves a joint optimization procedure over the space of adjacency matrices and VAR matrices.

**M. Z. Alaya, S. Gaïffas and A. Guilloux**

*IEEE Transactions on Information Theory*, 2015

We consider the problem of learning the inhomogeneous intensity of a counting process, under a sparse segmentation assumption. We introduce a weighted total-variation penalization, using data-driven weights that correctly scale the penalization along the observation interval. We prove that this leads to a sharp tuning of the convex relaxation of the segmentation prior, by stating oracle inequalities with fast rates of convergence, and consistency for change-points detection.

**E. Bacry, S. Gaïffas, I. Mastromatteo and J.-F. Muzy**

*Journal of Physics A: Mathematical and Theoretical*, 2016

We propose a fast and efficient estimation method that is able to accurately recover the parameters of a $d$-dimensional Hawkes point-process from a set of observations. We exploit a mean-field approximation that is valid when the fluctuations of the stochastic intensity are small. We show that this is notably the case in situations when interactions are sufficiently weak, when the dimension of the system is high or when the fluctuations are self-averaging due to the large number of past events they involve.

**M. Achab, A. Guilloux, S. Gaïffas and E. Bacry**

*International Conference of Monte Carlo Methods and Applications*, 2017

We introduce a doubly stochastic proximal gradient algorithm for optimizing a finite average of smooth convex functions, whose gradients depend on numerically expensive expectations. Indeed, the effectiveness of SGD-like algorithms relies on the assumption that the computation of a subfunction’s gradient is cheap compared to the computation of the total function’s gradient. This is true in the Empirical Risk Minimization (ERM) setting, but can be false when each subfunction depends on a sequence of examples. Our main motivation is the acceleration of the optimization of the regularized Cox partial-likelihood (the core model in survival analysis), but other settings can be considered as well.

**S. Gaïffas and O. Klopp**

*Statistica Sinica*, 2017

Assume that we observe a small set of entries or linear combinations of entries of an unknown matrix $A_0$ corrupted by noise. We propose a new method for estimating $A_0$ that does not rely on the knowledge or on an estimation of the standard deviation of the noise $\sigma$. Our estimator achieves, up to a logarithmic factor, optimal rates of convergence under Frobenius risk and, thus, has the same prediction performance as previously proposed estimators that rely on the knowledge of $\sigma$. Some numerical experiments show the benefits of this approach.

**E. Bacry, S. Gaïffas and J-F Muzy**

*Probability Theory and Related Fields*, 2017

This paper gives new concentration inequalities for the spectral norm of a wide class of matrix martingales in continuous time. These results extend previously established Freedman and Bernstein inequalities for series of random matrices to the class of continuous time processes. Our analysis relies on a new supermartingale property of the trace exponential proved within the framework of stochastic calculus. We also provide several examples illustrating that our results easily recover several previously obtained sharp bounds for discrete-time matrix martingales.

**M. Achab, E. Bacry, S. Gaïffas, I. Mastromatteo and J.-F. Muzy**

*ICML*, 2017

We design a new nonparametric method that allows one to estimate the matrix of integrated kernels of a multivariate Hawkes process. This matrix not only encodes the mutual influences of each node of the process, but also disentangles the causality relationships between them. Our approach is the first that leads to an estimation of this matrix without any parametric modeling and estimation of the kernels themselves.

**J. Mourtada, S. Gaïffas and E. Scornet**

*NIPS*, 2017

We establish the consistency of Mondrian Forests, a randomized classification algorithm that can be implemented online. First, we amend the original Mondrian Forest algorithm, which uses a fixed lifetime parameter; the fact that this parameter is fixed hinders the statistical consistency of the original procedure.
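
As background for the Mondrian process underlying these forests, here is a minimal NumPy sketch of how a single Mondrian split is drawn: the split time is exponential with rate equal to the cell's total side length, the split dimension is chosen with probability proportional to the side length, and the split location is uniform. This is an illustration under standard definitions of the Mondrian process, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mondrian_split(lower, upper):
    """Draw one split of a Mondrian process on the axis-aligned box [lower, upper].

    Returns the time to the split, the split dimension, and the split location.
    """
    lengths = upper - lower
    # The time until the cell is split is Exponential(rate = total side length)
    split_time = rng.exponential(1.0 / lengths.sum())
    # Split dimension chosen proportionally to the side lengths
    dim = rng.choice(len(lengths), p=lengths / lengths.sum())
    # Split location uniform along the chosen side
    loc = rng.uniform(lower[dim], upper[dim])
    return split_time, dim, loc

# On the unit square, both dimensions are equally likely to be cut
t, d, x = sample_mondrian_split(np.array([0.0, 0.0]), np.array([1.0, 1.0]))
```

Recursing this step on the two child cells (until the accumulated split times exceed the lifetime parameter) yields one Mondrian partition.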

**A. Virouleau, A. Guilloux, S. Gaïffas and M. Bogdan**

*arXiv preprint*, 2017

The problems of outlier detection and robust regression in a high-dimensional setting are fundamental in statistics, and have numerous applications. Following a recent set of works providing methods for simultaneous robust regression and outlier detection, we consider in this paper a model of linear regression with individual intercepts, in a high-dimensional setting. We introduce a new procedure for simultaneous estimation of the linear regression coefficients and intercepts, using two dedicated sorted-$\ell_1$ penalizations, also called SLOPE.

**S. Bussy, R. Veil, V. Looten, A Burgun, S. Gaïffas, A. Guilloux, B. Ranque, A.-S. Jannot**

*conférence 'L'Intelligence Artificielle en Santé'*, 2018

Building an artificial intelligence (AI) algorithm from real-world data involves integrating and transforming a very large number of covariates (high dimension), and then building a learning model. In this article, we detail the different steps of this process through an illustration on care data reused to predict hospital readmission after a vaso-occlusive crisis in patients with sickle cell disease.

**Emmanuel Bacry, Stéphane Gaïffas**

*Livre IA et Santé : présent et futur, Académie Nationale de Médecine*, 2018

The Caisse Nationale d’Assurance Maladie (Cnam) and the Ecole Polytechnique signed at the end of 2014 a research partnership agreement for a period of 3 years, which was renewed for 3 years at the beginning of 2018. The objective of this partnership is to develop big data technologies and machine learning based on data from the Système National Inter-Régimes de l’Assurance Maladie (Sniiram).

**S. Bussy, A. Guilloux, S. Gaïffas and A.-S. Jannot**

*Statistical Methods in Medical Research*, 2018

We introduce a supervised learning mixture model for censored durations (C-mix) to simultaneously detect subgroups of patients with different prognosis and order them based on their risk. Our method is applicable in a high-dimensional setting, i.e. with a large number of biomedical covariates. Indeed, we penalize the negative log-likelihood by the Elastic-Net, which leads to a sparse parameterization of the model and automatically pinpoints the relevant covariates for the survival prediction.

**M. Achab, E. Bacry, S. Gaïffas, I. Mastromatteo and J.-F. Muzy**

*Journal of Machine Learning Research*, 2018

We design a new nonparametric method that allows one to estimate the matrix of integrated kernels of a multivariate Hawkes process. This matrix not only encodes the mutual influences of each node of the process, but also disentangles the causality relationships between them. Our approach is the first that leads to an estimation of this matrix *without any parametric modeling and estimation of the kernels themselves*.

**E. Bacry, M. Bompaire, P. Deegan, S. Gaïffas and S. V. Poulsen**

*Journal of Machine Learning Research*, 2018

`tick` is a statistical learning library for `Python 3`, with a particular emphasis on time-dependent models, such as point processes, and tools for generalized linear models and survival analysis. The core of the library is an optimization module providing model computational classes, solvers and proximal operators for regularization.
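
The model/solver/proximal-operator decomposition at the core of `tick` can be illustrated with a plain NumPy sketch of proximal gradient descent (ISTA) for the Lasso. This is a generic illustration of the idea, not `tick`'s actual API; the function names, step size and penalty level below are made up for the example.

```python
import numpy as np

def prox_l1(v, threshold):
    """Soft-thresholding: the proximal operator of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - threshold, 0.0)

def ista(X, y, lam, step, n_iter=200):
    """Proximal gradient for min_w ||y - Xw||^2 / (2n) + lam * ||w||_1.

    Each iteration combines a gradient step on the smooth least-squares
    model with the proximal operator of the penalty.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n     # gradient of the model part
        w = prox_l1(w - step * grad, step * lam)  # prox of the penalty
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ w_true
w_hat = ista(X, y, lam=0.1, step=0.1)
```

The separation between the smooth model (supplying gradients) and the penalty (supplying a prox) is the design choice that lets a library mix and match models, solvers and regularizers.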

**S. Gaïffas and G. Matulewicz**

*Journal of Multivariate Analysis*, 2018

Given the observation of a high-dimensional Ornstein–Uhlenbeck (OU) process in continuous time, we are interested in inference on the drift parameter under a row-sparsity assumption. Towards that aim, we consider the negative log-likelihood of the process, penalized by an $\ell_1$-penalization (Lasso and Adaptive Lasso). We provide both finite and large-sample results for this procedure, by means of a sharp oracle inequality, and a limit theorem in the long-time asymptotics, including asymptotic consistency for variable selection.

**A. Luquiens, D. Vendryes, H.-J. Aubin, A. Benyamina, S. Gaiffas, E. Bacry**

*BMJ Open*, 2018

Self-exclusion is one of the main responsible gambling tools. The aim of this study was to assess the reliability of self-exclusion motives in self-reports to the gambling service provider.

**Martin Bompaire, Stéphane Gaïffas and Emmanuel Bacry**

*arXiv preprint*, 2018

The minimization of convex objectives coming from linear supervised learning problems, such as penalized generalized linear models, can be formulated as finite sums of convex functions. For such problems, a large set of stochastic first-order solvers based on the idea of variance reduction are available and combine both computational efficiency and sound theoretical guarantees (linear convergence rates).

**E. Bacry, M. Bompaire, S. Gaïffas and J-F. Muzy**

*arXiv preprint*, 2018

We consider the problem of unveiling the implicit network structure of node interactions (such as user interactions in a social network), based only on high-frequency timestamps. Our inference is based on the minimization of the least-squares loss associated with a multivariate Hawkes model, penalized by $\ell_1$ and trace norm of the interaction tensor. We provide a first theoretical analysis for this problem, that includes sparsity and low-rank inducing penalizations.

**M. Z. Alaya, S. Bussy, S. Gaïffas and A. Guilloux**

*Journal of Machine Learning Research*, 2019

This paper deals with the problem of large-scale linear supervised learning in settings where a large number of continuous features are available. We propose to combine the well-known trick of one-hot encoding of continuous features with a new penalization called binarsity. In each group of binary features coming from the one-hot encoding of a single raw continuous feature, this penalization uses total-variation regularization together with an extra linear constraint.
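
The one-hot encoding trick for continuous features can be sketched in a few lines of NumPy. This is a hedged illustration only: the quantile binning and the choice of `n_bins` here are assumptions of the example, not the paper's exact construction.

```python
import numpy as np

def one_hot_binning(x, n_bins=4):
    """One-hot encode a continuous feature via quantile binning.

    Each raw value is mapped to a group of n_bins binary columns, exactly
    one of which is active per sample; a penalty such as binarsity then
    acts within each such group of columns.
    """
    # Interior quantile cut points (n_bins - 1 of them)
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(x, edges)    # bin index in {0, ..., n_bins - 1}
    return np.eye(n_bins)[bins]     # binary matrix of shape (len(x), n_bins)

x = np.array([0.1, 0.4, 0.35, 0.8, 0.95, 0.2, 0.6, 0.7])
B = one_hot_binning(x, n_bins=4)
```

Stacking such groups for all raw features gives the large sparse binary design matrix on which the within-group total-variation penalty operates.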

**Simon Bussy, Raphaël Veil, Vincent Looten, Anita Burgun, Stéphane Gaïffas, Agathe Guilloux, Brigitte Ranque and Anne-Sophie Jannot**

*BMC Medical Research Methodology*, 2019

Choosing the best-performing method in terms of outcome prediction or variable selection is a recurring problem in prognosis studies, leading to many publications on method comparison. But some aspects have received little attention.

**M. Morel, E. Bacry, S. Gaïffas, A. Guilloux and F. Leroy**

*Biostatistics*, 2019

With the increased availability of large electronic health records databases comes the chance of enhancing health risks screening. Most post-marketing detection of adverse drug reaction (ADR) relies on physicians’ spontaneous reports, leading to under-reporting. To take up this challenge, we develop a scalable model to estimate the effect of multiple longitudinal features (drug exposures) on a rare longitudinal outcome.

**Jaouad Mourtada, Stéphane Gaïffas and Erwan Scornet**

*to appear in the Annals of Statistics*, 2019

Introduced by Breiman, Random Forests are widely used classification and regression algorithms. Though initially designed as batch algorithms, several variants have been proposed to handle online learning. One particular instance of such forests is the Mondrian Forest, whose trees are built using the so-called Mondrian process, therefore allowing to easily update their construction in a streaming fashion. In this paper, we provide a thorough theoretical study of Mondrian Forests in a batch learning setting, based on new results about Mondrian partitions.

**Jaouad Mourtada, Stéphane Gaïffas**

*Journal of Machine Learning Research*, 2019

In this paper, we study the behavior of the Hedge algorithm in the online stochastic setting. We prove that anytime Hedge with decreasing learning rate, one of the simplest algorithms for the problem of prediction with expert advice, is remarkably both worst-case optimal and adaptive to the easier stochastic and adversarial-with-a-gap regimes.
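
Anytime Hedge with decreasing learning rate is simple enough to sketch directly. The schedule $\eta_t = \sqrt{\ln(K)/t}$ used below is the standard anytime choice; the simulated losses are made up for the example, and this sketch is an illustration rather than the paper's exact setup.

```python
import numpy as np

def hedge(losses):
    """Anytime Hedge with decreasing learning rate eta_t = sqrt(ln(K) / t).

    losses: array of shape (T, K) giving each of the K experts' loss per round.
    Returns the (T, K) array of weight vectors played by the algorithm.
    """
    T, K = losses.shape
    cum_loss = np.zeros(K)
    weights = []
    for t in range(1, T + 1):
        eta = np.sqrt(np.log(K) / t)
        # Exponential weights on cumulative losses, shifted for stability
        w = np.exp(-eta * (cum_loss - cum_loss.min()))
        w /= w.sum()
        weights.append(w)
        cum_loss += losses[t - 1]
    return np.array(weights)

# Expert 0 is consistently better, so its weight should come to dominate
rng = np.random.default_rng(0)
losses = rng.uniform(size=(500, 3))
losses[:, 0] *= 0.5
W = hedge(losses)
```

In a stochastic instance like this one, the gap in cumulative losses grows linearly, so the weight on the best expert tends to one; the same decreasing schedule also keeps the worst-case regret guarantee.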

**J. Mourtada, S. Gaïffas and E. Scornet**

*arXiv preprint*, 2019

Random Forests (RF) is one of the algorithms of choice in many supervised learning applications, be it classification or regression. The appeal of such methods comes from a combination of several characteristics: a remarkable accuracy in a variety of tasks, a small number of parameters to tune, robustness with respect to feature scaling, a reasonable computational cost for training and prediction, and their suitability in high-dimensional settings.

**E. Bacry, S. Gaïffas, F. Leroy, M. Morel, D. P. Nguyen, Y. Sebiat and D. Sun**

*arXiv preprint*, 2019

This article introduces SCALPEL3, a scalable open-source framework for studies involving Large Observational Databases (LODs). Its design eases medical observational studies thanks to abstractions allowing concept extraction, high-level cohort manipulation, and production of data formats compatible with machine learning libraries. SCALPEL3 has successfully been used on the SNDS database, a huge healthcare claims database that handles the reimbursement of almost all French citizens.

**Univ. Paris Diderot, Master M2MO**, 2019

This course focuses on the methodology underlying supervised and unsupervised learning, with a particular emphasis on the mathematical formulation of algorithms, and the way they can be implemented and used in practice. The course will describe for instance some necessary tools from optimization theory, and explain how to use them for machine learning. Numerical illustrations and applications to datasets will be given for the methods studied in the course.

**Ecole Normale Supérieure**, 2019

The aim of this course is to study statistical methods and their properties from a theoretical point of view. Depending on everyone's tastes and level of keyboard allergy, we may linger more or less deeply on application examples. In this course we will try to offer 70% classical and unavoidable content for a statistics course, and 30% recent results and open problems.

**CNRS Formation**, 2020

Machine learning is a scientific discipline concerned with the design and development of algorithms that allow computers to learn to make decisions from data. The set of possible data feeding a learning task can be very large and varied, which makes modeling and prior assumptions critical for the design of relevant algorithms. This training session focuses on the methodology underlying supervised learning, with a particular emphasis on the mathematical formulation of algorithms and the way they can be implemented and used in practice.