Prof. Dr. Alexandra Carpentier

Faculty of Mathematics (FMA)
Institute for Mathematical Stochastics (IMST)
Universitätsplatz 2, 39106 Magdeburg, G18-408
Projects

Current projects

Mathematical Complexity Reduction
Duration: 01.04.2017 to 31.03.2026

In the context of the proposed RTG we understand complexity as an intrinsic property that makes it difficult to determine an appropriate mathematical representation of a real-world problem, to assess the fundamental structures and properties of mathematical objects, and to algorithmically solve a given mathematical problem. By complexity reduction we refer to all approaches that help to overcome these difficulties in a systematic way and to achieve the aforementioned goals more efficiently.

For many mathematical tasks, approximation and dimension reduction are the most important tools to obtain a simpler representation and computational speedups. We see complexity reduction in a more general way and will also, e.g., investigate liftings to higher-dimensional spaces and consider the costs of data observation. Our research goals are the development of cross-disciplinary mathematical theory and methods for complexity reduction and the identification of relevant problem classes and effective exploitation of their structures.

We aim at a comprehensive teaching and research program based on geometric, algebraic, stochastic, and analytic approaches, complemented by efficient numerical and computational implementations. In order to ensure the success of our doctoral students, they will participate in a tailored structured study program. It will contain training units in the form of compact courses and weekly seminars, and encourage early integration into the scientific community and networking. We expect that the RTG will also serve as a catalyst for a dissemination of these successful practices within the Faculty of Mathematics and improve the gender situation.

Complexity reduction is a fundamental aspect of the scientific backgrounds of the principal investigators. The combination of expertise from different areas of mathematics gives the RTG a unique profile, with high chances for scientific breakthroughs. The RTG will be linked to two faculties, a Max Planck Institute, and several national and international research activities in different scientific communities.

The students of the RTG will be trained to become proficient in a breadth of mathematical methods, and thus be ready to cope with challenging tasks, in particular in cross-disciplinary research teams. We expect an impact both in terms of research successes and in the education of the next generation of leading scientists in academia and industry.

Completed projects

Risk Estimation for Brain-Computer Interfaces
Duration: 01.06.2020 to 30.06.2022

The project RE-BCI was awarded at the beginning of 2020 by the Land Sachsen-Anhalt, more precisely within the Sachsen-Anhalt WISSENSCHAFT Spitzenforschung/Synergien programme. The objective of RE-BCI is to prepare preliminary results supporting the application of BCI (Brain-Computer Interfaces, i.e. a technology for connecting a human user with a computer through the electrical impulses emitted by her/his brain) to shared-authority situations.

Participation in the SFB 1294 on Data Assimilation in Potsdam
Duration: 01.11.2018 to 30.11.2021

The group is also funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the SFB 1294 "Data Assimilation - The seamless integration of data and models", on Project A03 together with Prof. Gilles Blanchard.
This project is concerned with the problem of learning sequentially, adaptively, and with partial information about an uncertain environment. In this setting, the learner collects the data sequentially and actively; the data is not available beforehand in batch form. The process is as follows: at each time t, the learner chooses an action and receives a data point that depends on the chosen action. The learner collects data in order to learn the system, but also to achieve a goal (characterized by an objective function) that depends on the application. In this project, we aim to solve this problem under general objective functions and under dependence in the data-collection process, exploring variations of the so-called bandit setting, which corresponds to this problem with a specific objective function.
As a motivating example, consider the problem of sequential and active attention detection through an eye tracker. A human user is looking at a screen, and the objective of an automated monitor (learner) is to identify, through an eye tracker, zones of this screen where the user is not paying sufficient attention. In order to do so, the monitor is allowed at each time t to flash a small zone a_t of the screen, e.g. light a pixel (action), and the eye tracker detects through the eye movement whether the user has observed this flash. Ideally the monitor should focus on these difficult zones and flash more often there (i.e. choose more often the specific actions corresponding to less well identified zones). Therefore, sequential and adaptive learning methods are expected to improve the performance of the monitor.
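To make the sequential protocol concrete, here is a minimal sketch of such a monitor. All names, the zone model, and the simulated eye-tracker response are hypothetical, and the UCB-style allocation is only one possible instantiation of the bandit setting described above, not the project's actual algorithm:

```python
# At each time t the monitor (learner) picks a screen zone to flash (action a_t)
# and observes whether the user noticed the flash. Zones with a high estimated
# "miss" probability are flashed more often via an optimistic (UCB-style) index.
import math
import random

random.seed(0)

true_miss_prob = [0.05, 0.10, 0.40, 0.70]   # unknown per-zone probability of a missed flash
n_zones = len(true_miss_prob)
counts = [0] * n_zones                      # flashes sent to each zone so far
misses = [0] * n_zones                      # flashes the user did not notice

T = 2000
for t in range(1, T + 1):
    if t <= n_zones:
        zone = t - 1                        # flash every zone once to initialise
    else:
        # optimistic index: empirical miss rate plus an exploration bonus
        zone = max(range(n_zones),
                   key=lambda z: misses[z] / counts[z]
                   + math.sqrt(2 * math.log(t) / counts[z]))
    missed = random.random() < true_miss_prob[zone]   # simulated eye-tracker feedback
    counts[zone] += 1
    misses[zone] += missed

print("flashes per zone:", counts)          # low-attention zones receive most flashes
```

In the project itself the objective function, and hence the sampling rule, can differ from this regret-style allocation.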

Minimax testing rates in linear regression
Duration: 01.01.2019 to 31.10.2021

In this project we focus on finding the minimax testing rates, in l_2 norm, for the linear regression model. We also investigate the problem of optimally estimating the l_2 norm of the parameter. This closes several gaps in the minimax theory for linear regression.
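As background, a standard formulation of such a testing problem (given here only for orientation, since the abstract does not spell out the exact assumptions) is the Gaussian linear model

\[
Y = X\theta + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2 I_n),
\]

with the hypotheses

\[
H_0 : \theta = 0 \qquad \text{against} \qquad H_1 : \|\theta\|_2 \ge \rho ,
\]

where the minimax testing rate is the smallest separation \(\rho\) at which some test achieves small type I and type II errors uniformly over the alternative.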

Participation in the GK DAEDALUS 2433 with TU Berlin
Duration: 01.09.2018 to 31.10.2021

The main goal of DAEDALUS is the analysis of the interplay between the incorporation of data and differential equation-based modeling, which is one of the key problems in model-based research of the 21st century. DAEDALUS focuses both on theoretical insights and on applications in the life sciences (brain-computer interfaces and biochemistry) as well as in fluid dynamics. The projects cover a scientific range from machine learning, the mathematical theory of model reduction, and uncertainty quantification to the respective applications in turbulence theory and the simulation of complex nonlinear flows as well as of molecular dynamics in chemical and biological systems. In our group, we cover the mathematical statistics and machine learning aspects.

This project is in the context of Daedalus, and is concerned with uncertainty quantification in complex cases.

Minimax change point detection in high dimension
Duration: 01.01.2019 to 01.10.2021

The objective is to establish the minimax rates for sparse change point estimation in high dimension. In particular, we want to investigate the intermediate regimes in a refined way. Joint project with Emmanuel Pilliat and Dr. Nicolas Verzelen.

One-sample local test in the graph model
Duration: 01.01.2019 to 01.10.2021

In this project we aim at finding minimax rates for the problem of local testing in the graph model, in l_q norm. We focus particularly on local rates, and also address the multinomial testing model, which can be seen as a special case.

Participation in the GRK 2297 MathCoRe
Duration: 01.01.2019 to 30.09.2021

The objective of this GRK is to investigate the problem of complexity reduction across the different areas of mathematics. In our group, we bring to this project expertise in the field of sequential learning, in order to reduce the complexity of given problems by adapting the sampling strategies.

Adaptive two-sample test in the density setting
Duration: 01.11.2017 to 31.10.2020

We consider the problem of two-sample testing for (not necessarily uniform) densities. While minimax signal detection in the case where the null-hypothesis density is uniform is well understood, recent works in the case of multinomial distributions have highlighted the improvement in the minimax rate that can be obtained when the null-hypothesis density is non-uniform. We want to study this problem in the two-sample testing case, which is significantly more complex, and extend it to smooth densities.
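For orientation, a generic and purely schematic formulation of the two-sample problem is: given samples \(X_1, \dots, X_n \sim p\) and \(Y_1, \dots, Y_n \sim q\), test

\[
H_0 : p = q \qquad \text{against} \qquad H_1 : \|p - q\|_1 \ge \rho ,
\]

where the minimax separation rate \(\rho\) may depend locally on the (possibly non-uniform) densities themselves, which is the local phenomenon alluded to above.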

MuSyAD on Anomaly Detection
Duration: 01.10.2017 to 31.08.2020

Anomaly detection is an interdisciplinary domain, borrowing elements from mathematics, computer science, and engineering. The main aim is to develop efficient techniques for detecting anomalous behaviour of systems. In the classical scenario a monitor receives data from a system and compares this data to a reference system with a single normal behaviour. Ideally no strong assumptions are made on the nature of anomalous behaviours, so the problem of anomaly detection is in essence a nonparametric problem. Here I propose to study a more complex scenario, which will be referred to as multi-system anomaly detection. In this setting, reference systems can have a variety of normal behaviours; moreover, there are many systems under the monitor's surveillance, and the monitor must allocate its resources wisely among them. In this situation new theoretical and computational challenges arise.

The overall objective of this proposal is to find efficient methods to solve the problem of multi-system anomaly detection. This aim will be reached by addressing the following sub-objectives. First, we will generalise the theoretical framework of anomaly detection to the broader setting of multi-system anomaly detection. Second, multi-system anomaly detection methods will be developed, by taking ideas from the nonparametric testing field and applying them to the new framework. Third, we will study optimal monitoring strategies for cases where the multiple systems cannot be monitored simultaneously. Here, it is important that the monitor allocates its resources among the systems in a way that is as efficient as possible. To this end, sequential and adaptive sampling methods that target the anomaly detection problem will be designed. Since anomaly detection is a nonparametric problem, elements of the theory of nonparametric confidence sets will be used. Finally, the newly developed methods will be applied to practical problems: a methodological example in extreme value theory, an econometric application for speculative bubble detection, and two applications in a brain-computer interface framework.

Project on Data Assimilation
Duration: 01.10.2017 to 31.08.2020

This project is concerned with the problem of learning sequentially, adaptively, and with partial information about an uncertain environment. In this setting, the learner collects the data sequentially and actively; the data is not available beforehand in batch form. The process is as follows: at each time t, the learner chooses an action and receives a data point that depends on the chosen action. The learner collects data in order to learn the system, but also to achieve a goal (characterized by an objective function) that depends on the application. In this project, we aim to solve this problem under general objective functions and under dependence in the data-collection process, exploring variations of the so-called bandit setting, which corresponds to this problem with a specific objective function.

As a motivating example, consider the problem of sequential and active attention detection through an eye tracker. A human user is looking at a screen, and the objective of an automated monitor (learner) is to identify, through an eye tracker, zones of this screen where the user is not paying sufficient attention. In order to do so, the monitor is allowed at each time t to flash a small zone a_t of the screen, e.g. light a pixel (action), and the eye tracker detects through the eye movement whether the user has observed this flash. Ideally the monitor should focus on these difficult zones and flash more often there (i.e. choose more often the specific actions corresponding to less well identified zones). Therefore, sequential and adaptive learning methods are expected to improve the performance of the monitor.

Active learning for matrix completion
Duration: 01.10.2017 to 14.06.2019

Matrix completion is an essential problem in modern machine learning, as it is important, e.g., for the calibration of recommendation systems. We consider the problem of matrix completion in the setting where the learner can choose where to sample. In this setting, it can be of interest to target more specifically parts of the matrix where the complexity turns out to be high (higher local rank), where the knowledge is limited (few sampled points), or where the noise is high. This project first considers the problem of active learning for matrix completion when the matrix can be subdivided into block submatrices of small, known ranks, and then the more general case where such a decomposition is not available.
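As an illustration of the first setting (a known block partition), the following toy sketch allocates a sampling budget across blocks according to a rough complexity proxy. The names, the stable-rank score, and the allocation rule are all hypothetical and are not the project's actual procedure:

```python
# Toy active sampling over a known block partition: each round, a block's
# "complexity" is scored by the stable rank ||A||_F^2 / ||A||_2^2 of a rescaled
# zero-filled estimate built from the entries observed so far, and the next
# batch of samples is allocated proportionally to these scores.
import numpy as np

rng = np.random.default_rng(0)

def stable_rank(a):
    """A soft proxy for the rank of a: ||a||_F^2 / ||a||_2^2."""
    s = np.linalg.svd(a, compute_uv=False)
    return float(np.sum(s ** 2) / (s[0] ** 2 + 1e-12))

# Two 30x30 blocks: rank 1 (easy) and rank 10 (harder).
easy = np.outer(rng.normal(size=30), rng.normal(size=30))
hard = rng.normal(size=(30, 10)) @ rng.normal(size=(10, 30))
blocks = [easy, hard]

observed = [np.zeros_like(b) for b in blocks]            # zero-filled observations
seen = [np.zeros(b.shape, dtype=bool) for b in blocks]   # which entries were sampled

def sample(i, n):
    """Reveal n uniformly chosen entries of block i (with replacement)."""
    rows = rng.integers(0, blocks[i].shape[0], size=n)
    cols = rng.integers(0, blocks[i].shape[1], size=n)
    observed[i][rows, cols] = blocks[i][rows, cols]
    seen[i][rows, cols] = True

budget_per_round, n_rounds = 120, 6
for r in range(n_rounds):
    if r == 0:
        scores = np.ones(len(blocks))                    # first round: explore uniformly
    else:
        # rescale by the empirical sampling rate so blocks remain roughly comparable
        scores = np.array([stable_rank(observed[i] / max(seen[i].mean(), 1e-12))
                           for i in range(len(blocks))])
    alloc = np.maximum(budget_per_round * scores / scores.sum(), 1).astype(int)
    for i, n in enumerate(alloc):
        sample(i, int(n))

print("fraction of entries sampled per block:", [round(m.mean(), 2) for m in seen])
```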

Smoothness testing in the Sobolev sense
Duration: 01.10.2017 to 14.06.2019

We want to develop a test to determine whether a function lying in a fixed L2-Sobolev-type ball of smoothness t, and generating a noisy signal, is in fact of a given smoothness s larger than t or not. While it is impossible to construct a uniformly consistent test for this problem over all functions of smoothness t, it becomes possible if we remove a sufficiently large region of the set of functions of smoothness t. The functions that we remove are functions of smoothness strictly smaller than s, but that are very close to s-smooth functions. This problem has been considered in the case of specific Besov bodies, where it is easier, and we plan to extend it to the more usual Sobolev ellipsoids.
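Schematically (a generic formulation for orientation, not the project's exact assumptions), writing \(\mathcal{S}^r(R)\) for an L_2-Sobolev-type ball of smoothness r and radius R, the test based on a noisy observation of f is

\[
H_0 : f \in \mathcal{S}^s(R) \qquad \text{against} \qquad H_1 : f \in \mathcal{S}^t(R) \ \text{with} \ \inf_{g \in \mathcal{S}^s(R)} \|f - g\|_{L_2} \ge \rho ,
\]

with t < s; removing the functions with separation smaller than \(\rho\) corresponds to the "sufficiently large region" mentioned above that makes a uniformly consistent test possible.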

Publications

2022

Peer-reviewed journal article

Local minimax rates for closeness testing of discrete distributions

Lam-Weil, Joseph; Carpentier, Alexandra; Sriperumbudur, Bharath K.

In: Bernoulli - Aarhus, Vol. 28 (2022), 2, pp. 1179-1197

2021

Peer-reviewed journal article

Optimal sparsity testing in linear regression model

Carpentier, Alexandra; Verzelen, Nicolas

In: Bernoulli: official journal of the Bernoulli Society for Mathematical Statistics and Probability - Aarhus, Vol. 27 (2021), 2, pp. 727-750

2020

Peer-reviewed journal article

Two-sample hypothesis testing for inhomogeneous random graphs

Ghoshdastidar, Debarghya; Gutzeit, Maurilio; Carpentier, Alexandra; Luxburg, Ulrike

In: The annals of statistics: an official journal of the Institute of Mathematical Statistics - Hayward, Calif.: IMS Business Off., Vol. 48 (2020), 4, pp. 2208-2229

Non-peer-reviewed journal article

Linear bandits with stochastic delayed feedback

Vernade, Claire; Carpentier, Alexandra; Lattimore, Tor; Zappella, Giovanni; Ermis, Beyza; Brueckner, Michael

In: De.arxiv.org - [S.l.]: Arxiv.org, 2020, article 1807.02089

Non-peer-reviewed journal article

The influence of shape constraints on the thresholding bandit problem

Cheshire, James; Menard, Pierre; Carpentier, Alexandra

In: De.arxiv.org - [S.l.]: Arxiv.org, 2020, article 2006.10006

Non-peer-reviewed journal article

Stochastic bandits with arm-dependent delays

Manegueu, Anne Gael; Vernade, Claire; Carpentier, Alexandra; Valko, Michal

In: De.arxiv.org - [S.l.]: Arxiv.org, 2020, article 2006.10459, 19 pages

2019

Peer-reviewed journal article

Adaptive estimation of the sparsity in the Gaussian vector model

Carpentier, Alexandra; Verzelen, Nicolas

In: The annals of statistics - Hayward, Calif.: IMS Business Off., Vol. 47 (2019), 1, pp. 93-126

Peer-reviewed journal article

Minimax rate of testing in sparse linear regression

Carpentier, Alexandra; Collier, Oliver; Comminges, Laetitia; Tsybakov, Aleksandr Borisovich; Wang, Yu

In: Automation and remote control - Dordrecht [et al.]: Springer Science + Business Media B.V, Vol. 80 (2019), 10, pp. 1817-1834

Dissertation

Topics in statistical minimax hypothesis testing

Gutzeit, Maurilio; Carpentier, Alexandra

In: Magdeburg, 2019, 99 pages, 1 diagram, 30 cm [bibliography: pages 95-99]

Article in conference proceedings

A minimax near-optimal algorithm for adaptive rejection sampling

Achddou, Juliette; Lam, Joseph; Carpentier, Alexandra; Blanchard, Gilles

In: Algorithmic Learning Theory - PMLR; Garivier, Aurélien . - 2019, pp. 94-126 - (Proceedings of Machine Learning Research; 98)

Article in conference proceedings

Rotting bandits are no harder than stochastic ones

Seznec, Julien; Locatelli, Andrea; Carpentier, Alexandra; Lazaric, Alessandro; Valko, Michal

In: The 22nd International Conference on Artificial Intelligence and Statistics - PMLR; Chaudhuri, Kamalika . - 2019, pp. 2564-2572 - (Proceedings of Machine Learning Research; 89)

Article in conference proceedings

Active multiple matrix completion with adaptive confidence sets

Locatelli, Andrea; Carpentier, Alexandra; Valko, Michal

In: The 22nd International Conference on Artificial Intelligence and Statistics - PMLR, 2019; Chaudhuri, Kamalika . - 2019, pp. 1783-1791 - (Proceedings of Machine Learning Research; 89) [Conference: 22nd International Conference on Artificial Intelligence and Statistics, Naha, Okinawa, Japan, 16-18 April 2019]

Non-peer-reviewed journal article

Two-sample hypothesis testing for inhomogeneous random graphs

Ghoshdastidar, Debarghya; Gutzeit, Maurilio; Carpentier, Alexandra; Luxburg, Ulrike

In: De.arxiv.org - [S.l.]: Arxiv.org, 2019, article 1707.00833, 54 pages

Non-peer-reviewed journal article

Restless dependent bandits with fading memory

Zadorozhnyi, Oleksandr; Blanchard, Gilles; Carpentier, Alexandra

In: De.arxiv.org - [S.l.]: Arxiv.org, 2019, article 1906.10454, 30 pages

Non-peer-reviewed journal article

Local minimax rates for closeness testing of discrete distributions

Lam, Joseph; Carpentier, Alexandra; Sriperumbudur, Bharath K.

In: De.arxiv.org - [S.l.]: Arxiv.org, 2019, article 1902.01219, 62 pages

Non-peer-reviewed journal article

Optimal sparsity testing in linear regression model

Carpentier, Alexandra; Verzelen, Nicolas

In: De.arxiv.org - [S.l.]: Arxiv.org, 2019, article 1901.08802, 50 pages

2018

Book chapter

Constructing confidence sets for the matrix completion problem

Carpentier, Alexandra; Klopp, Olga; Löffler, Matthias

In: Nonparametric statistics: 3rd ISNPS, Avignon, France, June 2016 / Patrice Bertail, Pierre-André Cornillon, Eric Matzner-Lober, Delphine Blanke (Editors) - Cham, Switzerland: Springer Nature, 2018 [Conference: 3rd Conference of the International Society for Nonparametric Statistics, ISNPS, Avignon, France, June 11-16, 2016]

Peer-reviewed journal article

Adaptive confidence sets for matrix completion

Carpentier, Alexandra; Klopp, Olga; Löffler, Matthias; Nickl, Richard

In: Bernoulli: official journal of the Bernoulli Society for Mathematical Statistics and Probability - Aarhus, Vol. 24 (2018), 4A, pp. 2429-2460

Peer-reviewed journal article

Minimax Euclidean separation rates for testing convex hypotheses in R^d

Blanchard, Gilles; Carpentier, Alexandra; Gutzeit, Maurilio

In: Electronic journal of statistics - Ithaca, NY: Cornell University Library, Vol. 12 (2018), 2, pp. 3713-3735

Peer-reviewed journal article

An iterative hard thresholding estimator for low rank matrix recovery with explicit limiting distribution

Carpentier, Alexandra; Kim, Arlene K. H.

In: Statistica Sinica - Taipei: Statistica Sinica, Institute of Statistical Science, Academia Sinica, Vol. 28 (2018), 3, pp. 1371-1393

Article in conference proceedings

Adaptivity to smoothness in X-armed bandits

Locatelli, Andrea; Carpentier, Alexandra

In: Conference on Learning Theory: 6-9 July 2018: [proceedings] - [place of publication not identified]: PMLR, 2018, pp. 1463-1492 - (Proceedings of machine learning research; volume 75) [Conference: 31st Annual Conference on Learning Theory, COLT 2018, Stockholm, 6-9 July 2018]

Article in conference proceedings

An adaptive strategy for active learning with smooth decision boundary

Locatelli, Andrea; Carpentier, Alexandra; Kpotufe, Samory

In: Algorithmic Learning Theory 2018: 7-9 April 2018: [proceedings] - [place of publication not identified]: PMLR, 2018, pp. 547-571 [Conference: Algorithmic Learning Theory 2018, Lanzarote, Spain, 7-9 April 2018]

Non-peer-reviewed journal article

A minimax near-optimal algorithm for adaptive rejection sampling

Achddou, Juliette; Lam, Joseph; Carpentier, Alexandra; Blanchard, Gilles

In: De.arxiv.org - [S.l.]: Arxiv.org, 2018, 32 pages

Non-peer-reviewed journal article

Total variation distance for discretely observed Lévy processes : a Gaussian approximation of the small jumps

Carpentier, Alexandra; Duval, Céline; Mariucci, Ester

In: De.arxiv.org - [S.l.]: Arxiv.org, 2018, 32 pages

Non-peer-reviewed journal article

Estimating minimum effect with outlier selection

Carpentier, Alexandra; Delattre, Sylvain; Roquain, Etienne; Verzelen, Nicolas

In: De.arxiv.org - [S.l.]: Arxiv.org, 2018, 70 pages

Non-peer-reviewed journal article

Minimax rate of testing in sparse linear regression

Carpentier, Alexandra; Collier, Olivier; Comminges, Laetitia; Tsybakov, Alexandre B.; Wang, Yuhao

In: De.arxiv.org - [S.l.]: Arxiv.org, 2018, 18 pages

Non-peer-reviewed journal article

Adaptive estimation of the sparsity in the Gaussian vector model

Carpentier, Alexandra; Verzelen, Nicolas

In: De.arxiv.org - [S.l.]: Arxiv.org, 2018, 76 pages

2017

Article in conference proceedings

Two-sample tests for large random graphs using network statistics

Ghoshdastidar, Debarghya; Gutzeit, Maurilio; Carpentier, Alexandra; Luxburg, Ulrike

In: Conference on Learning Theory - [place of publication not identified]: PMLR, pp. 954-977, 2017 - (Proceedings of machine learning research; volume 65) [Conference: Conference on Learning Theory, Amsterdam, Netherlands, 7-10 July 2017]

Article in conference proceedings

Adaptivity to noise parameters in nonparametric active learning

Locatelli, Andrea; Carpentier, Alexandra; Kpotufe, Samory

In: Conference on Learning Theory: 7-10 July 2017, Amsterdam, Netherlands: [proceedings] - [place of publication not identified]: PMLR, pp. 1383-1416 - (Proceedings of machine learning research; volume 65); proceedings.mlr.press/v65/locatelli-andrea17a.html [Conference: Conference on Learning Theory, Amsterdam, Netherlands, 7-10 July 2017]

Non-peer-reviewed journal article

Adaptivity to noise parameters in nonparametric active learning

Locatelli, Andrea; Carpentier, Alexandra; Kpotufe, Samory

In: De.arxiv.org - [S.l.]: Arxiv.org, 32 pages, 2017

Non-peer-reviewed journal article

An iterative hard thresholding estimator for low rank matrix recovery with explicit limiting distribution

Carpentier, Alexandra; Kim, Arlene

In: De.arxiv.org - [S.l.]: Arxiv.org, 40 pages, 2017

Non-peer-reviewed journal article

Two-sample tests for large random graphs using network statistics

Ghoshdastidar, Debarghya; Gutzeit, Maurilio; Carpentier, Alexandra; Luxburg, Ulrike

In: De.arxiv.org - [S.l.]: Arxiv.org, 24 pages, 2017

2016

Article in conference proceedings

Pliable rejection sampling

Erraqabi, A.; Valko, M.; Carpentier, A.; Maillard, O.-A.

In: 33rd International Conference on Machine Learning, ICML 2016, Vol. 5, pp. 3122-3137, 2016

Article in conference proceedings

An optimal algorithm for the Thresholding Bandit Problem

Locatelli, Andrea; Gutzeit, Maurilio; Carpentier, Alexandra

In: International Conference on Machine Learning - [place of publication not identified]: PMLR, pp. 1690-1698, 2016 - (Proceedings of machine learning research; volume 48) [Conference: 33rd International Conference on Machine Learning, New York, 20-22 June 2016]

Article in conference proceedings

Tight (lower) bounds for the fixed budget best arm identification bandit problem

Carpentier, Alexandra; Locatelli, Andrea

In: Conference on Learning Theory: 23-26 June 2016, Columbia University, New York, New York, USA: [proceedings] - [place of publication not identified]: PMLR, pp. 590-604, 2016 - (Proceedings of machine learning research; volume 49) [Conference: 29th Conference on Learning Theory, COLT, New York, 23-26 June 2016]

Article in conference proceedings

Learning relationships between data obtained independently

Carpentier, Alexandra; Schlüter, Teresa

In: Artificial Intelligence and Statistics: 9-11 May 2016, Cadiz, Spain: [proceedings] - [place of publication not identified]: PMLR, pp. 658-666 - (Proceedings of machine learning research; volume 51); proceedings.mlr.press/v51/carpentier16b.html [Conference: 19th International Conference on Artificial Intelligence and Statistics, Cadiz, Spain, 9-11 May 2016]

Article in conference proceedings

An optimal algorithm for the thresholding bandit problem

Locatelli, A.; Gutzeit, M.; Carpentier, A.

In: 33rd International Conference on Machine Learning, ICML 2016, Vol. 4, pp. 2539-2554, 2016

Article in conference proceedings

Pliable rejection sampling

Erraqabi, Akram; Valko, Michal; Carpentier, Alexandra; Maillard, Odalric

In: International Conference on Machine Learning: 20-22 June 2016, New York, New York, USA: [proceedings] - [place of publication not identified]: PMLR, pp. 2121-2129 - (Proceedings of machine learning research; volume 48) [Conference: 33rd International Conference on Machine Learning, New York, 20-22 June 2016]

2015

Peer-reviewed journal article

Implementable confidence sets in high dimensional regression

Carpentier, A.

In: Journal of Machine Learning Research, Vol. 38, pp. 120-128, 2015

Peer-reviewed journal article

Adaptive strategy for stratified Monte Carlo sampling

Carpentier, A.; Munos, R.; Antos, A.

In: Journal of Machine Learning Research, Vol. 16, pp. 2231-2271, 2015

Peer-reviewed journal article

On signal detection and confidence sets for low rank inference problems

Carpentier, A.; Nickl, R.

In: Electronic Journal of Statistics, Vol. 9, 2, pp. 2675-2688, 2015

Peer-reviewed journal article

Adaptive and minimax optimal estimation of the tail coefficient

Carpentier, A.; Kim, A.K.H.

In: Statistica Sinica, Vol. 25, 3, pp. 1133-1144, 2015

Peer-reviewed journal article

Testing the regularity of a smooth signal

Carpentier, A.

In: Bernoulli, Vol. 21, 1, pp. 465-488, 2015

Article in conference proceedings

Simple regret for infinitely many armed bandits

Carpentier, A.; Valko, M.

In: 32nd International Conference on Machine Learning, ICML 2015, Vol. 2, pp. 1133-1141, 2015

2014

Peer-reviewed journal article

Adaptive confidence intervals for the tail coefficient in a wide second order class of Pareto models

Carpentier, A.; Kim, A.K.H.

In: Electronic Journal of Statistics, 1, pp. 2066-2110, 2014

Peer-reviewed journal article

Minimax number of strata for online stratified sampling: The case of noisy samples

Carpentier, A.; Munos, R.

In: Theoretical Computer Science, Vol. 558, C, pp. 77-106, 2014

Article in conference proceedings

Extreme bandits

Carpentier, A.; Valko, M.

In: Advances in Neural Information Processing Systems, Vol. 2, January, pp. 1089-1097, 2014

2013

Peer-reviewed journal article

Optimizing P300-speller sequences by RIP-ping groups apart

Thomas, E.; Clerc, M.; Carpentier, A.; Daucea, E.; Devlaminck, D.; Munos, R.

In: International IEEE/EMBS Conference on Neural Engineering, NER, pp. 1062-1065, 2013

Peer-reviewed journal article

Automatic motor task selection via a bandit algorithm for a brain-controlled button

Fruitet, J.; Carpentier, A.; Munos, R.; Clerc, M.

In: Journal of Neural Engineering, Vol. 10, 1, 2013

Peer-reviewed journal article

Honest and adaptive confidence sets in Lp

Carpentier, A.

In: Electronic Journal of Statistics, Vol. 7, 1, pp. 2875-2923, 2013

Article in conference proceedings

Stochastic simultaneous optimistic optimization

Valko, M.; Carpentier, A.; Munos, R.

In: 30th International Conference on Machine Learning, ICML 2013, Part 1, pp. 678-686, 2013

Article in conference proceedings

Toward optimal stratification for stratified Monte-Carlo integration

Carpentier, A.; Munos, R.

In: 30th International Conference on Machine Learning, ICML 2013, Part 1, pp. 687-695, 2013

2012

Book chapter

Minimax number of strata for online stratified sampling given noisy samples

Carpentier, Alexandra; Munos, Rémi

In: Algorithmic Learning Theory / Bshouty, Nader H. - Berlin, Heidelberg: Springer, 2012, pp. 229-244 - (Lecture notes in computer science; 7568)

Peer-reviewed journal article

Minimax number of strata for online stratified sampling given noisy samples

Carpentier, A.; Munos, R.

In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 7568 LNAI, pp. 229-244, 2012

Peer-reviewed journal article

Bandit Theory meets Compressed Sensing for high-dimensional Stochastic Linear Bandit

Carpentier, A.; Munos, R.

In: Journal of Machine Learning Research, Vol. 22, pp. 190-198, 2012

Article in conference proceedings

Adaptive stratified sampling for Monte-Carlo integration of differentiable functions

Carpentier, A.; Munos, R.

In: Advances in Neural Information Processing Systems, Vol. 1, pp. 251-259, 2012

Article in conference proceedings

Bandit algorithms boost motor-task selection for brain computer interfaces

Fruitet, J.; Carpentier, A.; Munos, R.; Clerc, M.

In: Advances in Neural Information Processing Systems, Vol. 1, pp. 449-457, 2012

Article in conference proceedings

Online allocation and homogeneous partitioning for piecewise constant mean-approximation

Maillard, O.A.; Carpentier, A.

In: Advances in Neural Information Processing Systems, Vol. 3, pp. 1961-1969, 2012

2011

Peer-reviewed journal article

Upper-confidence-bound algorithms for active learning in multi-armed bandits

Carpentier, A.; Lazaric, A.; Ghavamzadeh, M.; Munos, R.; Auer, P.

In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 6925 LNAI, pp. 189-203, 2011

Article in conference proceedings

Finite-time analysis of stratified sampling for Monte Carlo

Carpentier, A.; Munos, R.

In: Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011, NIPS 2011, 2011

Article in conference proceedings

Sparse recovery with Brownian sensing

Carpentier, A.; Maillard, O.-A.; Munos, R.

In: Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011, NIPS 2011, 2011

Teaching

Summer Semester 2018

Mathematics of Machine Learning II

Modeling 2 (FMA)

Research Seminar on Stochastics

Advanced Mathematical Statistics

  • Advanced Mathematical Statistics (exercises)

Winter Semester 2017/18

Mathematics of Machine Learning

Research Seminar on Stochastics

Advanced Probability Theory

  • Advanced Probability Theory (tutorial)
  • Advanced Probability Theory (exercises)
