Christophe Pérignon

Professor of Finance
Associate Dean for Research

Email: perignon@hec.fr
www.hec.fr/perignon

Biography

Christophe Pérignon is Professor of Finance and Associate Dean for Research at HEC Paris, France. He is also the co-holder of the ACPR (Banque de France) Chair in Regulation and Systemic Risk. He obtained a Ph.D. in Finance from the Swiss Finance Institute and was a Post-Doctoral Fellow at the University of California, Los Angeles (UCLA). Prior to joining HEC Paris, he was an Assistant Professor of Finance at Simon Fraser University in Vancouver, Canada. His research and teaching interests are in financial risk management, banking, and AI/machine learning. Christophe has published a dozen articles in top finance journals (Journal of Finance, Journal of Financial Economics, Review of Financial Studies, Journal of Business, Journal of Financial and Quantitative Analysis, and Review of Finance) and in general-science journals (Science). In 2014, he received the Europlace Award for the Best Young Researcher in Finance. With Christophe Hurlin, he co-founded RunMyCode, an online repository that allows researchers to share the code and data associated with published papers (800,000+ individual visits), and cascad, a certification agency that verifies the reproducibility of the results reported in scientific articles (300 verifications). In 2022, Christophe launched RAMP-UP, a job-matching platform for HEC Paris students and professors. He is also a co-founder of Decilia Science, a data-science company that produces and audits AI and machine-learning algorithms.

   

Publications

What if Dividends Were Tax-Exempt? Evidence from a Natural Experiment
Review of Financial Studies, 2021 (with D. Isakov and J.P. Weisskopf)

The Private Production of Safe Assets
Journal of Finance, 2021 (with M. Kacperczyk and G. Vuillemey)

Certify Reproducibility with Confidential Data
Science, 2019 (with K. Gadouche, C. Hurlin, R. Silberman, and E. Debonnel)

Machine learning et nouvelles sources de données pour le scoring de crédit
Revue d'Economie Financière, 2019 (with C. Hurlin)

The Counterparty Risk Exposure of ETF Investors
Journal of Banking and Finance, 2019 (with C. Hurlin, G. Iseli, and S. Yeung)

Pitfalls in Systemic-Risk Scoring
Journal of Financial Intermediation, 2019 (with S. Benoit and C. Hurlin)

Wholesale Funding Dry-Ups
Journal of Finance, 2018 (with D. Thesmar and G. Vuillemey)

The Political Economy of Financial Innovation: Evidence from Local Governments
Review of Financial Studies, 2017 (with B. Vallée)

CoMargin
Journal of Financial and Quantitative Analysis, 2017 (with J. Cruz Lopez, J. Harris, and C. Hurlin)

Where the Risks Lie: A Survey on Systemic Risk
Review of Finance, 2017 (with S. Benoit, J.E. Colliard, and C. Hurlin)

Implied Risk Exposures
Review of Finance, 2015 (with S. Benoit and C. Hurlin)

The Risk Map: A New Tool for Validating Risk Models
Journal of Banking and Finance, 2013 (with G. Colletaz and C. Hurlin)

Derivatives Clearing, Default Risk, and Insurance
Journal of Risk and Insurance, 2013 (with R. Jones)

The Pernicious Effects of Contaminated Data in Risk Management
Journal of Banking and Finance, 2011 (with L. Frésard and A. Wilhelmsson)

The Level and Quality of Value-at-Risk Disclosure by Commercial Banks
Journal of Banking and Finance, 2010 (with D. Smith)

Diversification and Value-at-Risk
Journal of Banking and Finance, 2010 (with D. Smith)

Commonality in Liquidity: A Global Perspective
Journal of Financial and Quantitative Analysis, 2009 (with P. Brockman and D. Chung)

How Common Are Common Return Factors across NYSE and NASDAQ?
Journal of Financial Economics, 2008 (with A. Goyal and C. Villa)

A New Approach to Comparing VaR Estimation Methods
Journal of Derivatives, 2008 (with D. Smith)

Do Banks Overstate their Value-at-Risk?
Journal of Banking and Finance, 2008 (with Z. Deng and Z. Wang)

Repurchasing Shares on a Second Trading Line
Review of Finance, 2007 (with D. Chung and D. Isakov)

Testing the Monotonicity Property of Option Prices
Journal of Derivatives, 2006

Sources of Time Variation in the Covariance Matrix of Interest Rates
Journal of Business, 2006 (with C. Villa)

Working Papers

Measuring the Driving Forces of Predictive Performance: Application to Credit Scoring (with S. Hué, C. Hurlin, and S. Saurin). Presented at the 2023 Econometric Society Australasian Meeting and the 2023 Asian Meeting of the Econometric Society. Python package available.

In credit scoring, machine learning models are known to outperform standard parametric models. As they condition access to credit, banking supervisors and internal model validation teams need to monitor their predictive performance and to identify the features with the highest impact on performance. To facilitate this, we introduce the XPER methodology to decompose a performance metric (e.g., AUC, R^2) into specific contributions associated with the various features of a classification or regression model. XPER is theoretically grounded on Shapley values and is both model-agnostic and performance metric-agnostic. Furthermore, it can be implemented either at the model level or at the individual level. Using a novel dataset of car loans, we decompose the AUC of a machine-learning model trained to forecast the default probability of loan applicants. We show that a small number of features can explain a surprisingly large part of the model performance. Furthermore, we find that the features that contribute the most to the predictive performance of the model may not be the ones that contribute the most to individual forecasts (SHAP). We also show how XPER can be used to deal with heterogeneity issues and significantly boost out-of-sample performance.
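
To make the decomposition concrete, below is a minimal illustrative sketch of the idea described above: a Monte Carlo, Shapley-style attribution of AUC to individual features on synthetic data. It is not the authors' XPER package; the toy dataset, the gradient-boosting model, and the convention of "removing" a feature by shuffling its values are assumptions made purely for this example.

    # Monte Carlo Shapley-style decomposition of AUC across features (toy example)
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    def auc_with(active):
        # AUC when only the features in 'active' keep their values; the others are shuffled
        X_mod = X_te.copy()
        for j in range(X_te.shape[1]):
            if j not in active:
                X_mod[:, j] = rng.permutation(X_mod[:, j])
        return roc_auc_score(y_te, model.predict_proba(X_mod)[:, 1])

    n_feat, n_draws = X.shape[1], 100
    phi = np.zeros(n_feat)                     # per-feature contributions to the AUC
    for _ in range(n_draws):
        order, active = rng.permutation(n_feat), set()
        prev = auc_with(active)
        for j in order:
            active.add(j)
            curr = auc_with(active)
            phi[j] += curr - prev              # marginal contribution of feature j in this ordering
            prev = curr
    phi /= n_draws

    print("approximate AUC contribution of each feature:", np.round(phi, 3))

Averaging marginal contributions over random feature orderings is what makes the attribution Shapley-like: the contributions sum (approximately) to the gap between the AUC of the full model and the AUC when all features are uninformative.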

Computational Reproducibility in Finance: Evidence from 1,000 Tests (with O. Akmansoy, C. Hurlin, A. Menkveld, A. Dreber, F. Holzmeister, J. Huber, M. Johannesson, M. Kirchler, M. Razen, U. Weitzel) Review of Financial Studies, R&R

We analyze the computational reproducibility of more than 1,000 empirical answers to six research questions in finance provided by 168 international research teams. Running the original researchers' code on the same raw data regenerates exactly the same results only 52% of the time. Reproducibility is higher for researchers with better coding skills and for those exerting more effort. It is lower for more technical research questions, more complex code, and for results lying in the tails of the results distribution. Neither researcher seniority nor peer-review ratings appear to be related to the level of reproducibility. Moreover, researchers exhibit strong overconfidence when assessing the reproducibility of their own research. We provide guidelines for finance researchers and discuss several implementable reproducibility policies for academic journals.

Non-Standard Errors (with A. Menkveld, A. Dreber, F. Holzmeister, J. Huber, M. Johannesson, M. Kirchler, M. Razen, U. Weitzel, et al.). Journal of Finance, forthcoming. cascad reproducibility report available.

My contribution: I was in charge of the reproducibility verification policy of the #fincap project

In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
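
A minimal numerical illustration of the distinction drawn above, assuming (purely for this toy example) that each team's analytical choices act like a random perturbation of its estimate; the numbers and the perturbation model are invented for illustration and are not taken from the paper.

    # Toy contrast between standard errors (DGP sampling noise) and
    # non-standard errors (dispersion across teams' analytical choices)
    import numpy as np

    rng = np.random.default_rng(1)
    true_effect, n_teams, n_obs = 0.10, 164, 5_000

    data = rng.normal(true_effect, 1.0, n_obs)           # one shared dataset: the same DGP draw for every team
    standard_error = data.std(ddof=1) / np.sqrt(n_obs)   # sampling uncertainty of a single estimate

    # EGP variation: each team's analytical choices are modeled here, purely as an
    # assumption of this toy, as a random team-specific perturbation of the estimate
    team_estimates = data.mean() + rng.normal(0.0, 0.03, n_teams)
    non_standard_error = team_estimates.std(ddof=1)      # dispersion of estimates across teams

    print(f"standard error ~ {standard_error:.4f} | non-standard error ~ {non_standard_error:.4f}")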

The Fairness of Credit Scoring Models (with C. Hurlin and S. Saurin) Management Science, R&R

In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers. However, when doing so, they also often discriminate between individuals sharing a protected attribute (e.g., gender, age, racial origin) and the rest of the population. In this paper, we show how to (1) test whether there exists a statistically significant difference between protected and unprotected groups, which we call a lack of fairness, and (2) identify the variables that cause the lack of fairness. We then use these variables to optimize the fairness-performance trade-off. Our framework provides guidance on how algorithmic fairness can be monitored by lenders, controlled by their regulators, and improved for the benefit of protected groups.
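
As a concrete, hedged illustration of step (1), the sketch below runs a plain two-proportion z-test for a difference in acceptance rates between a protected group and the rest of the population on simulated data. It is not the authors' test; all variable names and numbers are invented for this example.

    # Toy significance test for unequal acceptance rates across groups
    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    rng = np.random.default_rng(2)
    n = 10_000
    protected = rng.integers(0, 2, n).astype(bool)         # hypothetical binary protected attribute
    score = rng.normal(0.55, 0.15, n) - 0.04 * protected   # toy credit scores, slightly lower for the protected group
    accepted = score > 0.5                                 # toy acceptance rule

    counts = np.array([accepted[protected].sum(), accepted[~protected].sum()])
    nobs = np.array([protected.sum(), (~protected).sum()])
    z_stat, p_value = proportions_ztest(counts, nobs)      # H0: equal acceptance rates in both groups

    print("acceptance rates (protected, unprotected):", np.round(counts / nobs, 3))
    print(f"z = {z_stat:.2f}, p-value = {p_value:.4f}")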

Machine Learning and IRB Capital Requirements: Advantages, Risks, and Recommendations (with C. Hurlin) June 2023

This study proposes a theoretical and practical reflection on the use of machine learning methods in the context of the Internal Ratings Based (IRB) approach to banks' capital requirements. While machine learning is still rarely used in the regulatory domain (IRB, IFRS 9, stress tests), recent discussions initiated by the European Banking Authority suggest that this may change in the near future. Although technically complex, this subject is crucial given growing concerns about the potential financial instability caused by banks' use of opaque internal models. Conversely, for their proponents, machine learning models offer the prospect of better credit risk measurement and enhanced financial inclusion. This study yields several conclusions and recommendations regarding (i) the accuracy of risk parameter estimations, (ii) the level of regulatory capital, (iii) the trade-off between performance and interpretability, (iv) international banking competition, and (v) the challenges of governance, operational risks, and training.

The Economics of Computational Reproducibility (with J.-E. Colliard and C. Hurlin) July 2023

We investigate why economics displays a relatively low level of computational reproducibility. We first study the benefits and costs of reproducibility for readers, authors, and academic journals. Second, we show that the equilibrium level of reproducibility may be suboptimally low due to three market failures: a competitive bottleneck effect due to the competition between journals to attract authors, the public good dimension of reproducibility, and the positive externalities of reproducibility outside academia. Third, we discuss different policies to address these market failures and move out of a low reproducibility equilibrium. In particular, we show that coordination among journals could reduce by half the cost of verifying the reproducibility of accepted papers.

The Role of Third-Party Verification in Research Reproducibility July 2023

Research reproducibility is defined as obtaining similar results using the same data and code as the original study. To check research reproducibility, third-party verification constitutes a useful complement to the work done by journals’ internal teams. Third-party verification services can also be used by individual researchers seeking a presubmission reproducibility certification to signal the reproducible nature of their research. Using the example of the cascad certification agency, which I co-founded in 2019 with Christophe Hurlin, I discuss the functioning, utility, comparative advantages, and challenges of third-party verification services.

Reports

Reproducibility of scientific results in the EU. Publications Office of the European Union. December 2020.

Authors: Baker, Lee; Lusoli, Wainer; Jaśko, Katarzyna; Parry, Vivienne; Pérignon, Christophe; Errington, Timothy; Cristea, Ioana Alina; Winchester, Catherine; MacCallum, Catriona; Šimko, Tibor

Work in Progress

AI and bank capital requirements

Fairness equivalence: A regulatory framework to treat algorithms like prescription drugs

Biases in AI-enhanced human decisions

 

Book

Marchés Financiers: Gestion de Portefeuille et des Risques
6th edition, Dunod
Bertrand Jacquillat, Bruno Solnik, and Christophe Pérignon


Press Coverage
