Christophe Pérignon

Professor of Finance
Associate Dean for Research

Email: perignon@hec.fr
www.hec.fr/perignon

Biography

Christophe Pérignon is Professor of Finance and Associate Dean for Research at HEC Paris, France. He is also the co-holder of the ACPR (Banque de France) Chair in Regulation and Systemic Risk. He obtained a Ph.D. in Finance from the Swiss Finance Institute and was a Post-Doctoral Fellow at the University of California, Los Angeles (UCLA). Prior to joining HEC Paris, he was an Assistant Professor of Finance at Simon Fraser University in Vancouver, Canada. His research and teaching interests lie in financial risk management and AI/machine learning. Christophe has published a dozen articles in top finance journals (Journal of Finance, Journal of Financial Economics, Review of Financial Studies, Journal of Business, Journal of Financial and Quantitative Analysis, and Review of Finance) as well as in general-science journals (Science). In 2014, he received the Europlace Award for the Best Young Researcher in Finance. With Christophe Hurlin, he co-founded RunMyCode, an online repository that allows researchers to share the code and data associated with published papers (800,000+ individual visits), and cascad, a certification agency that verifies the reproducibility of the results reported in scientific articles (300+ verifications). In 2022, Christophe launched RAMP-UP, a job-matching platform for HEC Paris students and professors.

Publications

What if Dividends Were Tax-Exempt? Evidence from a Natural Experiment
Review of Financial Studies, 2021 (with D. Isakov and J.P. Weisskopf)

The Private Production of Safe Assets
Journal of Finance, 2021 (with M. Kacperczyk and G. Vuillemey)

Certify Reproducibility with Confidential Data
Science, 2019 (with K. Gadouche, C. Hurlin, R. Silberman, and E. Debonnel)

Machine learning et nouvelles sources de données pour le scoring de crédit [Machine Learning and New Data Sources for Credit Scoring]
Revue d'Economie Financière, 2019 (with C. Hurlin)

The Counterparty Risk Exposure of ETF Investors
Journal of Banking and Finance, 2019 (with C. Hurlin, G. Iseli, and S. Yeung)

Pitfalls in Systemic-Risk Scoring
Journal of Financial Intermediation, 2019 (with S. Benoit and C. Hurlin)

Wholesale Funding Dry-Ups
Journal of Finance, 2018 (with D. Thesmar and G. Vuillemey)

The Political Economy of Financial Innovation: Evidence from Local Governments
Review of Financial Studies, 2017 (with B. Vallée)

CoMargin
Journal of Financial and Quantitative Analysis, 2017 (with J. Cruz Lopez, J. Harris, and C. Hurlin)

Where the Risks Lie: A Survey on Systemic Risk
Review of Finance, 2017 (with S. Benoit, J.E. Colliard, and C. Hurlin)

Implied Risk Exposures
Review of Finance, 2015 (with S. Benoit and C. Hurlin)

The Risk Map: A New Tool for Validating Risk Models
Journal of Banking and Finance, 2013 (with G. Colletaz and C. Hurlin)

Derivatives Clearing, Default Risk, and Insurance
Journal of Risk and Insurance, 2013 (with R. Jones)

The Pernicious Effects of Contaminated Data in Risk Management
Journal of Banking and Finance, 2011 (with L. Frésard and A. Wilhelmsson)

The Level and Quality of Value-at-Risk Disclosure by Commercial Banks
Journal of Banking and Finance, 2010 (with D. Smith)

Diversification and Value-at-Risk
Journal of Banking and Finance, 2010 (with D. Smith)

Commonality in Liquidity: A Global Perspective
Journal of Financial and Quantitative Analysis, 2009 (with P. Brockman and D. Chung)

How Common Are Common Return Factors across NYSE and Nasdaq?
Journal of Financial Economics, 2008 (with A. Goyal and C. Villa)

A New Approach to Comparing VaR Estimation Methods
Journal of Derivatives, 2008 (with D. Smith)

Do Banks Overstate their Value-at-Risk?
Journal of Banking and Finance, 2008 (with Z. Deng and Z. Wang)

Repurchasing Shares on a Second Trading Line
Review of Finance, 2007 (with D. Chung and D. Isakov)

Testing the Monotonicity Property of Option Prices
Journal of Derivatives, 2006

Sources of Time Variation in the Covariance Matrix of Interest Rates
Journal of Business, 2006 (with C. Villa)

Working Papers

 

Computational Reproducibility in Finance: Evidence from 1,000 Tests (with O. Akmansoy, C. Hurlin, A. Menkveld, A. Dreber, F. Holzmeister, J. Huber, M. Johannesson, M. Kirchler, M. Razen, U. Weitzel). Review of Financial Studies, forthcoming

We analyze the computational reproducibility of more than 1,000 empirical answers to six research questions in finance provided by 168 international research teams. Running the original researchers' code on the same raw data regenerates exactly the same results only 52% of the time. Reproducibility is higher for researchers with better coding skills and for those exerting more effort. It is lower for more technical research questions, more complex code, and results lying in the tails of the results distribution. Neither researcher seniority nor peer-review ratings appear to be related to the level of reproducibility. Moreover, researchers exhibit strong overconfidence when assessing the reproducibility of their own research. We provide guidelines for finance researchers and discuss several implementable reproducibility policies for academic journals.

Non-Standard Errors (with A. Menkveld, A. Dreber, F. Holzmeister, J. Huber, M. Johannesson, M. Kirchler, M. Razen, U. Weitzel et al.). Journal of Finance, forthcoming

My role: I designed and implemented the reproducibility verification policy of the #fincap project.

In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but they are smaller for more reproducible and higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.
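To illustrate the concept (a toy sketch with hypothetical numbers, not the paper's methodology): if several independent teams report point estimates of the same quantity from the same data, the cross-team dispersion of those estimates can be read as a non-standard error, on top of each team's conventional standard error.

```python
import statistics

# Hypothetical point estimates of the same quantity reported by
# several independent research teams analyzing identical data.
team_estimates = [0.8, 1.1, 0.9, 1.4, 0.7, 1.0]

# A non-standard error in the spirit of the paper: the dispersion of
# estimates induced by variation in the evidence-generating process
# (design choices), summarized here by the cross-team standard deviation.
nse = statistics.stdev(team_estimates)

print(f"Mean estimate across teams: {statistics.mean(team_estimates):.3f}")
print(f"Non-standard error (cross-team dispersion): {nse:.3f}")
```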

Reproducibility in Management Science (with M. Fišar, B. Greiner, C. Huber, E. Katok, A.I. Ozkes, and the Management Science Reproducibility Collaboration). Management Science, forthcoming

My role: I am a member of the Management Science Reproducibility Collaboration.

With the help of more than 700 reviewers we assess the reproducibility of nearly 500 articles published in the journal Management Science before and after the introduction of a new Data and Code Disclosure policy in 2019. When considering only articles for which data accessibility and hardware and software requirements were not an obstacle for reviewers, the results of more than 95% of articles under the new disclosure policy could be fully or largely computationally reproduced. However, for 29% of articles at least part of the dataset was not accessible to the reviewer. Considering all articles in our sample reduces the share of reproduced articles to 68%. These figures represent a significant increase compared to the period before the introduction of the disclosure policy, when only 12% of articles voluntarily provided replication materials, out of which 55% could be (largely) reproduced. Substantial heterogeneity in reproducibility rates across fields is mainly driven by differences in dataset accessibility. Other reasons for unsuccessful reproduction attempts include missing code, unresolvable code errors, weak or missing documentation, as well as software and hardware requirements and code complexity. Our findings highlight the importance of journal code and data disclosure policies, and suggest potential avenues for enhancing their effectiveness.

The Role of Third-Party Verification in Research Reproducibility. Harvard Data Science Review, R&R

Research reproducibility is defined as obtaining similar results using the same data and code as the original study. This simple, yet fundamental, property remains surprisingly difficult to validate in practice in many scientific fields, including economics. To check research reproducibility, third-party verifiers can complement the work done by journals’ internal teams. Third-party verifiers can also be used by individual researchers seeking a pre-submission reproducibility certification to signal the reproducible nature of their research. Using the example of the cascad certification agency, which I co-founded in 2019, I discuss the functioning, utility, comparative advantages, and challenges of third-party verification services.

The Economics of Computational Reproducibility (with J.-E. Colliard and C. Hurlin). December 2023

We investigate why economics displays a relatively low level of computational reproducibility. We first study the benefits and costs of reproducibility for readers, authors, and academic journals. Second, we show that the equilibrium level of reproducibility may be suboptimally low due to three market failures: a competitive bottleneck effect due to the competition between journals to attract authors, the public good dimension of reproducibility, and the positive externalities of reproducibility outside academia. Third, we discuss different policies to address these market failures and move out of a low reproducibility equilibrium. In particular, we show that coordination among journals could reduce by half the cost of verifying the reproducibility of accepted papers.

Measuring the Driving Forces of Predictive Performance: Application to Credit Scoring (with S. Hué, C. Hurlin and S. Saurin). Management Science, R&R - Package in Python

In credit scoring, machine learning models are known to outperform standard parametric models. As they condition access to credit, banking supervisors and internal model validation teams need to monitor their predictive performance and to identify the features with the highest impact on performance. To facilitate this, we introduce the XPER methodology to decompose a performance metric (e.g., AUC, R^2) into specific contributions associated with the various features of a classification or regression model. XPER is theoretically grounded on Shapley values and is both model-agnostic and performance metric-agnostic. Furthermore, it can be implemented either at the model level or at the individual level. Using a novel dataset of car loans, we decompose the AUC of a machine-learning model trained to forecast the default probability of loan applicants. We show that a small number of features can explain a surprisingly large part of the model performance. Furthermore, we find that the features that contribute the most to the predictive performance of the model may not be the ones that contribute the most to individual forecasts (SHAP). We also show how XPER can be used to deal with heterogeneity issues and significantly boost out-of-sample performance.
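A minimal, self-contained sketch of the idea behind such a decomposition (illustrative only; this is not the authors' XPER package, and the data, scoring rule, and numbers are hypothetical): evaluate the performance metric on every feature subset, then average each feature's marginal contributions with Shapley weights.

```python
from itertools import combinations
from math import factorial

# Toy data: two features per applicant and binary default labels.
X = [(1.0, 0.9), (0.8, 0.6), (0.2, 0.3), (0.1, 0.1)]
y = [1, 1, 0, 0]

def accuracy(active):
    """Performance of a fixed toy scoring rule when only the features in
    `active` are used; excluded features are replaced by their sample mean."""
    means = [sum(row[j] for row in X) / len(X) for j in range(2)]
    correct = 0
    for row, label in zip(X, y):
        score = sum(row[j] if j in active else means[j] for j in range(2))
        correct += int((score > sum(means)) == bool(label))
    return correct / len(y)

def shapley(j, n=2):
    """Shapley contribution of feature j to the performance metric:
    marginal contributions over all subsets of the other features,
    weighted by the standard Shapley coefficients."""
    others = [k for k in range(n) if k != j]
    val = 0.0
    for size in range(n):
        for S in combinations(others, size):
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            val += w * (accuracy(set(S) | {j}) - accuracy(set(S)))
    return val

contribs = [shapley(j) for j in range(2)]
print("Feature contributions:", contribs)
print("Full-model gain over benchmark:", accuracy({0, 1}) - accuracy(set()))
```

By the efficiency property of Shapley values, the feature contributions sum exactly to the performance gain of the full model over the no-feature benchmark, which is what makes the decomposition interpretable.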

The Fairness of Credit Scoring Models (with C. Hurlin and S. Saurin). Management Science, R&R

In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers. However, in doing so, they can also discriminate between individuals sharing a protected attribute (e.g., gender, age, racial origin) and the rest of the population. This can be unintentional and can originate from the training dataset or from the model itself. We show how to formally test the algorithmic fairness of scoring models and how to identify the variables responsible for any lack of fairness. We then use these variables to optimize the fairness-performance trade-off. Our framework provides guidance on how algorithmic fairness can be monitored by lenders, controlled by their regulators, and improved for the benefit of protected groups, all while maintaining a high level of forecasting accuracy.
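As a rough illustration of one such fairness check (statistical parity; the paper's formal testing framework is richer), one can compare approval rates between the protected group and the rest of the population with a standard two-proportion z-test. The function and the approval counts below are hypothetical:

```python
from math import sqrt, erf

def approval_rate_gap_test(approved_a, n_a, approved_b, n_b):
    """Two-proportion z-test for a gap in approval rates between a
    protected group (a) and the rest of the population (b) --
    a statistical-parity check, one of several fairness definitions."""
    p_a, p_b = approved_a / n_a, approved_b / n_b
    p = (approved_a + approved_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))       # standard error under H0
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_a - p_b, z, p_value

# Hypothetical counts: 300/500 approvals for the protected group
# versus 360/500 for everyone else.
gap, z, p_value = approval_rate_gap_test(300, 500, 360, 500)
print(f"gap={gap:.2f}, z={z:.2f}, p={p_value:.5f}")
```

A significantly negative gap flags a statistical-parity violation; the abstract's point is that one then needs to trace the violation back to specific variables before trading fairness off against predictive performance.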

Machine Learning and IRB Capital Requirements: Advantages, Risks, and Recommendations (with C. Hurlin). December 2023

This survey proposes a theoretical and practical reflection on the use of machine learning methods in the context of the Internal Ratings Based (IRB) approach to banks' capital requirements. While machine learning is still rarely used in the regulatory domain (IRB, IFRS 9, stress tests), recent discussions initiated by the European Banking Authority suggest that this may change in the near future. While technically complex, this subject is crucial given growing concerns about the potential financial instability caused by the banks' use of opaque internal models. Conversely, for their proponents, machine learning models offer the prospect of better measurement of credit risk and enhancing financial inclusion. This survey yields several conclusions and recommendations regarding (i) the accuracy of risk parameter estimations, (ii) the level of regulatory capital, (iii) the trade-off between performance and interpretability, (iv) international banking competition, and (v) the governance and operational risks of machine learning models.

 

Reports

Reproducibility of scientific results in the EU. Publications Office of the EU, December 2020.

Authors: Lee Baker, Wainer Lusoli, Katarzyna Jaśko, Vivienne Parry, Christophe Pérignon, Timothy Errington, Ioana Alina Cristea, Catherine Winchester, Catriona MacCallum, Tibor Šimko

Work in Progress

AI and bank capital requirements

Fairness equivalence in credit scoring: A regulatory framework to treat algorithms like prescription drugs

Biases in AI-enhanced human decisions

 

Book

Marchés Financiers: Gestion de Portefeuille et des Risques [Financial Markets: Portfolio and Risk Management]
6th edition, Dunod
Bertrand Jacquillat, Bruno Solnik, and Christophe Pérignon


Press Coverage
