
The biggest finance/economics research project!

Published on 03 January 2022 by NEOMA


Arash Aloosh, NEOMA professor, contributed to one of the biggest research projects in the history of the Economics and Finance literature.

“Non-Standard Errors” was announced in late November 2021, at an event organised by Deutsche Börse, the German stock exchange operator.

Contributing to the growing literature on the replication crisis, 341 recognised researchers and economists collaborated on the project, as follows:

  • 164 teams of scholars, including those from NYU, Columbia, Cornell, Duke, Berkeley, HEC, INSEAD among many others;
  • Central bank economists, including those from the ECB and the central banks of New York (Fed), England, and Norway;
  • A coordination team of 9 scholars, led by Albert Menkveld (VU Amsterdam), which attracted several superstar scholars, including Yacine Aït-Sahalia (Princeton) and Ľuboš Pástor (Chicago Booth), among others.

Arash Aloosh, Assistant Professor of Finance at NEOMA Business School, took part in this incredible work. “It was a unique experience with an ultimately significant outcome. With research teams testing six hypotheses on the same sample of 720 million EuroStoxx 50 futures trades, we found that the non-standard errors across teams are huge and difficult to explain. Nevertheless, the non-standard errors are reduced after peer feedback. Given the strength that lies in the variety of ‘non-standard’ research approaches, it is important to take both standard and non-standard errors into account. In other words, since it would be meaningless to standardise research approaches, which each have their own pros and cons, we should quantify the generalisability of results in order to contain the ongoing replication crisis.”

 

Menkveld, Albert J., et al. (2021). “Non-Standard Errors.” SSRN Electronic Journal. doi:10.2139/ssrn.3961574.

Abstract

In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer-feedback, and (iii) is underestimated by participants.
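The distinction the abstract draws can be illustrated with a small sketch: a standard error reflects sampling uncertainty within one team's analysis, while a non-standard error is the dispersion of point estimates across teams that analysed the same data differently. The numbers below are purely illustrative, not taken from the study.

```python
import statistics

# Hypothetical point estimates of the same quantity from several research
# teams, each with its own reported standard error (illustrative only).
team_estimates = [0.8, 1.2, 0.5, 1.0, 1.5, 0.7]
team_std_errors = [0.3, 0.25, 0.4, 0.3, 0.35, 0.3]

# Standard error: average within-team sampling uncertainty.
mean_standard_error = statistics.mean(team_std_errors)

# Non-standard error: dispersion of estimates ACROSS teams, i.e. the extra
# uncertainty introduced by different evidence-generating choices.
non_standard_error = statistics.stdev(team_estimates)

print(f"mean standard error: {mean_standard_error:.3f}")
print(f"non-standard error:  {non_standard_error:.3f}")
```

In this toy example the two magnitudes come out comparable, which mirrors the paper's headline finding that non-standard errors are "on par with standard errors".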

