The goal of this website is to facilitate the production of the edited volume titled Macroeconomic Forecasting in the Era of Big Data. The website is maintained and updated by the coordinating editor. Comments and suggestions are most welcome.
The last three decades have seen a surge in data collection. During the same period, statisticians and econometricians have developed numerous techniques to digest the ever-growing amount of data and improve predictions. Continuing advances in information technology and its ubiquitous use will undoubtedly lead to further progress in this area. Although most of the data are collected at the micro level, efforts are under way to consolidate the available information for use in government statistics and macroeconomic analysis (see, for example, the call for papers for the 2019 NBER/CRIW conference, Big Data for 21st Century Economic Statistics).
While many tools associated with big data analysis were originally developed by statisticians, econometricians have refined them to deal with issues arising in their own field. In a forecasting setting these include the choice of a framework to capture relationships among variables, model selection, model uncertainty, instability, mixed-frequency data, non-stationarity, and forecast evaluation, among others.
Although most of the topics covered in this volume have appeared elsewhere in the literature, the objective is to present a comprehensive collection of big data tools used in macroeconomic forecasting, emphasizing methodology as in a handbook. Currently there is no book on the market that accomplishes this. The intended audience includes researchers, professional forecasters, instructors, and students. The volume would be well suited for teaching state-of-the-art techniques in macroeconomic forecasting to graduate students.
Given the handbook-like character of the volume, each chapter will provide solid background information but also review the most recent developments in the literature. The contents will focus on the big data aspects of the methodology and—if applicable—explain how they differ from the small data case. Each chapter will outline the underlying assumptions, provide algorithmic descriptions of the presented techniques, compare competing approaches qualitatively and quantitatively, and suggest some use cases or illustrate the methods via examples.
The topics in the book can be categorized into four main sections: Capturing Relationships, Seeking Parsimony, Dealing with Model Uncertainty, and Further Issues. The chapters in the first section will review the main approaches for modeling relationships among macroeconomic variables. The second and third sections will focus on model selection and on dealing with model uncertainty, respectively. Many methods in these two sections have been adopted from the machine learning literature and are sometimes combined to avoid overfitting and to improve forecast accuracy. The chapters in the final section will examine important issues that extend the topics covered in the previous three sections. Inevitably some themes will span multiple chapters that can be cross-referenced. For example, issues associated with non-stationarity arise throughout the book, and model selection and forecast combination are often inseparable.
- Big Data Sources and Types (Philip Garboden)
- Dynamic Factor Models (Catherine Doz and Peter Fuleky)
- Factor Augmented Vector Autoregressions, Panel VARs, and Global VARs (Martin Feldkircher, Florian Huber and Michael Pfarrhofer)
- Large Bayesian Vector Autoregressions (Joshua Chan)
- Volatility Forecasts (Mauro Bernardi, Giovanni Bonaccolto, Massimiliano Caporin and Michele Costola)
- Neural Nets (Thomas Cook)
- Penalized Time Series Regression (Anders Bredahl Kock, Marcelo Medeiros and Gabriel Vasconcelos)
- Principal Components and Static Factor Analysis (Jianfei Cao, Chris Gu and Yike Wang)
- Subspace Methods: Complete Subset Regression, Random Projection, and Compressed Regression (Tom Boot and Didier Nibbering)
- Variable Selection and Feature Screening (Wanjun Liu and Runze Li)
- Frequentist Averaging (Felix Chan, Laurent Pauwels and Sylvia Soltyk)
- Bayesian Averaging (Bettina Grün and Paul Hofmarcher)
- Bootstrap Aggregating and Random Forests (Tae-hwy Lee, Aman Ullah and Ran Wang)
- Boosting (Jianghao Chu, Tae-hwy Lee, Aman Ullah and Ran Wang)
- Density Forecasts (Federico Bassetti, Roberto Casarin and Francesco Ravazzolo)
- Forecast Evaluation (Mingmian Cheng, Norman Swanson and Chun Yao)
- Unit Roots and Cointegration (Stephan Smeekes and Etienne Wijler)
- Turning Points and Classification (Jeremy Piger)
- Robust Variable Selection, Regression, and Covariance Estimation (Marco Avella Medina)
- Frequency Domain (Felix Chan and Marco Reale)
- Hierarchical Forecasting (George Athanasopoulos, Puwasala Gamakumara, Anastasios Panagiotelis, Rob Hyndman and Mohamed Affan)