Stefan Hubner

Postdoctoral Prize Research Fellow
Oxford University

About me


I am currently a Postdoctoral Research Fellow at Oxford University in association with the Department of Economics and Nuffield College. I obtained my Ph.D. from Tilburg University in November 2016.

My main research interests are Applied Microeconometrics, Consumer Demand Estimation, Unobserved Heterogeneity, Nonparametric Identification and Quantile Regression.

Curriculum Vitæ Download as PDF  

Date of birth 14th of June 1985 in Bruck/Mur
Citizenship Austria
Military service 09/2004-05/2005
Languages German (native), English (fluent), Dutch (intermediate)

Higher Education & Studies

since 09/2016 Oxford University
Postdoctoral Prize Research Fellow at Nuffield College
09/2012-08/2016 Tilburg University (CentER)
Ph.D. Candidate in Econometrics, Advisor: Arthur van Soest
05/2015 - 07/2015 Boston College
Visiting scholar, Host: Stefan Hoderlein
09/2010 - 08/2012 Tilburg School of Economics and Management
Research Master in Economics, Major in Econometrics, With Distinction
03/2008 - 06/2010 Vienna University of Economics and Business
Bachelor of Science in Economics (Main Course), Advisor: Ingrid Kubin
09/2006 - 06/2008 Johannes Kepler University Linz
Bachelor of Science in Economics and Business (Preliminary Diploma)
09/1999 - 06/2004 Higher Level Secondary Technical College Leonding
Software Engineering

Awards and Scholarships

2017 Oxford University
George Webb Medley Fund
2015 Tilburg University
CentER Overseas Research Grant
2011 Talenta, Vienna University of Economics and Business
WU Best Paper Award for Bachelor Thesis on "Exchange Rate Volatility and Its Impact on International Trade"
2010 Raabstiftung, Austrian Chamber of Commerce
Julius Raab Scholarship for high academic performance
2009 Vienna University of Economics and Business
Merit Scholarship for excellent academic performance
2008 Raabstiftung, Austrian Chamber of Commerce
Julius Raab Scholarship for high academic performance
2008-2012 Austrian Ministry of Science
Scholarship for former professionals in academia

Seminars and Conference Contributions

2017 Econometric Society Summer Meeting - Lisbon
Oxford University Faculty Seminar
Royal Economic Society Meeting - Bristol
Vienna-Copenhagen Conference - Vienna
2016 Heterogeneity in Supply and Demand Workshop - Boston
Econometric Society Winter Meeting - Edinburgh (cancelled)
Nuffield/INET Econometrics Seminar - Oxford
Econometric Study Group Meeting - Bristol
Royal Economic Society, PhD Meeting - London
2015 Econometric Society Winter Meeting - Milan
Simposio de la Asociación Española de Economía - Girona
Tilburg University Structural Econometrics Group
Oxford University Job Market Seminar
Econometric Society World Congress - Montreal
Boston College Empirical Microeconomics Seminar
2014 Tilburg University GSS Seminar
2013 Advances in Family Economics - Paris
Tilburg University Brownbag Seminar

Professional Experience (full time)

06/2005 - 06/2008 RACON Software GmbH Linz
Development of front-office banking software
Mostly C, but also Managed C++ and C# programming

Relevant Internships & Part-time Jobs

11/2008 - 03/2010 Raiffeisen Zentralbank Österreich AG
Economic and Financial Markets Research
Interest Rates and Foreign Exchange
06/2008 - 10/2008 Kommunalkredit Austria AG
Treasury - Funding and Liquidity Management
Research, reporting, and the construction of various models and analyses
08/2003 and 08/2002 GRZ IT Center Linz GmbH
Software development and design, database modelling, E-Government

Further Skills

Programming ANSI C, C#, Python, Haskell (among others)
Scientific tools SciPy/NumPy, R, Ox, GNU Scientific Library, LaTeX
Applications Microsoft Office (Excel modelling and VBA programming), Bloomberg, Reuters, Thomson Datastream, EViews, Stata, Mathematica and a range of GNU tools
Personal Development Teaching, Rhetoric, Team building, Conflict management, Sales and Promotion

Personal Interests

  • Linux, (Open source) Programming, Haskell & Basic Category Theory
  • Windsurfing, Rowing, Triathlon, Skiing and Golf
  • Single malt Whisky, Coffee and Wine

Academic Work

Nonparametric Identification and Estimation of the Sharing Rule in Collective Models with Unobserved Heterogeneity

This paper considers identification and estimation of structural components of the collective household consumption model. Particular attention is given to non-separable unobserved heterogeneity in the reduced-form demands that arises from the underlying aggregate decision process. Necessary and sufficient conditions on the model's primitives are derived that ensure nonparametric point-identification of the so-called sharing rule, a central component of the collective model that determines the allocation of wealth among household members. One such condition is the availability of information on intra-household allocation in the dataset at hand. For such datasets a nonparametric estimation procedure is developed and its asymptotic properties are derived. To study the finite-sample behaviour of the estimator we conduct a Monte Carlo experiment, before concluding the paper by estimating a collective labour supply model using the Dutch LISS panel.
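For orientation, here is a sketch of the sharing rule in standard collective-model notation (a textbook statement of the concept, not taken from the paper): with prices p, household full income y and distribution factors z, each member i behaves as if solving an individual problem subject to his or her share of income,

    % Two-member collective household, standard textbook notation (not the paper's):
    % member 1 receives the share \eta of full income y, member 2 the remainder.
    \max_{q_i} \; u_i(q_i)
    \quad \text{s.t.} \quad
    p'q_1 = \eta(p, y, z)\, y,
    \qquad
    p'q_2 = \bigl(1 - \eta(p, y, z)\bigr)\, y .

Identifying \eta nonparametrically is what the conditions above concern.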

It's complicated: A Nonparametric Test of Preference Stability between Singles and Couples

The collective household consumption model has gained popularity for a variety of reasons. However, with very few exceptions, datasets do not provide sufficient information to recover individual welfare from aggregate consumption. Researchers have therefore used data from households consisting of only single males or single females to predict the behaviour of their respective within-household counterparts. This approach requires the structural assumption that preferences are stable across household compositions, which has been controversial. In this paper we propose a nonparametric testing procedure that allows us to test this assumption based on observing only marginal distributions of consumption choices for singles and couples. We allow for unobserved heterogeneity with respect to both preferences and intra-household bargaining by specifying a random utility model and making use of the Collective Axiom of Revealed Preference, a set of conditions restricting individual and aggregate demands of a household across different budget situations. To characterize our hypothesis we use a finite-dimensional characterization of hypothetical household types and construct a test statistic based on the idea of revealed stochastic preference. The empirical part of the paper consists of a simulation study, in which we show that the test has power against the alternative hypothesis of non-stable preferences, and an empirical study based on the Russian Longitudinal Monitoring Survey, in which we find evidence that the stable-preference assumption does not hold.

Quantile-based Smooth Transition Value at Risk Estimation

Financial time series often exhibit asymmetric dynamic behaviour with respect to past shocks, not only in their conditional means but also in their conditional volatilities. In the context of Value at Risk estimation, the main interest lies in the conditional quantile, which in a location-scale model is defined as the product of the conditional volatility and the respective quantile of the innovation distribution. Traditional likelihood-based approaches require a parametric assumption about the innovation distribution, not only for estimation of the conditional volatility but also to determine the quantile of the innovation, which together can amplify forecast errors if this distribution is misspecified. Instead, we directly model and estimate the conditional quantile of a time series as a general autoregressive process and additionally allow for the aforementioned asymmetric behaviour by letting the data-generating process be governed by a convex combination of two regimes characterized by a smooth transition function. We propose a two-step estimation procedure based on a sieve estimator approximating conditional volatility using composite quantile regression; the estimated conditional volatility is then used in the generalized autoregressive conditional quantile estimation. The proposed estimator is shown to be consistent and asymptotically normal. Monte Carlo simulations indicate improved prediction and forecast accuracy compared to a standard smooth transition GARCH model. The presence of two regimes is also confirmed in the empirical application, which covers the asset returns of the USD/GBP exchange rate and the German equity index.
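In the location-scale notation referred to above, the object of interest can be sketched as follows (standard notation, assumed here rather than quoted from the paper):

    % Location-scale model: y_t = \sigma_t \varepsilon_t with \varepsilon_t i.i.d.
    % and \sigma_t measurable with respect to the past; the conditional
    % \tau-quantile (the Value at Risk at level \tau) then factorizes as
    Q_{y_t}(\tau \mid \mathcal{F}_{t-1}) = \sigma_t \, Q_{\varepsilon}(\tau) .

Misspecifying the distribution of \varepsilon thus distorts both factors, which motivates modelling the conditional quantile directly.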

Bachelor Thesis - Vienna University of Economics & Business (2010)

Exchange Rate Volatility and its Impact on International Trade
For a long time it was taken for granted that exchange rate movements lead to a decline in international trade, since agents are assumed to be risk-averse. In this Bachelor thesis I present an option-based approach illustrating that exchange rate variation may have a positive effect on international trade if exports are modelled as a real option. After introducing a theoretical multi-period model, I provide some empirical evidence using a macro panel with a fixed number of countries and large T. Both high-frequency realized volatility and estimated GARCH(1,1) volatility of the exchange rate (the standard recursion is sketched below) are considered, to incorporate the forward-looking behaviour of agents.
Awarded the WU Talenta Award for the best bachelor thesis.
Download Slides (German); to get a copy of the paper (English), please contact me.
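For reference, the GARCH(1,1) recursion mentioned in the abstract is the standard one (sketched here in generic notation, not copied from the thesis):

    % Standard GARCH(1,1) conditional variance recursion:
    \sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2,
    \qquad \omega > 0, \; \alpha, \beta \ge 0 .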

Teaching

2017/2018 Oxford University, Department of Economics
Michaelmas, MPhil Econometrics: Introduction to R and Stata
Michaelmas, MPhil Advanced Econometrics: Nonparametric Methods
2016/2017 Oxford University, Department of Economics
Hilary & Trinity, MPhil Econometrics (Teaching Assistant)
2015/2016 Tilburg School of Economics and Management
350912, Statistics 2 (Teaching Assistant)
35B111, Introduction Econometrics (Teaching Assistant)
2014/2015 Tilburg School of Economics and Management
350931, Statistics for Pre-master (Teaching Assistant)
350912, Statistics 2 (Teaching Assistant)
35B111, Introduction Econometrics (Teaching Assistant)
2013/2014 Tilburg School of Economics and Management
35B113, Introduction Analysis & Probability Theory (Teaching Assistant)
35B111, Introduction Econometrics (Teaching Assistant)
2012/2013 Tilburg School of Economics and Management
350011, Statistics 1 (Teaching Assistant)
35B111, Introduction Econometrics (Teaching Assistant)
2009/2010 Vienna University of Economics and Business, Department of Mathematics and Statistics
Econometrics 1 (Teaching Assistant)

System & Software

I am currently running an individually built Linux system following Linux From Scratch, with kernel 4.4.0 and Fluxbox 1.3.2 as window manager. I can highly recommend building a Linux system from scratch. Most importantly, it can outperform stock distributions in terms of speed, because all software is optimized for the specific hardware at compile time. Secondly, following the step-by-step instructions not only helps you understand how Unix-type systems work and how everything ties together, but also maximizes the configurability of the system. Compiling everything might take a few days, but the end result is worth the effort. Fluxbox is a very slim (as opposed to the vast environments of KDE and GNOME) and fast window manager which is highly customizable and still easy on the eyes. As a LaTeX environment I use TeX Live 2015, which is pretty much standard on Linux, and as a text editor I prefer Vi over Emacs.

Numeric and Scientific Python

Until recently used mostly for engineering applications, SciPy has become a quickly developing open-source library offering a wide range of mathematical and statistical procedures. At the same time, full Python functionality can be used without the hassle of including any external libraries. There is a very easy-to-use interface for dynamic libraries, and even the possibility to write inline code in C, C++ or Fortran which is automatically compiled at runtime, making the package extremely powerful. While SciPy provides the scientific routines, it relies heavily on NumPy, which provides multidimensional data types such as arrays and matrices. For visualization of data there is the very powerful matplotlib library, and for estimation of a vast range of statistical models there is statsmodels, both built on the NumPy data types. For linear algebra support SciPy uses the two Fortran libraries BLAS and LAPACK. Unfortunately, Python still appears to be rather unknown in the scientific community (especially in econometrics) and hence comes with the disadvantage that not every available method is implemented.
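A minimal sketch of this stack on simulated (hypothetical) data; NumPy supplies the arrays, statsmodels the estimator, and SciPy a diagnostic test:

    # Minimal NumPy/SciPy/statsmodels sketch on simulated data.
    import numpy as np
    from scipy import stats
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    x = rng.normal(size=(200, 2))              # NumPy: multidimensional arrays
    y = 2.0 + x @ np.array([1.5, -0.7]) + rng.normal(size=200)

    X = sm.add_constant(x)                     # statsmodels: add an intercept column
    res = sm.OLS(y, X).fit()                   # ordinary least squares
    print(res.params)                          # estimates close to (2.0, 1.5, -0.7)

    print(stats.shapiro(res.resid))            # SciPy: normality test on the residuals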

R

R provides a comprehensive library for statistics and econometrics, including plotting and LaTeX tools (Sweave). Since this open-source package is widely used, there is a large development community, which not only ensures that everything is properly maintained but also gives scientists an incentive to provide their new tools as R packages. However, the language also has some shortcomings. Probably the most important ones are the lack of real object orientation, which can make code quite messy, especially in larger-scale applications, and the weak performance due to run-time interpretation. Although the latter is the case for almost all high-level scientific programming packages, R is still inferior to many other languages in terms of speed. However, there exists a nice interface to external libraries, which can be used for computationally expensive code. In addition, as always, by building the application from sources it can be optimized specifically for your machine (note: when compiling, make sure your gcc is configured with --enable-languages=c,c++,fortran). Packages are then easily downloaded and installed from the interface and are compiled locally on installation.

Haskell

As econometricians, we should seriously consider functional programming languages. Haskell seems to stand out as the most common one, especially in academia. As opposed to imperative programming languages like the C/Python family, Haskell, and functional programming languages in general, are pure in the sense that they treat functions strictly mathematically, using static typing and not allowing for side effects. In other words, calling the same function twice with the same arguments not only gives you the same return value but also leaves the state of the "outside world" unchanged. Similar to Python's generators, and as opposed to, for example, C, Haskell uses lazy evaluation, although in Haskell this is the default for all expressions (a Python analogue is sketched below). Other than learning a completely new perspective on programming, the main advantage of functional programming in general is that code can be parallelized with very little effort (compile with ghc -O2 --make hello.hs -threaded -rtsopts and run using ./hello +RTS -N4). Both are consequences of the very abstract mathematical design of the language.
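A rough Python analogue of lazy evaluation, using generators (in Haskell, laziness applies to all expressions by default, so infinite structures like this are ordinary values):

    # Python generators evaluate on demand, loosely analogous to Haskell's laziness.
    def naturals():
        """An infinite stream; elements are only produced when requested."""
        n = 0
        while True:
            yield n
            n += 1

    def take(k, stream):
        """Consume just the first k elements of a (possibly infinite) stream."""
        return [next(stream) for _ in range(k)]

    print(take(5, naturals()))   # [0, 1, 2, 3, 4]; the stream is never fully evaluated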

Compile from Sources: There are different implementations of Haskell compilers. It seems that GHC (the Glasgow Haskell Compiler) is about to become the modern standard. Among other standard libraries it requires the GNU MP bignum library (libgmp), which is built in a straightforward fashion. The only noteworthy point is that there is a problem with -march compiler flags, so CFLAGS should be unset prior to building. GHC itself is more difficult to compile, since it is written in Haskell; in order to compile it you need to bootstrap, which requires a bit of effort. First you need the GHC binary files (x86), which are compiled on a Debian system. Their configuration script therefore looks for libncurses.so.5. On other systems the required library is actually libncursesw.so.5, so if you don't have libncurses.so.5, just create a symlink to the latter. Now install the binary compiler to a temporary directory (e.g. --prefix=/tmp/ghc), add its bin folder to the path, and finally compile GHC from sources. To create and install packages, GHC uses cabal-install, which is itself a cabal package and hence has to be bootstrapped as well, using the enclosed shell script. It is installed in ~/.cabal and needs to be added to the path. After initially running cabal update, packages can be installed using cabal install <packagename>.

Ox

Ox is probably one of the most convenient programming packages for time-series econometrics. Its syntax and object-oriented logic are closely related to C++. The package offers a lot of interfaces and base classes, which makes it very easy to implement your own estimation procedures and Monte Carlo studies: one only has to override some virtual functions, and the rest is done by the API. The language is very scalable and can use extensions, defined by a source or "pre-compiled" library and a corresponding header file. It should be mentioned that, even though the code is interpreted at runtime, I found the API relatively fast, which is fortunate as the interface for external libraries is rather tedious to use. For extended functionality such as plotting, the package is not free.

Gnuplot

Gnuplot can be summarized as a scripting language which provides plotting functionality for both graphs and figures. It works great with LaTeX, as it allows the text to be managed by the TeX environment while the drawing itself is imported as an EPS or PDF graphic (Ghostscript is needed). Once the dimensions are figured out (grep for textwidth in the LaTeX log file), one can endow the graphic with the same text and math fonts as the rest of the document. I use it (not only) for Ox, whose free version lacks plotting functionality. A rather ad-hoc 2D library implementation for dot, line and bar graphs can be downloaded here. Note that this is at most a pre-alpha release and I recommend using it only as a reference.

GNU Make

make is not only mandatory for any larger-scale programming project but also significantly increases productivity in LaTeX projects. Make provides a beautiful way to handle dependencies between different sub-projects. Consider, for example, a LaTeX project that uses graphs created by gnuplot and tables produced by some statistical package. Make can then be configured such that all these inputs are regenerated automatically whenever the final PDF file is compiled; however, this is only done along dependency paths whose source files have changed since the last compilation. In combination with vi, generating a PDF file and all its dependencies is as simple as typing ":make" in command mode. My canonical Makefile can be downloaded here. Feel free to use it.

Mendeley

Mendeley is a very neat tool for managing a growing library of scientific papers. The tool provides features such as categorization and online lookup of meta information like author, pages, journal and year, and can automatically export BibTeX files. In addition, one can connect to a free online webspace and synchronize papers in order to make them available anywhere. The Linux version uses the GTK 2 graphics library.

Genealogy Project

Stefan Hubner, 2016, Tilburg: Topics in Nonparametric Identification and Estimation
Arthur van Soest, 1990, Tilburg: Micro-econometric models of consumer behaviour and the labour market
Arie Kapteyn, 1977, Leiden: A Theory of Preference Formation
Bernard M. S. van Praag, 1968, Amsterdam: Individual Welfare and the Theory of Consumer Behaviour
Mars J. S. Cramer, 1961, Amsterdam: Model of the Ownership of Major Consumer Durables with an Application to some Findings of the 1953 Oxford Savings Survey
Pieter de Wolff, 1939, Amsterdam
Jan Tinbergen, 1929, Leiden: Minimumproblemen in de natuurkunde en de economie
Paul Ehrenfest, 1904, Wien: Die Bewegung starrer Körper in Flüssigkeiten und die Mechanik von Hertz
Ludwig Boltzmann, 1866, Wien: Über die mechanische Bedeutung des zweiten Hauptsatzes der mechanischen Wärmetheorie
Jožef Stefan, 1858, Wien: Bemerkungen über Absorption der Gase
Andreas von Ettingshausen, 1817, Wien: Electromagnetic Machines
Ignaz Lindner, Wien: Logarithmisches und logarithmisch-trigonometrisches Taschenbuch
Georg J. B. V. von Vega, 1775, Ljubljana: Vorlesungen über die Mathematik
Joseph G. J. von Maffei, 1762, Graz
Nikolaus B. P. von Neuhaus, 1753, Klagenfurt: Insecta Musei Graecensis

News

Placement and Ph.D. Thesis (Sep 28, 2016)

As of September 1, I am a Postdoctoral Research Fellow at Oxford University in association with the Department of Economics and Nuffield College.

I will defend my Ph.D. Thesis in Tilburg on November 18, 2016 at 2pm. The members of my committee are Arthur van Soest, Pavel Cizek, Jaap Abbring, Laurens Cherchye and Frederic Vermeulen. The defense is open to the public.

Job Market (Sep 23, 2015)

I expect to finish my Ph.D. thesis in Summer 2016. Thus, I will be on the academic job market this year, attending the AEA/ASSA annual meeting in San Francisco on January 3-5, as well as the annual meeting of the Spanish Economic Association in Girona on December 10-12 and the Ph.D. presentation meeting of the Royal Economic Society in London on January 8-9.

I am available for interviews at any of these sessions.

ETRICS package for Python 3 (Nov 25, 2012)

I recently started developing an econometrics package written in Python, which I published on Bitbucket. The idea of this project is to roll out a scalable econometrics class library for Python. This toolkit is supposed to provide both implementations of the most commonly used econometric models and base classes and interfaces for new models and methods to be developed. The functionality will rely heavily on the Python libraries NumPy, SciPy, matplotlib and statsmodels.

You can access the current version at github.com/StefanHubner/etrics. The code is released under the GNU General Public License, meaning that everyone is welcome to use it, fork it and submit changes or even implement new methods. So far, the package contains a simulation class, both linear and nonparametric (local polynomial) quantile regression estimation routines, and a generic class for systems of equations.

I composed a small IPython notebook showing how the Python etrics package can be used.
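As an illustration of the kind of routine the package targets, here is a minimal linear quantile regression using only statsmodels (this is not the etrics API itself; see the notebook above for actual usage):

    # Linear quantile regression with statsmodels (NOT the etrics API itself).
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.regression.quantile_regression import QuantReg

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 10.0, size=500)
    y = 1.0 + 0.5 * x + (0.5 + 0.1 * x) * rng.standard_normal(500)  # heteroskedastic

    X = sm.add_constant(x)                      # design matrix with intercept
    for q in (0.25, 0.5, 0.75):
        fit = QuantReg(y, X).fit(q=q)           # minimizes the check-function loss
        print(q, fit.params)                    # slope increases across quantiles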

Contact