{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Week 7 Lab "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# **Monte Carlo Simulation of a Simpel Schooling Model**\n",
"\n",
"We will continue with Card's (1993) return to schooling estimation in the week 9 lab.\n",
"\n",
"This week and next week we will run Monte Carlo Simulations: We will create the *random sample* from a data generating process of our choosing and then estimate the model many many times to study the properties of the estimator.\n",
"\n",
"For that purpose, create a sample according to the following DGP:\n",
"\n",
"$$\n",
"\\begin{align*}\n",
" u & \\sim \\mathcal{N}(0, \\sigma_u^2) \\\\\n",
" A & \\sim \\mathcal{N}(0, \\sigma_A^2) && \\text{(ability)}\\\\\n",
" S &= \\pi + A && \\text{(schooling)}\\\\\n",
" Y &= \\exp (\\beta_1 + \\beta_2 S + \\beta_3 A + u) && \\text{(earnings)}\n",
"\\end{align*}\n",
"$$\n",
"\n",
"When you take logarithm of $Y$ you obtain $\\ln Y = \\beta_1 + \\beta_2 S + \\beta_3 A + u$. Our main focus is the return to schooling $\\beta_2$.\n",
"\n",
"In this DGP, a person's ability follows a normal distribution (not realisitic), schooling is centered around a mean of $\\pi$ years and also normally distributed (totally unrealistic), and earnings are determined by a combination of schooling and ability and a random error term. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 1\n",
"\n",
"Calibrate your model using Card's (1993) data.\n",
"\n",
"Set $\\beta_3=0$ (for now) to rule out endogeneity.\n",
"\n",
"Calibrate the other parameters to be in line with Card's (1993) data. The following table offers some guidance:\n",
"\n",
"(Note: you can round crudely to obtain simple calibrated values.)\n",
"\n",
"| | Base your calibration on the following info from the week 3 notebook | Calibrated values |\n",
"|---------------------------|-------------------------------------------------------------------------------|--------------------|\n",
"| $\\beta_1$ | $\\widehat{\\beta}_1$ | 0.00 |\n",
"| $\\beta_2$ | $\\widehat{\\beta}_2$ | 0.00 |\n",
"| $\\pi$ | sample average of schooling | 0.00 |\n",
"| $\\sigma_u^2$ | conditional variance of log wages for people with 12 years of schooling | 0.00 |\n",
"| $\\sigma_A^2$ | sample variance of schooling | 0.00 |\n",
"\n",
"Enter your calibrated values in the last column."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 2\n",
"\n",
"Write a function `schooling_sample` that takes\n",
"\n",
"* two arguments `b2` and `n` for $\\beta_2$ and sample size $n$;\n",
"\n",
"* keyword arguments `p`, `b1`, `b3`, `su`, `sa` for $\\pi$, $\\beta_1$, $\\beta_3$, $\\sigma_u^2$, and $\\sigma_A^2$ (set to equal your calibrated values)\n",
"\n",
" see https://julia.quantecon.org/getting_started_julia/julia_essentials.html#optional-and-keyword-arguments\n",
"\n",
"and returns the random sample `S` and `Y` following the above DGP. (Note: `Y` is the **logarithm** of wages.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"using Distributions, Random, Plots"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"\n",
"\n",
"\n",
"\n"
]
},
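{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "A minimal sketch of `schooling_sample` (the default values `p = 12.0`, `b1 = 5.0`, `b3 = 0.0`, `su = 0.3`, `sa = 7.0` are illustrative placeholders, not Card's (1993) numbers; replace them with your calibrated values). Note that `su` and `sa` are variances, so the standard deviations passed to `Normal` are their square roots:"
 ]
},
{
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
  "# Sketch only: the default parameter values are illustrative placeholders\n",
  "function schooling_sample(b2, n; p = 12.0, b1 = 5.0, b3 = 0.0, su = 0.3, sa = 7.0)\n",
  "    u = rand(Normal(0, sqrt(su)), n)   # error term, variance su\n",
  "    A = rand(Normal(0, sqrt(sa)), n)   # ability, variance sa\n",
  "    S = p .+ A                         # schooling\n",
  "    Y = b1 .+ b2 .* S .+ b3 .* A .+ u  # log wages\n",
  "    return S, Y\n",
  "end"
 ]
},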
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 3\n",
"\n",
"Use the function `schooling_sample` to create a random sample of size 100 using your calibrated value for $\\beta_2$. Then Estimate $\\beta_2$ by OLS."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"\n",
"\n",
"\n"
]
},
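{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "One possible sketch, assuming `schooling_sample` from Exercise 2 is defined (`0.07` stands in for your calibrated $\\beta_2$):"
 ]
},
{
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
  "n = 100\n",
  "S, Y = schooling_sample(0.07, n)   # 0.07 is an illustrative value; use your calibrated β₂\n",
  "X = [ones(n) S]                    # regressors: intercept and schooling\n",
  "b_ols = X \\ Y                      # least-squares solve\n",
  "b_ols[2]                           # OLS estimate of β₂"
 ]
},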
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 4\n",
"\n",
"Use the function `schooling_sample` to create a random sample of size 100,000 using your calibrated value for $\\beta_2$. Then Estimate $\\beta_2$ by OLS."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 5\n",
"\n",
"Create `r` random samples of size `n` and plot a histogram of the OLS estimator.\n",
"\n",
"Be conservative with your initial choice of `r` and `n`, don't stress out your computer! For example `r=100` and `n=10` should be a safe starting point. Can you plot the corresponding histogram?\n",
"\n",
"Eventually it would be great if `r=1,000` and `n` is in `{30, 100, 1000, 10,000}`.\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 6\n",
"\n",
"From the lecture you know the asmptotic distribution of $\\widehat{\\beta}_2$. Write down its specific form below, using $\\LaTeX$. \n",
"\n",
"(Hint: The asymptotic distribution should contain, among other things, $\\sigma_u^2$ and $\\sigma_A^2$.)\n",
"\n",
"Can you superimpose the distribution in the above histogram?\n",
"\n",
"The juxtaposition between the histogram and the analytical distribution shows you the discrepancy between the exact small sample distribution (represented by the histogram) and the asymptotic approximation (represented by the analytical distribution)."
]
}
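,
{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "A sketch of how the asymptotic density could be superimposed on the Exercise 5 histogram, assuming the histogram of `b2hat` from samples of size `n` is already plotted. Here `avar` is a placeholder for the asymptotic variance you derive (per the hint, it should involve $\\sigma_u^2$ and $\\sigma_A^2$), and the numerical values are illustrative:"
 ]
},
{
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
  "b2, n, su, sa = 0.07, 10, 0.3, 7.0     # illustrative values; use your calibration\n",
  "avar = su / sa                         # placeholder: fill in your derived asymptotic variance\n",
  "approx = Normal(b2, sqrt(avar / n))    # approximate distribution of the OLS estimator\n",
  "s = std(approx)\n",
  "xs = range(b2 - 4s, b2 + 4s, length = 200)\n",
  "plot!(xs, pdf.(approx, xs), label = \"asymptotic approximation\")"
 ]
}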
],
"metadata": {
"interpreter": {
"hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
},
"kernelspec": {
"display_name": "Julia 1.7.2",
"language": "julia",
"name": "julia-1.7"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.7.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}