# Coding 1 - Estimate Pi¶

To get comfortable with PyTorch, we will be estimating Pi.
Given n random samples from a uniform distribution, the goal is to approximate Pi as well as possible.

Rules:

• No functions other than elementary operations (+, -, *, /, **, sqrt)
• No numpy
• No direct computation of pi, or constants that produce pi
• for loops are allowed

In [ ]:
def estimate_pi(n):
    """
    Inputs:
        n (int) number of samples to use.

    Returns:
        (float) pi approximation.
    """
    import torch
    x = torch.rand(n)

    # Placeholder baseline: the mean of Uniform(0, 1) samples is about 0.5,
    # not pi. Replace this with a real estimator.
    result = x.mean()

    return float(result)

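One estimator that stays within the rules is a Monte Carlo integral: for x drawn from Uniform(0, 1), the expectation of sqrt(1 - x^2) is the area of a quarter of the unit circle, pi/4, so averaging that quantity and multiplying by 4 approximates pi using only the allowed operations. The sketch below is one possible approach, not the intended solution; the name `estimate_pi_sketch` is ours, and whether `.mean()` counts as elementary is a judgment call (it is just a sum and a divide, and could be replaced with the explicitly allowed for loop).

```python
import torch

def estimate_pi_sketch(n):
    # x ~ Uniform(0, 1); sqrt(1 - x^2) traces the quarter circle, whose
    # area is pi / 4, so 4 * E[sqrt(1 - x^2)] = pi.
    x = torch.rand(n)
    return float(4.0 * torch.sqrt(1.0 - x ** 2).mean())
```

The estimate's standard error shrinks like 1/sqrt(n), so larger n should give a visibly better approximation in the evaluation below.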

### Evaluation¶

The estimation function will be evaluated on squared error from the actual value of pi.
We will run your code for varying values of n, to see how the approximation (hopefully) gets better as n grows.

In [ ]:
M_SAMPLES = 100

def squared_error(pi_func, n_samples):
    """
    Computes the mean squared error from pi over M_SAMPLES independent estimates.

    Inputs:
        pi_func (function) given n_samples, returns approximation of pi.
        n_samples (int) number of samples to draw from a uniform distribution.

    Returns:
        (float) mean squared error.
    """
    from numpy import mean, pi

    return mean([(pi - pi_func(n_samples)) ** 2 for _ in range(M_SAMPLES)])

def evaluate(pi_func):
    import numpy as np
    import matplotlib.pyplot as plt

    # 50 sample sizes spaced logarithmically from 10 to 100,000.
    ns = np.logspace(1, 5, 50)
    errors = [squared_error(pi_func, int(n)) for n in ns]

    plt.title('Error: %.5f %s' % (np.mean(errors), pi_func))
    plt.gca().set_ylabel('Estimation Error')
    plt.gca().set_xlabel('n Samples (log)')
    plt.gca().set_xscale('log')
    plt.plot(ns, errors, 'o--')
    plt.show()

    return np.mean(errors)


Test out different pi estimators here: evaluate is a function that takes in another function that estimates pi.

In [ ]:
%matplotlib inline
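The evaluation harness can also be sanity-checked without the plotting wrapper by calling squared_error directly. In the sketch below, `dartboard_pi` is a hypothetical estimator written with numpy purely for brevity; since the assignment forbids numpy inside the estimator, it would not be a valid submission, but it illustrates how the error should shrink roughly like 1/n as n grows.

```python
import numpy as np

M_SAMPLES = 100  # number of independent trials to average over

def dartboard_pi(n_samples):
    # Hypothetical estimator: fraction of uniform points in the unit square
    # that land inside the quarter circle, times 4.
    x = np.random.rand(n_samples)
    y = np.random.rand(n_samples)
    return 4.0 * np.mean(x ** 2 + y ** 2 <= 1.0)

def squared_error(pi_func, n_samples):
    # Mean squared error over M_SAMPLES independent estimates,
    # mirroring the notebook's evaluation function.
    return np.mean([(np.pi - pi_func(n_samples)) ** 2
                    for _ in range(M_SAMPLES)])

print(squared_error(dartboard_pi, 100))
print(squared_error(dartboard_pi, 100000))
```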