Commit

Deploying to gh-pages from @ f58cca9 🚀
danhalligan committed Aug 27, 2024
1 parent b2bcd45 commit 4ac8074
Showing 12 changed files with 93 additions and 93 deletions.
10 changes: 5 additions & 5 deletions 03-linear-regression.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion 05-resampling-methods.md
@@ -170,7 +170,7 @@ mean(store)
```
-## [1] 0.6308
+## [1] 0.6355
```

The probability of including $4$ when resampling numbers $1...100$ is close to
$1 - (1 - 1/100)^{100} \approx 0.634$.
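This value can be checked both analytically and by simulation (a minimal sketch; the sampling scheme mirrors the lab code above, and the seed is arbitrary):

``` r
# Analytic probability that a given observation (here, 4) appears at
# least once in a bootstrap sample of size 100 drawn from 1:100
p_analytic <- 1 - (1 - 1/100)^100
round(p_analytic, 4)  # 0.634

# Simulation: repeat the bootstrap draw and record whether 4 appears
set.seed(42)  # arbitrary seed
store <- rep(NA, 10000)
for (i in 1:10000) {
  store[i] <- sum(sample(1:100, replace = TRUE) == 4) > 0
}
mean(store)  # close to the analytic value
```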
4 changes: 2 additions & 2 deletions 08-tree-based-methods.md
@@ -509,7 +509,7 @@ bartfit <- gbart(Carseats[train, 2:11], Carseats[train, 1],
```
## done 800 (out of 1100)
## done 900 (out of 1100)
## done 1000 (out of 1100)
-## time: 2s
+## time: 3s
## trcnt,tecnt: 1000,1000
```

@@ -1150,7 +1150,7 @@ bart <- gbart(College[train, pred], College[train, "Outstate"],
```
## done 800 (out of 1100)
## done 900 (out of 1100)
## done 1000 (out of 1100)
-## time: 4s
+## time: 3s
## trcnt,tecnt: 1000,1000
```

68 changes: 34 additions & 34 deletions 10-deep-learning.md
@@ -393,15 +393,15 @@ npred <- predict(nn, x[testid, ])
```
-## 6/6 - 0s - 61ms/epoch - 10ms/step
+## 6/6 - 0s - 55ms/epoch - 9ms/step
```

``` r
mean(abs(y[testid] - npred))
```

```
-## [1] 2.219039
+## [1] 2.269432
```

In this case, the neural network outperforms the linear regression model, having a
lower mean absolute error.
@@ -433,7 +433,7 @@ model <- application_resnet50(weights = "imagenet")

```
## Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
## 102967424/102967424 [==============================] - 1s 0us/step
```

@@ -721,15 +721,15 @@ kpred <- predict(model, xrnn[!istrain,, ])
```
-## 56/56 - 0s - 58ms/epoch - 1ms/step
+## 56/56 - 0s - 59ms/epoch - 1ms/step
```

``` r
1 - mean((kpred - arframe[!istrain, "log_volume"])^2) / V0
```

```
-## [1] 0.412886
+## [1] 0.4129694
```

Both models estimate the same number of coefficients/weights (16):
@@ -763,24 +763,24 @@ model$get_weights()
```
## [[1]]
## [,1]
-## [1,] -0.031145222
-## [2,] 0.101065643
-## [3,] 0.141815767
-## [4,] -0.004181504
-## [5,] 0.116010934
-## [6,] -0.003764492
-## [7,] 0.038601257
-## [8,] 0.078083567
-## [9,] 0.137415737
-## [10,] -0.029184511
-## [11,] 0.036070298
-## [12,] -0.821708620
-## [13,] 0.095548652
-## [14,] 0.511229098
-## [15,] 0.521453559
+## [1,] -0.032474127
+## [2,] 0.097779043
+## [3,] 0.178456694
+## [4,] -0.005626136
+## [5,] 0.121273242
+## [6,] -0.076247886
+## [7,] 0.035232600
+## [8,] 0.077857092
+## [9,] 0.163645267
+## [10,] -0.026966114
+## [11,] 0.032263778
+## [12,] -0.807968795
+## [13,] 0.095888853
+## [14,] 0.513532162
+## [15,] 0.496699780
##
## [[2]]
-## [1] -0.006889343
+## [1] -0.004996791
```
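The count of 16 follows from the shapes printed above: a $15 \times 1$ kernel plus a single bias, matching the 15 lag coefficients plus the intercept of the linear model. A quick sketch of the tally, using a mock list with the same shapes (with the fitted model, `model$get_weights()` would take the place of `w`):

``` r
# Mirror the printed weight shapes: a 15 x 1 kernel and a length-1 bias
w <- list(matrix(0, nrow = 15, ncol = 1), 0)

# Total trainable weights: 15 (kernel) + 1 (bias)
sum(sapply(w, length))  # 16
```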

The flattened RNN has a lower $R^2$ on the test data than our `lm` model
@@ -833,11 +833,11 @@ xfun::cache_rds({
```
-## 56/56 - 0s - 66ms/epoch - 1ms/step
+## 56/56 - 0s - 64ms/epoch - 1ms/step
```

```
-## [1] 0.4271516
+## [1] 0.4262716
```

This approach improves our $R^2$ over the linear model above.
@@ -906,11 +906,11 @@ xfun::cache_rds({
```
-## 56/56 - 0s - 133ms/epoch - 2ms/step
+## 56/56 - 0s - 134ms/epoch - 2ms/step
```

```
-## [1] 0.4405331
+## [1] 0.4429825
```

### Question 13
@@ -966,21 +966,21 @@ xfun::cache_rds({

```
## Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
## 17464789/17464789 [==============================] - 0s 0us/step
-## 782/782 - 16s - 16s/epoch - 20ms/step
-## 782/782 - 16s - 16s/epoch - 20ms/step
-## 782/782 - 16s - 16s/epoch - 20ms/step
-## 782/782 - 16s - 16s/epoch - 20ms/step
+## 782/782 - 15s - 15s/epoch - 19ms/step
+## 782/782 - 15s - 15s/epoch - 20ms/step
+## 782/782 - 15s - 15s/epoch - 20ms/step
+## 782/782 - 15s - 15s/epoch - 20ms/step
```



| Max Features| Accuracy|
|------------:|--------:|
-| 1000| 0.86084|
-| 3000| 0.87224|
-| 5000| 0.87460|
-| 10000| 0.86180|
+| 1000| 0.85324|
+| 3000| 0.87808|
+| 5000| 0.88076|
+| 10000| 0.86936|

Varying the dictionary size does not make a substantial impact on our estimates
of accuracy. However, the models do take a substantial amount of time to fit and
Binary file modified 10-deep-learning_files/figure-html/unnamed-chunk-12-1.png
Binary file modified 10-deep-learning_files/figure-html/unnamed-chunk-21-1.png
