Merge pull request #104 from DistanceDevelopment/optimist2
CRAN release mrds-3.0.0
LHMarshall authored Nov 6, 2024
2 parents 324180c + 00ac47d commit d05f8c3
Showing 48 changed files with 1,881 additions and 154 deletions.
18 changes: 13 additions & 5 deletions DESCRIPTION
@@ -3,15 +3,22 @@ Maintainer: Laura Marshall <[email protected]>
License: GPL (>=2)
Title: Mark-Recapture Distance Sampling
LazyLoad: yes
- Author: Jeff Laake <[email protected]>, David Borchers
-   <[email protected]>, Len Thomas <[email protected]>, David
-   Miller <[email protected]>, Jon Bishop and Jonah McArthur
+ Authors@R: c(
+   person("Laura", "Marshall", email = "[email protected]" , role = "cre"),
+   person("Jeff", "Laake", role = "aut"),
+   person("David", "Miller", email = "[email protected]", role = "aut"),
+   person("Felix", "Petersma", email = "[email protected]", role = "aut"),
+   person("Len", "Thomas", email = "[email protected]", role = "ctb"),
+   person("David", "Borchers", email = "[email protected]", role = "ctb"),
+   person("Jon", "Bishop", role = "ctb"),
+   person("Jonah", "McArthur", role = "ctb"),
+   person("Eric", "Rexstad", email = "[email protected]", role = "rev"))
Description: Animal abundance estimation via conventional, multiple covariate
and mark-recapture distance sampling (CDS/MCDS/MRDS). Detection function
fitting is performed via maximum likelihood. Also included are diagnostics
and plotting for fitted detection functions. Abundance estimation is via a
Horvitz-Thompson-like estimator.
- Version: 2.3.0
+ Version: 3.0.0
URL: https://github.com/DistanceDevelopment/mrds/
BugReports: https://github.com/DistanceDevelopment/mrds/issues
Depends:
@@ -21,12 +28,13 @@ Imports:
mgcv,
methods,
numDeriv,
+ nloptr,
Rsolnp
Suggests:
testthat,
covr,
knitr,
rmarkdown,
bookdown
- RoxygenNote: 7.2.3
+ RoxygenNote: 7.3.2
Encoding: UTF-8
1 change: 1 addition & 0 deletions NAMESPACE
@@ -70,6 +70,7 @@ export(solvecov)
export(varn)
import(Rsolnp)
import(mgcv)
+ import(nloptr)
import(optimx)
importFrom(grDevices,dev.interactive)
importFrom(grDevices,dev.new)
15 changes: 14 additions & 1 deletion NEWS
@@ -1,9 +1,22 @@
mrds 3.0.0
----------

New features

* The default R optimiser used when (strict) monotonicity of the detection function is enforced has been improved. Monotonicity is enforced only when the detection function has adjustment terms but no covariates. The new optimiser is a Sequential Least Squares Programming (SLSQP) algorithm from the 'nloptr' package. This optimiser uses analytical rather than approximate gradients and is therefore more robust and faster. Users can still make ddf() use the previous default R optimiser by specifying mono.method = 'solnp' in the 'control' argument; the default for mono.method is 'slsqp'. Because of this and other small improvements and bug fixes in the monotonic optimiser, results from ddf() might change even when using the old solnp optimiser. In most cases, however, we do not expect meaningful changes in the estimates.

Bug Fixes

* The summary of the fitting object now correctly prints the optimiser used when monotonicity is enforced ('slsqp' or 'solnp').
* check.mono() now uses the same point locations as the optimiser. It also uses the same tolerance as the optimiser (1e-8) and applies this tolerance when checking (strict) monotonicity, and when checking 0 <= g(x) <= 1.


mrds 2.3.0
----------

New Features

- * The 'P2' estimator is now the default for estimating the encouter rate variance for point transect surveys. (Issue #65)
+ * The 'P2' estimator is now the default for estimating the encounter rate variance for point transect surveys. (Issue #65)

Bug Fixes

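The optimiser switch described in the NEWS entry above can be exercised from R. A hypothetical sketch, assuming the mrds package is installed and `egdata` is a placeholder data frame of observed perpendicular distances with a `distance` column; `mono.method` is the new control option introduced in this release:

```r
library(mrds)

# Monotonicity is enforced only for detection functions with adjustment
# terms and no covariates, e.g. half-normal + order-2 cosine adjustment.
fit.slsqp <- ddf(dsmodel = ~mcds(key = "hn", adj.series = "cos",
                                 adj.order = 2),
                 data = egdata, method = "ds",
                 meta.data = list(width = 4, mono = TRUE, mono.strict = TRUE),
                 control = list(mono.method = "slsqp"))  # new default

# Revert to the pre-3.0.0 optimiser for comparison; estimates may differ
# slightly even under solnp because of other fixes in this release.
fit.solnp <- ddf(dsmodel = ~mcds(key = "hn", adj.series = "cos",
                                 adj.order = 2),
                 data = egdata, method = "ds",
                 meta.data = list(width = 4, mono = TRUE, mono.strict = TRUE),
                 control = list(mono.method = "solnp"))

summary(fit.slsqp)  # summary now reports which monotonic optimiser was used
```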
16 changes: 16 additions & 0 deletions R/adj.cos.R
@@ -0,0 +1,16 @@
#' Cosine adjustment term, not the series.
#'
#' For internal use only -- not to be called by 'mrds' or 'Distance' users
#' directly.
#'
#' @param distance perpendicular distance vector/scalar
#' @param scaling scale parameter
#' @param adj.order the adjustment order
#'
#' @returns scalar or vector containing the cosine adjustment term for every
#' value in \code{distance} argument
#'
#' @author Felix Petersma
adj.cos <- function(distance, scaling, adj.order) {
return(cos(adj.order * pi * distance / scaling))
}
16 changes: 16 additions & 0 deletions R/adj.herm.R
@@ -0,0 +1,16 @@
#' Hermite polynomial adjustment term, not the series.
#'
#' For internal use only -- not to be called by 'mrds' or 'Distance' users
#' directly.
#'
#' @param distance perpendicular distance vector/scalar
#' @param scaling scale parameter
#' @param adj.order the adjustment order
#'
#' @returns scalar or vector containing the Hermite adjustment term for every
#' value in \code{distance} argument
#'
#' @author Felix Petersma
adj.herm <- function(distance, scaling, adj.order) {
return(hermite.poly((distance / scaling), adj.order))
}
16 changes: 16 additions & 0 deletions R/adj.poly.R
@@ -0,0 +1,16 @@
#' Simple polynomial adjustment term, not the series.
#'
#' For internal use only -- not to be called by 'mrds' or 'Distance' users
#' directly.
#'
#' @param distance perpendicular distance vector/scalar
#' @param scaling scale parameter
#' @param adj.order the adjustment order
#'
#' @returns scalar or vector containing the polynomial adjustment term for every
#' value in \code{distance} argument
#'
#' @author Felix Petersma
adj.poly <- function(distance, scaling, adj.order) {
return((distance / scaling) ^ adj.order)
}
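For intuition, the cosine and simple-polynomial terms above can be reproduced and evaluated with a few lines of base R (the Hermite variant additionally relies on mrds's internal `hermite.poly()` helper, so it is omitted here):

```r
# Standalone copies of the adjustment terms defined above (base R only).
adj.cos <- function(distance, scaling, adj.order) {
  cos(adj.order * pi * distance / scaling)
}
adj.poly <- function(distance, scaling, adj.order) {
  (distance / scaling)^adj.order
}

x <- c(0, 0.5, 1)                        # distances, with scaling w = 1
adj.cos(x, scaling = 1, adj.order = 2)   # cos(2*pi*x): 1, -1, 1
adj.poly(x, scaling = 1, adj.order = 4)  # x^4: 0, 0.0625, 1
```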
41 changes: 41 additions & 0 deletions R/adj.series.grad.cos.R
@@ -0,0 +1,41 @@
#' Series of the gradient of the cosine adjustment series w.r.t. the scaled
#' distance.
#'
#' For internal use only -- not to be called by 'mrds' or 'Distance' users
#' directly.
#'
#' @param distance perpendicular distance vector/scalar
#' @param scaling scale parameter
#' @param adj.order the adjustment order
#' @param adj.parm vector of parameters (a_j)
#' @param adj.exp boolean, defaults to FALSE
#'
#' @return scalar or vector containing the gradient of the cosine adjustment
#' series for every value in \code{distance} argument
#'
#' @author Felix Petersma
adj.series.grad.cos <- function(distance, scaling = 1, adj.order,
adj.parm = NULL, adj.exp = FALSE){

# Check the adjustment parameters
if(is.null(adj.parm)){
adj.parm <- as.vector(rep(1, length(adj.order)))
}

adj.order <- as.vector(adj.order)

cossum <- 0

for(i in seq_along(adj.order)){
cossum <- cossum +
(adj.parm[i] * -(adj.order[i] * pi * sin(adj.order[i] * pi *
distance / scaling)))
}

# if adj.exp return exp(cossum) to keep f(x)>0
if(adj.exp){
return(exp(cossum))
}else{
return(cossum)
}
}
46 changes: 46 additions & 0 deletions R/adj.series.grad.herm.R
@@ -0,0 +1,46 @@
#' Series of the gradient of the Hermite polynomial adjustment series w.r.t. the
#' scaled distance.
#'
#' For internal use only -- not to be called by 'mrds' or 'Distance' users
#' directly.
#'
#' @param distance perpendicular distance vector/scalar
#' @param scaling scale parameter
#' @param adj.order the adjustment order
#' @param adj.parm vector of parameters (a_j)
#' @param adj.exp boolean, defaults to FALSE
#'
#' @return scalar or vector containing the gradient of the Hermite adjustment
#' series for every value in \code{distance} argument
#'
#' @author Felix Petersma
adj.series.grad.herm <- function(distance, scaling = 1, adj.order,
adj.parm = NULL, adj.exp = FALSE){

# Check the adjustment parameters
if(is.null(adj.parm)){
adj.parm <- as.vector(rep(1, length(adj.order)))
}

adj.order <- as.vector(adj.order)

hermsum <- 0

for(i in seq_along(adj.order)){
if (adj.order[i] - 1 == 0) {
hermsum <- hermsum +
(adj.parm[i] * adj.order[i] * 1)
} else {
hermsum <- hermsum +
(adj.parm[i] * adj.order[i] * hermite.poly((distance / scaling),
adj.order[i] - 1))
}
}

# if adj.exp return exp(hermsum) to keep f(x)>0
if(adj.exp){
return(exp(hermsum))
}else{
return(hermsum)
}
}
45 changes: 45 additions & 0 deletions R/adj.series.grad.poly.R
@@ -0,0 +1,45 @@
#' Series of the gradient of the simple polynomial adjustment series w.r.t. the
#' scaled distance.
#'
#' For internal use only -- not to be called by 'mrds' or 'Distance' users
#' directly.
#'
#' @param distance perpendicular distance vector/scalar
#' @param scaling scale parameter
#' @param adj.order the adjustment order
#' @param adj.parm vector of parameters (a_j)
#' @param adj.exp boolean, defaults to FALSE
#'
#' @return scalar or vector containing the gradient of the polynomial adjustment
#' series for every value in \code{distance} argument
#'
#' @author Felix Petersma
adj.series.grad.poly <- function(distance, scaling = 1, adj.order,
adj.parm = NULL, adj.exp = FALSE){

# Check the adjustment parameters
if(is.null(adj.parm)){
adj.parm <- as.vector(rep(1, length(adj.order)))
}

adj.order <- as.vector(adj.order)

polysum <- 0

for(i in seq_along(adj.order)){
if (adj.order[i] - 1 == 0) {
polysum <- polysum +
(adj.parm[i] * adj.order[i] * 1)
} else {
polysum <- polysum +
(adj.parm[i] * adj.order[i] * (distance/scaling) ^ (adj.order[i] - 1))
}
}

# if adj.exp return exp(polysum) to keep f(x)>0
if(adj.exp){
return(exp(polysum))
}else{
return(polysum)
}
}
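These analytic gradients are what let the SLSQP optimiser avoid finite-difference approximations. A self-contained check, with arbitrary illustrative orders and parameters, that the cosine-series gradient formula above (taken w.r.t. the scaled distance u = distance/scaling, and ignoring the `adj.exp` branch) agrees with a central finite difference:

```r
# Cosine adjustment series and its analytic gradient w.r.t. u = x/scaling,
# mirroring adj.series.grad.cos() above.
series.cos <- function(u, ord, a) sum(a * cos(ord * pi * u))
grad.cos   <- function(u, ord, a) sum(a * -(ord * pi) * sin(ord * pi * u))

u   <- 0.3
ord <- c(2, 3)        # adjustment orders
a   <- c(0.6, -0.2)   # adjustment parameters a_j

h  <- 1e-6            # central finite-difference step
fd <- (series.cos(u + h, ord, a) - series.cos(u - h, ord, a)) / (2 * h)

abs(fd - grad.cos(u, ord, a))  # agreement to within finite-difference error
```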
37 changes: 26 additions & 11 deletions R/check.mono.R
@@ -23,7 +23,7 @@
#' range (left to right truncation points).
#' @param n.pts number of equally-spaced points between left and right
#' truncation at which to evaluate the detection function (default 100)
- #' @param tolerance numerical tolerance for monotonicity checks (default 1e-6)
+ #' @param tolerance numerical tolerance for monotonicity checks (default 1e-8)
#' @param plot plot a diagnostic highlighting the non-monotonic areas (default
#' \code{FALSE})
#' @param max.plots when \code{plot=TRUE}, what is the maximum number of plots
@@ -36,12 +36,12 @@
#' non-monotonic.
#'
#' @keywords utility
- #' @author David L. Miller
+ #' @author David L. Miller, Felix Petersma
#' @importFrom stats as.formula model.matrix
#' @importFrom graphics polygon rug
#' @importFrom grDevices rgb
#' @export
- check.mono <- function(df, strict=TRUE, n.pts=100, tolerance=1e-6, plot=FALSE,
+ check.mono <- function(df, strict=TRUE, n.pts=100, tolerance=1e-8, plot=FALSE,
max.plots=6){

# extract the ddf object from the fitted model
@@ -55,8 +55,13 @@ check.mono <- function(df, strict=TRUE, n.pts=100, tolerance=1e-6, plot=FALSE,
right.trunc <- df$meta.data$width
left.trunc <- df$meta.data$left

- # generate distances between truncation points
- x <- seq(left.trunc, right.trunc, len=n.pts)
+ # generate distances between truncation points
+ # x <- seq(left.trunc, right.trunc, len=n.pts) # commented out by Felix
+
+ ## FTP: Len and I discussed that check.mono should use the same points as
+ ## MCDS and the fitting, which are determined through getRefPoints() in
+ ## detfct.fit.mono.R
+ x <- getRefPoints(no_d = n.pts, int.range = c(left.trunc, right.trunc))

# grab the unique covariate combinations from the data
if(ddfobj$type=="unif"){
@@ -85,7 +90,7 @@ check.mono <- function(df, strict=TRUE, n.pts=100, tolerance=1e-6, plot=FALSE,
# make predictions over the data
ps <- as.vector(detfct(x, ddfobj, width=right.trunc, standardize=TRUE))

- # catrch nasty errors
+ # catch nasty errors
if(any(is.na(ps) | is.nan(ps) | is.infinite(ps))){
return(rep(NA, 4))
}
@@ -156,18 +161,28 @@
}

# plot detection function >1
- if(any(ps>1)){
+ ## FTP: this was missing the tolerance. Now added [07/08/2024]
+ # if(any(ps>1)){ # FTP: old, so commented out
+ if(any(ps > (1 + tolerance))){
plot.monopoly(rep(FALSE, length(ps)), gmax+1, 1, rgb(0.5, 0, 0, 0.5))
}
# plot detection function <0
- if(any(ps<0)){
+ ## FTP: this was missing the tolerance. Now added [07/08/2024]
+ # if(any(ps<0)){ # FTP: old, so commented out
+ if(any(ps < (0 - tolerance))){
plot.monopoly(rep(FALSE, length(ps)), 0, gmin-1, rgb(0.5, 0, 0, 0.5))
}
}

- # return logical -- TRUE == montonic
- return(c(all(ps.weak.chk), all(ps.strict.chk),
-          all(ps<=1), all(ps>=0)))
+ # return logical -- TRUE == monotonic
+ ## FTP: again, tolerance was not used for the 0 <= g <= 1 checks, but I think
+ ## it should be. I added it now and commented out the old code.
+ return(c(all(ps.weak.chk),            # weak mono check
+          all(ps.strict.chk),          # strict mono check
+          all(ps <= (1 + tolerance)),  # g <= 1 check
+          all(ps >= (0 - tolerance)))) # g >= 0 check
+ # return(c(all(ps.weak.chk), all(ps.strict.chk),
+ #          all(ps<=1), all(ps>=0)))
} # end chpply

# apply the check function to all the unique covariate combinations
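The effect of the tolerance added above can be shown in isolation: a strict comparison would flag harmless floating-point overshoot of g(x) past 1 as a failure, while the 1e-8 tolerance accepts it. The `ps` values here are made up for illustration:

```r
tolerance <- 1e-8
ps <- c(1, 1 + 5e-9, 0.7, 0, -5e-9)  # hypothetical detection probabilities g(x)

all(ps <= 1)                # FALSE: strict check trips on 1 + 5e-9
all(ps <= (1 + tolerance))  # TRUE : overshoot tolerated as numerical noise
all(ps >= (0 - tolerance))  # TRUE : same treatment at the lower bound
```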
15 changes: 12 additions & 3 deletions R/ddf.R
@@ -175,16 +175,25 @@
#' \item{\code{optimx.method}}{one (or a vector of) string(s) giving the
#' optimisation method to use. If more than one is supplied, the results from
#' one are used as the starting values for the next. See
- #' \code{\link{optimx}}}
+ #' \code{\link[optimx]{optimx}}}
#' \item{\code{optimx.maxit}}{maximum number of iterations to use in the
#' optimisation.}
#' \item{\code{mono.random.start}}{By default when monotonicity constraints
#' are enforced, a grid of starting values are tested. Instead random
#' starting values can be used (uniformly distributed between the upper and
#' lower bounds). Set \code{TRUE} for random start, \code{FALSE} (default)
#' uses the grid method}
#' \item{\code{mono.method}}{The optimiser method to be used when (strict)
#' monotonicity is enforced. Can be either \code{slsqp} or \code{solnp}.
#' Default \code{slsqp}.}
#' \item{\code{mono.startvals}}{Controls if the mono.optimiser should find
#' better starting values by first fitting a key function without adjustments,
#' and then use those start values for the key function parameters when
#' fitting the key + adjustment series detection function. Defaults to
#' \code{FALSE}}
#' \item{\code{mono.outer.iter}}{Number of outer iterations to be used by
- #'   \code{solnp} when fitting a monotonic model. Default 200.}
+ #'   \code{solnp} when fitting a monotonic model and \code{solnp} is selected.
+ #'   Default 200.}
#' \item{\code{silent}}{silences warnings within ds fitting method (helpful
#' for running many times without generating many warning/error messages).}
#' \item{\code{optimizer}}{By default this is set to 'both' for single
Expand All @@ -200,7 +209,7 @@
#' }
#'
#' Examples of distance sampling analyses are available at
- #' \url{http://examples.distancesampling.org/}.
+ #' \url{https://examples.distancesampling.org/}.
#'
#' Hints and tips on fitting (particularly optimisation issues) are on the
#' \code{\link{mrds_opt}} manual page.
