
r - How do I resolve 'Error in optim: non-finite finite-difference value [1]' in this example? - Stack Overflow


Running the MWE...

library(SuppDists)
library(stats4)

mle(function(x) -log(dPearson(x, N=21, rho=0.999)), 0.5)

produces an error

Error in optim(start, f, method = method, hessian = TRUE, ...) : 
  non-finite finite-difference value [1]

The error doesn't happen if I call the function with rMZ = 0.99. I'm guessing that the cause is the discontinuity in dPearson at r = 1. I have tried calling mle with method = "L-BFGS-B", lower = 0, upper = 1 and with method = "L-BFGS-B", lower = 0.00001, upper = 0.99999, but both yield a different error:

L-BFGS-B needs finite values of 'fn'
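
I haven't pinned down exactly where the objective stops being finite; my suspicion is that dPearson returns 0 (or underflows) somewhere along the search path, so -log() becomes Inf. A quick check of the objective at a few points would look like this:

library(SuppDists)

# Evaluate the negative log-likelihood at a few candidate values of r
f <- function(x) -log(dPearson(x, N = 21, rho = 0.999))
sapply(c(0.5, 0.9, 0.99, 0.999, 1), f)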

The actual code in which I'm using the mle call is:

Code:

library(SuppDists)
library(stats4)

test_rMZ_2rDZ <- function(rMZ, nMZ, rDZ, nDZ) {
  # Run MLE
  H1_MZ_MLE = mle(function(x) -log(dPearson(x, N=nMZ, rho=rMZ)), 0.5)
  H1_DZ_MLE = mle(function(x) -log(dPearson(x, N=nDZ, rho=rDZ)), 0.5)
  
  H0_MLE = mle(function(x) -(log(dPearson(x, N=nMZ, rho=rMZ)) + log(dPearson(x/2, N=nDZ, rho=rDZ))), 0.5)
  
  # Extract log-likelihoods
  H0_logLik = logLik(H0_MLE)
  H1_logLik = logLik(H1_MZ_MLE) + logLik(H1_DZ_MLE)
  
  # Run likelihood ratio test 
  test_stat = 2 * (H1_logLik - H0_logLik)
  pchisq(test_stat, df = 1, lower.tail = FALSE)
}

test_rMZ_2rDZ(rMZ = 0.999, nMZ = 21, rDZ = 0.98, nDZ = 52)

asked Mar 31 at 11:50 by Mohan

1 Answer


The issue is that your negative log-likelihood becomes infinite outside the open interval (-1, 1). You can set bounds and shrink ndeps so that the finite-difference step used for the gradient also stays inside that interval. However, the estimate then ends up at the upper boundary. I don't know the background of what you are doing, but that result doesn't look meaningful.

curve(-dPearson(x, N=21, rho=0.999, log = TRUE), from = -1.5, to = 1.5)

res <- mle(function(x) -dPearson(x, N=21, rho=0.999, log = TRUE), 
           start = list(x = 0), 
           lower = -0.999, upper = 0.999,
           control = list(trace = 1, ndeps = 1e-10))
#final  value -6.761853 
#converged

res
#
#Call:
#mle(minuslogl = function(x) -dPearson(x, N = 21, rho = 0.999, 
#    log = TRUE), start = list(x = 0), lower = -0.999, upper = 0.999, 
#    control = list(trace = 1, ndeps = 1e-10))
#
#Coefficients:
#    x 
#0.999 
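
If you want to carry this over to your test_rMZ_2rDZ function, a sketch along the following lines (same illustrative start value, bounds and ndeps as above; I haven't checked whether the resulting test is statistically sensible) at least keeps the objective finite:

library(SuppDists)
library(stats4)

test_rMZ_2rDZ <- function(rMZ, nMZ, rDZ, nDZ) {
  # Helper: bounded fit on the log scale, as in the example above
  fit <- function(negll) {
    mle(negll,
        start = list(x = 0),
        lower = -0.999, upper = 0.999,
        control = list(ndeps = 1e-10))
  }

  # H1: separate correlations for the MZ and DZ samples
  H1_MZ_MLE <- fit(function(x) -dPearson(x, N = nMZ, rho = rMZ, log = TRUE))
  H1_DZ_MLE <- fit(function(x) -dPearson(x, N = nDZ, rho = rDZ, log = TRUE))

  # H0: rMZ = 2 * rDZ, so the DZ density is evaluated at x / 2
  H0_MLE <- fit(function(x) -(dPearson(x,     N = nMZ, rho = rMZ, log = TRUE) +
                              dPearson(x / 2, N = nDZ, rho = rDZ, log = TRUE)))

  # Likelihood ratio test with one constrained parameter
  H0_logLik <- logLik(H0_MLE)
  H1_logLik <- logLik(H1_MZ_MLE) + logLik(H1_DZ_MLE)
  test_stat <- 2 * (H1_logLik - H0_logLik)
  pchisq(as.numeric(test_stat), df = 1, lower.tail = FALSE)
}

test_rMZ_2rDZ(rMZ = 0.999, nMZ = 21, rDZ = 0.98, nDZ = 52)

As noted above, the MZ fit will still sit on the upper bound, so whether this comparison is meaningful is a separate question.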