- On 8 Jul 2002 at 12:18:09, Paul Hutson (prhutson.-at-.pharmacy.wisc.edu) sent the message

Back to the Top

Has anyone any comparative comments they would like to share after

comparing the ease of use, algorithms used, and output for these two

mixed-effect modelling programs? It would be very helpful to hear

from past users of two or all three programs.

Are all three certified in some fashion by the FDA?

If it helps for comparative purposes, I am presently using NONMEM.

Thanks

Paul

Paul Hutson, Pharm.D.

Associate Professor (CHS)

UW School of Pharmacy

NOTE NEW ADDRESS effective 6/2001

777 Highland Avenue

Madison, WI 53705-2222 - On 8 Jul 2002 at 16:04:52, Nick Holford (n.holford.-a-.auckland.ac.nz) sent the message


Paul,

"Paul Hutson (by way of David Bourne)" wrote:

>

> Has anyone any comparative comments they would like to share after

> comparing the ease of use, algorithms used, and output for these two

> mixed-effect modelling programs?

A couple of years ago I compared NONMEM V Release 1.1 (NMV) and WinNonMix 2.0 (WNM).

The comparison was based on model building using simulated data with

deliberate model misspecification (of drug absorption). I was using

the Compaq Visual Fortran Optimizing Compiler Version 6.1 (Update A).

WNM 2.0 performed substantially better than WNM 1.0. With

the First-Order method NMV and WNM gave very similar results. Neither

program was clearly better using FOCE, yet each was better on some

problems. NMV has the advantage for First-Order estimation in terms

of execution speed, however, WNM is faster than NMV on FOCE

estimation problems.

NONMEM was much easier to use than WNM for performing this kind of

comparison. This is primarily because everything in WNM has to be done with the

mouse and keyboard. There was (and as far as I know still is) no way

to automate the generation of alternative models or run a batch of

runs.

For data formatting, WNM can accept files formatted for use with NONMEM

and it also has its own WinNonLin-like data format.

I found the method for specifying mixed-effect models for covariate

model building was not as simple with WNM. Something as basic as

specifying sex as a covariate turned out to require a tricky bit of

coding and assumed sex was coded as 0 or 1. More complex models

familiar to many NONMEM users are hard if not impossible with WNM. On

the other hand, very simple models which rely on the WinNonLin-like

library of PK and PD models are easier with WNM for the first-time or

occasional user.
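To make the covariate-coding point concrete, here is a minimal Python sketch of the kind of parameterisation being described: a typical-value parameter scaled by a 0/1 sex covariate, with a log-normal random effect. The function and parameter names (theta1, theta2, eta) are illustrative only, not either program's syntax.

```python
import math

def typical_clearance(theta1, theta2, sex):
    """Typical-value clearance with sex as a 0/1 covariate:
    CL = theta1 when sex == 0, theta1 * (1 + theta2) when sex == 1."""
    if sex not in (0, 1):
        raise ValueError("this simple coding assumes sex is coded 0 or 1")
    return theta1 * (1.0 + theta2 * sex)

def individual_clearance(theta1, theta2, sex, eta):
    """Individual clearance with a log-normal random effect eta."""
    return typical_clearance(theta1, theta2, sex) * math.exp(eta)
```

With theta1 = 10 and theta2 = 0.2, subjects coded 1 get a typical clearance 20% higher than those coded 0; any other coding of sex breaks this simple form, which is exactly the fragility noted above.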

The presentation of results (parameter estimates, graphs) was clearly

superior with WNM (not hard to beat NONMEM in this area!). Automatic

extraction of results is possible by processing a text file listing

for both WNM and NONMEM.

In summary, if you plan to do more than the occasional population

modelling problem then I suggest you use NONMEM. If you are an

occasional user, and especially if you use WinNonLin and want to

model intensively sampled data that you have looked at with WNL, then

WNM is probably more convenient for basic population parameter

estimation.

Nick

--

Nick Holford, Divn Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

email:n.holford.-a-.auckland.ac.nz

http://www.health.auckland.ac.nz/pharmacology/staff/nholford/ - On 8 Jul 2002 at 18:58:36, Noel Cranswick (noel.aaa.melbpc.org.au) sent the message


Hi Nick,

Did you publish or formally present the comparison?

Thanks in advance,

Noel

Noel E Cranswick

noel.at.melbpc.org.au  Ph: +61-3-9455 1345

Melbourne PC User Group, Australia.

http://members.tripod.com/~noelc/ - On 8 Jul 2002 at 18:59:07, Nick Holford (n.holford.aaa.auckland.ac.nz) sent the message


Noel,

Noel Cranswick wrote:

> Did you publish or formally present the comparison?

This work was prepared as an internal report to the Pharsight

Scientific Advisory Board. There were plans at Pharsight to use my

report as a "White Paper" but nothing has come of this. I am not

aware of any other formal comparison of WinNonMix and NONMEM.

Nick

--

Nick Holford, Divn Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

email:n.holford.at.auckland.ac.nz

http://www.health.auckland.ac.nz/pharmacology/staff/nholford/ - On 11 Jul 2002 at 15:16:13, Roger Jelliffe (jelliffe.aaa.usc.edu) sent the message


Dear All:

Concerning the recent comments on the "comparison of NONMEM,

WinNonMix, and USC*PACK". Nick's comments have been primarily about the

differences in the user interface between NONMEM and WinNonMix.

We would like to offer the following comments about parametric and

nonparametric population modeling methods in general, and more specifically

about the comparison of the statistical behavior of the FOCE approximation

(which is used in many parametric modeling software programs such as our

iterative 2-stage Bayesian program IT2B, in NONMEM, and in others) with the

nonparametric population modeling approach NPAG, which does not use this

approximation, and with a new parametric EM program, PEM.

Dr. Robert Leary, of the San Diego Supercomputer Center, at the

recent PAGE meeting in Paris in June, presented a careful comparison of the

nonparametric NPAG population modeling program (the successor of our

nonparametric NPEM software - about 1000 times faster!) and the parametric

iterative 2-stage Bayesian program IT2B, which uses the FOCE approximation.

Please note that he did not make a direct comparison of NPAG with NONMEM.

What he did was to compare the nonparametric maximum likelihood method NPAG

with a parametric method IT2B which uses the same FOCE approximation that

the FOCE NONMEM does.

He also compared a new parametric EM (PEM) approach developed by

Alan Schumitzky a number of years ago, but just recently implemented by Bob

Leary.

In Dr. Leary's careful simulation study, it is clearly shown that

the NPAG and the PEM methods are statistically consistent in their behavior

- that is, as the number of subjects in the population increases, the

results obtained get closer and closer to the true population values. He

makes the following points, among others:

1. Parametric modeling methods (IT2B, NONMEM, and others) use

approximate likelihood functions resulting from the FO, FOCE, and

Laplace methods. As a result, statistical consistency of the parametric

population estimates cannot be guaranteed. In fact, a lack of

statistical consistency has been observed in the past in several studies

from a variety of research groups, including our own.

2. Nonparametric (NP) methods, in contrast, use exact likelihood

functions. This is because the NP maximum likelihood distributions are

discrete rather than continuous, and the likelihood integral reduces to a

finite sum which can be evaluated exactly. Since the NP distribution

estimate is consistent, so are the derived estimates of the population

means, variances, covariances, and correlations.
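The "finite sum" point can be illustrated in a few lines of Python: for a discrete population distribution on a set of support points with weights, each subject's marginal likelihood is an exact sum, not an approximated integral. This toy version assumes one observation per subject with additive Gaussian error; it is a sketch of the idea, not the NPAG implementation.

```python
import math

def normal_pdf(y, mu, sd):
    """Density of a normal observation model."""
    return math.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def population_log_likelihood(observations, support, weights, sd):
    """Exact log-likelihood for a discrete (nonparametric) population model:
    each subject's marginal likelihood is a finite sum over support points,
    so no FO/FOCE-style approximation is needed."""
    total = 0.0
    for y in observations:
        marginal = sum(w * normal_pdf(y, theta, sd)
                       for theta, w in zip(support, weights))
        total += math.log(marginal)
    return total
```

Because the sum is evaluated exactly, maximising it over the support points and weights gives a consistent estimate of the population distribution, which is the property described above.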

The next question studied concerned which method is the most

statistically efficient - that is, which method gets the best results from

the fewest subjects. To answer this question, Dr. Leary evaluated the

statistical properties (bias, efficiency, and asymptotic convergence rate)

of the NPAG estimator in a simple, controlled, truly parametric simulated

setting. He compared the resulting NPAG estimates with those using the

approximate (FOCE) parametric method (the USC*PACK IT2B), and the PEM

parametric method, with Faure low discrepancy sequence integration. The

Faure numerical integration scheme results in a (very nearly) exact

parametric likelihood function. To our knowledge, this is the first time

that such an exact parametric likelihood has been used in these problems.

In principle, this should result in a statistically consistent parametric

population modeling method, and indeed, Dr. Leary's results confirm such

consistent and efficient behavior.

He studied a simulated population of 800 subjects. The parameter

distributions were Gaussian - not skewed or multimodal, but truly Gaussian.

Three scenarios were studied, based on the correlations between the

parameter values.

1. A one compartment model was used, with a unit IV bolus dose at time

zero. Two simulated serum levels were "obtained", each measured with a

10% coefficient of variation.

2. Five parameters were set:

Mean V = 1.1, SD of V = 0.25

Mean K = 1.0, SD of K = 0.25

The correlation coefficient between V and K was set at 3

different values, one for each scenario:

1. -0.6

2. 0.0

3. +0.6

3. Several population sizes were studied: 25, 50, 100, 200, 400, and

800 subjects. These 3 scenarios were each replicated over 1000 times to

evaluate bias and efficiency.
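The simulated design above is easy to reproduce. A minimal Python sketch follows; the sampling times of 0.5 and 2 hours are an assumption, since the post does not give them.

```python
import math
import random

def simulate_population(n, rho, seed=0):
    """Sketch of the simulated study design: one-compartment model, unit IV
    bolus at time zero, V ~ N(1.1, 0.25), K ~ N(1.0, 0.25) with correlation
    rho, and two serum levels per subject measured with 10% CV error.
    Sampling times (0.5 h and 2 h) are assumed, not from the post."""
    rng = random.Random(seed)
    times = (0.5, 2.0)
    subjects = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        v = 1.1 + 0.25 * z1
        # Correlated normal draw via the standard Cholesky construction.
        k = 1.0 + 0.25 * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
        levels = [(1.0 / v) * math.exp(-k * t) * (1.0 + 0.10 * rng.gauss(0, 1))
                  for t in times]
        subjects.append((v, k, levels))
    return subjects
```

Fitting such replicated datasets with each estimation method, and watching whether the estimates approach the true values as n grows from 25 to 800, is exactly the consistency check reported below.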

The results were that NPAG and PEM were statistically consistent

in their behavior. As the number of subjects in the population increased

from 25 to 800, the mean V got closer and closer to the true value of 1.1.

In contrast, the FOCE method (IT2B) got 1.098 for 25 subjects, but it

drifted down to 1.08 at 800 subjects - not consistent behavior. For the

mean K, again the results with NPAG and PEM got closer and closer to the

true mean of 1.0 as the number of subjects increased from 25 to 800, while

the FOCE approximation hit 1.0 with 50 subjects, but drifted up to 1.016 at

800 subjects - not consistent behavior.

For the SD of K, both NPAG and PEM had consistent behavior,

closely approaching the true value of 0.25. In contrast, the FOCE

approximation started at about 0.22, and drifted way down to about 0.185 as

the number of subjects increased from 50 to 800. Behavior with respect to

the SD of V was similar.

For the first of the 3 different correlation coefficients, NPAG

and PEM were right on at -0.6, but the FOCE IT2B actually gave a positive

correlation coefficient, starting at about +0.05 with 25 subjects, and

increasing further to +0.2 from 200 to 800 subjects. When the true

correlation coefficient was 0.0, again NPAG and PEM were very close to it,

but the FOCE IT2B was +0.5 with 25 subjects, increasing to +0.6. Where the

true correlation coefficient was +0.6, again NPAG and PEM were right on,

but the FOCE method gave +0.85.

So in summary, the consequence of using the FOCE approximation

was a loss of statistical consistency. It had:

1. small bias (1-2%) for the means of V and K,

2. moderate bias (20-30%) for the SD's of V and K

3. severe bias for the correlation coefficients

true value    average estimate

-0.6          +0.2

 0.0          +0.6

+0.6          +0.85

In addition, the FOCE approximation was also associated with a

loss of statistical efficiency. Efficiency was much higher for NPAG and PEM

than for the FOCE IT2B: it began at 0.7 for both NPAG and PEM with 25

subjects, and grew to 0.8 from 50 subjects on. In contrast, the FOCE

efficiency was only 0.4 for 25 subjects, and then fell below 0.1 from 400

to 800 subjects.

The FOCE approximation was also associated with a loss in

stochastic convergence rate: the rate was 1/(square root of N) for NPAG and

PEM, but much worse, 1/(4th root of N), for the FOCE IT2B. While NPAG and

PEM required only 4 times the number of subjects to reduce the standard

deviation by half, the FOCE approximation required 16 times the number of

subjects to achieve the same improvement.
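The 4x versus 16x figures follow directly from the two convergence rates: if an estimator's standard deviation scales as N to the power -p, halving it requires multiplying N by 2 to the power 1/p.

```python
def halving_factor(p):
    """Factor by which N must grow to halve an estimator's standard
    deviation, when the SD scales as N ** (-p):
    (c * N) ** (-p) = 0.5 * N ** (-p)  implies  c = 2 ** (1 / p)."""
    return 2.0 ** (1.0 / p)

print(halving_factor(0.5))   # 1/sqrt(N) rate (NPAG, PEM) -> 4.0
print(halving_factor(0.25))  # 1/N**0.25 rate (FOCE)      -> 16.0
```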

Dr. Leary's conclusions were:

1. Both NPAG and PEM, which use accurate likelihood

estimators, display statistical consistency, in agreement with

maximum likelihood theory. Biases, if any, are small and decay toward zero

with increasing number of subjects. The statistical quality

of NPAG and PEM parameter estimates is equivalent, though the

bias structures are different.

2. The FOCE approximation in IT2B results in loss of

consistency - a small bias for means, larger for

standard deviations, and very large for correlations. It also

severely degrades statistical efficiency and

asymptotic convergence behavior.

Previous work has shown that when population parameter

distributions are not Gaussian, the parameter estimates are best with NPEM

or NPAG, compared to parametric methods. However, many have had the

impression that when the parameter distributions are in fact truly

Gaussian, parametric maximum likelihood methods such as those using

the FOCE approximation are better and more efficient. Dr. Leary's work here

clearly shows that this is not so.

David Bourne is being properly prudent when he does not permit

attachments in PharmPK. Because of this, if you would like to see graphs

instead of just the numbers given above, Dr. Leary's slides and his full

presentation at the PAGE meeting can be seen on our web

site, www.lapk.org. Click on "New Advances in Population Modeling" under

the announcements.

Very best regards to all,

Roger Jelliffe, Bob Leary, Alan Schumitzky, Mike Van Guilder, and the USC

Laboratory of Applied Pharmacokinetics.

Roger W. Jelliffe, M.D. Professor of Medicine,

Laboratory of Applied Pharmacokinetics,

USC Keck School of Medicine

2250 Alcazar St, Los Angeles CA 90033, USA

email= jelliffe.at.hsc.usc.edu

Our web site= http://www.lapk.org - On 23 Jul 2002 at 15:11:47, "Dan Hirshout" (dhirshout.at.innaphase.com) sent the message


Dear All,

The recent release of Kinetica v4.1 now includes Population PK/PD

functionality. A brief overview:

- Power Model - allows the user to add a covariate in an exponential relationship in the population analysis.

- Population Model Validation - allows the user to validate the current or an existing population model using a Bayesian fit on the parameters or on individual concentrations. This tool also lets the user choose their own datasets or let Kinetica randomly choose the dataset for validation. This functionality is the first of its class among population PK software.

- Livermore algorithm for Ordinary Differential Equations (ODEs) - adds the Livermore algorithm to solve stiff and non-stiff differential equations. The new algorithm is currently set as the default; the other choice is Runge-Kutta.

- Friedman rank test (non-parametric) - the Friedman test is the non-parametric equivalent of ANOVA. It is appropriate for data arising from an unreplicated complete block design.

- Comparison of Two Groups - equivalent to the two-sample t-test for paired and non-paired variables. Kinetica utilizes both the Wilcoxon and Student's t tests to make the comparison.

- Linear Regression with CI - linear regression with a confidence interval plotted and calculated.
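On the stiff/non-stiff distinction the Livermore bullet alludes to: explicit (Runge-Kutta-style) steps become unstable on stiff problems unless the step size is tiny, while implicit steps, the kind Livermore-family solvers (LSODE/LSODA) switch to, stay stable. A stdlib-only Python illustration on the test equation y' = lam*y, unrelated to Kinetica's actual implementation:

```python
def explicit_euler(lam, h, steps, y0=1.0):
    """Explicit Euler for y' = lam * y: diverges when |1 + h*lam| > 1."""
    y = y0
    for _ in range(steps):
        y = y + h * lam * y
    return y

def implicit_euler(lam, h, steps, y0=1.0):
    """Implicit (backward) Euler: stable for any step size when lam < 0,
    illustrating why stiff solvers use implicit methods."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 - h * lam)
    return y

# Stiff case: lam = -50, h = 0.1; the true solution decays toward zero.
blown_up = explicit_euler(-50.0, 0.1, 20)  # grows without bound
stable = implicit_euler(-50.0, 0.1, 20)    # decays toward zero
```

Here |1 + h*lam| = 4, so the explicit iterate grows by a factor of 4 per step, while the implicit iterate shrinks by a factor of 6 per step; a stiff-aware solver makes this choice automatically.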

For a free demonstration CD, contact Dan Hirshout - dhirshout.aaa.innaphase.com

Best Regards,

Dan Hirshout

InnaPhase Corporation

Want to post a follow-up message on this topic? If this link does not work with your browser send a follow-up message to PharmPK@boomer.org with "WinNonMix vs NONMEM vs USCPACK" as the subject


Copyright 1995-2010 David W. A. Bourne (david@boomer.org)