- On 16 Apr 2000 at 21:52:19, "Ning Song" (SongN.aaa.tripharm.com) sent the message

To all:

I am doing bioanalytical work to support PK studies. We often have to compare two groups of data generated by two different methods (two different labs, or two different analysts), and we have to document the statistical results of the comparison from a scientific standpoint. Therefore, we need software that has both graphing and all the basic statistical features. I know that some packages, such as Microsoft Excel, provide graphing and some basic statistics, but we need software with many more features. Can anyone suggest a package and give me some information?

I really appreciate any information!

Nina

songn.aaa.tripharm.com

(919) 402-2627 - On 17 Apr 2000 at 21:23:57, David_Bourne (david.aaa.boomer.org) sent the message

Date: Mon, 17 Apr 2000 08:53:17 +0200

From: furlanut.aaa.HYDRUS.CC.UNIUD.IT

Subject: Re: PharmPK Statistics software

To: PharmPK.-at-.boomer.org

X-Accept-Language: en,pdf

Have a look at SigmaStat by SPSS

http://www.spss.com/software/science/sigmastat/

Regards

Federico Pea, MD

Clinical Pharmacologist

Institute of Clinical Pharmacology & Toxicology

University of Udine

Italy

---

Date: Mon, 17 Apr 2000 00:55:25 -0700 (MST)

X-Sender: ml11439.-a-.pop.goodnet.com

To: PharmPK.-at-.boomer.org

From: ml11439.-at-.goodnet.com (Michael J. Leibold)

Subject: Re: PharmPK Statistics software

Nina,

The following reference comes with statistical software that includes most parametric and nonparametric statistical tests. It also has graphical capabilities; the graphs can be printed and saved, but may not be publication quality.

Glantz, S.A., Primer of Biostatistics, 4th ed., New York: McGraw-Hill, 1997

However, Excel has add-ons that may provide expanded graphing capabilities (a question for Microsoft support services).

Mike Leibold, PharmD, RPh

ML11439.at.goodnet.com

---

X-Sender: jhzwafri.aaa.merle.acns.nwu.edu (Unverified)

Date: Mon, 17 Apr 2000 09:16:22 -0500

To: PharmPK.aaa.boomer.org

From: zhao wang

Subject: Re: PharmPK Statistics software

What kind of work are you doing? Is it pharmacokinetic analysis, or statistical analysis of the PK results? SAAM II can do PK analysis and provides statistical measures of the fit, for instance goodness of fit, the objective function, AIC, etc., which can be used for statistical assessment of the modeling; it also has graphical features. If you are going to do a statistical analysis of the PK results, there is plenty of software available; SPSS has more features. It depends on what you are doing.

Zhao Wang - On 18 Apr 2000 at 22:25:02, Russell Reeve (rreeve.-at-.pharsight.com) sent the message

Dear Nina:

I would stay away from Excel. Dedicated statistical packages where the

developers have paid attention to the usability and numerical stability

would be preferred. Note that for unbalanced data, Excel provides

incorrect results, and I have seen regression analyses where the

coefficients were of the wrong sign! Furthermore, the documentation is

often incorrect.

WinNonlin from Pharsight is one package that provides the statistical

functionality that you appear to be looking for. It contains an ANOVA

module that fits general linear models, of which the t-test is a special

case. In addition, the ANOVA module performs average bioequivalence

testing.

Other features of note:

* A collection of built-in PK and PD models that you can fit to your

data

* Descriptive statistics

* Data manipulation ability, based on a spreadsheet interface

* Nonparametric superposition

* Semicompartmental modeling

* Deconvolution

* Tables wizard for presentation of summary results

For further information, point your browser to http://www.pharsight.com.

A comment on the statistical analysis: A paired t-test is probably not

what you want to do. Consider the following hypothetical data for 5

independent samples:

Method 1   Method 2
      50         70
      70         80
      90         90
     110        100
     130        110

A t-test (either paired or unpaired) would say the methods were

identical (t=0). The analysis I typically use is to fit the regression

model

Method1 = a + b*(Method 2) + error

where Method 2 is the reference method. If the confidence interval for

Method1(predicted)/Method1(observed) is close to 1 with no clear trend,

then you call the methods equivalent.
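[Editorial aside: the arithmetic in this example is easy to verify. Below is a minimal Python sketch, not part of the original post, reproducing the paired t-statistic and the regression fit described above using only the standard library:]

```python
import math

# Hypothetical data from Dr. Reeve's example: five independent samples
# measured by two methods (Method 2 is the reference method).
method1 = [50, 70, 90, 110, 130]
method2 = [70, 80, 90, 100, 110]
n = len(method1)

# Paired t-test: t = mean(d) / (sd(d) / sqrt(n)) on the differences.
d = [y - x for y, x in zip(method1, method2)]
mean_d = sum(d) / n
sd_d = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))
print(f"paired t = {t:.3f}")  # 0.000 -- the test sees no difference in means

# Ordinary least squares fit of Method1 = a + b*Method2 + error.
mx = sum(method2) / n
my = sum(method1) / n
b = (sum((x - mx) * (y - my) for x, y in zip(method2, method1))
     / sum((x - mx) ** 2 for x in method2))
a = my - b * mx
print(f"Method1 = {a:.1f} + {b:.1f} * Method2")
```

The fitted slope of 2 (rather than 1) and intercept of -90 make the disagreement between the methods obvious, even though t = 0.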

If you would like to discuss your comparison issues further, feel free

to email me.

Russell Reeve

Pharsight Technical Support - On 19 Apr 2000 at 22:57:18, "Hans Proost" (J.H.Proost.-at-.farm.rug.nl) sent the message

Russell Reeve wrote:

> I would stay away from Excel. Dedicated statistical packages where the

> developers have paid attention to the usability and numerical stability

> would be preferred. Note that for unbalanced data, Excel provides

> incorrect results, and I have seen regression analyses where the

> coefficients were of the wrong sign! Furthermore, the documentation is

> often incorrect.

This is an important warning. I like Excel, and I have never found erroneous results. After your warning, however, I will certainly be more critical of Excel's results. Thank you!

> A comment on the statistical analysis: A paired t-test is probably not

> what you want to do. Consider the following hypothetical data for 5

> independent samples:

>

> Method 1   Method 2
>       50         70
>       70         80
>       90         90
>      110        100
>      130        110

>

> A t-test (either paired or unpaired) would say the methods were

> identical (t=0).

This is not correct! A paired t-test (an unpaired t-test would be inappropriate) would say that the methods are not significantly different. This does not imply that the methods are not different, and certainly not that the methods are identical! This is a clear misuse of statistical information (although quite common, unfortunately, even in serious science).

By the way, it is certainly an interesting example! And there are indeed many ways of interpreting the results completely wrongly. However, please don't use the word 'identical' in statistics, since it does not exist in statistics (at least not at the usual level of statistics as applied by non-statisticians).

Best regards,

Johannes H. Proost

Dept. of Pharmacokinetics and Drug Delivery

University Centre for Pharmacy

Antonius Deusinglaan 1

9713 AV Groningen, The Netherlands

tel. 31-50 363 3292

fax 31-50 363 3247

Email: j.h.proost.aaa.farm.rug.nl - On 20 Apr 2000 at 18:19:30, "J.G. Wright" (J.G.Wright.at.newcastle.ac.uk) sent the message

Dear Dr Proost

As t-tests are for comparing MEANS, and the means of the two samples are IDENTICAL, both a paired and an unpaired t-statistic should be equal to zero. If not, in which direction would there be evidence of a difference?

If you wish to think in terms of hypothesis tests, then it is impossible to prove anything is true (the next sample could always contradict your conclusions). Identical, in practise, means no evidence of a difference. When statisticians jump on people for use of language, it is perceived as pedantic, because only a fool would believe with absolute certainty that the means OF THE UNDERLYING PROCESS were equal with such a sample. This is obvious, and it is pointless to try to edit permissible language down to Neyman-Pearson hypothesis tests.

Russell's conclusions seem fair to me: if you want to know about correlations, use an analysis for correlation. Failing to reject the null hypothesis about means doesn't tell you anything about this. Even the makers of Excel were kind enough to print a Pearson correlation as part of their t-test report.
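[Editorial aside: for the same hypothetical data, the Pearson correlation mentioned above can be computed in a few lines of Python (not part of the original post):]

```python
import math

# The thread's hypothetical data: five samples measured by two methods.
method1 = [50, 70, 90, 110, 130]
method2 = [70, 80, 90, 100, 110]
n = len(method1)

mx = sum(method2) / n
my = sum(method1) / n

# Pearson r = covariance / (sd_x * sd_y); the (n-1) factors cancel.
cov = sum((x - mx) * (y - my) for x, y in zip(method2, method1))
sx = math.sqrt(sum((x - mx) ** 2 for x in method2))
sy = math.sqrt(sum((y - my) ** 2 for y in method1))
r = cov / (sx * sy)
print(f"Pearson r = {r:.3f}")  # 1.000: perfectly correlated, yet t = 0
```

The correlation is exactly 1 because the example data lie on a straight line; a perfect correlation with slope 2 is precisely the disagreement the t-test on means cannot see.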

James Wright - On 24 Apr 2000 at 21:39:47, Roger Jelliffe (jelliffe.at.usc.edu) sent the message

Dear Nina:

Since you say you often must analyse data from 2 different labs, there may well be 2 different assays involved, each having its own unique error pattern. How do you deal with this problem in your analyses?

Very best regards,

Roger Jelliffe

Roger W. Jelliffe, M.D. Professor of Medicine, USC

USC Laboratory of Applied Pharmacokinetics

2250 Alcazar St, Los Angeles CA 90033, USA

Phone (323)442-1300, fax (323)442-1302, email= jelliffe.aaa.hsc.usc.edu

Our web site= http://www.usc.edu/hsc/lab_apk

************* - On 25 Apr 2000 at 21:40:25, David_Bourne (david.-a-.boomer.org) sent the message

[Two replies - db]

Date: Tue, 25 Apr 2000 09:24:05 -0400

From: "Ning Song"

To:

Subject: PharmPK Re: Statistics software

Dear Roger:

We transfer the same method from our lab to another lab (usually a contract lab). The method is still the same, but we need a cross-validation between labs.

Nina

---

Date: Tue, 25 Apr 2000 13:13:53 -0400

From: "Ed O'Connor"

Reply-To: efoconnor.-at-.snet.net

Organization: PM PHARMA

X-Accept-Language: en

To: PharmPK.-at-.boomer.org

Subject: Re: PharmPK Re: Statistics software

There is a specific software package sold expressly for method comparisons. It is sold by Westgard out of Maine... I cannot recall the name, but the statistics for comparison include random, systematic and total error, intercept and slope; secondary stats include t and F tests. For a description see Chap. 15 of the Tietz Textbook of Clinical Chemistry. - On 26 Apr 2000 at 21:27:26, "Hans Proost" (J.H.Proost.at.farm.rug.nl) sent the message

Dear Dr. Wright,

Thank you for your message. You wrote:

> Identical, in practise, means no evidence of a difference.

For you, perhaps. I am not sure that everybody agrees.

If I understand you correctly, you call everything identical unless

you have some evidence of a difference?

This is indeed the usual starting point in a statistical null

hypothesis. However, 'not enough evidence for rejecting the null

hypothesis' is not identical to 'no evidence of a difference'.

> When statisticians jump on people for use of language, it is perceived as

> pedantic because only a fool would believe with absolute certainty that

> the means OF THE UNDERLYING PROCESS were equal with such a sample. This

> is obvious and it is pointless to try to edit permissible language down to

> Neyman-Pearson hypothesis tests.

I don't fully understand what point you want to make. I am not a statistician. I regard myself as a scientist who tries to formulate conclusions from statistical tests correctly. And if I do not formulate them correctly, I appreciate being corrected by others.

About the fools: if you are right, there are many fools in science.

How often one reads in the Results 'the difference between A and B was not statistically significant', and in the Conclusion 'A is identical to B'. This is nonsense, unless an appropriate statistical analysis, e.g. a power analysis, has been performed, which is seldom the case.
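[Editorial aside: a rough illustration, not part of the original post, of the power analysis mentioned above. It uses a normal approximation to the two-sided paired t-test; the effect size and standard deviation below are illustrative assumptions only:]

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def approx_power(delta, sd, n, z_alpha=1.96):
    # Approximate power of a two-sided test at the 5% level: the
    # probability of detecting a true mean difference `delta`, given the
    # standard deviation `sd` of the paired differences and sample size `n`.
    z = abs(delta) / (sd / math.sqrt(n))
    return norm_cdf(z - z_alpha) + norm_cdf(-z - z_alpha)

# With a spread like that in Dr. Reeve's example (sd of differences ~15.8),
# five pairs give little power to detect even a 10-unit bias:
print(f"power = {approx_power(10, 15.8, 5):.2f}")
```

A power this low (well under 50%) is exactly why 'not significant' from such a small sample cannot be read as 'identical'.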

Returning to the example given by Dr. Reeve: It can be said that

the OBSERVED means are identical. This is simple logic.

This is, however, quite different from a statement about the means

OF THE UNDERLYING PROCESS. This has nothing to do with the

samples in the example of Dr. Reeve. Such a statement cannot be

made with, e.g., a t-test, irrespective of the values.

You may call this pedantic, but I say: In the world of science, one

should say what is proven, and one should not say what is not

proven.

> Russell's conclusions seem fair to me, if you want to know about

> correlations, use an analysis for correlation. Failing to

> reject the

> null hypothesis about means doesn't tell you anything about this.

I agree, of course. This was certainly not disputable.

Best regards,

Johannes H. Proost

Dept. of Pharmacokinetics and Drug Delivery

University Centre for Pharmacy

Antonius Deusinglaan 1

9713 AV Groningen, The Netherlands

tel. 31-50 363 3292

fax 31-50 363 3247

Email: j.h.proost.aaa.farm.rug.nl - On 27 Apr 2000 at 20:04:08, exfamadu.aaa.savba.sk sent the message

To the PharmPK list,

I support what J. H. Proost wrote. The use of pedantic statistical tests without a pedantic interpretation of the results of these tests may lead to incorrect conclusions. In general, there is a very pedantic theory behind many seemingly simple statistical tests. Here, the word "pedantic" does not have any pejorative meaning.

If a scientist from one field of science uses tools of another scientific field, she/he should be very careful in the use of those tools, and also in the use of the terminology and language of that scientific field.

With best regards,

Maria Durisova

Dipl. Engineer Maria Durisova D.Sc.

Senior Research Worker

Scientific Secretary

Institute of Experimental Pharmacology

Slovak Academy of Sciences

SK-842 16 Bratislava

Slovak Republic

http://nic.savba.sk/sav/inst/exfa/advanced.htm - On 27 Apr 2000 at 20:07:33, James (J.G.Wright.at.ncl.ac.uk) sent the message

Dear Dr Proost,

At 12:52 PM 4/26/00 MET, you wrote:

>Dear Dr. Wright,

>

>Thank you for your message. You wrote:

>

>> Identical, in practise, means no evidence of a difference.

>

>For you, perhaps. I am not sure that everybody agrees.

>If I understand you correctly, you call everything identical unless

>you have some evidence of a difference?

No, that would be silly. The notion that we had collected some data was implicit in my argument. "No evidence of a difference" is, of course, a much stricter criterion than failing to reject the null hypothesis at the 5% level.

If we have collected sufficient evidence to eliminate (with some degree of

confidence) the possibility of a difference which is of practical

importance, then I might use the word identical. If I, for example,

assayed one thousand samples, spanning the range of interest, and got

exactly the same results with each method, I think the word identical would

be appropriate. If I didn't have any evidence, then I would say that I

have no evidence. In the example under discussion, I would return a

confidence interval to quantify the strength of evidence about the

difference in the means of the two samples.

Curiously, if there is no variability detected between the methods (which

does not mean there is no variability), a paired t-test would imply the

methods were identical by giving a confidence interval of zero width,

regardless of the sample size (or perhaps a division by zero error, which

is more sensible). Not all inferential procedures are this naive

thankfully. As we would know that the comparison could only be made to the

observed resolution, we could never say the two methods were absolutely

identical. This comes back to my point that it is not possible to prove

anything is absolutely true, but only true for practical purposes.

>This is indeed the usual starting point in a statistical null

>hypothesis. However, 'not enough evidence for rejecting the null

>hypothesis' is not identical to 'no evidence of a difference'.

Indeed. The latter is a subset of the former (and hence a stricter criterion); however, if we let the (entirely arbitrary) size of our test tend to the maximum (100%), the two statements would be equivalent. After all, there is no particular reason to go with the wimpy 5% - this is the convention for evidence against a null hypothesis we wish to show is false. By analogy, we should use a size of 95% for a decision procedure on a null hypothesis we wish to show is true... (not that I think this is actually a good idea). Alternatively, if we let the sample size tend to infinity, the two statements also become equivalent, regardless of size (sadly, not an option).

In the example we considered, we would be unable to reject the null

hypothesis no matter what the size of our test. Of course, I do not

propose that the underlying methods are identical. We would also be unable

to reject the null hypothesis that there were differences of small

magnitude (relative to the variability in the sample) at smaller levels and

it is this line of reasoning that leads to presenting a confidence interval.
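[Editorial aside: a Python sketch, not part of the original post, of the confidence interval Dr. Wright suggests returning for the example's paired differences. The critical value 2.776 is the two-sided 95% point of Student's t with n-1 = 4 degrees of freedom:]

```python
import math

# The five paired differences (Method1 - Method2) from the example.
d = [-20, -10, 0, 10, 20]
n = len(d)

mean_d = sum(d) / n
sd_d = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))
se = sd_d / math.sqrt(n)

t_crit = 2.776  # two-sided 95% critical value, Student's t, 4 df
lo, hi = mean_d - t_crit * se, mean_d + t_crit * se
print(f"95% CI for the mean difference: ({lo:.1f}, {hi:.1f})")
```

The interval spans roughly -20 to +20 units, quantifying how little five samples say about the mean difference, far more informative than "t = 0".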

>

>> When statisticians jump on people for use of language, it is perceived as

>> pedantic because only a fool would believe with absolute certainty that

>> the means OF THE UNDERLYING PROCESS were equal with such a sample. This

>> is obvious and it is pointless to try to edit permissible language down to

>> Neyman-Pearson hypothesis tests.

>

>I don't fully understand what point you want to make. I am not a

>statisticians. I regard myself as a scientist who tries to formulate

>conclusions from statistical tests correctly. And if I formulate not

>correctly, I appreciate to be corrected by others.

>

My point was that Russell Reeve made no such claims about the underlying

processes being identical but simply pointed out that the t-test considered

the means to be identical.

On another note, hypothesis tests are not the only approach to inference.

They are decision procedures which have been extensively criticised.

>About the fools: if you are right, there are many fools in science.

>How often one reads in the Results: 'the difference between A and

>B was not statistically significant', and in the Conclusion 'A is

>identical to B'. This is nonsense, unless an appropriate statistical

>test, e.g. a power analysis, has been performed, which is quite

>seldom.

The people who make such statements fully deserve the criticism which you levelled at Dr Reeve, not to mention criticism for failing to state the level at which they defined statistical significance, and for failing to quote the observed p-value in case my arbitrary level differs from theirs. I am not quite sure how a power analysis "de-nonsensifies" such conclusions, as I thought it was something you did when designing your experiment. The idea, I guess, is that we can't trust a conclusion of "acceptance" from a low-powered test. However, once we have the data, we can calculate a confidence interval and quantify the strength of evidence more precisely.

>

>Returning to the example given by Dr. Reeve: It can be said that

>the OBSERVED means are identical. This is simple logic.

>This is, however, quite different from a statement about the means

>OF THE UNDERLYING PROCESS. This has nothing to do with the

>samples in the example of Dr. Reeve. Such a statement cannot be

>made with, e.g., a t-test, irrespective of the values.

Absolutely, you can't prove the null hypothesis. Or anything else for that

matter.

>You may call this pedantic, but I say: In the world of science, one

>should say what is proven, and one should not say what is not

>proven.

>

(...one should not say what is not proven is proven. Knowing what there

isn't evidence for is quite important)

There is no such thing as absolute proof in the world of science, only evidence. However, I think we might both agree that we should present the strength of evidence, and if it is strong enough I guess you can call it proof, if I can use the word identical.

Please accept my sincere apologies for implying you were a statistician.

James Wright - On 9 May 2000 at 23:00:18, Roger Jelliffe (jelliffe.aaa.usc.edu) sent the message

Dear Nina:

How do you do a cross-validation between labs? Further, how do you weight your data once it is measured and you have the results? What if the 2 labs do not have the same error pattern?

Best regards,

Roger Jelliffe

Roger W. Jelliffe, M.D. Professor of Medicine, USC

USC Laboratory of Applied Pharmacokinetics

2250 Alcazar St, Los Angeles CA 90033, USA

Phone (323)442-1300, fax (323)442-1302, email= jelliffe.at.hsc.usc.edu

Our web site= http://www.usc.edu/hsc/lab_apk

********************************************************************

Copyright 1995-2010 David W. A. Bourne (david@boomer.org)