S-Plus Commands for Survival Estimation:
Estimating the Survival Function: One-sample non-parametric methods
We consider three methods for estimating a survivorship function $S(t) = P(T \ge t)$ without resorting to parametric assumptions: the Kaplan-Meier method; the life-table (actuarial) estimator; and estimation via the cumulative hazard.
(1) The Kaplan-Meier Estimator
It can be justified from 3 perspectives: as a product limit estimator, by a likelihood argument, and by a "redistribute to the right" construction. We will start with an intuitive motivation based on conditional probabilities, then review some of the other justifications.
Motivation: First, consider an example where there is no censoring. The following are times of remission (weeks) for 21 leukemia patients receiving control treatment (Table 1.1 of Cox & Oakes):
1, 1, 2, 2, 3, 4, 4, 5, 5, 8, 8, 8, 8, 11, 11, 12, 12, 15, 17, 22, 23
How would we estimate S(10), the probability that an individual survives to time 10 or later? What about other times t? Let's construct a table of $\hat S(t)$.
Empirical survival function (no censoring): $\hat S(t) = \frac{\#\{i : T_i \ge t\}}{n}$, i.e. the number of individuals with $T_i \ge t$ divided by the total sample size.
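This calculation is one line in S-Plus or R; a minimal sketch using only base functions (the vector name control is our own):

    # Control-group remission times from Table 1.1 (no censoring)
    control <- c(1, 1, 2, 2, 3, 4, 4, 5, 5, 8, 8, 8, 8,
                 11, 11, 12, 12, 15, 17, 22, 23)
    surv.hat <- function(t) mean(control >= t)    # #{T_i >= t} / n
    surv.hat(10)                                  # 8/21 = 0.381
    sapply(sort(unique(control)), surv.hat)       # table of S-hat(t)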
What if there is censoring?
Consider the treated group from Table 1.1 of Cox and Oakes: 6+, 6, 6, 6, 7, 9+, 10+, 10, 11+, 13, 16, 17+, 19+, 20+, 22, 23, 25+, 32+, 32+, 34+, 35+. [Note: times with + are right censored]
We know $\hat S(6) = 21/21$, because all 21 patients survived to time 6 or beyond. But we can't say $\hat S(7) = 17/21$, because we don't know the status of the person who was censored at time 6.
In a 1958 paper in the Journal of the American Statistical Association, Kaplan and Meier proposed a way to non-parametrically estimate S(t), even in the presence of censoring. The method is based on the ideas of conditional probability.
Now, let's apply these ideas of conditional probability to estimate S(t). Writing $S(t) = P(T \ge t)$ as a product of conditional survival probabilities over the successive death times, and estimating each conditional probability by the observed proportion surviving, we obtain the Kaplan-Meier estimator:

$$\hat S(t) = \prod_{j:\, \tau_j < t} \left( 1 - \frac{d_j}{r_j} \right),$$

where $\tau_1 < \tau_2 < \cdots$ are the distinct death times, $d_j$ is the number of deaths at $\tau_j$, and $r_j$ is the number at risk at $\tau_j$ (i.e., alive and uncensored just prior to $\tau_j$). For the treated group above, for example, $\hat S(7) = 1 - \frac{3}{21} = \frac{18}{21} = 0.857$ and $\hat S(10) = \frac{18}{21} \cdot \frac{16}{17} = 0.807$.
Intuition behind the Kaplan-Meier Estimator: Think of dividing the observed time span of the study into a series of fine intervals so that there is a separate interval for each time of death or censoring. Using the law of conditional probability,

$$S(t) = P(T \ge t) = \prod_j P(\text{survive interval } I_j \mid \text{survived up to start of } I_j),$$

where the product is taken over all the intervals including or preceding time t.
4 possibilities for each interval:
(1) No events (death or censoring) - conditional probability of surviving the interval is 1.
(2) Censoring - assume they survive to the end of the interval, so that the conditional probability of surviving the interval is 1.
(3) Death, but no censoring - conditional probability of not surviving the interval is # deaths (d) divided by #‘at risk’ (r) at the beginning of the interval. So the conditional probability of surviving the interval is 1−(d/r).
(4) Tied deaths and censoring - assume the censored observations last until the end of the interval, so that the conditional probability of surviving the interval is still 1 − (d/r).
General Formula for the j-th interval:
It turns out we can write a general formula for the conditional probability of surviving the j-th interval that holds in all 4 cases:

$$1 - \frac{d_j}{r_j}.$$
We could use the same approach by grouping the event times into intervals (say, one interval for each month), and then counting up the number of deaths in each to estimate the probability of surviving the interval (this is called the life-table estimate). However, the assumption that those censored last until the end of the interval wouldn’t be quite accurate, so we would end up with a cruder approximation. As the intervals get finer and finer, the approximations made in estimating the probabilities of getting through each interval become smaller and smaller, so that the estimator converges to the true S(t). This intuition clarifies why an alternative name for the KM is the product limit estimator.
Make a table with a row for every death or censoring time, with columns for the time $t_j$, the number at risk $r_j$, the number of deaths $d_j$, the number censored $c_j$, and the running estimate $\hat S(t_j^+)$.
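Such a table can be produced with the survival library; a minimal sketch for the treated group (in R, survfit takes a formula as below; older S-Plus versions accept the Surv object directly):

    library(survival)
    # Treated group from Table 1.1; status = 1 for a death, 0 for "+" (censored)
    time   <- c(6, 6, 6, 6, 7, 9, 10, 10, 11, 13, 16, 17, 19, 20, 22,
                23, 25, 32, 32, 34, 35)
    status <- c(0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1,
                1, 0, 0, 0, 0, 0)
    fit <- survfit(Surv(time, status) ~ 1)
    summary(fit)   # one row per death time: n.risk, n.event, survival, std.err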
Two Other Justifications for the KM Estimator
Likelihood justification (Cox and Oakes):
The likelihood is that of g independent binomials:

$$L(\lambda) = \prod_{j=1}^{g} \lambda_j^{d_j} (1 - \lambda_j)^{r_j - d_j}.$$

Therefore, the maximum likelihood estimator of $\lambda_j$ is:

$$\hat\lambda_j = \frac{d_j}{r_j}.$$

Now we plug in the MLEs of $\lambda_j$ to estimate S(t):

$$\hat S(t) = \prod_{j:\, \tau_j < t} (1 - \hat\lambda_j) = \prod_{j:\, \tau_j < t} \left( 1 - \frac{d_j}{r_j} \right).$$
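For completeness, the maximization step (standard binomial calculus; this intermediate line is our addition, not part of the original notes):

$$\frac{\partial \log L}{\partial \lambda_j} = \frac{d_j}{\lambda_j} - \frac{r_j - d_j}{1 - \lambda_j} = 0 \quad \Longrightarrow \quad \hat\lambda_j = \frac{d_j}{r_j}.$$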
Redistribute to the right justification (Efron, 1967)
Algorithm: The idea behind Efron's approach is to spread the contribution of each censored observation out over all the possible event times to its right.
Step (1): arrange the n observed times (deaths or censoring) in increasing order. If there are ties, put censored after deaths.
Step (2): Assign weight (1/n) to each time.
Step (3): Moving from left to right, each time you encounter a censored observation, distribute its current mass equally among all times to its right.
Step (4): Calculate $\hat S(\tau_j)$ by subtracting the final weight for death time $\tau_j$ from $\hat S(\tau_{j-1})$.
Example of “redistribute to the right” algorithm
Consider the following event times: 2, 2.5+, 3, 3, 4, 4.5+, 5, 6, 7. The algorithm goes as follows:
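A hand-rolled sketch of the bookkeeping (the function redistribute and its interface are our own; only base S/R functions are used):

    redistribute <- function(time, status) {  # status: 1 = death, 0 = censored
        ord <- order(time, -status)           # step 1: censored after tied deaths
        time <- time[ord]; status <- status[ord]
        n <- length(time)
        w <- rep(1/n, n)                      # step 2: weight 1/n each
        for (i in 1:n) {
            if (status[i] == 0 && i < n) {    # step 3: censored observation:
                w[(i+1):n] <- w[(i+1):n] + w[i] / (n - i)  # mass to the right
                w[i] <- 0
            }
        }
        # step 4: S-hat drops by the final weight at each death time
        out <- data.frame(time = time[status == 1],
                          surv = 1 - cumsum(w)[status == 1])
        out[!duplicated(out$time, fromLast = TRUE), ]  # one row per death time
    }
    redistribute(time   = c(2, 2.5, 3, 3, 4, 4.5, 5, 6, 7),
                 status = c(1, 0,   1, 1, 1, 0,   1, 1, 1))
    # agrees with KM: e.g. S-hat just after t = 3 is (8/9)(5/7) = 40/63 = 0.635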
Properties of the KM estimator: In the no-censoring case, $\hat S(t)$ is just an estimated probability from a binomial distribution, so

$$\hat S(t) \approx N\!\left( S(t),\ \frac{S(t)(1 - S(t))}{n} \right).$$

How does censoring affect this? $\hat S(t)$ is still approximately normal; the mean of $\hat S(t)$ converges to the true S(t); the variance is a bit more complicated (since the denominator n includes some censored observations). In general, we can construct $100(1-\alpha)\%$ confidence bands about $\hat S(t)$:

$$\hat S(t) \pm z_{1-\alpha/2}\, se[\hat S(t)].$$
Greenwood's formula: The standard variance estimator for $\hat S(t)$ is built up in steps.
1. Write the KM estimator as $\hat S(t) = \prod_{j:\, \tau_j < t} (1 - \hat\lambda_j)$, with $\hat\lambda_j = d_j / r_j$.
2. By standard results for binomial proportions, estimate the variance of each $\hat\lambda_j$ by

$$\widehat{var}(\hat\lambda_j) = \frac{\hat\lambda_j (1 - \hat\lambda_j)}{r_j}.$$
3. Estimate the variance of transformations using the delta method:
If Y is approximately normal with mean µ and variance σ², then g(Y) is approximately normally distributed with mean g(µ) and variance $[h(\mu)]^2 \sigma^2$, where $h(\mu) = g'(\mu)$.
4. Two specific examples of the delta method:
If $Z = \log(Y)$, then $var(Z) \approx \left( \frac{1}{\mu} \right)^2 \sigma^2$; if $Z = \exp(Y)$, then $var(Z) \approx [\exp(\mu)]^2 \sigma^2$.
The examples above use the following results from calculus:

$$\frac{d}{dy} \log(y) = \frac{1}{y}; \qquad \frac{d}{dy} \exp(y) = \exp(y).$$
Since $\hat S(t)$ is a product, computing its variance directly is awkward; instead we look at its log:

$$\log \hat S(t) = \sum_{j:\, \tau_j < t} \log(1 - \hat\lambda_j).$$

Thus, by the approximate independence of the $\hat\lambda_j$'s and the delta method with $Z = \log(Y)$:

$$var[\log \hat S(t)] \approx \sum_{j:\, \tau_j < t} \left( \frac{1}{1 - \hat\lambda_j} \right)^2 \widehat{var}(\hat\lambda_j) = \sum_{j:\, \tau_j < t} \frac{d_j}{r_j (r_j - d_j)}.$$

Then, since $\hat S(t) = \exp(\log \hat S(t))$, the delta method with $Z = \exp(Y)$ gives Greenwood's formula:

$$\widehat{var}[\hat S(t)] \approx [\hat S(t)]^2 \sum_{j:\, \tau_j < t} \frac{d_j}{r_j (r_j - d_j)}.$$
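A numerical check of Greenwood's formula against the survival library, reusing the treated-group fit above (in R, summary(fit)$std.err is the standard error of $\hat S(t)$; note that fit$std.err is on the cumulative-hazard scale instead):

    s <- summary(fit)                 # one row per death time
    d <- s$n.event; r <- s$n.risk
    se.gw <- s$surv * sqrt(cumsum(d / (r * (r - d))))  # Greenwood se by hand
    cbind(time = s$time, surv = s$surv, by.hand = se.gw, survfit = s$std.err)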
5. Back to confidence intervals
Problem: This simple normal-approximation interval can yield values > 1 or < 0.
Better approach: Get a 95% confidence interval for L(t) = log(− log(S(t))).
Since this quantity is unrestricted, the confidence interval will be in the proper range when we transform back. To see why this works, note the following:
① Since $\hat S(t)$ is an estimated probability: $0 \le \hat S(t) \le 1$;
② Taking the log gives the bounds: $-\infty \le \log \hat S(t) \le 0$;
③ Taking the opposite sign: $0 \le -\log \hat S(t) \le \infty$;
④ Taking the log again: $-\infty \le \log[-\log \hat S(t)] \le \infty$;
⑤ To transform back, reverse the steps with $S(t) = \exp(-\exp(L(t)))$.
Log-log Approach for Confidence Intervals:
① Define $L(t) = \log(-\log(S(t)))$;
② Form a 95% confidence interval for L(t) based on $\hat L(t)$, yielding $\hat L(t) \pm 1.96\, se[\hat L(t)]$, where $se[\hat L(t)] = \sqrt{\widehat{var}[\hat L(t)]}$;
③ Since $S(t) = \exp(-\exp(L(t)))$, the confidence bounds for the 95% CI on S(t) are $\left[ \exp\{-e^{\hat L(t) + 1.96\, se[\hat L(t)]}\},\ \exp\{-e^{\hat L(t) - 1.96\, se[\hat L(t)]}\} \right]$ (the upper bound for L(t) gives the lower bound for S(t));
④ Substituting $\hat L(t) = \log(-\log \hat S(t))$ back into the above bounds, we get confidence bounds of $\left[ \hat S(t)^{e^{1.96\, se[\hat L(t)]}},\ \hat S(t)^{e^{-1.96\, se[\hat L(t)]}} \right]$;
⑤ It remains to calculate $\widehat{var}[\hat L(t)]$, the variance of $\log(-\log \hat S(t))$;
⑥ Applying the delta method with $g(y) = \log(-\log y)$, so that $g'(y) = 1/(y \log y)$, to Greenwood's formula:

$$\widehat{var}[\hat L(t)] \approx \frac{1}{[\log \hat S(t)]^2} \sum_{j:\, \tau_j < t} \frac{d_j}{r_j (r_j - d_j)}.$$
⑦ The resulting 95% confidence interval for S(t) is:

$$\left[ \hat S(t)^{\exp(1.96\, se[\hat L(t)])},\ \hat S(t)^{\exp(-1.96\, se[\hat L(t)])} \right].$$
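In the survival library, these log-log intervals are requested with the conf.type = "log-log" option (the default in R is conf.type = "log", a different transformation):

    fit.ll <- survfit(Surv(time, status) ~ 1, conf.type = "log-log")
    summary(fit.ll)   # 95% limits now guaranteed to lie in [0, 1]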
(2) The Lifetable or Actuarial Estimator
Our goal is still to estimate survival function, hazard, and density function, but this is complicated by the fact that we don’t know exactly when during each time interval an event occurs.
+ one of the oldest techniques around
+ used by actuaries, demographers, etc.
+ applies when the data are grouped
Population Life Tables:
+ cohort life table - describes the mortality experience from birth to death for a particular cohort of people born at about the same time. People at risk at the start of an interval are those who survived the previous interval.
+ current life table - constructed from (1) census information on the number of individuals alive at each age, for a given year and (2) vital statistics on the number of deaths or failures in a given year, by age. This type of lifetable is often reported in terms of a hypothetical cohort of 100,000 people.
Generally, censoring is not an issue for Population Life Tables.
Clinical life tables - apply to grouped survival data from patients with specific diseases. Because patients can enter the study at different times, or be lost to follow-up, censoring must be allowed for.
We could apply the KM formula directly to the counts in such a table, estimating S(t) as $\prod_j (1 - d_j/r_j)$. However, this approach is unsatisfactory for grouped data: it treats the problem as though it were in discrete time, with events happening only at 1 yr, 2 yr, etc. In fact, what we are trying to calculate here is the conditional probability of dying within the interval, given survival to the beginning of it.
Quantities estimated, writing $r_j' = r_j - c_j/2$ for the number at risk adjusted for the $c_j$ withdrawals in the j-th interval (censored individuals are assumed to be at risk for half the interval):
- Conditional probability of dying: $\hat q_j = d_j / r_j'$;
- Conditional probability of surviving: $\hat p_j = 1 - \hat q_j$;
- Cumulative probability of surviving at $t_j$: $\hat S(t_j) = \prod_{l \le j} \hat p_l$.
Other quantities estimated at the midpoint of the j-th interval (of width $b_j$):
- Hazard in the j-th interval: $\hat\lambda(t_{mj}) = \frac{d_j}{b_j (r_j' - d_j/2)}$;
- Density at the midpoint of the j-th interval: $\hat f(t_{mj}) = \frac{\hat S(t_{j-1}) - \hat S(t_j)}{b_j} = \frac{\hat S(t_{j-1})\, \hat q_j}{b_j}$.
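Since the grouped counts themselves are not reproduced here, the following hand-rolled sketch shows the lifetable bookkeeping on made-up numbers (the function name lifetable and all counts are hypothetical):

    lifetable <- function(n0, b, d, cens) {      # widths b, deaths d, withdrawals cens
        k  <- length(d)
        r  <- n0 - cumsum(c(0, (d + cens)[-k]))  # number entering each interval
        rp <- r - cens/2                         # adjusted number at risk
        q  <- d / rp                             # cond. prob. of dying
        p  <- 1 - q                              # cond. prob. of surviving
        S  <- cumprod(p)                         # S-hat at the end of each interval
        haz  <- d / (b * (rp - d/2))             # hazard at the interval midpoint
        dens <- c(1, S[-k]) * q / b              # density at the interval midpoint
        data.frame(r, rp, q, p, S, haz, dens)
    }
    # e.g. four yearly intervals starting with 100 patients (made-up counts)
    lifetable(n0 = 100, b = rep(1, 4), d = c(10, 8, 6, 4), cens = c(2, 3, 4, 5))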
(3) Estimating the cumulative hazard — Nelson-Aalen estimator
Just as we did for the KM, think of dividing the observed time span of the study into a series of fine intervals so that there is only one event per interval:
Suppose we want to estimate $\Lambda(t) = \int_0^t \lambda(u)\, du$, the cumulative hazard at time t. Λ(t) can then be approximated by a sum:

$$\Lambda(t) \approx \sum_j \lambda_j \Delta,$$

where the sum is over intervals, $\lambda_j$ is the value of the hazard in the j-th interval, and ∆ is the width of each interval. Since $\lambda_j \Delta$ is approximately the probability of dying in the j-th interval given survival to its start, we can further approximate $\lambda_j \Delta$ by $d_j / r_j$. It follows that $\hat\Lambda(t)$ will change only at death times, and hence we write the Nelson-Aalen estimator as:

$$\hat\Lambda_{NA}(t) = \sum_{j:\, \tau_j < t} \frac{d_j}{r_j}.$$
Fleming-Harrington estimator of the survival function:

$$\hat S_{FH}(t) = \exp\{-\hat\Lambda_{NA}(t)\}.$$

In general, this estimator will be close to the Kaplan-Meier estimator, $\hat S_{KM}(t)$. We can also go the other way: take the Kaplan-Meier estimate of S(t), and use it to calculate an alternative estimate of the cumulative hazard function:

$$\hat\Lambda_{KM}(t) = -\log \hat S_{KM}(t).$$
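Both cumulative-hazard estimates can be computed from the components of the KM fit above (a sketch reusing the treated-group survfit object fit):

    na <- cumsum(fit$n.event / fit$n.risk)  # Nelson-Aalen Lambda-hat
    fh <- exp(-na)                          # Fleming-Harrington S-hat
    km.haz <- -log(fit$surv)                # Lambda-hat from the KM S-hat
    cbind(time = fit$time, nelson.aalen = na, fh.surv = fh, km.surv = fit$surv)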