Merge pull request #496 from CamDavidsonPilon/v0.15.0

0.15.0

CamDavidsonPilon authored Nov 22, 2018
2 parents 61ef503 + 815588f commit fc97807

Showing 32 changed files with 2,498 additions and 766 deletions.
24 changes: 24 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,29 @@
### Changelogs

#### 0.15.0
- adding a `robust` param to `CoxPHFitter`'s `fit`. This enables at least i) using non-integer weights in the model (these could be sampling weights like IPTW), and ii) more honest standard errors under mis-specified models (e.g. non-proportional hazards). Under the hood it's a sandwich estimator. This does not handle ties, so if there is a high number of ties, results may significantly differ from other software.
- `standard_errors_` is now a property on fitted `CoxPHFitter` which describes the standard errors of the coefficients.
- `variance_matrix_` is now a property on fitted `CoxPHFitter` which describes the variance matrix of the coefficients.
- a new convergence criterion for `CoxPHFitter` and `CoxTimeVaryingFitter`, called the Newton-decrement. Tests show it is as accurate (w.r.t. the previous coefficient-based criterion) and typically shaves off a single step, resulting in generally faster convergence. See https://www.cs.cmu.edu/~pradeepr/convexopt/Lecture_Slides/Newton_methods.pdf. Details about the Newton-decrement are added to the `show_progress` statements.
- Minimum supported scipy version is 1.0
- Convergence errors in models that use Newton-Raphson methods now throw a `ConvergenceError`, instead of a `ValueError` (the former is a subclass of the latter, however).
- `AalenAdditiveModel` raises `ConvergenceWarning` instead of printing a warning.
- `KaplanMeierFitter` now has a cumulative plot option. Example `kmf.plot(invert_y_axis=True)`
- a `weights_col` option has been added to `CoxTimeVaryingFitter` that allows for time-varying weights.
- `WeibullFitter` has a new `show_progress` param and additional information if the convergence fails.
- The `print_summary` method of `CoxPHFitter`, `ExponentialFitter`, `WeibullFitter` and `CoxTimeVaryingFitter` is updated with new fields.
- `WeibullFitter` has renamed the incorrect `_jacobian` to `_hessian_`.
- `variance_matrix_` is now a property on fitted `WeibullFitter` which describes the variance matrix of the parameters.
- The default `WeibullFitter().timeline` has changed from integers between the min and max duration to _n_ floats between the min and max durations, where _n_ is the number of observations.
- Performance improvements for `CoxPHFitter` (~20% faster)
- Performance improvements for `CoxTimeVaryingFitter` (~100% faster)
- In Python3, Univariate models are now serialisable with `pickle` (see the sketch after this list). Thanks @dwilson1988 for the contribution. For Python2, `dill` is still the preferred method.
- `baseline_cumulative_hazard_` (and derivatives of that) on `CoxPHFitter` now correctly incorporate the `weights_col`.
- Fixed a bug in `KaplanMeierFitter` when late entry times lined up with death events. Thanks @pzivich
- Adding `cluster_col` argument to `CoxPHFitter` so users can specify groups of subjects/rows that may be correlated.
- Shifting the "significance codes" for p-values down an order of magnitude. (For example, p-values between 0.05 and 0.1 are not noted at all, and p-values between 0.01 and 0.05 are noted with `.`, etc.). This deviates from how they are presented in other software. There is an argument to be made to remove p-values from lifelines altogether (_become the changes you want to see in the world_ lol), but I worry that people could compute the p-values by hand incorrectly, a worse outcome I think. So, this is my stance. P-values between 0.05 and 0.1 offer _very_ little information, so they are removed. There is a growing movement in statistics to shift "significant" findings to p-values less than 0.01 anyway.
- New fitter for cumulative incidence of multiple risks `AalenJohansenFitter`. Thanks @pzivich! See "Methodologic Issues When Estimating Risks in Pharmacoepidemiology" for a nice overview of the model.
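
A minimal sketch of the pickle support mentioned above (the Waltons dataset and the `median_` attribute are used purely for illustration):

```python
import pickle

from lifelines import KaplanMeierFitter
from lifelines.datasets import load_waltons

waltons = load_waltons()
kmf = KaplanMeierFitter()
kmf.fit(waltons['T'], event_observed=waltons['E'])

blob = pickle.dumps(kmf)            # serialize the fitted model
kmf_restored = pickle.loads(blob)   # ...and restore it later
print(kmf_restored.median_)         # the restored model keeps its estimates
```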

#### 0.14.6
- fix for n > 2 groups in `multivariate_logrank_test` (again).
- fix bug for when `event_observed` column was not boolean.
102 changes: 99 additions & 3 deletions docs/Examples.rst
@@ -282,6 +282,18 @@ Hide confidence intervals
:height: 300


Invert axis
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

    kmf.fit(T, label="kmf.plot(invert_y_axis=True)")
    kmf.plot(invert_y_axis=True)

.. image:: /images/invert_y_axis.png
    :height: 300


Set the index/timeline of an estimate
##############################################

@@ -403,7 +415,7 @@ id T E
Example SQL queries and transformations to get time varying data
####################################################################

For Cox time-varying models, we discussed what the dataset should look like in :ref:`Dataset for time-varying regression`. Typically we have a base dataset, and then we fold in the covariate datasets. Below are some SQL queries and Python transformations from end-to-end.
For Cox time-varying models, we discussed what the dataset should look like in :ref:`Dataset creation for time-varying regression`. Typically we have a base dataset, and then we fold in the covariate datasets. Below are some SQL queries and Python transformations from end-to-end.


Base dataset: ``base_df``
@@ -482,7 +494,7 @@ Initially, this can't be added to our baseline dataframe. Using ``utils.covariat
Example cumulative total using time-varying covariates
############################################################

Often we have either __transactional covariate datasets__ or __state covariate datasets__. In a transactional dataset, it may make sense to sum up the covariates to represent administration of a treatment over time. For example, in the risky world of start-ups, we may want to sum up the funding amount received at a certain time. We also may be interested in the amount of the last round of funding. Below is an example to do just that:
Often we have either transactional covariate datasets or state covariate datasets. In a transactional dataset, it may make sense to sum up the covariates to represent administration of a treatment over time. For example, in the risky world of start-ups, we may want to sum up the funding amount received at a certain time. We also may be interested in the amount of the last round of funding. Below is an example to do just that:

Suppose we have an initial DataFrame of start-ups like:

@@ -573,6 +585,8 @@ Problems with convergence in the Cox Proportional Hazard Model
################################################################
Since the estimation of the coefficients in the Cox proportional hazard model is done using the Newton-Raphson algorithm, there is sometimes a problem with convergence. Here are some common symptoms and possible resolutions:

0. First diagnostic: look for ``ConvergenceWarning`` in the output. Most often, problems in convergence are the result of problems in the dataset. lifelines runs diagnostic checks against the dataset before fitting, and warnings are emitted to the user.

1. ``delta contains nan value(s). Convergence halted.``: First try adding ``show_progress=True`` in the ``fit`` function. If the values in ``delta`` grow unboundedly, it's possible the ``step_size`` is too large. Try setting it to a small value (0.1-0.5); a combined sketch follows this list.

2. ``LinAlgError: Singular matrix``: This means that there is a linear combination in your dataset. That is, a column is equal to a linear combination of one or more other columns. Try to find the relationship by looking at the correlation matrix of your dataset.
@@ -584,7 +598,89 @@ Since the estimation of the coefficients in the Cox proportional hazard model is
3. Related to the above, the relationship between a covariate and the duration may be completely determined. For example, if the rank correlation between a covariate and the duration is very close to 1 or -1, then the log-likelihood can be increased arbitrarily using just that covariate. Look for a ``ConvergenceWarning`` after the ``fit`` call.
4. Another problem may be a collinear relationship in your dataset. See point 2 above.

4. Adding a very small ``penalizer_coef`` significantly changes the results. This probably means that the step size is too large. Try decreasing it, and returning the ``penalizer_coef`` term to 0.
4. If adding a very small ``penalizer`` significantly changes the results (``CoxPHFitter(penalizer=0.0001)``), then this probably means that the step size in the iterative algorithm is too large. Try decreasing it (``.fit(..., step_size=0.50)`` or smaller), and returning the ``penalizer`` term to 0.

5. If using the ``strata`` argument, make sure your stratification group sizes are not too small. Try ``df.groupby(strata).size()``.
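
Putting the points above together, here is a minimal troubleshooting sketch using the Rossi dataset; the specific ``penalizer`` and ``step_size`` values are illustrative only and should be tuned to your data.

.. code-block:: python

    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi

    rossi = load_rossi()

    # a small penalizer and a reduced step size can help stabilize a troublesome fit;
    # show_progress prints per-iteration information so you can watch the deltas
    cph = CoxPHFitter(penalizer=0.0001)
    cph.fit(rossi, 'week', 'arrest', show_progress=True, step_size=0.25)
    cph.print_summary()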

Adding weights to observations in a Cox model
##############################################

There are two common uses for weights in a model. The first is as a data size reduction technique (known as case weights). If the dataset has more than one subject with identical attributes, including duration and event, then their likelihood contributions are the same as well. Thus, instead of computing the log-likelihood for each individual, we can compute it once and multiply it by the count of subjects with identical attributes. In practice, this involves first grouping subjects by covariates and counting. For example, using the Rossi dataset, we will use Pandas to group by the attributes (but other data processing tools, like Spark, could do this as well):

.. code-block:: python

    from lifelines.datasets import load_rossi

    rossi = load_rossi()
    rossi_weights = rossi.copy()
    rossi_weights['weights'] = 1.
    rossi_weights = rossi_weights.groupby(rossi.columns.tolist())['weights'].sum()\
                                 .reset_index()

The original dataset has 432 rows, while the grouped dataset has 387 rows plus an additional ``weights`` column. ``CoxPHFitter`` has an additional parameter to specify which column is the weight column.

.. code-block:: python

    from lifelines import CoxPHFitter

    cp = CoxPHFitter()
    cp.fit(rossi_weights, 'week', 'arrest', weights_col='weights')

The fitting should be faster, and the results identical to the unweighted dataset. This option is also available in the ``CoxTimeVaryingFitter``.


The second use of weights is sampling weights. These are typically positive, non-integer weights that represent some artificial under/over-sampling of observations (e.g. inverse probability of treatment weights). It is recommended to set ``robust=True`` in the call to ``fit``, as the usual standard errors are incorrect for sampling weights. The ``robust`` flag will use the sandwich estimator for the standard errors.
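
As a minimal sketch (the dataframe and weights below are fabricated purely for illustration; in practice the weights would come from, e.g., a propensity model):

.. code-block:: python

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        'T':       [5, 3, 9, 8, 7, 4, 4, 3, 2, 5],
        'E':       [1, 1, 1, 0, 1, 1, 1, 0, 0, 1],
        'var1':    [0.2, 0.5, -1.1, 0.7, 1.9, -0.5, 0.3, 1.0, -0.9, 0.1],
        'weights': [1.5, 0.8, 1.1, 2.0, 0.6, 1.3, 0.9, 1.2, 0.4, 1.7],
    })

    cph = CoxPHFitter()
    # robust=True invokes the sandwich estimator, which is appropriate for sampling weights
    cph.fit(df, 'T', 'E', weights_col='weights', robust=True)
    cph.print_summary()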

.. warning:: The implementation of the sandwich estimator does not handle ties correctly (under the Efron handling of ties), and will give slightly or significantly different results from other software depending on the frequency of ties.


Correlations between subjects in a Cox model
###################################################

There are cases when your dataset contains correlated subjects, which breaks the independent-and-identically-distributed assumption. What are some cases when this may happen?

1. If a subject appears more than once in the dataset (common when subjects can have the event more than once)
2. If using a matching technique, like propensity-score matching, there is a correlation between pairs.

In both cases, the reported standard errors from an unadjusted Cox model will be wrong. In order to adjust for these correlations, there is a ``cluster_col`` keyword in ``CoxPHFitter.fit`` that allows you to specify the column in the dataframe that contains designations for correlated subjects. For example, if subjects in rows 1 & 2 are correlated, but no other subjects are correlated, then the ``cluster_col`` column should have the same value for rows 1 & 2, and all others unique. Another example: for matched pairs, each subject in the pair should have the same value.

.. code-block:: python

    from lifelines.datasets import load_rossi
    from lifelines import CoxPHFitter

    rossi = load_rossi()

    # this may come from a database, or other libraries that specialize in matching
    matched_pairs = [
        (156, 230),
        (275, 228),
        (61, 252),
        (364, 201),
        (54, 340),
        (130, 33),
        (183, 145),
        (268, 140),
        (332, 259),
        (314, 413),
        (330, 211),
        (372, 255),
        # ...
    ]

    rossi['id'] = None  # we will populate this column

    for i, pair in enumerate(matched_pairs):
        subjectA, subjectB = pair
        rossi.loc[subjectA, 'id'] = i
        rossi.loc[subjectB, 'id'] = i

    rossi = rossi.dropna(subset=['id'])

    cph = CoxPHFitter()
    cph.fit(rossi, 'week', 'arrest', cluster_col='id')

Specifying ``cluster_col`` will handle correlations, and invoke the robust sandwich estimator for standard errors (the same as setting ``robust=True``).
15 changes: 11 additions & 4 deletions docs/Quickstart.rst
@@ -159,16 +159,23 @@ The input of the ``fit`` method's API in a regression is different. All the data
cph.print_summary()
"""
n=200, number of events=189
duration col = T
event col = E
number of subjects = 200
number of events = 189
log-likelihood = -807.620
time fit was run = 2018-10-23 02:44:18 UTC
---
coef exp(coef) se(coef) z p lower 0.95 upper 0.95
var1 0.2213 1.2477 0.0743 2.9796 0.0029 0.0757 0.3669 **
var2 0.0509 1.0522 0.0829 0.6139 0.5393 -0.1116 0.2134
var3 0.2186 1.2443 0.0758 2.8836 0.0039 0.0700 0.3672 **
var1 0.2222 1.2488 0.0743 2.9920 0.0028 0.0767 0.3678 **
var2 0.0510 1.0523 0.0829 0.6148 0.5387 -0.1115 0.2134
var3 0.2183 1.2440 0.0758 2.8805 0.0040 0.0698 0.3669 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Concordance = 0.580
Likelihood ratio test = 15.540 on 3 df, p=0.00141
"""
cph.plot()