Title: Lotus: An algorithm for building accurate and comprehensible logistic regression trees
Authors: Chan, K.-Y.; Loh, W.-Y.
Keywords: Piecewise linear logistic regression; Trend-adjusted chi-square test; Unbiased variable selection
Issue Date: Dec-2004
Citation: Chan, K.-Y., Loh, W.-Y. (2004-12). Lotus: An algorithm for building accurate and comprehensible logistic regression trees. Journal of Computational and Graphical Statistics 13 (4): 826-852. ScholarBank@NUS Repository. https://doi.org/10.1198/106186004X13064
Abstract: Logistic regression is a powerful technique for fitting models to data with a binary response variable, but the models are difficult to interpret if collinearity, nonlinearity, or interactions are present. Moreover, model adequacy is hard to judge because there are few diagnostics for choosing variable transformations and no true goodness-of-fit test. To overcome these problems, this article proposes fitting a piecewise (multiple or simple) linear logistic regression model by recursively partitioning the data and fitting a different logistic regression in each partition. This allows nonlinear features of the data to be modeled without requiring variable transformations. The binary tree that results from the partitioning process is pruned to minimize a cross-validation estimate of the predicted deviance, which obviates the need for a formal goodness-of-fit test. The resulting model is especially easy to interpret if a simple linear logistic regression is fitted to each partition, because the tree structure and the set of graphs of the fitted functions in the partitions comprise a complete visual description of the model. Trend-adjusted chi-square tests are used to control bias in variable selection at the intermediate nodes, protecting the integrity of inferences drawn from the tree structure. The method is compared with standard stepwise logistic regression on 30 real datasets, several containing tens to hundreds of thousands of observations. Averaged across the datasets, the results show that the method reduces predicted mean deviance by 9% to 16%. We use an example from the Dutch insurance industry to demonstrate how the method can identify and produce an intelligible profile of prospective customers.
Source Title: Journal of Computational and Graphical Statistics
URI: http://scholarbank.nus.edu.sg/handle/10635/105210
ISSN: 1061-8600
DOI: 10.1198/106186004X13064
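To make the abstract's core idea concrete, here is a minimal sketch of piecewise simple linear logistic regression: split the data on one predictor and fit a separate one-variable logistic model in each partition, choosing the split that minimizes the summed deviance of the two fits. This is an illustration of the general technique only, not the authors' LOTUS implementation (which uses trend-adjusted chi-square tests for split selection and cross-validation pruning); all function names and the toy data are invented for the example.

```python
# Illustrative sketch of one level of a logistic regression tree --
# NOT the authors' LOTUS code. All names here are made up for the demo.
import math

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def fit_logistic(xs, ys, lr=0.1, iters=600):
    """Fit p(y=1|x) = sigmoid(a + b*x) by plain gradient descent."""
    a, b = 0.0, 0.0
    n = float(len(xs))
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            r = sigmoid(a + b * x) - y  # residual p - y
            ga += r
            gb += r * x
        a -= lr * ga / n
        b -= lr * gb / n
    return a, b

def deviance(xs, ys, a, b):
    """-2 * log-likelihood of a fitted simple linear logistic model."""
    d = 0.0
    for x, y in zip(xs, ys):
        p = min(max(sigmoid(a + b * x), 1e-12), 1.0 - 1e-12)
        d -= 2.0 * (y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return d

def best_split(xs, ys, min_leaf=5):
    """Pick the split point s on x that minimizes the total deviance of
    separate logistic fits on the partitions {x < s} and {x >= s}."""
    best_s, best_dev = None, float("inf")
    for s in sorted(set(xs))[1:]:
        left = [(x, y) for x, y in zip(xs, ys) if x < s]
        right = [(x, y) for x, y in zip(xs, ys) if x >= s]
        if len(left) < min_leaf or len(right) < min_leaf:
            continue
        total = 0.0
        for part in (left, right):
            px = [x for x, _ in part]
            py = [y for _, y in part]
            a, b = fit_logistic(px, py)
            total += deviance(px, py, a, b)
        if total < best_dev:
            best_s, best_dev = s, total
    return best_s, best_dev

# Toy data with a nonmonotone response: y = 1 only on a middle band of x,
# which a single linear logistic regression cannot capture.
xs = [i / 10.0 for i in range(40)]
ys = [1 if 1.0 < x < 3.0 else 0 for x in xs]
split, piecewise_dev = best_split(xs, ys)
print("chosen split:", split, "piecewise deviance: %.2f" % piecewise_dev)
```

Within each partition the response is (roughly) monotone in x, so the per-partition fits achieve a much lower total deviance than one global logistic fit, which is the abstract's point about modeling nonlinearity without variable transformations.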
Appears in Collections: Staff Publications