Cost-sensitive feature selection for support vector machines
|Author||Benítez Peña, Sandra; Blanquero Bravo, Rafael; Carrizosa Priego, Emilio José; Ramírez Cobo, Josefa|
|Department||Universidad de Sevilla. Departamento de Estadística e Investigación Operativa|
|Published in||Computers and Operations Research|
|Abstract||Feature Selection (FS) is a crucial procedure in Data Science tasks such as Classification, since it identifies the relevant variables, thus making classification procedures more interpretable and more effective by reducing noise and overfitting. The relevance of features in a classification procedure is linked to the fact that misclassification costs are frequently asymmetric, since false positive and false negative cases may have very different consequences. However, off-the-shelf FS procedures seldom take such cost-sensitivity of errors into account. In this paper we propose a mathematical-optimization-based FS procedure embedded in one of the most popular classification procedures, namely, Support Vector Machines (SVM), accommodating asymmetric misclassification costs. The key idea is to replace the traditional margin maximization by minimizing the number of features selected, while imposing upper bounds on the false positive and false negative rates. The problem is written as an integer linear problem plus a quadratic convex problem for SVM with both linear and radial kernels. The reported numerical experience demonstrates the usefulness of the proposed FS procedure. Indeed, our results on benchmark data sets show that a substantial decrease in the number of features is obtained, whilst the desired trade-off between false positive and false negative rates is achieved.|
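The two ingredients of the abstract, sparsity in the selected features and asymmetric misclassification costs, can be loosely illustrated with an off-the-shelf sparse linear SVM. This is a minimal sketch only: it uses an l1 penalty for feature sparsity and class weights for cost asymmetry, not the paper's exact integer-linear-plus-convex-quadratic formulation with explicit bounds on false positive and false negative rates.

```python
# Hedged sketch: approximate cost-sensitive feature selection with a
# sparse linear SVM. NOT the authors' exact method -- the l1 penalty
# stands in for minimizing the feature count, and class_weight stands
# in for the upper bounds on false positive / false negative rates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic data: 30 features, only 5 of which are informative.
X, y = make_classification(n_samples=200, n_features=30,
                           n_informative=5, random_state=0)

# Asymmetric costs: errors on class 1 (false negatives) are
# penalized 5x more heavily than errors on class 0.
clf = LinearSVC(penalty="l1", dual=False, C=0.1,
                class_weight={0: 1.0, 1: 5.0}, max_iter=10000)
clf.fit(X, y)

# Features with (numerically) nonzero coefficients are "selected".
selected = np.flatnonzero(np.abs(clf.coef_.ravel()) > 1e-6)
print(f"{len(selected)} of {X.shape[1]} features selected")
```

In this approximation the trade-off between sparsity and error rates is tuned indirectly through `C` and the class weights, whereas the paper's formulation controls the false positive and false negative rates directly as constraints.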