## Journal of Applied Mathematics

### Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors

#### Abstract

Feature selection is an essential process in data mining applications since it reduces a model’s complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem for numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs, misclassification costs, and error boundaries. It is distinguished from existing models mainly by the error boundaries. Second, a covering-based rough set model with normally distributed measurement errors is constructed. With this model, coverings are constructed from the data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model. It is more realistic than existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques for the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of cost-sensitive learning.
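The paper's own algorithms are not reproduced in this abstract, but the setting it describes can be illustrated with a minimal sketch: neighborhoods (coverings) built from numeric data using per-feature error boundaries (e.g., derived from normal-distribution confidence intervals), and a greedy heuristic that trades test costs against misclassification costs. All names, the additive cost structure, and the uniform misclassification penalty below are illustrative assumptions, not the authors' exact formulation.

```python
def neighborhood(data, i, features, radii):
    """Objects indistinguishable from object i: every selected feature
    differs by at most that feature's error boundary (illustratively,
    something like 2*sigma for normal measurement errors)."""
    return frozenset(
        j for j in range(len(data))
        if all(abs(data[j][f] - data[i][f]) <= radii[f] for f in features)
    )

def covering(data, features, radii):
    # The neighborhoods of all objects form a covering of the universe.
    return [neighborhood(data, i, features, radii) for i in range(len(data))]

def total_cost(data, labels, features, radii, test_cost, mis_cost):
    # Test cost: sum over selected features. Misclassification cost:
    # charged for every object whose neighborhood mixes class labels.
    tc = sum(test_cost[f] for f in features)
    mc = sum(mis_cost
             for nb in covering(data, features, radii)
             if len({labels[j] for j in nb}) > 1)
    return tc + mc

def greedy_select(data, labels, radii, test_cost, mis_cost):
    """Heuristic sketch: repeatedly add whichever feature reduces the
    total cost the most, stopping when no single addition helps."""
    selected = set()
    best = total_cost(data, labels, selected, radii, test_cost, mis_cost)
    improved = True
    while improved:
        improved = False
        for f in set(range(len(data[0]))) - selected:
            c = total_cost(data, labels, selected | {f}, radii,
                           test_cost, mis_cost)
            if c < best:
                best, pick, improved = c, f, True
        if improved:
            selected.add(pick)
    return selected, best
```

A backtracking solver over feature subsets (with cost-based pruning, as the paper proposes) would guarantee a minimal-cost selection; the greedy loop above only approximates it.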

#### Article information

Source
J. Appl. Math., Volume 2013 (2013), Article ID 754698, 13 pages.

Dates
First available in Project Euclid: 14 March 2014

https://projecteuclid.org/euclid.jam/1394807942

Digital Object Identifier
doi:10.1155/2013/754698

Zentralblatt MATH identifier
1267.68199

#### Citation

Zhao, Hong; Min, Fan; Zhu, William. Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors. J. Appl. Math. 2013 (2013), Article ID 754698, 13 pages. doi:10.1155/2013/754698. https://projecteuclid.org/euclid.jam/1394807942

#### References

• P. Lanzi, “Fast feature selection with genetic algorithms: a filter approach,” in Proceedings of the IEEE International Conference on Evolutionary Computation, 1997.
• T. L. B. Tseng and C. C. Huang, “Rough set-based approach to feature selection in customer relationship management,” Omega, vol. 35, no. 4, pp. 365–383, 2007.
• N. Zhong, J. Z. Dong, and S. Ohsuga, “Using rough sets with heuristics to feature selection,” Journal of Intelligent Information Systems, vol. 16, no. 3, pp. 199–214, 2001.
• H. Liu and H. Motoda, Feature Selection for Knowledge Discovery and Data Mining, vol. 454, Springer, 1998.
• Y. Weiss, Y. Elovici, and L. Rokach, “The CASH algorithm: cost-sensitive attribute selection using histograms,” Information Sciences, vol. 222, pp. 247–268, 2013.
• C. Elkan, “The foundations of cost-sensitive learning,” in Proceedings of the 7th International Joint Conference on Artificial Intelligence, 2001.
• W. Fan, S. Stolfo, J. Zhang, and P. Chan, “AdaCost: misclassification cost-sensitive boosting,” in Proceedings of the 16th International Conference on Machine Learning, 1999.
• E. B. Hunt, J. Marin, and P. J. Stone, Experiments in Induction, Academic Press, New York, NY, USA, 1966.
• M. Pazzani, C. Merz, P. M. K. Ali, T. Hume, and C. Brunk, “Reducing misclassification costs,” in Proceedings of the 11th International Conference of Machine Learning (ICML '94), Morgan Kaufmann, 1994.
• G. Fumera and F. Roli, “Cost-sensitive learning in support vector machines,” in Proceedings of VIII Convegno Associazione Italiana per l'Intelligenza Artificiale, 2002.
• C. X. Ling, Q. Yang, J. N. Wang, and S. C. Zhang, “Decision trees with minimal costs,” in Proceedings of the 21st International Conference on Machine Learning, 2004.
• R. Greiner, A. J. Grove, and D. Roth, “Learning cost-sensitive active classifiers,” Artificial Intelligence, vol. 139, no. 2, pp. 137–174, 2002.
• S. Ji and L. Carin, “Cost-sensitive feature acquisition and classification,” Pattern Recognition, vol. 40, pp. 1474–1485, 2007.
• N. Lavrac, D. Gamberger, and P. Turney, “Cost-sensitive feature reduction applied to a hybrid genetic algorithm,” in Proceedings of the 7th International Workshop on Algorithmic Learning Theory (ALT '96), 1996.
• F. Min, H. P. He, Y. H. Qian, and W. Zhu, “Test-cost-sensitive attribute reduction,” Information Sciences, vol. 181, pp. 4928–4942, 2011.
• R. Susmaga, “Computation of minimal cost reducts,” in Foundations of Intelligent Systems, Z. Ras and A. Skowron, Eds., vol. 1609 of Lecture Notes in Computer Science, pp. 448–456, Springer, Berlin, Germany, 1999.
• F. Min and W. Zhu, “Minimal cost attribute reduction through backtracking,” in Proceedings of the International Conference on Database Theory and Application, vol. 258 of FGIT-DTA/BSBT, CCIS, 2011.
• F. Min and Q. Liu, “A hierarchical model for test-cost-sensitive decision systems,” Information Sciences, vol. 179, no. 14, pp. 2442–2452, 2009.
• P. Turney, “Cost-sensitive classification: empirical evaluation of a hybrid genetic decision tree induction algorithm,” Journal of Artificial Intelligence Research, vol. 2, no. 1, pp. 369–409, 1994.
• D. Margineantu, “Methods for cost-sensitive learning,” 2001.
• S. Norton, “Generating better decision trees,” in Proceedings of the 11th International Joint Conference on Artificial Intelligence, 1989.
• M. Núñez, “The use of background knowledge in decision tree induction,” Machine Learning, vol. 6, no. 3, pp. 231–250, 1991.
• M. Tan, “Cost-sensitive learning of classification knowledge and its applications in robotics,” Machine Learning, vol. 13, no. 1, pp. 7–33, 1993.
• N. Johnson and S. Kotz, Continuous Distributions, John Wiley, New York, NY, USA.
• R. A. Johnson and D. W. Wichern, Applied Multivariate Statistical Analysis, vol. 4, Prentice Hall, Englewood Cliffs, NJ, USA, 3rd edition, 1992.
• F. Min, W. Zhu, H. Zhao, G. Y. Pan, J. B. Liu, and Z. L. Xu, “Coser: cost-sensitive rough sets,” 2012, http://grc.fjzs.edu.cn/~fmin/.
• Y. Y. Yao, “A partition model of granular computing,” Transactions on Rough Sets I, vol. 3100, pp. 232–253, 2004.
• H. Zhao, F. Min, and W. Zhu, “Test-cost-sensitive attribute reduction of data with normal distribution measurement errors,” Mathematical Problems in Engineering, vol. 2013, Article ID 946070, 12 pages, 2013.
• T. Y. Lin, “Granular computing on binary relations: analysis of conflict and Chinese wall security policy,” in Proceedings of Rough Sets and Current Trends in Computing, vol. 2475 of Lecture Notes in Artificial Intelligence, 2002.
• T. Y. Lin, “Granular computing: structures, representations, and applications,” in Lecture Notes in Artificial Intelligence, vol. 2639, 2003.
• L. Ma, “On some types of neighborhood-related covering rough sets,” International Journal of Approximate Reasoning, vol. 53, no. 6, pp. 901–911, 2012.
• H. Zhao, F. Min, and W. Zhu, “Test-cost-sensitive attribute reduction based on neighborhood rough set,” in Proceedings of the IEEE International Conference on Granular Computing, 2011.
• W. Zhu, “Generalized rough sets based on relations,” Information Sciences, vol. 177, no. 22, pp. 4997–5011, 2007.
• W. Zhu and F.-Y. Wang, “Reduction and axiomization of covering generalized rough sets,” Information Sciences, vol. 152, pp. 217–230, 2003.
• F. Min and W. Zhu, “Attribute reduction of data with error ranges and test costs,” Information Sciences, vol. 211, pp. 48–67, 2012.
• Z. Zhou and X. Liu, “Training cost-sensitive neural networks with methods addressing the class imbalance problem,” IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 1, pp. 63–77, 2006.
• H. Zhao, F. Min, and W. Zhu, “A backtracking approach to minimal cost feature selection of numerical data,” Journal of Information & Computational Science. In press.
• M. Kukar and I. Kononenko, “Cost-sensitive learning with neural networks,” in Proceedings of the 13th European Conference on Artificial Intelligence (ECAI '98), John Wiley & Sons, Chichester, UK, 1998.
• J. Lan, M. Hu, E. Patuwo, and G. Zhang, “An investigation of neural network classifiers with unequal misclassification costs and group sizes,” Decision Support Systems, vol. 48, no. 4, pp. 582–591, 2010.
• P. Turney, “Types of cost in inductive concept learning,” in Proceedings of the ICML-2000 Workshop on Cost-Sensitive Learning, 2000.
• S. Viaene and G. Dedene, “Cost-sensitive learning and decision making revisited,” European Journal of Operational Research, vol. 166, no. 1, pp. 212–220, 2005.
• Z. Pawlak, “Rough sets,” International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341–356, 1982.
• J. Błaszczyński, S. Greco, R. Słowiński, and M. Szeląg, “Monotonic variable consistency rough set approaches,” International Journal of Approximate Reasoning, vol. 50, no. 7, pp. 979–999, 2009.
• Z. Bonikowski, E. Bryniarski, and U. Wybraniec-Skardowska, “Extensions and intentions in the rough set theory,” Information Sciences, vol. 107, no. 1–4, pp. 149–167, 1998.
• M. Inuiguchi, Y. Yoshioka, and Y. Kusunoki, “Variable-precision dominance-based rough set approach and attribute reduction,” International Journal of Approximate Reasoning, vol. 50, no. 8, pp. 1199–1214, 2009.
• Y. Kudo, T. Murai, and S. Akama, “A granularity-based framework of deduction, induction, and abduction,” International Journal of Approximate Reasoning, vol. 50, no. 8, pp. 1215–1226, 2009.
• J. A. Pomykała, “Approximation operations in approximation space,” Bulletin of the Polish Academy of Sciences: Mathematics, vol. 35, no. 9-10, pp. 653–662, 1987.
• Y. Y. Yao, “Constructive and algebraic methods of the theory of rough sets,” Information Sciences, vol. 109, no. 1–4, pp. 21–47, 1998.
• Y. Y. Yao, “Probabilistic rough set approximations,” Journal of Approximate Reasoning, vol. 49, no. 2, pp. 255–271, 2008.
• W. Zakowski, “Approximations in the space (U, $\Pi$),” Demonstratio Mathematica, vol. 16, no. 40, pp. 761–769, 1983.
• W. Zhu, “Relationship among basic concepts in covering-based rough sets,” Information Sciences, vol. 179, no. 14, pp. 2478–2486, 2009.
• W. Zhu and F. Wang, “On three types of covering-based rough sets,” IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 8, pp. 1131–1144, 2007.
• S. Calegari and D. Ciucci, “Granular computing applied to ontologies,” International Journal of Approximate Reasoning, vol. 51, no. 4, pp. 391–409, 2010.
• W. Zhu and F. Wang, “Covering based granular computing for conflict analysis,” Intelligence and Security Informatics, pp. 566–571, 2006.
• Wikipedia, http://www.wikipedia.org/.
• Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic, Boston, Mass, USA, 1991.
• M. Dash and H. Liu, “Feature selection for classification,” Intelligent Data Analysis, vol. 1, no. 1–4, pp. 131–156, 1997.
• X. Wang, J. Yang, X. Teng, W. Xia, and R. Jensen, “Feature selection based on rough sets and particle swarm optimization,” Pattern Recognition Letters, vol. 28, no. 4, pp. 459–471, 2007.
• W. Siedlecki and J. Sklansky, “A note on genetic algorithms for large-scale feature selection,” Pattern Recognition Letters, vol. 10, no. 5, pp. 335–347, 1989.
• C. L. Blake and C. J. Merz, “UCI repository of machine learning databases,” 1998, http://www.ics.uci.edu/~mlearn/mlrepository.html.
• Q. H. Liu, F. Li, F. Min, M. Ye, and G. W. Yang, “An efficient reduction algorithm based on new conditional information entropy,” Control and Decision, vol. 20, no. 8, pp. 878–882, 2005 (Chinese).
• A. Skowron and C. Rauszer, “The discernibility matrices and functions in information systems,” in Intelligent Decision Support, 1992.
• G. Wang, “Attribute core of decision table,” in Proceedings of Rough Sets and Current Trends in Computing, vol. 2475 of Lecture Notes in Computer Science, 2002.