The development of data-mining applications such as text classification and molecular profiling has shown the need for machine learning algorithms that can benefit from both labeled and unlabeled data, where the unlabeled examples often greatly outnumber the labeled ones. In this paper we present a two-stage classifier that improves its predictive accuracy by making use of the available unlabeled data. It applies a weighted nearest neighbor classification algorithm to the combined example sets as a knowledge base. The examples from the unlabeled set are "pre-labeled" by an initial classifier that is built using the limited available training data. By choosing appropriate weights for this pre-labeled data, the nearest neighbor classifier consistently improves on the original classifier.
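The two-stage procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the nearest-centroid base classifier, Euclidean distance, and the `prelabel_weight` value are assumptions standing in for the paper's actual choices.

```python
import math
from collections import defaultdict

def dist(a, b):
    # Euclidean distance (an assumed metric, not specified in the abstract).
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def nearest_centroid_fit(X, y):
    # Stage 1: build an initial classifier from the limited labeled data.
    # Nearest-centroid stands in for whatever base classifier the paper uses.
    sums, counts = {}, defaultdict(int)
    for x, label in zip(X, y):
        if label not in sums:
            sums[label] = list(x)
        else:
            sums[label] = [a + b for a, b in zip(sums[label], x)]
        counts[label] += 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def nearest_centroid_predict(centroids, x):
    return min(centroids, key=lambda c: dist(centroids[c], x))

def weighted_knn_predict(X, y, w, x, k=3):
    # Weighted k-NN vote: each neighbor contributes its example weight.
    neighbors = sorted(zip(X, y, w), key=lambda t: dist(t[0], x))[:k]
    votes = defaultdict(float)
    for _, label, weight in neighbors:
        votes[label] += weight
    return max(votes, key=votes.get)

def two_stage_classify(X_lab, y_lab, X_unlab, x_query,
                       prelabel_weight=0.5, k=3):
    # Stage 1: pre-label the unlabeled pool with the initial classifier.
    centroids = nearest_centroid_fit(X_lab, y_lab)
    y_pre = [nearest_centroid_predict(centroids, x) for x in X_unlab]
    # Stage 2: weighted nearest neighbor over the combined example set.
    # Pre-labeled examples get a reduced weight; 0.5 is an assumed value,
    # not the weighting scheme derived in the paper.
    X_all = list(X_lab) + list(X_unlab)
    y_all = list(y_lab) + y_pre
    w_all = [1.0] * len(X_lab) + [prelabel_weight] * len(X_unlab)
    return weighted_knn_predict(X_all, y_all, w_all, x_query, k)
```

With two labeled points `(0, 0) → 'a'` and `(10, 10) → 'b'` and an unlabeled pool near those clusters, a query at `(1, 1)` is classified `'a'` because its nearest neighbors are the original labeled example plus down-weighted pre-labeled ones.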
Title of host publication: Advances in Knowledge Discovery and Data Mining
Subtitle of host publication: Proceedings of the 10th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2006)
Editors: Wee Keong Ng, Masaru Kitsuregawa, Jianzhong Li, Kuiyu Chang
ISBN (Print): 3-540-33206-5, 978-3-540-33206-0
Publication status: Published - 2006
Series: Lecture Notes in Computer Science