Browsing by author "Hacibeyoglu, Mehmet"

Listing 1 - 5 of 5
  • Item
    ATTRIBUTE REDUCTION BY PARTITIONING THE MINIMIZED DISCERNIBILITY FUNCTION
    (ICIC INTERNATIONAL, 2011) Kahramanli, Sirzat; Hacibeyoglu, Mehmet; Arslan, Ahmet
    The goal of attribute reduction is to reduce the problem size and search space for learning algorithms. The basic solution of this problem is to generate all possible minimal attribute subsets (MASes) and choose one of them with minimal size. This can be done by constructing a discernibility function (DF) from the dataset and converting it to disjunctive normal form (DNF). Since this conversion is NP-hard, heuristic algorithms are usually used for attribute reduction. However, these algorithms generate only one or a small number of the possible MASes, which is generally not sufficient for optimal dataset processing with respect to the simplicity of data representation and description, the speed and classification accuracy of the data mining algorithms, and the required amount of memory. In this study, we propose an algorithm that finds all MASes by iteratively partitioning the DF so that the part to be converted to DNF in each iteration has a space complexity no higher than the square root of the worst-case space complexity of converting the whole DF to DNF. The number of iterations is always fewer than the number of attributes.
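    The DF-to-DNF route described in the abstract can be sketched on a toy decision table (all data hypothetical). One clause is built per pair of rows with different classes, and the MASes are the minimal hitting sets of those clauses; brute-force enumeration stands in here for the paper's partitioned conversion.

    ```python
    from itertools import combinations

    # Toy decision table (hypothetical): each row is (condition attribute values, class).
    # Attributes are indexed 0..2.
    rows = [
        ((0, 0, 1), "yes"),
        ((0, 1, 1), "no"),
        ((1, 0, 0), "yes"),
        ((1, 1, 0), "no"),
    ]
    n_attrs = 3

    # Discernibility function as a set of clauses: for every pair of rows with
    # different classes, record the attributes on which the two rows differ.
    clauses = set()
    for (x, cx), (y, cy) in combinations(rows, 2):
        if cx != cy:
            diff = frozenset(i for i in range(n_attrs) if x[i] != y[i])
            if diff:
                clauses.add(diff)

    # Brute-force DNF: a subset "satisfies" the DF iff it intersects every clause;
    # the minimal such subsets are the MASes.
    hitting = [set(s) for r in range(1, n_attrs + 1)
               for s in combinations(range(n_attrs), r)
               if all(set(s) & c for c in clauses)]
    mases = [s for s in hitting if not any(t < s for t in hitting)]
    print(sorted(map(sorted, mases)))  # → [[1]]: attribute 1 alone discerns all classes
    ```

    On this table the DF is {1} ∧ {0, 1, 2}, so the single MAS is {1} — exactly the kind of result the full-enumeration approach guarantees and a one-shot heuristic might miss on larger inputs.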
  • Item
    A Boolean function approach to feature selection in consistent decision information systems
    (PERGAMON-ELSEVIER SCIENCE LTD, 2011) Kahramanli, Sirzat; Hacibeyoglu, Mehmet; Arslan, Ahmet
    The goal of feature selection (FS) is to find a minimal subset (MS) R of the condition feature set C such that R has the same classification power as C, and then to reduce the dataset by discarding all features not contained in R. A dataset may have many MSs, and finding all of them is known to be an NP-hard problem. Therefore, when only one MS is required, a heuristic that finds one or a small number of the possible MSs is used. But in this case there is a risk that the best MSs will be overlooked. When the best solution of an FS task is required, the discernibility matrix (DM)-based approach, which generates all MSs, is used. There are basically two factors that often overflow the computer's memory and cause DM-based FS programs to fail. One is the large size of the discernibility functions (DFs) of large datasets; the other is the intractable space complexity of converting a DF to disjunctive normal form (DNF). But usually most of the terms of a DF, and most of the temporary results generated during the DF-to-DNF conversion, are redundant. Therefore, the minimized DF (DFmin) and the final DNF are usually much simpler than the original DF and the temporary results, respectively. Based on these facts, we developed a logic function-based feature selection method that derives DFmin from the truth table image of a dataset and converts it to DNF while preventing the occurrence of redundant terms. The proposed method requires no more memory than is required for constructing DFmin and the final DNF separately. Due to this property, it can process most of the datasets that cannot be processed by DM-based programs.
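    The two ideas this abstract relies on — minimizing the DF by the absorption law and suppressing redundant products during the DF-to-DNF multiplication — can be sketched as follows. This is a minimal illustration on a hypothetical three-attribute DF, not the paper's implementation.

    ```python
    def absorb(sets_):
        """Keep only minimal sets (absorption law: A + AB = A)."""
        sets_ = sorted(sets_, key=len)
        kept = []
        for s in sets_:
            if not any(k <= s for k in kept):
                kept.append(s)
        return kept

    def df_to_dnf(clauses):
        """Multiply a CNF (list of attribute sets) out into DNF, absorbing
        redundant products after every distribution step so that temporary
        results never grow beyond the surviving prime implicants."""
        clauses = absorb([frozenset(c) for c in clauses])   # DFmin
        products = [frozenset()]
        for clause in clauses:
            products = absorb([p | {a} for p in products for a in clause])
        return products

    # Hypothetical minimized DF: (a+b)(b+c)(a+c)
    print(sorted(map(sorted, df_to_dnf([{"a", "b"}, {"b", "c"}, {"a", "c"}]))))
    # → [['a', 'b'], ['a', 'c'], ['b', 'c']]
    ```

    Without the intermediate `absorb` calls the product list would contain redundant terms such as abc at each step; absorbing eagerly is what keeps the memory footprint close to that of the final DNF.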
  • Item
    A Hybrid Method for Fast Finding the Reduct with the Best Classification Accuracy
    (UNIV SUCEAVA, FAC ELECTRICAL ENG, 2013) Hacibeyoglu, Mehmet; Arslan, Ahmet; Kahramanli, Sirzat
    A dataset usually has many reducts, and finding all of them is known to be an NP-hard problem. On the other hand, different reducts of a dataset may provide different classification accuracies, and usually, for every dataset, there is only one reduct with the best classification accuracy. To obtain it, we first find the group of attributes that are dominant for the given dataset by using a decision tree algorithm. Second, we complete this group to reducts by using discernibility function techniques. Finally, we select the single reduct with the best classification accuracy by using data mining classification algorithms. The experimental results indicate that classification accuracy is improved by removing the irrelevant features and using the simplified attribute set derived by the proposed method.
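    The final selection stage — scoring candidate reducts and keeping the most accurate one — can be roughly sketched as below. The dataset and the candidate reducts are hypothetical, and leave-one-out 1-NN accuracy is a simple stand-in for the data-mining classifiers the paper uses.

    ```python
    # Toy dataset (hypothetical): rows of (attribute tuple, class label).
    data = [((0, 0, 1), "a"), ((0, 1, 1), "b"), ((1, 0, 0), "a"),
            ((1, 1, 0), "b"), ((1, 1, 1), "b")]

    def accuracy(attrs):
        """Leave-one-out 1-NN accuracy using only the given attribute indices
        (Hamming distance on the projected rows)."""
        attrs = list(attrs)
        hits = 0
        for i, (x, cx) in enumerate(data):
            # Nearest other row under the projection; ties broken by label.
            best = min((sum(x[a] != y[a] for a in attrs), cy)
                       for j, (y, cy) in enumerate(data) if j != i)
            hits += best[1] == cx
        return hits / len(data)

    # Suppose the decision-tree + completion stages produced these candidates:
    reducts = [{1}, {0, 2}]
    best = max(reducts, key=lambda r: accuracy(sorted(r)))
    print(best, accuracy(sorted(best)))
    ```

    Here attribute 1 alone determines the class perfectly, so the scoring step selects reduct {1} — the same "one reduct with the best accuracy" outcome the method targets.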
  • Item
    A logic method for efficient reduction of the space complexity of the attribute reduction problem
    (TUBITAK SCIENTIFIC & TECHNICAL RESEARCH COUNCIL TURKEY, 2011) Hacibeyoglu, Mehmet; Basciftci, Fatih; Kahramanli, Sirzat
    The goal of attribute reduction is to find a minimal subset (MS) R of the condition attribute set C of a dataset such that R has the same classification power as C. It was proved that the number of MSs for a dataset with n attributes may be as large as C(n, n/2), and generating all of them is an NP-hard problem. The main reason for this is the intractable space complexity of converting the discernibility function (DF) of a dataset to disjunctive normal form (DNF). Our analysis of many DF-to-DNF conversion processes showed that approximately (1 - 2/C(n, n/2)) x 100 % of the implicants generated in the DF-to-DNF process are redundant. We prevented their generation based on the Boolean inverse distribution law. Due to this property, the proposed method generates 0.5 x C(n, n/2) times fewer implicants than other Boolean logic-based attribute reduction methods. Hence, it can process most of the datasets that cannot be processed by other attribute reduction methods.
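    The C(n, n/2) worst-case bound and the redundancy fraction quoted above are easy to tabulate; `math.comb` computes the binomial coefficient directly. The sample values of n are arbitrary.

    ```python
    import math

    def max_reducts(n):
        """Worst-case number of minimal attribute subsets for n attributes:
        the central binomial coefficient C(n, floor(n/2))."""
        return math.comb(n, n // 2)

    for n in (4, 8, 16):
        m = max_reducts(n)
        # Redundant-implicant fraction from the abstract: (1 - 2/C(n, n/2)) x 100 %
        print(n, m, round((1 - 2 / m) * 100, 2))
    ```

    Even at n = 16 the bound is 12870 subsets, which shows why the redundant implicants — well over 99 % of everything generated — must be suppressed rather than produced and filtered.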
  • Item
    The logic transformations for reducing the complexity of the discernibility function-based attribute reduction problem
    (SPRINGER LONDON LTD, 2016) Hacibeyoglu, Mehmet; Salman, Mohammad Shukri; Selek, Murat; Kahramanli, Sirzat
    The basic solution for finding an optimal reduct is to generate all possible reducts and select the one that best meets the given criterion. Since this problem is NP-hard, most attribute reduction algorithms use heuristics to find a single reduct, at the risk of overlooking the best ones. There is a discernibility function (DF)-based approach that generates all reducts but may fail due to memory overflow even for datasets of well below medium dimensionality. In this study, we show that the main shortcoming of this approach is its excessively high space complexity. To overcome this, we first represent the DF of the attributes by a bit-matrix (BM). Second, we partition the BM into no more than sub-BMs (SBMs). Third, we convert each SBM into a subset of reducts while preventing the generation of redundant products, and finally, we unite the subsets into the complete set of reducts. Among the SBMs of a BM, the most complex one is the first SBM, with a space complexity not greater than the square root of that of the original BM. The proposed algorithm converts such an SBM with attributes into the subset of reducts with the worst-case space complexity of .
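    The bit-matrix idea — one machine word per DF clause, so that the absorption test becomes a single bitwise AND and compare — can be sketched like this. The clause data is hypothetical and the partitioning step is omitted.

    ```python
    def to_mask(attrs):
        """Pack a clause (a set of attribute indices) into one integer bitmask."""
        m = 0
        for a in attrs:
            m |= 1 << a
        return m

    def minimize(masks):
        """Drop every clause that is a superset of another clause
        (absorption law); q absorbs p iff q & p == q."""
        masks = sorted(set(masks), key=lambda m: bin(m).count("1"))
        kept = []
        for m in masks:
            if not any(k & m == k for k in kept):
                kept.append(m)
        return kept

    # Hypothetical bit-matrix rows for the DF (a+b)(a+b+c)(c):
    rows = [to_mask({0, 1}), to_mask({0, 1, 2}), to_mask({2})]
    print([bin(m) for m in minimize(rows)])  # (a+b+c) is absorbed by (c)
    ```

    Replacing set operations with word-wide bitwise operations is what makes the per-row work cheap enough for the repeated absorption passes the partitioned conversion performs.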


This site is protected by a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Selçuk Üniversitesi Library and Documentation Department, Konya, TÜRKİYE

DSpace 7.6.1, Powered by İdeal DSpace

DSpace software copyright © 2002-2025 LYRASIS
