Title: ATTRIBUTE REDUCTION BY PARTITIONING THE MINIMIZED DISCERNIBILITY FUNCTION
Authors: Kahramanli, Sirzat; Hacibeyoglu, Mehmet; Arslan, Ahmet
Type: Article
Date Issued: 2011
Date Available: 2020-03-26
ISSN: 1349-4198; 1349-418X
URI: https://hdl.handle.net/20.500.12395/26180
Volume: 7, Issue 5A, Pages: 2167-2186
Language: en
Access Rights: info:eu-repo/semantics/closedAccess
Keywords: Attribute reduction; Feature selection; Discernibility function; Functional partitioning
WOS ID: WOS:000290601100012 (Q3)

Abstract: The goal of attribute reduction is to reduce the problem size and the search space for learning algorithms. The basic solution to this problem is to generate all possible minimal attribute subsets (MASes) and choose one of them with minimal size. This can be done by constructing a kind of discernibility function (DF) from the dataset and converting it to disjunctive normal form (DNF). Since this conversion is NP-hard, heuristic algorithms are usually used for attribute reduction. However, these algorithms generate only one or a small number of the possible MASes, which is generally not sufficient for optimal dataset processing with respect to the simplicity of data representation and description, the speed and classification accuracy of data mining algorithms, and the required amount of memory. In this study, we propose an algorithm that finds all MASes by iteratively partitioning the DF so that the part converted to DNF in each iteration has a space complexity no higher than the square root of the worst-case space complexity of converting the whole DF to DNF. The number of iterations is always fewer than the number of attributes.
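The sketch below illustrates only the baseline DF-to-DNF approach that the abstract describes (and that the paper's partitioning algorithm improves upon), not the proposed method itself: a discernibility function is built as a conjunction of clauses, and distributing it into DNF with absorption yields every minimal attribute subset. All function names and the toy decision table are illustrative assumptions, not taken from the paper.

    # Minimal sketch of the baseline discernibility-function approach (assumed
    # names and data); the paper's partitioning algorithm is NOT implemented here.
    from itertools import combinations

    def build_discernibility_function(rows, n_attrs):
        """DF in CNF form: each clause is the set of attribute indices on which
        two objects with different decisions disagree."""
        clauses = set()
        for (x, dx), (y, dy) in combinations(rows, 2):
            if dx != dy:
                clause = frozenset(i for i in range(n_attrs) if x[i] != y[i])
                if clause:
                    clauses.add(clause)
        return clauses

    def absorb(terms):
        """Absorption law: keep only minimal sets so the DNF stays prime."""
        return {s for s in terms if not any(t < s for t in terms)}

    def cnf_to_dnf(clauses):
        """Distribute clauses one by one; worst case is exponential, which is why
        heuristic or partitioning methods are needed for large datasets."""
        dnf = {frozenset()}
        for clause in clauses:
            dnf = absorb({term | {a} for term in dnf for a in clause})
        return dnf

    # Toy decision table: ((attribute values), decision)
    rows = [((0, 1, 1), 0), ((1, 1, 0), 1), ((1, 0, 1), 0), ((0, 0, 0), 1)]
    reducts = cnf_to_dnf(build_discernibility_function(rows, n_attrs=3))
    print(sorted(sorted(r) for r in reducts))  # each set is a minimal attribute subset

On this toy table the DF is (a0 or a2) and (a1 or a2), so the DNF gives two minimal attribute subsets, {a2} and {a0, a1}; on realistic datasets the number of intermediate DNF terms can grow exponentially, which is the cost the paper's iterative partitioning of the DF is designed to bound.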