
Browse by author "Kahramanli, Sirzat"

Showing items 1 - 17 of 17
  • ATTRIBUTE REDUCTION BY PARTITIONING THE MINIMIZED DISCERNIBILITY FUNCTION
    (ICIC INTERNATIONAL, 2011) Kahramanli, Sirzat; Hacibeyoglu, Mehmet; Arslan, Ahmet
    The goal of attribute reduction is to reduce the problem size and search space for learning algorithms. The basic solution of this problem is to generate all possible minimal attribute subsets (MASes) and choose one of them with minimal size. This can be done by constructing a kind of discernibility function (DF) from the dataset and converting it to disjunctive normal form (DNF). Since this conversion is NP-hard, heuristic algorithms are usually used for attribute reduction. But these algorithms generate only one or a small number of the possible MASes, which is generally not sufficient for optimal processing of the dataset in such aspects as the simplicity of data representation and description, the speed and classification accuracy of the data mining algorithms, and the required amount of memory. In this study, we propose an algorithm that finds all MASes by iteratively partitioning the DF so that the part converted to DNF in each iteration has a space complexity no higher than the square root of the worst-case space complexity of converting the whole DF to DNF. The number of iterations is always fewer than the number of attributes.
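The DF-to-DNF idea described in this abstract can be sketched on a toy decision table (our own illustration; this does not implement the paper's partitioning scheme): build the discernibility function as a set of clauses, then convert it to DNF with absorption to enumerate the minimal attribute subsets.

```python
from itertools import combinations

def all_minimal_attribute_subsets(rows, n_attrs):
    """Enumerate the minimal attribute subsets (MASes) of a toy decision
    table via its discernibility function (DF).

    A hypothetical illustration of the DF-to-DNF conversion only; it does
    NOT implement the paper's partitioning scheme.
    """
    # Each row: (tuple of attribute values, decision label). A clause is
    # the set of attributes discerning two rows with different decisions.
    clauses = []
    for (x, dx), (y, dy) in combinations(rows, 2):
        if dx != dy:
            clause = frozenset(i for i in range(n_attrs) if x[i] != y[i])
            if clause:
                clauses.append(clause)
    # Convert the CNF (AND over clauses) to DNF: each product is a set of
    # attributes hitting every clause; absorption drops proper supersets.
    products = [frozenset()]
    for clause in clauses:
        expanded = {p | {a} for p in products for a in clause}
        products = [p for p in expanded if not any(q < p for q in expanded)]
    return sorted(sorted(p) for p in products)

# Toy table: three objects, three condition attributes, one decision.
rows = [((0, 0, 1), "a"), ((0, 1, 1), "b"), ((1, 1, 0), "a")]
print(all_minimal_attribute_subsets(rows, 3))  # → [[0, 1], [1, 2]]
```

Here the DF is x1 AND (x0 OR x2), whose DNF x0·x1 OR x1·x2 yields the two MASes {0, 1} and {1, 2}; the exponential blow-up this conversion can suffer on real datasets is what the paper's partitioning bounds.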
  • A Boolean function approach to feature selection in consistent decision information systems
    (PERGAMON-ELSEVIER SCIENCE LTD, 2011) Kahramanli, Sirzat; Hacibeyoglu, Mehmet; Arslan, Ahmet
    The goal of feature selection (FS) is to find a minimal subset (MS) R of the condition feature set C such that R has the same classification power as C, and then to reduce the dataset by discarding from it all features not contained in R. A dataset may have many MSs, and finding all of them is known to be an NP-hard problem. Therefore, when only one MS is required, some heuristic for finding only one or a small number of the possible MSs is used. But in this case there is a risk that the best MSs will be overlooked. When the best solution of an FS task is required, the discernibility matrix (DM)-based approach, which generates all MSs, is used. There are basically two factors that often cause DM-based FS programs to fail by overflowing the computer's memory: one is the large size of the discernibility functions (DFs) of large datasets; the other is the intractable space complexity of the conversion of a DF to disjunctive normal form (DNF). But usually most of the terms of the DF, and of the temporary results generated during the DF-to-DNF conversion, are redundant. Therefore, the minimized DF (DFmin) and the final DNF are usually much simpler than the original DF and the temporary results mentioned, respectively. Based on these facts, we developed a logic function-based feature selection method that derives DFmin from the truth table image of a dataset and converts it to DNF while preventing the occurrence of redundant terms. The proposed method requires no more memory than is required for constructing DFmin and the final DNF separately. Due to this property, it can process most of the datasets that cannot be processed by DM-based programs. (C) 2011 Elsevier Ltd. All rights reserved.
  • BOOSTING THE PERFORMANCE OF PSEUDO AMINO ACID COMPOSITION
    (AMER SOC MECHANICAL ENGINEERS, 2011) Goktepe, Yunus Emre; Ilhan, Ilhan; Kahramanli, Sirzat
    Protein-protein interactions are critical in coordinating various cellular processes, and they help in understanding protein function and in drug design. Extracting protein features from amino acid sequences is important for studying protein-protein interactions, and various feature extraction approaches for proteins have been introduced up to the present. PseAAC is one of the most widely used protein feature extractors. In this work we propose a new approach to calculating amino acid composition values. The purpose of our method is to adjust the weights of the composition values during the feature extraction process, so that larger composition values contribute more to the prediction function than smaller ones. Our experimental results showed that our method outperforms PseAAC.
  • Determination of the fatigue state of the material under dynamic loading by thermal video image processing
    (IEEE, 2007) Selek, Murat; Kahramanli, Sirzat
    In this study we propose an infrared thermography and artificial neural networks (ANNs) based method for determining the fatigue state of a steel specimen under bending fatigue. We use thermal images (TIs) of the specimen under bending fatigue, taken by a FLIR E45 infrared camera at 1 Hz. By processing the TIs using ANNs, we obtain the temperatures of the characteristic spots of the specimen surface. Based on these temperatures, we fit a curve from which we determine the temperatures of all spots of the specimen surface. The region of the hottest spots of the TI obtained by this method allows us to keep the probable crack region of the tested specimen under observation until it fractures.
  • Effect of Hardness upon Temperature Rise of Steel Specimens during Bending Fatigue by Using Infrared Thermography
    (TRANS TECH PUBLICATIONS LTD, 2011) Selek, Murat; Sahin, Omer Sinan; Kahramanli, Sirzat
    In this study, the effects of hardness on the temperature increase of ST 37 steel during fatigue loading were investigated. Specimens made of ST 37 steel were subjected to heat treatment to obtain different hardness levels. The specimens were subjected to reverse bending fatigue loading and observed with an infrared (IR) camera during the test. The thermal images were recorded by a FLIR E45 IR camera and then transferred to an image processing program developed in MATLAB. After image processing, the thermal values used to detect the temperature rise of the surface of the steel specimen under fatigue loading were obtained. During fatigue, the material is subjected to a strain energy input, which results in plastic and/or elastic deformation. This results in an increase of temperature within the material. Energy conservation requires that the generated heat shows itself as heat transfer by conduction, convection and radiation, and as an increase of internal energy. Besides, if the material has undergone plastic deformation, an additional term which accounts for this effect should be included in the energy conservation equation. In order to observe the effect of plastic deformation upon the temperature increase of the material, the capacity for plastic deformation was changed through the change of hardness, and the thermal variations during fatigue were investigated.
  • Fast computation of the prime implicants by exact direct-cover algorithm based on the new partial ordering operation rule
    (ELSEVIER SCI LTD, 2011) Basciftci, Fatih; Kahramanli, Sirzat
    In this study, a novel OFF-set based direct-cover Exact Minimization Algorithm (EMA) is proposed for single-output Boolean functions represented in a sum-of-products form. To obtain the complete set of prime implicants covering a given Target Minterm (ON-minterm), the proposed method uses OFF-cubes (OFF-minterms) expanded by this Target Minterm. The amount of temporary results produced by this method does not exceed the size of the OFF-set. To achieve the goal of this study, which is faster computation, logic operations were used instead of the standard operations: the expansion of OFF-cubes, the commutative absorption operations and the intersection operations are all realized by logic operations. The proposed minimization method was tested on several classes of benchmarks and compared with the ESPRESSO algorithm. The results show that the proposed algorithm obtains more accurate results faster than ESPRESSO does. (C) 2011 Elsevier Ltd. All rights reserved.
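The notion of "all prime implicants covering a given ON-minterm" can be made concrete with a brute-force sketch (our own illustration under stated assumptions; the paper's OFF-cube expansion and bitwise speedups are not reproduced here): a cube covering the target is valid if it still differs from every OFF-minterm on some fixed literal, and it is prime if no cube with fewer fixed literals is valid.

```python
from itertools import combinations

def prime_implicants_covering(target, off_set, n):
    """Find all prime implicants of an n-variable Boolean function that
    cover the ON-minterm `target`, given the function's OFF-set.

    A brute-force sketch of the direct-cover idea under our own
    assumptions; not the authors' exact algorithm.
    """
    # A cube is the set of literal positions kept fixed at target's
    # values; positions outside the set are don't-cares.
    def excludes_off_set(kept):
        # the cube must differ from every OFF-minterm on some kept bit
        return all(any(target[i] != off[i] for i in kept) for off in off_set)

    primes = []
    for size in range(n + 1):  # fewer fixed literals first → larger cubes
        for kept in map(set, combinations(range(n), size)):
            # skip cubes contained in an already-found prime implicant
            if excludes_off_set(kept) and not any(p <= kept for p in primes):
                primes.append(kept)
    return [sorted(p) for p in primes]

# f(x0, x1, x2): OFF-set {000, 011}; cover the ON-minterm 101.
print(prime_implicants_covering((1, 0, 1), [(0, 0, 0), (0, 1, 1)], 3))
# → [[0], [1, 2]]  i.e. the cubes 1-- and -01
```

The exhaustive search over literal subsets is exponential; replacing it with operations driven by the OFF-cubes is precisely the speedup the abstract claims.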
  • A Hybrid Method for Fast Finding the Reduct with the Best Classification Accuracy
    (UNIV SUCEAVA, FAC ELECTRICAL ENG, 2013) Hacibeyoglu, Mehmet; Arslan, Ahmet; Kahramanli, Sirzat
    Usually a dataset has many reducts, and finding all of them is known to be an NP-hard problem. On the other hand, different reducts of a dataset may provide different classification accuracies, and usually, for every dataset, there is only one reduct with the best classification accuracy. To obtain this best one, we first obtain the group of attributes that are dominant for the given dataset by using a decision tree algorithm. Second, we complete this group up to reducts by using discernibility function techniques. Finally, we select the single reduct with the best classification accuracy by using data mining classification algorithms. The experimental results on several datasets indicate that the classification accuracy is improved by removing the irrelevant features and using the simplified attribute set derived by the proposed method.
  • Investigation of Bending Fatigue of Composite Plates by Using Infrared Thermography
    (TRANS TECH PUBLICATIONS LTD, 2011) Sahin, Omer Sinan; Selek, Murat; Kahramanli, Sirzat
    In this study, the temperature rise of composite plates with a hole during fatigue loading was investigated. Woven glass/epoxy composite plates with eight plies were subjected to bending fatigue loading and observed with a thermal camera during the test. Previous works showed that heat generation can occur due to internal friction and damage formation. Therefore, a thermographic infrared imaging system was used to detect the temperature rise of the composite specimens. During the tests, the thermal images of the specimens were recorded by a thermal camera and then transferred to an image processing program developed in MATLAB. From these thermal images, the spot temperatures of the specimen were obtained by using artificial neural networks. The obtained temperatures show local increases at places where the heat generation is localized. These regions are considered to be the probable damage initiation sites. It is shown in this study that the most probable damage initiation zones in a woven glass/epoxy composite material can be detected by an infrared thermography (IRT) approach prior to failure.
  • A logic method for efficient reduction of the space complexity of the attribute reduction problem
    (TUBITAK SCIENTIFIC & TECHNICAL RESEARCH COUNCIL TURKEY, 2011) Hacibeyoglu, Mehmet; Basciftci, Fatih; Kahramanli, Sirzat
    The goal of attribute reduction is to find a minimal subset (MS) R of the condition attribute set C of a dataset such that R has the same classification power as C. It was proved that the number of MSs for a dataset with n attributes may be as large as C(n, n/2), and generating all of them is an NP-hard problem. The main reason for this is the intractable space complexity of the conversion of the discernibility function (DF) of a dataset to the disjunctive normal form (DNF). Our analysis of many DF-to-DNF conversion processes showed that approximately (1 - 2/C(n, n/2)) × 100% of the implicants generated in the DF-to-DNF process are redundant. We prevented their generation based on the Boolean inverse distribution law. Due to this property, the proposed method generates 0.5 × C(n, n/2) times fewer implicants than other Boolean logic-based attribute reduction methods. Hence, it can process most of the datasets that cannot be processed by other attribute reduction methods.
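To make the bound in this abstract concrete: the worst-case number of minimal subsets is the central binomial coefficient C(n, n/2), which grows very quickly with the number of attributes n.

```python
from math import comb

# Worst-case number of minimal attribute subsets for a dataset with
# n condition attributes: the central binomial coefficient C(n, n/2).
for n in (4, 8, 16, 20):
    print(n, comb(n, n // 2))
# → 4 6
#   8 70
#   16 12870
#   20 184756
```

Even a 20-attribute dataset can thus have hundreds of thousands of minimal subsets, which is why preventing redundant implicants matters.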
  • The logic transformations for reducing the complexity of the discernibility function-based attribute reduction problem
    (SPRINGER LONDON LTD, 2016) Hacibeyoglu, Mehmet; Salman, Mohammad Shukri; Selek, Murat; Kahramanli, Sirzat
    The basic solution for locating an optimal reduct is to generate all possible reducts and select the one that best meets the given criterion. Since this problem is NP-hard, most attribute reduction algorithms use heuristics to find a single reduct, at the risk of overlooking the best ones. There is a discernibility function (DF)-based approach that generates all reducts but may fail due to memory overflows even for datasets of modest dimensionality. In this study, we show that the main shortcoming of this approach is its excessively high space complexity. To overcome this, we first represent a DF of attributes by a bit-matrix (BM). Second, we partition the BM into a bounded number of sub-BMs (SBMs). Third, we convert each SBM into a subset of reducts while preventing the generation of redundant products, and finally, we unite the subsets into a complete set of reducts. Among the SBMs of a BM, the most complex one is the first SBM, with a space complexity not greater than the square root of that of the original BM. The proposed algorithm converts each such SBM into its subset of reducts with a correspondingly bounded worst-case space complexity.
  • A new method based on cube algebra for the simplification of logic functions
    (SPRINGER HEIDELBERG, 2007) Kahramanli, Sirzat; Guenes, Salih; Sahan, Seral; Basciftci, Fatih
    In this study, an Off-set based direct-cover minimization method for single-output logic functions represented in a sum-of-products form is proposed. To find the sufficient set of prime implicants including a given On-cube with the existing direct-cover minimization methods, this cube is expanded one coordinate at a time. The correctness of each expansion is controlled by the way in which the cube being expanded intersects with all of the K < 2^n Off-cubes. If we take into consideration that expanding one cube has polynomial complexity, then the total complexity of this approach can be expressed as O(n^p)·O(2^n), that is, the product of polynomial and exponential complexities. To obtain the complete set of prime implicants including the given On-cube, the proposed method uses Off-cubes expanded by this On-cube. The complexity of this operation is approximately equivalent to the complexity of an intersection of one On-cube expanded by existing methods for one coordinate. Therefore, the complexity of calculating the complete set of prime implicants including a given On-cube is reduced by a factor of approximately O(n^p). The method is tested on several different kinds of problems and on standard MCNC benchmarks, and the results are compared with ESPRESSO.
  • A NOVEL APPROACH FOR FAST COVERING THE BOOLEAN SETS
    (WORLD SCIENTIFIC AND ENGINEERING ACAD AND SOC, 2008) Basciftci, Fatih; Kahramanli, Sirzat
    In this study we propose a new method for iteratively covering a given Boolean data set by its prime implicants, identified one at a time. In contrast to existing set covering methods, whose time complexity is NP in the size of the set, our method is realized by procedures of linear complexity, so its relative efficiency increases rapidly with the size of the set. Our method can be useful in all fields related to the processing of Boolean data sets, such as logic synthesis, image processing, data compression, artificial intelligence and many others.
  • An off-cubes expanding approach to the problem of separate determination of the essential prime implicants of the single-output Boolean functions
    (IEEE, 2007) Basciftci, Fatih; Kahramanli, Sirzat
    The goal of this study is to avoid the excessive amount of temporary results produced during the minimization of two-level single-output Boolean functions of many variables. In this paper we propose an Off-set based direct-cover minimization method that uses a single On-cube oriented expansion of the Off-cubes, by which the essential prime implicants are identified one by one and used to iteratively cover the function being minimized. The amount of temporary results produced by this method does not exceed the size of the Off-set. The proposed algorithm is up to 3 times faster and uses significantly less memory than the well-known ESPRESSO.
  • A REDUCED OFFSET BASED METHOD FOR FAST COMPUTATION OF THE PRIME IMPLICANTS COVERING A GIVEN CUBE
    (ICIC INTERNATIONAL, 2012) Basciftci, Fatih; Kahramanli, Sirzat; Selek, Murat
    In order to generate the prime implicants for a cube of a logic function, most logic minimization methods expand this cube by removing literals from it one at a time. However, there is an intractable problem of determining the order of the literals to be removed from the cube and checking whether a tentative literal removal is acceptable. To avoid this problem, the reduced offset method was developed. This method uses the positional-cube notation, where every reduced off-cube of an n-variable function is represented by two n-bit strings. Unfortunately, the conversion of such reduced cubes to the associated prime implicants has a time complexity worse than exponential. To avoid this problem, in this study, a method representing every reduced cube by a single n-bit string, together with a set of bitwise operations to be performed on such strings, is proposed. The theoretical and experimental estimations show that this approach can significantly improve the quality of results and reduce the space and time complexities of the logic minimization process by a factor of 2 and by up to 3.5 times, respectively.
  • THE STRUCTURE AND ADVANTAGES OF DIGITAL TRAINING SET FOR COMPUTER ENGINEERING
    (WORLD SCIENTIFIC AND ENGINEERING ACAD AND SOC, 2009) Tezel, Guelay; Kahramanli, Sirzat
    The knowledge provided in theoretical computer engineering education needs to be tested in the laboratory, which contributes to familiarity with tools, equipment and measurement methods. During laboratory classes, using purpose-built experimental sets reduces cost and learning time while developing the design ability of students. In electronics and computer education, the theoretical courses are supported by digital design and microprocessor courses, where the fundamentals of computer hardware and digital control technologies are taught. In the present study, the developed Digital Training Set (DTS) with the 8031 microcontroller, and its advantages for the experimental teaching and learning of basic digital circuit design and microprocessor principles in the Computer Engineering Department of Selcuk University, are presented.
  • TAG SNP SELECTION USING CLONAL SELECTION ALGORITHM BASED ON SUPPORT VECTOR MACHINE
    (AMER SOC MECHANICAL ENGINEERS, 2011) Ilhan, Ilhan; Goktepe, Yunus Emre; Ozcan, Cengiz; Kahramanli, Sirzat
    Investigations of genetic variants associated with complex diseases are important for improvements in diagnosis and treatment. SNPs (Single Nucleotide Polymorphisms), which comprise most of the millions of changes in the human genome, are promising tools for disease-gene association studies. On the other hand, these studies are limited by the cost of genotyping a tremendous number of SNPs. Therefore, it is essential to identify a subset of tag SNPs that represents the rest of the SNPs, and the space of candidate subsets should be searched as effectively as possible. In this study, a new method called CLONTagger is introduced, in which a Support Vector Machine (SVM) is used for SNP prediction and the Clonal Selection Algorithm (CLONALG) for tag SNP selection. The proposed method was compared with current tag SNP selection algorithms from the literature on different datasets. Experimental results demonstrate that it can identify tag SNPs with better prediction accuracy than the other methods.
  • Using Chaotic System in Encryption
    (SPRINGER-VERLAG BERLIN, 2010) Findik, Oguz; Kahramanli, Sirzat
    In this paper, chaotic systems and the RSA encryption algorithm are combined in order to develop an encryption algorithm that meets modern standards. E. Lorenz's weather-forecast equations, which are used to simulate non-linear systems, are utilized to create a chaotic map, and these equations can be used to generate random numbers. In order to meet up-to-date standards and support both online and offline use, a new encryption technique that combines chaotic systems and the RSA encryption algorithm has been developed. The combination of the RSA algorithm and chaotic systems constitutes the resulting encryption system.
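As an illustration of the chaotic-map idea only (the initial state, step size and byte extraction below are our own assumptions, and the paper's RSA combination is not reproduced), the Lorenz equations can be integrated with Euler steps and the trajectory quantized into a keystream:

```python
def lorenz_keystream(n_bytes, x=0.1, y=0.0, z=0.0, dt=0.01,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Derive a pseudo-random keystream from the Lorenz equations.

    A minimal sketch under our own assumptions about parameters and
    byte extraction; not the paper's actual scheme.
    """
    def step(x, y, z):
        # one explicit Euler step of the Lorenz system
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    for _ in range(200):          # burn-in: let the orbit reach the attractor
        x, y, z = step(x, y, z)
    out = []
    while len(out) < n_bytes:
        x, y, z = step(x, y, z)
        out.append(int(abs(x) * 1e6) % 256)   # quantize x into a byte
    return out

stream = lorenz_keystream(5)
plaintext = b"hello"
cipher = bytes(p ^ k for p, k in zip(plaintext, stream))
# XOR with the same keystream recovers the plaintext
print(bytes(c ^ k for c, k in zip(cipher, stream)))  # → b'hello'
```

Because the keystream is fully determined by the initial state, both parties sharing that state can reproduce it; raw chaotic quantization like this is not cryptographically vetted on its own, which is one motivation for combining it with RSA.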


This site is protected under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Selçuk University Library and Documentation Department, Konya, TÜRKİYE

DSpace 7.6.1, Powered by İdeal DSpace

DSpace software copyright © 2002-2025 LYRASIS
