Yazar "Yilmaz, Burak" seçeneğine göre listele
Showing 1 - 4 of 4
Item
Classification of EEG Signals Using Spiking Neural Networks (IEEE, 2018)
Tahtirvanci, Aykut; Durdu, Akif; Yilmaz, Burak
In signal processing applications of conventional artificial neural networks, the processing time of the data is high and the accuracy rates are not good enough; time-dependent processing is also not possible. In this study, classification of EEG signals was performed using an artificial neural network with the characteristics of spiking neural networks. Successful results were obtained on large data sets. Moreover, by using Eugene M. Izhikevich's neuron model as the spiking neural network model, the EEG signals were processed in a biologically realistic manner.

Item
Contrast enhancement using linear image combinations algorithm (CEULICA) for enhancing brain magnetic resonance images (TUBITAK SCIENTIFIC & TECHNICAL RESEARCH COUNCIL TURKEY, 2014)
Yilmaz, Burak; Ozbay, Yuksel
Brain magnetic resonance imaging (MRI) images provide physicians with important information about brain diseases. Morphological alterations in brain tissues indicate the probable existence of a disease in many cases. Proper estimation of these tissues, measuring their sizes, and analyzing their image patterns are parts of the diagnosis process. Therefore, the interpretability and perceptibility of the MRI image is valuable for physicians. In this paper, a new image contrast enhancement algorithm based on linear combinations is presented. The proposed algorithm focuses on improving the interpretability and perceptibility of the image information. An MRI image is presented to the algorithm, which generates a set of images from it and then combines this image set linearly to produce a final image. The linear combination coefficients are generated using the artificial bee colony algorithm. The algorithm is evaluated with 4 different global image enhancement evaluation techniques: contrast improvement ratio (CIR), enhancement measurement error (EME), absolute mean brightness error (AMBE), and peak signal-to-noise ratio (PSNR). During the evaluation process, 2 case studies are performed. The first case study uses 3 different image sets (T1, T2, and proton density) obtained from the Brainweb simulated MRI database. The algorithm shows the best performance on the T1 image set, with average scores of 5.844 CIR, 6.217 EME, 15.045 AMBE, and 22.150 dB PSNR. The second case study uses 3 different image sets (T1 fast low-angle shot sequence, T1 magnetization-prepared rapid acquired gradient-echoes (MP-RAGE), and T2) obtained from the Multimedia Digital Archiving System public community database. The algorithm performs best on the T1-MP-RAGE images, with average scores of 6.983 CIR, 17.326 EME, 3.514 AMBE, and 30.157 dB PSNR. In addition, the algorithm can be used for classification tasks with proper linear combination coefficients, for instance segmentation of white matter regions in brain MRI images.
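The first record above classifies EEG signals with a spiking neural network built on Izhikevich's neuron model. As a point of reference only, here is a minimal sketch of that standard model; the parameter values (a, b, c, d for a regular-spiking neuron) and the Euler time step are textbook defaults, not values taken from the paper.

    # Minimal sketch of the Izhikevich spiking neuron model (standard equations;
    # parameters are common "regular spiking" defaults, assumed for illustration).
    import numpy as np

    def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
        """Simulate one Izhikevich neuron driven by an input current array I."""
        v, u = c, b * c                      # membrane potential and recovery variable
        spikes, trace = [], []
        for t, i_t in enumerate(I):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
            u += dt * a * (b * v - u)
            if v >= 30.0:                    # spike: record and reset
                spikes.append(t)
                v, u = c, u + d
            trace.append(v)
        return np.array(trace), spikes

    # Example: constant input of 10 for 1000 steps produces regular spiking
    trace, spikes = izhikevich(np.full(1000, 10.0))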
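The CEULICA record above enhances an MRI image by linearly combining a set of derived images and scoring the result with metrics such as AMBE and PSNR. The sketch below shows only that linear combination and two of the quoted metrics; the coefficient search (an artificial bee colony algorithm in the paper) is not reproduced, and fixed weights stand in for it.

    # Illustrative linear image combination plus AMBE and PSNR scoring
    # (assumed helper, not code from the paper).
    import numpy as np

    def linear_combination(images, weights):
        """Weighted sum of equally sized grayscale images, clipped to 8-bit range."""
        combined = sum(w * img.astype(float) for w, img in zip(weights, images))
        return np.clip(combined, 0, 255)

    def ambe(original, enhanced):
        """Absolute mean brightness error between input and output images."""
        return abs(original.mean() - enhanced.mean())

    def psnr(original, enhanced, peak=255.0):
        """Peak signal-to-noise ratio in dB (undefined for identical images)."""
        mse = np.mean((original.astype(float) - enhanced.astype(float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)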
Item
Detection of microcalcification in digitized mammograms with multistable cellular neural networks using a new image enhancement method: automated lesion intensity enhancer (ALIE) (TUBITAK SCIENTIFIC & TECHNICAL RESEARCH COUNCIL TURKEY, 2015)
Civcik, Levent; Yilmaz, Burak; Ozbay, Yuksel; Emlik, Ganime Dilek
Microcalcification detection is a very important issue in the early diagnosis of breast cancer. Physicians generally use mammogram images for this task; however, analyzing these images can be difficult because of problems such as high brightness values, dense tissue, noise, and insufficient contrast. In this paper, we present a novel three-step technique for microcalcification detection. The first step removes the pectoral muscle and unnecessary parts from the mammogram images using cellular neural networks (CNNs), which is a novel use of CNNs for this task. In the second step, we present a novel image enhancement technique focused on enhancing lesion intensities, called the automated lesion intensity enhancer (ALIE). In the third step, we use a special CNN structure named the multistable CNN. Applying the combination of these methods to the MIAS database, we achieve 82.0% accuracy, 90.9% sensitivity, and 52.2% specificity.

Item
A new method for skull stripping in brain MRI using multistable cellular neural networks (SPRINGER LONDON LTD, 2018)
Yilmaz, Burak; Durdu, Akif; Emlik, Ganime Dilek
This study proposes a new method for detecting the brain region in MRI data, a task generally called "skull stripping" in the literature. The algorithm is developed using cellular neural networks (CNNs) and multistable CNN structures, and it also includes a contrast enhancement and noise reduction algorithm. The algorithm is named the multistable cellular neural network on MRI for skull stripping (mCNN-MRI-SS). Three different case studies are performed to measure the success of the algorithm, and a fourth case study evaluates the supporting algorithm, CEULICA. The first two evaluations use the well-known MIDAS-NAMIC and Brainweb databases, which are properly organized, Talairach-compatible databases. The third database was obtained from the research and application hospital of Necmettin Erbakan University Meram Faculty of Medicine; these MRI data were not Talairach-compatible and were more sparsely sampled. The algorithm achieved mean values of 0.595 Jaccard, 0.744 Dice, 0.0344 TPF, and 0.383 TNF with the Brainweb T1-weighted images, and 0.837 Jaccard, 0.898 Dice, 0.0124 TPF, and 0.1511 TNF with the MIDAS-NAMIC T2-weighted images. With the locally obtained data it achieved mean values of 0.8297 Jaccard, 0.9012 Dice, 0.0951 TPF, and 0.1225 TNF, the best scores among the compared algorithms. As a result, it can be claimed that the algorithm performs best with non-Talairach-compatible MRI data because it operates at the cellular level.
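The microcalcification record above reports accuracy, sensitivity, and specificity on the MIAS database. These follow directly from a confusion matrix; a minimal helper, given here only to make the definitions concrete:

    # Standard detection scores from confusion-matrix counts (illustrative only).
    def detection_scores(tp, tn, fp, fn):
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)   # true positive rate
        specificity = tn / (tn + fp)   # true negative rate
        return accuracy, sensitivity, specificity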
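The skull-stripping record above reports Jaccard and Dice mean values for each database. As a reference for those overlap measures, here is a small sketch that computes both from two binary brain masks; it is an assumed helper for illustration, not code from the paper.

    # Jaccard and Dice coefficients for two boolean segmentation masks.
    import numpy as np

    def overlap_scores(mask_a, mask_b):
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        jaccard = intersection / union                    # |A ∩ B| / |A ∪ B|
        dice = 2.0 * intersection / (a.sum() + b.sum())   # 2|A ∩ B| / (|A| + |B|)
        return jaccard, dice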