Mühendislik Fakültesi / Faculty of Engineering
Permanent URI for this collection: https://hdl.handle.net/11727/1401
Search Results (8 items)
Item: A Systematic Review of Transfer Learning-Based Approaches for Diabetic Retinopathy Detection (2023)
Authors: Oltu, Burcu; Karaca, Busra Kubra; Erdem, Hamit; Ozgur, Atilla (0000-0002-9237-8347; 0000-0003-1704-1581; AAD-6546-2019)
Diabetic retinopathy (DR), which can cause severe vision loss in people with diabetes, has become an alarming issue worldwide. Early and accurate detection of DR is necessary to prevent its progression and reduce the risk of blindness. Recently, many approaches for DR detection have been proposed in the literature; among them, deep neural networks (DNNs), especially convolutional neural network (CNN) models, have become the most frequently proposed approach. However, designing and training new CNN architectures from scratch is a troublesome and labor-intensive task, particularly for medical images, and requires training a tremendous number of parameters. Therefore, transfer learning with pre-trained models has become more prevalent in the last few years. Accordingly, this study reviews 43 publications on DNN- and transfer learning-based DR detection published between 2016 and 2021. The reviewed papers are summarized in 4 figures and 10 tables that present detailed information about 29 pre-trained CNN models, 13 DR data sets, and standard performance metrics.
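As an illustration of the transfer learning workflow that the reviewed papers build on, the following is a minimal sketch (not drawn from any specific reviewed study) of fine-tuning an ImageNet pre-trained CNN for DR grading with Keras; the ResNet50 backbone, five-class head, input size, and learning rate are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow import keras

# ImageNet pre-trained ResNet50 used as a frozen feature extractor (transfer learning).
base = keras.applications.ResNet50(weights="imagenet", include_top=False,
                                   pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # only the new classification head is trained

# Hypothetical 5-class head, e.g. for the commonly used 0-4 DR severity grades.
model = keras.Sequential([
    base,
    keras.layers.Dropout(0.5),
    keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then use a labelled fundus image dataset, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```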
Item: Automated Fracture Detection in the Ulna and Radius Using Deep Learning on Upper Extremity Radiographs (2023)
Author: Erdas, Cagatay Berke (0000-0003-3467-9923; 37750264)
Objectives: This study aimed to detect single or multiple fractures of the ulna or radius using deep learning techniques applied to upper-extremity radiographs.
Materials and methods: The data set used in this retrospective study consisted of different types of upper-extremity radiographs obtained from an open-source dataset, with 4,480 images with fractures and 4,383 images without fractures; all fractures involved the ulna or radius. The proposed method comprises two stages. The first stage, preprocessing, removes the radiographic background and then eliminates non-bone tissue. In the second stage, the images containing only bone tissue are classified with deep learning models such as RegNetX006, EfficientNet B0, and InceptionResNetV2 to determine whether one or more fractures of the ulna or radius are present. To measure the performance of the proposed method, the raw images, the images with the background deleted, and the images with non-bone tissue removed were classified separately using the RegNetX006, EfficientNet B0, and InceptionResNetV2 models. Performance was assessed by accuracy, F1 score, Matthews correlation coefficient, area under the receiver operating characteristic curve, sensitivity, specificity, and precision using 10-fold cross-validation, a widely accepted technique in statistical analysis.
Results: The best classification performance was obtained with the proposed preprocessing and the RegNetX006 architecture: accuracy 0.9921, F1 score 0.9918, Matthews correlation coefficient 0.9842, area under the curve 0.9918, sensitivity 0.9974, specificity 0.9863, and precision 0.9923.
Conclusion: With the proposed preprocessing, fractures of the ulna and radius can be detected by artificial intelligence.

Item: Classification of Human Movements by Using Kinect Sensor (2023)
Authors: Acis, Busra; Guney, Selda (https://orcid.org/0000-0001-6683-0005; https://orcid.org/0000-0002-0573-1326; HDM-2942-2022)
In recent years, studies have been carried out to classify human movements in many areas, such as health and safety, and image processing methods are increasingly used for this task. With the help of learning-based algorithms, chiefly machine learning and deep learning, human posture can be recognized in images obtained by various imaging methods. The use of sensors that can detect human joints has also increased. The Kinect sensor developed by Microsoft is one of the most frequently used, because it is not wearable, detects joints with infrared light, and transfers this information directly to a computer over a USB connection. This study used CAD60, a dataset available in the literature that contains real-time human posture information and images recorded with a Microsoft Kinect sensor, covering different movements and postures performed by different people. Several classification algorithms and architectures were implemented in MATLAB and their performances were compared. On the raw data, the cosine k-nearest neighbor method, one of the machine learning methods, achieved a classification accuracy of 72.60%, which feature selection improved to 74.18%. With the method proposed in this study, support vector machine classification of features extracted by a long short-term memory (LSTM) deep network, the accuracy was increased to 98.95% (a sketch of this pipeline is given below). In this way, the best method for classifying human posture was investigated and a method was proposed and compared with the literature.
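The LSTM-feature-extraction-plus-SVM pipeline summarized above is described at a high level only; the sketch below shows one plausible way to wire it up in Python (Keras + scikit-learn) rather than in MATLAB as used in the study. The sequence length, feature dimension, layer sizes, class count, and the synthetic stand-in data are all assumptions.

```python
import numpy as np
from tensorflow import keras
from sklearn.svm import SVC

# Synthetic stand-in for Kinect skeleton sequences:
# 200 clips, 30 frames each, 45 values per frame (e.g. 15 joints x 3 coordinates).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30, 45)).astype("float32")
y = rng.integers(0, 4, size=200)  # 4 hypothetical activity classes

# Train an LSTM classifier end to end, then reuse it as a feature extractor.
inputs = keras.Input(shape=(30, 45))
features = keras.layers.LSTM(64)(inputs)
outputs = keras.layers.Dense(4, activation="softmax")(features)
lstm_clf = keras.Model(inputs, outputs)
lstm_clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
lstm_clf.fit(X, y, epochs=5, batch_size=16, verbose=0)

# Take the 64-dimensional LSTM output as the feature vector for each clip.
extractor = keras.Model(inputs, features)
feats = extractor.predict(X, verbose=0)

# Classify the LSTM features with an SVM, as in the pipeline described above.
svm = SVC(kernel="rbf")
svm.fit(feats, y)
print("training accuracy:", svm.score(feats, y))
```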
Item: Early and Late Level Fusion of Deep Convolutional Neural Networks for Visual Concept Recognition (2016)
Authors: Ergun, Hilal; Akyuz, Yusuf Caglar; Sert, Mustafa; Liu, Jianquan (0000-0002-7056-4245; B-1296-2011; D-3080-2015; AAB-8673-2019)
Visual concept recognition has been an active research field over the last decade, and deep learning architectures are showing great promise in various computer vision domains, including image classification, object detection, event detection, and action recognition in videos. In this study, we investigate various aspects of convolutional neural networks for visual concept recognition. We analyze recent studies and different network architectures in terms of both running time and accuracy. In our proposed visual concept recognition system, we first discuss important properties of the popular convolutional network architectures under consideration, and then describe our method for feature extraction at different levels of abstraction. We present extensive empirical information along with best practices for big-data practitioners, and use these best practices to propose efficient fusion mechanisms for both single and multiple network models. We report state-of-the-art results on benchmark datasets while keeping computational costs low, and our results show that this performance can be reached without extensive data augmentation.

Item: Human Activity Recognition by Using Different Deep Learning Approaches for Wearable Sensors (2021)
Authors: Erdas, Cagatay Berke; Guney, Selda (0000-0003-3467-9923)
With the spread of wearable sensors, solutions that recognize activities from sensor data have become widespread, and activity recognition with wearable sensors such as accelerometers, gyroscopes, and magnetometers has been studied in recent years. Unlike other applications in the literature, this study applies deep learning algorithms, namely convolutional neural networks, convolutional LSTMs, and 3D convolutional neural networks fed by a convolutional LSTM, to the human activity recognition task using data obtained from an accelerometer sensor. For this purpose, frames were formed from consecutive raw accelerometer samples of the same activity, with the aim of capturing the pattern inherent in the activity while preserving the continuous structure of the movement (see the windowing sketch below).
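The frame-forming step described above, grouping consecutive raw accelerometer samples of one activity into fixed-length windows, can be sketched as follows; the window length, step size, and the random stand-in signal are assumptions rather than values from the paper.

```python
import numpy as np

def make_frames(signal: np.ndarray, frame_len: int, step: int) -> np.ndarray:
    """Slice a (n_samples, n_axes) sensor stream into overlapping fixed-length frames."""
    starts = range(0, len(signal) - frame_len + 1, step)
    return np.stack([signal[s:s + frame_len] for s in starts])

# Stand-in for a 3-axis accelerometer recording of a single activity (1000 samples).
acc = np.random.randn(1000, 3)

# e.g. 128-sample frames with 50% overlap; each frame then becomes one input
# to a CNN / ConvLSTM classifier as in the study above.
frames = make_frames(acc, frame_len=128, step=64)
print(frames.shape)  # (14, 128, 3)
```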
Item: Classification of Canine Maturity and Bone Fracture Time Based on X-Ray Images of Long Bones (2021)
Authors: Ergun, Gulnur Begum; Guney, Selda (0000-0002-0573-1326; 0000-0001-8469-5484)
Veterinarians use X-rays in almost all examinations of clinical fractures to determine the appropriate treatment. Before treatment, they need to know the date of the injury, the type of the broken bone, and the age of the dog, since the maturity of the dog and the time of the fracture affect the approach to the fracture site, the surgical procedure, and the materials needed. This comprehensive study has three main goals: determining the maturity of the dogs (Task 1), dating fractures (Task 2), and detecting fractures of the long bones in dogs (Task 3). The most popular deep neural networks, AlexNet, ResNet-50, and GoogLeNet, are used, and support vector machines (SVM), one of the most popular machine learning algorithms, is used for comparison. The performance of all sub-studies is evaluated using accuracy and F1 score. A different network architecture performed best on each task: ResNet-50, AlexNet, and GoogLeNet were the most successful algorithms for the three tasks, with F1 scores of 0.75, 0.80, and 0.88, respectively. Data augmentation was performed to make the models more robust, and with ResNet-50, the most successful model, the F1 scores of the three tasks were 0.80, 0.81, and 0.89. This preliminary work can be developed into support tools for practicing veterinarians that will make a difference in the treatment of dogs with fractured bones, and considering the lack of work in this interdisciplinary field, it may lead to future studies.

Item: A real-time approach to recognition of Turkish sign language by using convolutional neural networks (2021)
Author: Guney, Selda (0000-0002-0573-1326)
Sign language is a form of visual communication used by people with hearing impairments to express themselves, and the main purpose of this study is to make life easier for them. A data set of 3,200 RGB images was created for 32 classes (32 static words) recorded from three different people. Data augmentation increased the number of images from 3,200 to 19,200, i.e. 600 per class. A 10-layer convolutional neural network model was created for classification of the signs, and the VGG16, Inception, and ResNet deep network architectures were applied using transfer learning. The signs were also classified with support vector machines and k-nearest neighbors, traditional machine learning methods, using features obtained from the last layer of the convolutional neural network. The most successful method was determined by comparing the results in terms of both time and performance. In addition, an interface was developed that translates the static words of Turkish sign language (TSL) into written language in real time, and the designed real-time system was evaluated on its success in recognizing static TSL signs and printing the prediction on the computer screen.

Item: A new framework using deep auto-encoder and energy spectral density for medical waveform data classification and processing (2019)
Authors: Karim, Ahmad M.; Guzel, Mehmet S.; Tolun, Mehmet R.; Kaya, Hilal; Celebi, Fatih V.
This paper proposes a new framework for medical data processing designed around deep auto-encoder and energy spectral density (ESD) concepts. The main novelty of the framework is to incorporate the ESD function as a feature extractor into a deep sparse auto-encoder (DSAE) architecture, which allows it to extract higher-quality features in a shorter computational time than conventional frameworks. To validate its performance, the framework was tested on several comprehensive medical waveform datasets of varying dimensionality, namely epileptic seizure detection, SPECTF classification, and diagnosis of cardiac arrhythmias. Overall, the ESD function speeds up the deep auto-encoder processing time and increases the overall accuracy of the results, which compare favorably with several studies in the literature.
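To make the ESD-plus-sparse-auto-encoder idea above concrete, here is a minimal sketch under assumed settings (synthetic one-dimensional waveforms, a plain FFT-based energy spectral density, and a single 64-unit L1-regularized encoding layer); the paper's actual architecture, dataset dimensions, and hyperparameters may differ.

```python
import numpy as np
from tensorflow import keras

def esd(signal: np.ndarray) -> np.ndarray:
    """Energy spectral density: squared magnitude of the one-sided FFT."""
    return np.abs(np.fft.rfft(signal)) ** 2

# Synthetic stand-in for medical waveform records (e.g. EEG segments): 300 records of 256 samples.
raw = np.random.randn(300, 256).astype("float32")
X = np.apply_along_axis(esd, 1, raw)      # ESD features, shape (300, 129)
X = X / X.max(axis=1, keepdims=True)      # simple per-record scaling

# Sparse auto-encoder: L1 activity regularization encourages sparse codes.
inp = keras.Input(shape=(X.shape[1],))
code = keras.layers.Dense(64, activation="relu",
                          activity_regularizer=keras.regularizers.l1(1e-5))(inp)
recon = keras.layers.Dense(X.shape[1], activation="linear")(code)
autoencoder = keras.Model(inp, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)

# The learned codes would then feed a classifier (e.g. softmax or SVM)
# for tasks such as seizure detection or arrhythmia diagnosis.
encoder = keras.Model(inp, code)
codes = encoder.predict(X, verbose=0)
print(codes.shape)  # (300, 64)
```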