Scopus Indexed Publications Collection
Permanent URI for this collection: https://hdl.handle.net/11727/4809
Item: Femoral neck fracture detection in X-ray images using deep learning and genetic algorithm approaches (2020)
Beyaz, Salih; Acici, Koray; Sumer, Emre; 0000-0002-5788-5116; 32584712; K-8820-2019

Objectives: This study aims to detect femoral neck fractures in frontal pelvic radiographs using deep learning techniques.

Patients and methods: This retrospective study was conducted between January 2013 and January 2018. A total of 234 frontal pelvic X-ray images collected from 65 patients (32 males, 33 females; mean age 74.9 years; range, 33 to 89 years) were augmented to 2,106 images to achieve a satisfactory dataset. Of these, 1,341 images showed fractured femoral necks while 765 showed non-fractured ones. The proposed convolutional neural network (CNN) architecture contained five blocks, each comprising a convolutional layer, a batch normalization layer, a rectified linear unit, and a maximum pooling layer. The last block was followed by a dropout layer with a probability of 0.5. The final three layers of the architecture were a fully connected layer with two classes, a softmax layer, and a classification layer that computes the cross-entropy loss. Training used the Adam optimizer and was terminated after 50 epochs. The learning rate was dropped by a factor of 0.5 every five epochs. To reduce overfitting, a regularization term was added to the weights of the loss function. The training process was repeated for image sizes of 50x50, 100x100, 200x200, and 400x400 pixels. A genetic algorithm (GA) approach was employed to optimize the hyperparameters of the CNN architecture and to minimize the error after testing the model created by the CNN architecture in the training phase.

Results: Performance in terms of sensitivity, specificity, accuracy, F1 score, and Cohen's kappa coefficient was evaluated using five-fold cross-validation tests. The best performance was obtained when cropped images were rescaled to 50x50 pixels; the kappa metric indicated that classifier performance was more reliable at this image size than at the other image sizes. Sensitivity and specificity were computed as 83% and 73%, respectively. With the inclusion of the GA, this rate increased by 1.6%. The detection rate of fractured bones was 83%. A kappa coefficient of 55% was obtained, indicating acceptable agreement.

Conclusion: This experimental study utilized deep learning techniques for the detection of bone fractures in radiography. Although the dataset was unbalanced, the results can be considered promising. Using a smaller image size was observed to decrease computational cost and to provide better results according to the evaluation metrics.
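As a rough illustration of the architecture outlined in the abstract above, the sketch below assembles the five-block CNN (convolution, batch normalization, ReLU, max pooling, followed by dropout, a two-class fully connected layer and softmax) with the reported training schedule. The framework choice (TensorFlow/Keras), filter counts, kernel sizes, initial learning rate, L2 coefficient, and batch size are not reported in the abstract and are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a Keras implementation; hyperparameters marked below are guesses.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

IMG_SIZE = 50      # the abstract reports the best results at 50x50 pixels
L2_WEIGHT = 1e-4   # assumed value; the abstract only states a regularization term was added

def build_fracture_cnn(img_size=IMG_SIZE, n_filters=(16, 32, 64, 128, 256)):
    """Five blocks of Conv -> BatchNorm -> ReLU -> MaxPool, then Dropout(0.5),
    a two-class fully connected layer and softmax, as outlined in the abstract."""
    inputs = layers.Input(shape=(img_size, img_size, 1))
    x = inputs
    for filters in n_filters:  # five convolutional blocks; filter counts are assumed
        x = layers.Conv2D(filters, 3, padding="same",
                          kernel_regularizer=regularizers.l2(L2_WEIGHT))(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Dropout(0.5)(x)                    # dropout with probability 0.5
    x = layers.Flatten()(x)
    outputs = layers.Dense(2, activation="softmax",
                           kernel_regularizer=regularizers.l2(L2_WEIGHT))(x)
    return models.Model(inputs, outputs)

# Learning rate halved every five epochs over 50 epochs with Adam, matching the reported schedule.
def lr_schedule(epoch, lr):
    return lr * 0.5 if epoch > 0 and epoch % 5 == 0 else lr

model = build_fracture_cnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # initial rate assumed
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=50, batch_size=32,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
```

The GA-based hyperparameter search and the five-fold cross-validation reported in the abstract would wrap around this model-building step; they are omitted here because the abstract gives no detail about the GA encoding or operators.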
Item: Integrating features for accelerometer-based activity recognition (2016)
Erdas, C. Berke; Atasoy, Isil; Acici, Koray; Ogul, Hasan; 0000-0003-3467-9923

Activity recognition is the problem of predicting the current action of a person from motion sensors worn on the body. The problem is usually approached as a supervised classification task in which a discriminative model is learned from known samples and a new query is assigned to a known activity label using the learned model. The challenging issue is how to feed this classifier with a fixed number of features when the real input is a raw signal of varying length.

In this study, we consider three possible feature sets, namely time-domain, frequency-domain, and wavelet-domain statistics, and their combinations to represent the motion signal obtained from the accelerometer of a mobile phone worn on the chest. In addition to a systematic comparison of these feature sets, we also provide a comprehensive evaluation of preprocessing steps such as filtering and feature selection. The results show that feeding a random forest classifier with an ensemble selection of the most relevant time-domain and frequency-domain features extracted from the raw data provides the highest accuracy on a real dataset. (C) 2016 The Authors. Published by Elsevier B.V.
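As a rough sketch of the feature-integration idea described in this abstract, the example below extracts simple time-, frequency-, and wavelet-domain statistics from a raw accelerometer window, concatenates them into a fixed-length vector, and feeds the result to a random forest. The specific statistics, wavelet family, forest size, and helper function names are illustrative assumptions, and SelectKBest stands in for the paper's ensemble feature selection, which the abstract does not detail.

```python
# Minimal sketch, assuming NumPy, PyWavelets and scikit-learn; all helper names are hypothetical.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

def time_features(window):
    # Basic time-domain statistics of the raw signal (assumed choices).
    return [window.mean(), window.std(), window.min(), window.max()]

def freq_features(window):
    # Statistics of the magnitude spectrum (assumed choices).
    spectrum = np.abs(np.fft.rfft(window))
    return [spectrum.mean(), spectrum.std(), float(spectrum.argmax())]

def wavelet_features(window, wavelet="db4", level=3):
    # Per-level standard deviation of wavelet coefficients (wavelet family assumed).
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return [c.std() for c in coeffs]

def extract_features(window):
    # Integrate the three feature sets into one fixed-length vector.
    return np.array(time_features(window) + freq_features(window) + wavelet_features(window))

def train_activity_model(windows, labels, k_best=8):
    # windows: list of 1-D accelerometer segments; labels: activity label per window.
    X = np.vstack([extract_features(w) for w in windows])
    selector = SelectKBest(f_classif, k=min(k_best, X.shape[1]))  # stand-in for feature selection
    X_sel = selector.fit_transform(X, labels)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_sel, labels)
    return selector, clf
```

In this arrangement the varying-length raw signal is reduced to a fixed number of features per window, which is exactly the requirement the abstract identifies for feeding a standard classifier.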