Fakülteler / Faculties

Permanent URI for this community: https://hdl.handle.net/11727/1395

Search Results

Now showing 1 - 10 of 12
  • Item
    Classification of Human Movements by Using Kinect Sensor
    (2023) Acis, Busra; Guney, Selda; https://orcid.org/0000-0001-6683-0005; https://orcid.org/0000-0002-0573-1326; HDM-2942-2022
    In recent years, studies have been carried out to classify human movements in many areas, such as health and safety. Image processing methods have also begun to be used for this task: with the help of learning-based algorithms, human posture can be identified in images obtained by various imaging methods. The leading approaches among these classification algorithms are machine learning and deep learning. In addition, the use of sensors that can detect human joints has also increased in recent years. The Kinect sensor, developed by Microsoft, is one of the most frequently used because it is not wearable, detects joints with infrared rays, and transfers this information directly to the computer via a USB connection. This study used CAD60, a dataset available in the literature that contains real-time human posture information and images obtained with a Microsoft Kinect sensor, covering different movements/postures of different people. Within the scope of this study, classification algorithms were run in MATLAB and their performances were compared; different architectures were tried in order to improve the results. On raw data, the Cosine K-Nearest Neighbor method, one of the machine learning methods, achieved a classification accuracy of 72.60%. With feature selection, this value was increased to 74.18%. Furthermore, with the method proposed in this study, in which features are extracted by a Long Short Term Memory network from among the deep architectures and then classified by Support Vector Machines, the accuracy rate was increased to 98.95%. The best method for classifying human posture was thus investigated by comparing different methods against the literature, and a method was proposed.
  • Item
    mirLSTM: A Deep Sequential Approach to MicroRNA Target Binding Site Prediction
    (2019) Paker, Ahmet; Ogul, Hasan; HJH-2307-2023
    MicroRNAs (miRNAs) are small, non-coding RNAs of approximately 21-23 bases in length that play a critical role in gene expression. They bind target mRNAs at the post-transcriptional level and cause translational inhibition or mRNA cleavage. Quick and effective detection of the binding sites of miRNAs is a major problem in bioinformatics. In this study, a deep learning approach based on Long Short Term Memory (LSTM) is developed with the help of an existing duplex sequence model. Compared with four conventional machine learning methods, the proposed LSTM model performs better in terms of accuracy (ACC), sensitivity, specificity, AUC (area under the curve), and F1 score. A web tool is also developed to identify and display the microRNA target sites effectively and quickly.
  • Item
    Computer-Aided Breast Cancer Diagnosis from Thermal Images Using Transfer Learning
    (2020) Cabioglu, Cagri; Ogul, Hasan
    Breast cancer is one of the most prevalent types of cancer, and its early diagnosis and treatment are of vital importance for patients. Various imaging techniques are used in the detection of cancer. Thermal images are acquired by a thermal camera from the temperature differences between regions, without exposing the patient to radiation. In this study, we present methods for computer-aided diagnosis of breast cancer using thermal images. To this end, various Convolutional Neural Networks (CNNs) have been designed using the transfer learning methodology. The performance of the designed nets was evaluated on a benchmarking dataset considering accuracy, precision, recall, F1 measure, and Matthews correlation coefficient. The results show that an architecture holding pre-trained convolutional layers while training newly added fully connected layers achieves better performance than the others. We obtained an accuracy of 94.3%, a precision of 94.7%, and a recall of 93.3% using transfer learning with a CNN.
  • Item
    Feature-level Fusion of Convolutional Neural Networks for Visual Object Classification
    (2016) Ergun, Hilal; Sert, Mustafa; https://orcid.org/0000-0002-7056-4245; AAB-8673-2019
    Deep learning architectures have shown great success in various computer vision applications. In this study, we investigate some of the most popular convolutional neural network (CNN) architectures, namely GoogleNet, AlexNet, VGG19, and ResNet. Furthermore, we show possible early feature fusion strategies for visual object classification tasks. Concatenation of features, average pooling, and maximum pooling are among the investigated fusion strategies. We obtain state-of-the-art results on the well-known image classification datasets Caltech-101, Caltech-256, and Pascal VOC 2007.
  • Item
    Early and Late Level Fusion of Deep Convolutional Neural Networks for Visual Concept Recognition
    (2016) Ergun, Hilal; Akyuz, Yusuf Caglar; Sert, Mustafa; Liu, Jianquan; 0000-0002-7056-4245; B-1296-2011; D-3080-2015; AAB-8673-2019
    Visual concept recognition has been an active research field over the last decade. Reflecting this attention, deep learning architectures are showing great promise in various computer vision domains, including image classification, object detection, event detection, and action recognition in videos. In this study, we investigate various aspects of convolutional neural networks for visual concept recognition. We analyze recent studies and different network architectures in terms of both running time and accuracy. In our proposed visual concept recognition system, we first discuss important properties of the popular convolutional network architectures under consideration. Then we describe our method for feature extraction at different levels of abstraction. We present extensive empirical information along with best practices for big data practitioners. Using these best practices, we propose efficient fusion mechanisms for both single and multiple network models. We present state-of-the-art results on benchmark datasets while keeping computational costs low. Our results show that these state-of-the-art results can be reached without extensive data augmentation techniques.
  • Item
    Applications of Deep Learning Techniques to Wood Anomaly Detection
    (2022) Celik, Yaren; Guney, Selda; Dengiz, Berna; Xu, J; Altiparmak, F.; Hassan, MHA; Marquez, FPG
    Wood products and structures have an important place in today's industry and are widely used in many fields. However, there are various difficulties in production systems where the wood raw material goes through many processes. The difficulty and complexity of these processes result in high variability in the raw material, such as a wide range of visible structural defects that must be checked by specialists online or offline. Such manual inspection is not only difficult and prone to bias, but also ineffective and misleading. To overcome the drawbacks of manual quality control, machine vision-based inspection systems have recently attracted great interest for quality control applications. In this study, wood anomalies are detected using deep learning. Since the task is an image-based classification problem, the Convolutional Neural Network (CNN), one of the most suitable methods, is used for anomaly detection. In addition, the most suitable of several CNN architectures, such as ShuffleNet, AlexNet, and GoogleNet, is sought for the problem. Among the considered methods, MobileNet, SqueezeNet, GoogleNet, and ShuffleNet show promising results in classifying normal and abnormal wood products.
  • Item
    Virtual contrast enhancement for CT scans of abdomen and pelvis
    (2022) Liu, Jingya; Tian, Yingli; Duzgol, Cihan; Akin, Oguz; Agildere, A. Muhtesem; Haberal, K. Murat; Coskun, Mehmet; 0000-0002-8211-4065; 35914340; R-9398-2019
    Contrast agents are commonly used to highlight blood vessels, organs, and other structures in magnetic resonance imaging (MRI) and computed tomography (CT) scans. However, these agents may cause allergic reactions or nephrotoxicity, limiting their use in patients with kidney dysfunction. In this paper, we propose a generative adversarial network (GAN) based framework to automatically synthesize contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis region. Respiratory and peristaltic motion can affect the pixel-level mapping of contrast-enhanced learning, which makes this task more challenging than for other body parts. A perceptual loss is introduced to compare high-level semantic differences of the enhancement areas between the virtual contrast-enhanced and actual contrast-enhanced CT images. Furthermore, to accurately synthesize the intensity details as well as retain the texture structures of CT images, a dual-path training schema is proposed to learn the texture and structure features simultaneously. Experimental results on three contrast phases (i.e., arterial, portal, and delayed) show the potential to synthesize virtual contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis for clinical evaluation.
  • Item
    Utilizing Deep Convolutional Generative Adversarial Networks for Automatic Segmentation of Gliomas: An Artificial Intelligence Study
    (2022) Aydogan Duman, Ebru; Sagiroglu, Seref; Celtikci, Pinar; Demirezen, Mustafa Umut; Borcek, Alp Ozgun; Emmez, Hakan; Celtikci, Emrah; 34542897
    AIM: To describe a deep convolutional generative adversarial network (DCGAN) model which learns normal brain MRI from normal subjects and then finds distortions such as a glioma in a test subject while performing a segmentation at the same time. MATERIAL and METHODS: MRIs of 300 healthy subjects were employed as the training set. The test data consisted of anonymized T2-weighted MRIs of 27 healthy subjects and 27 HGG patients. Consecutive axial T2-weighted MRI slices of every subject were extracted and resized to 364x448 pixel resolution. The generative model produced random normal synthetic images and used them to calculate a residual loss measuring the visual similarity between the input MRIs and the generated MRIs. RESULTS: The model correctly detected anomalies on 24 of the 27 HGG patients' MRIs and marked them as abnormal. In addition, 25 of the 27 healthy subjects' MRIs in the test dataset were correctly detected as healthy. The accuracy, precision, recall, and AUC were 0.907, 0.892, 0.923, and 0.907, respectively. CONCLUSION: Our proposed model demonstrates that acceptable results can be achieved by training only with normal-subject MRIs using a DCGAN model. This model is unique because it learns only from normal MRIs and is able to find any abnormality that differs from the normal pattern.
  • Item
    Deep neural network to differentiate brain activity between patients with euthymic bipolar disorders and healthy controls during verbal fluency performance: A multichannel near-infrared spectroscopy study
    (2022) Alici, Yasemin Hosgoren; Oztoprak, Huseyin; Rizaner, Nahit; Baskak, Bora; Ozguven, Halise Devrimci; 0000-0003-3384-8131; 36088826
    In this study, we aimed to differentiate between euthymic bipolar disorder (BD) patients and healthy controls (HC) based on frontal activity measured by fNIRS, converted to spectrograms and classified with Convolutional Neural Networks (CNN). We also investigated the brain regions that drive this distinction. In total, 29 BD patients and 28 HCs were recruited. Their cortical brain activity was measured using fNIRS while performing letter versions of the verbal fluency task (VFT). Each of the 24 fNIRS channels was converted to a 2D spectrogram, on which a CNN architecture was designed and used for classification. We found that our CNN algorithm using fNIRS activity during a VFT is able to differentiate subjects with BD from healthy controls with 90% accuracy, 80% sensitivity, and 100% specificity. Moreover, validation performance reached an AUC of 94%. From our individual channel analyses, we observed that channels corresponding to the left inferior frontal gyrus (left-IFC), medial frontal cortex (MFC), right dorsolateral prefrontal cortex (DLPFC), Broca's area, and right premotor cortex show considerable activity variation for distinguishing patients from HCs. fNIRS activity during the VFT can thus be used as a potential marker to classify euthymic BD patients versus HCs. Activity particularly in the MFC, left-IFC, Broca's area, and DLPFC shows considerable variation for distinguishing patients from healthy controls.
  • Item
    Human Activity Recognition by Using Different Deep Learning Approaches for Wearable Sensors
    (2021) Erdas, Cagatay Berke; Guney, Selda; 0000-0003-3467-9923
    With the spread of wearable sensors, solutions to the activity recognition task using data obtained from these sensors have become widespread. Recognition of activities from wearable sensors such as accelerometers, gyroscopes, and magnetometers has been studied in recent years. Although there are several applications in the literature, in this study, deep learning algorithms such as Convolutional Neural Networks, Convolutional LSTM, and 3D Convolutional Neural Networks fed by Convolutional LSTM are used for the human activity recognition task, fed with data obtained from an accelerometer sensor. For this purpose, a frame was formed from raw samples of the same activity collected consecutively from the accelerometer. The aim is thus to capture the pattern inherent in the activity by preserving the continuous structure of the movement.
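The framing step this abstract describes, grouping consecutive raw samples of one activity into windows, can be sketched as a sliding window over the sensor stream. The frame length and hop size below are illustrative assumptions, not the study's values:

```python
import numpy as np

def frame_samples(samples, frame_len=128, hop=64):
    """Group consecutive raw accelerometer samples of one activity
    into overlapping frames, preserving the temporal continuity of
    the movement. samples: (n, 3) array of x/y/z readings.
    Returns an array of shape (n_frames, frame_len, 3)."""
    n_frames = 1 + (len(samples) - frame_len) // hop
    return np.stack([samples[i * hop : i * hop + frame_len]
                     for i in range(n_frames)])
```

Each resulting frame is a small multichannel "image" of the movement, which is what lets convolutional and Conv-LSTM models learn the activity's temporal pattern.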