Scopus-Indexed Open & Closed Access Publications
Permanent URI for this community: https://hdl.handle.net/11727/10752
7 results
Search Results
Item: Applications of Deep Learning Techniques to Wood Anomaly Detection (2022)
Celik, Yaren; Guney, Selda; Dengiz, Berna; Xu, J; Altiparmak, F.; Hassan, MHA; Marquez, FPG
Wood products and structures have an important place in today's industry and are widely used in many fields. However, there are various difficulties in production systems where the wood raw material undergoes many processes. The difficulty and complexity of these processes result in high variability of the raw material, such as a wide range of visible structural defects that must be checked by specialists on line or off line. Manual inspection is not only laborious and subjective, but also less effective and prone to error. To overcome the drawbacks of manual quality control, machine vision-based inspection systems have recently attracted great interest for quality control applications. In this study, wood anomalies are detected using deep learning. Since the task is an image-based classification problem, the Convolutional Neural Network (CNN), one of the most suitable methods, is used for anomaly detection. In addition, different CNN architectures such as ShuffleNet, AlexNet, and GoogLeNet are compared to find the most suitable one for the problem. Among the considered architectures, MobileNet, SqueezeNet, GoogLeNet, and ShuffleNet show promising results in classifying normal and abnormal wood products.

Item: Virtual contrast enhancement for CT scans of abdomen and pelvis (2022)
Liu, Jingya; Tian, Yingli; Duzgol, Cihan; Akin, Oguz; Agildere, A. Muhtesem; Haberal, K. Murat; Coskun, Mehmet; 0000-0002-8211-4065; 35914340; R-9398-2019
Contrast agents are commonly used to highlight blood vessels, organs, and other structures in magnetic resonance imaging (MRI) and computed tomography (CT) scans. However, these agents may cause allergic reactions or nephrotoxicity, limiting their use in patients with kidney dysfunction. In this paper, we propose a generative adversarial network (GAN) based framework to automatically synthesize contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis. Respiratory and peristaltic motion can affect the pixel-level mapping of contrast-enhanced learning, which makes this task more challenging than for other body parts. A perceptual loss is introduced to compare high-level semantic differences of the enhancement areas between the virtual contrast-enhanced and actual contrast-enhanced CT images. Furthermore, to accurately synthesize intensity details while retaining the texture structures of the CT images, a dual-path training scheme is proposed to learn texture and structure features simultaneously. Experimental results on three contrast phases (i.e., arterial, portal, and delayed) show the potential to synthesize virtual contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis for clinical evaluation.
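The perceptual loss described in the virtual contrast enhancement item above is, in its general form, a feature-space comparison between the synthesized and the real contrast-enhanced image. The sketch below illustrates that general idea only; the VGG16 backbone, layer depth, and L1 feature distance are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal perceptual-loss sketch, assuming PyTorch and a pretrained VGG16
# as the high-level feature extractor (an assumption, not the paper's setup).
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self, feature_layer=16):
        super().__init__()
        # Fixed feature extractor: first `feature_layer` layers of VGG16.
        self.features = vgg16(pretrained=True).features[:feature_layer].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.l1 = nn.L1Loss()

    def forward(self, virtual_ct, real_ct):
        # Single-channel CT slices (N, 1, H, W), assumed already normalized,
        # are repeated to 3 channels to match the VGG input format.
        v = virtual_ct.repeat(1, 3, 1, 1)
        r = real_ct.repeat(1, 3, 1, 1)
        # Compare high-level semantic features rather than raw pixel values.
        return self.l1(self.features(v), self.features(r))
```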
Item: Utilizing Deep Convolutional Generative Adversarial Networks for Automatic Segmentation of Gliomas: An Artificial Intelligence Study (2022)
Aydogan Duman, Ebru; Sagiroglu, Seref; Celtikci, Pinar; Demirezen, Mustafa Umut; Borcek, Alp Ozgun; Emmez, Hakan; Celtikci, Emrah; 34542897
AIM: To describe a deep convolutional generative adversarial network (DCGAN) model that learns normal brain MRI from healthy subjects and then finds distortions, such as a glioma, in a test subject while performing segmentation at the same time. MATERIAL and METHODS: MRIs of 300 healthy subjects were employed as the training set. The test data consisted of anonymized T2-weighted MRIs of 27 healthy subjects and 27 HGG patients. Consecutive axial T2-weighted MRI slices of every subject were extracted and resized to 364x448 pixel resolution. The generative model produced random normal synthetic images and used them to calculate a residual loss measuring the visual similarity between input MRIs and generated MRIs. RESULTS: The model correctly detected anomalies on 24 of 27 HGG patients' MRIs and marked them as abnormal. In addition, 25 of 27 healthy subjects' MRIs in the test dataset were correctly identified as healthy. The accuracy, precision, recall, and AUC were 0.907, 0.892, 0.923, and 0.907, respectively. CONCLUSION: Our proposed model demonstrates that acceptable results can be achieved by training only on MRIs of normal subjects with a DCGAN. The model is unique because it learns only from normal MRIs and is able to find any abnormality that deviates from the normal pattern.

Item: Deep neural network to differentiate brain activity between patients with euthymic bipolar disorders and healthy controls during verbal fluency performance: A multichannel near-infrared spectroscopy study (2022)
Alici, Yasemin Hosgoren; Oztoprak, Huseyin; Rizaner, Nahit; Baskak, Bora; Ozguven, Halise Devrimci; 0000-0003-3384-8131; 36088826
In this study, we aimed to differentiate euthymic bipolar disorder (BD) patients from healthy controls (HC) based on frontal activity measured by fNIRS, converted to spectrograms and classified with Convolutional Neural Networks (CNN). We also investigated the brain regions that drive this distinction. In total, 29 BD patients and 28 HCs were recruited. Their cortical activity was measured using fNIRS while performing the letter version of a verbal fluency task (VFT). Each of the 24 fNIRS channels was converted to a 2D spectrogram, on which a CNN architecture was designed and used for classification. We found that our CNN algorithm using fNIRS activity during the VFT is able to differentiate subjects with BD from healthy controls with 90% accuracy, 80% sensitivity, and 100% specificity. Moreover, validation performance reached an AUC of 94%. From our individual channel analyses, we observed that channels corresponding to the left inferior frontal gyrus (left-IFC), medial frontal cortex (MFC), right dorsolateral prefrontal cortex (DLPFC), Broca's area, and right premotor cortex show considerable activity variation for distinguishing patients from HCs. fNIRS activity during a VFT can be used as a potential marker to classify euthymic BD patients against HCs. Activity particularly in the MFC, left-IFC, Broca's area, and DLPFC shows considerable variation for distinguishing patients from healthy controls.
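The channel-to-spectrogram step in the fNIRS item above follows a common pattern: each 1-D channel signal is turned into a 2-D time-frequency image and fed to a small CNN. The sketch below shows that pattern under stated assumptions (SciPy for the spectrogram, PyTorch for the classifier); the sampling rate, window parameters, and network layout are illustrative, not the study's settings.

```python
# Channel-to-spectrogram sketch for fNIRS-style 1-D signals; all parameter
# values here are assumptions for illustration only.
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

def channel_to_spectrogram(signal, fs=10.0, nperseg=64, noverlap=32):
    """Convert one fNIRS channel (1-D time series) into a 2-D time-frequency image."""
    f, t, sxx = spectrogram(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log1p(sxx)  # log scaling compresses the dynamic range

# Small CNN that classifies a single-channel spectrogram as patient vs. control.
classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # two classes: BD patient vs. healthy control
)

spec = channel_to_spectrogram(np.random.randn(3000))           # one channel's signal
logits = classifier(torch.tensor(spec, dtype=torch.float32)[None, None])
```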
Item: Human Activity Recognition by Using Different Deep Learning Approaches for Wearable Sensors (2021)
Erdas, Cagatay Berke; Guney, Selda; 0000-0003-3467-9923
With the spread of wearable sensors, solutions for recognizing activities from the data these sensors produce have become widespread. Activity recognition based on wearable sensors such as accelerometers, gyroscopes, and magnetometers has been studied in recent years. Unlike previous applications in the literature, in this study deep learning algorithms such as Convolutional Neural Networks, Convolutional LSTM, and 3D Convolutional Neural Networks fed by Convolutional LSTM are applied to the human activity recognition task using data obtained from an accelerometer sensor. For this purpose, a frame was formed from raw samples of the same activity collected consecutively from the accelerometer, with the aim of capturing the pattern inherent in the activity by preserving the continuous structure of the movement.

Item: Classification of Canine Maturity and Bone Fracture Time Based on X-Ray Images of Long Bones (2021)
Ergun, Gulnur Begum; Guney, Selda; 0000-0002-0573-1326; 0000-0001-8469-5484
Veterinarians use X-rays in almost all examinations of clinical fractures to determine the appropriate treatment. Before treatment, vets need to know the date of the injury, the type of broken bone, and the age of the dog. The maturity of the dog and the time of the fracture affect the approach to the fracture site, the surgical procedure, and the materials needed. This comprehensive study has three main goals: determining the maturity of the dogs (Task 1), dating fractures (Task 2), and detecting fractures of the long bones in dogs (Task 3). The most popular deep neural networks are used: AlexNet, ResNet-50, and GoogLeNet. One of the most popular machine learning algorithms, the support vector machine (SVM), is used for comparison. The performance of all sub-studies is evaluated using accuracy and F1 score. Each task is handled best by a different network architecture: ResNet-50, AlexNet, and GoogLeNet are the most successful algorithms for the three tasks, with F1 scores of 0.75, 0.80, and 0.88, respectively. Data augmentation is performed to make the models more robust, and the F1 scores of the three tasks were 0.80, 0.81, and 0.89 using ResNet-50, the most successful model. This preliminary work can be developed into support tools for practicing veterinarians that will make a difference in the treatment of dogs with fractured bones. Considering the lack of work in this interdisciplinary field, this paper may lead to future studies.

Item: A real-time approach to recognition of Turkish sign language by using convolutional neural networks (2021)
Guney, Selda; 0000-0002-0573-1326
Sign language is a form of visual communication used by people with hearing problems to express themselves, and the main purpose of this study is to make life easier for these people. A dataset was created from 3200 RGB images covering 32 classes (32 static words) collected from three different people. Data augmentation methods were applied, increasing the number of images from 3200 to 19,200, i.e., 600 per class. A 10-layer convolutional neural network model was created for classifying the signs, and the VGG16, Inception, and ResNet deep network architectures were applied using transfer learning. The signs were also classified with support vector machines and k-nearest neighbors, traditional machine learning methods, using features obtained from the last layer of the convolutional neural network. The most successful method was determined by comparing the results in terms of runtime and performance. In addition to these analyses, an interface was developed that translates static words of Turkish sign language (TSL) into written language in real time. The real-time system was evaluated on its success in recognizing static TSL words and printing its prediction on the computer screen.
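The last item combines a CNN used as a feature extractor with traditional classifiers trained on features from its last layer. The sketch below shows that combination under stated assumptions: a pretrained ResNet-18 backbone and scikit-learn classifiers are illustrative choices, not the paper's exact setup.

```python
# "CNN features + traditional classifier" sketch; backbone, feature size, and
# hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Pretrained backbone with the classification head removed: outputs 512-d features.
backbone = models.resnet18(pretrained=True)
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(images):
    """images: float tensor of shape (N, 3, 224, 224), already normalized."""
    return backbone(images).numpy()

def fit_classifiers(train_images, train_labels):
    # Features from the network's last layer feed the traditional classifiers.
    feats = extract_features(train_images)
    svm = SVC(kernel="rbf").fit(feats, train_labels)
    knn = KNeighborsClassifier(n_neighbors=5).fit(feats, train_labels)
    return svm, knn
```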