PubMed Closed-Access Publications

Permanent URI for this collection: https://hdl.handle.net/11727/10764


Search Results

Now showing 1 - 3 of 3
  • Item
    Virtual contrast enhancement for CT scans of abdomen and pelvis
    (2022) Liu, Jingya; Tian, Yingli; Duzgol, Cihan; Akin, Oguz; Agildere, A. Muhtesem; Haberal, K. Murat; Coskun, Mehmet; 0000-0002-8211-4065; 35914340; R-9398-2019
    Contrast agents are commonly used to highlight blood vessels, organs, and other structures in magnetic resonance imaging (MRI) and computed tomography (CT) scans. However, these agents may cause allergic reactions or nephrotoxicity, limiting their use in patients with kidney dysfunction. In this paper, we propose a generative adversarial network (GAN) based framework to automatically synthesize contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis. Respiratory and peristaltic motion can disrupt the pixel-level mapping required for contrast-enhancement learning, which makes this task more challenging than in other body parts. A perceptual loss is introduced to compare high-level semantic differences in the enhancement areas between the virtual and the actual contrast-enhanced CT images. Furthermore, to synthesize intensity details accurately while preserving the texture structures of CT images, a dual-path training scheme is proposed to learn texture and structure features simultaneously. Experimental results on three contrast phases (arterial, portal, and delayed) show the potential to synthesize virtual contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis for clinical evaluation. (A minimal sketch of this kind of perceptual loss appears after this listing.)
  • Item
    Utilizing Deep Convolutional Generative Adversarial Networks for Automatic Segmentation of Gliomas: An Artificial Intelligence Study
    (2022) Aydogan Duman, Ebru; Sagiroglu, Seref; Celtikci, Pinar; Demirezen, Mustafa Umut; Borcek, Alp Ozgun; Emmez, Hakan; Celtikci, Emrah; 34542897
    AIM: To describe a deep convolutional generative adversarial network (DCGAN) model that learns normal brain MRI from healthy subjects and then detects distortions such as a glioma in a test subject while performing segmentation at the same time. MATERIAL and METHODS: MRIs of 300 healthy subjects were used as the training set. The test data consisted of anonymized T2-weighted MRIs of 27 healthy subjects and 27 high-grade glioma (HGG) patients. Consecutive axial T2-weighted MRI slices of every subject were extracted and resized to 364x448 pixel resolution. The generative model produced random normal synthetic images and used them to compute a residual loss measuring the visual similarity between input MRIs and generated MRIs. RESULTS: The model correctly detected anomalies on 24 of 27 HGG patients' MRIs and marked them as abnormal. In addition, 25 of the 27 healthy subjects' MRIs in the test dataset were correctly identified as healthy. The accuracy, precision, recall, and AUC were 0.907, 0.892, 0.923, and 0.907, respectively. CONCLUSION: Our proposed model demonstrates that acceptable results can be achieved by training only on MRIs of healthy subjects with a DCGAN model. This model is unique in that it learns only from normal MRIs and can find any abnormality that differs from the normal pattern. (A residual-loss scoring sketch appears after this listing.)
  • Item
    Deep neural network to differentiate brain activity between patients with euthymic bipolar disorders and healthy controls during verbal fluency performance: A multichannel near-infrared spectroscopy study
    (2022) Alici, Yasemin Hosgoren; Oztoprak, Huseyin; Rizaner, Nahit; Baskak, Bora; Ozguven, Halise Devrimci; 0000-0003-3384-8131; 36088826
    In this study, we aimed to differentiate euthymic bipolar disorder (BD) patients from healthy controls (HC) based on frontal activity measured by functional near-infrared spectroscopy (fNIRS), converted to spectrograms and classified with a convolutional neural network (CNN). We also investigated which brain regions drive this distinction. In total, 29 BD patients and 28 HCs were recruited. Their cortical activity was measured with fNIRS while they performed the letter version of the verbal fluency task (VFT). Each of the 24 fNIRS channels was converted to a 2D spectrogram, on which a CNN architecture was designed and used for classification. We found that our CNN algorithm, using fNIRS activity during the VFT, differentiates subjects with BD from healthy controls with 90% accuracy, 80% sensitivity, and 100% specificity. Moreover, validation performance reached an AUC of 94%. Our individual-channel analyses showed that channels corresponding to the left inferior frontal gyrus (left-IFC), medial frontal cortex (MFC), right dorsolateral prefrontal cortex (DLPFC), Broca's area, and right premotor cortex exhibit considerable activity variation that distinguishes patients from HCs. fNIRS activity during the VFT can thus serve as a potential marker for classifying euthymic BD patients versus HCs, with activity in the MFC, left-IFC, Broca's area, and DLPFC showing particularly strong variation. (A spectrogram-plus-CNN sketch appears after this listing.)
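
The first abstract introduces a perceptual loss comparing high-level semantic features of virtual and actual contrast-enhanced CTs. Below is a minimal sketch of one common way to realize such a loss with a frozen pretrained VGG-16 feature extractor; the layer cut-off, L1 norm, and ImageNet weights are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a VGG-based perceptual loss for GAN image synthesis.
# Assumptions: single-channel CT slices in (N, 1, H, W) tensors,
# VGG-16 features up to an intermediate conv layer, L1 distance.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self, layer_idx: int = 16):
        super().__init__()
        # Frozen ImageNet-pretrained feature extractor (illustrative choice).
        self.features = vgg16(weights="IMAGENET1K_V1").features[:layer_idx].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.l1 = nn.L1Loss()

    def forward(self, fake_ct: torch.Tensor, real_ct: torch.Tensor) -> torch.Tensor:
        # VGG expects 3 channels, so single-channel CT slices are repeated.
        fake3 = fake_ct.repeat(1, 3, 1, 1)
        real3 = real_ct.repeat(1, 3, 1, 1)
        return self.l1(self.features(fake3), self.features(real3))

# Usage: loss = PerceptualLoss()(generator(noncontrast_ct), contrast_ct)
```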
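The second abstract scores anomalies via a residual loss between an input MRI and the closest image a DCGAN trained on healthy scans can generate. A minimal AnoGAN-style sketch of that scoring step follows; the generator interface, latent size, step count, and threshold procedure are assumptions for illustration.

```python
# Sketch of residual-loss anomaly scoring with a trained DCGAN generator.
# Assumptions: generator maps a (1, z_dim, 1, 1) latent to an image the
# same shape as x; hyperparameters below are illustrative.
import torch

def residual_anomaly_score(generator: torch.nn.Module,
                           x: torch.Tensor,
                           z_dim: int = 100,
                           steps: int = 200,
                           lr: float = 0.01) -> float:
    """Search the latent space for the synthetic image closest to x;
    the remaining pixel-wise residual serves as the anomaly score."""
    generator.eval()
    z = torch.randn(1, z_dim, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (generator(z) - x).abs().mean()  # residual loss
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (generator(z) - x).abs().mean().item()

# A slice scoring above a threshold tuned on healthy MRIs is flagged abnormal;
# the residual map itself localizes (segments) the abnormal region.
```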
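The third abstract converts each fNIRS channel to a 2D spectrogram and classifies it with a CNN. The sketch below shows that pipeline under stated assumptions: the sampling rate, spectrogram parameters, and network shape are illustrative, not the study's exact configuration.

```python
# Sketch: one fNIRS channel -> 2D spectrogram -> small CNN classifier.
# Assumptions: ~10 Hz sampling, log-scaled spectrogram, 2-class output.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def channel_to_spectrogram(signal: np.ndarray, fs: float = 10.0) -> torch.Tensor:
    # fNIRS is sampled slowly, so a short window is used (illustrative).
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=32, noverlap=16)
    sxx = np.log1p(sxx)  # compress dynamic range
    return torch.tensor(sxx, dtype=torch.float32).unsqueeze(0)  # (1, F, T)

class SpectrogramCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # BD vs. HC logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Usage: logits = SpectrogramCNN()(channel_to_spectrogram(sig).unsqueeze(0))
```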