Meslek Yüksek Okulları / Vocational Schools

Permanent URI for this community: https://hdl.handle.net/11727/1410


Search Results

Now showing 1 - 10 of 13
  • Item
    A Novel Approach for Estimating Heat Transfer Coefficients of Ethylene Glycol-Water Mixtures
    (2014) Bulut, Murat; Ankishan, Haydar; Demircioglu, Erdem; Ari, Seckin; Sengul, Orhan; https://orcid.org/0000-0002-6240-2545; AAH-4421-2019
    Ethylene glycol-water mixtures (EGWM) are vital for cooling engines in the automotive industry. Little information is available in the literature on estimating the heat transfer coefficients (HTC) of EGWM using knowledge-based estimation techniques such as adaptive neuro-fuzzy inference systems (ANFIS) and artificial neural networks (ANN), which offer nonlinear input-output mapping. In this paper, the supervised learning methods ANFIS and ANN are used to estimate experimentally determined HTC, complementing earlier modeling work on the thermal properties of EGWM and HTC applications in the literature. An experimental test setup is designed to compute the HTC of the mixture over a small circular aluminum heater surface, 9.5 mm in diameter, placed at the bottom 40-mm-wide wall of a rectangular channel 3 mm x 40 mm in cross section. The measurement data are used as the training and test sets of the estimation process. The prediction results show that ANFIS provides more accurate and reliable approximations than ANN: ANFIS reaches a correlation factor of 98.81%, whereas ANN achieves 87.83% accuracy on the test samples.
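    As a rough illustration of the ANN side of this comparison (ANFIS has no standard scikit-learn implementation), the sketch below fits a small neural network to a synthetic HTC dataset. The input variables (glycol fraction, heat flux) and the target values are invented stand-ins, not the paper's measurements.

    ```python
    # Hedged sketch: ANN regression of a heat transfer coefficient from
    # synthetic inputs. All data below is fabricated for illustration only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Assumed inputs: [glycol fraction, heat flux] -> output: HTC (synthetic).
    X = rng.uniform([0.1, 10.0], [0.6, 100.0], size=(200, 2))
    htc = 3000.0 - 2500.0 * X[:, 0] + 15.0 * X[:, 1] + rng.normal(0.0, 20.0, 200)

    x_scaled = StandardScaler().fit_transform(X)
    y_scaled = (htc - htc.mean()) / htc.std()       # scale target for the MLP

    ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    ann.fit(x_scaled, y_scaled)
    r2 = ann.score(x_scaled, y_scaled)              # training R^2
    ```

    The same fit/score pattern would apply to any supervised estimator compared against ANFIS in this kind of study.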
  • Item
    Deep Learning Based Multi Modal Approach for Pathological Sounds Classification
    (2020) Ankishan, Haydar; Kocoglu, Arif
    Automatic detection of voice disorders is important because it makes the diagnosis process simpler, cheaper, and less time-consuming. The literature contains many studies on analyzing voice disorders from the characteristics of the voice and on subdividing the results of this analysis. In general, these studies divide sounds into pathological and normal subgroups by extracting features on the frequency, time, or hybrid axis and passing them to a classifier. In contrast to existing approaches, this study proposes a multi-model deep learning architecture with feature-level fusion to distinguish pathological sounds from normal ones. First, a feature vector (HOV) on the hybrid axis was obtained from the raw sound data. Two CNN models were then used: the first takes the raw audio data as input, and the second takes the HOV. The feature data in the SoftMax layers of both models were obtained as matrices, and canonical correlation analysis (CCA) was applied for feature-level fusion. The resulting feature vector was used as input to multiple support vector machines (M-SVMs), decision tree (DTC), and naive Bayes (NBC) classifiers. The experimental results show that the new multi-model deep learning architecture classifies pathological sound data with superior success. With these results, it will be possible to automatically detect and classify such patients' pathologies with the proposed system.
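    The feature-level fusion step can be sketched as follows, with random matrices standing in for the two CNN branches' softmax-layer features (the actual networks, HOV definition, and data are not reproduced here):

    ```python
    # Hedged sketch of CCA feature-level fusion: project two feature "views"
    # into a shared space, concatenate the projections, and train a classifier.
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 120
    labels = rng.integers(0, 2, n)                   # synthetic normal/pathological
    # Two views that both carry the label plus independent noise (fabricated).
    feat_raw = labels[:, None] + rng.normal(0, 0.5, (n, 8))   # "raw audio" branch
    feat_hov = labels[:, None] + rng.normal(0, 0.5, (n, 6))   # "HOV" branch

    cca = CCA(n_components=2).fit(feat_raw, feat_hov)
    z_raw, z_hov = cca.transform(feat_raw, feat_hov)
    fused = np.hstack([z_raw, z_hov])                # feature-level fusion

    clf = SVC().fit(fused, labels)
    acc = clf.score(fused, labels)                   # training accuracy
    ```

    The fused vector then plays the role the abstract assigns to the input of the M-SVM, DTC, and NBC classifiers.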
  • Item
    An Approach to the Classification of Environmental Sounds by LSTM Based Transfer Learning Method
    (2020) Ankishan, Haydar
    Effective frequency extraction from acoustic environmental sounds along the frequency and time axes has recently become increasingly important for voice recognition, sound detection, and environmental classification. To this end, there are many studies in the literature on discriminating acoustic environmental sounds, generally carried out with machine learning and deep learning algorithms. In this study, a new artificial intelligence architecture using two long short-term memory (LSTM) networks is designed. The structure, which takes both raw data and the proposed feature vector as inputs, is reinforced by a transfer learning approach, and the resulting classification outputs are fused at the decision level. In the experimental studies, five different environmental acoustic sounds were classified with 97.15% test accuracy, and pairwise experiments on the environmental sounds reached 100% accuracy. The results show that the proposed artificial intelligence architecture with decision-level fusion is capable of discriminating acoustic environmental sounds.
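    Decision-level fusion, as opposed to the feature-level fusion in the previous item, combines the outputs of the two branches. A minimal sketch, with invented probability vectors and hypothetical class names standing in for the two LSTM branches:

    ```python
    # Hedged sketch of decision-level fusion: average two classifiers'
    # class-probability vectors and take the argmax of the mean.
    import numpy as np

    classes = ["street", "park", "cafe", "office", "train"]  # hypothetical labels
    p_raw = np.array([0.10, 0.15, 0.50, 0.15, 0.10])   # branch on raw audio
    p_feat = np.array([0.05, 0.10, 0.40, 0.35, 0.10])  # branch on feature vector

    p_fused = (p_raw + p_feat) / 2.0        # simple mean at the decision level
    decision = classes[int(np.argmax(p_fused))]
    ```

    Weighted means or majority voting are common alternatives at this stage; the abstract does not specify which fusion rule the study uses.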
  • Item
    A New Approach for Discriminating the Acoustic Signals: Largest Area Parameter (LAP)
    (2018) Ankishan, Haydar; Inam, S. Cagdas
    Feature extraction from sound signals is essential for the performance of applications such as pattern and voice recognition. In this study, a method based on a novel feature is proposed to separate pathological human voice signals from healthy ones, as well as to separate subgroups of pathological voices from each other. The voices are examined in the time-frequency domain. The differences revealed by the proposed method are investigated, and its mechanism is demonstrated on experimental cases. It is concluded that the method succeeds in discriminating voices marked "healthy" from those marked "pathological".
  • Item
    A New Portable Device for the Snore/Non-Snore Classification
    (2017) Ankishan, Haydar; Tuncer, A. Turgut; 0000-0002-6240-2545; AAH-4421-2019
    Snoring is widely known as a disease. The aim of this paper is to introduce and validate our newly developed snoring detection device, which automatically identifies snore and non-snore sounds using a nonlinear analysis technique. The device can analyze chaotic features of snore-related sounds, such as entropy and Largest Lyapunov Exponents (LLEs), and can classify the data based on these feature values. We report that the developed snoring detection device with the proposed automatic classification method achieved an accuracy of 94.38% in experiment I and 82.02% in experiment II when analyzing snore and non-snore sounds from 22 subjects. This study revealed the efficacy of our newly developed snoring detection device and indicated that it may be used at home as an alternative means of assessing snore-related sounds. It is anticipated that our findings will contribute to the development of an automated snore analysis system for use in sleep studies.
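    Of the two chaotic features named, entropy is the simpler to illustrate. The sketch below computes a Shannon-entropy feature over an amplitude histogram; the exact estimator and the Lyapunov-exponent computation used in the device are not specified in the abstract, so this is only a plausible stand-in. A disordered signal should score higher than a regular one:

    ```python
    # Hedged sketch: Shannon entropy of a sound frame's amplitude histogram,
    # an assumed form of the entropy feature mentioned in the abstract.
    import numpy as np

    def frame_entropy(frame, bins=32):
        """Shannon entropy (bits) of the amplitude histogram of one frame."""
        hist, _ = np.histogram(frame, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                        # drop empty bins (0*log0 := 0)
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 8000, endpoint=False)
    tone = np.sin(2 * np.pi * 100 * t)      # highly regular signal
    noise = rng.uniform(-1, 1, 8000)        # disordered signal

    e_tone, e_noise = frame_entropy(tone), frame_entropy(noise)
    ```

    A threshold on such features, or a classifier over several of them, then yields the snore/non-snore decision.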
  • Item
    A New Approach for the Acoustic Analysis of the Speech Pathology
    (2017) Ankishan, Haydar; 0000-0002-6240-2545; AAH-4421-2019
    Voice disorders are a common physical problem today and can cause serious issues in the long term. For treatment, the voice must be analyzed and its characteristics extracted correctly. In some cases, disordered voices do not differ clearly from one another in their acoustic characteristics, and today's systems are not yet able to make correct decisions for them. This study considers such cases of voice disturbance and analyzes these disorders by means of previously unused attributes with the help of support vector machine (SVM) classifiers. After the sounds are modeled with LPC and MFCC, disorder analysis is performed on the obtained signals. The experimental results show that patients with four different diseases can be separated with 100% accuracy using the nonlinear features employed.
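    Of the two front-ends named, LPC can be sketched without an audio library (MFCC extraction typically needs one and is omitted). This is the standard autocorrelation/Levinson-Durbin route, not the paper's exact code, applied to a toy stand-in for a voiced frame:

    ```python
    # Hedged sketch: LPC coefficients via the Levinson-Durbin recursion.
    # A pure sinusoid x[n] = sin(0.3n) is (approximately) an AR(2) process
    # with a = [1, -2cos(0.3), 1], so order-2 LPC should predict it well.
    import numpy as np

    def lpc(signal, order):
        """Return LPC coefficients a[0..order] (a[0] = 1) for `signal`."""
        n = len(signal)
        # Autocorrelation lags r[0..order]
        r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err                            # reflection coefficient
            a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
            err *= (1.0 - k * k)
        return a

    voiced = np.sin(0.3 * np.arange(400))      # toy stand-in for a voiced frame
    coeffs = lpc(voiced, order=2)

    # One-step linear prediction from the fitted coefficients
    pred = -(coeffs[1] * voiced[1:-1] + coeffs[2] * voiced[:-2])
    resid_rms = np.sqrt(np.mean((voiced[2:] - pred) ** 2))
    ```

    In a pipeline like the one described, such coefficients (or MFCCs) per frame would form the feature vectors fed to the SVM classifiers.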
  • Item
    Max-Min Space Approach for Acoustic Signal Analysis
    (2017) Ankishan, Haydar; Baysal, Ugur; 0000-0002-6240-2545; AAH-4421-2019; AAJ-5711-2020
    Acoustic signals with pathological problems are difficult to discriminate from each other. Despite the presence of many features, the difficulty arises from the chaotic and nonlinear nature of these voices. Unlike existing features, this study introduces a new feature and feature space. Considering the maximum and minimum values of acoustic signals over certain time intervals, the relation between them is revealed and a Max-Min space is created. Experimental studies show that the distributions of pathological and normal sounds in this space are completely separated from each other and that their scattering-field sizes differ. As a result, a time-based feature that allows the separation of chaotic and nonlinear acoustic signals is introduced to the literature.
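    Reading the abstract literally, the construction can be sketched as taking per-window extrema and treating each (max, min) pair as a point in a 2-D space. The window length and the synthetic signals below are assumptions; the paper's exact construction may differ:

    ```python
    # Hedged sketch of a Max-Min space: one (max, min) point per time window.
    import numpy as np

    def max_min_points(signal, win=100):
        """(max, min) pair for each non-overlapping window of length `win`."""
        n = len(signal) // win
        frames = signal[:n * win].reshape(n, win)
        return np.stack([frames.max(axis=1), frames.min(axis=1)], axis=1)

    rng = np.random.default_rng(0)
    normal = 0.3 * np.sin(2 * np.pi * np.arange(2000) / 80)  # regular voicing
    patho = rng.normal(0, 0.5, 2000)                         # erratic stand-in

    pts_normal = max_min_points(normal)
    pts_patho = max_min_points(patho)
    # The clouds occupy different regions/extents of the Max-Min plane.
    spread_normal = float(pts_normal.std(axis=0).sum())
    spread_patho = float(pts_patho.std(axis=0).sum())
    ```

    The separation the abstract reports would correspond to the two point clouds occupying distinct, differently sized regions of this plane.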
  • Item
    A New Approach for Estimation of Heart Beat Rates from Speech Recordings
    (2017) Ankishan, Haydar; Baysal, Ugur; 0000-0002-6240-2545; AAH-4421-2019; AAJ-5711-2020
    Today, people can obtain information about their mental state, behavior, and some aspects of their health status from the features of their voices. This study calculates people's heart rates using nonlinear equations built from the features of sound recordings. The proposed model consists of four difference-equation parameters as inputs, which change with constant and variable sound features. In the experimental studies, it was observed that the heart rate could be predicted with an accuracy of 89.76% from 10 s sound recordings. With the proposed equation, it is observed that the heart beat rate is related to the speech features and can be calculated from them with a minimal error rate; the nonlinear equation is presented to the literature.
  • Item
    Detecting COVID-19 from Respiratory Sound Recordings with Transformers
    (2022) Aytekin, Idil; Dalmaz, Onat; Ankishan, Haydar; Saritas, Emine U.; Bagci, Ulas; Cukur, Tolga; Celik, Haydar; Drukker, K; Iftekharuddin, KM
    Auscultation is an established technique in the clinical assessment of symptoms of respiratory disorders. Auscultation is safe and inexpensive, but requires expertise to diagnose a disease using a stethoscope during hospital or office visits. However, some clinical scenarios require continuous monitoring and automated analysis of respiratory sounds to pre-screen and monitor diseases, such as the rapidly spreading COVID-19. Recent studies suggest that audio recordings of bodily sounds captured by mobile devices might carry features helpful to distinguish patients with COVID-19 from healthy controls. Here, we propose a novel deep learning technique to automatically detect COVID-19 patients based on brief audio recordings of their cough and breathing sounds. The proposed technique first extracts spectrogram features of respiratory recordings, and then classifies disease state via a hierarchical vision transformer architecture. Demonstrations are provided on a crowdsourced database of respiratory sounds from COVID-19 patients and healthy controls. The proposed transformer model is compared against alternative methods based on state-of-the-art convolutional and transformer architectures, as well as traditional machine-learning classifiers. Our results indicate that the proposed model performs on par with or better than competing methods. In particular, the proposed technique can distinguish COVID-19 patients from healthy subjects with over 94% AUC.
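    The spectrogram front-end is the one step here that is easy to illustrate in isolation. In the sketch below, scipy's `spectrogram` stands in for the paper's feature pipeline, and the "breathing" signal, sample rate, and STFT parameters are all invented for illustration:

    ```python
    # Hedged sketch: turning a respiratory recording into a spectrogram
    # before it is fed to a classifier. The signal is synthetic.
    import numpy as np
    from scipy.signal import spectrogram

    fs = 4000                                  # assumed sample rate (Hz)
    t = np.arange(fs * 2) / fs                 # two seconds of toy "breathing"
    sig = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t))

    freqs, times, sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=128)
    peak_freq = freqs[np.argmax(sxx.mean(axis=1))]   # dominant frequency (Hz)
    ```

    In the described system, such time-frequency maps (rather than raw waveforms) form the image-like inputs a vision transformer can patch and attend over.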
  • Item
    Blood pressure prediction from speech recordings
    (2020) Ankishan, Haydar
    The aim of this study is to extract new features that show the relationship between speech recordings and blood pressure (BP). For this purpose, a database consisting of /a/ vowels recorded at different BP values under the same room and environment conditions is presented to the literature. Convolutional neural network regression (CNN-R), support vector machine regression (SVM-R), and multiple linear regression (MLR) are used to predict BP from the extracted features. In the experiments, the highest accuracy rates of BP prediction from the /a/ vowel were obtained for systolic BP values with CNN-R: with ReliefF feature selection, accuracies of 89.43% for MLR, 92.15% for SVM-R, and 93.65% for CNN-R were obtained. In terms of root mean square error (RMSE), the lowest value was obtained with CNN-R, at RMSE = 0.2355. In conclusion, the proposed feature vector (FVx) shows a relationship between BP and the human voice and could be used in a system developed to monitor individuals' blood pressure. (C) 2020 Elsevier Ltd. All rights reserved.
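    The simplest of the three regressors compared, MLR, can be sketched on a synthetic feature matrix. The feature values, weights, and systolic targets below are fabricated; they only illustrate the fit/predict/RMSE pattern, not the paper's FVx features or results:

    ```python
    # Hedged sketch: multiple linear regression of (synthetic) systolic BP
    # from a stand-in acoustic feature matrix, with training RMSE as metric.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 10))             # stand-in acoustic features
    w = rng.normal(size=10)                    # hidden "true" weights (toy)
    systolic = 120.0 + X @ w * 5.0 + rng.normal(0.0, 2.0, 150)

    mlr = LinearRegression().fit(X, systolic)
    rmse = float(np.sqrt(np.mean((mlr.predict(X) - systolic) ** 2)))
    ```

    Swapping `LinearRegression` for an SVR or a CNN regressor, as the study does, keeps the same fit/predict structure while changing the model family.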