Browsing by Author "Yazici, Adnan"
Item: An Intelligent Multimedia Information System for Multimodal Content Extraction and Querying (2018)
Authors: Yazici, Adnan; Koyuncu, Murat; Yilmaz, Turgay; Sattari, Saeid; Sert, Mustafa; Gulen, Elvan
ORCID: https://orcid.org/0000-0002-7056-4245; ResearcherID: D-3080-2015
Abstract: This paper introduces an intelligent multimedia information system that exploits machine learning and database technologies. The system automatically extracts the semantic content of videos from the visual, auditory, and textual modalities, then stores the extracted content in an appropriate format so that it can be retrieved efficiently by subsequent information requests. The semantic content is extracted from the three modalities separately; the outputs are then fused to increase the accuracy of the object extraction process. The semantic content obtained through this fusion is stored in an intelligent, fuzzy object-oriented database system. To answer user queries efficiently, a multidimensional indexing mechanism is developed that combines the extracted high-level semantic information with low-level video features. The proposed multimedia information system is implemented as a prototype, and its performance in answering content- and concept-based queries is evaluated on news video datasets, considering all three modalities and their fused data. The results show that the system is robust and scalable for large-scale multimedia applications.

Item: Leveraging Multimodal and Feature Selection Approaches to Improve Sleep Apnea Classification Performance (2017)
Authors: Memis, Gokhan; Sert, Mustafa; Yazici, Adnan
ORCID: 0000-0002-7056-4245; ResearcherID: AAB-8673-2019
Abstract: Obstructive sleep apnea (OSA) is a sleep disorder with long-term adverse effects such as cardiovascular diseases.
However, clinical methods such as polysomnograms have high monitoring costs due to long waiting times, so efficient computer-based methods are needed for diagnosing OSA. In this study, we propose a method based on feature selection over fused oxygen saturation and electrocardiogram signals for OSA classification. Specifically, we use the ReliefF feature selection algorithm to obtain robust features from both biological signals, and design three classifiers, namely Naive Bayes (NB), k-nearest neighbors (kNN), and Support Vector Machine (SVM), to test these features. Our experimental results on real clinical samples from the PhysioNet dataset show that the proposed multimodal, ReliefF-based feature selection method improves the average classification accuracy by 4.67% across all test scenarios.

Item: Use of Acoustic and Vibration Sensor Data to Detect Objects in Surveillance Wireless Sensor Networks (2017)
Authors: Kucukbay, Selver Ezgi; Sert, Mustafa; Yazici, Adnan
ORCID: 0000-0002-7056-4245; ResearcherID: AAB-8673-2019
Abstract: Nowadays, stealth sensors are used to detect intruders because of their low power consumption and wide coverage. Using lightweight sensors is very important for detecting events in real time and taking action accordingly. In this paper, we focus on the design and implementation of a wireless surveillance sensor network with acoustic and seismic vibration sensors that detects objects and/or events for area security in real time. To this end, we introduce a new environmental-sensing-based system for event triggering and action. In our system, we first design appropriate hardware as part of a multimedia surveillance sensor node, then apply a suitable classification technique to the acoustic and vibration data collected by the sensors in real time. Depending on the type of acoustic data, the proposed system triggers a camera event as an action for detecting an intruder (human or vehicle).
We use the Mel Frequency Cepstral Coefficients (MFCC) feature extraction method for acoustic sounds and Support Vector Machines (SVM) as the classification method for both acoustic and vibration data. We have also run experiments to test the performance of our classification approach, and we show that the proposed approach is efficient enough to be used in real life.

Item: Visual and Auditory Data Fusion for Energy-Efficient and Improved Object Recognition in Wireless Multimedia Sensor Networks (2019)
Authors: Koyuncu, Murat; Yazici, Adnan; Civelek, Muhsin; Cosar, Ahmet; Sert, Mustafa
ORCID: 0000-0002-7056-4245; ResearcherID: AAB-8673-2019
Abstract: Automatic threat classification without human intervention is a popular research topic in wireless multimedia sensor networks (WMSNs), especially in the context of surveillance applications. This paper explores the effect of fusing audio-visual multimedia and scalar data collected by the sensor nodes of a WMSN for energy-efficient and accurate object detection and classification. To that end, we implemented a wireless multimedia sensor node with video and audio capturing and processing capabilities in addition to traditional scalar sensors. The multimedia sensors are kept in sleep mode to save energy until they are activated by the scalar sensors, which are always active. The object recognition results obtained from the video and audio applications are fused to increase the object recognition performance of the sensor node. Final results are forwarded to the sink in text format, which greatly reduces the amount of data transmitted over the network. Performance tests of the implemented prototype show that fusing audio data with visual data significantly improves the automatic object recognition capability of a sensor node. Since auditory data requires less processing power than visual data, the overhead of processing it is low, and it helps extend the network lifetime of WMSNs.
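The first abstract's multidimensional indexing idea, combining high-level semantic labels with low-level video features for query answering, can be sketched as a concept-filtered, feature-ranked lookup. All identifiers, file names, and feature vectors below are illustrative, not taken from the paper:

```python
import numpy as np

# Toy index: each entry pairs a high-level semantic concept with a video id
# and a low-level feature vector (e.g. a color histogram). Purely illustrative.
index = [
    ("goal",      "news1.mpg", np.array([0.9, 0.1, 0.3])),
    ("goal",      "news2.mpg", np.array([0.2, 0.8, 0.5])),
    ("interview", "news1.mpg", np.array([0.4, 0.4, 0.4])),
]

def query(concept, feature, k=2):
    """Concept-then-feature retrieval: filter entries by semantic label,
    then rank the survivors by Euclidean distance in feature space."""
    hits = [(vid, np.linalg.norm(vec - feature))
            for c, vid, vec in index if c == concept]
    return [vid for vid, _ in sorted(hits, key=lambda t: t[1])[:k]]

print(query("goal", np.array([1.0, 0.0, 0.3])))  # nearest "goal" clips first
```

A real system would back this with a disk-based multidimensional index rather than a linear scan, but the query semantics are the same.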
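The sleep apnea paper pairs ReliefF feature selection with standard classifiers. ReliefF scores each feature by how well it separates a sample's nearest hits (same class) from its nearest misses (other classes). A minimal pure-NumPy sketch on synthetic data, not the authors' implementation, looks like:

```python
import numpy as np

def relieff_scores(X, y, n_neighbors=5, rng=None):
    """Minimal ReliefF: reward features that differ across nearest misses
    and agree across nearest hits. Assumes numeric, comparably scaled features."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    scores = np.zeros(d)
    classes, counts = np.unique(y, return_counts=True)
    priors = dict(zip(classes, counts / n))
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    for i in rng.permutation(n):
        diffs = np.abs(X - X[i]) / span          # per-feature normalized distance
        dist = diffs.sum(axis=1)
        for c in classes:
            mask = (y == c)
            mask[i] = False                      # never pick the sample itself
            order = np.argsort(dist[mask])[:n_neighbors]
            mean_diff = diffs[mask][order].mean(axis=0)
            if c == y[i]:
                scores -= mean_diff              # near hits: small diffs are good
            else:
                w = priors[c] / (1.0 - priors[y[i]])
                scores += w * mean_diff          # near misses: large diffs are good
    return scores / n

# Synthetic check: feature 0 carries the class signal, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100)])
scores = relieff_scores(X, y, rng=1)
print(scores[0] > scores[1])
```

The top-scoring features would then be fed to NB, kNN, or SVM classifiers as in the paper; any off-the-shelf implementations of those would do.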
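MFCC extraction, used in the acoustic surveillance paper, follows a fixed pipeline: framing, windowing, power spectrum, mel filterbank, log compression, and a DCT. A self-contained NumPy sketch is below; the frame length, hop size, and filter counts are common defaults, not parameters reported in the paper:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising edge
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling edge
    return fb

def dct2(x, n_out):
    """DCT-II along the last axis, keeping the first n_out coefficients."""
    N = x.shape[-1]
    basis = np.cos(np.pi * np.outer(np.arange(n_out), np.arange(N) + 0.5) / N)
    return x @ basis.T

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_filters=26, n_ceps=13):
    """Frame -> Hamming window -> power spectrum -> mel filterbank -> log -> DCT."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / frame_len
    fb = mel_filterbank(n_filters, frame_len, sr)
    return dct2(np.log(power @ fb.T + 1e-10), n_ceps)

# One second of a 440 Hz tone -> a (frames x coefficients) feature matrix.
sr = 16000
tone = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)
feats = mfcc(tone, sr=sr)
print(feats.shape)  # (98, 13)
```

Each row of the result is one frame's 13-dimensional MFCC vector, which is what would be handed to the SVM classifier.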
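Decision-level fusion of per-modality classifier outputs, central to both the first and the last abstracts, can be as simple as a weighted sum of class posteriors. The class names, probabilities, and weights below are invented for illustration; the papers do not specify this exact scheme:

```python
import numpy as np

# Hypothetical per-modality posteriors over classes [human, vehicle, none].
audio_probs  = np.array([0.55, 0.35, 0.10])   # from the acoustic classifier
visual_probs = np.array([0.30, 0.60, 0.10])   # from the video classifier

def late_fusion(p_audio, p_visual, w_audio=0.4, w_visual=0.6):
    """Weighted-sum (decision-level) fusion of classifier posteriors.
    Weights are illustrative, e.g. derived from per-modality validation accuracy."""
    fused = w_audio * p_audio + w_visual * p_visual
    return fused / fused.sum()

labels = ["human", "vehicle", "none"]
fused = late_fusion(audio_probs, visual_probs)
print(labels[int(np.argmax(fused))])  # prints "vehicle"
```

Only the final text label needs to travel to the sink, which is why the last abstract reports such a large reduction in transmitted data compared to streaming raw audio or video.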