Multi Level Lecture Video Classification Using Text Content

Date

2020

Abstract

Recent interest in e-learning and distance-education services has significantly increased the amount of lecture video data in public and institutional repositories. In their current form, users can browse these collections using metadata-based search queries such as course name, description, instructor, and syllabus. However, lecture video entries have rich content, including image, text, and speech, which cannot easily be represented by metadata annotations. Therefore, there is an emerging need for tools that automatically annotate lecture videos to facilitate more targeted search. A simple way to realize this is to classify lectures into known categories. With this objective, this paper presents a method for classifying videos based on text content extracted at several semantic levels. The method applies a Bidirectional Long Short-Term Memory (Bi-LSTM) network to word-embedding vectors of text content extracted by Optical Character Recognition (OCR). This approach can outperform conventional machine learning models and provides a useful solution for automatic lecture video annotation to support online education.
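The classifier described in the abstract — a bidirectional LSTM over word-embedding vectors of OCR-extracted text, ending in a softmax over lecture categories — can be sketched as follows. This is a minimal illustration in PyTorch; the vocabulary size, embedding and hidden dimensions, and number of classes are assumptions for demonstration, not values from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Bi-LSTM text classifier in the spirit of the abstract: word-embedding
    vectors of OCR-extracted tokens feed a bidirectional LSTM, and the
    concatenated final hidden states drive a linear layer over categories.
    All dimensions below are illustrative assumptions, not the paper's."""

    def __init__(self, vocab_size=10000, embed_dim=100,
                 hidden_dim=128, num_classes=8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Forward and backward final states are concatenated: 2 * hidden_dim.
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)      # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=1) # (batch, 2 * hidden_dim)
        return self.fc(h)                      # unnormalized class scores

# Batch of 4 token-ID sequences, 50 tokens each (random stand-in for OCR text).
model = BiLSTMClassifier()
tokens = torch.randint(0, 10000, (4, 50))
logits = model(tokens)
print(logits.shape)  # torch.Size([4, 8])
```

In practice the embedding layer could be initialized from pretrained word vectors, and the logits would be passed through a softmax (or directly to `nn.CrossEntropyLoss`) during training on the labeled lecture categories.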

Keywords

Lecture video classification, Content-based video retrieval, Long Short-Term Memory (LSTM)
