Sensitive-Video Analysis

Date
07/07/2017
Time
14:00
Location
Auditório do IC3
Speaker
Dr. Daniel Henriques Moreira
Description

Sensitive videos that may be inappropriate for some audiences (e.g., pornography and violence, especially for minors) are constantly being shared over the Internet. Employing humans to filter them is daunting: the huge amount of data and the tediousness of the task call for computer-aided sensitive-video analysis. In this talk, we will discuss how to tackle this problem in two ways. In the first (sensitive-video classification), we explore methods to decide whether or not a video contains sensitive material. In the second (sensitive-content localization), we explore ways to find the moments at which a video starts and ceases to display sensitive content. For both cases, we will explain in detail how we have designed and developed effective and efficient methods with a low memory footprint, short runtime, and suitability for deployment on mobile devices. We start with a novel Bag-of-Visual-Words-based pipeline for efficient motion-aware sensitive-video classification (a sketch of this style of pipeline follows below). Then, we move to a novel high-level multimodal fusion pipeline for sensitive-content localization. Finally, we introduce a novel spatiotemporal video interest-point detector and content descriptor, which we call Temporal Robust Features (TRoF).
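For readers unfamiliar with the Bag-of-Visual-Words approach mentioned above, here is a minimal sketch of that general style of classification pipeline, not the speaker's actual method: local descriptors are clustered into a visual vocabulary, each video is encoded as a histogram of visual words, and a classifier is trained on those histograms. All names, parameters, and the random stand-in descriptors are illustrative assumptions.

# Minimal Bag-of-Visual-Words classification sketch (illustrative only).
# The random descriptors below stand in for real motion-aware local
# features, such as those a spatiotemporal detector like TRoF would yield.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def extract_descriptors(num_points=200, dim=64):
    # Stand-in for local descriptor extraction from one video.
    return rng.normal(size=(num_points, dim))

videos = [extract_descriptors() for _ in range(20)]
labels = rng.integers(0, 2, size=20)  # 1 = sensitive, 0 = not sensitive

# 1. Build the visual vocabulary by clustering all training descriptors.
vocab_size = 32
kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(videos))

# 2. Encode each video as a normalized histogram of visual-word counts.
def bovw_histogram(descriptors):
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(v) for v in videos])

# 3. Train a linear classifier on the mid-level BoVW representation.
clf = LinearSVC().fit(X, labels)
print(clf.predict(X[:5]))

In a real pipeline, the mid-level histogram representation is what keeps the memory footprint low: each video, regardless of length, is reduced to a single fixed-size vector before classification.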

About the Speaker

Daniel Moreira received the B.Sc. degree from the Federal University of Pará, Belém, Brazil, in 2006, the M.Sc. degree from the Federal University of Pernambuco, Recife, Brazil, in 2008, and the Ph.D. degree from the University of Campinas, Brazil, in 2016, all in Computer Science. After working for five years as a systems analyst at the Brazilian government-owned Federal Data Processing Service (SERPRO), he became a full-time computer science researcher whose interests include Computer Vision, Machine Learning, and Digital Image and Video Forensics. He is currently a postdoctoral research scholar at the Computer Science and Engineering Department of the University of Notre Dame, USA, under the supervision of Prof. Dr. Walter Scheirer, where he investigates topics in Media Forensics.