Thu 2 March 2017 | 6:00 pm - 9:00 pm
Google Campus Tel Aviv - Electra Tower, Yigal Alon 98, Tel Aviv, FREE

*** in Hebrew ***

We present a general approach to video understanding. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics — natural language captions or other labels — depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video.

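To make the idea concrete, here is a minimal sketch of semantic transfer by clip matching. It is not the speakers' actual implementation: it assumes each clip is already represented by a fixed-length feature vector (e.g., pooled CNN features), and all names (`transfer_semantics`, `test_clips`, `ref_labels`) are illustrative:

```python
import numpy as np

def transfer_semantics(test_clips, ref_clips, ref_labels):
    """Assign each test clip the semantics of its most similar reference clip.

    test_clips: (n, d) array of test-clip feature vectors.
    ref_clips:  (m, d) array of reference-clip feature vectors.
    ref_labels: list of m semantic labels (e.g., captions).
    """
    # Cosine similarity between every test clip and every reference clip.
    t = test_clips / np.linalg.norm(test_clips, axis=1, keepdims=True)
    r = ref_clips / np.linalg.norm(ref_clips, axis=1, keepdims=True)
    sim = t @ r.T                      # (n, m) similarity matrix
    best = sim.argmax(axis=1)          # nearest reference clip per test clip
    return [ref_labels[j] for j in best]
```

Matching each clip independently like this satisfies only requirement (a) below; the talk's second requirement, temporal coherence, needs the matches to be chosen jointly.
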
We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b) taken together, the semantics of the selected reference clips are consistent and maintain temporal coherence. We use our method for video captioning on the LSMDC’16 benchmark and video summarization on the SumMe benchmark. In both cases,

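One natural way to enforce both requirements jointly (again a sketch under assumptions, not necessarily the speakers' method) is a Viterbi-style dynamic program: pick one reference clip per test clip so that the total of appearance similarity and consecutive-semantics consistency is maximized:

```python
import numpy as np

def coherent_matching(sim, trans):
    """Viterbi-style selection of one reference clip per test clip.

    sim:   (n, m) appearance similarity between test and reference
           clips (requirement (a)).
    trans: (m, m) consistency score between the semantics of
           consecutive reference clips (requirement (b)).
    Returns the highest-scoring sequence of reference indices.
    """
    n, m = sim.shape
    score = sim[0].copy()              # best score ending at each ref clip
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        # total[j, k]: best sequence ending in j, then stepping to k.
        total = score[:, None] + trans + sim[i][None, :]
        back[i] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Trace back the best path.
    path = [int(score.argmax())]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```
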
Convolutional Neural Networks – Opening the black box

In this lecture, we will share our journey from theory to practice, examining some of the challenges we faced and the techniques and best practices we’ve developed. To better understand the network, we used several debugging and visualization tools that reveal what each neuron “sees” and, thus, what computations the network is actually performing.

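As a flavor of this kind of visualization tooling (a minimal sketch, since the talk does not specify the speakers' stack), the snippet below uses a PyTorch forward hook to capture the feature maps a convolutional layer produces for an input image; the model choice and layer index are illustrative:

```python
import torch
import torchvision.models as models

# Load a pretrained CNN; the weights argument follows recent torchvision.
model = models.vgg16(weights="IMAGENET1K_V1").eval()

# Capture the activations of one convolutional layer with a forward hook.
activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.features[10].register_forward_hook(save_activation("conv3_1"))

# Run an image (here random, for a self-contained example) through the
# network; each channel of the captured tensor is one neuron's 2D
# response map, i.e., what that filter "sees" in the input.
image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(image)

fmap = activations["conv3_1"][0]   # (channels, H, W)
print(fmap.shape)                  # e.g. torch.Size([256, 56, 56])
```
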
Check Out The Event Page On Facebook