Interpreting Human Behaviour in Video Using FSAs and Object Context

NSF Award: IIS-0534837

PI: D.A. Forsyth

What we are doing:

We study methods and features for building models of human activities that have not, themselves, been observed.
For example, one might wish to find video showing a person visiting an ATM without using any previous example
of that activity.  As another example, one might wish to find an example of an activity seen from an aspect
not represented in the training data.


Recent progress:


General materials relating to activity and activity recognition:

I have had a long-standing interest in human motion, both tracking and animation.  Together with a number of
colleagues, I recently wrote an extensive review of tracking and animation.  It is worth reading, if only for the
large bibliography (close to 500 papers).  I plan to write a similar review dealing with human activity recognition
and representation; this is still in preparation.  As part of that preparation, I gave a tutorial at CVPR 2006,
together with Deva Ramanan and Cristian Sminchisescu.
I have also written a series of papers on animation, tracking and activity recognition.