Interpreting Human Behaviour in Video Using FSAs and Object
Context
NSF Award: IIS - 0534837
PI: D.A. Forsyth
What we are doing:
We study methods and features for building models of human activities that
have not themselves been observed.
For example, one might wish to find video showing a
person visiting an ATM without using
any previous example of that phenomenon. As another example,
one might wish to find an activity
seen from an aspect not represented in the training data.
Recent progress:
- Search:
- We have demonstrated methods to search for activities that have never been seen before, using a simple finite-state model
- Transfer:
- We have demonstrated methods to learn models of activity in one domain and recognize those activities in a different domain
- For American Sign Language, we can learn words in one view from an avatar and spot them in a different view of a real person
- For large body movements (walking, running, and so on), we have shown that a model learned in one view (say, from the side) can be used to spot the activity in another view (say, from above)
- Representation:
- We have demonstrated an appearance-based representation for activity that is very good at classifying activities on a number of well-known datasets, and can also learn models of new activities from few examples.
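The finite-state search mentioned above can be illustrated with a small sketch. This is not the project's implementation: the states, primitive-action labels, and transition table below are invented for illustration. The idea is that a composite activity (say, visiting an ATM) can be spotted as a path through a hand-built automaton over per-frame primitive-action labels, so no video example of the composite activity itself is ever needed.

```python
# Sketch: spotting a composite activity with a hand-built finite state
# automaton over primitive-action labels. All names here are hypothetical.

TRANSITIONS = {
    # current state -> {observed primitive label -> next state}
    "start":       {"approach": "near"},
    "near":        {"approach": "near", "stand": "at_machine"},
    "at_machine":  {"stand": "at_machine", "reach": "interacting"},
    "interacting": {"reach": "interacting", "leave": "done"},
    "done":        {},
}
ACCEPT = "done"

def spot(labels, transitions=TRANSITIONS, accept=ACCEPT):
    """Return True if the label sequence drives the FSA to the accept state."""
    state = "start"
    for lab in labels:
        nxt = transitions[state].get(lab)
        if nxt is None:
            # Unexpected primitive: reset, but let this label start a new attempt.
            nxt = transitions["start"].get(lab)
            state = nxt if nxt is not None else "start"
        else:
            state = nxt
        if state == accept:
            return True
    return False
```

In practice the labels would come from per-frame classifiers, so a robust version would score paths rather than match them exactly; the sketch only shows the structural idea.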
- Brief description
- Publications
- Datasets
- We are checking model release forms, and will release a large activity dataset shortly.
- We have also collected video "in the wild" at Siebel Hall (which we can't release).
- We have also collected and used video from YouTube
General materials relating to activity and activity
recognition:
I have had an interest in human motion --- both animation and tracking
--- for some time. Together with a number of colleagues, I recently wrote
an extensive review of tracking and animation.
It is worth reading, if only for the huge
bibliography (close to 500 papers).
I plan to write a similar review dealing with human activity
recognition and representation; it is still in
preparation. Part of
the preparation was a tutorial at CVPR in 2006, given together with
Deva Ramanan and Cristian Sminchisescu.
I have written a series of papers on animation, tracking and activity
recognition.
- Forsyth papers on animation, tracking and activity
recognition.