Learning to Recognise Dynamic Visual Content from Broadcast Footage is a four-year project funded by the EPSRC. It brings together the Centre for Vision Speech and Signal Processing at the University of Surrey, the Visual Geometry Group at the University of Oxford and the Computer Vision Group at the University of Leeds to tackle the problem of automatically learning to recognise dynamic activity in broadcast footage, with demonstration activities in both Sign Language and more general actions and activity. The task is to use linguistic annotation provided by subtitle text and scripts as weak supervision in the learning process. The project started in late 2011 and will run until 2015. As we progress, this site will be updated with results and publications, so check back soon. For more details, contact the lead academics at each site.