
BMVA Workshop on Real-time 3D Scene Understanding in the Year 2020 (17th June 2015):

I wrote a little report on the workshop for the BMVA Newsletter, available here.

I organized/co-chaired a one-day workshop with Andrew Davison, in collaboration with the British Machine Vision Association. The following is taken from the official call for participation.

Confirmed Speakers

1. Maurice Fallon, Assistant Professor at Edinburgh University / DARPA Robotics Challenge
2. David Moloney, CTO at Movidius
3. Doug Watt, Multimedia Strategy Manager at Imagination Technologies
4. Gerhard Reitmayr, Principal Engineer at Qualcomm
5. Jamie Shotton, Principal Researcher at Microsoft Research
6. Mike Aldred, Electronics Lead at Dyson
7. Simon Knowles, CTO at XMOS
8. Simon Lynen, Researcher at ETH-Zurich and Google Project Tango
9. Thomas Whelan, Dyson Research Fellow at Imperial College London
10. Renato Salas-Moreno, Co-founder at Surreal Vision Ltd.
11. Zeeshan Zia, Research Associate at Imperial College London

Registration:

Book online at www.bmva.org/meetings
10 GBP for BMVA Members
30 GBP for Non-Members
Fees include lunch

Meeting Overview

Real-time 3D scene understanding is the capability that will enable the applications long expected from sensing-equipped AI --- robotic devices and systems that can interact fully and safely with normal human environments to perform widely useful tasks. A key reason such applications have not yet emerged is that the robust, real-time perception of the complex everyday world they require has simply been too difficult to achieve, both algorithmically and computationally. This is especially true with commodity-level sensing and computing hardware, which is where the potential for real, world-changing impact lies.

In the PAMELA research project (a collaboration between Manchester, Edinburgh and Imperial) we are asking questions about the way that computer vision algorithms will co-evolve with computer architecture and programming tools in the coming years. We predict that mobile-class hardware in the 2020s will be increasingly massively parallel, but also heterogeneous and specialised, and that power consumption measures will be critical. Computer vision is likely to be one of the key application domains driving the design of such hardware; but what are the essential algorithms in this fast-moving field which we should aim to optimise in architecture choices, or even to design specific processors for? And how can we give application programmers a usable and performance-portable interface to what might be complicated architecture under the hood?

In this meeting we hope to seek the opinions of speakers from some of the most interesting companies and academic labs with an interest in algorithms, processors and applications for real-time vision. Our plan is for an extremely high-quality meeting with a programme of 100% invited talks from industrial and academic leaders in the domain.

Areas of Interest:

  • High performance, low-power 3D scene understanding
  • Cross-layer (application, programming language, compiler, runtime, architecture) optimization for vision
  • Benchmarking of vision pipelines for accuracy, performance, and power
  • Domain-Specific Languages for vision
  • Vision Processing Units (VPUs)