Computational photography is rapidly breaking out of the laboratory and into the wild, from panoramas and high-dynamic-range imaging in smartphones to light-field cameras and wearable devices. However, these algorithms typically run on a CPU or GPU and consume large amounts of energy, which is at a premium on mobile devices.
Conversely, fixed-function hardware such as a conventional image signal processor (ISP) consumes relatively little energy, but its lack of programmability makes it almost useless for computational photography and computer vision applications. Unfortunately, developing custom hardware - and the software that drives it - is extremely time-consuming and expensive.
In this project, we’re building a programmable camera to explore answers to these challenges.
For the hardware, we’ve coupled a Xilinx Zynq FPGA SoC with an NVIDIA Tegra X1. The Zynq provides reconfigurable logic fabric for accelerating custom applications, while the Tegra provides a convenient platform for CPU- and GPU-based image processing. The Tegra also drives a responsive user interface via an HDMI touchscreen. On the front end, we have a Micro Four Thirds lens mount and a 1080p image sensor, with plans to upgrade to a 10MP sensor in the future.