The goal of the project “The design and implementation of an image processing system for mobile applications using FPGA technology” was to develop a fully integrated system capable of performing image processing operations on a continuous stream of input image data using the System on Chip (SoC) approach. In this approach, the complete system is contained within a single integrated circuit. In the presented project, the base devices used for the implementation were Field Programmable Gate Arrays (FPGAs). A great advantage of using FPGAs for system prototyping is their reprogrammability: consecutive solutions to the problem at hand can be quickly tested for correctness, and their performance can be easily evaluated in many different configurations.
The target application of the described system was mobile robot navigation support – the system was tasked with estimating the epipolar geometry between the registered images, which allows the computation of the relative rotation and the direction of translation, and with detecting moving objects within the field of view of the camera.
The application was decomposed into a software part and a hardware part, so that the final system is a hybrid, software-hardware solution. The software is executed on a multi-core soft-processor architecture, while the tasks that can benefit from parallel hardware execution were implemented as a set of dedicated coprocessors.
In particular, the following objectives were accomplished:
the performance of algorithms for image filtering and feature detection (edges, interest points) was tested, and a subset of algorithms that offered sufficient processing speed and accuracy while enabling easy translation to programmable hardware was selected,
pairs of point image feature detectors and descriptors were tested for accuracy and speed, and a hardware-friendly subset of high-performance detector-descriptor pairs was selected for implementation,
the selected subset of algorithms was translated into programmable hardware and implemented as a set of flexible, high-performance dedicated digital stream coprocessors enabling efficient image filtering as well as point feature extraction, description and matching,
a subsystem for the robust estimation of the epipolar geometry of the observed scene from the established matches was implemented; the subsystem is based on the RANSAC framework and uses the multi-core soft-processor for RANSAC hypothesis generation and a dedicated hardware coprocessor for hypothesis testing,
based on the established epipolar geometry, the system is capable of computing the relative rotation and the direction of translation of the moving camera,
additionally, the system is capable of performing the background subtraction operation to detect, label and track the moving objects in the field of view of the camera.
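To make the filtering step above concrete, the sketch below shows a 3×3 Sobel gradient filter, a typical example of the window operations that map well onto stream coprocessors. This is a pure-Python illustration for readability, not the hardware implementation; the function name and image format (a 2-D list of grayscale pixels) are assumptions.

```python
# 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| for a 2-D list of pixels.
    Border pixels are left at zero, as is common in windowed filtering."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out
```

In a hardware stream implementation the same 3×3 window would typically be fed from line buffers, so one output pixel is produced per clock cycle instead of revisiting the whole image.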
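The descriptor matching step can be sketched as a brute-force comparison of binary descriptors (e.g. BRIEF-like bit strings) by Hamming distance, with a Lowe-style ratio test to reject ambiguous matches. All names and the descriptor representation (plain Python integers) are illustrative assumptions, not the project's actual interface.

```python
def hamming(a, b):
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match(desc_a, desc_b, max_ratio=0.8):
    """Return (i, j) index pairs where the best match in desc_b is clearly
    better than the second-best one (ratio test on Hamming distances)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((hamming(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] <= max_ratio * second[0]:
            matches.append((i, best[1]))
    return matches
```

The XOR-and-popcount core of the Hamming distance is one reason binary descriptors are considered hardware-friendly: it reduces to a few logic levels per descriptor pair.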
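The RANSAC-based estimation subsystem follows the classic generate-and-test loop, which the project splits between the soft-processor (hypothesis generation) and a hardware coprocessor (hypothesis testing). The minimal sketch below fits a 2-D line as a stand-in for the much heavier fundamental-matrix estimation; the function and parameter names are assumptions.

```python
import random

def ransac(points, n_iters=200, threshold=0.5, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal sample and keep
    the hypothesis with the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        # Hypothesis generation: fit a line y = a*x + b to a minimal sample.
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, no unique line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Hypothesis testing: count points within threshold of the line.
        inliers = [(x, y) for x, y in points if abs(a * x + b - y) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

The testing step is the natural candidate for hardware acceleration: it evaluates the same cheap residual for every correspondence and every hypothesis, which parallelizes trivially.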
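Finally, the moving-object detection step can be illustrated with the simplest form of background subtraction: an exponential running-average background model and a per-pixel threshold. This is a minimal sketch of the general technique, not the project's specific model; parameter names are assumptions.

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running average: bg <- (1 - alpha) * bg + alpha * frame.
    Slowly absorbs lighting changes while ignoring brief motion."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=20):
    """1 where the current frame deviates from the background model, else 0."""
    return [[1 if abs(f - b) > threshold else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

Both operations are purely per-pixel, so a hardware implementation can process the incoming pixel stream at full frame rate; the labeling and tracking stages then operate on the resulting binary mask.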
The results of measurements on physical devices confirm that programmable logic is an attractive platform for the implementation of image processing architectures. A complete system-on-chip solution offering high performance and low power consumption is especially desirable in embedded systems such as small, lightweight mobile robots, driver assistance systems and smart cameras for vision-based surveillance.
If you’re interested in details, feel free to contact us or view the list of our publications. We also intend to release parts of the source code on GitHub — please stay tuned for more information.