A de facto standard in computer vision is to solve problems with a high-resolution camera whose placement (i.e., position and orientation) is chosen by human intuition. Nature, by contrast, offers examples in which extremely simple, well-designed visual sensors support diverse and capable dynamic behaviors~\cite{landanimal2012}. Motivated by these examples, we ask two questions: \textit{1)} can very simple visual sensors solve computer vision tasks, and \textit{2)} what role does their design play in their effectiveness? We explore sensors with resolutions as low as $1\times1$, i.e., a single photoreceptor. First, we demonstrate that just a few photoreceptors can suffice for tasks such as visual navigation and dynamical control, with performance comparable to a high-resolution camera. Second, we show that the design of these simple visual sensors plays a crucial role in their ability to provide useful information. To find a well-performing design for a given task, we present a \textit{computational design optimization} algorithm and demonstrate its effectiveness across different tasks and domains. Finally, we conduct a human study showing that, in most cases, the computational approach outperforms manual human design in finding effective visual sensor designs, especially for simple and consequently less intuitive sensors.
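The abstract does not specify the design-optimization algorithm, so as a purely illustrative sketch one can think of it as black-box search over sensor placement parameters, scored by downstream task performance. The toy `task_score` below (a quadratic bump peaking at a hypothetical target placement) and the random-search loop are both assumptions for illustration, not the paper's actual method:

```python
import random

def task_score(design, target=(0.3, 0.7)):
    # Toy stand-in for task performance: peaks when the sensor's
    # (position, orientation) parameters match a hypothetical target.
    # In the paper's setting this would instead be, e.g., navigation
    # success of a policy using the candidate sensor.
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def optimize_design(n_iters=200, seed=0):
    # Black-box random search over normalized placement parameters:
    # sample candidate designs, keep the best-scoring one.
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_iters):
        design = (rng.random(), rng.random())  # (position, orientation)
        score = task_score(design)
        if score > best_score:
            best, best_score = design, score
    return best, best_score
```

Any gradient-free optimizer (evolutionary search, Bayesian optimization) could replace the random-search loop; the essential ingredient is that sensor design is selected by measured task performance rather than human intuition.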