Modeling the environment with egocentric vision systems
Intelligent systems, such as robots or wearable devices, are increasingly present in our everyday life. These systems interact with the environment, so they need suitable models of their surroundings. Depending on the tasks they have to perform, the information required in those models changes: from highly detailed 3D models for autonomous navigation systems to semantic models including information that is important for the user. These models are created from the sensory data provided by the system (Fig. 1). Cameras are included in most intelligent systems thanks to their small size, low price and the great amount of information they provide. This thesis studies and develops new methods to create models of the environment with different levels of precision and semantic information. There are two common key points in the approaches presented:
- The use of egocentric vision systems. All the vision systems and image sequences used in this thesis are characterized by a first-person (egocentric) point of view.
- The use of omnidirectional vision. These vision systems provide much more information than conventional cameras thanks to their wide field of view.
This thesis studies how computer vision can be used to create different models of the environment. To test our proposals, different cameras have been used, on both robotic and wearable platforms.
Keywords: Computer Vision, Omnidirectional camera, Egocentric vision
Copyright (c) 2015 Alejandro Rituerto
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.