3D Scenery Construction of Agricultural Environments for Robotics Awareness
Aristotelis Christos Tagarakis,
Damianos Kalaitzidis,
Evangelia Filippou,
Lefteris Benos and
Dionysis Bochtis
Affiliation (all authors): Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology Hellas (CERTH)
A chapter in Information and Communication Technologies for Agriculture—Theme III: Decision, 2022, pp. 125–142, Springer
Abstract:
Depth cameras have gained popularity in agricultural applications in recent years, mainly for the three-dimensional (3D) reconstruction of objects in indoor and outdoor scenes. Using such cameras to build 3D models of complex natural structures, such as trees and other plants, for simulation purposes remains a considerable challenge. Agricultural environments are notably complex, so the proper setup and deployment of these technologies is essential for obtaining usable data. The depth information collected with such cameras varies with object structure and sensing conditions owing to the uncertainty of the outdoor environment. A methodology combining color and depth images makes it possible to extract geometrical characteristics from point clouds of the targeted objects. This chapter reviews the different technologies used by depth cameras and presents applications in indoor and outdoor environments through indicative agricultural scenarios. To this end, a 3D reconstruction of trees was carried out, producing point clouds from Red Green Blue Depth (RGB-D) images acquired under real field conditions. The tree point cloud samples were collected using an unmanned ground vehicle (UGV) and imported into Gazebo to visualize a simulation of the environment. This simulation technique can be used for testing and evaluating the navigation of robotic systems. By further analyzing the resulting 3D point clouds, various geometrical measurements of the simulated samples, such as the volume and height of tree canopies, can be calculated. Possible weaknesses of the procedure are mainly attributable to the camera's limitations and the sampling parameters.
Nevertheless, the results show that a suitable simulation environment can be established for several agricultural applications employing automated unmanned robotic platforms.
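The canopy measurements described in the abstract (height and volume derived from a 3D point cloud) can be illustrated with a minimal sketch. The function name, the voxel-occupancy volume estimate, and the NumPy-array input format are illustrative assumptions, not the chapter's actual implementation:

```python
import numpy as np

def canopy_metrics(points, voxel=0.05):
    """Estimate canopy height and an occupancy-based volume from an
    N x 3 point cloud in meters. The voxel size is a tunable assumption;
    each occupied voxel contributes voxel**3 to the volume estimate."""
    z = points[:, 2]
    height = float(z.max() - z.min())
    # Snap each point to its nearest voxel center and count
    # distinct occupied voxels.
    idx = np.round(points / voxel).astype(np.int64)
    occupied = np.unique(idx, axis=0).shape[0]
    volume = occupied * voxel ** 3
    return height, volume
```

A voxel-occupancy estimate is robust to the uneven point density typical of RGB-D scans; a convex-hull volume would instead overestimate sparse, irregular canopies.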
Keywords: Depth cameras; Unmanned ground vehicles; Scene reconstruction; 3D point cloud; Agricultural robots; Situation awareness
Date: 2022
Citations: 2 (in EconPapers)
Persistent link: https://EconPapers.repec.org/RePEc:spr:spochp:978-3-030-84152-2_6
Ordering information: This item can be ordered from
http://www.springer.com/9783030841522
DOI: 10.1007/978-3-030-84152-2_6
Series: Springer Optimization and Its Applications (Springer)
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.