25021 – Images and LiDAR data analysis for vine pruning

Description:

The project focuses on developing a tool for segmenting RGB images and co-registering them with LiDAR data, applied to automated vine pruning in agriculture. The RGB images capture a portion of the vineyard, including the vine cordon, shoots, and buds. These images will be processed with deep learning algorithms that identify, pixel by pixel, the various parts of the plant, which makes it possible to determine precise cutting points. The LiDAR system enables 3D reconstruction of the depicted scene and determination of the spatial coordinates of each plant part. The goal is therefore to combine the segmentations obtained from the RGB images with the spatial coordinates derived from the LiDAR data. Combining this information is essential for automatic vine pruning. Part of the work will take place at a farm near Padua (a personal car is required).
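As a rough illustration of how the two data sources could be combined, the sketch below projects LiDAR points into the camera frame and reads the segmentation class at each projected pixel, so that every labelled plant part also receives 3D coordinates. It is only a minimal sketch: the function name, the calibration matrices K and T_cam_lidar, and the assumption of an already calibrated RGB camera are hypothetical and would be replaced by the actual calibration of the acquisition rig.

import numpy as np

def label_lidar_points(points_xyz, seg_mask, K, T_cam_lidar):
    """Assign a segmentation class to each LiDAR point (illustrative sketch).

    points_xyz  : (N, 3) LiDAR points in the LiDAR reference frame
    seg_mask    : (H, W) per-pixel class ids from the RGB segmentation
    K           : (3, 3) camera intrinsic matrix (assumed known from calibration)
    T_cam_lidar : (4, 4) rigid transform from the LiDAR frame to the camera frame
    Returns an (N,) array of class ids; -1 marks points that do not fall in the image.
    """
    n = points_xyz.shape[0]
    labels = np.full(n, -1, dtype=int)

    # Move the points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera, then apply the pinhole projection.
    in_front = pts_cam[:, 2] > 0
    uvw = (K @ pts_cam[in_front].T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)

    # Discard projections that land outside the image.
    h, w = seg_mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    # Read the segmentation class at each valid pixel (u = column, v = row).
    idx = np.flatnonzero(in_front)[inside]
    labels[idx] = seg_mask[uv[inside, 1], uv[inside, 0]]
    return labels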

Why This System is Needed

Deep learning-based vision algorithms can process images of vineyard scenes, identifying the various plant parts pixel by pixel and, from them, the best cut points for winter pruning. To actually perform a cut, the spatial coordinates of the identified pixels are needed, and LiDAR systems are currently the most suitable tool for reconstructing objects in 3D and thus determining those coordinates. However, it is difficult to extract segmentations of the various plant parts from LiDAR data, just as it is difficult to extract spatial information from RGB images. To leverage the strengths of both techniques, RGB images and LiDAR data, a tool is needed to align the two acquisitions.

How We Plan to Achieve It

To meet these objectives, the project will follow a structured three-phase approach:

1. Literature Analysis and Research

The first phase consists of studying methods and libraries described in the literature, as well as analyzing images and data already available or to be collected.

2. Tool Implementation and Data Acquisition

This phase involves developing the tool: techniques for image segmentation and for co-registration of images and LiDAR data will be selected and then implemented in Python. Part of the tool development and data acquisition will be carried out at a farm near Padua.
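As a purely indicative sketch of what the segmentation part of the tool could look like in Python, the snippet below runs inference with a torchvision DeepLabV3 network whose head has been resized to the vine classes. The class list, the weights file, and the omission of input normalization are assumptions made for the example, not the project's actual design; the real architecture and training pipeline will be chosen during the literature phase.

import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Hypothetical class list used only for this sketch.
CLASSES = ["background", "cordon", "shoot", "bud"]

def load_model(weights_path, num_classes=len(CLASSES)):
    """Load a DeepLabV3 network with a head sized for the vine classes.

    Assumes the network has already been fine-tuned on annotated vineyard
    images and its weights saved to `weights_path` (hypothetical file).
    """
    model = torchvision.models.segmentation.deeplabv3_resnet50(
        weights=None, num_classes=num_classes
    )
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    return model.eval()

@torch.no_grad()
def segment(model, image_path):
    """Return an (H, W) array of per-pixel class ids for one RGB image."""
    # In practice the same normalization used during training would be applied here.
    img = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    logits = model(img)["out"]          # shape (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0).numpy()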

3. Testing, Evaluation, and Documentation

The tool will undergo rigorous testing to validate the accuracy and robustness of both image segmentation and image–LiDAR co-registration. Results will be documented and compared with benchmarks from the literature. Technical documentation will cover the system architecture, implementation, and usage.
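One common way to quantify segmentation accuracy against manually annotated images is per-class intersection over union (IoU); the short sketch below, again only indicative, computes it with NumPy. Co-registration accuracy could be assessed analogously, for example via the reprojection error of known reference points, which is not shown here.

import numpy as np

def per_class_iou(pred, target, num_classes):
    """Intersection over union for each class of one image.

    pred, target : (H, W) integer class maps (model output vs. manual annotation)
    Returns an array of length num_classes; classes absent from both maps are NaN.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious

# Mean IoU over a set of (prediction, annotation) pairs, ignoring absent classes:
# miou = np.nanmean(np.stack([per_class_iou(p, t, 4) for p, t in pairs]), axis=0)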