
Easily Segment Your Sensor Fusion Data in 5 Steps with an Automated 3D Segmentation Model, Minimizing Workload and Cost

6 min read

Admon W.


Meet Nate, an autonomous robot navigating a bustling laboratory filled with researchers, tables, chairs, and electronic devices. Tasked with transporting items and collaborating with humans, Nate's precision and adaptability are crucial. The secret behind Nate's abilities? The cutting-edge 2D & 3D fusion segmentation technology.

Nate, an autonomous robot (a fictional character)

By merging 2D images from cameras and 3D point cloud data from LiDAR sensors, Nate can accurately identify and locate objects, generating precise environmental maps and real-time obstacle updates for effective path planning and obstacle avoidance. This technology is the backbone of modern applications like self-driving cars and drones, where precise detection and recognition of objects are essential for safe navigation.

Normal Annotation vs. Data Segmentation

So, what are the differences between normal annotation and segmentation? Generally, data annotation is the process of adding labels or tags to objects or regions within data, for purposes such as object recognition, classification, or describing the content. It can be applied to different modalities, including images and point clouds. Annotations can be made manually by humans or automatically by algorithms, and in machine learning they typically serve as ground truth data for training and evaluating models. Common annotation types include bounding boxes, points, polygons, and semantic labels.

Image Annotation

Data segmentation is likewise a general concept that applies to various modalities, including images and point clouds. Its main goal is to partition the data into meaningful groups or regions, which can then be used for further analysis, object recognition, or other tasks; the techniques employed depend on the modality and the characteristics of the data. There are two main types: semantic segmentation, where each pixel or point is assigned a label representing the class it belongs to, and instance segmentation, which additionally differentiates between individual objects of the same class.
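To make the distinction concrete, here is a minimal sketch (in Python with NumPy, using made-up class IDs) of how the two label types are commonly stored for a point cloud: semantic segmentation keeps one class ID per point, while instance segmentation adds a separate ID for each individual object.

    import numpy as np

    # Toy point cloud: six points with (x, y, z) coordinates.
    points = np.random.rand(6, 3)

    # Semantic segmentation: one class ID per point
    # (e.g., 0 = ground, 1 = car, 2 = pedestrian).
    semantic_labels = np.array([0, 1, 1, 1, 2, 0])

    # Instance segmentation: a distinct ID for every object, so the two
    # cars below remain separable even though they share the class "car".
    instance_ids = np.array([0, 1, 1, 2, 3, 0])

    # Points belonging to the second car only:
    second_car = points[(semantic_labels == 1) & (instance_ids == 2)]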

3D point cloud segmentation

While both data annotation and segmentation involve labeling or partitioning data, their applications and use cases differ. Annotations are typically used to provide ground truth data for training and evaluating machine learning models, while segmentation is used to extract meaningful information from data or simplify their representation for further analysis. Additionally, annotations can vary in granularity and level of detail, whereas segmentation usually focuses on dividing the data into meaningful regions or objects at a fine level of detail. The techniques and approaches employed for annotation and segmentation depend on the specific modality and characteristics of the data.

Segmentation offers advantages over normal annotation in applications requiring high precision and detailed representations, such as object recognition, scene understanding, and medical image analysis. By providing pixel-level or point-level information, segmentation allows for more accurate delineation of objects and regions, enabling better performance in these tasks. Moreover, segmentation algorithms can often be more automated and scalable than manual annotation, making them more suitable for processing large datasets efficiently.

Why 2D & 3D Sensor Fusion Segmentation Matters

2D & 3D sensor fusion segmentation technology combines the image information captured by 2D sensors (such as cameras) with the point cloud data captured by 3D sensors (such as LiDAR) to obtain richer and more accurate scene information. This fusion method takes full advantage of the high resolution and rich texture information of 2D images as well as the precise spatial information of 3D point cloud data, making target detection and recognition more accurate and robust.

2D & 3D sensor fusion segmentation
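The geometric core of this fusion is a standard pinhole projection: each LiDAR point is transformed into the camera frame with the extrinsic calibration and then projected onto the image plane with the camera intrinsics. The sketch below illustrates the idea; the function and parameter names are illustrative, and real matrix values come from your sensor calibration, not from this example.

    import numpy as np

    def project_lidar_to_image(points_xyz, extrinsic, intrinsic):
        """Project Nx3 LiDAR points into pixel coordinates.

        extrinsic: 4x4 LiDAR-to-camera transform (rotation + translation).
        intrinsic: 3x3 camera matrix (focal lengths and principal point).
        Returns Nx2 pixel coordinates and each point's depth in the camera frame.
        """
        n = points_xyz.shape[0]
        # Homogeneous coordinates, then move the points into the camera frame.
        points_h = np.hstack([points_xyz, np.ones((n, 1))])
        cam = (extrinsic @ points_h.T).T[:, :3]

        # Pinhole projection: apply the camera matrix and divide by depth.
        depth = cam[:, 2]
        uv = (intrinsic @ cam.T).T[:, :2] / depth[:, None]
        return uv, depth

Points with non-positive depth lie behind the camera and are simply not drawn; Step 4 below is where this projection becomes visible in the annotation tool.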

In practical applications, the traditional way to annotate data with 2D & 3D sensor fusion segmentation is: first, manually segment and annotate the 2D images, then map this annotation information onto the 3D point cloud data. Next, further manual segmentation and annotation are performed on the 3D point cloud data. Finally, the annotation results from the 2D images and the 3D point clouds are combined to obtain the final annotations.

However, this approach has some drawbacks. First, annotation mapping errors may occur when transferring the 2D image annotations to the 3D point cloud data. This stems from the inherent mismatch between images and point clouds, such as differences in field of view and the loss of depth information, and it can leave annotations inaccurately positioned in 3D space. Second, the time cost is high: because the 2D images and the 3D point clouds are processed separately, manual segmentation and annotation must be performed on both data sources, which becomes expensive when dealing with large amounts of data.

How to Conduct 2D & 3D Sensor Fusion Segmentation on BasicAI Cloud*: A 5-Step Guide

To address the pain points above, BasicAI Cloud* reverses the workflow: the point cloud data is segmented first with an automatic segmentation model, and the segmented points are then projected onto the 2D images for manual segmentation, with other points filtered out so that the 2D and 3D segmentations created for the same object stay linked.

This brings several benefits. Segmenting the point cloud automatically first greatly reduces the time spent on manual segmentation and annotation, which matters most for large-scale datasets. The automatic model also keeps the point cloud segmentation consistent and accurate (manual point cloud segmentation is error-prone), reducing human error. Projecting the point cloud onto the 2D images for manual segmentation ensures the correspondence between the 2D and 3D results. And because the point cloud and image segmentations are linked from the start, there are no inconsistencies between 2D and 3D annotation results to reconcile during fusion, which simplifies the fusion process. Altogether, this fusion segmentation approach significantly improves the accuracy and efficiency of data annotation while reducing the workload and cost of manual labeling.

Step 1: Create a Sensor Fusion Dataset

  1. Log in to the BasicAI Cloud* platform.

  2. Navigate to the Datasets tab and click the Create button.

  3. Choose the LiDAR Fusion option and enter a name for your dataset.

  4. Click Upload to add your 2D & 3D sensor fusion data. For format requirements, refer to the BasicAI Upload Documentation.
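The exact folder names and calibration schema are defined in the BasicAI Upload Documentation; purely for orientation, a camera-LiDAR fusion upload typically bundles three things per frame, along the lines of this illustrative (not authoritative) layout:

    my_fusion_dataset/
        point_cloud/       # one point cloud file (e.g., .pcd) per frame
        camera_image_0/    # images from the first camera, matched to frames
        camera_config/     # per-frame JSON with camera intrinsics and the
                           # LiDAR-to-camera extrinsics used for projection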


Step 2: Create Ontologies

  1. Access the Ontology tab within your dataset (not the main menu on the left).

  2. Create new classes with attributes by clicking Create.

  3. Under Tool Type, select Segmentation.

  4. After defining your ontologies, click Model Map to link your classes to the relevant classes in the default segmentation model (see the illustrative mapping after this list).

  5. Create a Classification to represent the global label of the dataset (e.g., environment or scene).
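Conceptually, the Model Map is just a lookup from your ontology classes to the classes the built-in segmentation model predicts. The class names below are illustrative only; the actual model classes are listed on the platform when you open Model Map.

    # Illustrative mapping: your ontology class -> built-in model class.
    model_map = {
        "Vehicle":    "car",
        "Pedestrian": "pedestrian",
        "Road":       "road",
    }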


Step 3: Automatically Segment 3D Point Cloud Data

  1. Open your dataset and switch to the Segmentation tab in the right-hand menu.

  2. Click the auto-segmentation button (represented by a brain icon) to run the segmentation model. Note: You can use the extended menu to select specific objects for segmentation.

  3. Click the Apply and Run button to initiate the process, and watch as your point cloud data is automatically segmented into classes.


Step 4: Project Points onto Images

  1. Open the Display menu located in the bottom left corner.

  2. Check the Projected Points option to view points projected onto the corresponding image data.
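Behind the Projected Points view is a simple visibility check: only points that land in front of the camera and inside the image bounds are drawn. A rough sketch, reusing the projection helper from earlier (the function and parameter names are illustrative):

    def visible_mask(uv, depth, image_width, image_height):
        """Keep points in front of the camera and inside the image."""
        in_front = depth > 0
        in_bounds = (
            (uv[:, 0] >= 0) & (uv[:, 0] < image_width) &
            (uv[:, 1] >= 0) & (uv[:, 1] < image_height)
        )
        return in_front & in_bounds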


Step 5: Segment 2D Images with Connections to 3D Segmentations

  1. Double-click to open the image data.

  2. In the Results list on the right, select the objects for which you want to create corresponding 2D segmentation results. This step automatically establishes a connection between 2D and 3D segmentation results for the same object or class, allowing for simultaneous export of both modalities.

  3. Click the Filter icon (funnel-shaped) in the left-hand menu to display only the points of the selected objects, which have already been segmented in the 3D point cloud.

  4. Use the Polygon tool in the top-left corner to create 2D segmentation annotations around the projected points (conceptually tracing their outline, as sketched after this list).

  5. Repeat this process for the remaining 2D segmentation tasks.

  6. Save your work and exit. You can now export the segmentation results.
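Conceptually, the polygon you draw in step 4 traces the projected points of the object selected in step 2, and the link to the 3D result is simply the shared object ID. The hedged sketch below approximates that trace with a convex hull, reusing the helpers from the earlier sketches (SciPy is used purely for illustration); in practice you refine the polygon by hand in the editor.

    import numpy as np
    from scipy.spatial import ConvexHull

    def polygon_for_instance(uv, depth, instance_ids, target_id, image_size):
        """Rough 2D polygon around one segmented 3D object's projected points."""
        width, height = image_size
        mask = (instance_ids == target_id) & visible_mask(uv, depth, width, height)
        pts = uv[mask]
        hull = ConvexHull(pts)                 # outline of the projected points
        return pts[hull.vertices], target_id   # polygon vertices + linked 3D object ID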


BasicAI Cloud*'s solution significantly boosts the efficiency of 2D & 3D sensor fusion segmentation, minimizing the workload and cost associated with manual labeling.

Get started with BasicAI Cloud* today and revolutionize your AI project!


* To further enhance data security, we discontinued the Cloud version of our data annotation platform on 31st October 2024. Please contact us for a customized private deployment plan that meets your data annotation goals while prioritizing data security.

