P344 - 44. GIL-Jahrestagung 2024 - Focus: Promoting biodiversity through digital agriculture

Permanent URI for this collection: https://dl.gi.de/handle/20.500.12116/43863

Search Results: 1 - 4 of 4
  • Conference Paper
    Mapping invasive Lupine on grasslands using UAV images and deep learning
    (Gesellschaft für Informatik e.V., 2024) Wijesingha, Jayan; Schulze-Brüninghoff, Damian; Wachendorf, Michael
    Semi-natural grasslands are threatened by invasive species. This study employs high-resolution images captured by an unmanned aerial vehicle (UAV) and deep learning techniques to map Lupine (Lupinus polyphyllus Lindl.), one of the most common invasive species in European grasslands. The methodology involves RGB image acquisition, structure-from-motion processing, canopy height modelling, and the development of deep learning semantic segmentation models. Models were trained on RGB data, on canopy surface height data, and on their combination, and they demonstrate high accuracy and efficacy in identifying Lupine distribution. These models offer a valuable tool for continuously monitoring and managing invasive Lupine, with potential applications in similar environments without retraining. The method is beneficial for early-stage invasion detection, facilitating more targeted management efforts for ecologists. (A minimal sketch of such an RGB-plus-canopy-height segmentation pipeline follows this list.)
  • Conference Paper
    Weed detection with YOLOv8-seg in UAV-imagery
    (Gesellschaft für Informatik e.V., 2024) Pukrop, Maren; Pukrop, Simon
    Accurate site-specific weed management depends on precise weed localization. In 2023, the YOLOv8 architecture was introduced, providing an accessible instance segmentation tool available in five scaled versions, each with an increasing number of trainable parameters. This study focuses on weed mapping in high-resolution UAV imagery, emphasizing the detection of small weed plants, and investigates the detection of Cirsium arvense and other weed species in maize. RGB UAV imagery was acquired on three dates between May and June 2022, and weed detection was performed with all five YOLOv8 model sizes. Validation shows that the models achieve comparable accuracy on scenes with many small weed plants, indicating no need for a larger model. Recall is low across all five models for small objects measuring only a few cm², but increases with object size. (A hedged sketch of comparing the scaled YOLOv8-seg variants follows this list.)
  • Conference Paper
    Image-based activity monitoring of pigs
    (Gesellschaft für Informatik e.V., 2024) Witte, Jan-Hendrik; Marx Gómez, Jorge
    In modern pig livestock farming, animal well-being is of paramount importance, and monitoring activity is crucial for the early detection of potential health or behavioral anomalies. Traditional object-tracking methods such as DeepSORT often falter because of the pigs' similar appearance, frequent overlaps, and close-proximity movements, making consistent long-term tracking challenging. To address this, our study presents a novel methodology that eliminates the need for conventional tracking to capture activity at pen level. Instead, video frames are divided into predefined sectors, pig postures are determined using YOLOv8 for pig detection and EfficientNetV2 for posture classification, and activity levels are assessed by comparing sector counts between consecutive frames. Preliminary results indicate discernible variations in pig activity throughout the day, highlighting the efficacy of the method in capturing activity patterns. While promising, this approach remains a proof of concept, and its practical implications for real-world agricultural settings warrant further investigation. (A minimal sketch of the sector-count activity measure follows this list.)
  • Conference Paper
    Explainable AI in grassland monitoring: Enhancing model performance and domain adaptability
    (Gesellschaft für Informatik e.V., 2024) Liu, Shanghua; Hedström, Anna
    Grasslands are known for their high biodiversity and their ability to provide multiple ecosystem services. Challenges in automating the identification of indicator plants are key obstacles to large-scale grassland monitoring; they stem from the scarcity of extensive datasets, the distributional shift between generic and grassland-specific datasets, and the inherent opacity of deep learning models. This paper addresses the latter two challenges, with a specific focus on transfer learning and eXplainable Artificial Intelligence (XAI) approaches to grassland monitoring, highlighting the novelty of XAI in this domain. We analyze various transfer learning methods to bridge the distributional gap between generic and grassland-specific datasets. Additionally, we showcase how XAI techniques can unveil the model's domain adaptation capabilities, employing quantitative assessments to evaluate how well the model centers relevant input features on the object of interest. This research contributes valuable insights for enhancing model performance through transfer learning and for measuring domain adaptability with explainable AI, showing significant promise for broader applications within the agricultural community. (A minimal sketch of one such quantitative attribution check follows this list.)
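
The Lupine-mapping abstract combines RGB orthoimagery with a canopy height model (CHM) as input to a semantic segmentation network. The sketch below illustrates one way such a 4-channel pipeline could look; the library choice (segmentation_models_pytorch), the U-Net/ResNet configuration, and all function names are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: fuse UAV RGB tiles with a canopy height model (CHM)
# band and train a binary segmentation model for Lupine. All choices below
# (library, encoder, loss) are assumptions, not the authors' setup.
import numpy as np
import torch
import segmentation_models_pytorch as smp

def stack_rgb_chm(rgb: np.ndarray, chm: np.ndarray) -> torch.Tensor:
    """Stack an (H, W, 3) RGB tile and an (H, W) CHM tile into a 4-channel tensor."""
    x = np.dstack([rgb.astype(np.float32) / 255.0,
                   chm.astype(np.float32)[..., None]])
    return torch.from_numpy(x).permute(2, 0, 1)  # (4, H, W)

# U-Net with a ResNet encoder; in_channels=4 accommodates RGB + CHM,
# classes=1 yields a per-pixel Lupine probability map.
model = smp.Unet(encoder_name="resnet34", in_channels=4, classes=1)
loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(batch_x: torch.Tensor, batch_y: torch.Tensor) -> float:
    """One optimisation step on a batch of stacked tiles and binary masks."""
    optimizer.zero_grad()
    logits = model(batch_x)          # (N, 1, H, W)
    loss = loss_fn(logits, batch_y)  # batch_y: (N, 1, H, W) in {0, 1}
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training a model on RGB only, on CHM only, or on the stacked input then amounts to changing `in_channels` and the tile-stacking step, which mirrors the three model variants compared in the abstract.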
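The weed-detection abstract compares the five scaled YOLOv8-seg variants on UAV imagery. A minimal comparison loop with the Ultralytics API could look like the sketch below; the dataset config `weeds.yaml` and the hyperparameters are placeholders, not values from the paper.

```python
# Hypothetical sketch: train and validate each scaled YOLOv8-seg variant on a
# weed dataset and report mask mAP@0.5. "weeds.yaml" is a placeholder config.
from ultralytics import YOLO

variants = ["yolov8n-seg.pt", "yolov8s-seg.pt", "yolov8m-seg.pt",
            "yolov8l-seg.pt", "yolov8x-seg.pt"]  # the five scaled models

for weights in variants:
    model = YOLO(weights)              # load pretrained segmentation weights
    model.train(data="weeds.yaml",     # dataset config (placeholder)
                imgsz=1024,            # large tiles help with small plants
                epochs=100)
    metrics = model.val()              # metrics on the validation split
    print(weights, metrics.seg.map50)  # mask mAP@0.5 per variant
```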
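The pig-monitoring abstract scores pen-level activity by comparing per-sector detection counts between consecutive frames. The sketch below illustrates that counting step only (the EfficientNetV2 posture-classification stage is omitted); the 4x4 grid, the generic YOLOv8 weights, and the helper names are assumptions for illustration.

```python
# Hypothetical sketch of the sector-based activity measure: detect pigs per
# frame, assign detections to a fixed grid of pen sectors, and score activity
# as the change in per-sector counts between consecutive frames.
import numpy as np
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")  # stand-in for a pig detector fine-tuned on pen video

def sector_counts(frame: np.ndarray, rows: int = 4, cols: int = 4) -> np.ndarray:
    """Count detected pigs per sector of a rows x cols grid over the frame."""
    h, w = frame.shape[:2]
    counts = np.zeros((rows, cols), dtype=int)
    result = detector.predict(frame, verbose=False)[0]
    for x1, y1, x2, y2 in result.boxes.xyxy.cpu().numpy():
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2        # box centre
        r = min(int(cy / h * rows), rows - 1)
        c = min(int(cx / w * cols), cols - 1)
        counts[r, c] += 1
    return counts

def activity(prev_counts: np.ndarray, curr_counts: np.ndarray) -> int:
    """Pen-level activity: total absolute change in sector occupancy."""
    return int(np.abs(curr_counts - prev_counts).sum())
```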
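The XAI abstract mentions quantitative assessments of whether the model centers relevant input features on the object of interest. One common proxy for this is the fraction of attribution mass that falls inside the object's mask; the sketch below shows that check with Integrated Gradients. Captum is an assumption here, as the paper does not name a specific attribution library or metric implementation.

```python
# Hypothetical sketch of a "relevance mass inside the object" check: attribute
# the prediction to input pixels and measure how much of that attribution lies
# within the plant's segmentation mask. Captum and the metric are assumptions.
import torch
from captum.attr import IntegratedGradients

def relevance_inside_mask(model: torch.nn.Module,
                          image: torch.Tensor,           # (1, 3, H, W), normalised
                          target: int,                   # class index of interest
                          mask: torch.Tensor) -> float:  # (H, W) in {0, 1}
    """Fraction of total attribution mass located inside the object mask."""
    ig = IntegratedGradients(model)
    attr = ig.attribute(image, target=target)         # (1, 3, H, W)
    relevance = attr.abs().sum(dim=1).squeeze(0)       # (H, W) per-pixel mass
    total = relevance.sum()
    inside = (relevance * mask).sum()
    return (inside / total).item() if total > 0 else 0.0
```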