A modern deep learning framework in robot vision for automated bean leaves diseases detection

2021-07-02

Sudad H. Abed, Alaa S. Al-Waisy, Hussam J. Mohammed & Shumoos Al-Fahdawi

Agriculture is considered one of the most important economic resources for farmers and countries. Beans are an essential agricultural crop, serving as the second most significant source of dietary fibre and the third most important source of calories for human beings. Bean leaves can be affected by several diseases, such as angular leaf spot and bean rust, which can cause severe damage to bean crops and decrease their productivity.

Thus, treating these diseases in their early stages can improve the quality and quantity of the product. An incorrect diagnosis of an infected leaf can lead to the application of chemical treatments to a healthy leaf; the underlying issue then remains unsolved, and the process can be costly and harmful. To overcome these issues, a modern deep learning framework in robot vision for the early detection of bean leaf diseases is proposed.

The proposed framework is composed of two primary stages: detecting the bean leaves in the input images and diagnosing the diseases within the detected leaves. A U-Net architecture with a pre-trained ResNet34 encoder is employed to detect the bean leaves in images captured under uncontrolled environmental conditions.

In the classification stage, the performance of five diverse deep learning models (DenseNet121, ResNet34, ResNet50, VGG-16, and VGG-19) is assessed to accurately identify the healthiness of bean leaves. The performance of the proposed framework is evaluated using a challenging and extensive dataset composed of 1295 images from three different classes (Healthy, Angular Leaf Spot, and Bean Rust). In the binary classification task, the best performance is achieved by the DenseNet121 model, with a CAR of 98.31%, sensitivity of 99.03%, specificity of 96.82%, precision of 98.45%, F1-score of 98.74%, and AUC of 100%. The highest CAR in the multi-classification task, 91.01%, is obtained using the same model, which takes less than 2 s per image to produce the final decision.
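The reported metrics can all be derived from a binary confusion matrix. A minimal sketch in plain Python, using the standard definitions (CAR is read as the classification accuracy rate); the counts in the example are illustrative, not the paper's:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute CAR, sensitivity, specificity, precision, and F1-score
    from binary confusion-matrix counts (standard definitions assumed)."""
    car = (tp + tn) / (tp + fp + tn + fn)   # classification accuracy rate
    sensitivity = tp / (tp + fn)            # recall on the positive class
    specificity = tn / (tn + fp)            # recall on the negative class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "CAR": car,
        "Sensitivity": sensitivity,
        "Specificity": specificity,
        "Precision": precision,
        "F1": f1,
    }


# Illustrative counts only (not the paper's results):
print(binary_metrics(tp=8, fp=2, tn=9, fn=1))  # CAR = 17/20 = 0.85
```

For the three-class task, the same quantities are typically computed per class in a one-vs-rest fashion and then averaged.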
