EconPapers    
An Intelligent and Precise Agriculture Model in Sustainable Cities Based on Visualized Symptoms

Bashar Igried, Shadi AlZu’bi, Darah Aqel, Ala Mughaid, Iyad Ghaith and Laith Abualigah ()
Additional contact information
Bashar Igried: Department of Information Technology, Faculty of Prince Al-Hussien Bin Abdullah II for IT, The Hashemite University, Zarqa 13133, Jordan
Shadi AlZu’bi: Faculty of Science and IT, Al-Zaytoonah University of Jordan, Amman 11733, Jordan
Darah Aqel: Faculty of Science and IT, Al-Zaytoonah University of Jordan, Amman 11733, Jordan
Ala Mughaid: Department of Information Technology, Faculty of Prince Al-Hussien Bin Abdullah II for IT, The Hashemite University, Zarqa 13133, Jordan
Iyad Ghaith: Faculty of Science and IT, Al-Zaytoonah University of Jordan, Amman 11733, Jordan
Laith Abualigah: Computer Science Department, Prince Hussein Bin Abdullah Faculty for Information Technology, Al al-Bayt University, Mafraq 25113, Jordan

Agriculture, 2023, vol. 13, issue 4, 1-20

Abstract: Plant diseases are a critical issue that leads to a major decrease in the quantity and quality of crops. Early detection of plant diseases can therefore prevent losses and damage to these crops. This paper presents an automatic approach, based on image processing and deep learning, that classifies the diseases affecting apple leaves. The proposed system was tested on over 18,000 images of healthy and diseased apple leaves from the PlantVillage Apple Diseases Dataset. We first applied the pre-trained VGG-16 architecture to the dataset of plant leaf images. We then used several other pre-trained deep learning architectures, including Inception-V3, ResNet-50, and VGG-19, which address visualization-related problems in computer vision such as object classification. These networks were trained on the image dataset so that the achieved results, including accuracy and error rate, could be compared across architectures. The preliminary results demonstrate the effectiveness of the Inception-V3 and VGG-16 approaches: Inception-V3 achieves an accuracy of 92.42% with an error rate of 0.3037%, while VGG-16 achieves an accuracy of 91.53% with an error rate of 0.4785%. The experiments show that these two deep learning networks achieve satisfactory results under varying conditions of lighting, background scene, camera resolution, image size, viewpoint, and scene direction.
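The cross-architecture comparison described in the abstract can be sketched as a simple ranking over the reported metrics. This is an illustrative snippet only, using the accuracy and error-rate figures quoted above; the function and data-structure names are hypothetical, not from the paper.

```python
def best_model(results):
    """Return the result entry with the highest classification accuracy."""
    return max(results, key=lambda r: r["accuracy"])

# Reported results for the two best-performing architectures (from the abstract).
results = [
    {"name": "Inception-V3", "accuracy": 92.42, "error_rate": 0.3037},
    {"name": "VGG-16",       "accuracy": 91.53, "error_rate": 0.4785},
]

best = best_model(results)
print(f"Best architecture: {best['name']} "
      f"({best['accuracy']}% accuracy, {best['error_rate']}% error rate)")
# → Best architecture: Inception-V3 (92.42% accuracy, 0.3037% error rate)
```

In practice each entry would be produced by evaluating a fine-tuned pre-trained network on a held-out split of the leaf-image dataset; only the final comparison step is shown here.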

Keywords: agriculture intelligence; precise agriculture; sustainability; predefined model; computer vision; intelligent systems (search for similar items in EconPapers)
JEL-codes: Q1 Q10 Q11 Q12 Q13 Q14 Q15 Q16 Q17 Q18 (search for similar items in EconPapers)
Date: 2023
References: View references in EconPapers View complete reference list from CitEc
Citations: View citations in EconPapers (1)

Downloads: (external link)
https://www.mdpi.com/2077-0472/13/4/889/pdf (application/pdf)
https://www.mdpi.com/2077-0472/13/4/889/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:gam:jagris:v:13:y:2023:i:4:p:889-:d:1126062

Access Statistics for this article

Agriculture is currently edited by Ms. Leda Xuan

More articles in Agriculture from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

 
Page updated 2025-03-19
Handle: RePEc:gam:jagris:v:13:y:2023:i:4:p:889-:d:1126062