Received: 08 July 2025; Revised: 02 September 2025; Accepted: 10 September 2025; Published Online: 13 September 2025.
J. Smart Sens. Comput., 2025, 1(2), 25209 | Volume 1 Issue 2 (September 2025) | DOI: https://doi.org/10.64189/ssc.25209
© The Author(s) 2025
This article is licensed under Creative Commons Attribution NonCommercial 4.0 International (CC-BY-NC 4.0)
Real-Time Wildlife Intrusion Detection System Using IoT
and YOLOv8
Anushree Patkar,¹,* Pravin Hole¹ and Loukik Salvi²,*
¹Department of Information Technology, D. J. Sanghvi College of Engineering, Mumbai, Maharashtra, 400056, India
²Department of Computer Engineering, Thakur College of Engineering and Technology, Mumbai, Maharashtra, 400056, India
*Email: anushree.patkar@djsce.ac.in (A. Patkar), loukiksalvi96@gmail.com (L. Salvi)
Abstract
Human-wildlife conflict in agricultural areas leads to significant crop losses and livestock threats, creating an urgent
need for reliable real-time detection systems. This study presents the Wildlife Intrusion Detection System (WIDS), a
novel IoT-enabled solution designed to mitigate such risks. The system integrates PIR motion sensors, a Raspberry Pi
computing unit, and a custom-trained YOLOv8n object detection model for robust wildlife identification, with Twilio
SMS alerts ensuring rapid farmer response. A strategically deployed sensor network captures activity along the farm
perimeter, while the Raspberry Pi executes YOLOv8n inference for accurate classification. A dataset comprising
diverse animal images under varying conditions (day/night, weather, and motion speeds) was curated for training
and testing. The system achieved a detection accuracy of 80–85%, with evaluation metrics of precision (0.84), recall (0.82), F1-score (0.83), mean Average Precision (mAP) of 0.85, and an average inference latency of 0.6 s per frame. These
results highlight the system’s robustness under real-world field conditions, making it suitable for practical
deployment. The proposed WIDS significantly enhances farm security, minimizes agricultural losses, and
demonstrates the potential of IoT and deep learning integration for sustainable agriculture and wildlife management.
Keywords: YOLOv8; Raspberry Pi; Animal detection; PIR; Farm; Intrusion detection.
1. Introduction
Wildlife intrusion into agricultural lands poses a serious threat to farm productivity and long-term sustainability, often
resulting in significant crop damage, livestock loss, and financial hardships for farmers.[1–4] With the increasing overlap between human settlements and natural habitats, such conflicts have become more frequent, requiring reliable and automated solutions to mitigate the risks. Traditional measures such as manual monitoring, scarecrows, or physical barriers are labor-intensive, costly, or quickly lose effectiveness as animals adapt.[5,6] These limitations highlight
the urgent need for advanced technological interventions that can ensure real-time monitoring and rapid response.
Several methods have been explored to address this issue, including the use of fencing and physical barriers, scare
devices, manual surveillance, and basic sensor systems. Although electric fences and nets provide physical protection,
they remain expensive to install and maintain. Scare tactics such as scarecrows and noise-makers may deter animals
initially but tend to lose their impact over time. Manual surveillance, while effective to some extent, is labor-intensive
and cannot guarantee timely intervention. Similarly, conventional motion-sensor alarm systems are unable to
distinguish between different types of intrusions, leading to frequent false alarms and reduced reliability. These
limitations demonstrate that existing methods lack the accuracy, adaptability, and intelligence required for large-scale
deployment in diverse farming environments.
To overcome these challenges, we propose the Wildlife Intrusion Detection System (WIDS), an IoT-enabled solution
that integrates Raspberry Pi computing, PIR motion sensors, and cameras with a custom-trained YOLOv8 deep
learning model. Unlike traditional methods, WIDS not only detects intrusions but also classifies wild animals with
high accuracy, thereby minimizing false positives. Furthermore, the system leverages Twilio-based SMS alerts,
ensuring that farmers are instantly notified of potential threats and can respond proactively. The primary contributions
of this research can be summarized as follows. First, WIDS demonstrates the effective integration of IoT hardware
and computer vision technologies into a real-time monitoring framework. Second, a YOLOv8-based detection model
trained on a curated dataset of wildlife images under diverse conditions (day and night settings, varying weather, and different animal movement speeds) enables robust classification and improved detection accuracy.
Third, the system provides automated SMS alerts for immediate farmer intervention, thereby reducing response times.
Finally, the solution is designed to be both scalable and cost-effective, making it adaptable for farms of varying sizes
and resources. By addressing the shortcomings of conventional methods, the proposed WIDS provides a reliable,
intelligent, and field-deployable solution to minimize agricultural losses, safeguard farmer livelihoods, and strengthen
farm security.
2. Literature review
2.1 Existing systems
Wildlife intrusion detection in agricultural settings has been recognized as a major challenge, and several systems have
been developed to address this issue.[7,8] Balakrishnan et al.[9] developed a system that integrates motion sensors, cameras, and machine learning algorithms to detect and classify wildlife species, demonstrating promising results in real-world deployments by providing timely alerts and reducing crop damage. Similarly, Raiaan et al.[10] presented a Wildlife Monitoring System
that leverages IoT devices and cloud-based analytics to monitor animal activity in natural habitats, highlighting the
potential of combining IoT technologies with automated analytics for effective wildlife management. In terms of
methodologies, image processing and computer vision techniques have been widely explored for wildlife detection
tasks.[11] Deep learning models, particularly Convolutional Neural Networks (CNNs), have shown superior accuracy in automatically detecting and classifying wildlife species from images.[12,13] Among these, the YOLO (You Only Look Once) family of algorithms has gained significant attention due to its ability to perform real-time object detection on video streams.[14–16]
The adoption of YOLO has proven especially useful for monitoring fast-moving animals under
field conditions.
These advancements underline the potential of integrating modern hardware (IoT devices, sensors, cameras) with
software-driven intelligence (deep learning, real-time analytics) to develop robust wildlife intrusion detection
systems.[17–20] However, most existing approaches face limitations in scalability, suffer from high false alarm
rates, or lack field deployment validation across varying environmental conditions. To address these challenges, this
study introduces WIDS, an IoT-enabled, YOLOv8-based wildlife intrusion detection system designed for real-time,
cost-effective, and robust performance under practical field scenarios.
2.2 Research gaps
Despite the progress made by existing systems, several research gaps persist that limit their effectiveness in real-world
agricultural settings. A major challenge lies in accurately distinguishing between target wildlife species and non-target
objects such as domestic animals, farm workers, or environmental artifacts.[21,22] This often leads to false positives or
missed detections, highlighting the need for more robust detection algorithms and the integration of contextual
information to improve classification accuracy.
Another key limitation is scalability and cost-effectiveness. While many systems demonstrate promising results in
controlled environments, their deployment in large or resource-constrained farms remains impractical. Balancing
reliable performance with affordability and ease of maintenance is essential to ensure widespread adoption.
Furthermore, most existing approaches make limited use of advanced sensor fusion techniques that could combine
motion, infrared, and acoustic data to enhance detection robustness.
In addition to these technical challenges, user-centric aspects such as real-time alerts and intuitive interfaces remain
underexplored. Providing farmers with timely, actionable information is critical for practical utility but often
overlooked in existing designs. Observations of prior work also indicate that although IoT devices, PIR motion sensors,
and deep learning frameworks such as TensorFlow, PyTorch, and YOLO provide a strong foundation, further
refinement is needed to meet the specific requirements of agricultural environments. The integration of modern
technologies such as IoT, cloud-based analytics, and deep learning has already demonstrated potential in wildlife
monitoring. However, the practical utility of these systems ultimately depends on their ability to maintain reliability
under variable field conditions while operating within the constraints of limited resources and infrastructure.
To address these gaps, our proposed WIDS builds upon previous research by incorporating YOLOv8-based real-time
detection, IoT-enabled hardware integration, and a scalable, cost-effective design tailored to agricultural settings. By
doing so, WIDS aims to deliver a robust, efficient, and user-friendly solution that enhances farm security, minimizes
crop damage, and contributes to sustainable wildlife management practices.
3. Proposed methodology
The central problem addressed in this study is the urgent need for an effective and reliable system to detect and deter
wildlife intrusion in agricultural farms. Wildlife incursions often result in extensive crop losses, damage to farm
infrastructure, and threats to livestock, which collectively impose severe economic and social burdens on farming
communities. Traditional approaches such as fencing, manual monitoring, or basic sensor systems have proven either
ineffective, resource-intensive, or economically unfeasible for large-scale use. These limitations underscore the
necessity for a cost-effective, intelligent, and real-time monitoring system capable of operating under diverse field
conditions.
To meet this need, the proposed WIDS is designed to enhance farm security by combining IoT-enabled sensing devices,
Raspberry Pi-based computing, and a YOLOv8 object detection model. By delivering accurate detection, automated
alerts, and practical scalability, WIDS directly addresses the challenges of minimizing crop damage, protecting
livestock, and ensuring the safety and security of farm assets.
3.1 Scope
The scope of this proposed system encompasses the development and implementation of a comprehensive solution for
wildlife intrusion detection in agricultural environments. This includes the design and deployment of sensor networks,
integration of machine learning algorithms for animal detection, and real-time communication of alerts to farmers.
Assumptions and constraints: The system assumes a relatively stable environment with minimal external disturbances that could trigger false alarms, such as strong winds or moving vegetation. It also operates within the limitations of available resources, including hardware components (e.g., Raspberry Pi, PIR motion sensors) and computational capacity for machine learning model inference.
3.2 Proposed approach to build the Wildlife Intrusion Detection System
The scope of the proposed WIDS encompasses the development and implementation of a comprehensive solution for
detecting and mitigating wildlife intrusion in agricultural environments. The system is designed to integrate a network
of sensors with advanced machine learning algorithms for accurate animal detection and real-time communication of
alerts. The proposed scope includes the design and deployment of IoT-based sensor networks, the training and
integration of the YOLOv8 detection model, and the delivery of timely alerts to farmers through SMS notifications.
The system aims to provide a scalable, cost-effective, and robust framework that enhances farm security, minimizes
crop losses, and protects livestock.
3.2.1 Assumptions
The system assumes that the deployment environment is relatively stable, with minimal external disturbances such as
strong winds, dense moving vegetation, or human activity, which may otherwise generate false alarms. It also assumes
that sufficient training data is available to capture variations in animal appearance under different conditions (e.g., day
vs. night, varying weather).
3.2.2 Constraints
The system operates under several practical constraints. First, it is limited by the computational capacity of the
hardware components, particularly the Raspberry Pi unit, which restricts the complexity of the deployed model and
inference speed. Second, the PIR motion sensors used in the system have a fixed detection range and may not capture
intrusions occurring beyond their coverage area. Third, the availability of reliable power supply and network
connectivity is essential for continuous operation, especially for real-time alerts. These constraints were carefully
considered during system design to balance accuracy, efficiency, and feasibility for deployment in agricultural settings.
3.3 Tools used for data collection, size of the sample, and limitations
3.3.1 Data collection
A custom dataset comprising 1,300 images was collected to train and validate the YOLOv8n detection model. The
dataset included three primary classes relevant to the farm intrusion problem: monkeys, pigs, and humans. Images
were sourced from controlled farm environments, open fields, and public image repositories to ensure diversity. Each
image was manually annotated with bounding boxes using the makesense.ai annotation platform, which provided high-
quality labeled data for supervised training.
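For context, annotation platforms such as makesense.ai can export labels in the plain-text YOLO format, one file per image with one line per bounding box. The sketch below is purely illustrative; the class-index mapping and coordinate values are assumptions, not the actual labels used in this study.

```text
# <class_id> <x_center> <y_center> <width> <height>, all normalized to [0, 1]
0 0.512 0.430 0.210 0.380   # class 0 = monkey (illustrative mapping)
1 0.308 0.655 0.150 0.205   # class 1 = pig
2 0.745 0.520 0.120 0.400   # class 2 = human
```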
3.3.2 Sample size and diversity
The dataset size was deliberately chosen to capture sufficient variability in animal poses, backgrounds, and lighting
conditions, thereby enhancing the robustness of the model. Images included both daytime and nighttime scenarios,
different weather conditions (sunny, cloudy, and rainy), and various animal movement speeds. The inclusion of humans
in the dataset allowed the system to distinguish between actual wildlife intrusions and non-target human presence,
reducing false alarms.
3.3.3 Limitations
Despite careful curation, the dataset is subject to several limitations. The relatively modest dataset size may restrict
the generalizability of the model to less common animal species. Furthermore, environmental factors such as poor
lighting, heavy rainfall, dense vegetation, or partial occlusions can impact motion detection and classification accuracy.
These limitations highlight the importance of future dataset expansion to include more species and broader
environmental conditions.
3.4 Benefits of proposed methodology
The proposed WIDS offers several key advantages for agricultural applications. One of the most significant benefits
is improved farm security, as the system delivers real-time alerts that enable farmers to take immediate action and
thereby prevent crop damage and livestock predation caused by wild animals. This proactive approach not only
safeguards farm assets but also helps reduce economic losses associated with wildlife intrusions.
Another important advantage is the enhanced efficiency achieved through automation. By minimizing the reliance on
manual surveillance, WIDS allows farmers to reallocate their time and resources toward other critical agricultural
activities. In addition, the system is designed to be cost-effective, leveraging affordable hardware components such as
Raspberry Pi and PIR motion sensors, along with open-source software frameworks, to ensure accessibility even for
resource-constrained farmers.
Furthermore, the scalability of the system is a defining feature. The modular architecture allows for flexible
deployment across farms of varying sizes and configurations, making it adaptable to diverse agricultural contexts. This
design not only supports current implementation needs but also provides the capability for future expansion, ensuring
long-term utility.
3.5 System design and architecture
3.5.1 Design diagram
Fig. 1 shows the flowchart of the proposed system. The process begins with motion detection by the PIR sensors. Once motion is detected, a "Motion detected" message is sent to the server, triggering the camera to capture frames of the detected movement. These frames are processed using the YOLO model for animal detection. If the model identifies a pig or monkey in five consecutive frames, an SMS alert is sent to the farmer via Twilio. If a human is detected or no detection occurs, the system stops detection.
Fig. 1: Task network diagram.
Fig. 2: Sequence diagram.
3.5.2 Database diagram
Fig. 3 shows the database diagram for the proposed system. It defines how the different components of the system interact through relational tables. The farm table stores information about the farm location and is linked to the motion detection table, which logs detection events with timestamps and messages. The detected animal table, associated with the motion detection table, records identified animals with their type and detection confidence level. The SMS alert table maintains records of alerts generated when specific animals are detected, while the SMS history table tracks details of sent alerts, including recipient phone numbers and delivery status. Human-presence data are kept in the human detection table.
Fig. 3: Database diagram.
4. Implementation
4.1 Working of system
Fig. 4 shows the system architecture for the proposed system. The operation of the WIDS follows a structured
workflow that integrates sensing, image processing, and alert communication. The process begins with motion
detection, where Passive Infrared (PIR) sensors continuously monitor the farm perimeter for thermal activity associated
with moving objects. When motion is detected within the sensor's range, the Raspberry Pi computing unit is triggered,
which in turn activates the camera to capture real-time images of the monitored area. The captured images are
processed using the YOLOv8n object detection model, which has been custom-trained on images of monkeys, pigs,
and humans. The model analyses each image to identify and classify objects of interest, drawing bounding boxes
around detected animals and distinguishing between wildlife intrusions and non-target detections.
To improve reliability, WIDS employs a temporal validation mechanism, requiring the detection of a wild animal
(monkey or pig) in five consecutive frames before confirming an intrusion. This prevents false positives caused by
temporary noise, shadows, or sensor disturbances. If a human is detected, the system suppresses the alert to avoid
unnecessary notifications. Once an intrusion is confirmed, an alert is generated and transmitted via the Twilio API to
the farmer’s mobile phone in the form of an SMS notification. This ensures that farmers are informed promptly and
can take immediate action to protect their crops and livestock. After sending the alert, the system enters a reset state,
pausing further detection until new motion activity is recorded. This design reduces redundant alerts and ensures
efficient resource utilization.
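The control flow above can be summarized in a short Python sketch. This is a minimal outline only: wait_for_motion, capture_frame, detect, and send_sms_alert are hypothetical placeholders standing in for the PIR, camera, YOLOv8n, and Twilio modules described in Section 4.3.

```python
# Minimal sketch of the WIDS control flow; the four callables are injected
# placeholders for the PIR, camera, YOLOv8n, and Twilio components.
CONSECUTIVE_FRAMES_REQUIRED = 5            # temporal validation window
WILD_ANIMAL_CLASSES = {"monkey", "pig"}

def detection_cycle(wait_for_motion, capture_frame, detect, send_sms_alert):
    wait_for_motion()                      # block until a PIR sensor fires
    consecutive_hits = 0
    while consecutive_hits < CONSECUTIVE_FRAMES_REQUIRED:
        labels = detect(capture_frame())   # set of class names in the frame
        if "human" in labels or not labels:
            return                         # suppress the alert and reset
        if labels & WILD_ANIMAL_CLASSES:
            consecutive_hits += 1          # intrusion seen in another frame
    send_sms_alert("Wild animal detected on the farm perimeter")
    # the system then idles until new motion activity is recorded
```

Passing the hardware-facing functions in as arguments keeps the decision logic testable without the physical sensors attached.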
4.2 Algorithms used (Modular Description)
4.2.1 Motion detection
Passive Infrared (PIR) sensors are used to continuously monitor the farm perimeter by measuring variations in infrared
radiation within their field of view. A detection event occurs when the absolute change in sensor signal exceeds a
predefined threshold $\theta$:

$$|S_t - S_{t-1}| > \theta \qquad (1)$$

where $S_t$ and $S_{t-1}$ represent the PIR sensor readings at times $t$ and $t-1$. If the condition is satisfied, the Raspberry Pi triggers the connected camera to capture an image. To minimize false activations caused by noise (e.g., wind, vegetation), the detection signal is smoothed using a moving average filter.
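A compact sketch combining the threshold test of Eq. (1) with moving-average smoothing is given below; the threshold and window size are illustrative values, not parameters reported in this work.

```python
from collections import deque

THRESHOLD = 0.5   # theta in Eq. (1); illustrative value
WINDOW = 5        # moving-average window size; illustrative value

class PIRMotionDetector:
    """Applies Eq. (1) to a moving-average-smoothed PIR signal."""

    def __init__(self):
        self.readings = deque(maxlen=WINDOW)
        self.prev_smoothed = None

    def update(self, raw_reading: float) -> bool:
        """Return True when the smoothed signal change exceeds the threshold."""
        self.readings.append(raw_reading)
        smoothed = sum(self.readings) / len(self.readings)  # moving average
        triggered = (self.prev_smoothed is not None
                     and abs(smoothed - self.prev_smoothed) > THRESHOLD)
        self.prev_smoothed = smoothed
        return triggered
```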
Fig. 4: The system architecture for the proposed system.
4.2.2 Object detection
The captured images are processed using the YOLOv8n (You Only Look Once, version 8n) algorithm, a state-of-the-
art deep learning model for real-time object detection. YOLOv8n processes each frame in a single pass, simultaneously
predicting bounding box coordinates, class labels, and confidence scores for objects in the image. In this system, the
model was trained on a custom dataset consisting of monkeys, pigs, and humans. The algorithm outputs bounding
boxes around detected objects, enabling accurate classification of wildlife intrusions while filtering out non-target
detections.
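A minimal single-frame inference sketch using the ultralytics Python package is shown below; the weights path is hypothetical, and the class names assume the three-class dataset described in Section 3.3.

```python
from ultralytics import YOLO

model = YOLO("wids_yolov8n.pt")  # hypothetical path to custom-trained weights

def classify_frame(frame):
    """Run one-pass YOLOv8n detection; return (class_name, confidence) pairs."""
    result = model(frame, verbose=False)[0]  # one image -> one Results object
    return [
        (result.names[int(box.cls)], float(box.conf))  # e.g., ("monkey", 0.87)
        for box in result.boxes
    ]
```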
4.3 Tools used
The proposed system integrates multiple hardware and software components to achieve real-time wildlife intrusion
detection and alert generation. The Raspberry Pi 3 serves as the central processing and control unit, responsible for
coordinating motion detection, image acquisition, object recognition, and communication tasks. Upon motion
detection, the PIR motion sensors, strategically installed around the farm perimeter, trigger the camera module. The
camera then captures high-resolution images of the detected activity and transmits them to the Raspberry Pi 3 for
analysis.
For object detection, the captured images are processed using the YOLOv8n deep learning model, which was trained
on a custom dataset containing annotated images of wild animals (such as monkeys and pigs) and humans. The dataset
was prepared and labeled using makesense.ai, an open-source annotation platform that facilitates the creation of precise
bounding boxes for object detection models. The trained YOLOv8n model executes locally on the Raspberry Pi,
ensuring real-time inference even under limited connectivity.
To enhance user interaction and alert dissemination, the system integrates the Twilio API, which automatically sends
SMS notifications to the farmer's registered mobile number whenever a potential intrusion is detected. The interface
between the hardware components and cloud communication service is managed via Python scripts, ensuring reliable
message delivery and system responsiveness. Overall, the architecture ensures efficient coordination among sensing,
processing, and communication layers, making it suitable for real-world deployment in agricultural environments.
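The alert step maps directly onto the Twilio REST client, as in the hedged sketch below; the credential and phone-number environment variable names are chosen for illustration and are not from the paper.

```python
import os
from twilio.rest import Client

# Credentials and numbers come from illustrative environment-variable names.
client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def send_sms_alert(animal: str, confidence: float) -> None:
    """Send an intrusion alert to the farmer's registered mobile number."""
    client.messages.create(
        body=f"WIDS alert: {animal} detected (confidence {confidence:.2f}).",
        from_=os.environ["TWILIO_FROM_NUMBER"],  # Twilio-provisioned sender
        to=os.environ["FARMER_PHONE_NUMBER"],    # farmer's registered number
    )
```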
4.4 Interface design
The proposed system operates autonomously without a graphical user interface (GUI), functioning continuously in the
background to ensure uninterrupted monitoring and timely alerts. Nevertheless, the interface design can be
conceptualized in two layers—hardware and software—that together enable seamless data flow and control.
The hardware interface integrates the Raspberry Pi 3 with peripheral modules such as the PIR motion sensors and the
camera. The sensors detect motion within the designated farm area, while the camera captures images of the detected
activity. These components are connected to the Raspberry Pi through GPIO pins and standard communication
protocols (e.g., I²C and USB), allowing synchronized triggering and image acquisition.
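As a sketch of this trigger path (assuming the gpiozero and picamera2 libraries and an illustrative GPIO pin, since the paper specifies neither), motion-gated image capture might look as follows.

```python
from gpiozero import MotionSensor
from picamera2 import Picamera2  # assumed Raspberry Pi camera stack

pir = MotionSensor(4)            # PIR output wired to GPIO 4 (illustrative pin)
camera = Picamera2()
camera.start()

def capture_on_motion():
    """Block until the PIR sensor fires, then return one camera frame."""
    pir.wait_for_motion()
    return camera.capture_array()  # RGB array handed to the YOLOv8n model
```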
The software interface is responsible for system logic and decision-making. It includes the YOLOv8n object detection
model, which processes captured images to identify animals or humans; the Twilio API, which manages the automated
SMS alerting mechanism; and a set of Python scripts that coordinate communication between hardware modules and
software services. The codebase handles data preprocessing, inference execution, threshold-based event triggering,
and message dispatch.
Overall, the interface design emphasizes robust and efficient interaction between hardware and software components,
ensuring accurate detection, minimal latency, and reliable alert generation under real-world operating conditions.
5. Testing
The testing phase aimed to assess the effectiveness, robustness, and reliability of the Wildlife Intrusion Detection
System (WIDS) under real-world operating conditions. The primary objectives were to evaluate the system’s ability
to (i) detect motion accurately, (ii) correctly identify wild animals such as monkeys and pigs, (iii) differentiate them
from humans and background movement, and (iv) generate timely SMS alerts to the farmer through the Twilio
communication module.
Testing was conducted in field conditions with varying environmental factors, including daytime and nighttime
illumination, different weather conditions (sunny, cloudy, and light rain), and animal movement speeds. These
variations were introduced to examine the system’s robustness and ensure dependable performance across realistic
scenarios.
The evaluation incorporated standard performance metrics, including Precision (P), Recall (R), F1-score, Intersection
over Union (IoU), and mean Average Precision (mAP). Additionally, inference latency (the time taken from image capture to alert generation) was measured to assess real-time feasibility. The system achieved an overall detection accuracy of 80–85%, with performance variations primarily observed under low-light or partially occluded conditions.
These results demonstrate that WIDS offers reliable and timely detection suitable for field deployment in agricultural
environments.
5.1 Testing environment
Hardware: Raspberry Pi 3, PIR Motion Sensors, Camera
Software: YOLOv8n model, Twilio API for SMS alerts
Test Phases:
Unit Testing: Testing individual components such as the motion sensors, camera, Raspberry Pi functionality, and Twilio integration separately.
Integration Testing: Testing the interaction between different components of the system to ensure seamless
communication and functionality.
System Testing: Testing the entire system end-to-end in the farm environment to evaluate its performance in real-
world conditions.
5.2 Test cases
Motion detection:
Test Case 1: Verify that PIR motion sensors detect motion within the designated range.
Test Case 2: Ensure that motion detection triggers the camera to capture images.
Test Case 3: Validate that motion detection events are logged accurately by the Raspberry Pi.
Object detection:
Test Case 1: Confirm that the YOLOv8n model accurately detects wild animals (monkeys and pigs) in captured images.
Test Case 2: Verify that the model distinguishes between wild animals and humans.
Test Case 3: Ensure that bounding boxes are correctly drawn around detected animals in the images.
Alerting system:
Test Case 1: Test Twilio integration to verify that SMS alerts are sent to the farmer's mobile phone.
Test Case 2: Validate that SMS alerts are triggered only when wild animals (monkeys or pigs) are detected for five consecutive frames.
Test Case 3: Confirm that SMS alerts cease if a human is detected or if no detection occurs.
5.3 Testing methods used
Manual testing: Conducted by human testers to ensure that all functionalities of the system work as expected.
Automated testing: Utilized scripts to automate repetitive testing tasks such as motion detection, object detection,
and alert triggering.
Field testing: Deployed the system in the farm environment to assess its performance under real-world conditions,
including varying light conditions and weather.
6. Results and discussion
The experimental evaluation of the proposed Wildlife Intrusion Detection System (WIDS) demonstrated reliable and
consistent performance under varying field conditions. The results validate the system’s ability to detect motion,
identify animal species, and generate alerts promptly and accurately.
6.1 Quantitative performance metrics
Quantitative evaluation was conducted using standard performance measures, including Precision, Recall, F1-score,
and Intersection over Union (IoU), defined as:

$$\text{Precision} = \frac{TP}{TP + FP} \qquad (2)$$

$$\text{Recall} = \frac{TP}{TP + FN} \qquad (3)$$

$$F1\text{-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (4)$$

$$IoU = \frac{|A \cap B|}{|A \cup B|} \qquad (5)$$

where TP, FP, and FN denote the number of true positives, false positives, and false negatives, respectively, and A and B denote the predicted and ground-truth bounding boxes.
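For reference, Eqs. (2)–(5) translate directly into code; the sketch below assumes axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates.

```python
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)                      # Eq. (2)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)                      # Eq. (3)

def f1_score(p: float, r: float) -> float:
    return 2 * p * r / (p + r)                 # Eq. (4)

def iou(a, b) -> float:
    """Eq. (5) for axis-aligned boxes (x1, y1, x2, y2)."""
    inter_w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    inter_h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = inter_w * inter_h
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union
```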
The WIDS achieved an overall accuracy of 80–85%, with Precision = 0.84, Recall = 0.82, F1-score = 0.83, and IoU = 0.78,
confirming strong generalization performance for real-time detection tasks.
6.2 Confusion matrix
Fig. 5 presents the confusion matrix, which illustrates the classification performance for each target class—monkey,
pig, and human. High diagonal values indicate strong true-positive rates for monkeys and pigs, with minimal false-
positive detections for humans. False negatives primarily occurred in low-illumination or partial-occlusion scenarios,
emphasizing the need for additional infrared or thermal sensing in future iterations. This visualization validates the
model’s discriminative capability across categories.
Fig. 5: Confusion matrix.
6.3 F1 confidence curve
Fig. 6 shows the F1 confidence curve, representing the balance between precision and recall across varying confidence
thresholds. The curve peaks at a confidence threshold of 0.52, which provides the optimal trade-off between
minimizing false positives and maximizing detection sensitivity—ideal for real-time farm deployment.
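In deployment, this operating point can simply be passed as the confidence threshold at prediction time; a one-line sketch, assuming the ultralytics API and a hypothetical weights path:

```python
from ultralytics import YOLO

model = YOLO("wids_yolov8n.pt")                  # hypothetical weights path
results = model.predict("frame.jpg", conf=0.52)  # operating point from Fig. 6
```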
6.4 Precision–recall curve
Fig. 7 depicts the precision–recall (PR) curve of the trained YOLOv8n model. The high area under the PR curve (≈
0.83) confirms consistent performance across confidence levels, indicating the model’s strong ability to maintain
both high recall and precision even in complex backgrounds.
6.5 Precision confidence curve
Fig. 8 presents the precision confidence curve, which demonstrates that precision remains stable up to a confidence
threshold of 0.7, after which a gradual decline is observed. This behavior confirms that the system can maintain high
precision for moderate confidence values, reducing the likelihood of false alerts under uncertain conditions.
Fig. 6: F1 confidence curve.
Fig. 7: Precision recall curve.
Fig. 8: Precision confidence curve.
6.6 Recall confidence curve
Fig. 9 illustrates the recall confidence curve, where recall is highest at lower confidence thresholds and decreases as
the threshold increases. The optimal operational point is around 0.5, providing a balanced compromise between
detection sensitivity and alert reliability. Together, Figs. 6–9 validate the robustness of the YOLOv8n-based detection
model and inform threshold selection for field deployment.
Fig. 9: Recall confidence curve.
6.7 Summary of findings
Motion detection: PIR sensors performed accurately within the designated range, reliably triggering image capture.
Object detection: The YOLOv8n model achieved strong detection accuracy for monkeys and pigs, with minimal
confusion between classes.
Alerting system: The Twilio API successfully dispatched SMS alerts for five consecutive detections of wild animals
and halted alerts when no target objects were identified.
Latency: Average inference latency was under 0.6 s per frame, confirming real-time feasibility.
Overall, the proposed WIDS achieved robust detection accuracy, operational stability, and fast response time under
diverse environmental conditions—making it suitable for real-world agricultural protection and early-warning
applications.
7. Conclusion
The proposed Wildlife Intrusion Detection System represents a significant advancement in addressing the challenges posed by wildlife intrusions in agricultural settings. By leveraging modern technologies such as the Raspberry Pi, PIR motion sensors, cameras, and machine learning algorithms, the system provides a robust and efficient solution for
detecting and deterring wild animals from entering farms. The proposed system provides real-time alerts to farmers,
enabling them to take timely and proactive measures to protect their crops and livestock from wild animals such as monkeys and pigs, thereby enhancing farm security. Automation of the detection process through the integration of sensors and machine learning models reduces the need for constant manual monitoring, allowing farmers to focus on other essential tasks and improving overall efficiency. The use of affordable and readily available hardware
components, along with open-source software tools, makes the system accessible to farmers with varying budget
constraints. The modular design of the system allows for easy scaling and adaptation to farms of different sizes and
configurations, ensuring flexibility for future expansion and deployment. The system’s reliance on proven technologies
and its ease of use support its feasibility and practicality for real-world agricultural applications. Throughout the
development process, we addressed various challenges such as sensor integration, model training, and minimizing
false positives. The custom dataset of 1,300 images and the YOLOv8n model trained for 80 epochs have enabled accurate detection and classification of target animals. Additionally, the real-time communication
of alerts via SMS using Twilio ensures that farmers are promptly informed of potential intrusions.
Conflict of Interest
There is no conflict of interest.
Supporting Information
Not applicable
Use of artificial intelligence (AI)-assisted technology for manuscript preparation
The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing
or editing of the manuscript and no images were manipulated using AI.
References
[1] N. Abed, R. Murugan, A. Deldari, S. Sankarannair, M. V. Ramesh, IoT and AI-driven solutions for human-wildlife
conflict: Advancing sustainable agriculture and biodiversity conservation, Smart Agricultural Technology, 2025, 10,
100829, doi: 10.1016/j.atech.2025.100829.
[2] M. Kommineni, M. Lavanya, V. H. Vardhan, G. J. Kumar, V. S. Shaik, T. Gouri Sankar, Agricultural farms utilizing computer vision (AI) and machine learning techniques for animal detection and alarm systems, Journal of Pharmaceutical Negative Results, 2022, 13, 3292–3300, doi: 10.47750/pnr.2022.13.S09.411.
[3] T. S. Delwar, S. Mukhopadhyay, A. Kumar, M. Singh, Y.-w. Lee, J.-Y. Ryu, A. S. M. S. Hosen, Real-time farm
surveillance using IoT and YOLOv8 for animal intrusion detection, Future Internet, 2025, 17, 70, doi:
10.3390/fi17020070.
[4] H. D. Patil, N. F. Ansari, Intrusion detection and repellent system for wild animals using artificial intelligence of
things, 2022 International Conference on Computing, Communication and Power Technology (IC3P), Visakhapatnam,
India, 2022, 291-296, doi: 10.1109/IC3P52835.2022.00068.
[5] J. Miao, D. Rajasekhar, S. Mishra, S. K. Nayak, R. Yadav, A microservice-based smart agriculture system to detect
animal intrusion at the edge, Future Internet, 2024, 16, 296, doi: 10.3390/fi16080296.
[6] K. Bazargani, T. Deemyad, Automation’s impact on agriculture: opportunities, challenges, and economic effects,
Robotics, 2024, 13, 33, doi: 10.3390/robotics13020033.
[7] A. J. Simla, R. Chakravarthi, L. M. Leo, Agricultural intrusion detection (AID) based on the internet of things and
deep learning with the enhanced lightweight M2M protocol, Soft Computing, 2023, doi: 10.1007/s00500-023-07935-1.
[8] N. S. Sayem, S. Chowdhury, A. H. M. Osama Haque, Md. Rostom Ali, Md. Shahinur Alam, S. Ahamed, C. K.
Saha, IoT-based smart protection system to address agro-farm security challenges in Bangladesh, Smart Agricultural
Technology, 2023, 6, 100358, doi: 10.1016/j.atech.2023.100358.
[9] P. Balakrishnan, A. Anny Leema, G. Gnana Kiruba B, A. Gupta, R. Aryan, Deep-track: A real-time animal detection
and monitoring system for mitigating human-wildlife conflict in fringe areas, Journal for Nature Conservation, 2025,
88, 127063, doi: 10.1016/j.jnc.2025.127063.
[10] M. A. K. Raiaan, N. M. Fahad, S. Chowdhury, D. Sutradhar, S. S. Mihad, M.M. Islam, IoT-based object-detection
system to safeguard endangered animals and bolster agricultural farm security, Future Internet, 2023, 15, 372, doi:
10.3390/fi15120372.
[11] V. Moorthy, V. Rukkumani, Deep vision-based wildlife intrusion detection with colour distribution preserved
generative adversarial networks, Pattern Recognition, 2025, 161, 111272, doi: 10.1016/j.patcog.2024.111272.
[12] P. Muragod, Viswavardhan Reddy K, Animal intrusion detection using various deep learning models, 2022 IEEE
North Karnataka Subsection Flagship International Conference (NKCon), Vijaypur, India, 2022, 1-12, doi:
10.1109/NKCon56289.2022.10126526.
[13] T. Diwan, G. Anirudh, J. V. Tembhurne, Object detection using YOLO: challenges, architectural successors,
datasets and applications, Multimedia Tools and Applications, 2023, 82, 9243–9275, doi: 10.1007/s11042-022-13644-y.
[14] B. Dave, M. Mori, A. Bathani, P. Goel, Wild animal detection using YOLOv8, Procedia Computer Science, 2023,
230, 100-111, doi: 10.1016/j.procs.2023.12.065.
[15] L. Chen, G. Li, S. Zhang, W. Mao, M. Zhang, YOLO-SAG: An improved wildlife object detection algorithm
based on YOLOv8n, Ecological Informatics, 2024, 83, 102791, doi: 10.1016/j.ecoinf.2024.102791.
[16] Z. Ma, Y. Dong, Y. Xia, D. Xu, F. Xu, F. Chen, Wildlife real-time detection in complex forest scenes based on
YOLOv5s deep learning network, Remote Sensing, 2024, 16, 1350, doi: 10.3390/rs16081350.
[17] R. Nayak, M. M. Behera, U. C. Pati, S. K. Das, Video-based real-time intrusion detection system using deep-
learning for smart city applications, 2019 IEEE International Conference on Advanced Networks and
Telecommunications Systems (ANTS), Goa, India, 2019, 1-6, doi: 10.1109/ANTS47819.2019.9117960.
[18] S. Li, Z. Wang, Y. Lv, X. Liu, Improved YOLOv5s-based algorithm for foreign object intrusion detection on
overhead transmission lines, Energy Reports, 2024, 11, 6083-6093, doi: 10.1016/j.egyr.2024.05.061.
[19] P. C. Ravoor, T. S. B. Sudarshan, K. Rangarajan, Digital borders: design of an animal intrusion detection system based on deep learning, in: S. K. Singh, P. Roy, B. Raman, P. Nagabhushan (eds), Computer Vision and Image Processing, CVIP 2020, Communications in Computer and Information Science, 2021, 1378, Springer, Singapore, doi: 10.1007/978-981-16-1103-2_17.
[20] S. Sajithra Varun, G. Nagarajan, DeepAID: a design of smart animal intrusion detection and classification using
deep hybrid neural networks, Soft Computing, 2023, doi: 10.1007/s00500-023-08270-1.
[21] M. Mulero-Pázmány, S. Hurtado, C. Barba-González, M. L. Antequera-Gómez, F. Díaz-Ruiz, R. Real, I. Navas-
Delgado, J. F. Aldana-Montes, Addressing significant challenges for animal detection in camera trap images: a novel
deep learning-based approach, Scientific Reports, 2025, 15, 16191, doi: 10.1038/s41598-025-90249-z.
[22] D. Axford, F. Sohel, M. A. Vanderklift, A. J. Hodgson, Collectively advancing deep learning for animal detection in drone imagery: successes, challenges, and research gaps, Ecological Informatics, 2024, 83, 102842, doi: 10.1016/j.ecoinf.2024.102842.
Publisher Note: The views, statements, and data in all publications solely belong to the authors and contributors. GR
Scholastic is not responsible for any injury resulting from the ideas, methods, or products mentioned. GR Scholastic
remains neutral regarding jurisdictional claims in published maps and institutional affiliations.
Open Access
This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which
permits the non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as appropriate credit to the original author(s) and the source is given by providing a link to the Creative Commons
License and changes need to be indicated if there are any. The images or other third-party material in this article are
included in the article's Creative Commons License, unless indicated otherwise in a credit line to the material. If
material is not included in the article's Creative Commons License and your intended use is not permitted by statutory
regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view
a copy of this License, visit: https://creativecommons.org/licenses/by-nc/4.0/
© The Author(s) 2025