CN116994389A - Monitoring alarm driving system and method based on artificial intelligence and image recognition - Google Patents

Monitoring alarm driving system and method based on artificial intelligence and image recognition

Info

Publication number
CN116994389A
Authority
CN
China
Prior art keywords: monitoring, module, alarm, face, image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310777454.9A
Other languages
Chinese (zh)
Inventor
舒志兵
严俊鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tech University filed Critical Nanjing Tech University
Priority to CN202310777454.9A priority Critical patent/CN116994389A/en
Publication of CN116994389A publication Critical patent/CN116994389A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 - Burglar, theft or intruder alarms
    • G08B 13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/19 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using infrared-radiation detection systems
    • G08B 13/191 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using infrared-radiation detection systems using pyroelectric sensor means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 - Burglar, theft or intruder alarms
    • G08B 13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Alarm Systems (AREA)

Abstract

The invention discloses a monitoring alarm driving system and method based on artificial intelligence and image processing. The system comprises a biological monitoring module, an image data acquisition module, a server, an alarm driving module and a mobile terminal; the output end of the biological monitoring module is connected with the input end of the image data acquisition module, the output end of the image data acquisition module is connected with the input end of the server, and the output end of the server is connected with the alarm driving module and the mobile terminal respectively. Compared with traditional monitoring alarm systems aimed only at people, the invention provides diversified alarm and driving modes: besides the audible alarm, a gas driving device is added to drive away large animals. The species recognition algorithm, the face recognition algorithm and the body type recognition algorithm complement one another, so the identity of an intruder can be judged more reliably and corresponding countermeasures can be taken. After the alarm driving system is started, the server sends prompt information to the mobile terminal, so the user is informed in time and can view the site situation through the mobile device.

Description

Monitoring alarm driving system and method based on artificial intelligence and image recognition
Technical Field
The invention belongs to the technical field of monitoring devices, and particularly relates to a monitoring alarm driving system and method based on artificial intelligence and image recognition.
Background
With economic development, people pay increasing attention to personal and property safety. A monitoring alarm system mainly comprises a high-definition camera, sensors and an alarm device: the camera is responsible for capturing and recording on-site images, the sensors are mainly responsible for sensing biological information, the alarm system is started once an intruder enters the monitoring range, and the alarm sounds to warn the intruder, so that property safety is protected.
Traditional monitoring alarm devices rely mainly on sensors that monitor biological information for alarming: when a living organism intrudes into the monitoring range, the alarm device is started. With the development and application of artificial intelligence, image recognition technology has gradually been applied to monitoring alarm systems. In the prior art, for example, Chinese patent No. CN103514694B applies face recognition technology in a monitoring alarm system: the camera compares the monitored face image with the face images stored in a face library; if the person is judged not to be a stranger, the alarm is not triggered, otherwise the alarm is started, protecting personal and property safety.
Such systems still suffer from the following disadvantage:
monitoring alarm systems were originally designed to protect personal property against human intruders. As society has developed, incidents of theft and deliberate destruction of property have declined year by year, but property losses caused by animals have not improved. This is mainly because most monitoring alarm systems focus their protection on monitoring people and neglect the damage that animals may cause, and because the alarm mode is mainly an audible alarm, which has no effective driving effect on large animals.
Therefore, we propose a monitoring alarm driving system and method based on artificial intelligence and image recognition to solve the above-mentioned problems.
Disclosure of Invention
(I) Technical problems to be solved
The invention aims to solve the problem that the prior art does not consider damage caused by animals and intrusion by people at the same time, and provides a corresponding alarm driving method. The type of organism intruding into the monitoring area is judged through a species identification algorithm, a face recognition algorithm and a body type recognition algorithm, different alarm driving devices are started according to the judgment result, and these are integrated into a monitoring alarm driving system and method based on artificial intelligence and image recognition.
(II) technical scheme
The technical scheme for solving the technical problems is as follows:
the monitoring alarm driving system based on artificial intelligence and image processing comprises a biological monitoring module, an image data acquisition module, a server, an alarm driving module and a mobile terminal, wherein the output end of the biological monitoring module is connected with the input end of the image data acquisition module, and the output end of the image data acquisition module is connected with the input end of the server; the output end of the server is connected with the alarm driving module and the mobile terminal respectively; the biological monitoring module is used for monitoring biological thermal radiation information in a collection range in real time, wherein the image data collection module is used for receiving biological signals monitored by the biological monitoring module, and the image data collection module is used for collecting real-time image data information of the living beings in the collection range and transmitting the real-time image data information to the server; the server comprises a data processing module and a communication module, wherein the data processing module is connected with the communication module; the data processing module is used for processing the acquired real-time image data information, and the communication module is used for respectively sending the processed real-time image data information to the alarm driving module and the mobile terminal; the alarm driving module is used for receiving the real-time image data information transmitted by the communication module, and driving the intruder entering the monitoring range according to the corresponding real-time image data information; the mobile terminal is used for receiving the real-time image data information and watching, and simultaneously receives the alarm information.
Further, the biological monitoring module is a pyroelectric infrared sensor cluster with a biological monitoring function, wherein the pyroelectric infrared sensor cluster is used for monitoring biological thermal radiation information in an acquisition range in real time and sending a signal to the image acquisition module when the biological information is monitored.
Further, the image data acquisition module comprises a camera, a memory and a loudspeaker; the camera is a high-definition night vision camera with a gyroscope and an acceleration sensor, wherein the camera is used for recording real-time images; the memory comprises a memory card memory and a server memory, wherein the memory is used for storing the real-time image into the local SD memory card and storing the image information into the server through a wireless network.
Further, the communication module adopts both wired and wireless communication between devices, and the wireless communication uses either wireless Bluetooth communication or wireless network communication, chosen according to the communication distance and data transmission speed required between the devices.
Further, the alarm driving module comprises an audible and visual alarm and a gas driving device; the audible and visual alarm drives away intruders entering the monitoring range with high-decibel warning sound and strong light, while the gas driving device is a refillable gas release device, arranged at intervals so as to cover the whole monitored protection area, which drives away large animals that may cause damage by releasing a pungent-smelling gas.
Further, the server is one or more centralized or distributed servers; the server includes, but is not limited to, any of a number of PCs, PC servers, blade servers, supercomputers or cloud servers on the same local area network or linked through the Internet.
Further, the mobile terminal may be any one of a mobile phone, a tablet computer, a notebook computer, an intelligent bracelet and other intelligent devices capable of receiving alarm information.
A monitoring alarm driving method based on artificial intelligence and image recognition comprises the following steps:
s10, detecting biological signals: the pyroelectric infrared sensor monitors whether biological activity information exists in a monitoring range, and the image data acquisition module and the alarm driving module are in a standby state;
s20, image data acquisition: when a living organism enters a monitoring range, the pyroelectric infrared sensor senses biological information, the camera is started and aims at the direction of the invading living organism to acquire real-time image data, then the acquired image data information is sent to the server through the communication module, and the data processing module carries out preprocessing on the image data, including filtering, noise reduction and compression;
s30, species characteristic extraction: the data processing module is internally provided with three algorithms of species identification, face recognition and body type identification, the species identification algorithm is used for judging the species of the invading organism, a confidence score threshold is set, if the confidence score of the predicted species exceeds the threshold, the species prediction is correct, different alarm driving modes are adopted according to different biological species, and the server sends a prompt message to the mobile terminal after the alarm driving module is started;
S40, model training: matching the image data set of common animals and people with the prediction model, outputting a training set conforming to the expected matching results, and training on the extracted training set with the yolo algorithm to construct an accurate species identification model;
s50, species identification prediction: and predicting and classifying the bounding box by using the constructed species identification model to determine the species category to which the bounding box belongs.
Further, in the step S30, the method further includes the steps of:
s301, judging that the small animals enter a monitoring range according to a species identification algorithm, and driving the small animals through an audible and visual alarm; when the species identification algorithm judges that the large animal enters the monitoring range, the gas expelling device releases the pungent odor gas to expel the large animal;
S302, when the species recognition algorithm judges that a human has entered the monitoring range, the data processing module starts the face recognition algorithm, performs similarity matching between the face information of the person and all the face feature information stored in the server in advance, and sets a similarity threshold; if the similarity exceeds the threshold, the person is considered to be security personnel and no alarm is triggered; otherwise the matching is regarded as failed, and the loudspeaker issues a warning voice prompting the outside person to leave the monitoring area as soon as possible; if the person stays in the monitoring area, the person is considered an external intruder, the audible and visual alarm is started to warn and drive the person away, and meanwhile the server sends a prompt message to the mobile terminal;
S303, starting a body type recognition algorithm to assist in judging according to the fact that the species recognition algorithm does not recognize the biological species entering the monitoring range, namely, when the confidence score does not exceed a threshold value; estimating the real body type of the invasive living being through the distance d between the camera module and the invasive living being and the pixel size of the real body type in the camera module, and selecting a corresponding driving mode according to the different body type sizes; the large animals are driven by the gas driving device, and the small animals are driven by the audible and visual alarm.
Further, in the step S40, the yolo algorithm trains the data set to obtain a species identification model, and the specific steps include:
s401, collecting and labeling pictures of common animals and people in life, and manufacturing a tag data set;
s402, dividing a picture into S multiplied by S grids, wherein each grid is used for predicting B bounding boxes, and each bounding box has a Confidence (Confidence) in addition to the size and the position to represent the probability that an object exists in the bounding box;
S403, calculating the positions x, y, w, h of the bounding box and its confidence, wherein x, y are the distances between the bounding box and the center of the whole picture, and w, h represent the ratio of the bounding box width and height to the whole picture; the confidence is calculated according to the following formula:
Confidence = Pr(Object) × IOU
where Pr(Object) indicates whether the center of an object lies in the bounding box (the value is 1 if it does, 0 otherwise), and IOU is the intersection-over-union ratio between the predicted bounding box and the true object position;
S404, predicting C conditional class probabilities Pr(Class|Object) for each grid, i.e. the probability that an object present in the grid belongs to a certain class; the class-specific score is calculated according to the corresponding formula:
Pr(Class_i|Object) × Pr(Object) × IOU = Pr(Class_i) × IOU;
S405, continuously calculating the loss function (loss) as a sum of squared errors, training the species identification model according to the iteratively calculated loss value, and terminating training once the model tends to a fitted state.
Here the loss function is loss = λcoord × coordinate prediction error + bounding box confidence error (with object) + λnoobj × bounding box confidence error (without object) + classification error.
Further, in step S50, the prediction by the species identification algorithm comprises the following steps:
for all bounding boxes, a confidence threshold is set and every confidence value smaller than the threshold is reset to zero; the confidence scores Score_ij of the remaining bounding boxes are then calculated, where Score_ij represents the likelihood that an object C_i is present in the j-th bounding box:
Score_ij = P(C_i | Object) × Confidence_j
Non-maximum suppression (NMS) is then performed on all the scores: a Score threshold is set and every Score below it is zeroed; for a given object category, the largest Score and its corresponding bounding box are found and added to the output list; for the remaining bounding boxes whose Score is not 0, the IOU (intersection over union) between each of these boxes and the box with the largest Score is calculated, an IOU threshold is set, and the Score of any box exceeding that threshold is zeroed; the processing of this category is complete when every Score is either 0 or in the output list. All object categories are processed in this way; the objects left in the output list are the objects predicted by the model, and their corresponding Scores are their confidence scores.
Further, in the step S302, the specific steps of the face recognition algorithm include:
s3021, face detection: positioning and determining the position of a face to be detected in the image;
s3022, positioning key points: determining key point positions in the face, such as eyes, nose, mouth and the like;
s3023, face correction: for the positioned face feature points, the feature points are aligned through geometric transformation (affine, rotation and scaling), and the eyes, the mouths and the like are moved to the same positions;
s3024, face feature extraction: extracting characteristic information in the face, such as the shape of organs of the face and the position relation between the organs to obtain characteristic data of the face;
s3025, similarity calculation: and comparing the face features in the video with face feature information stored in a face library, and calculating the similarity, wherein the similarity represents the possibility that the detected face and the face in the face library are the same face.
Further, in the step S303, the body type recognition algorithm specifically includes:
S3031, at the distance d, measuring through the gyroscope the pixel change j corresponding to a unit change in camera angle, in pixels per degree;
S3032, at the distance d, rotating and moving the camera so that the pixel displacement of the target object in the camera is P; the gyroscope measures the angle change θ2 and the acceleration sensor measures the spatial displacement l2 of the system. P is the combined result of the pixel change j·θ2 caused by the change in camera angle and the pixel change k·l2 caused by the spatial displacement, so that k = (P − j·θ2)/l2, where k is the pixel change corresponding to a unit spatial displacement, in pixels per metre;
S3033, measuring the approximate distance d between the invading organism and the camera with the pyroelectric sensor cluster, obtaining the pixel height P0 of the organism in the camera, and estimating the actual height (body size) of the animal according to the formula l0 = P0/k.
(III) beneficial effects
Compared with the prior art, the technical scheme of the application has the following beneficial technical effects:
The beneficial effects of the application are as follows: (1) Compared with traditional monitoring alarm systems aimed only at people, the possibility of animal damage is considered, so personal and property safety can be better protected;
(2) The alarm and driving modes are diversified: besides the audible alarm, a gas driving device is added to drive away large animals;
(3) The species recognition algorithm, the face recognition algorithm and the body type recognition algorithm complement one another, so different scene conditions can be handled, the identity of an intruder can be judged more reliably, and corresponding countermeasures can be taken;
(4) After the alarm driving system is started, the server sends prompt information to the mobile terminal, so the user is informed in time and can view the site situation through the mobile device.
The foregoing description is only an overview of the present invention, and is intended to provide a better understanding of the present invention, as it is embodied in the following description, with reference to the preferred embodiments of the present invention and the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of a supervisory alarm driving system based on artificial intelligence and image recognition according to the present invention;
FIG. 2 is a flow chart of a monitoring alarm driving method based on artificial intelligence and image recognition according to the present invention;
FIG. 3 is a schematic diagram of a feature extraction network of the yolo algorithm of the present invention;
fig. 4 is a schematic flow chart of face recognition in the face recognition algorithm of the present invention;
fig. 5 is a schematic diagram of a measurement method and vector addition structure of the body type recognition algorithm of the present invention.
Wherein: 1. a biological monitoring module; 101. a pyroelectric infrared sensor; 2. an image data acquisition module; 201. a camera; 202. a memory; 203. a speaker; 3. a server; 301. a data processing module; 302. a communication module; 4. an alarm driving module; 401. an audible and visual alarm; 402. a gas driving device; 5. a mobile terminal.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a monitoring alarm driving system based on artificial intelligence and image processing comprises a biological monitoring module 1, an image data acquisition module 2, a server 3, an alarm driving module 4 and a mobile terminal 5, wherein the output end of the biological monitoring module 1 is connected with the input end of the image data acquisition module 2, and the output end of the image data acquisition module 2 is connected with the input end of the server 3; the output end of the server 3 is respectively connected with the alarm driving module 4 and the mobile terminal 5; the biological monitoring module 1 is used for monitoring biological thermal radiation information in a collection range in real time, wherein the image data collection module 2 is used for receiving biological signals monitored by the biological monitoring module 1, and the image data collection module 2 is used for collecting real-time image data information of the living beings in the collection range and transmitting the real-time image data information to the server 3; the server 3 comprises a data processing module 301 and a communication module 302, wherein the data processing module 301 and the communication module 302 are connected; the data processing module 301 is configured to process the acquired real-time image data information, where the communication module 302 is configured to send the processed real-time image data information to the alarm driving module 4 and the mobile terminal 5, respectively; the alarm driving module 4 is configured to receive the real-time image data information transmitted by the communication module 302, where the alarm driving module 4 performs driving processing on an intruder entering the monitoring range according to the corresponding real-time image data information; the mobile terminal 5 is used for receiving and watching the real-time image data information, and the mobile terminal 5 simultaneously receives the alarm information sent by the alarm driving module 4.
Specifically, the biological monitoring module 1 is a pyroelectric infrared sensor 101 cluster with a biological monitoring function, wherein the pyroelectric infrared sensor 101 cluster is used for monitoring biological thermal radiation information in an acquisition range in real time and sending a signal to the image acquisition module 2 when the biological information is monitored; by forming a sensor cluster by a plurality of pyroelectric sensors 101 to surround a monitoring area, the occurrence of monitoring dead angles can be avoided, and the full coverage protection of the area can be realized.
Specifically, the image data acquisition module 2 includes a camera 201, a memory 202, and a speaker 203; the camera 201 is a high-definition night vision camera with a gyroscope and an acceleration sensor, so clear video data can be recorded at night, and the rotation angle of the camera 201 and its displacement can be calculated by the image data acquisition module 2 through the built-in gyroscope and acceleration sensor; the image data acquisition module 2 may be a single high-resolution network camera, or a camera matrix formed by a plurality of high-resolution network cameras in order to improve detection accuracy.
Specifically, the memory 202 includes memory card storage and server storage, where the memory 202 is configured to store real-time images on a local SD memory card and to store image information in the server via the wireless network; the memory card storage uses a 128 GB large-capacity SD memory card, mainly for storing images of organisms entering the monitoring range locally in real time, so that images can still be recorded when network communication fails. The server storage is wireless network storage: image data are stored in the server through the communication module, and the user can watch the on-site images directly on the mobile terminal 5, which makes it convenient and efficient to learn of the site conditions in time.
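By way of illustration, a minimal Python sketch of this dual storage path is given below; the directory name and the upload helper are hypothetical placeholders, and the behaviour on network failure is an assumption consistent with the description above, not the patented implementation.

```python
import os
import time

SD_CARD_DIR = "sd_card_backup"   # hypothetical mount point of the local SD memory card

def upload_to_server(image_bytes: bytes, filename: str) -> None:
    """Hypothetical wireless-network upload to the server; assumed to raise
    ConnectionError when network communication fails."""
    raise ConnectionError("network unavailable (simulated for this sketch)")

def store_frame(image_bytes: bytes) -> str:
    """Write the frame to the local SD card first, then attempt the server upload."""
    os.makedirs(SD_CARD_DIR, exist_ok=True)
    filename = f"frame_{int(time.time() * 1000)}.jpg"
    with open(os.path.join(SD_CARD_DIR, filename), "wb") as f:
        f.write(image_bytes)              # local copy survives a network outage
    try:
        upload_to_server(image_bytes, filename)
    except ConnectionError:
        pass                              # image is still recorded on the memory card
    return filename

if __name__ == "__main__":
    print(store_frame(b"\xff\xd8\xff\xe0 fake jpeg bytes"))
```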
Specifically, the communication module 302 adopts two communication modes, namely wired and wireless, the wireless communication includes bluetooth communication and wireless network communication, the communication module is used for transmitting signals and data among the devices, different communication modes are adopted according to different communication distances and data transmission speeds, bluetooth communication is adopted among local devices, and wireless network communication is adopted for data transmission of a server.
Specifically, the alarm driving module 4 includes an audible and visual alarm 401 and a gas driving device 402; the audible and visual alarm 401 drives away intruders entering the monitoring range with high-decibel warning sound and strong light, while the gas driving device 402 is a refillable gas release device arranged at intervals so as to cover the whole monitored protection area, which drives away large animals that may cause damage by releasing a pungent-smelling gas. The audible and visual alarm 401 is a common audible and visual alarm that, once started, drives away the invading animal by emitting a high-decibel alarm sound and strong light; the gas driving device 402 is a device that releases a non-toxic or low-toxicity pungent-smelling gas, stores the gas in a bottom storage tank, can release it multiple times, and can be refilled through a gas filling hole at the top. With several gas driving devices installed in combination, gas driving of the whole monitored protection area can be realized, protecting the user's property without affecting animal health.
Specifically, the server 3 is one or more centralized or distributed servers; the server includes, but is not limited to, any of a number of PCs, PC servers, blade servers, supercomputers and cloud servers connected on the same local area network or through the Internet. The main functions of the server include receiving the video data transmitted by the camera module and performing filtering, denoising and compression on it; a species recognition algorithm, a face recognition algorithm and an animal body type recognition algorithm are also provided in the server 3 to handle the different organisms that may enter the monitoring range, with different countermeasures adopted according to the situation, and the server 3 sends prompt information to the mobile terminal 5 after the alarm driving module 4 is started. The server 3 also provides an online access function, so the user can access it directly from the mobile terminal 5 to obtain video information of the monitored site.
Specifically, the mobile terminal 5 may be any of several electronic devices such as a computer, tablet or mobile phone, connected to the server through the wireless network; the mobile terminal can watch the real-time images stored in the server over the network, and the server also supports several kinds of prompt messages sent to the mobile terminal, such as SMS prompts, telephone prompts, and QQ or WeChat official account prompts, which the user can configure, so that the user learns of the situation within the monitoring range in the shortest time.
Referring to fig. 2, the invention also provides a monitoring alarm driving method based on artificial intelligence and image recognition, which comprises the following steps:
s10, detecting biological signals: the pyroelectric infrared sensor monitors whether biological activity information exists in a monitoring range, and the image data acquisition module and the alarm driving module are in a standby state;
S20, image data acquisition: when a living organism enters the monitoring range, the pyroelectric infrared sensor senses the biological information, the camera is started and aimed in the direction of the invading organism to acquire real-time image data, and the acquired image data are then sent to the server through the communication module; the data processing module preprocesses the image data, including filtering, noise reduction and compression (an illustrative preprocessing sketch follows this list of steps);
s30, species characteristic extraction: the data processing module is internally provided with three algorithms of species identification, face recognition and body type identification, the species identification algorithm is used for judging the species of the invading organism, a confidence score threshold is set, if the confidence score of the predicted species exceeds the threshold, the species prediction is correct, different alarm driving modes are adopted according to different biological species, and the server sends a prompt message to the mobile terminal after the alarm driving module is started;
S40, model training: matching the image data set of common animals and people with the prediction model, outputting a training set conforming to the expected matching results, and training on the extracted training set with the yolo algorithm to construct an accurate species identification model;
s50, species identification prediction: and predicting and classifying the bounding box by using the constructed species identification model to determine the species category to which the bounding box belongs.
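By way of illustration, the preprocessing mentioned in step S20 (filtering, noise reduction and compression) can be sketched with standard OpenCV calls as follows; the Gaussian kernel size, denoising strength and JPEG quality are assumed example values, not parameters specified by the invention.

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray, jpeg_quality: int = 80) -> bytes:
    """Filter, denoise and compress one camera frame (illustrative parameter values)."""
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)                              # filtering
    denoised = cv2.fastNlMeansDenoisingColored(blurred, None, 10, 10, 7, 21)  # noise reduction
    ok, encoded = cv2.imencode(".jpg", denoised,
                               [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])  # compression
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    return encoded.tobytes()

if __name__ == "__main__":
    dummy = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in camera frame
    print(f"compressed size: {len(preprocess(dummy))} bytes")
```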
Specifically, in step S30, the method further includes the steps of:
S301, the server sends different instructions to the alarm driving module according to the type of organism; when the species is judged to be a human, or the type of animal within the monitoring range cannot be identified, the server also assists the judgment with the face recognition algorithm and the animal body type recognition algorithm. When the species identification algorithm judges that the intruding organism is a large animal type that may cause damage, the server starts the gas driving device to release the irritant gas, forcing the animal away from the monitoring range and preventing damage; if a small animal type is judged, the audible and visual alarm is started to drive it away with high-decibel noise and strong light. The server sends prompt information to the mobile terminal after the alarm driving module is started, and the user can take follow-up measures according to the actual conditions shown in the images uploaded by the image acquisition module;
S302, when the species recognition algorithm judges that a human has entered the monitoring range, the data processing module starts the face recognition algorithm, performs similarity matching between the face information of the person and all the face feature information stored in the server in advance, and sets a similarity threshold; if the similarity exceeds the threshold, the person is considered to be security personnel and no alarm is triggered; otherwise the matching is regarded as failed, and the loudspeaker issues a warning voice prompting the outside person to leave the monitoring area as soon as possible; if the person stays in the monitoring area, the person is considered an external intruder, the audible and visual alarm is started to warn and drive the person away, and meanwhile the server sends a prompt message to the mobile terminal;
s303, starting a body type recognition algorithm to assist in judging according to the fact that the species recognition algorithm does not recognize the biological species entering the monitoring range, namely, when the confidence score does not exceed a threshold value; estimating the real body type of the invasive living being through the distance d between the camera module and the invasive living being and the pixel size of the real body type in the camera module, and selecting a corresponding driving mode according to the different body type sizes; the large animals are driven by the gas driving device, and the small animals are driven by the audible and visual alarm.
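The dispatch logic of S301 to S303 above can be summarized in a small decision function. The sketch below is illustrative only: the species labels, action names and large/small cutoff are assumptions, while the 70% confidence threshold, 80% similarity threshold and 40 cm reference height follow the values used in the embodiments.

```python
from typing import Optional

CONF_THRESHOLD = 0.70          # species confidence threshold (embodiment value)
FACE_SIM_THRESHOLD = 0.80      # face similarity threshold (embodiment value)
LARGE_ANIMAL_HEIGHT_M = 0.40   # height of a common domestic dog (embodiment 2)

LARGE_ANIMALS = {"wild boar", "deer", "bear"}   # hypothetical label sets
SMALL_ANIMALS = {"cat", "dog", "rabbit"}

def dispatch(species: str, confidence: float,
             face_similarity: Optional[float] = None,
             est_height_m: Optional[float] = None) -> str:
    """Return the driving action chosen by the server for one detection."""
    if confidence >= CONF_THRESHOLD:
        if species == "person":
            # S302: face recognition separates security personnel from intruders
            if face_similarity is not None and face_similarity >= FACE_SIM_THRESHOLD:
                return "no_alarm"
            return "voice_warning_then_sound_light_alarm"
        if species in LARGE_ANIMALS:
            return "gas_driving_device"        # S301: repel large animals
        if species in SMALL_ANIMALS:
            return "sound_light_alarm"         # S301: repel small animals
    # S303: species not recognized, fall back to the body type estimate
    if est_height_m is not None and est_height_m > LARGE_ANIMAL_HEIGHT_M:
        return "gas_driving_device"
    return "sound_light_alarm"

if __name__ == "__main__":
    print(dispatch("wild boar", 0.85))                     # gas_driving_device
    print(dispatch("person", 0.90, face_similarity=0.92))  # no_alarm
    print(dispatch("cat", 0.55, est_height_m=0.25))        # sound_light_alarm
```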
Referring to fig. 3, in step S40, the species identification algorithm is used to quickly locate and identify the type of the invading organism, and the server performs subsequent operations according to the type of the invading organism, and the model can be trained by yolo algorithm, which specifically includes the following steps:
s401, collecting and labeling pictures of common animals and people in life, and manufacturing a tag data set;
s402, dividing a picture into S multiplied by S grids, wherein each grid is used for predicting B bounding boxes, and each bounding box has a Confidence (Confidence) in addition to the size and the position to represent the probability that an object exists in the bounding box;
S403, calculating the positions x, y, w, h of the bounding box and its confidence, wherein x, y are the distances between the bounding box and the center of the whole picture, and w, h represent the ratio of the bounding box width and height to the whole picture; the confidence is calculated according to the following formula:
Confidence = Pr(Object) × IOU
where Pr(Object) indicates whether the center of an object lies in the bounding box (the value is 1 if it does, 0 otherwise), and IOU is the intersection-over-union ratio between the predicted bounding box and the true object position;
S404, predicting C conditional class probabilities Pr(Class|Object) for each grid, i.e. the probability that an object present in the grid belongs to a certain class; the class-specific score is calculated according to the corresponding formula:
Pr(Class_i|Object) × Pr(Object) × IOU = Pr(Class_i) × IOU;
S405, continuously calculating the loss function (loss) as a sum of squared errors, training the species identification model according to the iteratively calculated loss value, and terminating training once the model tends to a fitted state.
Here the loss function is loss = λcoord × coordinate prediction error + bounding box confidence error (with object) + λnoobj × bounding box confidence error (without object) + classification error.
The specific formula of the loss function, in the standard yolo form, is as follows:
loss = λcoord · Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)²] + λcoord · Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [(√w_i − √ŵ_i)² + (√h_i − √ĥ_i)²] + Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} (C_i − Ĉ_i)² + λnoobj · Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{noobj} (C_i − Ĉ_i)² + Σ_{i=0}^{S²} 1_{i}^{obj} Σ_{c∈classes} (p_i(c) − p̂_i(c))²
where 1_{ij}^{obj} is 1 when the j-th bounding box of grid i is responsible for an object and 0 otherwise, 1_{ij}^{noobj} is its complement, C is the confidence and p(c) the class probability.
Optionally, λcoord = 5 and λnoobj = 0.5: by setting the coefficient weight of the coordinate prediction error very large and the confidence weight of bounding boxes without objects very small, the accuracy of position prediction is improved through iteration and the influence of bounding boxes without objects, i.e. the background, on the prediction is reduced.
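As a concrete illustration of this weighting, the following sketch computes a YOLOv1-style sum-of-squared-error loss with λcoord = 5 and λnoobj = 0.5; the tensor layout (one row of [x, y, w, h, confidence, class probabilities] per bounding box) is an assumption made for the example, not part of the claimed method.

```python
import numpy as np

LAMBDA_COORD = 5.0
LAMBDA_NOOBJ = 0.5

def yolo_v1_loss(pred: np.ndarray, target: np.ndarray, obj_mask: np.ndarray) -> float:
    """Sum-of-squared-error loss in the YOLOv1 style.

    pred, target: arrays of shape (S*S, B, 5 + C) holding
                  [x, y, w, h, confidence, class probabilities...].
    obj_mask:     boolean array of shape (S*S, B), True where a ground-truth
                  object is assigned to that bounding box.
    (Shapes and layout are assumptions made for this sketch.)
    """
    noobj_mask = ~obj_mask

    # Coordinate errors (x, y and square-rooted w, h), only for boxes with objects
    xy_err = np.sum((pred[..., :2] - target[..., :2]) ** 2, axis=-1)
    wh_err = np.sum((np.sqrt(np.abs(pred[..., 2:4])) -
                     np.sqrt(np.abs(target[..., 2:4]))) ** 2, axis=-1)
    coord_loss = LAMBDA_COORD * np.sum((xy_err + wh_err)[obj_mask])

    # Confidence errors, split into boxes with and without objects
    conf_err = (pred[..., 4] - target[..., 4]) ** 2
    conf_loss = np.sum(conf_err[obj_mask]) + LAMBDA_NOOBJ * np.sum(conf_err[noobj_mask])

    # Classification error for boxes assigned to an object
    # (per grid cell in the original formulation; simplified to per box here)
    cls_err = np.sum((pred[..., 5:] - target[..., 5:]) ** 2, axis=-1)
    cls_loss = np.sum(cls_err[obj_mask])

    return float(coord_loss + conf_loss + cls_loss)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S2, B, C = 49, 2, 3                      # 7x7 grid, 2 boxes per cell, 3 classes (assumed)
    pred = rng.random((S2, B, 5 + C))
    target = rng.random((S2, B, 5 + C))
    obj_mask = rng.random((S2, B)) > 0.9
    print(f"loss = {yolo_v1_loss(pred, target, obj_mask):.3f}")
```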
Specifically, in step S50, the model prediction process of the species identification algorithm is as follows:
For all bounding boxes, a threshold is first set, optionally 50%, and all bounding boxes below the threshold are removed, i.e. their confidence is zeroed; for the object C_i in each grid, the confidence score Score_ij at the j-th bounding box is then calculated, where Score_ij represents the likelihood that the object C_i is present in the j-th bounding box:
Score_ij = P(C_i | Object) × Confidence_j
All non-zero Scores are then processed: a Score threshold is set and every Score below it is excluded, i.e. zeroed. For a given object category, the largest Score and its corresponding bounding box are found and added to the output list. For the remaining bounding boxes whose Score is not 0, the IOU with the bounding box of the largest Score is calculated; an IOU threshold is set, optionally 70%, and bounding boxes above this threshold are considered to predict the same object as the largest-Score box and are rejected, i.e. their Score is returned to 0. When every bounding box is either in the output list or has a Score of 0, the non-maximum suppression (NMS) of that object class is considered complete.
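A compact sketch of this score-filtering and non-maximum-suppression procedure is given below; the [x1, y1, x2, y2] box format is an assumption of the example, while the 50% score threshold and 70% IOU threshold are the optional values mentioned above.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms_per_class(boxes, scores, score_thresh=0.5, iou_thresh=0.7):
    """Keep, for one object class, the boxes that survive score filtering and NMS."""
    # Zero out scores below the threshold (the "return to 0" step described above)
    scores = [s if s >= score_thresh else 0.0 for s in scores]
    keep = []
    while any(s > 0 for s in scores):
        # Pick the box with the largest remaining score and add it to the output list
        best = max(range(len(scores)), key=lambda i: scores[i])
        keep.append(best)
        scores[best] = 0.0
        # Suppress boxes that overlap the chosen box too much
        for i, s in enumerate(scores):
            if s > 0 and iou(boxes[i], boxes[best]) > iou_thresh:
                scores[i] = 0.0
    return keep

if __name__ == "__main__":
    boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
    scores = [0.9, 0.8, 0.6]
    print(nms_per_class(boxes, scores))  # [0, 2]: the overlapping lower-score box is removed
```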
Referring to fig. 4, in step S302, the face recognition algorithm specifically includes:
s3021, face detection: positioning and determining the position of a face to be detected in an image, setting a detection window, and detecting the face from the image, wherein the detection window is generally smaller than the face;
s3022, positioning key points: determining key point positions in a face such as eyes, nose, mouth and the like, and obtaining pixel coordinates of the key point positions;
S3023, face correction: for the positioned face feature points, the feature points are aligned through geometric transformation (affine, rotation and scaling), and the eyes, the mouths and the like are moved to the same positions;
s3024, face feature extraction: extracting characteristic information in the face, such as the shape of organs of the face and the position relation between the organs to obtain characteristic data of the face;
S3025, similarity calculation: comparing the face features in the video with the face feature information stored in the face library and calculating the similarity, which represents the likelihood that the detected face and a face in the face library are the same face; a similarity threshold is set, optionally 80%, and when the similarity exceeds the threshold the detected face and the stored face are considered to belong to the same person.
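The similarity calculation in S3025 can be illustrated with a cosine comparison against a stored feature library; the 128-dimensional feature vectors and the cosine metric are assumptions of this sketch (the patent does not fix a particular feature extractor), while the 80% threshold is the optional value mentioned above.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.80   # optional threshold mentioned in the text

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two face feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_face(query_feature, face_library):
    """Compare a detected face feature against every stored feature.

    face_library: dict mapping person name -> stored feature vector.
    Returns (best_name, best_similarity, is_security_personnel).
    """
    best_name, best_sim = None, -1.0
    for name, stored in face_library.items():
        sim = cosine_similarity(query_feature, stored)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name, best_sim, best_sim >= SIMILARITY_THRESHOLD

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    library = {"householder": rng.normal(size=128)}
    query = library["householder"] + rng.normal(scale=0.05, size=128)  # same person, small noise
    print(match_face(query, library))
```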
Referring to fig. 5, in step S303, the body type recognition algorithm specifically includes:
S3031, with the camera aimed at a target at distance d, the pixel position a of the target center in the camera's field of view is recorded; the camera is then rotated so that the target is displaced by P1 pixels on the screen to position a', and the gyroscope records the camera rotation angle θ1; the pixel change on the screen corresponding to a unit angle is therefore j = P1/θ1, where P1 is in pixels and j in pixels per degree;
S3032, with the camera aimed at the target at distance d, the camera is rotated and moved so that the pixel position of the target in the camera changes from c to c'; during this process the gyroscope measures the camera angle change θ2 and the acceleration sensor measures the spatial displacement l2 of the system. The pixel displacement P of the target in the camera is the combined result of the pixel change j·θ2 caused by the camera angle change and the pixel change k·l2 caused by the spatial displacement, i.e. P = j·θ2 + k·l2, so that k = (P − j·θ2)/l2, where P is in pixels and k in pixels per metre;
S3033, the animal position is detected by the pyroelectric sensor cluster, the distance d between the invading organism and the camera is calculated from the position of the nearest sensor, and the pixel height P0 of the target organism in the camera at distance d is measured; according to the formula l0 = P0/k, the estimated value of the animal's actual height, i.e. its body size, is l0.
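The calibration and estimation in S3031 to S3033 reduce to the formulas j = P1/θ1, k = (P − j·θ2)/l2 and l0 = P0/k; the short sketch below strings them together, using invented numbers that only illustrate the units.

```python
def pixels_per_degree(pixel_shift_p1: float, angle_deg_theta1: float) -> float:
    """S3031: pixel change per degree of camera rotation (j = P1 / theta1)."""
    return pixel_shift_p1 / angle_deg_theta1

def pixels_per_metre(pixel_shift_p: float, j: float,
                     angle_deg_theta2: float, displacement_m_l2: float) -> float:
    """S3032: remove the rotation component, leaving pixels per metre of spatial shift.

    P = j * theta2 + k * l2  =>  k = (P - j * theta2) / l2
    """
    return (pixel_shift_p - j * angle_deg_theta2) / displacement_m_l2

def estimate_height_m(pixel_height_p0: float, k: float) -> float:
    """S3033: estimated real height of the intruding organism (l0 = P0 / k)."""
    return pixel_height_p0 / k

if __name__ == "__main__":
    # Invented calibration numbers, for illustration only
    j = pixels_per_degree(pixel_shift_p1=120.0, angle_deg_theta1=2.0)   # 60 px/deg
    k = pixels_per_metre(pixel_shift_p=400.0, j=j,
                         angle_deg_theta2=3.0, displacement_m_l2=0.5)   # 440 px/m
    height = estimate_height_m(pixel_height_p0=132.0, k=k)              # 0.30 m
    # 0.40 m is the domestic-dog reference height used in embodiment 2
    print(f"estimated height: {height:.2f} m ->",
          "small animal" if height <= 0.40 else "large animal")
```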
When training the species identification model, the recognized objects are mainly humans and common animals, in order to increase the universality of the monitoring alarm driving system and avoid unnecessary performance waste; for particular animals found in certain areas, the training data set can be expanded with those animals and the species identification model retrained according to the steps above.
The gas driving device in the alarm driving module is intended to force animals that may cause damage to leave the monitored protection range by releasing a pungent-smelling gas, not to harm them; highly toxic gases should be avoided as far as possible so that animals are not permanently harmed. Diluted, low-toxicity ammonia can be selected to repel large animals.
The body type recognition algorithm is used as a supplement for the species recognition algorithm, and the purpose of the body type recognition algorithm is to judge whether the invading organism is a large organism or a small organism so as to take different countermeasures, so that the value is not required to be excessively accurate, and only the large organism and the small organism need to be distinguished. Animal height data calculated by the body type recognition algorithm are imprecise values, wherein the imprecise values comprise measurement errors of a measurement system and distance errors between the animal and a nearest sensor.
Example 1
At night, a wild boar intrudes into the monitored protection range, the pyroelectric sensor cluster detects the biological signal, and the image acquisition module is started. The high-definition night vision camera turns its lens towards the wild boar according to the position information sensed by the corresponding pyroelectric sensor cluster and transmits the acquired image data to the server, which preprocesses the image data, including filtering, noise reduction and compression. The type of the invading organism is judged by the species identification algorithm in the server: the trained species identification model predicts the biological type and gives a confidence score; the score exceeds the set threshold (assumed to be 70%), the system identifies the organism as a wild boar, and a wild boar intrusion is judged. The alarm driving module is started, the audible and visual alarm sounds and emits strong light, the gas driving device at the corresponding position releases irritant ammonia to drive the wild boar away, and the server sends prompt information to the mobile terminal to remind the user that a large organism has entered the monitored area.
After receiving the prompt message sent by the server, the householder checks the images on the mobile phone to learn of the site conditions and goes to the monitored area the next day to set up protection against wild boars. After the householder enters the monitored protection range, the server identifies the intruder with the species identification algorithm. When the server identifies that the organism entering the monitored area is a person, it starts the face recognition algorithm: the image acquisition module acquires the person's face information and transmits it to the server through the wireless communication module, and the server compares it with all the face features stored in advance by the householder using the face recognition algorithm, giving a similarity score. After traversing all the stored face features, the similarity between the householder's pre-stored face features and the face information acquired by the image acquisition module is very high and exceeds the set threshold (assumed to be 80%), so the system judges that security personnel have entered the monitored protection area and does not trigger an alarm.
Example 2
During the daytime, an uncommon wild cat enters the monitored protection range; the species identification algorithm in the server identifies it and judges the species to be a cat, but because its characteristics differ greatly from the training data set, the confidence score is only 55% and does not reach the set threshold (assumed to be 70%). The system judges that the biological species has not been correctly identified, and the server enables the body type recognition algorithm for auxiliary judgment. Based on the position of the nearest pyroelectric sensor that detects it, the distance between the wild cat and the camera is estimated as the distance d between that sensor and the camera module; the camera is rotated and translated, and the pixel change j per unit angle (pixels/degree) and the pixel change k per unit spatial displacement (pixels/metre) are calculated respectively; the pixel height P3 of the wild cat in the camera module is measured, and according to the formula l3 = P3/k the estimated body height l3 of the wild cat is obtained. Because the wild cat's body size does not exceed that of a common domestic dog (about 40 cm tall), the system judges it to be a small animal and starts the audible and visual alarm to drive it away with high-decibel alarm sounds and strong light.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A monitoring alarm driving system based on artificial intelligence and image processing, characterized by comprising a biological monitoring module, an image data acquisition module, a server, an alarm driving module and a mobile terminal, wherein the output end of the biological monitoring module is connected with the input end of the image data acquisition module, and the output end of the image data acquisition module is connected with the input end of the server; the output end of the server is connected with the alarm driving module and the mobile terminal respectively; the biological monitoring module is used for monitoring biological thermal radiation information within the acquisition range in real time, and the image data acquisition module is used for receiving the biological signals monitored by the biological monitoring module, for acquiring real-time image data information of the organism within the acquisition range, and for transmitting it to the server; the server comprises a data processing module and a communication module, and the data processing module is connected with the communication module; the data processing module is used for processing the acquired real-time image data information, and the communication module is used for sending the processed real-time image data information to the alarm driving module and the mobile terminal respectively; the alarm driving module is used for receiving the real-time image data information transmitted by the communication module and for driving away the intruder entering the monitoring range according to the corresponding real-time image data information; the mobile terminal is used for receiving and viewing the real-time image data information, and simultaneously receives the alarm information.
2. The monitoring alarm driving system based on artificial intelligence and image processing according to claim 1, wherein the biological monitoring module is a pyroelectric infrared sensor cluster with a biological monitoring function, wherein the pyroelectric infrared sensor cluster is used for monitoring the biological thermal radiation information in the acquisition range in real time, and sending a signal to the image acquisition module when the biological information is monitored.
3. The monitoring alarm driving system based on artificial intelligence and image processing according to claim 1, wherein the image data acquisition module comprises a camera, a memory and a speaker; the camera is a high-definition night vision camera with a gyroscope and an acceleration sensor, wherein the camera is used for recording real-time images; the memory comprises a memory card memory and a server memory, wherein the memory is used for storing the real-time image into the local SD memory card and storing the image information into the server through a wireless network.
4. A monitoring alarm driving system based on artificial intelligence and image processing as claimed in claim 1, wherein said alarm driving module comprises an audible and visual alarm and a gas driving device; the audible and visual alarm drives away intruders entering the monitoring range with high-decibel warning sound and strong light, while the gas driving device is a refillable gas release device, arranged at intervals so as to cover the whole monitored protection area, which drives away large animals that may cause damage by releasing a pungent-smelling gas.
5. The monitoring alarm driving method based on artificial intelligence and image recognition is characterized by comprising the following steps of:
s10, detecting biological signals: the pyroelectric infrared sensor monitors whether biological activity information exists in a monitoring range, and the image data acquisition module and the alarm driving module are in a standby state;
s20, image data acquisition: when a living organism enters a monitoring range, the pyroelectric infrared sensor senses biological information, the camera is started and aims at the direction of the invading living organism to acquire real-time image data, then the acquired image data information is sent to the server through the communication module, and the data processing module carries out preprocessing on the image data, including filtering, noise reduction and compression;
s30, species characteristic extraction: the data processing module is internally provided with three algorithms of species identification, face recognition and body type identification, the species identification algorithm is used for judging the species of the invading organism, a confidence score threshold is set, if the confidence score of the predicted species exceeds the threshold, the species prediction is correct, different alarm driving modes are adopted according to different biological species, and the server sends a prompt message to the mobile terminal after the alarm driving module is started;
S40, model training: matching the image data set of common animals and people with the prediction model, outputting a training set conforming to the expected matching results, and training on the extracted training set with the yolo algorithm to construct an accurate species identification model;
s50, species identification prediction: and predicting and classifying the bounding box by using the constructed species identification model to determine the species category to which the bounding box belongs.
6. The monitoring alarm driving method based on artificial intelligence and image recognition according to claim 5, wherein step S30 further comprises the following steps:
S301, when the species identification algorithm judges that a small animal has entered the monitoring range, the small animal is driven away by the audible and visual alarm; when the species identification algorithm judges that a large animal has entered the monitoring range, the gas expelling device releases a pungent-smelling gas to drive the large animal away;
S302, when the species recognition algorithm judges that a person has entered the monitoring range, the data processing module starts the face recognition algorithm, matches the person's facial information against all face feature information stored in advance on the server, and a similarity threshold is set; if the similarity exceeds the threshold, the person is regarded as security personnel and no alarm is triggered; otherwise the match is regarded as failed and the speaker issues a warning voice prompting the outsider to leave the monitored area as soon as possible; if the person remains in the monitored area, the person is regarded as an external intruder, the audible and visual alarm is started to warn and drive the person away, and at the same time the server sends a prompt message to the mobile terminal;
S303, when the species recognition algorithm fails to recognize the species that entered the monitoring range, i.e. when the confidence score does not exceed the threshold, the body-type recognition algorithm is started to assist the judgement; the real body size of the intruding organism is estimated from the distance d between the camera module and the intruding organism and the pixel size of the organism in the camera image, and a corresponding expelling mode is selected according to the body size: large animals are driven away by the gas expelling device, and small animals by the audible and visual alarm.
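A minimal sketch of the decision logic of steps S301-S303 follows; the threshold values, the example animal set and the 1-metre small/large cut-off are assumptions made only to illustrate the branching, not values stated in the patent.

```python
CONF_THRESHOLD = 0.5          # species-confidence threshold (assumed value)
SIM_THRESHOLD = 0.8           # face-similarity threshold (assumed value)
LARGE_ANIMALS = {"boar", "deer", "cattle"}   # example species, not from the patent

def respond(species, confidence, face_similarity=None, estimated_height_m=None):
    """Return the expelling/alarm action for one detected intruder."""
    if confidence < CONF_THRESHOLD:
        # S303: species unknown -> fall back on the body-size estimate.
        large = (estimated_height_m or 0.0) > 1.0
        return "gas_release" if large else "sound_and_light_alarm"
    if species == "person":
        # S302: registered staff pass; strangers are warned, then alarmed.
        if face_similarity is not None and face_similarity >= SIM_THRESHOLD:
            return "no_alarm"
        return "voice_warning_then_alarm_and_notify"
    # S301: recognised animal species.
    return "gas_release" if species in LARGE_ANIMALS else "sound_and_light_alarm"
```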
7. The monitoring alarm driving method based on artificial intelligence and image recognition according to claim 5, wherein in step S40 the YOLO algorithm is used to train the data set and obtain the species identification model, the specific steps comprising:
S401, collecting and labelling pictures of animals and persons commonly encountered in daily life to produce a labelled data set;
S402, dividing each picture into S × S grid cells, each grid cell predicting B bounding boxes; besides its size and position, each bounding box carries a Confidence value representing the probability that an object is present in that bounding box;
S403, calculating the position x, y, w, h and the confidence of each bounding box, where x, y are the distances between the bounding-box centre and the centre of the whole picture, and w, h are the ratios of the bounding-box width and height to the whole picture; the confidence is calculated by the following formula:
Confidence = Pr(Object) × IOU(pred, truth)
where Pr(Object) indicates whether the centre of an object lies within the bounding box (1 if it does, otherwise 0), and IOU is the intersection-over-union ratio between the predicted bounding box and the true object position;
S404, predicting C conditional class probabilities Pr(Class_i | Object) for each grid cell, i.e. the probability that the object belongs to a certain class given that an object is present in the cell, the class-specific score being obtained from the corresponding formula:
Pr(Class_i | Object) × Pr(Object) × IOU(pred, truth) = Pr(Class_i) × IOU(pred, truth);
S405, continuously calculating the loss function (loss) as a sum of squared errors, training the species identification model according to the loss value calculated at each iteration, and stopping training once the model approaches the fitted state;
where loss = λcoord × coordinate prediction error + confidence error of bounding boxes containing an object + λnoobj × confidence error of bounding boxes containing no object + classification error.
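The quantities used in steps S402-S405 can be made concrete with the short sketch below; it is a simplified, illustrative rendering rather than the full YOLO training loss, the box format (x1, y1, x2, y2) is an assumption, and the λ values shown are the ones commonly used with YOLOv1.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def box_confidence(pr_object, pred_box, true_box):
    # S403: Confidence = Pr(Object) * IOU(pred, truth); Pr(Object) is 1 when an
    # object's centre falls in the bounding box and 0 otherwise.
    return pr_object * iou(pred_box, true_box)

def yolo_style_loss(coord_err, conf_err_obj, conf_err_noobj, class_err,
                    lambda_coord=5.0, lambda_noobj=0.5):
    # S405: weighted sum-of-squared-error loss following the formula above.
    return (lambda_coord * coord_err + conf_err_obj
            + lambda_noobj * conf_err_noobj + class_err)
```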
8. The monitoring alarm driving method based on artificial intelligence and image recognition according to claim 5, wherein in step S50 the prediction process of the species identification algorithm is as follows:
for all bounding boxes, a confidence threshold is set, all confidence levels smaller than the threshold are reset to zero, and confidence scores Score and Score of the rest bounding boxes are calculated ij Representing an object C i The likelihood of being present in the jth bounding box;
Score ij =P(C i ∣Object)*Confidence j
Non-maximum suppression (NMS) is then performed on all the scores: a score threshold is set and every Score below it is zeroed; for a given object class, the largest Score and its corresponding bounding box are found and added to the output list; for the remaining objects whose Score is not 0, the IOU (intersection over union) between their bounding boxes and the bounding box with the largest Score is calculated, an IOU threshold is set, and the Score of every bounding box whose IOU exceeds that threshold is zeroed; processing of the class is complete when every Score is either 0 or in the output list. All object classes are processed in this way; the objects remaining in the output list are the objects predicted by the model, and their corresponding Scores are the confidence scores of those objects.
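The per-class procedure described above corresponds to standard non-maximum suppression; a compact sketch is given below, reusing the iou helper from the previous example, with the input layout (a list of (box, score) pairs for one class) and both thresholds chosen only for illustration.

```python
def nms_per_class(boxes_scores, score_threshold=0.25, iou_threshold=0.5):
    """Keep the highest-scoring box, suppress strongly overlapping boxes, repeat."""
    candidates = [(b, s) for b, s in boxes_scores if s >= score_threshold]
    candidates.sort(key=lambda bs: bs[1], reverse=True)
    kept = []
    while candidates:
        best_box, best_score = candidates.pop(0)
        kept.append((best_box, best_score))
        candidates = [(b, s) for b, s in candidates
                      if iou(b, best_box) <= iou_threshold]
    return kept   # surviving boxes with their confidence scores
```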
9. The monitoring alarm driving method based on artificial intelligence and image recognition according to claim 6, wherein in step S302 the face recognition algorithm specifically comprises the following steps:
S3021, face detection: locating and determining the position of the face to be detected in the image;
S3022, key-point positioning: determining the positions of the key points of the face, such as the eyes, nose and mouth;
S3023, face alignment: the located facial feature points are aligned by geometric transformations (affine transformation, rotation and scaling), so that the eyes, mouth and other features are moved to the same positions;
S3024, face feature extraction: extracting feature information from the face, such as the shapes of the facial organs and the positional relationships between them, to obtain the feature data of the face;
S3025, similarity calculation: comparing the facial features in the video with the face feature information stored in the face library and calculating the similarity, which represents the likelihood that the detected face and a face in the library belong to the same person.
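Step S3025 is, in practice, a comparison between fixed-length face feature vectors; the sketch below assumes such vectors (embeddings) have already been extracted by steps S3021-S3024 and uses cosine similarity, which is one common choice but is not specified in the patent.

```python
import numpy as np

def cosine_similarity(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    return float(np.dot(feat_a, feat_b) /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-9))

def best_match(probe_feat: np.ndarray, face_library: dict):
    """face_library maps a person id to that person's stored feature vector."""
    scores = {pid: cosine_similarity(probe_feat, feat)
              for pid, feat in face_library.items()}
    pid = max(scores, key=scores.get)
    # The caller compares scores[pid] against the similarity threshold of S302.
    return pid, scores[pid]
```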
10. The monitoring alarm driving method based on artificial intelligence and image recognition according to claim 6, wherein in step S303 the body-type recognition algorithm specifically comprises the following steps:
S3031, at a distance d, measuring with the gyroscope the pixel change j corresponding to a unit angle change of the camera, in pixels per degree;
S3032, at the distance d, rotating and moving the camera so that the pixel displacement of the target object in the camera image is Δp; the gyroscope measures the angle change θ2 and the acceleration sensor measures the spatial displacement l2 of the system; Δp is the combined result of the pixel change j·θ2 caused by the camera angle change and the pixel change k·l2 caused by the spatial displacement of the target, i.e. Δp = j·θ2 + k·l2, where k is the pixel change corresponding to the spatial displacement, in pixels per metre;
S3033, measuring the approximate distance d between the intruding organism and the camera with the pyroelectric sensor cluster, obtaining the pixel height P of the organism in the camera image, and estimating the actual body height of the animal as l0 = P / k.
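The body-size estimate of steps S3031-S3033 reduces to a calibration of the pixels-per-metre factor k followed by a division; the sketch below uses the relation Δp = j·θ2 + k·l2 reconstructed from the claim text, and all variable names are introduced here for illustration.

```python
def calibrate_k(delta_p: float, j: float, theta2: float, l2: float) -> float:
    """Pixels-per-metre factor k at working distance d.

    delta_p : measured pixel displacement of the target in the image
    j       : pixels of image shift per degree of camera rotation (S3031)
    theta2  : camera rotation measured by the gyroscope, in degrees
    l2      : spatial displacement measured by the accelerometer, in metres
    """
    return (delta_p - j * theta2) / l2

def estimate_height_m(pixel_height: float, k: float) -> float:
    # S3033: actual height l0 = P / k, in metres.
    return pixel_height / k
```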
CN202310777454.9A 2023-06-29 2023-06-29 Monitoring alarm driving system and method based on artificial intelligence and image recognition Pending CN116994389A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310777454.9A CN116994389A (en) 2023-06-29 2023-06-29 Monitoring alarm driving system and method based on artificial intelligence and image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310777454.9A CN116994389A (en) 2023-06-29 2023-06-29 Monitoring alarm driving system and method based on artificial intelligence and image recognition

Publications (1)

Publication Number Publication Date
CN116994389A true CN116994389A (en) 2023-11-03

Family

ID=88525629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310777454.9A Pending CN116994389A (en) 2023-06-29 2023-06-29 Monitoring alarm driving system and method based on artificial intelligence and image recognition

Country Status (1)

Country Link
CN (1) CN116994389A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315594A (en) * 2023-11-28 2023-12-29 深圳市迪沃视讯数字技术有限公司 Intelligent security video monitoring system based on Internet of things
CN117315594B (en) * 2023-11-28 2024-03-15 深圳市迪沃视讯数字技术有限公司 Intelligent security video monitoring system based on Internet of things
CN117649734A (en) * 2024-01-29 2024-03-05 湖南力研光电科技有限公司 Intelligent security monitoring method and system based on multidimensional sensor
CN117649734B (en) * 2024-01-29 2024-04-12 湖南力研光电科技有限公司 Intelligent security monitoring method and system based on multidimensional sensor
CN118015660A (en) * 2024-04-08 2024-05-10 深圳市积加创新技术有限公司 Intelligent pet identification method and system based on infrared detection and vision in environment
CN118015660B (en) * 2024-04-08 2024-08-02 深圳市积加创新技术有限公司 Intelligent pet identification method and system based on infrared detection and vision in environment

Similar Documents

Publication Publication Date Title
CN116994389A (en) Monitoring alarm driving system and method based on artificial intelligence and image recognition
KR101644443B1 (en) Warning method and system using prompt situation information data
US7535353B2 (en) Surveillance system and surveillance method
KR20160135004A (en) Module-based intelligent video surveillance system and antitheft method for real-time detection of livestock theft
CN116743970B (en) Intelligent management platform with video AI early warning analysis
WO2022142973A1 (en) Robot protection system and method
CN113628404A (en) Method and device for reducing invalid alarm
CN113076818A (en) Pet excrement identification method and device and computer readable storage medium
KR102233679B1 (en) Apparatus and method for detecting invader and fire for energy storage system
CN110837753A (en) Collective and separate model for human-vehicle object identification and control and use method thereof
CN117173847A (en) Intelligent door and window anti-theft alarm system and working method thereof
TWI590204B (en) Notification system of environment abnormality and the notification method of the same
WO2021022427A1 (en) Community stray-animal monitoring system and monitoring method
CN114821805B (en) Dangerous behavior early warning method, dangerous behavior early warning device and dangerous behavior early warning equipment
KR20140076184A (en) Monitering apparatus of school-zone using detection of human body and vehicle
EP4012678A1 (en) Security system
CN115294709A (en) Optical fiber vibration monitoring model, precaution system, electronic equipment and storage medium
CN211554957U (en) Be used for swimming pool personnel that fall into water to discriminate positioner and system
US10893243B1 (en) Lawn violation detection
CN114235821A (en) Intelligent early warning method and system for preventing external damage of long-distance oil transportation pipeline
CN116092175A (en) Livestock frame taking behavior identification and early warning method and device, medium and electronic equipment
CN112837471A (en) Security monitoring method and device for internet contract room
KR20160086536A (en) Warning method and system using prompt situation information data
CN113963502B (en) All-weather illegal behavior automatic inspection method and system
KR102635351B1 (en) Crime prevention system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination