CN115661854A - Personnel detection device and detection method - Google Patents

Personnel detection device and detection method

Info

Publication number
CN115661854A
CN115661854A
Authority
CN
China
Prior art keywords
image
pattern
person
analysis module
image analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110781190.5A
Other languages
Chinese (zh)
Inventor
胡庆
付春明
姜鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Starting Point Artificial Intelligence Technology Co ltd
Original Assignee
Shenzhen Starting Point Artificial Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Starting Point Artificial Intelligence Technology Co ltd filed Critical Shenzhen Starting Point Artificial Intelligence Technology Co ltd
Priority to CN202110781190.5A priority Critical patent/CN115661854A/en
Publication of CN115661854A publication Critical patent/CN115661854A/en
Pending legal-status Critical Current


Abstract

The invention discloses a person detection and identification device and method. The detection device comprises an image acquisition module and an image analysis module, and the detection method comprises the following steps: the clothing worn by a person carries a preset pattern; the image acquisition module of the detection device captures video or images of a monitored area; the image analysis module receives the video or images from the image acquisition module and identifies whether the preset pattern is present; if a preset pattern is detected, the position of the pattern is taken as the position of the person. The method can identify persons of interest without recognizing faces, and can accurately identify and locate a person even when the person is largely occluded and only a small part of the clothing is visible, so detection accuracy is high. When different persons wear clothing with different patterns, their identities can be distinguished.

Description

Personnel detection device and detection method
Technical Field
The invention relates to the technical field of personnel detection, in particular to a personnel detection device and a personnel detection method.
Background
Person detection is the basis of most human-related artificial intelligence applications in automation, for example automatically tracking a target person with a camera or determining whether a target person is inside a specified area.
Person position is currently determined mainly by human-shape recognition. This technique cannot identify a person who is largely occluded, and when several persons appear at the same time it cannot determine which of them require attention. Face recognition can assist, but in many scenes the camera cannot capture a person's face, and in most cases the face information of the persons concerned is not available.
The technical problem to be solved by those skilled in the art is therefore to provide a person detection method that can detect persons under large-area occlusion and can identify whether a detected person is a person of interest.
The invention with application number CN201610133706.4 discloses an embedded control system, device and method based on person detection, comprising a monocular camera, a person detection unit and a device control unit. The monocular camera acquires an image sequence of the current scene and sends it to the person detection unit. The person detection unit, connected to the monocular camera, detects through multi-layer deep neural network image recognition whether persons are present in the scene shown by the image sequence, along with their position information, and sends the detection result to the device control unit. The device control unit, connected to the person detection unit, generates an operation instruction according to the detection result and a preset control strategy and controls the embedded smart device to execute it.
That invention identifies whether persons are present in the image sequence, and their positions, through a multi-layer neural network, and further detects persons in different postures and at different angles through a multi-level detection convolutional neural network. With such a method, a person who is largely occluded cannot be identified; when several persons appear at the same time, it cannot be determined which one is a person of interest; and when several persons of interest appear, their identities cannot be distinguished.
Disclosure of Invention
The object of the invention is to provide a detection device that can detect persons in a video or image and distinguish the identities of different persons even when the persons are largely occluded.
Another technical problem addressed by the invention is to provide a detection method that can likewise detect persons in a video or image and distinguish the identities of different persons even under large-area occlusion.
To solve the above technical problems, the invention adopts the following scheme: a person detection device comprises an image acquisition module and an image analysis module. The image analysis module receives video or images from the image acquisition module and identifies whether a preset pattern is present; if a preset pattern is detected, a person is judged to be detected, and the position of the pattern is the position of the person.
The person detection device further comprises a wireless communication module; when the image analysis module detects the preset pattern, it sends information out through the wireless communication module.
In the above person detection device, the image analysis module is deployed in an edge computing box and uses the edge computing box to perform its computation.
In the above person detection device, the clothing worn by a person carries at least one preset pattern.
In the above person detection device, the preset pattern may be made separately from the clothing and fixed to it by inlaying, sewing, gluing or snap fasteners.
In the above person detection device, the preset pattern may be a graphic, a number, a letter, a written character, or an animal or plant motif.
A person detection method, using the person detection device described above, comprises the following steps:
701. The clothing worn by a person includes at least one preset pattern;
702. The image acquisition module of the detection device captures video or images of a monitored area;
703. The image analysis module of the detection device receives the video or images from the image acquisition module and identifies whether a preset pattern is present;
704. If the image analysis module detects a preset pattern, the presence of a person is determined, and the position of the pattern is the position of the person.
In the above person detection method, in step 704, if the image analysis module detects a preset pattern, it sends information to a mobile phone and/or a bracelet through the wireless communication module; the information includes alarm information and/or the position of the pattern in the image and/or the pattern category and/or the current image.
In the above person detection method, in step 704, if the image analysis module detects that a preset pattern is inside a preset area, it sends information to a mobile phone and/or a bracelet through the communication module; the information includes alarm information and/or the position of the detected pattern in the image and/or the category of the detected pattern and/or the current image.
In the above person detection method, in step 704, if the image analysis module detects that a preset pattern is outside the preset area, it sends information to a mobile phone and/or a bracelet through the communication module; the information includes alarm information and/or the position of the detected pattern in the image and/or the category of the detected pattern and/or the current image.
In the above person detection method, in step 701, the pattern may be a graphic, and/or a number, and/or a letter, and/or a written character, and/or an animal or plant motif.
In the above person detection method, in step 701, different patterns on the clothing worn by persons represent different identities, so a person's identity can be determined from the category of the detected pattern.
In the above person detection method, in step 701, the clothing worn by a person includes several preset patterns, and the recognition algorithm of the image analysis module uses a target detection neural network trained to recognize all the preset patterns. In step 704, if the image analysis module detects one or more of the preset patterns, it sends information to a mobile phone and/or a bracelet through the communication module; the information includes alarm information and/or the position of each detected pattern in the image and/or the category of each detected pattern and/or the current image.
In the above person detection method, the image acquisition module is a network camera, the image analysis module is deployed on a cloud server, and the camera transmits the captured video or images to the cloud server. The image analysis module on the cloud server sends information to a mobile phone and/or a bracelet through a communication network; the information includes alarm information and/or the position of the detected pattern in the image and/or the category of the detected pattern and/or the current image.
In the above person detection method, the image acquisition module is a network camera, the image analysis module is deployed on an edge computing device, and the camera transmits the captured video or images to the edge computing device. The image analysis module on the edge computing device sends information to a mobile phone and/or a bracelet through a communication network; the information includes alarm information and/or the position of the detected pattern in the image and/or the category of the detected pattern and/or the current image.
With the above detection method, persons and their positions can be detected in a video or image even under large-area occlusion, and the identities of different persons can be distinguished.
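The detection flow of steps 701 to 704 can be sketched as follows. This is a minimal illustration only: `detect_patterns` is a hypothetical placeholder standing in for whatever trained pattern detector the image analysis module actually uses, and the coordinate convention follows the (top-left, bottom-right) boxes described later in the embodiments.

```python
from typing import Callable, List, Tuple

# A detection is (pattern_class, x1, y1, x2, y2): the pattern's class id
# plus the top-left and bottom-right corners of its bounding box.
Detection = Tuple[int, int, int, int, int]

def person_positions(frame,
                     detect_patterns: Callable[[object], List[Detection]]
                     ) -> List[Tuple[int, int]]:
    """Steps 703-704: run the pattern detector on one frame and report
    the centre of each detected pattern as a person position."""
    positions = []
    for cls, x1, y1, x2, y2 in detect_patterns(frame):
        positions.append(((x1 + x2) // 2, (y1 + y2) // 2))
    return positions

# Usage with a stub detector that "finds" one pattern:
stub = lambda frame: [(1, 10, 20, 50, 60)]
print(person_positions(None, stub))  # [(30, 40)]
```

Because the method locates the pattern rather than the whole human shape, the same loop works unchanged when the person is largely occluded, as long as the patterned part of the clothing is visible.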
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic block diagram of the person detection apparatus of the present invention.
FIG. 2 is a flow chart of a person detection method according to an embodiment of the invention.
Fig. 3 is a schematic block diagram of a person detection apparatus according to embodiments 1 and 2 of the present invention.
Fig. 4 is a schematic diagram of a deep neural network according to embodiment 1 of the present invention.
Fig. 5 is a flow chart of the person detection method in embodiments 1 and 2 of the present invention.
Fig. 6 is a schematic diagram of a deep neural network according to embodiment 2 of the present invention.
FIG. 7 is a schematic diagram of a specific pattern according to example 2 of the present invention.
Fig. 8 is a schematic block diagram of the person detecting apparatus according to embodiments 3 and 4 of the present invention.
Fig. 9 is a flowchart of a person detection method according to embodiments 3 and 4 of the present invention.
Fig. 10 is a schematic block diagram of a person detection apparatus according to embodiments 5 and 6 of the present invention.
Fig. 11 is a flowchart of the person detection method of embodiments 5 and 6 of the present invention.
Fig. 12 is a schematic view of a nightwear according to the person detection method of the embodiment of the invention.
Detailed Description
The person detection device and detection method are shown in figures 1 and 2. The detection device comprises an image acquisition module and an image analysis module. The image analysis module receives video or images from the image acquisition module and identifies whether a preset pattern is present; if a preset pattern is detected, a person is judged to be detected, and if not, no person is detected.
The person wears clothing carrying the preset pattern. The pattern may be printed directly on the clothing, or made separately and fixed to the clothing by inlaying, sewing, gluing, snap fasteners or zippers.
The algorithm used by the image analysis module can be one or more of R-CNN, Fast R-CNN, Faster R-CNN, FPN, YOLO, SSD, RetinaNet, DenseBox, RRC detection, Deformable CNN, CNN, RNN, Inception, Xception, MobileNet, ResNeXt, DenseNet, SqueezeNet, ShuffleNet, SKNet, SENet, Cascade, HOG/DPM and Haar+SVM.
Example 1:
as shown in fig. 3, the image acquisition module hardware is a camera built around a HiSilicon Hi3516CV500 or Hi3516DV300 chip, and the image analysis module runs its computation on that chip.
The recognition algorithm of the image analysis module uses the MobileNetV3 image classification neural network shown in fig. 4. The network is trained to recognize one or more specified patterns. So that recognition also works in a night-vision environment, the network is trained on both colour and grey-scale images of the patterns. Because the network is a classifier, a person is considered detected as soon as a pattern appears anywhere in the picture.
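One simple way for a single classifier to handle both colour frames and grey-scale night-vision frames, as this embodiment requires, is to normalise every input to three channels before it reaches the network. A minimal NumPy sketch; the function name and array shapes are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def to_three_channels(frame: np.ndarray) -> np.ndarray:
    """Return an HxWx3 array whether the input frame is colour
    (HxWx3) or grey-scale (HxW), so one network serves both modes."""
    if frame.ndim == 2:                       # grey-scale night-vision frame
        return np.repeat(frame[:, :, None], 3, axis=2)
    return frame                              # already a 3-channel colour frame

colour = np.zeros((4, 4, 3), dtype=np.uint8)
grey = np.zeros((4, 4), dtype=np.uint8)
print(to_three_channels(colour).shape, to_three_channels(grey).shape)
# (4, 4, 3) (4, 4, 3)
```

Training on grey-scale copies of the colour pattern images, then applying the same normalisation at inference, keeps the day and night pipelines identical.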
The camera is placed at a suitable position so that it captures video or images of the dangerous area to be monitored.
The person wears the garment shown in fig. 12, which is either printed with one of the specified patterns or carries a sticker of one of the specified patterns attached to it.
As shown in fig. 5, when a specified pattern appears in the image captured by the camera, a person is considered to have entered the dangerous area. The image analysis module sends information through the communication module, over a Wi-Fi network or a Bluetooth signal, to the mobile phone and/or bracelet of the relevant personnel, which vibrates to alert them.
Example 2:
as shown in fig. 3, the image acquisition module hardware is a camera built around a HiSilicon Hi3516CV500 or Hi3516DV300 chip, and the image analysis module runs its computation on that chip.
The recognition algorithm of the image analysis module uses the YOLOv4-tiny target detection neural network shown in fig. 6. The network is trained to recognize one or more specified patterns and to localize each pattern on the picture (top-left and bottom-right coordinates). So that recognition also works in a night-vision environment, the network is trained on both colour and grey-scale images of the patterns. A person is considered detected when a specified pattern appears in the image; further, a surveillance area (e.g. a dangerous area) can be set in the video or image, and an alarm is triggered when a specified pattern appears inside that area.
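For a rectangular surveillance area, deciding whether a detected pattern is inside it reduces to a point-in-rectangle test on the centre of the detection's bounding box. A minimal sketch; the rectangle representation and names are illustrative assumptions, not part of the patent:

```python
from typing import Tuple

Rect = Tuple[int, int, int, int]  # (left, top, right, bottom) in pixels

def in_area(box: Rect, area: Rect) -> bool:
    """True if the centre of a detected pattern's bounding box
    lies inside the rectangular surveillance area."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    left, top, right, bottom = area
    return left <= cx <= right and top <= cy <= bottom

danger_zone = (100, 100, 400, 300)
print(in_area((150, 120, 200, 180), danger_zone))  # True: centre (175, 150)
print(in_area((10, 10, 40, 40), danger_zone))      # False: centre (25, 25)
```

Using the box centre rather than its corners avoids triggering the alarm when a pattern merely overlaps the edge of the zone; a stricter deployment could require the whole box to be contained instead.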
The camera is placed at a suitable position, and the position of the dangerous area is configured in the system.
The person to be monitored wears a garment printed with one of the specified patterns.
As shown in fig. 5, the video or images captured by the camera are analysed. When the image analysis module detects a pattern inside the configured dangerous area, it sends information through the communication module, over a Wi-Fi network or a Bluetooth signal, to the mobile phone and/or bracelet of the relevant personnel, which vibrates so that they see the information.
This embodiment uses a target detection algorithm, which outputs each detected pattern and its position in the picture. As shown in fig. 7, the picture contains 4 pattern instances of 3 different patterns, predefined in the algorithm model as classes 1, 2 and 3. (x1, y1) and (x2, y2) are the top-left and bottom-right corner coordinates of the pattern in the first row, first column; (x3, y3) and (x4, y4) those of the pattern in the first row, second column; (x5, y5) and (x6, y6) those of the pattern in the second row, first column; (x7, y7) and (x8, y8) those of the pattern in the second row, second column. The algorithm model then recognizes the picture and outputs (1, x1, y1, x2, y2) (2, x3, y3, x4, y4) (3, x5, y5, x6, y6) (1, x7, y7, x8, y8).
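The output format above, one (class, x1, y1, x2, y2) tuple per detection, can be parsed and grouped by pattern class as follows. The coordinate values here are illustrative stand-ins for the symbolic x1..y8 of fig. 7:

```python
from collections import defaultdict

# Example model output in the patent's (class, x1, y1, x2, y2) format:
# four detections, three distinct pattern classes, class 1 appearing twice.
detections = [(1, 10, 10, 50, 50), (2, 60, 10, 100, 50),
              (3, 10, 60, 50, 100), (1, 60, 60, 100, 100)]

def group_by_class(dets):
    """Map each pattern class to the list of its bounding boxes."""
    groups = defaultdict(list)
    for cls, x1, y1, x2, y2 in dets:
        groups[cls].append((x1, y1, x2, y2))
    return dict(groups)

groups = group_by_class(detections)
print(sorted(groups))  # [1, 2, 3]
print(len(groups[1]))  # 2: class 1 appears twice, as in fig. 7
```

Grouping by class is what later lets the system map each detection back to an identity or post.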
If the output for a camera frame contains one or more of the specified pattern results, it is determined that someone has entered the dangerous zone.
Example 3:
as shown in fig. 8, the image acquisition module hardware is a camera, and the image analysis module runs on an edge computing box (e.g. a Firefly EC-A3399C), which performs its computation.
The recognition algorithm of the image analysis module uses the MobileNetV3 image classification neural network shown in fig. 4. The network is trained to recognize one or more specified patterns. So that recognition also works in a night-vision environment, the network is trained on both colour and grey-scale images of the patterns. Because the network is a classifier, a person is considered recognized as soon as a pattern appears in the picture.
The camera is placed at a suitable position so that it captures video or images of the dangerous area to be monitored.
The person wears the garment shown in fig. 12, which is either printed with one of the specified patterns or carries a sticker of one of the specified patterns attached to it.
As shown in fig. 9, the camera transmits the captured video to the edge computing box, and the image analysis module analyses it using the box's computing power. When a specified pattern is identified, the image analysis module sends information through the communication module, over a Wi-Fi network or a Bluetooth signal, to the mobile phone and/or bracelet of the relevant personnel, which vibrates to alert them.
Example 4:
as shown in fig. 8, the image acquisition module hardware is a camera, and the image analysis module runs on an edge computing box (e.g. a Firefly EC-A3399C), which performs its computation.
The recognition algorithm of the image analysis module uses the YOLOv4-tiny target detection neural network shown in fig. 6. The network is trained to recognize one or more specified patterns and to localize each pattern on the picture. So that recognition also works in a night-vision environment, the network is trained on both colour and grey-scale images of the patterns. A person is considered detected when a specified pattern appears in the image. Furthermore, several operating areas can be set in the video, each with the patterns allowed to appear in it; when a pattern that does not belong to an area appears inside it, a violating person is considered to have entered, and an alarm is raised.
The camera is placed at a suitable position, the operating areas to be monitored are configured, and the patterns allowed in each operating area are set.
The monitored persons wear clothing printed with different specified patterns according to their posts.
As shown in fig. 9, the camera transmits the captured video to the edge computing box, and the image analysis module analyses it using the box's computing power. When the image analysis module detects a pattern inside an operating area where that pattern is not allowed, it sends information through the communication module, over a Wi-Fi network or a Bluetooth signal, to the mobile phone and/or bracelet of the relevant personnel, which vibrates to alert them to the violation.
This embodiment uses the target detection algorithm shown in fig. 6, which outputs the class number of each detected pattern and its position in the image. As shown in fig. 7, 4 pattern instances of 3 different patterns are detected in the picture, predefined in the algorithm model as classes 1, 2 and 3. (x1, y1) and (x2, y2) are the top-left and bottom-right corner coordinates of the pattern in the first row, first column; (x3, y3) and (x4, y4) those of the pattern in the first row, second column; (x5, y5) and (x6, y6) those of the pattern in the second row, first column; (x7, y7) and (x8, y8) those of the pattern in the second row, second column. The algorithm model then recognizes the picture and outputs (1, x1, y1, x2, y2) (2, x3, y3, x4, y4) (3, x5, y5, x6, y6) (1, x7, y7, x8, y8).
If the output for a camera frame shows one or more specified patterns inside an area where those patterns are not allowed, a violating person is considered to have entered, and warning information is issued.
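The per-area permission rule of this embodiment, where each operating area lists the pattern classes (posts) allowed inside it, can be sketched as a simple lookup. The area names and class numbers below are illustrative assumptions, not taken from the patent:

```python
# Which pattern classes (i.e. which posts) may appear in each operating area.
allowed = {
    "assembly_line": {1, 2},   # e.g. operators and supervisors
    "control_room": {3},       # e.g. control-room staff only
}

def violations(area: str, detected_classes) -> list:
    """Return the detected pattern classes that are not allowed
    in the given operating area (empty list means no violation)."""
    return sorted(set(detected_classes) - allowed.get(area, set()))

print(violations("control_room", [1, 3]))   # [1]: class 1 must not be here
print(violations("assembly_line", [1, 2]))  # []: no violation
```

An unknown area defaults to an empty allowed set, so every detection there counts as a violation; a real deployment would decide that default policy explicitly.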
Example 5:
as shown in fig. 10, the image acquisition module hardware adopts a network camera; the image analysis module is arranged on the cloud server, and the camera transmits the captured video or image to the cloud server.
The recognition algorithm of the image analysis module uses the MobileNetV3 image classification neural network shown in fig. 4. The network is trained to recognize one or more specified patterns. So that recognition also works in a night-vision environment, the network is trained on both colour and grey-scale images of the patterns. Because the network is a classifier, a person is considered detected as soon as a pattern appears in the picture.
The camera is placed at a suitable position so that it captures video or images of the dangerous area to be monitored.
The person wears the garment shown in fig. 12, which is either printed with one of the specified patterns or carries a sticker of one of the specified patterns attached to it.
As shown in fig. 11, the camera transmits the captured video to the cloud server, where the image analysis module analyses it. When a specified pattern is detected, a person is considered to have entered the dangerous area; information is transmitted over a GPRS wireless communication network to the mobile phone and/or bracelet of the relevant personnel, which vibrates to alert them that someone has entered the dangerous area.
Example 6:
as shown in fig. 10, the image acquisition module hardware is a network camera; the image analysis module is deployed on a cloud server, and the camera transmits the captured video or images to the cloud server.
The recognition algorithm of the image analysis module uses the YOLOv4-tiny target detection neural network shown in fig. 6. The network is trained to recognize one or more specified patterns and to localize each pattern on the picture (top-left and bottom-right coordinates). So that recognition also works in a night-vision environment, the network is trained on both colour and grey-scale images of the patterns. A person is considered detected when a specified pattern appears in the image; further, a surveillance area (e.g. a dangerous area) can be set in the video or image, and an alarm is triggered when a specified pattern appears inside that area.
The camera is placed at a suitable position, and the position of the dangerous area is configured in the system.
The monitored person wears a garment printed with one of the designated patterns.
As shown in fig. 11, the camera transmits the captured video to the cloud, where the image analysis module analyses it. When the image analysis module detects a pattern inside the configured dangerous area, information is transmitted over a GPRS wireless communication network to the mobile phone and/or bracelet of the relevant personnel, which vibrates so that they see the information.
The target detection algorithm used in this embodiment, shown in fig. 6, outputs the class number of each detected pattern and its position in the picture. As in fig. 7, 4 pattern instances of 3 different patterns are detected in the picture, predefined in the algorithm model as classes 1, 2 and 3. (x1, y1) and (x2, y2) are the top-left and bottom-right corner coordinates of the pattern in the first row, first column; (x3, y3) and (x4, y4) those of the pattern in the first row, second column; (x5, y5) and (x6, y6) those of the pattern in the second row, first column; (x7, y7) and (x8, y8) those of the pattern in the second row, second column. The algorithm model then recognizes the picture and outputs (1, x1, y1, x2, y2) (2, x3, y3, x4, y4) (3, x5, y5, x6, y6) (1, x7, y7, x8, y8).
A person is considered to have entered the dangerous zone if the output for a camera frame contains one or more of the specified pattern results.
The above embodiments of the invention have the following beneficial effects:
1. Even when a person is largely occluded, the position of the monitored person can be detected from the camera picture.
2. When several people appear in the video at the same time, the system can tell exactly who the object of interest is.
3. The identities and posts of different persons can be distinguished by the predefined patterns, without needing the personal information of the monitored persons.

Claims (10)

1. A person detection device comprising an image acquisition module and an image analysis module, characterized in that the image analysis module receives video or images from the image acquisition module, identifies whether a preset pattern is present, and judges that a person is detected if the preset pattern is detected.
2. The person detection device according to claim 1, wherein the position of the pattern is the position of the person.
3. The person detection device according to claim 1, characterized in that different preset patterns represent different identities of persons, and the patterns may be different graphics and/or numbers and/or letters and/or written characters and/or animal or plant motifs.
4. The person detection device according to claim 1, comprising a communication module, wherein the image analysis module sends alarm information to the outside through the communication module when it detects that the preset pattern is outside the preset area.
5. The person detection device according to claim 1, comprising a communication module, wherein the image analysis module sends information to the outside through the communication module when it detects that the preset pattern is inside the preset area.
6. The person detection device according to claim 1, wherein the preset pattern is made separately from the clothing and is fixed to the clothing by inlaying, sewing, gluing or snap fasteners.
7. A person detection method using the person detection device of claim 1, wherein the detection process comprises the following steps:
the clothing worn by a person includes at least one preset pattern;
the image acquisition module of the detection device captures video or images of a monitored area;
the image analysis module of the detection device receives the video or images from the image acquisition module and identifies whether a preset pattern is present;
if the image analysis module detects a preset pattern, the presence of a person is determined, and the position of the pattern is the position of the person.
8. The person detection method according to claim 7, wherein different preset patterns represent the identities of different persons, and the patterns may be different graphics and/or numbers and/or letters and/or written characters and/or animal or plant motifs.
9. The person detection method according to claim 7, wherein the detection device comprises a communication module, and in step 704, if the image analysis module detects that the preset pattern is inside the preset area, it sends information to a mobile phone and/or a bracelet through the communication module.
10. The person detection method according to claim 7, wherein the detection device comprises a communication module, and in step 704, if the image analysis module detects that the preset pattern is outside the preset area, it sends information to a mobile phone and/or a bracelet through the communication module.
CN202110781190.5A 2021-07-10 2021-07-10 Personnel detection device and detection method Pending CN115661854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110781190.5A CN115661854A (en) 2021-07-10 2021-07-10 Personnel detection device and detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110781190.5A CN115661854A (en) 2021-07-10 2021-07-10 Personnel detection device and detection method

Publications (1)

Publication Number Publication Date
CN115661854A true CN115661854A (en) 2023-01-31

Family

ID=85014975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110781190.5A Pending CN115661854A (en) 2021-07-10 2021-07-10 Personnel detection device and detection method

Country Status (1)

Country Link
CN (1) CN115661854A (en)


Legal Events

Date Code Title Description
PB01 Publication