CN110796032A - Video fence based on human body posture assessment and early warning method - Google Patents


Info

Publication number
CN110796032A
CN110796032A
Authority
CN
China
Prior art keywords
ankle
key point
video
shooting range
early warning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910965361.2A
Other languages
Chinese (zh)
Inventor
李坚
黄进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yutuo Technology Co Ltd
Original Assignee
Shenzhen Yutuo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yutuo Technology Co Ltd filed Critical Shenzhen Yutuo Technology Co Ltd
Priority to CN201910965361.2A priority Critical patent/CN110796032A/en
Publication of CN110796032A publication Critical patent/CN110796032A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Abstract

The invention relates to a video fence based on human body posture assessment and an early warning method. The early warning method comprises the following steps: S1, setting a video shooting range and defining a no-entry area within the shooting range; S2, acquiring the coordinate information and confidence of the ankle key points of persons in the shooting range; S3, judging whether the obtained ankle coordinates are in the no-entry area, and if an ankle key point is found in the no-entry area, proceeding to step S4; S4, sending an early warning signal. By locating a person's ankle coordinates with machine vision recognition, the video fence judges more accurately whether a person has entered a designated area in the video, and can still recognize entry into a designated no-entry area even when parts of the body other than the feet are partially occluded.

Description

Video fence based on human body posture assessment and early warning method
Technical Field
The invention relates to the technical field of machine vision recognition, in particular to a video fence based on human body posture assessment and an early warning method.
Background
At present, security cameras are widely used, especially on construction sites with complex environments and numerous personnel.
A worksite may contain areas where access is restricted or entry is forbidden altogether, because hazardous materials are stored there or accidents are likely to occur. A security monitoring camera is therefore required to watch such areas 24 hours a day and autonomously judge whether a person has intruded.
Most existing video recognition technologies select the position of a person in the two-dimensional image with a rectangular bounding box.
Because the traditional method identifies the position of the whole human body on the image, selects the body with a rectangular frame, and projects the three-dimensional scene onto a two-dimensional plane, the person's image can overlap the framed area in the picture even when the person has not actually stepped into the corresponding ground area, leading to inaccurate judgment.
Disclosure of Invention
The invention aims to solve the technical problem of inaccurate judgment in the prior art by providing a video fence based on human body posture assessment and an early warning method.
The technical scheme adopted by the invention to solve this technical problem is as follows: a video fence early warning method based on human body posture assessment is constructed, comprising the following steps:
S1, setting a video shooting range, and defining a no-entry area in the shooting range;
S2, acquiring coordinate information and confidence of ankle key points of people in the shooting range;
S3, judging whether the obtained ankle coordinates are in the no-entry area, and if an ankle key point is found in the no-entry area, proceeding to step S4;
S4, sending an early warning signal.
Preferably, in step S2, information of each frame of the video image is obtained by a deep-learning convolutional neural network, and the coordinate information and confidence of the ankle key points are obtained by combining heat map (confidence map) analysis and part affinity analysis.
Preferably, in step S2, if there is a person in the shooting range, coordinate information of all human skeletal joint key points on the two-dimensional image corresponding to the shooting range, together with the confidence of each key point, can be obtained, and the coordinate information and confidence of the human ankle key points are selected from them.
Preferably, in step S1, the no-entry area is defined on the picture acquired from the shooting range, comprising the following steps:
the image frame is extracted, at least one polygon frame in the image is manually selected and set as a no-entry region, and the vertex coordinates of each polygon frame are stored as a set P = { [[x1, y1], [x2, y2], ..., [xn, yn]], [[x'1, y'1], [x'2, y'2], ..., [x'n, y'n]], ... }, where x and y are the horizontal and vertical coordinates of the polygon vertices. Each polygon frame is composed of n vertices, the vertices are connected in the order in which they appear in the set to form the polygon frame, and the no-entry region lies inside the frame.
Preferably, in step S3, a ray is emitted in the horizontal direction with the ankle key point to be judged as its endpoint; if the number of intersections between the ray and a polygon frame is even, the ankle key point is outside that no-entry region, and if the number of intersections is odd, the ankle key point is inside it.
Preferably, step S4 further includes uploading the early warning information to a cloud and displaying the real-time image of the shooting range.
A video fence based on human pose assessment, comprising:
the area defining module, configured to define a no-entry area within the shooting range according to the video shooting range;
the identification module, configured to acquire coordinate information and confidence of ankle key points of persons in the shooting range;
the judging module, configured to judge whether the obtained ankle coordinates are in the no-entry region; and
the early warning module, configured to send an early warning signal when the judging module finds that an ankle key point is in the no-entry area.
Preferably, the identification module acquires information of each frame of the video image based on a deep-learning convolutional neural network, and combines heat map (confidence map) analysis and part affinity analysis to obtain the coordinate information and confidence of the ankle key points.
Preferably, the area defining module extracts an image frame from the shot video, lets an operator manually frame at least one polygon in the image to set it as a no-entry area, and generates the coordinate information of the no-entry area;
the judging module emits a ray in the horizontal direction with the ankle key point to be judged as its endpoint; if the number of intersections between the ray and a polygon frame is even, the ankle key point is outside the no-entry area, and if the number of intersections is odd, the ankle key point is inside the no-entry area.
Preferably, when the judging module finds that an ankle key point has stayed in the no-entry area for more than a specific time, the early warning module sends the coordinate information of the ankle key point on the image, the time at which the ankle key point appeared in the no-entry area, and the image shot at that moment to the cloud, and sends out an early warning signal.
The video fence and early warning method based on human body posture assessment have the following beneficial effects: by locating a person's ankle coordinates through machine vision recognition, the video fence judges more accurately whether a person has entered a designated area in the video, and can still recognize entry into a designated no-entry area even when parts of the body other than the feet are partially occluded.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a block diagram of a video fence based on human pose estimation according to an embodiment of the present invention;
FIG. 2 is a schematic view of monitoring when an ankle of a person is present in the no-entry area;
FIG. 3 is a schematic diagram of monitoring when key points of ankle and knee of a human body are extracted;
FIG. 4 is a schematic view of monitoring when human bones are inferred and ankle key points are extracted;
fig. 5 is a schematic diagram of using the ray method to determine whether an ankle key point is in a no-entry region;
FIG. 6 is a flow chart of a video fence pre-warning method based on human body posture assessment;
fig. 7 is a schematic diagram of a frame-selected no-entry region.
Detailed Description
For a more clear understanding of the technical features, objects and effects of the present invention, embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
As shown in fig. 1, the video fence based on human body posture assessment in a preferred embodiment of the present invention includes an area defining module 1, a recognition module 2, a judgment module 3, and an early warning module 4.
Referring to fig. 2, the area defining module 1 is configured to define a no-entry area B within the video shooting range A: the shooting direction, shooting angle, focal length, and the like are adjusted according to the monitoring requirement to preset the shooting range A, and the no-entry area B is then defined within the shooting range A according to that requirement.
The identification module 2 is used for acquiring the coordinate information and confidence of the ankle key points C of persons within the shooting range A. These serve as the basis for judging whether a person has entered the no-entry area B: the ankle coordinates are easy to judge and accurately reflect where the person is standing.
The judging module 3 is configured to judge whether the obtained ankle coordinate is in the no-entry region B according to the coordinate information and the confidence of the ankle key point C.
The early warning module 4 is used for sending an early warning signal when the judging module 3 finds that an ankle key point C is in the no-entry area B, so that corresponding measures can be taken.
The video fence based on human posture assessment locates a person's ankle coordinates through machine vision recognition, judges more accurately whether the person has entered a designated area in the video, and can still recognize entry into the designated no-entry area B even when parts of the body other than the feet are partially occluded.
According to the image acquired from the camera's shooting range A, the area defining module 1 lets an operator manually draw at least one polygon frame in the image and set it as a no-entry area B, then transmits the two-dimensional coordinate information of the no-entry area B on the image to the judging module 3; the number of polygon frames is not fixed.
In other embodiments, image recognition technology may instead automatically designate as a no-entry area a region delimited on the ground where dangerous articles or safety hazards are present; such a region may be marked by a physical border or coated with a special color.
The identification module 2 acquires information of each frame of the video image based on a deep-learning convolutional neural network, identifies whether a person appears in the image, and obtains the coordinate information and confidence of the person's ankle key point C by combining heat map (confidence map) analysis and part affinity analysis.
The identification module 2 first preprocesses the image, for example by scaling it, and then uses image recognition to identify whether a person is within the specified area; in this scheme, the person's ankle is identified in the image by locating the ankle key point C.
The identification process uses a human body posture estimation method: first, a Convolutional Neural Network (CNN) model is used to extract information from the image, and here the MobileNet_v2 network model (a lightweight convolutional neural network) is selected.
Compared with various existing models such as Inception, VGG, and ResNet (Residual Neural Network), the MobileNet series of network models applies depthwise separable convolution (Depth-wise Separable Convolution), which greatly reduces the parameter count. The computation required to process each picture therefore drops sharply, the analysis of each video frame is accelerated, the frame rate of the recognition result does not fall too low, and the recognition process stays real-time.
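As a back-of-the-envelope sketch (not taken from the patent) of why depthwise separable convolution shrinks the parameter count, compare the two factorizations for a single layer:

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel,
    # then a 1 x 1 pointwise convolution mixing the channels.
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 128, 128)                   # 147456 parameters
separable = depthwise_separable_params(3, 128, 128)   # 17536 parameters
print(f"reduction factor: {standard / separable:.1f}x")
```

For a typical 3x3 layer with 128 channels in and out, the separable form uses roughly 8x fewer parameters, which is the source of the per-frame speedup claimed above.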
As shown in fig. 3, a group of feature maps obtained through the MobileNet_v2 model is fed into two CNN branches, which respectively extract confidence maps (Part Confidence Maps) for the positions of the human ankle key point C and knee key point D, and two-dimensional vector fields (Part Affinity Fields) encoding the part affinities between them.
As shown in fig. 4, all key points of a human body are then associated by greedy inference and connected in a reasonable manner to form the human skeleton. The skeleton's coordinate points are extracted, only the ankle key point C is taken, and its coordinates and confidence are recorded. Taking the ankle key point C only after the other skeletal key points have been associated makes its position more accurate and avoids misjudgment.
As shown in fig. 5, the ray method is used to determine whether a point in the captured image lies within the polygonal no-entry region B. The judging module 3 emits a ray in the horizontal direction with the ankle key point C to be judged as its endpoint and counts the intersections between the ray and the polygon's edges: if the count for a polygon frame is even, the ankle key point C is outside the no-entry region B; if it is odd, the ankle key point C is inside the no-entry region B.
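The ray method described above can be sketched as follows (a minimal sketch: the horizontal ray is cast toward +x, and edge and vertex degeneracies are ignored for brevity):

```python
def in_no_entry_area(point, polygon):
    """Ray method (step S3): cast a horizontal ray from the ankle key
    point toward +x and count crossings with the polygon's edges.
    An odd crossing count means the point is inside the no-entry area."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge straddles the ray's y level (half-open test avoids
        # double-counting an edge that ends exactly at y).
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses the horizontal line at y.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(in_no_entry_area((2, 2), square))  # True
print(in_no_entry_area((5, 2), square))  # False
```

The even/odd rule holds for any simple polygon, convex or not, which is why the patent allows arbitrarily shaped manually framed regions.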
Under the set confidence threshold, relatively credible ankle points are screened out, and the ray method is used to judge whether each such ankle key point C lies within a polygon previously framed in the two-dimensional image; if so, an early warning signal is sent.
When the judging module 3 finds that an ankle key point C has stayed in the no-entry area B for more than a specific time, such as 1 s or 5 s, which amounts to a person having entered the no-entry area B, the early warning module 4 sends the coordinate information of the ankle key point C on the image, the time at which the ankle key point C appeared in the no-entry area B, and the image shot at that moment to the cloud 5, and sends out an early warning signal.
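The dwell-time condition can be sketched as a small state tracker (a hypothetical structure; the patent does not specify how the timer is implemented):

```python
class DwellAlarm:
    """Fire a warning only after an ankle key point has stayed
    inside the no-entry area longer than `threshold` seconds."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.entered_at = None  # timestamp of the first frame inside

    def update(self, inside, now):
        """Call once per analyzed frame; returns True when the alarm fires."""
        if not inside:
            self.entered_at = None  # left the area: reset the timer
            return False
        if self.entered_at is None:
            self.entered_at = now
        return (now - self.entered_at) >= self.threshold

alarm = DwellAlarm(threshold=1.0)
print(alarm.update(True, 0.0))   # False (just entered)
print(alarm.update(True, 0.5))   # False (0.5 s < threshold)
print(alarm.update(True, 1.2))   # True  (dwelled past threshold)
```

Resetting the timer whenever the ankle leaves the area keeps a person who merely brushes the boundary from triggering the alarm, matching the "stays for more than a specific time" condition above.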
Referring to fig. 2 to 6, a video fence warning method based on human body posture estimation in a preferred embodiment of the present invention includes the following steps:
S1, setting a video shooting range A and defining a no-entry area B within the shooting range A. According to the picture shot by the camera, a designated area can be manually drawn on the picture and designated as the no-entry area B. In other embodiments, image recognition technology may instead automatically designate as a no-entry area a region delimited on the ground where dangerous articles or safety hazards are present; such a region may be marked by a physical border or coated with a special color.
S2, acquiring coordinate information and confidence of ankle key point C of people in the shooting range A;
S3, judging whether the obtained ankle coordinates are in the no-entry area B, and if an ankle key point C is found in the no-entry area B, proceeding to step S4;
and S4, sending an early warning signal.
In some embodiments, in step S1, the no-entry region B is defined on the picture acquired from the shooting range A, comprising the following steps:
an image frame is extracted, a monitoring person manually frames at least one polygon in the image and sets it as the no-entry area B, and the vertex coordinates of each polygon frame are stored as a set P = { [[x1, y1], [x2, y2], ..., [xn, yn]], [[x'1, y'1], [x'2, y'2], ..., [x'n, y'n]], ... }, where x and y are the horizontal and vertical coordinates of the polygon vertices. Each polygon frame is composed of n vertices, the vertices are connected in the order in which they appear in the set to form the polygon frame, and the no-entry area B lies inside the frame.
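The set P maps naturally onto a nested list (a sketch only; the coordinate values below are made up for illustration):

```python
# Set P: a list of polygon frames, each a list of [x, y] vertices
# connected in order; the no-entry area lies inside each frame.
P = [
    [[120, 300], [480, 300], [480, 560], [120, 560]],  # first frame (n = 4)
    [[600, 100], [760, 180], [700, 350]],              # second frame (n = 3)
]

for poly in P:
    # Connecting vertices in stored order, plus the closing edge from
    # the last vertex back to the first, yields one edge per vertex.
    edges = list(zip(poly, poly[1:] + poly[:1]))
    assert len(edges) == len(poly)
print(f"{len(P)} no-entry frames stored")
```

Storing each frame as an ordered vertex list is exactly what the point-in-polygon test needs, since that test iterates over consecutive vertex pairs.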
As shown in fig. 7, the monitoring person may manually frame the position of the no-entry area B in the image: first click the clear button to empty the original selection, then click to set the vertices of the no-entry area B; line segments are automatically connected between the vertices to complete the selection. The monitoring person may then click the submit button in the image to save the information of the no-entry area B.
In step S2, a deep-learning convolutional neural network acquires information of each frame of the video image, and heat map (confidence map) analysis and part affinity analysis are combined to obtain the coordinate information and confidence of the ankle key point C.
As shown in fig. 2 to 4, in step S2, if there is a person in the shooting range A, coordinate information of all 18 human skeletal joint key points on the two-dimensional image corresponding to the shooting range A, together with the confidence of each, can be obtained, and the coordinate information and confidence of the human ankle key point C are selected. Taking the ankle key point C only after the other skeletal key points have been associated makes its position more accurate and avoids misjudgment.
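If the 18 joints follow the OpenPose COCO-18 ordering (an assumption — the patent does not name the layout), the two ankles sit at fixed indices and can be selected like this:

```python
# OpenPose COCO-18 ordering (assumed): index 10 = right ankle, 13 = left ankle.
R_ANKLE, L_ANKLE = 10, 13

def select_ankles(keypoints, conf_thresh=0.5):
    """keypoints: list of 18 (x, y, confidence) triples for one person.
    Returns the ankle triples whose confidence clears the threshold."""
    return [keypoints[i] for i in (R_ANKLE, L_ANKLE)
            if keypoints[i][2] >= conf_thresh]

person = [(0.0, 0.0, 0.0)] * 18
person[R_ANKLE] = (210.0, 455.0, 0.91)   # right ankle, confidently detected
person[L_ANKLE] = (232.0, 458.0, 0.12)   # left ankle, occluded (low confidence)
print(select_ankles(person))  # only the right ankle survives the threshold
```

The confidence threshold here is the screening step the description mentions: an occluded ankle yields a low-confidence triple and is simply dropped rather than misjudged.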
Locating the person precisely by the ankle key point C means that even if the person's body visually overlaps the framed area in the image, the person can be judged not to have entered the no-entry area B, as long as the ankle key point C, which marks the person's actual standing position, is outside the framed area.
The ankle key points C of both the left foot and the right foot are drawn on the image, so that whether the person is standing in the no-entry area B is recognized accurately.
In step S3, a ray is emitted in the horizontal direction with the ankle key point C to be judged as its endpoint; if the number of intersections between the ray and a polygon frame is even, the key point is outside the no-entry region B, and if the number of intersections is odd, the key point is inside the no-entry region B.
Step S4 further includes uploading the early warning information to the cloud 5 and displaying the real-time image of the shooting range A.
The cloud 5 receives the early warning information, converts the video screenshot from base64 format into RGB format, displays it, and sends out alarm information.
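The cloud-side base64 decoding step can be sketched with the standard library (a sketch only; in a real system the recovered bytes would then go through an image decoder such as Pillow's `Image.open` to obtain RGB pixels — that library choice is an assumption, not named in the patent):

```python
import base64

def decode_screenshot(b64_payload: str) -> bytes:
    """Recover the raw screenshot bytes from the base64 upload (step S4)."""
    return base64.b64decode(b64_payload)

# Round trip: the cloud receives exactly the bytes the camera captured.
captured = bytes([255, 0, 0] * 4)  # four red RGB pixels (made-up payload)
payload = base64.b64encode(captured).decode("ascii")
assert decode_screenshot(payload) == captured
print(len(payload), "base64 chars for", len(captured), "bytes")
```

Base64 is used because the screenshot must travel inside a text payload to the cloud; decoding is lossless, so the displayed image is identical to the one captured at the moment of intrusion.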
It is to be understood that the above-described respective technical features may be used in any combination without limitation.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A video fence early warning method based on human posture assessment, characterized by comprising the following steps:
S1, setting a video shooting range (A) and defining a no-entry area (B) in the shooting range (A);
S2, acquiring coordinate information and confidence of ankle key points (C) of persons in the shooting range (A);
S3, judging whether the obtained ankle coordinates are in the no-entry area (B), and if an ankle key point (C) is found in the no-entry area (B), proceeding to step S4;
S4, sending an early warning signal.
2. The video fence early warning method based on human body posture assessment according to claim 1, characterized in that in step S2, a deep-learning convolutional neural network obtains information of each frame of the video image, and heat map (confidence map) analysis and part affinity analysis are combined to obtain the coordinate information and confidence of the ankle key point (C).
3. The video fence early warning method based on human body posture assessment according to claim 1, characterized in that in step S2, if there is a person in the shooting range (A), coordinate information of all human skeletal joint key points on the two-dimensional image corresponding to the shooting range (A), together with the confidence of each key point, is obtained, and the coordinate information and confidence of the human ankle key point (C) are selected.
4. The video fence early warning method based on human body posture assessment according to any one of claims 1 to 3, characterized in that in step S1, the no-entry area (B) is defined on the picture obtained from the shooting range (A), comprising the following steps:
extracting an image frame, manually framing at least one polygon frame in the image and setting it as the no-entry region (B), and storing the vertex coordinates of each polygon frame as a set P = { [[x1, y1], [x2, y2], ..., [xn, yn]], [[x'1, y'1], [x'2, y'2], ..., [x'n, y'n]], ... }, where x and y are the horizontal and vertical coordinates of the polygon vertices; each polygon frame is composed of n vertices, the vertices are connected in the order in which they appear in the set to form the polygon frame, and the no-entry region (B) lies inside the frame.
5. The method of claim 4, wherein in step S3, a ray is emitted in a horizontal direction with an ankle key point (C) to be determined as an end point, if the number of intersection points of the ray and a polygonal frame is even, the ankle key point (C) to be determined is outside the forbidden region (B), and if the number of intersection points of the ray and a polygonal frame is odd, the ankle key point (C) to be determined is inside the forbidden region (B).
6. The human body posture assessment based video fence early warning method as claimed in any one of claims 1 to 3, wherein the step S4 further comprises uploading early warning information to a cloud end (5) and displaying real-time images of the shooting range (A).
7. A video fence based on human pose assessment, comprising:
a region defining module (1) for defining a no-entry area (B) within a shooting range (A) according to the video shooting range (A);
an identification module (2) for acquiring coordinate information and confidence of ankle key points (C) of persons in the shooting range (A);
a judging module (3) for judging whether the obtained ankle coordinates are in the no-entry region (B); and
an early warning module (4) for sending an early warning signal when the judging module (3) finds that an ankle key point (C) is in the no-entry area (B).
8. The video fence based on human posture assessment according to claim 7, characterized in that the identification module (2) acquires information of each frame of the video image based on a deep-learning convolutional neural network, and combines heat map (confidence map) analysis and part affinity analysis to obtain the coordinate information and confidence of the ankle key point (C).
9. The video fence based on human posture assessment according to claim 7, wherein the region delineating module (1) extracts image frames from the captured video, manually frames out at least one polygon frame in the image to set as the forbidden region (B), and generates coordinate information of the forbidden region (B);
the ankle key point (C) that judging module (3) used waiting to judge sends a ray along the horizontal direction as the endpoint, if the number of crossing of ray and a polygon frame is the even number, waits to judge ankle key point (C) is in outside forbidding into region (B), if the number of crossing of ray and a polygon frame is the odd number, wait to judge ankle key point (C) is in forbidding into in the region (B).
10. The video fence based on human body posture assessment according to claim 7, characterized in that when the judging module (3) finds that an ankle key point (C) has stayed in the no-entry area (B) for more than a certain time, the early warning module (4) sends the coordinate information of the ankle key point (C) on the image, the time at which the ankle key point (C) appeared in the no-entry area (B), and the image shot at that moment to a cloud (5), and sends out an early warning signal.
CN201910965361.2A 2019-10-11 2019-10-11 Video fence based on human body posture assessment and early warning method Pending CN110796032A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910965361.2A CN110796032A (en) 2019-10-11 2019-10-11 Video fence based on human body posture assessment and early warning method


Publications (1)

Publication Number Publication Date
CN110796032A true CN110796032A (en) 2020-02-14

Family

ID=69440277


Country Status (1)

Country Link
CN (1) CN110796032A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034124A (en) * 2018-08-30 2018-12-18 成都考拉悠然科技有限公司 A kind of intelligent control method and system
CN109934111A (en) * 2019-02-12 2019-06-25 清华大学深圳研究生院 A kind of body-building Attitude estimation method and system based on key point
CN110110657A (en) * 2019-05-07 2019-08-09 中冶赛迪重庆信息技术有限公司 Method for early warning, device, equipment and the storage medium of visual identity danger


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476277A (en) * 2020-03-20 2020-07-31 广东光速智能设备有限公司 Alarm method and system based on image recognition
CN111814646A (en) * 2020-06-30 2020-10-23 平安国际智慧城市科技股份有限公司 Monitoring method, device, equipment and medium based on AI vision
CN111814646B (en) * 2020-06-30 2024-04-05 深圳平安智慧医健科技有限公司 AI vision-based monitoring method, device, equipment and medium
CN112016528B (en) * 2020-10-20 2021-07-20 成都睿沿科技有限公司 Behavior recognition method and device, electronic equipment and readable storage medium
CN112597903A (en) * 2020-12-24 2021-04-02 珠高电气检测有限公司 Electric power personnel safety state intelligent identification method and medium based on stride measurement
CN112597903B (en) * 2020-12-24 2021-08-13 珠高电气检测有限公司 Electric power personnel safety state intelligent identification method and medium based on stride measurement
CN113229807A (en) * 2021-05-17 2021-08-10 四川大学华西医院 Human body rehabilitation evaluation device, method, electronic device and storage medium
CN113657309A (en) * 2021-08-20 2021-11-16 山东鲁软数字科技有限公司 Adocf-based method for detecting violation behaviors of crossing security fence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination