CN114120438A - Human motion posture detection method and device - Google Patents

Human motion posture detection method and device

Info

Publication number
CN114120438A
CN114120438A
Authority
CN
China
Prior art keywords
human
human motion
motion posture
human body
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110776030.1A
Other languages
Chinese (zh)
Inventor
王建勋
李员
李佳
尹哲
宋桂荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongtuo Xinyuan Technology Co ltd
Beijing Guodian Electric Power New Energy Technology Co ltd
Original Assignee
Beijing Zhongtuo Xinyuan Technology Co ltd
Beijing Guodian Electric Power New Energy Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongtuo Xinyuan Technology Co ltd, Beijing Guodian Electric Power New Energy Technology Co ltd filed Critical Beijing Zhongtuo Xinyuan Technology Co ltd
Priority to CN202110776030.1A priority Critical patent/CN114120438A/en
Publication of CN114120438A publication Critical patent/CN114120438A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a human motion posture detection method and a human motion posture detection device. The method comprises: acquiring video data to be detected; identifying a human target in the video data; extracting a plurality of video frames in which the human target is located; generating a human motion posture graph according to the posture of the human target in the video frames; calculating the similarity between the generated posture graph and the posture graphs in a human motion posture graph library, each of which is marked with a corresponding human motion posture name; judging whether the similarity is smaller than a preset similarity threshold; and, if it is, outputting the human motion posture name corresponding to the matching posture graph. The type of human motion posture can thus be quickly identified, a sudden incident involving an inspector in a booster station can be detected, and rescue measures can be taken in time.

Description

Human motion posture detection method and device
Technical Field
The invention relates to the technical field of electric power safety, in particular to a human motion posture detection method and device.
Background
In the power industry, and especially in booster stations, the environment is complex and high-voltage equipment is common. If an inspector falls suddenly and is not helped in time, secondary injury can result, so the situation must be handled immediately. The traditional approach relies on personnel watching surveillance feeds, which is inefficient and demands the full attention of the staff on duty. With the rapid development of machine learning, technology for automatically detecting human falls from video has matured. However, the detection efficiency of existing human motion posture detection remains limited and needs further improvement. It is therefore necessary to provide a human motion posture detection method and apparatus to solve the above problems.
Disclosure of Invention
The invention provides a human motion posture detection method and apparatus, aiming to solve the problem that existing human motion posture detection has limited detection efficiency and needs further improvement.
In a first aspect, the present invention provides a human motion gesture detection method, including:
acquiring video data to be detected;
identifying a human target in the video data;
extracting a plurality of video frames where the human body target is located;
generating a human motion posture graph according to the posture of the human target in the video frames;
calculating the similarity between the human body motion posture graph and human body motion posture graphs in a human body motion posture graph library, wherein the human body motion posture graphs are marked with corresponding human body motion posture names;
judging whether the similarity is smaller than a preset similarity threshold;
and if the similarity is smaller than the preset similarity threshold, outputting the human motion posture name corresponding to the matching human motion posture graph.
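The seven steps above can be outlined as a single pipeline. The sketch below is illustrative only: each stage is injected as a callable stand-in, none of the function or parameter names come from the patent, and it assumes the similarity measure is a distance, so a smaller value means a closer match.

```python
def detect_posture(video_data, library, threshold, *,
                   identify, extract_frames, build_pose_graph, similarity):
    """Compose the seven steps; every stage is a pluggable callable.

    library: dict mapping a posture name to a reference posture graph.
    Returns the matched posture name, or None when no library graph is
    within the preset similarity threshold."""
    target = identify(video_data)                # identify the human target
    frames = extract_frames(video_data, target)  # extract the video frames
    pose_graph = build_pose_graph(frames)        # generate the posture graph
    best_name, best_dist = None, float("inf")
    for name, ref_graph in library.items():      # compare against the library
        d = similarity(pose_graph, ref_graph)
        if d < best_dist:
            best_name, best_dist = name, d
    # output the posture name only when the distance is below the threshold
    return best_name if best_dist < threshold else None
```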
With reference to the first aspect, in a first implementable manner of the first aspect, extracting the plurality of video frames in which the human target is located includes:
extracting a video frame with the human body target from video data to be detected;
marking outline key points on the external outline of the human body target in each video frame;
projecting the contour key points into a coordinate system, and giving coordinate values to the contour key points;
calculating a coordinate difference value between contour key points of the external contour of the human body target in two adjacent video frames;
judging whether the coordinate difference value exceeds a preset coordinate difference value threshold value or not;
and if the coordinate difference value exceeds a preset coordinate difference value threshold value, starting with the second video frame of the two adjacent video frames, and backwards extracting a preset number of video frames.
With reference to the first implementable manner of the first aspect, in a second implementable manner of the first aspect, generating the human motion posture graph according to the posture of the human target in the video frames includes:
extracting skeleton key points of the human body target in the video frame;
drawing lines between the skeleton key points according to the structure of the human body;
and drawing a human motion posture graph according to the lines and the skeleton key points.
With reference to the first implementable manner of the first aspect, in a third implementable manner of the first aspect, in the step of calculating the similarity between the human motion posture graph and the human motion posture graphs in the human motion posture graph library, the similarity is calculated by a Euclidean distance method, a cosine distance method, or a Hamming distance method.
With reference to the first implementable manner of the first aspect, in a fourth implementable manner of the first aspect, the human motion posture names are pre-classified into different dangerous posture grades, and after the step of outputting the human motion posture name when the similarity is smaller than the preset similarity threshold, the method further includes:
identifying the dangerous posture grade corresponding to the human body motion posture name;
judging whether the dangerous posture grade exceeds a preset grade range;
and if the dangerous posture grade exceeds the preset grade range, sending an emergency rescue alarm.
In a second aspect, the present invention further provides a human motion gesture detection apparatus, including:
the acquisition unit is used for acquiring video data to be detected;
a first identification unit for identifying a human target in the video data;
the extraction unit is used for extracting a plurality of video frames where the human body target is located;
the generating unit is used for generating a human motion posture graph according to the posture of the human target in the video frame;
the calculating unit is used for calculating the similarity between the human motion posture graph and human motion posture graphs in a human motion posture graph library, wherein the human motion posture graphs are marked with corresponding human motion posture names;
the first judgment unit is used for judging whether the similarity is smaller than a preset similarity threshold;
and the output unit is used for outputting the human motion posture name corresponding to the matching human motion posture graph if the similarity is smaller than the preset similarity threshold.
With reference to the second aspect, in a first implementable manner of the second aspect, the extraction unit includes:
the extraction subunit is used for extracting the video frame with the human body target from the video data to be detected;
the marking subunit is used for marking the outline key points of the external outline of the human body target in each video frame;
the mapping subunit is used for projecting the contour key points into a coordinate system and giving coordinate values to the contour key points;
the calculating subunit is used for calculating a coordinate difference value between contour key points of the external contour of the human body target in two adjacent video frames;
the judging subunit is used for judging whether the coordinate difference value exceeds a preset coordinate difference value threshold value;
and the first extraction subunit is used for extracting a preset number of video frames backwards from the second video frame of the two adjacent video frames under the condition that the coordinate difference value exceeds a preset coordinate difference value threshold value.
With reference to the first implementable manner of the second aspect, in a second implementable manner of the second aspect, the generating unit includes:
the second extraction subunit is used for extracting the skeleton key points of the human body target in the video frame;
the first drawing subunit is used for drawing lines between the skeleton key points according to the structure of the human body;
and the second drawing subunit is used for drawing a human motion posture graph according to the lines and the skeleton key points.
With reference to the first implementable manner of the second aspect, in a third implementable manner of the second aspect, the calculation unit employs a Euclidean distance method, a cosine distance method, or a Hamming distance method for calculating the similarity.
With reference to the first implementable manner of the second aspect, in a fourth implementable manner of the second aspect, the human motion posture names are pre-classified into different dangerous posture grades, and the apparatus further includes:
the second identification unit is used for identifying the dangerous posture grade corresponding to the human motion posture name;
the second judgment unit is used for judging whether the dangerous posture grade exceeds a preset grade range;
and the alarm unit is used for sending an emergency rescue alarm when the dangerous posture grade exceeds the preset grade range.
According to the technical scheme, the human motion posture detection method and device acquire the video data to be detected, identify the human target in the video data, extract a plurality of video frames in which the human target is located, generate a human motion posture graph according to the posture of the human target in the video frames, and calculate the similarity between the generated posture graph and the posture graphs in a human motion posture graph library, each of which is marked with a corresponding human motion posture name. If the similarity is smaller than a preset similarity threshold, the human motion posture name corresponding to the matching posture graph is output. The type of human motion posture can therefore be quickly identified, a sudden incident involving personnel in the booster station can be detected, and rescue measures can be taken in time.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings used in the embodiments are briefly described below. Those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a human motion gesture detection method of the present invention.
Fig. 2 is a flowchart of an embodiment of a human motion gesture detection method according to the present invention.
Fig. 3 is a flowchart of an embodiment of a human motion gesture detection method according to the present invention.
Fig. 4 is a flowchart of an embodiment of a human motion gesture detection method according to the present invention.
Fig. 5 is a schematic diagram of the human motion gesture detection apparatus of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The technical solutions provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, the human motion posture detection method provided by the present invention includes the following steps:
step S101, video data to be detected is obtained.
Specifically, a plurality of video acquisition devices may be disposed in the booster station, and the video acquisition devices are in communication connection with the processor through a wired or wireless network, and transmit video data in the booster station to the processor.
Step S102, identifying human body targets in the video data.
And step S103, extracting a plurality of video frames where the human body target is located.
In this embodiment, as shown in fig. 2, extracting a plurality of video frames where the human body target is located includes:
step S201, extracting a video frame with the human body target from the video data to be detected.
Step S202, labeling outline key points of the external outline of the human body target in each video frame.
Specifically, contour key points may be marked from the head of the human target to both sides in sequence until marking of the contour key points is completed for the whole human target.
Step S203, projecting the outline key points into a coordinate system and giving coordinate values to the outline key points.
And step S204, calculating the coordinate difference value between the contour key points of the external contour of the human body target in the two adjacent video frames.
Specifically, the coordinate difference value includes an X-axis coordinate difference value and a Y-axis coordinate difference value.
Step S205, determining whether the coordinate difference exceeds a preset coordinate difference threshold.
Step S206, if the coordinate difference value exceeds a preset coordinate difference value threshold value, a preset number of video frames are extracted backwards by starting with the second video frame of the two adjacent video frames.
Specifically, if one or more of the X-axis coordinate difference and the Y-axis coordinate difference exceeds a preset coordinate difference threshold, it indicates that the motion posture of the target human body changes greatly at that time, and therefore, a preset number of video frames are extracted backwards starting with the second video frame of the two adjacent video frames.
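A minimal sketch of this keyframe-extraction logic (steps S201 to S206), assuming each frame has already been reduced to a list of (x, y) contour key points. The function name, the threshold, and the frame-count defaults are illustrative, not values from the patent.

```python
def extract_keyframes(frames, diff_threshold=20.0, preset_count=5):
    """frames: list of frames, each a list of (x, y) contour key points.

    Compares contour key points of adjacent frames; when either the X or
    the Y coordinate difference exceeds the preset threshold, a preset
    number of frames is extracted, starting from the second frame of the
    adjacent pair."""
    for i in range(len(frames) - 1):
        prev_pts, curr_pts = frames[i], frames[i + 1]
        for (x0, y0), (x1, y1) in zip(prev_pts, curr_pts):
            # A large jump in any key point signals a sudden posture change.
            if abs(x1 - x0) > diff_threshold or abs(y1 - y0) > diff_threshold:
                return frames[i + 1 : i + 1 + preset_count]
    return []  # no significant posture change detected
```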
And step S104, generating a human body motion posture graph according to the posture of the human body target in the video frame.
In this embodiment, as shown in fig. 3, generating the human motion posture graph according to the posture of the human target in the video frames includes:
step S301, extracting skeleton key points of the human body target in the video frame.
In particular, the skeleton key points of the human body target may comprise the shoulders (two key points), the elbows (two key points), the head (one key point), the torso (four key points), and the knees (two key points), eleven key points in total.
And step S302, drawing lines between the skeleton key points according to the structure of the human body.
And step S303, drawing a human motion posture graph according to the lines and the skeleton key points.
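The pose-graph construction of steps S301 to S303 can be sketched as follows. The key point names and the edge list connecting them are assumptions chosen to match the eleven key points named above; the patent does not specify how the lines are drawn.

```python
# Hypothetical skeleton: eleven key points (two shoulders, two elbows, one
# head, four torso points, two knees) joined by lines according to the
# structure of the human body. Names and edges are illustrative only.
SKELETON_EDGES = [
    ("head", "torso_0"),
    ("torso_0", "shoulder_l"), ("torso_0", "shoulder_r"),
    ("shoulder_l", "elbow_l"), ("shoulder_r", "elbow_r"),
    ("torso_0", "torso_1"), ("torso_1", "torso_2"), ("torso_2", "torso_3"),
    ("torso_3", "knee_l"), ("torso_3", "knee_r"),
]

def build_pose_graph(keypoints):
    """keypoints: dict mapping a key point name to its (x, y) coordinates.
    Returns the line segments (coordinate pairs) of the posture graph."""
    lines = []
    for a, b in SKELETON_EDGES:
        if a in keypoints and b in keypoints:  # skip undetected key points
            lines.append((keypoints[a], keypoints[b]))
    return lines
```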
And step S105, calculating the similarity between the human motion posture graph and the human motion posture graphs in a human motion posture graph library, wherein the library graphs are marked with corresponding human motion posture names.
Specifically, the similarity may be calculated by a Euclidean distance method, a cosine distance method, or a Hamming distance method.
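The three distance measures named for step S105 can be sketched in plain Python, together with the matching rule of steps S106 and S107 (output the library posture name only when the distance falls below the preset threshold). The flattening of a posture graph into a feature vector and all function names are illustrative assumptions.

```python
import math

def euclidean_distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm  # 0 when the vectors point the same way

def hamming_distance(u, v):
    return sum(1 for a, b in zip(u, v) if a != b)

def match_posture(query, library, threshold, distance=euclidean_distance):
    """library: dict mapping a posture name to a flattened feature vector.
    Returns the name of the closest library entry whose distance is below
    the preset similarity threshold, or None when nothing matches."""
    best_name, best_dist = None, float("inf")
    for name, vec in library.items():
        d = distance(query, vec)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist < threshold else None
```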
And step S106, judging whether the similarity is smaller than a preset similarity threshold.
And S107, if the similarity is smaller than a preset similarity threshold, outputting a human motion posture name corresponding to the human motion posture image.
Further, as shown in fig. 4, the human motion posture names are pre-classified into different dangerous posture grades, and after the step of outputting the human motion posture name when the similarity is smaller than the preset similarity threshold, the method further includes:
and S401, identifying the dangerous posture grade corresponding to the human body motion posture name.
Step S402, judging whether the dangerous posture grade exceeds a preset grade range.
And S403, if the dangerous posture grade exceeds the preset grade range, sending an emergency rescue alarm.
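The grading and alarm logic of steps S401 to S403 reduces to a small lookup, sketched below. The grade table and the preset grade range are invented for illustration; the patent does not define concrete grades.

```python
# Hypothetical grade table: each posture name carries a pre-assigned
# dangerous-posture grade. The names and numbers are illustrative only.
DANGER_GRADES = {"walking": 0, "crouching": 1, "lying_down": 2, "fallen": 3}

def check_and_alarm(posture_name, max_safe_grade=1):
    """Returns True when an emergency rescue alarm should be sent, i.e.
    when the posture's dangerous-posture grade exceeds the preset range."""
    grade = DANGER_GRADES.get(posture_name, 0)  # unknown postures: grade 0
    return grade > max_safe_grade
```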
According to this embodiment, the human motion posture detection method acquires the video data to be detected, identifies the human target in the video data, extracts a plurality of video frames in which the human target is located, generates a human motion posture graph according to the posture of the human target in the video frames, and calculates the similarity between the generated posture graph and the posture graphs in a human motion posture graph library, each of which is marked with a corresponding human motion posture name. If the similarity is smaller than a preset similarity threshold, the human motion posture name corresponding to the matching posture graph is output. The type of human motion posture can therefore be quickly identified, a sudden incident involving inspection personnel in a booster station can be detected, and rescue measures can be taken in time.
As shown in fig. 5, the present invention also provides a human motion gesture detection apparatus, comprising:
an obtaining unit 51, configured to obtain video data to be detected.
A first recognition unit 52 for recognizing the human target in the video data.
The extracting unit 53 is configured to extract a plurality of video frames where the human body target is located.
And the generating unit 54 is used for generating a human motion posture graph according to the posture of the human target in the video frame.
And the calculating unit 55 is configured to calculate similarity between the human motion posture diagram and a human motion posture diagram in a human motion posture diagram library, where the human motion posture diagram is marked with a corresponding human motion posture name.
The first determining unit 56 is configured to determine whether the similarity is smaller than a preset similarity threshold.
And the output unit 57 is configured to output the human motion posture name corresponding to the human motion posture diagram if the similarity is smaller than a preset similarity threshold.
In this embodiment, the extraction unit includes:
and the extraction subunit is used for extracting the video frame with the human body target from the video data to be detected.
And the marking subunit is used for marking the outline key points of the external outline of the human body target in each video frame.
And the mapping subunit is used for projecting the contour key points into a coordinate system and endowing coordinate values to the contour key points.
And the calculating subunit is used for calculating the coordinate difference value between the contour key points of the external contour of the human body target in the two adjacent video frames.
And the judging subunit is used for judging whether the coordinate difference value exceeds a preset coordinate difference value threshold value.
And the first extraction subunit is used for extracting a preset number of video frames backwards from the second video frame of the two adjacent video frames under the condition that the coordinate difference value exceeds a preset coordinate difference value threshold value.
In this embodiment, the generating unit includes:
and the second extraction subunit is used for extracting the bone key points of the human body target in the video frame.
And the first drawing subunit is used for drawing lines among the bone key points according to the human body composition.
And the second drawing subunit is used for drawing a human motion posture graph according to the line and the bone key points.
In this embodiment, the calculation unit calculates the similarity by a Euclidean distance method, a cosine distance method, or a Hamming distance method.
In this embodiment, the names of the human motion postures are pre-classified into different dangerous posture grades, and the device further includes:
and the second identification unit is used for identifying the dangerous posture grade corresponding to the human motion posture name.
And the second judgment unit is used for judging whether the dangerous posture grade exceeds a preset grade range.
And the alarm unit is used for sending an emergency rescue alarm when the dangerous posture grade exceeds the preset grade range.
An embodiment of the present invention further provides a storage medium, and a computer program is stored in the storage medium, and when the computer program is executed by a processor, the computer program implements part or all of the steps in each embodiment of the human motion posture detection method provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a Read-only memory (ROM) or a Random Access Memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, for the embodiment of the human motion gesture detection device, since it is basically similar to the method embodiment, the description is simple, and the relevant points can be referred to the description in the method embodiment.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (10)

1. A human motion gesture detection method is characterized by comprising the following steps:
acquiring video data to be detected;
identifying a human target in the video data;
extracting a plurality of video frames where the human body target is located;
generating a human motion posture graph according to the posture of the human target in the video frames;
calculating the similarity between the human body motion posture graph and human body motion posture graphs in a human body motion posture graph library, wherein the human body motion posture graphs are marked with corresponding human body motion posture names;
judging whether the similarity is smaller than a preset similarity threshold;
and if the similarity is smaller than the preset similarity threshold, outputting the human motion posture name corresponding to the matching human motion posture graph.
2. The method of claim 1, wherein extracting the plurality of video frames in which the human target is located comprises:
extracting a video frame with the human body target from video data to be detected;
marking outline key points on the external outline of the human body target in each video frame;
projecting the contour key points into a coordinate system, and giving coordinate values to the contour key points;
calculating a coordinate difference value between contour key points of the external contour of the human body target in two adjacent video frames;
judging whether the coordinate difference value exceeds a preset coordinate difference value threshold value or not;
and if the coordinate difference value exceeds a preset coordinate difference value threshold value, starting with the second video frame of the two adjacent video frames, and backwards extracting a preset number of video frames.
3. The method of claim 1, wherein generating the human motion posture graph from the posture of the human target in the video frames comprises:
extracting skeleton key points of the human body target in the video frame;
drawing lines between the skeleton key points according to the structure of the human body;
and drawing a human motion posture graph according to the lines and the skeleton key points.
4. The method according to claim 1, wherein in the step of calculating the similarity between the human motion posture graph and the human motion posture graphs in the human motion posture graph library, the similarity is calculated by a Euclidean distance method, a cosine distance method, or a Hamming distance method.
5. The method according to claim 1, wherein the human motion posture names are pre-classified into different dangerous posture grades, and after the step of outputting the human motion posture name when the similarity is smaller than the preset similarity threshold, the method further comprises:
identifying the dangerous posture grade corresponding to the human body motion posture name;
judging whether the dangerous posture grade exceeds a preset grade range;
and if the dangerous posture grade exceeds the preset grade range, sending an emergency rescue alarm.
6. A human motion gesture detection device, comprising:
the acquisition unit is used for acquiring video data to be detected;
a first identification unit for identifying a human target in the video data;
the extraction unit is used for extracting a plurality of video frames where the human body target is located;
the generating unit is used for generating a human motion posture graph according to the posture of the human target in the video frame;
the calculating unit is used for calculating the similarity between the human motion posture graph and human motion posture graphs in a human motion posture graph library, wherein the human motion posture graphs are marked with corresponding human motion posture names;
the first judgment unit is used for judging whether the similarity is smaller than a preset similarity threshold;
and the output unit is used for outputting the human motion posture name corresponding to the matching human motion posture graph if the similarity is smaller than the preset similarity threshold.
7. The apparatus of claim 6, wherein the extraction unit comprises:
the extraction subunit is used for extracting the video frame with the human body target from the video data to be detected;
the marking subunit is used for marking the outline key points of the external outline of the human body target in each video frame;
the mapping subunit is used for projecting the contour key points into a coordinate system and giving coordinate values to the contour key points;
the calculating subunit is used for calculating a coordinate difference value between contour key points of the external contour of the human body target in two adjacent video frames;
the judging subunit is used for judging whether the coordinate difference value exceeds a preset coordinate difference value threshold value;
and the first extraction subunit is used for extracting a preset number of video frames backwards from the second video frame of the two adjacent video frames under the condition that the coordinate difference value exceeds a preset coordinate difference value threshold value.
8. The apparatus of claim 6, wherein the generating unit comprises:
the second extraction subunit is used for extracting the skeleton key points of the human body target in the video frame;
the first drawing subunit is used for drawing lines between the skeleton key points according to the structure of the human body;
and the second drawing subunit is used for drawing a human motion posture graph according to the lines and the skeleton key points.
9. The apparatus according to claim 6, wherein the calculation unit calculates the similarity by using an Euclidean distance method, a cosine distance method, or a Hamming distance method.
10. The apparatus of claim 6, wherein the human motion gesture names are pre-classified into different dangerous gesture classes, the apparatus further comprising:
the second identification unit is used for identifying the dangerous posture grade corresponding to the human motion posture name;
the second judgment unit is used for judging whether the dangerous posture grade exceeds a preset grade range;
and the alarm unit is used for sending an emergency rescue alarm when the dangerous posture grade exceeds the preset grade range.
CN202110776030.1A 2022-01-11 2022-01-11 Human motion posture detection method and device Pending CN114120438A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110776030.1A CN114120438A (en) 2022-01-11 2022-01-11 Human motion posture detection method and device


Publications (1)

Publication Number Publication Date
CN114120438A 2022-03-01

Family

ID=80359440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110776030.1A Pending CN114120438A (en) 2022-01-11 2022-01-11 Human motion posture detection method and device

Country Status (1)

Country Link
CN (1) CN114120438A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117011946A (en) * 2023-10-08 2023-11-07 武汉海昌信息技术有限公司 Unmanned rescue method based on human behavior recognition
CN117011946B (en) * 2023-10-08 2023-12-19 武汉海昌信息技术有限公司 Unmanned rescue method based on human behavior recognition

Similar Documents

Publication Publication Date Title
CN110543800B (en) Target recognition tracking method and device for pod and pod
CN113191699A (en) Power distribution construction site safety supervision method
CN104156819B (en) Method for correcting error and device are observed in a kind of key post unsafe acts automatically
CN110135290B (en) Safety helmet wearing detection method and system based on SSD and AlphaPose
CN109034418B (en) Operation site information transmission method and system
CN110929646A (en) Power distribution tower reverse-off information rapid identification method based on unmanned aerial vehicle aerial image
CN112633343A (en) Power equipment terminal strip wiring checking method and device
CN114120438A (en) Human motion posture detection method and device
CN107818563A (en) A kind of transmission line of electricity bundle spacing space measurement and localization method
AU2021203869A1 (en) Methods, devices, electronic apparatuses and storage media of image processing
CN106682579B (en) Unmanned aerial vehicle binocular vision image processing system for detecting icing of power transmission line
CN110991292A (en) Action identification comparison method and system, computer storage medium and electronic device
CN114022845A (en) Real-time detection method and computer readable medium for electrician insulating gloves
CN111695404B (en) Pedestrian falling detection method and device, electronic equipment and storage medium
CN112163113A (en) Real-time monitoring system for high-voltage combined frequency converter
US11776143B2 (en) Foreign matter detection device, foreign matter detection method, and program
Chang et al. Safety risk assessment of electric power operation site based on variable precision rough set
CN115392407B (en) Non-supervised learning-based danger source early warning method, device, equipment and medium
CN112487924A (en) Method and device for distinguishing accidental fall of human body from video based on deep learning
CN116002480A (en) Automatic detection method and system for accidental falling of passengers in elevator car
CN113762115B (en) Distribution network operator behavior detection method based on key point detection
CN115641607A (en) Method, device, equipment and storage medium for detecting wearing behavior of power construction site operator
CN111898564B (en) Time sequence convolution network model, model training method and device and target recognition method and device
CN112949606A (en) Method and device for detecting wearing state of industrial garment, storage medium and electronic device
CN111194004B (en) Base station fingerprint positioning method, device and system and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination