CN117152844A - High-integrity worker construction attitude detection method and system based on computer vision - Google Patents


Info

Publication number
CN117152844A
CN117152844A (application CN202311243544.6A)
Authority
CN
China
Prior art keywords
worker
skeleton
network model
construction
integrity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311243544.6A
Other languages
Chinese (zh)
Inventor
刘正劼
曹益彰
吴浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202311243544.6A priority Critical patent/CN117152844A/en
Publication of CN117152844A publication Critical patent/CN117152844A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a high-integrity worker construction posture detection method and system based on computer vision. The method comprises the following steps: S1, acquiring image data of the construction worker to be detected in real time; S2, inputting the image data into a pre-constructed skeleton classification network model and extracting worker skeleton information; S3, judging whether the worker skeleton information is complete: if so, executing step S5 for posture classification; if not, executing step S4 to complete the skeleton information; S4, inputting the incomplete worker skeleton information into a pre-trained skeleton completion network model and outputting complete worker skeleton information; S5, inputting the complete worker skeleton information into a pre-constructed posture recognition network model and outputting the worker's construction posture. Compared with the prior art, the invention improves posture recognition accuracy and stability.

Description

High-integrity worker construction attitude detection method and system based on computer vision
Technical Field
The invention relates to the field of artificial intelligence, in particular to a high-integrity worker construction posture detection method and system based on computer vision.
Background
With the rapid development of artificial intelligence, hardware such as GPUs (graphics processing units) iterates continuously and computing power steadily increases, so artificial intelligence techniques are being applied ever more widely across engineering. Many artificial intelligence algorithms, such as convolutional neural networks and recurrent neural networks, are already applied in civil engineering construction.
The construction actions and construction state of workers are of significant importance in civil engineering: by recognizing a worker's construction state, one can judge whether the worker is exposed to safety risks and whether the operation is standard-compliant, and thereby track construction progress more effectively.
In existing construction organization management, site managers usually rely on manual monitoring, but because managers are limited in number and energy, oversights are inevitable. Conventional detection techniques such as YOLO can only identify the worker as such; they cannot identify specific actions and construction states.
Existing worker construction posture detection based solely on the OpenPose algorithm suffers from poor recognition stability, poor integrity of skeleton joints, and similar problems.
Disclosure of Invention
The invention aims to provide a high-integrity worker construction posture detection method and system based on computer vision, which can improve posture recognition accuracy.
The aim of the invention can be achieved by the following technical scheme:
a high-integrity worker construction attitude detection method based on computer vision comprises the following steps:
s1, acquiring image data of constructors to be tested in real time;
s2, inputting the image data of the constructor to be detected into a pre-constructed skeleton classification network model, and extracting worker skeleton information;
s3, judging whether the worker skeleton information is complete, if so, executing the step S5 to classify the gestures, and if not, executing the step S4 to complete the worker skeleton information;
s4, inputting incomplete worker skeleton information into a pre-trained skeleton completion network model, and outputting complete worker skeleton information;
s5, inputting the complete worker skeleton information into a pre-constructed gesture recognition network model, and outputting the construction gesture of the worker.
Further, the construction process of the skeleton classification network model specifically comprises the following steps:
acquiring image datasets of different construction postures of workers;
labeling each node of workers in the image data set;
and inputting the marked image data set into the CNN network model for training to construct a skeleton classification network model.
Further, the noted image dataset is normalized and enhanced prior to input to the CNN network model.
Further, OpenPose is used to label each joint point.
Further, the joint points include a head, a neck, a right shoulder, a right elbow, a right hand, a left shoulder, a left elbow, a left hand, a right leg, a right knee, a right foot, a left leg, a left knee, a left foot, a right eye, a left eye, a right ear, and a left ear.
Further, a CGAN network model is adopted to construct the framework complement network model.
Further, the gesture recognition network model is constructed by adopting a CNN network model.
Further, the worker construction poses include standing, sitting, running, bending, squatting, and lifting.
Further, the method also comprises the step of visually displaying the construction posture of the worker.
The invention also provides a high-integrity worker construction attitude detection system based on computer vision, which comprises:
an image acquisition module: the method is used for acquiring image data of constructors to be tested in real time;
and the skeleton information extraction module is used for: the method comprises the steps of inputting the image data of constructors into a pre-constructed skeleton classification network model, and extracting worker skeleton information;
and a judging module: the method is used for judging whether the worker skeleton information is complete;
and (3) a complement module: the method comprises the steps of inputting incomplete worker skeleton information into a pre-trained skeleton completion network model, and outputting complete worker skeleton information;
and an output module: the method is used for inputting complete worker skeleton information into a pre-constructed gesture recognition network model and outputting worker construction gestures.
Compared with the prior art, the invention has the following beneficial effects:
(1) Compared with the traditional approach of recognizing skeleton joints with the OpenPose framework alone, the invention introduces a skeleton completion network model that completes incomplete skeleton information and generates a complete skeleton similar to the original one, so a more complete skeleton recognition result can be output and the accuracy of construction posture recognition is improved.
(2) The invention can obtain complete skeleton information for workers on the construction site, providing higher stability for the subsequent posture recognition process.
(3) Based on artificial intelligence theory, the invention accurately monitors the construction state of workers on site with high integrity, reducing the manpower required of supervision personnel such as safety officers during construction.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a diagram of the human body joint point of the present invention;
fig. 3 is a diagram of the construction gesture recognition result of the worker according to the present invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples. The present embodiment is implemented on the premise of the technical scheme of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following examples.
Example 1
The embodiment provides a high-integrity worker construction posture detection method based on computer vision, as shown in fig. 1, which comprises the following steps:
s1, acquiring image data of constructors to be tested in real time.
The image data comes from surveillance cameras on the construction site or from field footage captured by vehicle-mounted unmanned aerial vehicle cameras. The shooting pitch angle of the video should be within ±45°, and the person in the video should preferably be larger than 100 pixels.
S2, inputting the image data of the constructor to be detected into a pre-constructed skeleton classification network model, and extracting worker skeleton information.
This step first requires constructing the skeleton classification network model. Video data is input and then split into image data frame by frame. The video data comes from surveillance cameras on the construction site or from field footage captured by vehicle-mounted unmanned aerial vehicle cameras; the shooting pitch angle should be within ±45°, and the person in the video should preferably be larger than 100 pixels. To capture the different postures occurring during construction, one frame is extracted from the video every 5 s and added to the dataset. In addition, public human pose estimation datasets, including the Weizmann dataset, are collected; likewise, one frame is extracted every 5 s and added to the dataset.
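The 5-second sampling described above amounts to taking every (fps × 5)-th frame. A minimal helper, assuming only the frame rate and frame count of the video are known:

```python
def sample_frame_indices(total_frames: int, fps: float, interval_s: float = 5.0):
    """Indices of the frames extracted every `interval_s` seconds of video."""
    step = max(1, round(fps * interval_s))  # frames between two samples
    return list(range(0, total_frames, step))
```

For a 10 s clip at 30 fps this selects frames 0 and 150, i.e. one image every 5 s, as in the dataset construction above.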
The persons in the dataset are labeled with OpenPose, which outputs all joint points of each person; if a joint point is not detected, its index is set to -1. The human skeleton model is the 18-point model, whose key joints are the head, neck, right shoulder, right elbow, right hand, left shoulder, left elbow, left hand, right leg, right knee, right foot, left leg, left knee, left foot, right eye, left eye, right ear and left ear, as shown in fig. 2.
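The 18 joints listed above can be given fixed indices. The ordering below is an assumption that mirrors the common OpenPose COCO-18 keypoint order (head corresponding to the nose, leg to the hip, foot to the ankle); -1 is the sentinel for an undetected joint, as stated in the text.

```python
# 18-point skeleton in the order listed in the description; assumed to
# match the OpenPose COCO-18 keypoint order.
JOINTS = [
    "head", "neck",
    "right_shoulder", "right_elbow", "right_hand",
    "left_shoulder", "left_elbow", "left_hand",
    "right_leg", "right_knee", "right_foot",
    "left_leg", "left_knee", "left_foot",
    "right_eye", "left_eye", "right_ear", "left_ear",
]
JOINT_INDEX = {name: i for i, name in enumerate(JOINTS)}
MISSING = -1  # index value used when OpenPose detects no corresponding joint
```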
The skeleton recognition data are standardized and enhanced to fit the network input. Standardization proceeds as follows. In an image, the height of the human body is generally greater than its width, and since human posture is closely related to joint angles, normalization must be a conformal (angle-preserving) transformation; the coordinates are therefore scaled proportionally, with the body height l as reference. Denote the coordinates of all key joints by (x_i, y_i) (i = 0, 1, 2, ..., 17); the body center is defined from these coordinates, and the total body height is l = max|y_i - y_j| (i, j = 0, 1, 2, ..., 17). Each pair (x_i, y_i) is then changed in equal ratio by l.
To enhance the applicability of the model, the data are augmented by perturbation: a small random disturbance is applied to each joint position, which enriches the human pose data without changing the underlying posture. Given original joint coordinates (x_i, y_i) and a random disturbance (δ_x, δ_y), the augmented joint coordinates are (x_i + δ_x, y_i + δ_y).
The disturbance is limited to the range -0.07 ≤ δ_x, δ_y ≤ +0.07.
The skeleton is redrawn in the manner described above, with each edge in the same color as the OpenPose output and a transparency of 0.5, and saved as a 108×72 image. A lightweight CNN is trained on the skeleton images to complete the construction of the skeleton classification network model. The CNN architecture comprises 3 convolution-and-pooling layers, with 3×3 convolution kernels and ReLU activation functions; 2 fully connected layers, with ReLU activation; and 1 output layer, with a softmax activation function.
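A PyTorch sketch of this lightweight CNN follows. The 3 conv+pool stages (3×3 kernels, ReLU), 2 fully connected layers, and softmax output follow the text; the channel widths, hidden sizes, and the 6-class output dimension are assumptions.

```python
import torch
import torch.nn as nn

class SkeletonCNN(nn.Module):
    """Lightweight CNN for 108x72 skeleton images: 3 x (3x3 conv + ReLU +
    2x2 max-pool), 2 fully connected ReLU layers, softmax output."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 9 * 13, 128), nn.ReLU(),   # 72x108 -> 9x13 after 3 pools
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes), nn.Softmax(dim=1),
        )

    def forward(self, x):
        # x: (N, 3, 72, 108) RGB skeleton images (height x width)
        return self.classifier(self.features(x))
```

In practice the softmax would usually be folded into the loss (cross-entropy over logits); it is kept explicit here to match the architecture as described.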
Skeleton extraction is then performed on the person in each frame. The extraction uses the OpenPose method, which outputs the serial numbers and positions of all human joint points in the image; if a joint point is not detected, its index is -1. The coordinate data of the skeleton joints are then standardized in the same way as the dataset.
And S3, judging whether the worker skeleton information is complete, if so, executing the step S5 to classify the gestures, and if not, executing the step S4 to complete the worker skeleton information.
Judging whether the character skeleton node is complete, if so, directly classifying the gesture, and if not, carrying out complement operation.
For individuals with incomplete skeleton recognition, the completion operation is required to be performed first and then gesture classification is required to be performed.
S4, inputting incomplete worker skeleton information into a pre-trained skeleton completion network model, and outputting complete worker skeleton information.
This step first requires constructing the skeleton completion network model. First, the skeleton completion dataset is generated: all complete skeletons in the skeleton classification dataset are selected. The skeleton data use the normalized (x, y) coordinates of all joints, so for the 18-point model a human skeleton is represented by a 36-dimensional vector. All complete skeleton samples are combined into the dataset. Preferably, joints are randomly removed from the original complete skeletons to generate incomplete skeleton examples, with the missing joints filled with (0, 0). Next, the skeleton completion network is trained: a CGAN is constructed to generate, from an incomplete skeleton, a complete skeleton similar to the original. The generator's input is the concatenation of the incomplete skeleton and random noise, and its output is the generated new skeleton; the discriminator's input is the concatenation of the incomplete and complete skeleton vectors, and its output is a confidence that the skeleton information is real. Finally, skeleton completion: the morphologically similar skeleton generated by the CGAN is combined with the original incomplete skeleton to fill in the missing joints. If the index of a missing skeleton joint is s, then x_s = G(c, z)_s.
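Generating incomplete training examples by randomly removing joints from a complete 36-dimensional skeleton vector and filling them with (0, 0) can be sketched as:

```python
import numpy as np

def make_incomplete(skeleton_36: np.ndarray, n_drop: int, rng=None):
    """Randomly remove n_drop joints from a complete skeleton vector
    (18 x/y pairs flattened to 36 values); missing joints become (0, 0).
    Returns the incomplete vector and the indices of the dropped joints."""
    rng = np.random.default_rng() if rng is None else rng
    joints = skeleton_36.copy().reshape(18, 2)
    dropped = rng.choice(18, size=n_drop, replace=False)
    joints[dropped] = 0.0                     # fill missing joints with (0, 0)
    return joints.reshape(36), dropped
```

Each complete skeleton can yield many incomplete examples this way, which is what makes the CGAN training set cheap to build.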
The generator comprises 3 fully connected layers with ReLU activation and 1 output layer with tanh activation. The discriminator comprises 3 fully connected layers with ReLU activation and 1 output layer with tanh activation. Preferably, denote the complete skeleton vector by x_i, the incomplete skeleton vector by c_i, and the noise vector by z_i, with m samples used in each round of training. The discriminator loss then takes the standard conditional-GAN form L_D = -(1/m) Σ_{i=1}^{m} [log D(x_i, c_i) + log(1 - D(G(c_i, z_i), c_i))], and the generator loss is L_G = -(1/m) Σ_{i=1}^{m} log D(G(c_i, z_i), c_i).
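A PyTorch sketch of the CGAN components and the node-replacement step x_s = G(c, z)_s follows. The layer counts and activations follow the description above; the hidden widths and the noise dimension are assumptions.

```python
import torch
import torch.nn as nn

DIM = 36     # 18 joints x (x, y)
NOISE = 16   # assumed noise dimension

class Generator(nn.Module):
    """3 fully connected ReLU layers + tanh output.
    Input: incomplete skeleton c concatenated with noise z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DIM + NOISE, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, DIM), nn.Tanh(),
        )
    def forward(self, c, z):
        return self.net(torch.cat([c, z], dim=1))

class Discriminator(nn.Module):
    """3 fully connected ReLU layers + tanh output.
    Input: incomplete skeleton concatenated with a (real or generated)
    complete skeleton; output: realness confidence."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Tanh(),
        )
    def forward(self, c, x):
        return self.net(torch.cat([c, x], dim=1))

def complete_nodes(c, g_out, missing_pos):
    """x_s = G(c, z)_s: replace only the missing entries of c (flat vector
    positions 2s and 2s+1 for a missing joint s) with generator output."""
    out = c.clone()
    out[:, missing_pos] = g_out[:, missing_pos]
    return out
```

Only the missing joints are replaced, so the joints that OpenPose did detect are passed through unchanged.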
The traditional approach of recognizing skeleton joints with the OpenPose framework alone easily yields incomplete recognition, which degrades the subsequent posture recognition; this embodiment therefore uses the skeleton completion technique to assemble a complete skeleton.
For an incomplete skeleton, pre-classification is performed first. Skeleton joints are then completed with the pose completion method, post-classification is performed, and finally the pre- and post-classification results are fused, which significantly improves classification accuracy and stability.
Both the pre-classification and the post-completion classification use the lightweight CNN described above; the postures are divided into six classes (standing, sitting, running, bending, squatting and lifting), and a confidence is output for each class.
The skeleton completion method uses the generator model of the CGAN together with joint replacement: the normalized coordinates of the incomplete skeleton are input, the CGAN outputs a skeleton similar to the original incomplete skeleton, and the missing joints of the original skeleton are replaced by the corresponding joints of the generated skeleton.
The data fusion method uses confidence-weighted superposition. For the 6 action classes to be distinguished, the output of the posture classification network is the confidence vector p = (p_1, p_2, ..., p_6). Classification before and after completion yields results p^(1) and p^(2), and the fused decision is their linear combination p = λ p^(1) + (1 - λ) p^(2). Performing data fusion with the skeleton information from both before and after completion reduces the distortion of the original pose that completion may introduce. Preferably, λ is between 0.4 and 0.6.
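The confidence-weighted fusion can be sketched as below; λ = 0.5 is used as a default within the preferred 0.4 to 0.6 range.

```python
import numpy as np

POSES = ["standing", "sitting", "running", "bending", "squatting", "lifting"]

def fuse_confidences(p_pre, p_post, lam: float = 0.5):
    """Fused decision p = lam * p_pre + (1 - lam) * p_post over the six
    posture classes; returns the fused vector and the winning class index."""
    p = lam * np.asarray(p_pre, dtype=float) + (1.0 - lam) * np.asarray(p_post, dtype=float)
    return p, int(np.argmax(p))
```

For example, if pre-completion classification favors "standing" (0.6) but post-completion classification strongly favors "sitting" (0.8), the fused decision with λ = 0.5 is "sitting".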
S5, inputting the complete worker skeleton information into a pre-constructed gesture recognition network model, and outputting the construction gesture of the worker.
In this embodiment, the posture recognition network model is constructed with a CNN. The complete worker skeleton information is input into the model and the result is output visually: the completed skeleton is superimposed on the original image, the posture classification result of the person is rendered, and the image is output. Finally, the images are stitched frame by frame into an output video that forms the detection result, as shown in fig. 3.
By adopting the automatic and high-integrity monitoring method for the construction state of the workers on the construction site, the manpower requirements of supervision personnel such as safety officers and the like during construction are reduced.
Example 2
The embodiment provides a high-integrity workman construction gesture detecting system based on computer vision, includes:
an image acquisition module: the method is used for acquiring image data of constructors to be tested in real time;
and the skeleton information extraction module is used for: the method comprises the steps of inputting the image data of constructors into a pre-constructed skeleton classification network model, and extracting worker skeleton information;
and a judging module: the method is used for judging whether the worker skeleton information is complete;
and (3) a complement module: the method comprises the steps of inputting incomplete worker skeleton information into a pre-trained skeleton completion network model, and outputting complete worker skeleton information;
and an output module: the method is used for inputting complete worker skeleton information into a pre-constructed gesture recognition network model and outputting worker construction gestures.
The remainder were as in example 1.
The above functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The scheme in the embodiments of the invention can be realized in various computer languages, such as the object-oriented programming language Java and the scripting language JavaScript.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. The high-integrity worker construction posture detection method based on computer vision is characterized by comprising the following steps of:
s1, acquiring image data of constructors to be tested in real time;
s2, inputting the image data of the constructor to be detected into a pre-constructed skeleton classification network model, and extracting worker skeleton information;
s3, judging whether the worker skeleton information is complete, if so, executing the step S5 to classify the gestures, and if not, executing the step S4 to complete the worker skeleton information;
s4, inputting incomplete worker skeleton information into a pre-trained skeleton completion network model, and outputting complete worker skeleton information;
s5, inputting the complete worker skeleton information into a pre-constructed gesture recognition network model, and outputting the construction gesture of the worker.
2. The high-integrity worker construction posture detection method based on computer vision according to claim 1, wherein the construction process of the skeleton classification network model specifically comprises the following steps:
acquiring image datasets of different construction postures of workers;
labeling each node of workers in the image data set;
and inputting the marked image data set into the CNN network model for training to construct a skeleton classification network model.
3. The method for detecting the construction posture of the high-integrity worker based on the computer vision according to claim 2, wherein the marked image data set is subjected to standardization and enhancement processing before being input into a CNN network model.
4. The high-integrity worker construction posture detection method based on computer vision according to claim 2, wherein OpenPose is used to mark each joint point.
5. The method for detecting the construction posture of a high-integrity worker based on computer vision according to claim 2, wherein the joint points comprise a head, a neck, a right shoulder, a right elbow, a right hand, a left shoulder, a left elbow, a left hand, a right leg, a right knee, a right foot, a left leg, a left knee, a left foot, a right eye, a left eye, a right ear and a left ear.
6. The high-integrity worker construction posture detection method based on computer vision according to claim 1, wherein the skeleton completion network model is constructed by adopting a CGAN network model.
7. The high-integrity worker construction gesture detection method based on computer vision according to claim 1, wherein the gesture recognition network model is constructed by adopting a CNN network model.
8. The method for detecting the construction posture of a high-integrity worker based on computer vision according to claim 1, wherein the construction posture of the worker comprises standing, sitting, running, bending, squatting and lifting.
9. The method for detecting the construction posture of the high-integrity worker based on the computer vision according to claim 1, further comprising visually displaying the construction posture of the worker.
10. A high integrity worker construction pose detection system based on computer vision, comprising:
an image acquisition module: the method is used for acquiring image data of constructors to be tested in real time;
and the skeleton information extraction module is used for: the method comprises the steps of inputting the image data of constructors into a pre-constructed skeleton classification network model, and extracting worker skeleton information;
and a judging module: the method is used for judging whether the worker skeleton information is complete;
and (3) a complement module: the method comprises the steps of inputting incomplete worker skeleton information into a pre-trained skeleton completion network model, and outputting complete worker skeleton information;
and an output module: the method is used for inputting complete worker skeleton information into a pre-constructed gesture recognition network model and outputting worker construction gestures.
CN202311243544.6A 2023-09-25 2023-09-25 High-integrity worker construction attitude detection method and system based on computer vision Pending CN117152844A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311243544.6A CN117152844A (en) 2023-09-25 2023-09-25 High-integrity worker construction attitude detection method and system based on computer vision

Publications (1)

Publication Number Publication Date
CN117152844A true CN117152844A (en) 2023-12-01

Family

ID=88898839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311243544.6A Pending CN117152844A (en) 2023-09-25 2023-09-25 High-integrity worker construction attitude detection method and system based on computer vision

Country Status (1)

Country Link
CN (1) CN117152844A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117647788A * 2024-01-29 2024-03-05 北京清雷科技有限公司 Dangerous behavior identification method and device based on human body 3D point cloud
CN117647788B * 2024-01-29 2024-04-26 北京清雷科技有限公司 Dangerous behavior identification method and device based on human body 3D point cloud

Similar Documents

Publication Publication Date Title
CN110728209B (en) Gesture recognition method and device, electronic equipment and storage medium
CN110427867B (en) Facial expression recognition method and system based on residual attention mechanism
CN106570464A (en) Human face recognition method and device for quickly processing human face shading
CN111241989A (en) Image recognition method and device and electronic equipment
CN109472193A (en) Method for detecting human face and device
CN109711283 A joint double-dictionary and error-matrix-block expression recognition algorithm
CN107316029 A liveness verification method and device
CN110781882A (en) License plate positioning and identifying method based on YOLO model
CN105303163 A target detection method and detection device
CN112001859A (en) Method and system for repairing face image
CN110956158 Occluded pedestrian re-identification method based on a teacher-student learning framework
CN117152844A (en) High-integrity worker construction attitude detection method and system based on computer vision
WO2022227765A1 (en) Method for generating image inpainting model, and device, medium and program product
CN111898566B (en) Attitude estimation method, attitude estimation device, electronic equipment and storage medium
CN108961358 A method, apparatus and electronic device for obtaining sample pictures
CN112989947A (en) Method and device for estimating three-dimensional coordinates of human body key points
CN110458794A (en) Fittings quality detection method and device for track train
CN112906520A (en) Gesture coding-based action recognition method and device
CN116363748A (en) Power grid field operation integrated management and control method based on infrared-visible light image fusion
CN111178129B (en) Multi-mode personnel identification method based on human face and gesture
CN114118303B (en) Face key point detection method and device based on prior constraint
CN111310720A (en) Pedestrian re-identification method and system based on graph metric learning
CN113420289B (en) Hidden poisoning attack defense method and device for deep learning model
CN107369086 An identity card stamping system and method
CN108664906 A convolutional-network-based method for detecting content in fire scenarios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination