CN112790758A - Human motion measuring method and system based on computer vision and electronic equipment - Google Patents


Publication number
CN112790758A
CN112790758A
Authority
CN
China
Prior art keywords
human
motion
human body
current frame
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911112035.3A
Other languages
Chinese (zh)
Inventor
郑少杰
张晓璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sinovation Ventures Beijing Enterprise Management Co ltd
Original Assignee
Beijing Innovation Workshop Kuangshi International Artificial Intelligence Technology Research Institute Co ltd
Sinovation Ventures Beijing Enterprise Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Innovation Workshop Kuangshi International Artificial Intelligence Technology Research Institute Co ltd and Sinovation Ventures Beijing Enterprise Management Co ltd
Priority to CN201911112035.3A
Publication of CN112790758A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content

Abstract

The invention provides a human body motion measuring method, system and electronic device based on computer vision. The method obtains a human motion video stream by an optical human motion measurement method, extracts a current reference image and a current frame image from the video stream, and calculates motion information of image pixel points in the current frame image from the current reference image and the current frame image. It then obtains coordinate information of human skeleton key points in the current frame image and calculates motion measurement information of human joint motion at least from the motion information of the image pixel points and the coordinate information of the human skeleton key points. By combining semantic vision and motion vision, the method is more convenient to operate and the obtained motion measurement information is more robust.

Description

Human motion measuring method and system based on computer vision and electronic equipment
[ technical field ]
The invention relates to the field of human motion measurement, in particular to a human motion measurement method and system based on computer vision and electronic equipment.
[ background of the invention ]
Existing human body motion measurement methods fall roughly into two categories. The first is based on inertial motion sensors: sensor components usually have to be fixed to various parts of the human body, the weight of the sensors often seriously disturbs the motion being measured, the measured data are distorted, the calculation and evaluation are not accurate enough and carry large errors, and the preparation work is cumbersome, which is inconvenient for users. The second is optical human motion measurement: such methods currently require binocular or even multi-view cameras, and marker points must be attached to the human body for collecting and identifying key points, so these methods place high demands on both the measurement environment and the performance of the equipment.
[ summary of the invention ]
In order to overcome the problem of inconvenient operation brought by the existing human motion measuring method, the invention provides a human motion measuring method, a human motion measuring system and electronic equipment based on computer vision.
In order to solve the technical problems, the invention provides a technical scheme as follows: a human body movement measuring method based on computer vision, comprising the following steps: step S1: acquiring a human motion video stream by an optical human motion measurement method; step S2: extracting a current reference image and a current frame image from the human motion video stream, and calculating motion information of image pixel points in the current frame image according to the current reference image and the current frame image; step S3: acquiring coordinate information of human skeleton key points in a current frame image; and step S4: calculating to obtain the motion measurement information of the human body joint motion at least according to the motion information of the image pixel points in the current frame image and the coordinate information of the human body skeleton key points.
Preferably, step S2 specifically includes the following steps: step S21: respectively taking two frames of images sequentially acquired from a human motion video stream as a current reference image and a current frame image, and performing denoising pretreatment on the current reference image and the current frame image; and step S22: and calculating to obtain the motion information of image pixel points in the current frame image by using the change of the image in a time domain and the correlation between adjacent frames based on an optical flow method.
Preferably, step S3 specifically includes the following steps: step S31: inputting the current frame image into a human skeleton key point recognition pre-training model to obtain Feature maps of all joint points of a human body; and step S32: and acquiring pixel coordinate information of each skeleton key point of the human body based on the high-value region of the Feature Map.
Preferably, the computer vision-based human body movement measuring method further comprises the steps of: step Sa: determining a human body Mask region comprising at least two human body skeleton key points in a current frame image; and step Sb: and repeating the step S4 to obtain the motion measurement information of the human body joint motion corresponding to a plurality of pixel points in the human body trunk Mask area, and counting according to the motion measurement information of the human body joint motion corresponding to the plurality of pixel points to obtain the final motion measurement information of the human body joint motion.
Preferably, the at least two human skeletal key points are defined as C and D, and the step Sa comprises: step Sa1: setting a threshold value; step Sa2: selecting a pixel point E from the current frame image, calculating S = |vector CD × vector CE|, comparing S with the threshold, and determining whether the pixel point E belongs to the Mask region according to the comparison result; and step Sa3: repeating step Sa2 to obtain the human body Mask region in the current frame image.
Preferably, the human body joint corresponds to two human body skeleton key points, and the motion measurement information of the human body joint motion is set as the angular velocity ω of the human body joint motion, with ω = va / (b - a) = VA / (B - A), where va is the motion information of the image pixel point corresponding to the human skeleton key point in the current frame image obtained in the step S2, b - a is the pixel distance between the images corresponding to the two human skeleton key points in the current frame image, VA is the real linear velocity of motion of the human skeleton joint point, and B - A is the length information of the human skeleton corresponding to the human body joint.
Preferably, the motion measurement information of the human body joint motion in the above step S4 may be set as any motion index of linear velocity, linear acceleration and angle variation of the human body joint motion.
The invention also provides a human motion measuring system based on computer vision, comprising: the image acquisition unit is used for acquiring a human motion video stream based on an optical human motion measurement method; the motion information acquisition unit is used for extracting a current reference image and a current frame image from the human motion video stream and calculating motion information of image pixel points in the current frame image according to the current reference image and the current frame image; the key point information acquisition unit is used for acquiring the coordinate information of the key points of the human skeleton in the current frame image; and the data processing unit is used for calculating and obtaining the motion measurement information of the human joint motion at least according to the motion information of the image pixel points in the current frame image and the coordinate information of the human skeleton key points.
Preferably, the key point information obtaining unit further includes: the human body skeleton key point identification pre-training model is used for extracting Feature maps of all joint points of a human body to obtain coordinate information of all skeleton key points of the human body; and the human body trunk Mask area acquisition unit is used for determining the human body trunk Mask area according to the acquired human body skeleton key point coordinate information and a set threshold value.
The invention also provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program which is set to execute the human body movement measuring method based on computer vision in any one of the above items when running; the processor is arranged to perform the computer vision based human movement measurement method of any of the above by the computer program.
Compared with the prior art, the human body movement measuring method, the human body movement measuring system and the electronic equipment based on computer vision provided by the invention have the following advantages:
1. the method comprises the steps of obtaining a human motion video stream through an optical-based human motion measuring method, extracting adjacent current reference images and current frame images from the human motion video stream, calculating motion information of image pixel points in the current frame images according to pixel information extracted from two adjacent frame images, and obtaining coordinate information of human skeleton key points from the current frame images, so that motion measuring information of human joint motion is calculated and obtained according to the motion information of the image pixel points in the current frame images and the coordinate information of the human skeleton key points, and semantic vision and motion vision are combined, so that the operation of the human motion measuring method is more convenient, and the obtained motion measuring information is more robust.
2. The method comprises the steps of respectively taking two frames of images sequentially acquired from a human motion video stream as a current reference image and a current frame image, performing denoising pretreatment on the two frames of images, calculating and obtaining motion information of image pixels in the current frame image by utilizing the change of the images in a time domain and the correlation between adjacent frames based on an optical flow method, improving the robustness of the pixel motion information on noise and illumination change, and dynamically analyzing the current frame image to obtain the motion information with the robustness.
3. The method comprises the steps of inputting a current frame image into a human skeleton key point recognition pre-training model based on a deep learning model to obtain Feature maps of all joint points of a human body, obtaining pixel coordinate information of all skeleton key points of the human body based on a high-value region of the Feature maps, and providing accurate data support for subsequent calculation by taking the human joint points as all skeleton key points of the human body.
4. The method comprises the steps of obtaining movement measurement information of human body joint movement corresponding to a plurality of pixel points in a human body trunk Mask region by determining the human body trunk Mask region comprising at least two human body skeleton key points, and obtaining final movement measurement information of the human body joint movement according to the movement measurement information of the human body joint movement corresponding to the pixel points.
5. The method comprises the steps of judging whether a selected pixel point belongs to a Mask region or not by setting a threshold value set according to an empirical value so as to obtain a human body trunk Mask region in a current frame image, combining key point information obtained from the human body trunk Mask region in the current frame image with motion information of the pixel point, obtaining motion measurement information of a plurality of human body joint motions through calculation, and counting the motion measurement information of the plurality of human body joint motions so as to obtain final motion measurement information of the human body joint motions and obtain a measurement index with robustness.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart.
The computer program, when executed by a processor, performs the above-described functions defined in the method of the present application. It should be noted that the computer memory described herein may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer memory may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
More specific examples of computer memory may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an image acquisition unit, a motion information acquisition unit, a key point information acquisition unit, and a data processing unit. The names of the units do not form a limitation to the unit itself in some cases, and for example, the image acquisition unit may also be described as a "unit for acquiring a human motion video stream to be measured and calculated based on a monocular camera".
As another aspect, the present application also provides a computer memory, which may be included in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer memory carries one or more programs that, when executed by the apparatus, cause the apparatus to: the method comprises the steps of obtaining a human body motion video stream based on an optical human body motion measuring method, extracting a current reference image and a current frame image from the human body motion video stream, calculating motion information of image pixel points in the current frame image according to the current reference image and the current frame image, obtaining human body skeleton key point coordinate information in the current frame image, and calculating motion measuring information of human body joint motion at least according to the motion information of the image pixel points in the current frame image and the human body skeleton key point coordinate information.
[ description of the drawings ]
Fig. 1 is an overall flowchart of a human motion measurement method based on computer vision according to a first embodiment of the present invention.
Fig. 2 is a schematic diagram of an application scenario of step S4 in a human motion measurement method based on computer vision according to a first embodiment of the present invention.
Fig. 3 is a detailed flowchart of step S2 in a method for measuring human body movement based on computer vision according to a first embodiment of the present invention.
Fig. 4 is a detailed flowchart of step S3 in a method for measuring human body movement based on computer vision according to a first embodiment of the present invention.
Fig. 5 is a schematic diagram of the human skeleton key points obtained in step S31 in the method for measuring human motion based on computer vision according to the first embodiment of the present invention.
Fig. 6 is a detailed flowchart of another embodiment of a human motion measurement method based on computer vision according to the present invention.
Fig. 7 is a detailed flowchart of step Sa in fig. 6.
Fig. 8 is a schematic diagram of the obtained human torso Mask region in step Sa3 of the method for measuring human body movement based on computer vision according to the first embodiment of the present invention.
Fig. 9 is a block diagram of a human body movement measuring system based on computer vision according to a second embodiment of the present invention.
Fig. 10 is a block diagram of a key point information obtaining unit in a human motion measurement system based on computer vision according to a second embodiment of the present invention.
Fig. 11 is a block diagram of an electronic device according to a third embodiment of the invention.
Description of reference numerals:
0. a nose; 1. a neck portion; 2. a right shoulder; 3. the right elbow; 4. a right wrist; 5. a left shoulder; 6. the left elbow; 7. a left wrist; 8. the right hip; 9. the right knee; 10. a right ankle; 11. the left hip; 12. the left knee; 13. a left ankle; 14. a right eye; 15. a left eye; 16. a right ear; 17. a left ear;
20. a memory; 30. a processor;
100. an image acquisition unit; 200. a motion information acquisition unit; 300. a key point information acquisition unit; 301. identifying a pre-training model for key points of human bones; 302. a human body trunk Mask area acquisition unit; 400. a data processing unit.
[ detailed description of embodiments ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, a first embodiment of the present invention provides a method for measuring human body movement based on computer vision, which includes the following steps:
step S1: the human motion video stream is obtained by an optical-based human motion measurement method.
It can be understood that the human motion video stream obtained by the optical-based human motion measurement method is not limited to be obtained by a monocular camera, and can also be obtained by any existing device capable of obtaining the human motion video stream.
Step S2: and extracting a current reference image and a current frame image from the human motion video stream, and calculating motion information of image pixel points in the current frame image according to the current reference image and the current frame image.
It can be understood that the images used for calculation are extracted from the human motion video stream, image resolution information is obtained from the adjacent frames that are read, and the earlier of the two images is taken as the current reference image while the later one is taken as the current frame image. Alternatively, the earlier image may be taken as the current reference image and the second or the N-th frame after it may be taken as the current frame image, where N is a positive integer greater than 2.
As an embodiment, the motion information of the image pixel points in the current frame image may be obtained based on an optical flow method, and the motion information of the image pixel points in the current frame image, including the direction and the size, is calculated by the optical flow method, so as to provide data support for subsequent calculation, where the motion information of the image pixel points includes a motion speed.
Step S3: and acquiring the coordinate information of the key points of the human skeleton in the current frame image.
It can be understood that a convolutional neural network based on a deep learning model can be fitted to obtain semantic information, so that data related to human joints, such as human skeleton information and human skeleton key point coordinate information, can be extracted by analyzing and recognizing the current frame image. In the invention, the coordinate information of the human skeleton key points is obtained at least through a convolutional neural network based on a deep learning model; this convolutional neural network is specifically a human skeleton key point recognition pre-training model.
The human skeleton information comprises the length of each joint of the human body and the distribution range of the muscles around the skeleton. Processing this information comprehensively avoids excessive errors and lays a data foundation for the computer to improve the accuracy of the calculation process.
Step S4: and calculating to obtain the motion measurement information of the human body joint motion at least according to the motion information of the image pixel points in the current frame image and the coordinate information of the human body skeleton key points.
It can be understood that based on semantic vision and motion vision, the obtained motion information and the coordinate information of key points of human bones are calculated to obtain the motion measurement information of human joint motion. The motion measurement information of the human body joint motion comprises motion angular velocity.
As an embodiment, the obtained motion information and the coordinate information of the key points of the human skeleton can be compared with each other based on semantic vision and motion vision, and the motion measurement information of the human joint motion which the user wants to obtain is calculated by combining the human skeleton information, so that a motion index which is more robust than that of the existing human motion measurement method is obtained, and more accurate data information is provided for judging the motion health state of the human body. The motion measurement information of the human body joint motion can be set as any motion index of linear velocity, linear acceleration and angle variation of the human body joint motion.
It is understood that the step S3 may be performed simultaneously with the step S2, or prior to or subsequent to the step S2.
As an implementation manner of step S4, specifically, referring to fig. 2, in this embodiment the angular velocity of the human joint motion is calculated from the motion information of the image pixel points in the current frame image and the coordinate information of the human skeleton key points extracted in the above steps. The description is given taking as an example that the motion information of the image pixel points in the current frame image is obtained by optical flow calculation. Assume the motion scene shown in fig. 2, where fig. 2(a) is the camera imaging plane, fig. 2(b) is the camera pinhole imaging model, and fig. 2(c) is the subject to be measured. Assume that the points A and B are two skeletal key points of the subject to be measured at a certain moment, and that the point A moves to the point A' by the next moment (one unit of time). The projections of the points A, B and A' onto the camera imaging plane through the camera pinhole imaging model are a, b and a' respectively, and the relationship among them is as follows:
a=M*(R*A+T) (1)
b=M*(R*B+T) (2)
a'=M*(R*A'+T) (3)
wherein M is an intrinsic parameter matrix of the camera, R is a rotation matrix between the real world coordinate system and the camera coordinate system, and T is a translation matrix between the real world coordinate system and the camera coordinate system. And converting the real three-dimensional world coordinate point into a camera coordinate system through the rotation matrix R and the translation matrix T, and obtaining the corresponding relation between the real three-dimensional world coordinate point and the pixel coordinate imaged by the camera through the conversion of the parameter matrix M in the camera.
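As a minimal numerical sketch of the projection relations (1)-(3), the following Python snippet projects assumed world points A, B and A' through a made-up intrinsic matrix M, rotation R and translation T; all values are illustrative and are not taken from the patent:

```python
# Illustrative sketch only: numerical check of the projection relations (1)-(3),
# with assumed camera parameters (not specified in the patent).
import numpy as np

M = np.array([[800.0, 0.0, 320.0],   # assumed camera intrinsic matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # assumed rotation between world and camera frames
T = np.array([0.0, 0.0, 2.0])        # assumed translation (camera 2 m from the subject)

def project(P):
    """Project a 3-D world point P onto the image plane: p = M * (R * P + T)."""
    p = M @ (R @ P + T)
    return p[:2] / p[2]              # homogeneous -> pixel coordinates

A = np.array([0.0, 0.5, 0.0])        # skeletal key point A at time t
B = np.array([0.0, 0.0, 0.0])        # skeletal key point B (other end of the limb)
A_next = np.array([0.1, 0.5, 0.0])   # A': position of A one unit of time later

a, b, a_next = project(A), project(B), project(A_next)
print("optical-flow displacement a' - a:", a_next - a)
print("limb length in pixels     b - a :", b - a)
```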
Subtracting the point a from the point a' yields the pixel displacement between the two time instants, i.e. the optical flow motion field:
va = a' - a = M*(R*A'+T) - M*(R*A+T) = M*R*(A'-A)   (4)
Subtracting the point a from the point b yields the pixel distance between the human skeleton key points in the image, i.e. the length of the trunk in the imaging plane:
b - a = M*(R*B+T) - M*(R*A+T) = M*R*(B-A)   (5)
Then, dividing equation (4) by equation (5) yields:
va / (b - a) = M*R*(A' - A) / (M*R*(B - A))   (6)
A' - A is the real motion velocity VA of the point A of the subject to be measured in the real three-dimensional world within the unit time, namely:
va / (b - a) = M*R*VA / (M*R*(B - A))   (7)
where M is the camera intrinsic parameter matrix; it is not invertible, so the numerator and the denominator cannot simply be cancelled. The camera intrinsic parameter matrix M is the transformation matrix between three-dimensional coordinates in the camera coordinate system and two-dimensional coordinates in the camera imaging coordinate system. Therefore, only motion parallel to the camera imaging plane can be measured accurately, that is, motion with depth change is not considered; if the depth of the target to be measured is approximately unchanged, formula (7) can be simplified as:
va / (b - a) ≈ VA / (B - A) = ω   (8)
In formula (8), va is the motion information of the optical flow field, i.e. the motion information of the image pixel point corresponding to the human skeleton key point in the current frame image, which can be obtained by the calculation of step S2; b - a is the pixel distance between the human skeleton key points in the image, i.e. the pixel distance between the images corresponding to the two human skeleton key points in the current frame image, which can be calculated from the human skeleton key point information obtained in step S3; VA is the real linear velocity of the human body joint point to be measured; B - A is the real trunk length of the human body to be measured, i.e. the length information of the human skeleton corresponding to the human body joint; and ω is the angular velocity of the human joint motion to be measured.
It can be understood that the monocular visual measurement of the angular velocity of the human body joint motion can be realized by combining the optical flow field motion information obtained by calculation from a single camera and the human body bone key point coordinate information obtained by using the deep learning model, namely the motion measurement information of the human body joint motion is obtained.
Optionally, the motion angular acceleration and the angle variation can be obtained by performing operations such as integration and differentiation on the human joint motion angular velocity, and richer motion indexes such as the linear velocity and the linear acceleration of the human joint motion can be further obtained by combining the real trunk length of the human body, namely richer motion measurement information of the human joint motion can be obtained by further combining the human skeleton information.
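A minimal sketch of this calculation under equation (8), assuming the optical-flow displacement at the pixel of key point A and the pixel coordinates of the two key points are already available; all names and values below are illustrative:

```python
# Minimal sketch of equation (8): omega ≈ |v_a| / |b - a|, valid when the motion is
# approximately parallel to the imaging plane. Variable names are hypothetical.
import numpy as np

def joint_angular_velocity(flow_at_a, pixel_a, pixel_b, fps):
    """Angular velocity (rad/s) of the joint defined by key points A and B.

    flow_at_a : optical-flow displacement (pixels per frame) at key point A
    pixel_a, pixel_b : pixel coordinates of the two skeletal key points
    fps : frame rate, converting per-frame displacement into per-second velocity
    """
    v_a = np.linalg.norm(flow_at_a) * fps                                 # pixel speed of A
    limb_len = np.linalg.norm(np.asarray(pixel_b) - np.asarray(pixel_a))  # |b - a|
    return v_a / limb_len                                                 # equation (8)

# Example with made-up numbers: 3-pixel flow per frame at 30 fps, 200-pixel forearm.
omega = joint_angular_velocity([3.0, 0.0], (320, 440), (320, 240), fps=30)
angle_change = omega * (1.0 / 30)   # integrating over one frame interval gives the angle change
print(omega, angle_change)
```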
Referring to fig. 3, step S2: and extracting a current reference image and a current frame image from the human motion video stream, and calculating motion information of image pixel points in the current frame image according to the current reference image and the current frame image. The step S2 specifically includes steps S21 to S22:
step S21: respectively taking two frames of images sequentially acquired from a human motion video stream as a current reference image and a current frame image, and performing denoising pretreatment on the current reference image and the current frame image; and
step S22: and calculating to obtain the motion information of image pixel points in the current frame image by using the change of the image in a time domain and the correlation between adjacent frames based on an optical flow method.
It can be understood that, in step S21, at least two frames of images are extracted from the human motion video stream, the first frame of image that is read is taken as the current reference image, the subsequently read image is taken as the current frame image, and one or more kinds of preprocessing, such as Gaussian smoothing, noise filtering and normalization, are performed on the current frame image, so as to improve the robustness of the pixel motion information with respect to noise and illumination changes.
It can be understood that, in step S22, the current reference image and the current frame image are used as input of optical flow calculation, so as to obtain an optical flow field with the same resolution, where the optical flow field is a two-dimensional vector field and respectively represents pixel displacement in the horizontal direction and the vertical direction, so that abundant pixel-level motion information, i.e., motion direction and motion magnitude, can be obtained, and further motion speed can be obtained. After the motion information of the current frame image is obtained through calculation, the current frame image is assigned as a current reference image so as to carry out optical flow calculation for the next time.
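A sketch of steps S21-S22 assuming OpenCV is used; the patent does not prescribe a particular optical flow algorithm, so dense Farneback flow and Gaussian smoothing are chosen here purely for illustration, and the file name is hypothetical:

```python
# Sketch of steps S21-S22 (assumptions: OpenCV, Farneback dense optical flow).
import cv2

cap = cv2.VideoCapture("human_motion.mp4")   # hypothetical monocular video stream

ok, frame = cap.read()
ref_gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)  # denoised reference

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cur_gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)

    # Dense optical flow: a 2-D vector (horizontal, vertical displacement) per pixel,
    # from which the motion direction and magnitude of every pixel can be read.
    flow = cv2.calcOpticalFlowFarneback(ref_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    ref_gray = cur_gray   # the current frame becomes the reference for the next iteration
```

The displacement at the pixel of a skeletal key point, e.g. `flow[y, x]`, is what feeds the angular-velocity calculation sketched earlier.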
It can be understood that after acquiring a plurality of frames of images from a human motion video stream and obtaining motion information of a current frame of image in an adjacent frame of image, taking the current frame of image as a next current reference image, then taking another frame of image acquired from the human motion video stream as a next current frame of image, and sequentially taking the next current frame of image as input of optical flow calculation, and performing dynamic analysis on the current frame of image to obtain motion information with robustness.
It is understood that steps S21-S22 are only one embodiment of this example, and the embodiment is not limited to steps S21-S22.
Referring to fig. 4, step S3: and acquiring the coordinate information of the key points of the human skeleton in the current frame image. The step S3 specifically includes steps S31 to S32:
step S31: inputting the current frame image into a human skeleton key point recognition pre-training model to obtain Feature maps of all joint points of a human body; and
step S32: and acquiring pixel coordinate information of each skeleton key point of the human body based on the high-value region of the Feature Map.
It can be understood that, in step S31, the current frame image is fed into the human skeleton key point recognition pre-training model, and forward inference operation is performed, so as to finally obtain Feature maps of each joint point of the human body, and according to the high-value region of each Feature Map, the pixel coordinates of each skeleton key point of the human body can be obtained.
It can be understood that the human skeleton key point identification pre-training model takes a deep learning model as a framework, so as to obtain the pixel coordinates of each human skeleton key point.
It can be understood that, in step S32, by obtaining Feature maps of each joint point of the human body, a high-value region of each Feature Map can be detected, so as to identify the position of each bone key point of the human body, and obtain coordinate information of a corresponding pixel position.
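A minimal sketch of step S32, assuming the pre-training model outputs one Feature Map (heat map) per joint point; the coordinate of each key point is taken at the highest-value position of its map, and the score threshold below is illustrative:

```python
# Sketch of step S32: turn each joint's Feature Map into a pixel coordinate by locating
# its high-value region (here simply the maximum). The model itself is assumed given.
import numpy as np

def keypoints_from_heatmaps(heatmaps, min_score=0.1):
    """heatmaps: array of shape (18, H, W), one map per skeletal key point."""
    keypoints = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)   # peak of the high-value region
        score = hm[y, x]
        keypoints.append((x, y) if score >= min_score else None)  # None = joint not detected
    return keypoints
```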
Referring to fig. 5, in the present embodiment, 18 joint points of the human body are used as the human skeleton key points, which respectively represent: 0: nose, 1: neck, 2: right shoulder, 3: right elbow, 4: right wrist, 5: left shoulder, 6: left elbow, 7: left wrist, 8: right hip, 9: right knee, 10: right ankle, 11: left hip, 12: left knee, 13: left ankle, 14: right eye, 15: left eye, 16: right ear, 17: left ear. It can be understood that these 18 human body joint points are used as the recognition standard and input into the human skeleton key point recognition pre-training model, so that the coordinate information of the human skeleton key points in the current frame image is obtained, and the coordinate information of the human skeleton key points corresponds to the coordinate information of the human body joint points.
It is understood that steps S31-S32 are only one embodiment of this example, and the embodiment is not limited to steps S31-S32.
Referring to fig. 6, in a further embodiment of the method for measuring human body movement based on computer vision according to the first embodiment of the present invention, the method further includes the following steps:
step Sa: determining a human body Mask region comprising at least two human body skeleton key points in a current frame image; and
and Sb: and repeating the step S4 to obtain the motion measurement information of the human body joint motion corresponding to a plurality of pixel points in the human body trunk Mask area, and counting according to the motion measurement information of the human body joint motion corresponding to the plurality of pixel points to obtain the final motion measurement information of the human body joint motion.
It can be understood that, in step Sa, a more accurate and comprehensive human body Mask region can be further obtained based on the human skeleton key point information in the current frame image.
It can be understood that, in step Sb, the coordinate information of the pixel points in the human trunk Mask region is calculated as in step S4, so as to obtain the motion measurement information of the human body joint motion corresponding to the plurality of pixel points in the human trunk Mask region, and the error of the final motion measurement information of the human body joint motion is reduced by counting the plurality of motion measurement information, so as to obtain the human body motion state information with higher robustness.
It can be understood that, in step Sb, the motion measurement information of the human joint motion obtained by calculation may be traversed through all the pixel points in the Mask region. And motion measurement information of human joint motion obtained by partial pixel points in the Mask region can be selected.
It is understood that the statistics of the plurality of motion measurement information includes, but is not limited to, averaging to obtain the final motion measurement information of the human joint motion; or giving different weights to the motion measurement information of the human body joint motion corresponding to different pixel points, and then calculating to obtain the final motion measurement information of the human body joint motion. The statistical approach is not limited.
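A minimal sketch of the statistics in step Sb, assuming the angular velocities for the pixel points in the Mask region have already been computed; both a plain mean and a weighted mean are shown, since the statistical approach is left open by the patent:

```python
# Sketch of step Sb: aggregate the per-pixel angular-velocity estimates from the Mask region.
import numpy as np

def aggregate_joint_measurements(omega_per_pixel, weights=None):
    """omega_per_pixel: angular velocities computed for pixels in the Mask region."""
    omega = np.asarray(omega_per_pixel, dtype=float)
    if weights is None:
        return float(omega.mean())                    # simple average
    w = np.asarray(weights, dtype=float)
    return float((omega * w).sum() / w.sum())         # weighted average
```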
It is to be understood that steps Sa to Sb are only one embodiment of this example, and the embodiment is not limited to steps Sa to Sb.
Referring to fig. 7, step Sa: determining a human body Mask region comprising at least two human skeleton key points in the current frame image. The step Sa specifically includes steps Sa1 to Sa3:
step Sa1: setting a threshold value;
step Sa2: selecting a pixel point E from the current frame image, calculating S = |vector CD × vector CE|, comparing S with the threshold, and determining whether the pixel point E belongs to the Mask region according to the comparison result; and
step Sa3: repeating step Sa2 to obtain the human body Mask region in the current frame image.
Specifically, as shown in fig. 8, in the present embodiment the recognition of the forearm joint of the human body is taken as an example. Based on the human skeleton key points extracted in step S3, the at least two human skeleton key points are defined as C and D. From the coordinate information of the two points C and D, a rectangular region for the preliminary search in image recognition is determined, as shown by the dot-dash region in fig. 8. Suppose there is a pixel point E in the current frame image, as shown in fig. 8. Connecting CD and CE, and letting the angle between vector CD and vector CE be θ, the modulus of the cross product of vector CD and vector CE is:
S = |vector CD × vector CE| = |CD| * |CE| * sin(θ)
It can be understood that the geometric meaning of the vector cross product is that the modulus of the cross product of two vectors equals the area S of the parallelogram whose side lengths are the two vectors. Since the pixel distance defined by CD is a fixed value, the cross product of vector CD and vector CE is determined by the straight-line distance from the pixel point E to the line segment CD. Therefore, the extent to which the line segment CD is expanded can be controlled by controlling the result of the cross product of the two vectors; that is, within the rectangular region whose vertices are the points C and D, the distance from a pixel point E in the current frame image to the diagonal CD is controlled, and a more accurate human torso Mask region can be obtained, as shown by the solid-line region in fig. 8.
It can be understood that the pixel point E can be selected by setting an empirical value, that is, according to an empirical value of a muscle distribution range in which a human skeleton is located, where the empirical value is a threshold value of the selection range of the pixel point E.
In this embodiment, when the value of S is greater than the threshold, the selected pixel point E is located in a region outside the human body trunk, that is, outside the solid-line region in fig. 8. When the value of S is smaller than the threshold, the selected pixel point E is located within the human body trunk, that is, inside the solid-line region in fig. 8. Only when the value of S is equal to the threshold is the selected pixel point E located exactly in the human body trunk Mask region; using the pixel point E as a calculation condition reduces the calculation error in step S4 and yields more accurate motion measurement information. Whether a pixel point E belongs to the Mask region is thus judged by means of the set threshold, so that the human body Mask region in the current frame image is obtained.
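A minimal sketch of steps Sa1-Sa3 under these assumptions: pixels whose S value does not exceed the threshold and that lie inside the rectangular search region are treated as belonging to the torso Mask region (one possible reading of the comparison described above); the threshold value below is purely illustrative:

```python
# Sketch of steps Sa1-Sa3: build the torso Mask region from the cross-product criterion
# S = |CD x CE|, with an empirically chosen (here made-up) threshold.
import numpy as np

def torso_mask(C, D, image_shape, threshold=2000.0):
    C, D = np.asarray(C, dtype=float), np.asarray(D, dtype=float)
    h, w = image_shape
    ys, xs = np.mgrid[0:h, 0:w]                       # every candidate pixel E
    CD = D - C
    CEx, CEy = xs - C[0], ys - C[1]
    S = np.abs(CD[0] * CEy - CD[1] * CEx)             # |CD x CE| for each pixel E

    # restrict to the rectangular search region whose diagonal is CD (dot-dash box in fig. 8)
    in_box = (xs >= min(C[0], D[0])) & (xs <= max(C[0], D[0])) & \
             (ys >= min(C[1], D[1])) & (ys <= max(C[1], D[1]))
    return in_box & (S <= threshold)                  # True = pixel belongs to the Mask region

mask = torso_mask(C=(100, 120), D=(180, 300), image_shape=(480, 640))
```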
It can be understood that the key point information acquired in the human body trunk Mask region in the current frame image is combined with the motion information of the pixel points, the motion measurement information of a plurality of human body joint motions is acquired through calculation, and then the motion measurement information of the plurality of human body joint motions is counted, so that the final motion measurement information of the human body joint motions is acquired, and the measurement index with higher robustness is acquired.
It is understood that steps Sa1 to Sa3 are only one embodiment of this example, and embodiments thereof are not limited to steps Sa1 to Sa 3.
Referring to fig. 9, a human motion measurement system based on computer vision is further provided in the second embodiment of the present invention. The computer vision-based human motion measurement system may include:
an image acquisition unit 100 for acquiring a human motion video stream based on an optical human motion measurement method;
a motion information obtaining unit 200, configured to extract a current reference image and a current frame image from the human motion video stream, and calculate motion information of image pixels in the current frame image according to the current reference image and the current frame image;
a key point information obtaining unit 300, configured to obtain coordinate information of a key point of a human skeleton in the current frame image; and
and the data processing unit 400 is configured to calculate and obtain motion measurement information of the human joint motion at least according to the motion information of the image pixel points in the current frame image and the coordinate information of the key points of the human skeleton.
Referring to fig. 10, the keypoint information obtaining unit 300 further includes:
the human body skeleton key point identification pre-training model 301 is used for extracting Feature maps of all joint points of a human body to obtain coordinate information of all skeleton key points of the human body; and
a human body trunk Mask region obtaining unit 302, configured to determine a human body trunk Mask region according to the obtained coordinate information of the key points of the human skeleton and a set threshold.
It can be understood that, the human body movement measuring system based on computer vision according to the second embodiment of the present invention is used for executing the human body movement measuring method based on computer vision according to the first embodiment of the present invention, and the human body movement measuring system based on computer vision includes modules corresponding to executing one or more steps in the human body movement measuring method based on computer vision.
Referring to fig. 11, a third embodiment of the present invention provides an electronic device for implementing the above-mentioned method for measuring human body movement based on computer vision, the electronic device includes a memory 20 and a processor 30, the memory 20 stores a computer program, and the computer program is configured to execute the steps in any of the above-mentioned embodiments of the method for measuring human body movement based on computer vision when running. The processor 30 is arranged to perform the steps of any of the above embodiments of the computer vision based body movement measurement method by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one of a plurality of network devices of a computer network.
The electronic device is particularly suitable as a computer-vision-based human motion measurement device. The device can obtain the motion information of image pixel points and the coordinate information of human skeleton key points based on an optical flow method and a deep learning model, and it inputs the obtained motion information of the image pixel points and the coordinate information of the human skeleton key points to the data processing unit to obtain the motion measurement information of human joint motion. By combining semantic vision and motion vision, the influence of a monocular camera being unable to zoom quickly, which makes the resolution of the acquired image pixels differ, is reduced, so that the operation of the human motion measurement method is more convenient and the obtained motion measurement information is more robust.
Compared with the prior art, the human body movement measuring method, the human body movement measuring system and the electronic equipment based on computer vision provided by the invention have the following advantages:
1. the method comprises the steps of obtaining a human motion video stream through an optical-based human motion measuring method, extracting adjacent current reference images and current frame images from the human motion video stream, calculating motion information of image pixel points in the current frame images according to pixel information extracted from two adjacent frame images, and obtaining coordinate information of human skeleton key points from the current frame images, so that motion measuring information of human joint motion is calculated and obtained according to the motion information of the image pixel points in the current frame images and the coordinate information of the human skeleton key points, and semantic vision and motion vision are combined, so that the operation of the human motion measuring method is more convenient, and the obtained motion measuring information is more robust.
2. The method comprises the steps of respectively taking two frames of images sequentially acquired from a human motion video stream as a current reference image and a current frame image, performing denoising pretreatment on the two frames of images, calculating and obtaining motion information of image pixels in the current frame image by utilizing the change of the images in a time domain and the correlation between adjacent frames based on an optical flow method, improving the robustness of the pixel motion information on noise and illumination change, and dynamically analyzing the current frame image to obtain the motion information with the robustness.
3. The method comprises the steps of inputting a current frame image into a human skeleton key point recognition pre-training model based on a deep learning model to obtain Feature maps of all joint points of a human body, obtaining pixel coordinate information of all skeleton key points of the human body based on a high-value region of the Feature maps, and providing accurate data support for subsequent calculation by taking the human joint points as all skeleton key points of the human body.
4. The method comprises the steps of obtaining movement measurement information of human body joint movement corresponding to a plurality of pixel points in a human body trunk Mask region by determining the human body trunk Mask region comprising at least two human body skeleton key points, and obtaining final movement measurement information of the human body joint movement according to the movement measurement information of the human body joint movement corresponding to the pixel points.
5. The method comprises the steps of judging whether a selected pixel point belongs to a Mask region or not by setting a threshold value set according to an empirical value so as to obtain a human body trunk Mask region in a current frame image, combining key point information obtained from the human body trunk Mask region in the current frame image with motion information of the pixel point, obtaining motion measurement information of a plurality of human body joint motions through calculation, and counting the motion measurement information of the plurality of human body joint motions so as to obtain final motion measurement information of the human body joint motions and obtain a measurement index with robustness.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart.
Which when executed by a processor performs the above-described functions defined in the method of the present application. It should be noted that the computer memory described herein may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer memory may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
More specific examples of computer memory may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable signal medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C + + or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described as: a processor comprising an image acquisition unit, a motion information acquisition unit, a key point information acquisition unit, and a data processing unit. The names of these units do not, in some cases, constitute a limitation of the units themselves; for example, the image acquisition unit may also be described as a "unit for acquiring a human motion video stream to be measured based on a monocular camera".
As another aspect, the present application also provides a computer memory, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer memory carries one or more programs that, when executed by the apparatus, cause the apparatus to: acquire a human body motion video stream by an optical human motion measurement method; extract a current reference image and a current frame image from the human motion video stream; calculate motion information of image pixel points in the current frame image according to the current reference image and the current frame image; acquire human body skeleton key point coordinate information in the current frame image; and calculate motion measurement information of the human body joint motion at least according to the motion information of the image pixel points in the current frame image and the human body skeleton key point coordinate information.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent alterations and improvements made within the spirit of the present invention should be included in the scope of the present invention.

Claims (10)

1. A human motion measurement method based on computer vision, characterized in that the method comprises the following steps:
step S1: acquiring a human motion video stream by an optical human motion measurement method;
step S2: extracting a current reference image and a current frame image from the human motion video stream, and calculating motion information of image pixel points in the current frame image according to the current reference image and the current frame image;
step S3: acquiring coordinate information of human skeleton key points in a current frame image; and
step S4: calculating motion measurement information of the human body joint motion at least according to the motion information of the image pixel points in the current frame image and the coordinate information of the human body skeleton key points.
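For illustration, the following Python sketch strings steps S1 to S4 together for a monocular video file; compute_flow, detect_keypoints and compute_measure are hypothetical callables standing in for the steps elaborated in the dependent claims, and OpenCV is assumed only as a convenient way to read the video stream.

import cv2

def measure_joint_motion(video_path, compute_flow, detect_keypoints, compute_measure):
    # Step S1: acquire the human motion video stream from a monocular camera recording.
    cap = cv2.VideoCapture(video_path)
    ok, reference = cap.read()
    results = []
    while ok:
        ok, current = cap.read()
        if not ok:
            break
        flow = compute_flow(reference, current)            # step S2: per-pixel motion information
        keypoints = detect_keypoints(current)              # step S3: skeleton key point coordinates
        results.append(compute_measure(flow, keypoints))   # step S4: joint motion measurement
        reference = current                                # the current frame becomes the next reference
    cap.release()
    return results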
2. The computer vision-based human motion measurement method as claimed in claim 1, wherein: step S2 specifically includes the following steps:
step S21: taking two frames of images sequentially acquired from the human motion video stream as the current reference image and the current frame image respectively, and performing denoising preprocessing on the current reference image and the current frame image; and
step S22: calculating the motion information of the image pixel points in the current frame image based on an optical flow method, using the change of the image in the time domain and the correlation between adjacent frames.
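A minimal sketch of steps S21 and S22, assuming OpenCV is available: Gaussian blurring stands in for the denoising preprocessing, and the Farneback dense optical-flow algorithm with illustrative parameters stands in for the optical-flow computation; the claim does not mandate either particular choice.

import cv2

def compute_flow(reference_bgr, current_bgr):
    # Step S21: denoise the reference and current frames (Gaussian blur as an assumed denoiser).
    ref = cv2.GaussianBlur(cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    cur = cv2.GaussianBlur(cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    # Step S22: dense optical flow between the two frames; flow[y, x] = (dx, dy) in pixels.
    # Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(ref, cur, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow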
3. The computer vision-based human motion measurement method as claimed in claim 1, wherein: step S3 specifically includes the following steps:
step S31: inputting the current frame image into a pre-trained human skeleton key point recognition model to obtain Feature Maps of all joint points of the human body; and
step S32: acquiring pixel coordinate information of each human skeleton key point based on the high-value region of the corresponding Feature Map.
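A minimal sketch of steps S31 and S32, assuming the pre-trained model outputs one heatmap (Feature Map) per joint; taking the per-channel peak and rescaling it to image coordinates is one common way of reading the high-value region, and is not necessarily the exact procedure used in the filing.

import numpy as np

def keypoints_from_heatmaps(heatmaps, image_width, image_height):
    # heatmaps: array of shape (num_joints, h, w) produced by a pre-trained
    # skeleton key point model (the model itself is outside this sketch).
    # Returns an array of shape (num_joints, 2) of (x, y) pixel coordinates.
    num_joints, h, w = heatmaps.shape
    coords = np.zeros((num_joints, 2), dtype=np.float32)
    for j in range(num_joints):
        idx = np.argmax(heatmaps[j])                 # peak of the high-value region
        y, x = np.unravel_index(idx, (h, w))
        coords[j] = (x * image_width / w, y * image_height / h)   # rescale to image resolution
    return coords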
4. The computer vision-based human motion measurement method as claimed in claim 3, wherein the method further comprises the following steps:
step Sa: determining a human body trunk Mask region comprising at least two human body skeleton key points in the current frame image; and
step Sb: repeating step S4 to obtain motion measurement information of the human body joint motion corresponding to a plurality of pixel points in the human body trunk Mask region, and performing statistics on the motion measurement information corresponding to the plurality of pixel points to obtain the final motion measurement information of the human body joint motion.
5. The computer vision-based human motion measurement method as claimed in claim 4, wherein the at least two human body skeleton key points are defined as C and D, and step Sa comprises:
step Sa1: setting a threshold;
step Sa2: selecting a pixel point E in the current frame image, calculating a value S according to the formula shown in Figure FDA0002272266210000021, comparing S with the threshold, and determining whether the pixel point E belongs to the Mask region according to the comparison result; and
step Sa3: repeating step Sa2 to acquire the human body trunk Mask region in the current frame image.
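Since the expression for S is reproduced only as an image (Figure FDA0002272266210000021), the following sketch assumes S to be the sum of the distances from E to C and from E to D, normalised by the distance between C and D; this keeps the pixels lying within an ellipse around the C-D segment, and the real formula should be substituted where known.

import numpy as np

def in_torso_mask(E, C, D, threshold=1.2):
    # Assumed membership test for the torso Mask region: S = (|EC| + |ED|) / |CD|,
    # compared against an empirically set threshold (step Sa1).
    E, C, D = (np.asarray(p, dtype=np.float64) for p in (E, C, D))
    s = (np.linalg.norm(E - C) + np.linalg.norm(E - D)) / np.linalg.norm(D - C)
    return s <= threshold

# Step Sa3 simply repeats the test for every candidate pixel, e.g.:
# mask = [[in_torso_mask((x, y), C, D) for x in range(width)] for y in range(height)]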
6. The computer vision-based human motion measurement method as claimed in claim 1, wherein the human body joint corresponds to two human body skeleton key points, and the motion measurement information of the human body joint motion is set as the human body joint motion angular velocity given by the formulas shown in Figure FDA0002272266210000022 and Figure FDA0002272266210000023, wherein v_a and v_b are the motion information, obtained in step S2, of the image pixel points corresponding to the two human body skeleton key points in the current frame image, b - a is the pixel distance between the two human body skeleton key points in the current frame image, V_A is the real motion linear velocity of the human skeleton joint point, and B - A is the length information of the human skeleton corresponding to the human body joint.
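The claim's formulas are reproduced only as images in the filing; under the variable definitions recited above, one plausible reading is that the angular velocity equals the relative pixel velocity of the two key points divided by their pixel distance, and that the real linear velocity equals that angular velocity multiplied by the real bone length. The Python sketch below implements this assumed reading.

import numpy as np

def joint_angular_and_linear_velocity(v_a, v_b, a, b, bone_length_m, fps):
    # Assumed reconstruction of the claim-6 relations:
    #   omega ~ |v_b - v_a| / |b - a|   (angular velocity of the joint)
    #   V_A   ~ omega * |B - A|         (real linear velocity of the joint point)
    # v_a, v_b: per-frame pixel displacements at the two key points (from the optical flow)
    # a, b: pixel coordinates of the two key points; bone_length_m: real bone length |B - A| in metres
    v_a, v_b, a, b = (np.asarray(x, dtype=np.float64) for x in (v_a, v_b, a, b))
    omega = np.linalg.norm(v_b - v_a) / np.linalg.norm(b - a) * fps   # radians per second
    linear_velocity = omega * bone_length_m                           # metres per second
    return omega, linear_velocity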
7. The computer vision-based human motion measurement method as claimed in claim 1, wherein the motion measurement information of the human body joint motion in step S4 may be set as any one of the following motion indexes: the linear velocity, the linear acceleration, or the angle variation of the human body joint motion.
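For completeness, the following generic sketch shows how the other motion indexes named in claim 7 could be derived from a per-frame series by finite differences; this post-processing step is an assumption and is not recited in the claims.

import numpy as np

def motion_indices(per_frame_values, fps):
    # per_frame_values: a per-frame series such as joint angles or linear velocities.
    # The first difference gives the change between consecutive frames (e.g. angle variation);
    # multiplying by the frame rate expresses that change per second (e.g. linear acceleration
    # when the input series is a linear velocity).
    series = np.asarray(per_frame_values, dtype=np.float64)
    per_frame_change = np.diff(series)
    per_second_rate = per_frame_change * fps
    return per_frame_change, per_second_rate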
8. A computer vision-based human motion measurement system, comprising:
an image acquisition unit for acquiring a human motion video stream based on an optical human motion measurement method;
a motion information acquisition unit for extracting a current reference image and a current frame image from the human motion video stream and calculating motion information of image pixel points in the current frame image according to the current reference image and the current frame image;
a key point information acquisition unit for acquiring coordinate information of human skeleton key points in the current frame image; and
a data processing unit for calculating motion measurement information of the human body joint motion at least according to the motion information of the image pixel points in the current frame image and the coordinate information of the human skeleton key points.
9. The computer vision-based human motion measurement system as claimed in claim 8, wherein the key point information acquisition unit further comprises:
a pre-trained human skeleton key point recognition model for extracting Feature Maps of all joint points of the human body to obtain coordinate information of each human skeleton key point; and
a human body trunk Mask region acquisition unit for determining the human body trunk Mask region according to the acquired human skeleton key point coordinate information and a set threshold.
10. An electronic device comprising a memory and a processor, characterized in that: the memory stores a computer program arranged to, when run, perform the computer vision-based human motion measurement method of any one of claims 1 to 7; and
the processor is arranged to execute the computer vision-based human motion measurement method of any one of claims 1 to 7 by means of the computer program.
CN201911112035.3A 2019-11-13 2019-11-13 Human motion measuring method and system based on computer vision and electronic equipment Pending CN112790758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911112035.3A CN112790758A (en) 2019-11-13 2019-11-13 Human motion measuring method and system based on computer vision and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911112035.3A CN112790758A (en) 2019-11-13 2019-11-13 Human motion measuring method and system based on computer vision and electronic equipment

Publications (1)

Publication Number Publication Date
CN112790758A true CN112790758A (en) 2021-05-14

Family

ID=75803614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911112035.3A Pending CN112790758A (en) 2019-11-13 2019-11-13 Human motion measuring method and system based on computer vision and electronic equipment

Country Status (1)

Country Link
CN (1) CN112790758A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114253614A (en) * 2021-11-25 2022-03-29 上海齐感电子信息科技有限公司 Control method and control system
TWI797916B (en) * 2021-12-27 2023-04-01 博晶醫電股份有限公司 Human body detection method, human body detection device, and computer readable storage medium
WO2024045208A1 (en) * 2022-08-31 2024-03-07 Hong Kong Applied Science And Technology Research Institute Co., Ltd Method and system for detecting short-term stress and generating alerts inside the indoor environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230314

Address after: 100080 room 1001-003, building 1, No.3 Haidian Street, Haidian District, Beijing

Applicant after: SINOVATION VENTURES (BEIJING) ENTERPRISE MANAGEMENT CO.,LTD.

Address before: 100080 room 1001-003, building 1, No.3 Haidian Street, Haidian District, Beijing

Applicant before: SINOVATION VENTURES (BEIJING) ENTERPRISE MANAGEMENT CO.,LTD.

Applicant before: Beijing Innovation workshop Kuangshi international Artificial Intelligence Technology Research Institute Co.,Ltd.