CN113361333A - Non-contact riding motion state monitoring method and system - Google Patents

Non-contact riding motion state monitoring method and system

Info

Publication number
CN113361333A
CN113361333A
Authority
CN
China
Prior art keywords
riding
module
image acquisition
motion
information
Prior art date
Legal status
Granted
Application number
CN202110539560.4A
Other languages
Chinese (zh)
Other versions
CN113361333B (en)
Inventor
Wang Wei (王伟)
Jiang Xiaoming (姜小明)
Zhang Yujia (张钰佳)
Li Zhangyong (李章勇)
Guo Yijun (郭毅军)
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202110539560.4A
Publication of CN113361333A
Application granted
Publication of CN113361333B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person

Abstract

The invention belongs to the technical field of motion state monitoring, and particularly relates to a non-contact riding motion state monitoring method and system. The method comprises: acquiring pose information of the image acquisition devices, and fixing the devices according to the pose information; acquiring motion state data of the person to be monitored in real time; and inputting the motion state data into a motion state monitoring model, judging the current motion condition of the person, and adjusting the motion posture accordingly. The motion state monitoring model comprises a neural network model, a spatial information recovery module and a riding state analysis model. The invention provides a non-contact riding motion state monitoring system that acquires temporal and spatial information of the riding motion in real time without interfering with the rider's movement, optimizes the data, guides the riding motion based on objective data, and improves the exercise effect.

Description

Non-contact riding motion state monitoring method and system
Technical Field
The invention belongs to the technical field of motion state monitoring, and particularly relates to a non-contact riding motion state monitoring method and system.
Background
As people pay more and more attention to health, riding exercise has become widely popular, giving rise to indoor riding platforms that are not restricted by time, venue or environment. Proper riding exercise benefits the rider's physical and mental health, but an excessive motion state can damage the body. It is therefore necessary to detect the state of the exerciser during riding and to provide guidance and analysis based on objective data.
At present, riding exercise is mostly monitored by contact methods: a wearable heart-rate acquisition device monitors the heart rate during exercise, a smart wearable vest collects the body temperature during riding, and an instrumented helmet collects the electroencephalogram and respiratory rate during riding. Monitoring of the rider's state is accomplished by acquiring these physiological parameters. The drawback of this approach is that it affects the rider's movement to some extent, so a fully relaxing exercise experience cannot be achieved. A non-contact riding motion state monitoring method is therefore urgently needed.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a non-contact riding motion state monitoring method, which comprises the following steps: acquiring pose information of the image acquisition devices, and fixing the devices according to the pose information; acquiring motion state data of the person to be monitored in real time; inputting the motion state data into a motion state monitoring model, judging the current motion condition of the person to be monitored, and adjusting the motion posture accordingly; the motion state monitoring model comprises a neural network model, a spatial information recovery module and a riding state analysis model;
the process of the motion state monitoring model for processing the motion data comprises the following steps:
s1: inputting the acquired image data into a trained neural network to obtain the position information of different joint points in the riding state of each picture under a pixel coordinate system; the neural network model is an openfuse network model;
s2: matching and corresponding the information of the joint points acquired at different angles at the same moment according to a time sequence, and inputting the corresponding joint point information into a spatial information recovery module to recover the spatial information of each joint point;
s3: and sequentially inputting the spatial information of each joint point into the riding state analysis model according to the time sequence to obtain the motion description in the current riding state.
Preferably, the process of acquiring the pose information of the image acquisition devices includes determining the central image acquisition device from the position information of each device. The determination process comprises: for each image acquisition device, calculating the sum of the straight-line distances from that device to all other devices; comparing these sums and taking the device with the minimum sum as the central image acquisition device. Chessboard images of the same position are then acquired by the image acquisition devices from different angles, and the pose of each device relative to the central position is solved from the acquired chessboard images to obtain the relative pose information of the devices.
Furthermore, at least 3 image acquisition devices are adopted in the data acquisition process, and the image acquisition devices are fixed according to the solved pose information.
Further, the process of solving the pose information of the image acquisition devices relative to the central position includes: performing monocular calibration and binocular calibration on the image acquisition devices, and obtaining the pose information of each device from the calibration results. Monocular calibration comprises correcting each image acquisition device according to the chessboard images and obtaining the corrected device's internal parameters, giving the intrinsic parameter matrix. Binocular calibration comprises calibrating the chessboard images with a rectification-based calibration method to obtain matched chessboard image point pairs; obtaining the rotation matrix and translation matrix of each image acquisition device from the matched point pairs; and obtaining the relative pose information of the devices from the intrinsic parameter matrix, the rotation matrix and the translation matrix.
Preferably, the OpenPose network model comprises a backbone network VGG-19 and at least one stage module; each stage module comprises a PCM module, a PAF module, two convolution layers and a fully connected layer; the first convolution layer is connected to the PCM module, the second convolution layer is connected to the PAF module, and the PCM module and the PAF module are connected in parallel; the process of training the OpenPose network model comprises the following steps:
Step 1: acquiring original image data and preprocessing it;
Step 2: inputting the preprocessed images into the backbone network VGG-19 to extract image features;
Step 3: convolving the extracted features and inputting the result into the PCM module to obtain key point heatmaps;
Step 4: convolving the extracted features and inputting the result into the PAF module to obtain key point affinity field maps;
Step 5: inputting the feature maps, key point heatmaps and key point affinity field maps into the fully connected layer to obtain the pixel coordinates of the human body joint points in the riding images;
Step 6: computing the loss function of the OpenPose network model and continuously adjusting the model parameters; training finishes when the loss function reaches its minimum.
Furthermore, the loss functions of the PAF module and the PCM module in the OpenPose network model, $f_L^{t_i}$ and $f_S^{t_k}$, are respectively:

$$f_L^{t_i}=\sum_{c=1}^{C}\sum_{p}W(p)\cdot\left\|L_c^{t_i}(p)-L_c^{*}(p)\right\|_2^2$$

$$f_S^{t_k}=\sum_{j=1}^{J}\sum_{p}W(p)\cdot\left\|S_j^{t_k}(p)-S_j^{*}(p)\right\|_2^2$$

wherein $W(p)$ is a weight whose value is 0 when the label at point $p$ is absent; $L_c^{t_i}$ is the label obtained from training in the PAF module and $L_c^{*}$ is the pre-labeled PAF data; $S_j^{t_k}$ is the label obtained from training in the PCM module and $S_j^{*}$ is the pre-labeled PCM data. The overall loss function can be expressed as:

$$f=\sum_{t=1}^{T_p}f_L^{t}+\sum_{t=T_p+1}^{T_p+T_c}f_S^{t}$$
preferably, the process of recovering the spatial information of each joint point includes: acquiring pixel coordinates of a joint point pair image in the two paired images; restoring the updated joint point space information according to the pose information of the equipment by combining a triangularization method; setting a weight, and performing weighted updating on the spatial information of the joint point according to the set weight; the joint points in the two images are joint points which are output after the images acquired by the image acquisition equipment and the image acquisition equipment at the central position at the same time are input into the neural network, and comprise all the image acquisition equipment and the joint points corresponding to the image acquisition equipment at the central position.
Further, the set weight is a function of the number of image acquisition devices n (the formula is presented as an image in the original publication).
preferably, the riding state analysis model is a support vector machine model; and inputting the collected spatial information and the corresponding time information of the joint points into a trained support vector machine model, and extracting riding motion characteristics to achieve the purpose of analyzing the riding state.
A non-contact riding state monitoring system comprises: a riding scene monitoring module, a riding state space-time parameter acquisition module and a riding state analysis module;
the riding scene monitoring module comprises a plurality of data acquisition modules, all the data acquisition modules need to run simultaneously and complete calibration, pose information of different data acquisition modules relative to a central point is obtained, and two-dimensional pictures of riding postures of people at different angles at the same moment are obtained;
the space-time parameter acquisition module in the riding state comprises a neural network module for attitude analysis and a spatial information recovery module;
the gesture analysis neural network module is used for acquiring image pixel coordinates of joint points for acquiring human body motion images;
the spatial information recovery module is used for acquiring spatial information of the joint points;
the riding state analysis module is used for analyzing the time and the space information of the obtained riding state of the human body respectively, and monitoring the riding state by combining the included angle and the movement speed information of the limbs under the riding state.
The invention has the beneficial effects that: a non-contact riding motion state monitoring system is provided, which acquires temporal and spatial information of the riding motion in real time without affecting the rider's movement, optimizes the data, guides the riding motion based on objective data, improves the exercise effect, and prevents injuries caused by non-standard movements.
Drawings
FIG. 1 is a schematic diagram of the human body key points of the present invention;
FIG. 2 is a schematic diagram of triangulation key points of the present invention;
FIG. 3 is a flowchart of the image capturing device position information calculation of the present invention;
fig. 4 is a detailed flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A non-contact riding motion state monitoring method, as shown in fig. 4, includes: acquiring pose information of the image acquisition devices, and fixing the devices according to the pose information; acquiring motion state data of the person to be monitored in real time; inputting the motion state data into a motion state monitoring model, judging the current motion condition of the person, and adjusting the motion posture accordingly; the motion state monitoring model comprises a neural network model, a spatial information recovery module and a riding state analysis model.
An embodiment of the process by which the motion state monitoring model processes motion data comprises:
S1: inputting the acquired image data into the trained neural network to obtain, for each picture, the positions of the different joint points in the riding state in the pixel coordinate system; the neural network model is the OpenPose network model;
S2: matching the joint point information acquired from different angles at the same moment according to the time sequence, and inputting the matched joint point information into the spatial information recovery module to recover the spatial information of each joint point;
S3: inputting the spatial information of each joint point into the riding state analysis model in time order to obtain a description of the motion in the current riding state.
Both the left and right sides of the riding human body are monitored. Considering the viewing angles of the image acquisition devices, three devices are used on each side, which increases the constraints among devices and improves the accuracy of three-dimensional coordinate estimation; the six devices on the two sides monitor simultaneously. When the monitoring system first runs, each camera must be calibrated: the middle image acquisition device is taken as the central image acquisition device, and the pose information of each image acquisition device relative to the central one is obtained. The following description takes the three devices on one side as an example.
Optionally, the image capturing device is a camera.
A specific implementation process for acquiring the pose information of the image acquisition devices comprises determining the central image acquisition device from the position information of each device. The determination process comprises: for each image acquisition device, calculating the sum of the straight-line distances from that device to all other devices; comparing these sums and taking the device with the minimum sum as the central image acquisition device. Chessboard images of the same position are then acquired by the image acquisition devices from different angles, and the pose of each device relative to the central position is solved from the acquired chessboard images to obtain the relative pose information of the devices.
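For illustration, the central-device selection described above can be sketched in Python as follows (a minimal sketch; the example coordinates and the function name are hypothetical, not taken from the patent):

    import numpy as np

    def select_central_device(positions):
        """Pick the device whose summed straight-line distance to all
        other devices is minimal, as described above.

        positions: (n, 3) array of device positions in a common frame.
        Returns the index of the central image acquisition device.
        """
        positions = np.asarray(positions, dtype=float)
        # Pairwise Euclidean distances between all devices.
        diffs = positions[:, None, :] - positions[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        # Sum of distances from each device to all the others.
        return int(np.argmin(dists.sum(axis=1)))

    # Example: three cameras on one side of the rider (coordinates hypothetical).
    cams = [[0.0, 0.0, 0.0], [1.0, 0.2, 0.0], [2.0, 0.0, 0.1]]
    print(select_central_device(cams))  # -> 1, the middle camera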
As shown in fig. 3, the process of solving the pose information of the image acquisition devices relative to the central position includes: performing monocular calibration and binocular calibration on the image acquisition devices, and obtaining the pose information relative to the central position from the calibration results. Monocular calibration comprises correcting each image acquisition device according to the chessboard images and obtaining the corrected device's internal parameters, giving the intrinsic parameter matrix. Binocular calibration comprises calibrating the chessboard images with a rectification-based calibration method to obtain matched chessboard image point pairs; obtaining the rotation matrix and translation matrix of each image acquisition device from the matched point pairs; and obtaining the relative pose information of the devices from the intrinsic parameter matrix, the rotation matrix and the translation matrix.
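A minimal sketch of this monocular-plus-binocular calibration pipeline, using standard OpenCV routines, is given below (the chessboard dimensions and the pairing of views are assumptions; the patent does not specify an implementation):

    import cv2
    import numpy as np

    # Chessboard geometry (assumed: 9x6 inner corners, 25 mm squares).
    PATTERN = (9, 6)
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * 25.0

    def calibrate_pair(imgs_center, imgs_other):
        """Calibrate one device against the central device from chessboard
        views captured at the same moments (lists of paired BGR images)."""
        obj_pts, pts1, pts2, size = [], [], [], None
        for img1, img2 in zip(imgs_center, imgs_other):
            g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
            g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
            ok1, c1 = cv2.findChessboardCorners(g1, PATTERN)
            ok2, c2 = cv2.findChessboardCorners(g2, PATTERN)
            if ok1 and ok2:  # keep only views seen by both cameras
                obj_pts.append(objp)
                pts1.append(c1)
                pts2.append(c2)
                size = g1.shape[::-1]
        # Monocular calibration: intrinsic matrix K and distortion per device.
        _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts1, size, None, None)
        _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts2, size, None, None)
        # Binocular calibration: rotation R and translation t of the other
        # device relative to the central device, from the matched point pairs.
        _, _, _, _, _, R, t, _, _ = cv2.stereoCalibrate(
            obj_pts, pts1, pts2, K1, d1, K2, d2, size,
            flags=cv2.CALIB_FIX_INTRINSIC)
        return K1, K2, R, t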
From a geometric viewpoint, the imaging process of an image acquisition device maps a point in three-dimensional space onto a two-dimensional image plane; the mapping involves the world coordinate system W, the camera coordinate system C, the image coordinate system A and the pixel coordinate system A'. A point $A_w=[X_w,Y_w,Z_w]^T$ in the world coordinate system is correspondingly expressed as $A_c=[X_c,Y_c,Z_c]^T$ in the camera coordinate system, as $a=[x,y]^T$ in the image coordinate system, and as $a'=[x',y']^T$ in the pixel coordinate system.
In homogeneous coordinates, the mapping from the world coordinate system to the pixel coordinate system is described as:

$$Z_c\begin{bmatrix}x'\\y'\\1\end{bmatrix}=\begin{bmatrix}f/d_x&0&x'_0\\0&f/d_y&y'_0\\0&0&1\end{bmatrix}\begin{bmatrix}R&t\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}$$

wherein $f$ is the focal length of the camera; $d_x$, $d_y$ are the physical dimensions of each pixel, i.e. $d_x\times d_y\,(\mathrm{mm}^2)$; $[x'_0,y'_0]^T$ is the translation of the pixel coordinate origin relative to the image coordinate origin; $R$ is a 3×3 rotation matrix; $t$ is a 3×1 translation vector. $f$, $d_x$, $d_y$, $x'_0$, $y'_0$ are called the intrinsic parameters of the camera and depend only on the camera itself; $R$ and $t$ are the extrinsic parameters, which change when the relative pose of the world and camera coordinate systems changes. Writing the intrinsic matrix as $K$ and the extrinsic transform as $T=[R\mid t]$, the projection can be written compactly as:

$$Z_cA'=K(RA_w+t)=KTA_w$$

In a multi-camera system centered on one camera I, suppose a real-world point $A_w=[X_w,Y_w,Z_w]^T$ and the point $A_c=[X_c,Y_c,Z_c]^T$ in the coordinate system of camera I are the same point, i.e. $A_w=A_c$. Then the pixel positions of the two pixel points $A'_1$ and $A'_2$ under cameras I and II are:

$$Z_{c1}A'_1=KA_w,\qquad Z_{c2}A'_2=K(RA_w+t)$$

Thus the relationship between the pixel points $A'_1$ and $A'_2$ is obtained as:

$$0=A'^{T}_{2}K^{-T}\,[t]_{\times}R\,K^{-1}A'_1$$

The middle terms can be denoted $E=[t]_{\times}R$ and $F=K^{-T}EK^{-1}$, called the essential matrix and the fundamental matrix of the camera pair, respectively. Describing the pose relationship between the cameras thus becomes the process of solving for the essential matrix E or the fundamental matrix F.
In this process, the cameras are calibrated with reference to Zhang Zhengyou's calibration method: the three cameras on the same side acquire chessboard images of the same position from different angles to obtain the relevant pose parameters, and the relative pose information among the cameras is determined, providing the conditions for acquiring spatial data in the riding state.
In the invention, two-dimensional human skeleton key point detection based on a neural network model is combined with triangulation to obtain spatial information, and a multi-camera three-dimensional reconstruction constraint term is added to recover the spatial data of human body key points in a stable riding state.
Two-dimensional human skeleton key point detection uses the OpenPose model developed by Carnegie Mellon University based on convolutional neural networks and supervised learning; it is implemented with a bottom-up algorithm using part affinity fields (PAFs) and can detect two-dimensional human skeleton key points in real time in a motion state.
The OpenPose network model comprises a backbone network VGG-19 and at least one stage module; each stage module comprises a PCM module, a PAF module, two convolution layers and a fully connected layer; the first convolution layer is connected to the PCM module, the second convolution layer is connected to the PAF module, and the PCM module and the PAF module are connected in parallel, forming the OpenPose network model.
Preferably, the OpenPose network model comprises a plurality of stages, all connected in series, so that the pixel coordinates of the human body joint points obtained through the model are more accurate.
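A structural sketch of one such stage in PyTorch follows (the channel widths and output counts are illustrative assumptions, not the patent's exact architecture):

    import torch
    from torch import nn

    class Stage(nn.Module):
        """One stage: two parallel branches, each fed by its own convolution
        layer; the PCM branch predicts key point heatmaps and the PAF branch
        predicts part affinity fields."""
        def __init__(self, in_ch, n_joints=18, n_limbs=19):
            super().__init__()
            self.pcm = nn.Sequential(  # first convolution layer -> PCM module
                nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(),
                nn.Conv2d(128, n_joints, 1))
            self.paf = nn.Sequential(  # second convolution layer -> PAF module
                nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(),
                nn.Conv2d(128, 2 * n_limbs, 1))

        def forward(self, feats):
            heatmaps, pafs = self.pcm(feats), self.paf(feats)
            # A following stage consumes the backbone features concatenated
            # with this stage's outputs, refining the predictions.
            return heatmaps, pafs, torch.cat([feats, heatmaps, pafs], dim=1)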
The process of training the OpenPose network model comprises the following steps:
Step 1: acquiring original image data and preprocessing it;
Step 2: inputting the preprocessed images into the backbone network VGG-19 to extract image features;
Step 3: convolving the extracted features and inputting the result into the PCM module to obtain key point heatmaps;
Step 4: convolving the extracted features and inputting the result into the PAF module to obtain key point affinity field maps;
Step 5: inputting the feature maps, key point heatmaps and key point affinity field maps into the fully connected layer to obtain the pixel coordinates of the human body joint points in the riding images;
Step 6: computing the loss function of the OpenPose network model and continuously adjusting the model parameters; training finishes when the loss function reaches its minimum.
The loss functions of the PAF module and the PCM module in the OpenPose network model, $f_L^{t_i}$ and $f_S^{t_k}$, are respectively:

$$f_L^{t_i}=\sum_{c=1}^{C}\sum_{p}W(p)\cdot\left\|L_c^{t_i}(p)-L_c^{*}(p)\right\|_2^2$$

$$f_S^{t_k}=\sum_{j=1}^{J}\sum_{p}W(p)\cdot\left\|S_j^{t_k}(p)-S_j^{*}(p)\right\|_2^2$$

wherein $W(p)$ is a weight whose value is 0 when the label at point $p$ is absent; $L_c^{t_i}$ is the label obtained from training in the PAF module and $L_c^{*}$ is the pre-labeled PAF data; $S_j^{t_k}$ is the label obtained from training in the PCM module and $S_j^{*}$ is the pre-labeled PCM data. The overall loss function can be expressed as:

$$f=\sum_{t=1}^{T_p}f_L^{t}+\sum_{t=T_p+1}^{T_p+T_c}f_S^{t}$$

wherein $T_p$ denotes the total number of layers of the PAF module, $T_c$ denotes the total number of layers of the PCM module, $f_L^t$ represents the loss function of the PAF module, and $f_S^t$ represents the loss function of the PCM module.
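The staged, masked L2 losses above can be computed as in the following PyTorch sketch (the tensor shapes are assumptions; this illustrates the formulas, it is not the patent's code):

    import torch

    def stage_loss(pred, target, weight):
        """Masked L2 loss for one stage.

        pred, target: (B, C, H, W) predicted and ground-truth maps
        (PAFs for the PAF branch, confidence maps for the PCM branch).
        weight: (B, 1, H, W) mask W(p), zero where labels are absent.
        """
        return (weight * (pred - target) ** 2).sum()

    def total_loss(paf_preds, paf_gt, pcm_preds, pcm_gt, weight):
        """Sum the PAF-branch losses over its stages and the PCM-branch
        losses over its stages, i.e. f = sum f_L^t + sum f_S^t."""
        f = sum(stage_loss(L, paf_gt, weight) for L in paf_preds)
        f = f + sum(stage_loss(S, pcm_gt, weight) for S in pcm_preds)
        return f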
The process of recovering the spatial information of each joint point includes: acquiring the pixel coordinates of paired joint points in two paired images; recovering and updating the spatial information of the joint points from the device pose information combined with triangulation; and setting a weight and updating the joint point spatial information by weighting according to the set weight. The paired joint points are those output by the neural network from images acquired at the same moment by an image acquisition device and the central image acquisition device, covering the pairings of every image acquisition device with the central device.
The set weight is a function of n, the number of image acquisition devices (the formula is presented as an image in the original publication).
As shown in fig. 2, triangulation is a method of recovering depth information from the angles at which the same spatial point is observed from different positions. Suppose $A_w$ is a detected human body key point whose detections by the neural network in the two cameras $I_1$ and $I_2$ are $A'_1$ and $A'_2$ in their pixel coordinate systems. Combining the derivation of the previous section gives the equation:

$$d_1A'_1=d_2RA'_2+t$$

wherein $d_1$ and $d_2$ are the depths of the two points $A'_1$ and $A'_2$ respectively. Knowing $R$ and $t$, the depths $d_1$ and $d_2$ can be solved from the above equation together with the epipolar constraint. However, because of random noise the equation does not necessarily have an exact solution, so the depth information is estimated by least squares. Meanwhile, to resolve the ambiguity among the three-dimensional information acquired by different cameras, and considering the practical application scenario, three cameras are used on each side of the riding human body, which adds constraints to the recovery of spatial information and improves its accuracy.
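The least-squares estimation mentioned above corresponds to standard direct linear transform (DLT) triangulation; a sketch over an arbitrary number of views follows (the projection matrices and pixel coordinates are assumed inputs):

    import numpy as np

    def triangulate(proj_mats, pixels):
        """Least-squares triangulation of one joint from several views.

        proj_mats: list of 3x4 projection matrices P_i = K_i [R_i | t_i].
        pixels:    list of (u, v) pixel coordinates of the same joint.
        Returns the 3D point minimizing the algebraic error, which
        tolerates the random noise discussed above.
        """
        rows = []
        for P, (u, v) in zip(proj_mats, pixels):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        A = np.stack(rows)
        # Homogeneous least squares: the solution is the right singular
        # vector of A associated with the smallest singular value.
        _, _, vh = np.linalg.svd(A)
        X = vh[-1]
        return X[:3] / X[3]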
The riding state analysis model is a support vector machine model; the collected spatial information of the joint points and the corresponding time information are input into the trained support vector machine model, and riding motion features are extracted to analyze the riding state.
After the spatial information of the riding state over a period of time is acquired, the three-dimensional coordinates are analyzed; the limb included angles, movement speeds and similar information in the riding state are calculated from the geometric relations of the three-dimensional human body joint points, and the riding state is monitored.
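As an illustration of this analysis step, the sketch below derives a limb included angle and a joint speed from the recovered three-dimensional joint trajectories and feeds such features to a support vector machine (the feature choice and the training data are hypothetical, not specified by the patent):

    import numpy as np
    from sklearn.svm import SVC

    def joint_angle(a, b, c):
        """Included angle (degrees) at joint b formed by points a-b-c,
        e.g. the knee angle from hip, knee and ankle coordinates."""
        u, v = a - b, c - b
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    def mean_joint_speed(traj, dt):
        """Mean speed of one joint from its 3D trajectory (T, 3) sampled
        every dt seconds."""
        return np.linalg.norm(np.diff(traj, axis=0), axis=1).mean() / dt

    # Hypothetical training data: rows of [knee angle (deg), knee speed (m/s)]
    # labeled 1 for a correct riding posture and 0 for an incorrect one.
    X_train = np.array([[75.0, 0.9], [150.0, 0.3], [80.0, 1.0], [160.0, 0.2]])
    y_train = np.array([1, 0, 1, 0])
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print(clf.predict([[78.0, 0.95]]))  # -> [1]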
A non-contact riding motion state monitoring system comprises a riding scene monitoring module, a riding state space-time parameter acquisition module and a riding state analysis module;
the riding scene monitoring module comprises a plurality of data acquisition modules; all data acquisition modules run simultaneously and complete calibration, obtaining the pose information of each data acquisition module relative to a central position and acquiring two-dimensional pictures of the person's riding posture from different angles at the same moment;
the riding state space-time parameter acquisition module comprises a posture analysis neural network module and a spatial information recovery module;
the posture analysis neural network module is used to obtain the pixel coordinates of the joint points in the acquired human body motion images;
the spatial information recovery module is used to obtain the spatial information of the joint points;
the riding state analysis module is used to analyze the temporal and spatial information of the obtained human riding state and to monitor the riding state by combining the limb included angle and movement speed information in the riding state.
In the present invention, embodiments of the system are similar to embodiments of the method.
The above-mentioned embodiments further illustrate the objects, technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent substitutions or improvements made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (10)

1. A non-contact riding motion state monitoring method, characterized by comprising the following steps: acquiring pose information of the image acquisition devices, and fixing the devices according to the pose information; acquiring motion state data of the person to be monitored in real time; inputting the motion state data into a motion state monitoring model, judging the current motion condition of the person to be monitored, and adjusting the motion posture accordingly; the motion state monitoring model comprises a neural network model, a spatial information recovery module and a riding state analysis model;
the process of the motion state monitoring model for processing the motion data comprises the following steps:
s1: inputting the acquired image data into a trained neural network to obtain the position information of different joint points in the riding state of each picture under a pixel coordinate system; the neural network model is an openfuse network model;
s2: matching and corresponding the information of the joint points acquired at different angles at the same moment according to a time sequence, and inputting the corresponding joint point information into a spatial information recovery module to recover the spatial information of each joint point;
s3: and sequentially inputting the spatial information of each joint point into the riding state analysis model according to the time sequence to obtain the motion condition under the current riding state.
2. The non-contact riding motion state monitoring method according to claim 1, wherein the process of acquiring pose information of the image acquisition devices comprises: determining the central image acquisition device from the position information of each device, the determination process comprising: for each image acquisition device, calculating the sum of the straight-line distances from that device to all other devices, comparing these sums, and taking the device with the minimum sum as the central image acquisition device; acquiring chessboard images of the same position with the image acquisition devices from different angles; and solving the pose of each image acquisition device relative to the central position from the acquired chessboard images to obtain the relative pose information of the devices.
3. The non-contact riding motion state monitoring method according to claim 2, characterized in that at least 3 image acquisition devices are adopted in the data acquisition process, and the image acquisition devices are fixed according to the solved pose information.
4. The non-contact riding motion state monitoring method according to claim 2, wherein the process of solving the pose information of the image acquisition devices relative to the central position comprises: performing monocular calibration and binocular calibration on the image acquisition devices, and obtaining the pose information of each device from the calibration results; monocular calibration comprises correcting each image acquisition device according to the chessboard images and obtaining the corrected device's internal parameters, giving the intrinsic parameter matrix; binocular calibration comprises calibrating the chessboard images with a rectification-based calibration method to obtain matched chessboard image point pairs, obtaining the rotation matrix and translation matrix of each image acquisition device from the matched point pairs, and obtaining the relative pose information of the devices from the intrinsic parameter matrix, the rotation matrix and the translation matrix.
5. The non-contact riding motion state monitoring method according to claim 1, wherein the OpenPose network model comprises a backbone network VGG-19 and at least one stage module; each stage module comprises a PCM module, a PAF module, two convolution layers and a fully connected layer; the first convolution layer is connected to the PCM module, the second convolution layer is connected to the PAF module, and the PCM module and the PAF module are connected in parallel; the process of training the OpenPose network model comprises the following steps:
Step 1: acquiring original image data and preprocessing it;
Step 2: inputting the preprocessed images into the backbone network VGG-19 to extract image features;
Step 3: convolving the extracted features and inputting the result into the PCM module to obtain key point heatmaps;
Step 4: convolving the extracted features and inputting the result into the PAF module to obtain key point affinity field maps;
Step 5: inputting the feature maps, key point heatmaps and key point affinity field maps into the fully connected layer to obtain the pixel coordinates of the human body joint points in the riding images;
Step 6: computing the loss function of the OpenPose network model and continuously adjusting the model parameters; training finishes when the loss function reaches its minimum.
6. The non-contact riding motion state monitoring method according to claim 5, wherein the loss functions of the PAF module and the PCM module in the OpenPose network model, $f_L^{t_i}$ and $f_S^{t_k}$, are respectively:

$$f_L^{t_i}=\sum_{c=1}^{C}\sum_{p}W(p)\cdot\left\|L_c^{t_i}(p)-L_c^{*}(p)\right\|_2^2$$

$$f_S^{t_k}=\sum_{j=1}^{J}\sum_{p}W(p)\cdot\left\|S_j^{t_k}(p)-S_j^{*}(p)\right\|_2^2$$

wherein $c$ denotes the $c$-th confidence mapping and $C$ the total number of confidence mappings; $j$ denotes the $j$-th confidence map and $J$ the total number of confidence maps; $W(p)$ denotes the weight at point $p$, whose value is 0 when the label is absent; $t_i$ denotes the $i$-th level in the PAF module and $t_k$ the $k$-th level in the PCM module; $L_c^{t_i}$ denotes the label derived from training in the PAF module and $L_c^{*}$ the pre-labeled data in the PAF module; $S_j^{t_k}$ denotes the label derived from training in the PCM module and $S_j^{*}$ the pre-labeled data in the PCM module; the overall loss function is:

$$f=\sum_{t=1}^{T_p}f_L^{t}+\sum_{t=T_p+1}^{T_p+T_c}f_S^{t}$$

wherein $T_p$ denotes the total number of layers of the PAF module, $T_c$ denotes the total number of layers of the PCM module, $f_L^t$ represents the loss function of the PAF module, and $f_S^t$ represents the loss function of the PCM module.
7. The non-contact riding motion state monitoring method according to claim 1, wherein the process of recovering the spatial information of each joint point comprises: acquiring the pixel coordinates of paired joint points in two paired images; recovering and updating the spatial information of the joint points from the device pose information combined with triangulation; and setting a weight and updating the joint point spatial information by weighting according to the set weight; the paired joint points are those output by the neural network from images acquired at the same moment by an image acquisition device and the central image acquisition device, covering the pairings of every image acquisition device with the central device.
8. The non-contact riding motion state monitoring method according to claim 7, wherein the set weight is a function of n, the number of image acquisition devices (the formula is presented as an image in the original publication).
9. The non-contact riding motion state monitoring method according to claim 1, wherein the riding state analysis model is a support vector machine model; the collected spatial information of the joint points and the corresponding time information are input into the trained support vector machine model, riding motion features are extracted, and the riding state is analyzed from the extracted features to judge whether the current riding state is correct; if it is incorrect an alarm is given, and if it is correct no action is taken.
10. A non-contact riding state monitoring system, characterized in that the system comprises: a riding scene monitoring module, a riding state space-time parameter acquisition module and a riding state analysis module;
the riding scene monitoring module comprises a plurality of data acquisition modules; all data acquisition modules run simultaneously and complete calibration, obtaining the pose information of each data acquisition module relative to a central position and acquiring two-dimensional pictures of the person's riding posture from different angles at the same moment;
the riding state space-time parameter acquisition module comprises a posture analysis neural network module and a spatial information recovery module;
the posture analysis neural network module is used to obtain the pixel coordinates of the joint points in the acquired human body motion images;
the spatial information recovery module is used to obtain the spatial information of the joint points;
the riding state analysis module is used to analyze the temporal and spatial information of the obtained human riding state and to monitor the riding state by combining the limb included angle and movement speed information in the riding state.
CN202110539560.4A 2021-05-17 2021-05-17 Non-contact type riding motion state monitoring method and system Active CN113361333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110539560.4A CN113361333B (en) 2021-05-17 2021-05-17 Non-contact type riding motion state monitoring method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110539560.4A CN113361333B (en) 2021-05-17 2021-05-17 Non-contact type riding motion state monitoring method and system

Publications (2)

Publication Number Publication Date
CN113361333A (en) 2021-09-07
CN113361333B (en) 2022-09-27

Family

ID=77526831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110539560.4A Active CN113361333B (en) 2021-05-17 2021-05-17 Non-contact type riding motion state monitoring method and system

Country Status (1)

Country Link
CN (1) CN113361333B (en)



Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130236089A1 (en) * 2011-09-11 2013-09-12 Primesense Ltd. Learning-based estimation of hand and finger pose
US20150116472A1 (en) * 2013-10-24 2015-04-30 Global Action Inc. Measuring system and measuring method for analyzing knee joint motion trajectory during cycling
CN105664462A (en) * 2016-01-07 2016-06-15 北京邮电大学 Auxiliary training system based on human body posture estimation algorithm
US20200074165A1 (en) * 2017-03-10 2020-03-05 ThirdEye Labs Limited Image analysis using neural networks for pose and action identification
CN107451568A (en) * 2017-08-03 2017-12-08 重庆邮电大学 Use the attitude detecting method and equipment of depth convolutional neural networks
CN108345869A (en) * 2018-03-09 2018-07-31 南京理工大学 Driver's gesture recognition method based on depth image and virtual data
CN111695402A (en) * 2019-03-12 2020-09-22 沃尔沃汽车公司 Tool and method for labeling human body posture in 3D point cloud data
CN112129290A (en) * 2019-06-24 2020-12-25 罗伯特·博世有限公司 System and method for monitoring riding equipment
CN110458046A (en) * 2019-07-23 2019-11-15 南京邮电大学 A kind of human body motion track analysis method extracted based on artis
CN111008583A (en) * 2019-11-28 2020-04-14 清华大学 Pedestrian and rider posture estimation method assisted by limb characteristics
CN111476187A (en) * 2020-04-13 2020-07-31 福州联合力拓信息科技有限公司 Bicycle riding posture analysis method and correction system
CN111680586A (en) * 2020-05-26 2020-09-18 电子科技大学 Badminton player motion attitude estimation method and system
CN111743543A (en) * 2020-05-27 2020-10-09 武汉齐物科技有限公司 Method and device for detecting motion state of rider and code table

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CUI LIYA: "Research on Human Motion Analysis Algorithms in Sports Scenes" (运动场景下人体动作分析算法研究), China Excellent Doctoral and Master's Theses Full-text Database (Master's), Social Sciences II *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115034280A (en) * 2022-03-16 2022-09-09 宁夏广天夏科技股份有限公司 System for detecting unsafe behavior of underground personnel

Also Published As

Publication number Publication date
CN113361333B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN111881887A (en) Multi-camera-based motion attitude monitoring and guiding method and device
CN110321754B (en) Human motion posture correction method and system based on computer vision
CN107392964B (en) The indoor SLAM method combined based on indoor characteristic point and structure lines
CN102697508B (en) Method for performing gait recognition by adopting three-dimensional reconstruction of monocular vision
CN109949341B (en) Pedestrian target tracking method based on human skeleton structural features
US11521373B1 (en) System for estimating a three dimensional pose of one or more persons in a scene
CN106251399A (en) A kind of outdoor scene three-dimensional rebuilding method based on lsd slam
CN109934848A (en) A method of the moving object precise positioning based on deep learning
WO2004095373A2 (en) Method and system for determining object pose from images
CN113516005B (en) Dance action evaluation system based on deep learning and gesture estimation
CN106846372B (en) Human motion quality visual analysis and evaluation system and method thereof
CN111476077A (en) Multi-view gait recognition method based on deep learning
CN114119739A (en) Binocular vision-based hand key point space coordinate acquisition method
CN113139962B (en) System and method for scoliosis probability assessment
CN111401340B (en) Method and device for detecting motion of target object
CN110032940A (en) A kind of method and system that video pedestrian identifies again
CN112966628A (en) Visual angle self-adaptive multi-target tumble detection method based on graph convolution neural network
CN113361333B (en) Non-contact type riding motion state monitoring method and system
CN113378649A (en) Identity, position and action recognition method, system, electronic equipment and storage medium
Chen et al. Measurement of body joint angles for physical therapy based on mean shift tracking using two low cost Kinect images
Xu et al. Multiview video-based 3-D pose estimation of patients in computer-assisted rehabilitation environment (CAREN)
CN115035546A (en) Three-dimensional human body posture detection method and device and electronic equipment
Ingwersen et al. SportsPose-A Dynamic 3D sports pose dataset
Jin et al. Estimating human weight from a single image
Wong et al. Enhanced classification of abnormal gait using BSN and depth

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant