CN112115863A - Human body action reconstruction method and system based on Doppler radar time-frequency image sequence and cross convolution neural network - Google Patents


Info

Publication number
CN112115863A
CN112115863A
Authority
CN
China
Prior art keywords
neural network; data; human body; motion; Doppler radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010986634.4A
Other languages
Chinese (zh)
Other versions
CN112115863B (en)
Inventor
向宇涛
贾勇
陈怡良
Current Assignee
Chengdu University of Technology
Original Assignee
Chengdu University of Technology
Priority date
Filing date
Publication date
Application filed by Chengdu University of Technology
Priority claimed from CN202010986634.4A
Publication of CN112115863A
Application granted
Publication of CN112115863B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Abstract

The invention discloses a human body action reconstruction method and system based on a Doppler radar time-frequency image sequence and a cross convolution neural network. First, the initial phase of the human body action is obtained through a Kinect camera; the output of a motion estimation neural network is then combined with the operation result of a real-image neural network to obtain the increment of the human body target per unit time length, and the action coordinates of the human body target are calculated from the initial human body action and these increments. The neural network in the method is trained on the newly added motion information of the moving target together with the Doppler radar image features; the resulting network is easy to operate, and the inputs of the motion estimation network and the real-image network, as well as the output of the whole network, can be modified for the actual application environment. The prediction performance of the neural network therefore generalizes better across a series of actual human motions than the classify-first-then-reconstruct techniques used in traditional work.

Description

Human body action reconstruction method and system based on Doppler radar time-frequency image sequence and cross convolution neural network
Technical Field
The invention relates to the technical field of reconstruction of radar image target action sequences and image processing based on deep learning, in particular to a method and a system for reconstructing human body actions based on a Doppler radar time-frequency image sequence and a cross convolution neural network.
Background
Human motion detection is a popular technical field with extensive research in human-computer interaction, biomedicine and computational behavioural science. In traditional motion detection, recognizing and detecting human body motions with pressure sensors, Kinect depth cameras and wearable devices are the most classical and effective techniques, the Kinect camera in particular being widely used in human-computer interaction. In emerging work, radar imaging techniques are used to recognize human motion and gesture types, including research that recognizes human motion classes from Doppler radar images.
The Doppler radar is based on the Doppler effect: it transmits a radar beam of a certain frequency, receives the echo returned by the moving human body, and computes the velocity of the target human body from the Doppler shift of that echo. Compared with other radars, the Doppler radar has the advantage of a longer operating range; the movement of any part of the human body produces a Doppler frequency that the radar can capture, so a computer can estimate the relative speed of the human motion from the frequency difference of the returned echo. By directly observing the processed Doppler radar time-frequency spectrogram, the computer can judge the approximate movement speed of the human target from the Doppler frequency of the radar echo, and can estimate the swing amplitude and frequency of the target's limbs from the size and distribution of the envelope in the constructed Doppler radar image.
In the traditional technical literature there is a great deal of research on processing the Doppler radar time-frequency spectrogram to classify the type of target movement. Research on reconstructing the target's movement pattern from the time-frequency spectrogram, however, is still at a preliminary stage: the main difficulty is that the features of the spectrogram are not obvious, so it is hard to reconstruct human actions directly from the radar spectrogram. Some results classify the estimated motion type in advance and then reconstruct the target motion, but this approach struggles with complicated motions, and across a series of different motions such methods often fail to classify the motion type correctly and therefore cannot reconstruct the target motion correctly.
Disclosure of Invention
In view of the above, the present invention provides a method and a system for reconstructing human body actions based on a Doppler radar time-frequency image sequence and a cross convolution neural network. The method captures data with Doppler radar and Kinect devices, uses that data to approximately reconstruct human actions, and exploits the spatial and temporal continuity of human actions in the reconstruction.
In order to achieve the purpose, the invention provides the following technical scheme:
the invention provides a human body action reconstruction method based on a Doppler radar time-frequency image sequence and a cross convolution neural network, which comprises the following steps of:
acquiring Doppler radar data of human body movement; acquiring human body target motion data of human body motion;
performing time-frequency analysis processing on the Doppler radar data to obtain a corresponding time-frequency spectrogram; dividing a time-frequency spectrogram of the Doppler radar data into a plurality of sections of sub-Doppler radar data;
inputting the data of each sub Doppler radar to a real image neural network;
dividing the human body target motion data into a plurality of sections of sub human body target motion data; obtaining the motion increment of each limb of the human body through the sub human body target motion data, and manufacturing a tag set of the motion increment;
inputting the motion data of each sub-human body target into a motion estimation neural network for network training processing;
comparing the result output by the motion estimation neural network with the operation result of the real image neural network to obtain the increment value of the human body target in unit time length;
and calculating to obtain the action coordinate of the human body target according to the initial human body action and the incremental value of the human body target.
Further, the human body target motion data of the human body motion is Kinect human body motion data acquired through a Kinect camera, and the human body target motion data comprises motion data of human body joints.
Further, the time-frequency spectrogram is obtained according to the following steps:
synthesizing Doppler radar data;
using short-time Fourier transform on the synthesized Doppler radar data;
and windowing the processed data to obtain a time-frequency spectrogram.
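The three steps above (synthesis, short-time Fourier transform, windowing) can be sketched in plain Python. This is an illustrative, framework-free sketch rather than the patent's implementation; the frame length, hop size and sampling rate are assumed values:

```python
import cmath
import math

def stft_spectrogram(signal, frame_len=64, hop=32):
    """Hann-windowed short-time Fourier transform magnitudes (time-frequency spectrogram)."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1)) for n in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + n] * window[n] for n in range(frame_len)]
        # Naive DFT of the windowed frame; keep magnitudes of the positive-frequency bins.
        spectrum = []
        for k in range(frame_len // 2):
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                    for n in range(frame_len))
            spectrum.append(abs(s))
        frames.append(spectrum)
    return frames  # frames[t][k]: magnitude at time step t, frequency bin k

# A 50 Hz tone sampled at 1 kHz concentrates energy near bin 50 * 64 / 1000 = 3.2.
fs = 1000.0
sig = [math.sin(2 * math.pi * 50.0 * n / fs) for n in range(512)]
spec = stft_spectrogram(sig)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

In practice a library STFT with an overlapping Hann window would replace the naive DFT; the point here is only the frame-window-transform pipeline that yields the time-frequency spectrogram.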
Further, the sub-Doppler radar data is of equal time length, as is the sub-human-target motion data, and both are segmented according to the following step: setting a time length t_p and segmenting the human body target motion data and the Doppler radar data according to t_p.
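The equal-duration segmentation of the two synchronized streams can be sketched as follows; the 1 kHz radar sample rate and 30 fps Kinect frame rate are hypothetical values, not taken from the patent:

```python
def segment_streams(radar_samples, kinect_frames, t_p, radar_rate, kinect_rate):
    """Cut two synchronized streams into sub-segments of equal duration t_p (seconds).

    radar_rate / kinect_rate are samples (frames) per second; trailing samples
    that do not fill a whole segment are discarded so every segment pair stays
    time-aligned.
    """
    r_len = int(t_p * radar_rate)
    k_len = int(t_p * kinect_rate)
    n = min(len(radar_samples) // r_len, len(kinect_frames) // k_len)
    radar_segs = [radar_samples[i * r_len:(i + 1) * r_len] for i in range(n)]
    kinect_segs = [kinect_frames[i * k_len:(i + 1) * k_len] for i in range(n)]
    return radar_segs, kinect_segs

radar = list(range(1000))   # e.g. 1 s of radar data at 1 kHz
kinect = list(range(30))    # e.g. 1 s of Kinect data at 30 fps
r_segs, k_segs = segment_streams(radar, kinect, t_p=0.2, radar_rate=1000, kinect_rate=30)
```

With t_p = 0.2 s this yields five aligned segment pairs of 200 radar samples and 6 Kinect frames each.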
Furthermore, the data of each sub-Doppler radar is converted into RGB three channels and then input into an input layer of a real image neural network.
Further, the motion estimation neural network adopts a cross convolution neural network, and the cross convolution neural network performs network training processing according to the following mode:
setting a cross convolution neural network; the middle layer of the cross convolution neural network adopts a Sigmoid function as an activation function, a convolution layer and a pooling layer of the cross convolution neural network are arranged, and a convolution kernel of the convolution layer and a pooling kernel of the pooling layer both move in equal step length;
inputting the motion increment of each limb of the human body into an input layer;
calculating the mean square error of the actual output value of the human motion increment and the neural network;
training by taking the mean square error as a loss function, and updating the weight of the cross convolution neural network;
and the cross convolution neural network outputs an increment value of unit time length.
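The equal-step-length movement of the convolution and pooling kernels and the Sigmoid middle-layer activation can be illustrated with a minimal one-dimensional sketch; the kernel values and input are arbitrary, and a real implementation would use a deep-learning framework:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def conv1d_stride2(x, kernel):
    """Valid 1-D convolution whose kernel moves with a fixed step length of 2."""
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k))
            for i in range(0, len(x) - k + 1, 2)]

def maxpool_stride2(x, size=2):
    """Max pooling whose pooling kernel also moves with a step length of 2."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, 2)]

# One convolution layer -> Sigmoid activation -> one pooling layer, as in the
# middle layers; the final output layer would apply no activation so that the
# network can emit arbitrary increment values.
x = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
features = [sigmoid(v) for v in conv1d_stride2(x, kernel=[0.5, -0.5, 0.5])]
pooled = maxpool_stride2(features)
```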
Further, after the network training process is completed, the motion estimation neural network calculates the action coordinates of the human body target according to the following steps:
combining the motion increment output after the motion estimation neural network processing and the human body target motion data according to an output sequence to obtain combined data of each joint;
and comparing and verifying with the human body target motion data.
Further, the combined data of each joint is obtained by iterating the per-joint update (the eight per-joint formulas are rendered as images in the source):
Loc_i = Loc_(i-1) + ΔX_i
wherein v_pred is the predicted speed output by the neural network, A is the maximum amplitude of each part's rotation, φ is the rotation angle of each part, ΔX is the total increment value the neural network needs to predict, Loc is the original coordinate of each part, Loc_i is the coordinate of each part obtained after the i-th iteration, and L is the length of the specific limb.
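The iteration described above, starting from the initial action and accumulating the increment predicted for each unit time length, can be sketched as follows; the joint name, walking speed and time step are hypothetical values:

```python
def reconstruct_trajectory(initial_coords, increments):
    """Accumulate predicted per-unit-time increments onto initial joint coordinates.

    initial_coords: {joint: (x, y, z)} from the initial Kinect pose.
    increments: list of {joint: (dx, dy, dz)} dicts, one per unit time length.
    Returns the coordinate sequence Loc_0, Loc_1, ..., Loc_N.
    """
    traj = [dict(initial_coords)]
    for inc in increments:
        prev = traj[-1]
        traj.append({j: tuple(p + d for p, d in zip(prev[j], inc[j])) for j in prev})
    return traj

v, t_p = 1.2, 0.1                       # hypothetical walking speed (m/s) and unit time (s)
init = {"head": (3.0, 0.0, 1.7)}        # hypothetical initial head position
incs = [{"head": (-v * t_p, 0.0, 0.0)} for _ in range(5)]
traj = reconstruct_trajectory(init, incs)
```

After five unit time lengths the head has advanced 5 * v * t_p = 0.6 m along the X axis toward the radar, while the y and z coordinates are unchanged.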
The invention also provides a human body action reconstruction system based on the Doppler radar time-frequency image sequence and the cross convolution neural network, which comprises a Doppler radar data acquisition device, a Kinect data acquisition device, a data preprocessing unit, a data dividing unit, a motion increment generating unit, a cross convolution neural network training unit and a cross convolution neural network testing unit;
the Doppler radar data acquisition device is used for acquiring Doppler radar data;
the Kinect data acquisition device is used for acquiring coordinate data of the human body joint;
the data preprocessing unit is used for performing time-frequency analysis processing on the Doppler radar data to obtain a corresponding time-frequency spectrogram;
the data dividing unit is used for dividing a time-frequency spectrogram of the Doppler radar data into a plurality of sections of sub-Doppler radar data and dividing the human body joint coordinate data into a plurality of sections of sub-human body joint coordinate data;
the motion increment generating unit is used for calculating the increment of the corresponding human body part according to the coordinate and the motion speed of different human body parts relative to the original human body;
the cross convolution neural network training unit is used for training according to the coordinate data of the human joint and obtaining network weight and a data label set;
and the cross convolution neural network test unit is used for calculating the human body joint coordinates according to the trained neural network and the sub Doppler radar data.
Further, the weights in the cross convolution neural network training unit are updated according to the following steps:
comparing the operation result of each section of data from the neural network with the data obtained by actual Kinect sampling, and using the comparison as the loss source of the weights to train the neural network, the output of the neural network at this moment being denoted y_i;
the mean square error between the predicted output of the neural network and the label is used as the loss, as follows:
Loss = E[(y_i - K_i)^2]
wherein K_i is the processed Kinect data, i.e. the human motion increment in a unit time length;
Loss represents the loss between the actual label set and the neural network output value;
E[·] represents the averaging function;
y_i represents the output value of the neural network at the i-th moment;
K_i represents the Kinect label set at the i-th moment;
after the loss is obtained, the method denotes the weights of the neural network as W_ij, and the gradient weight update formula of the neural network is:
W_ij* = W_ij - learning_rate · ∂Loss/∂W_ij
wherein learning_rate is the learning rate preset by the neural network;
W_ij* is the updated weight of the neural network;
W_ij represents the weight to be updated by the neural network.
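The loss Loss = E[(y_i - K_i)^2] and the gradient update W* = W - learning_rate · ∂Loss/∂W can be checked with a scalar toy example; the one-weight linear model and the data below are illustrative only, not the patent's network:

```python
def mse_loss(y, k):
    """Loss = E[(y_i - K_i)^2]: mean square error between outputs and Kinect labels."""
    return sum((yi - ki) ** 2 for yi, ki in zip(y, k)) / len(y)

def gradient_step(w, grad, learning_rate):
    """W_ij* = W_ij - learning_rate * dLoss/dW_ij, applied element-wise."""
    return [wi - learning_rate * gi for wi, gi in zip(w, grad)]

# Toy one-weight "network" y_i = w * x_i, so the gradient has a closed form:
# dLoss/dw = E[2 * (w * x_i - K_i) * x_i]
x = [1.0, 2.0, 3.0]
labels = [2.0, 4.0, 6.0]      # consistent with the true weight w = 2
w = [0.0]
for _ in range(200):
    grad = sum(2 * (w[0] * xi - ki) * xi for xi, ki in zip(x, labels)) / len(x)
    w = gradient_step(w, [grad], learning_rate=0.05)
```

Repeated gradient steps drive the weight to the label-consistent value w = 2 and the mean square error toward zero.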
The invention has the beneficial effects that:
the invention provides a method and a system for reconstructing human body actions based on a Doppler radar time-frequency image sequence and a convolutional neural network.
On the hardware side the method uses Doppler radar equipment and a Kinect depth camera. The Doppler radar data is first processed by the time-frequency analysis method of short-time Fourier transform with windowing to obtain a time-frequency spectrogram, which is further converted into an RGB three-channel image. The Kinect data and the time-frequency spectrogram data are divided into segments of equal duration, and the increment of the human target's motion per unit time is calculated to serve as the label set of the neural network; the divided time-frequency spectrogram serves as the input of the real-image neural network, and the motion increment label set is input to the motion estimation network. The neural network can therefore be trained once a sufficient data set has been collected. Compared with other neural networks, the network used in the method is simple, easy to use and convenient to modify: only the network parameters for input and output need to be changed. The network does not need to predict the motion type of the human target, and its final output is the motion increment per unit time length, so its prediction performance generalizes better across a series of actual human motions than the classify-first-then-reconstruct techniques used in traditional work.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
In order to make the object, technical scheme and beneficial effects of the invention clearer, the invention provides the following drawings for explanation:
fig. 1 is an instrument and scene set-up used in the present invention.
Fig. 2 is a flow chart of the operation of the apparatus used in the present invention.
FIG. 3 is a schematic diagram of a scene in which the Doppler radar and the Kinect acquire human motion data according to the present invention.
Fig. 4 is a block diagram of a neural network used in the present invention, in which a motion estimation neural network is located at the upper side and a real image neural network is located at the lower side.
Detailed Description
The present invention is further described with reference to the following drawings and specific examples so that those skilled in the art can better understand the present invention and can practice the present invention, but the examples are not intended to limit the present invention.
Example 1
As shown in fig. 2, the method for reconstructing a human body motion based on a doppler radar time-frequency image sequence and a cross convolution neural network provided in this embodiment specifically includes the following steps:
Step 1: acquiring human body motion data with a camera, the data comprising joint coordinate data of the human body motion, and synchronously acquiring Doppler radar data of the human body motion with a Doppler radar; the camera in this embodiment is a Kinect camera, through which Kinect human motion data is acquired;
Step 2: performing time-frequency analysis processing on the obtained Doppler radar data to obtain a corresponding time-frequency spectrogram;
Step 3: dividing the acquired Doppler radar data into a plurality of sections of equal-duration sub-data;
Step 4: analyzing the Kinect human motion data to obtain the variation of each joint's geometric parameters;
Step 5: sorting the data to obtain the motion increment data of the human body target, inputting the motion increment data into the motion estimation network of the neural network for training, and inputting the target data acquired by the Doppler radar into the real-image network for training;
the neural network provided by the embodiment needs to obtain the initial state of human body action, and takes each sub-Doppler radar data and motion increment data as the input of different neural networks, and the trained neural network outputs the motion increment of each human body joint;
the neural network provided by the embodiment adopts a cross convolution neural network.
For the motion estimation network, its structure is similar to an autoencoder, but this part of the neural network does not output an image; it outputs a variable that can be directly multiplied with the image features, and the output motion increment is obtained from this variable, so the weight loss in the actual network is obtained from the mean square difference between the input label set and the output value;
For the real-image network, its essence is to extract the features of the radar image, multiply those features with the output of the motion estimation network, and obtain the increment at the output layer, from which the loss value of the network is further obtained;
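The crossing of the two branches, multiplying the extracted radar-image features by the motion estimation network's output variable, can be sketched as follows; the 2x2 feature map, the kernel values and the summing output layer are arbitrary stand-ins for learned quantities, not the patent's architecture:

```python
def cross_multiply(feature_map, motion_kernel):
    """Element-wise product of radar-image features with the motion estimation
    network's output variable: the crossing point of the two branches."""
    return [[f * m for f, m in zip(frow, mrow)]
            for frow, mrow in zip(feature_map, motion_kernel)]

def output_increment(crossed):
    """Collapse the crossed features into a single motion-increment value
    (a stand-in for the output layer, which applies no activation)."""
    return sum(sum(row) for row in crossed)

features = [[0.2, 0.8], [0.5, 0.1]]   # hypothetical radar time-frequency features
kernel = [[1.0, 0.5], [0.0, 2.0]]     # hypothetical motion-estimation output variable
inc = output_increment(cross_multiply(features, kernel))
```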
Step 6: comparing the operation result of each section of data from the neural network with the actual data obtained by Kinect camera sampling, and using the comparison as the loss source of the weights so as to train the neural network;
Step 7: performing an actual test of the trained neural network with a time-frequency spectrogram;
In order to obtain the optimal human body motion target data and Kinect label set, the method processes the Doppler radar data and the actual data collected by the Kinect camera in step 1 according to the following steps:
Step 8: recording an initial human body action state with the Kinect camera, the initial state being denoted θ_init;
Step 9: simultaneously acquiring the Doppler radar and Kinect data;
In order to obtain a suitable time-frequency spectrogram, in step 2 the method further performs the following processing on the obtained Doppler radar data:
Step 10: adopting IQ path synthesis for the Doppler radar data;
Step 11: processing the synthesized Doppler radar data with a short-time Fourier transform and applying a Hanning window to the result to obtain a suitable Doppler radar time-frequency spectrogram;
After this preprocessing, a synchronized time-frequency spectrogram and Kinect data are obtained.
In order to obtain suitable equal-duration data, in step 3 the Kinect data and the Doppler radar data are segmented as follows:
Step 12: setting a suitable time length t_p and dividing the Kinect and Doppler radar data according to this duration;
The invention here denotes the number of sub-data segments as N, the i-th segment of Kinect data as K_i, and the i-th Doppler radar time-frequency spectrogram as D_i.
At this time, what the Kinect obtains is not the increments of the joint motion parameters but the coordinates of each joint in different time periods; therefore, in step 4, the invention takes human walking as an example and executes the following steps to obtain the increments of the human joint motion parameters.
Step 13: the method adopts a Cartesian coordinate system to describe the positional relation between the human body and the radar/Kinect equipment, with the human body and the equipment located on the X axis;
Step 14: for a human body walking in a straight line at a constant speed v toward the Doppler radar and Kinect equipment, recording the coordinates and motion parameter increments of the different parts of the human body relative to the original body according to the following formulas.
For the original head coordinates (x_head, y_head, z_head) of the human body, the motion relative to the original body coordinate system can be recorded as a target in uniform linear motion along the X axis; in the i-th data segment, the head coordinates are recorded as (x_head - Σ_i v·t_p, y_head, z_head). Thus, in each data segment, the motion parameter increment of the head coordinates in a unit time length is (-v·t_p, 0, 0).
Wherein:
x_head, y_head, z_head are the x-, y- and z-axis coordinates of the tester's head relative to the radar;
v represents the average moving speed of the human body per unit time;
t_p represents one unit time length;
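The head-coordinate bookkeeping above can be sketched directly; the numeric starting position, speed and unit time are hypothetical:

```python
def head_coordinates(x0, y0, z0, v, t_p, n_segments):
    """Head coordinates per data segment for straight-line walking along the X axis:
    in the i-th segment the head is at (x0 - i*v*t_p, y0, z0), so the motion
    parameter increment per unit time length is (-v*t_p, 0, 0)."""
    return [(x0 - i * v * t_p, y0, z0) for i in range(n_segments + 1)]

coords = head_coordinates(x0=4.0, y0=0.0, z0=1.7, v=1.0, t_p=0.5, n_segments=4)
increment = tuple(a - b for a, b in zip(coords[1], coords[0]))
```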
For the original chest coordinates (x_chest, y_chest, z_chest) and original spine coordinates (x_spine, y_spine, z_spine) of the human body, the chest is, relative to the original body coordinate system, a target in uniform linear motion along the X axis, so in the i-th data segment the chest coordinates can be recorded as (x_chest - Σ_i v·t_p, y_chest, z_chest), and the increment in a unit time length is -v·t_p.
Wherein:
x_chest, y_chest, z_chest are the x-, y- and z-axis coordinates of the tester's chest relative to the radar;
x_spine, y_spine, z_spine are the x-, y- and z-axis coordinates of the tester's spine relative to the radar;
For the human body vertebra coordinates, the method approximately records them as an object in uniform linear motion, so the increment in a unit time length can likewise be recorded as (-v·t_p, 0, 0). If accurate calculation is required, the vertebra coordinates also need to include the component of displacement relative to the body coordinates, which is recorded here as:
[vertebra displacement formula rendered as an image in the source]
wherein ΔX_Os is the slight displacement of the spine relative to the human body, and φ_0 is the initial phase parameter of the human vertebra;
For the original shoulder coordinates of the human body, the method divides them into the left shoulder coordinates (x_sl, y_sl, z_sl) and the right shoulder coordinates (x_sr, y_sr, z_sr).
Wherein:
x_sl, y_sl, z_sl are the x-, y- and z-axis coordinates of the tester's left shoulder relative to the radar;
x_sr, y_sr, z_sr are the x-, y- and z-axis coordinates of the tester's right shoulder relative to the radar;
The movement of the human shoulder relative to the original body coordinates can be treated as a cosine function plus the displacement of the body. Taking the left shoulder as an example:
[left-shoulder motion formula rendered as an image in the source]
wherein A_s is the deviation amplitude of the human shoulder, obtainable as an average from the Kinect, and φ_s is the shoulder deviation angle, whose specific value can be calculated by:
[shoulder deviation-angle formulas rendered as images in the source]
wherein φ_s, the deviation angle of the human shoulders, is obtained from the slight angle change of the shoulders in the XY plane, and φ_s0 is the initial angle of the human shoulder. Thus, the shoulder increment in a unit time length is:
[shoulder increment formula rendered as an image in the source]
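Since the per-joint formulas are rendered as images in the source, the cosine-modelled sway increment can only be illustrated generically; the amplitude and per-segment angle changes below are hypothetical:

```python
import math

def cosine_offset_increment(A, phi_prev, phi_curr):
    """Increment contributed by a cosine-modelled limb sway when the deviation
    angle moves from phi_prev to phi_curr within one unit time length."""
    return A * (math.cos(phi_curr) - math.cos(phi_prev))

A_s = 0.05                                    # hypothetical shoulder sway amplitude (m)
angles = [0.0 + 0.1 * i for i in range(4)]    # slight per-segment deviation angles
incs = [cosine_offset_increment(A_s, angles[i], angles[i + 1]) for i in range(3)]
total = sum(incs)                             # telescopes to A_s*(cos(last)-cos(first))
```

Because consecutive increments share their boundary angles, the accumulated sway offset depends only on the first and last deviation angles, which is why per-unit-time increments can be summed safely during reconstruction.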
For the original coordinates of the arm joints of the human body, the method marks the left and right arm seats as (x)arml,yarml,zarml) And (x)armr,yarmr,zarmr)。
Wherein the content of the first and second substances,
xarmlx-axis coordinates representing the left arm of the tester relative to the radar;
yarmly-axis coordinates representing the left arm of the tester relative to the radar;
zarmlz-axis coordinates representing the left arm of the tester relative to the radar;
xarmrx-axis coordinates representing the tester's right arm relative to the radar;
yarmry-axis coordinates representing the tester's right arm relative to the radar;
zarmrpresentation measurementThe z-axis coordinate of the right arm of the test person relative to the radar;
the arm coordinate can be equivalent to the motion of a cosine function relative to the shoulder coordinate of the human body, and can be equivalent to a target which rotates on an XZ plane and adds self displacement relative to the original coordinate of the human body. Taking the left arm as an example, i.e.
Figure BDA0002689471190000095
Figure BDA0002689471190000096
Wherein A isaIs the amplitude of the rotation of the arm joint of the human body, can be obtained by obtaining the average value through Kinect,
Figure BDA0002689471190000097
is the offset angle of the human arm relative to the shoulder, and the specific parameters
Figure BDA0002689471190000098
Can be obtained by the following formula:
Figure BDA0002689471190000099
Figure BDA00026894711900000910
wherein the content of the first and second substances,
Figure BDA00026894711900000911
is the deviation angle of the arm joint of the human body, is obtained by the tiny angle change of the arm joint relative to the shoulder on the XZ plane,
Figure BDA00026894711900000912
is the initial angle of the human arm joint.
In summary, the increment of the arm joint in the unit time length is
Figure BDA00026894711900000913
For the original coordinates of the hands of the human body, the method marks the left and right hand seats as (x)handl,yha,zhandl) And (x)handr,yhandr,zhandr)。
Wherein the content of the first and second substances,
xhandlx-axis coordinates representing the left hand of the test person relative to the radar;
yhandy-axis coordinates representing the left hand of the test person relative to the radar;
zhanz-axis coordinates representing the left hand of the test person relative to the radar;
xhandrx-axis coordinates representing the tester's right hand relative to the radar;
yhandry-axis coordinates representing the right hand of the test person relative to the radar;
zhandrz-axis coordinates representing the right hand of the test person relative to the radar;
since the rotation angle of the hand coordinate relative to the human body is related to the angle of the arm joint when the human body moves, the method calculates the position of the arm joint and the rotation increment of the hand as the position of the hand in order to simplify the calculation. Take the left hand as an example, i.e
Figure BDA0002689471190000101
Where L ishandIs the length from the arm joint to the hand, needs to be measured by the tester in advance,
Figure BDA0002689471190000102
is the relative angle of the hand during motion, and the detailed calculation formula is as follows:
Figure BDA0002689471190000103
Figure BDA0002689471190000104
wherein the content of the first and second substances,
Figure BDA0002689471190000105
is the initial angle of the hand relative to the arm at the time of measurement,
Figure BDA0002689471190000106
is the slight angular offset of the hand relative to the arm joints.
In summary, the increment of the unit time length of the hand relative to the arm is
Figure BDA0002689471190000107
For the coordinates of the legs and the feet, because the increment modes of the motion parameters of the legs and the motion parameters of the feet are similar to the increment modes of the arms and the hands, for the data acquired by the Kinect, the method uses the same mode as the steps 4 and 5 to obtain the parameter increment, and for the leg joints of the human body, the parameter increment is as follows:
the motion parameter increment in a unit time length is
Figure BDA0002689471190000108
wherein A_l is the swing amplitude of the leg joint, which can be obtained by averaging Kinect measurements, and
Figure BDA0002689471190000109
is the angular offset of the leg joint.
The foot motion of the human body is handled in the same way to simplify the amount of computation.
The method uses the motion parameters of the foot relative to the leg joints to calculate the position of the foot, the increment of which in a unit time length is recorded as
Figure BDA00026894711900001010
where L_foot is the leg length of the subject, which can be measured in advance, and
Figure BDA00026894711900001011
is the angular offset of the foot.
After extracting the motion increment of each joint as a tag set, in steps 5 and 6 the method feeds the processed Doppler radar time-frequency spectrogram and the Kinect motion-increment tag set into the specified cross convolution neural network.
In the method, the cross convolution neural network is set up as follows: the number of epochs is 100,000, the batch size is 8, and the global learning rate is set to 10⁻⁵. The detailed settings of the cross convolution neural network are as follows:
step 15: for an input layer of a real image neural network, inputting a time-frequency spectrogram converted into RGB three channels, and for an input layer of a motion estimation neural network, inputting a motion increment captured by Kinect;
Step 16: for each intermediate layer of the neural network, the method adopts the Sigmoid function as the activation function; no activation function is applied to the final output layer, so that the neural network can output the different variables directly. For the convolution and pooling layers, to reduce the amount of data computation, both the convolution kernels and the pooling kernels move with a stride of 2;
Step 17: for the output layer of the neural network, the method takes the human motion increment of each unit-time segment from the Kinect as the label, uses the mean square error between the actual output of the neural network and the label as the loss function for training, and updates the weights of the neural network;
After training, because the output of the neural network is the motion increment per unit time length, in the actual test the output of the neural network must be combined with the initial phase θ_init of the human body motion in output order and compared with the data acquired by the Kinect for verification; therefore, in step 7 the method tests the results according to the following steps.
Step 18: combining the data, the final data for each joint can be obtained by:
Figure BDA0002689471190000111
Figure BDA0002689471190000112
Figure BDA0002689471190000113
Figure BDA0002689471190000114
Figure BDA0002689471190000115
Figure BDA0002689471190000116
Figure BDA0002689471190000117
Figure BDA0002689471190000118
wherein v_pred is the predicted speed output by the neural network, A is the maximum amplitude of each part's rotation,
Figure BDA0002689471190000119
is the angle of rotation of each part, and at the same time,
Figure BDA00026894711900001110
is the total increment value to be predicted by the neural network; Loc is the original coordinate of each part, Loc_i is the coordinate of each part obtained after the ith iteration, and L is the length of the specific limb, where
Figure BDA00026894711900001111
is also the total increment value that the neural network needs to predict.
wherein
Figure BDA00026894711900001112
indicating the shoulder position of the tester at the ith moment;
Figure BDA0002689471190000121
indicating the arm position of the tester at the ith moment;
Figure BDA0002689471190000122
indicating the hand position of the tester at the ith moment;
Figure BDA0002689471190000123
representing the leg position of the test person at the ith moment;
Figure BDA0002689471190000124
indicating the position of the tester's foot at the ith moment;
t_p represents one unit time length;
Loc_s represents the original shoulder position of the tester;
Loc_arm represents the original arm position of the tester;
Loc_hand represents the original hand position of the tester;
Loc_leg represents the original leg position of the tester;
Loc_foot represents the original foot position of the tester;
L_foot represents the length of the tester's lower-leg portion;
L_hand represents the length of the tester's arm portion;
Figure BDA0002689471190000125
represents the cosine swing value of the shoulder at the ith moment;
Figure BDA0002689471190000126
represents the cosine swing value of the arm at the ith moment;
Figure BDA0002689471190000127
represents the cosine swing value of the hand at the ith moment;
Figure BDA0002689471190000128
represents the cosine swing value of the foot at the ith moment;
Figure BDA0002689471190000129
represents the sinusoidal oscillation value of the leg at the ith moment;
A_l^i represents the amplitude value of the leg swing at the ith moment;
A_s^i represents the amplitude value of the shoulder swing at the ith moment;
i denotes the measurement at the ith moment;
Step 19: recombine the human body motion model using the above data and evaluate the final result;
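A minimal one-dimensional sketch of this iterative recombination (hypothetical function name; assumes a cosine-swing joint, as in the shoulder and arm formulas above, with unit time t_p): the predicted per-unit-time increments are replayed on top of the initial phase θ_init to recover the joint trajectory.

```python
import math

def reconstruct_joint(loc0, length, theta_init, deltas, t_p=1.0):
    """Replay predicted per-unit-time angular increments on top of the
    initial phase theta_init:
    Loc_i = Loc + L * cos(theta_init + sum of increments up to step i)."""
    theta = theta_init
    trajectory = []
    for d in deltas:
        theta += d * t_p               # accumulate the predicted increment
        trajectory.append(loc0 + length * math.cos(theta))
    return trajectory
```

This mirrors the update of step 18: without θ_init the increments alone would reconstruct the motion with an arbitrary phase shift, which is why the initial phase must be recorded during acquisition.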
example 2
The embodiment provides an example of human target walking to explain the specific steps of the method in detail:
as shown in fig. 1, a human target needs to stand at a position about 3 to 6 meters away from the Kinect camera, because the effective identification distance of the Kinect camera is about 4 meters, the position can be accurately acquired by the Kinect camera and the doppler radar at the same time, and after the target is prepared, the following steps can be performed:
Step 1: the Kinect camera and the Doppler radar collect the joint coordinate data and the Doppler radar data of the human body over the same time period. To eliminate errors caused by the human body's position producing different offset increments in the tag set, the Kinect camera direction and the tester must be kept aligned along the X-axis in this step. At this time, the method also records the initial phase θ_init of each limb of the human body motion captured by the Kinect, so that the required motion increments can be calculated from the data collected by the Kinect in subsequent measurements.
Meanwhile, in order to make the types of data diverse, in an actual measurement scenario, the method requires a tester to perform different gestures in one action type, for example: in the case of human target walking, the amplitude of arm swing or the presence or absence of swing, the change in human walking speed, and the presence or absence of suspended walking are variables that should be considered in actual measurement.
Step 2: for the acquired Doppler radar data, the method first adopts IQ (in-phase/quadrature) path synthesis to combine the radar data, as follows:
The I and Q path data acquired by the radar are recorded as data_i and data_q respectively; the IQ-synthesized radar data is then recorded as:
data_IQ = data_i + i × data_q
wherein data_IQ represents the synthesized Doppler radar data;
data_i represents the in-phase path data; data_q represents the quadrature path data; i represents the imaginary unit;
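In code form the IQ synthesis is a single complex combination; the sample values below are hypothetical placeholders for the radar's two channels:

```python
import numpy as np

# Hypothetical in-phase and quadrature samples from the two radar channels
data_i = np.array([1.0, 0.0, -1.0, 0.0])
data_q = np.array([0.0, 1.0, 0.0, -1.0])

# data_IQ = data_i + i * data_q
data_iq = data_i + 1j * data_q
```

The resulting complex sequence preserves the sign of the Doppler shift, which is what makes the subsequent time-frequency analysis directional.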
after Doppler radar data are processed, performing time-frequency analysis by adopting short-time Fourier transform, wherein the calculation method of the short-time Fourier transform comprises the following steps:
Figure BDA0002689471190000131
wherein STFT(t, f) represents the short-time Fourier transform result at time t and frequency f;
x (τ) is the original signal of the input;
g (τ -t) is the Hanning window;
t represents a time portion;
f represents a frequency component;
τ represents a time variable symbol;
from here, the method completes the processing of the doppler radar data.
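A minimal discrete version of this short-time Fourier transform (a sketch with assumed window length and hop size, not the patent's exact parameters) slides a Hanning window g over the signal and takes an FFT of each frame:

```python
import numpy as np

def stft(x, win_len, hop):
    """Discrete STFT: for each window position t, multiply the signal by
    a Hanning window g(tau - t) and take the Fourier transform."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        frames.append(np.fft.fft(x[start:start + win_len] * window))
    return np.array(frames)            # shape: (num_frames, win_len)
```

Applied to the synthesized complex data_IQ, the magnitude of this output is the time-frequency spectrogram used in the following steps.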
Step 3: divide the acquired data into several sections of equal-duration sub-data. If the data are divided into N equal-duration sections, then for radar data and Kinect data of total duration T, the duration of each section of data is
t = T / N
If the data are divided into equal time periods t, truncation of the data must be considered in this case. For example:
data* = data − mod(T, t)
This expression indicates that if the collected data cannot be divided exactly, the redundant trailing portion is cut off;
wherein
the data represents original data;
data* represents the original data after truncation;
mod represents the remainder function.
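The truncation-and-split step can be sketched as follows (hypothetical helper name; it operates on sample counts rather than seconds, which is equivalent for a fixed sampling rate):

```python
import numpy as np

def split_isochronous(data, n_segments):
    """Cut the trailing remainder (data* = data - mod(T, t)) so the data
    divides exactly, then split it into N equal-duration segments."""
    data = np.asarray(data)
    kept = len(data) - len(data) % n_segments
    return np.split(data[:kept], n_segments)
```

For example, 10 samples split into 3 segments keeps the first 9 samples and yields three segments of 3 samples each.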
The acquisition system is shown in fig. 3. Step 4: since the motion increment of each joint is obtained by analyzing the Kinect data acquired above, and the Kinect data are essentially the spatial coordinates of each limb of the human body, the motion increments of each limb of the human body must be measured according to the detailed method in step 14 and added to a predefined list as the label source for the neural network output.
Step 5: the sorted data are input into the cross convolution neural network for training. The neural network needs to obtain the initial state of the human body action; each piece of sub-Doppler radar data is input into the real image neural network, and each piece of sub-Kinect data serves as the input of the motion estimation neural network. The method assumes that sufficient processed Kinect data K_i and processed Doppler radar data D_i have been obtained.
In the method, the resolution of the Doppler radar time-frequency spectrogram D is 256 × 256 × 3, and the input size accepted by the real image neural network is 32 × 256 × 3. Therefore, in actual training, the Doppler radar time-frequency spectrogram data must be equally divided into 8 parts, and the Kinect data K input to the motion estimation network must also be equally divided into 8 parts, where for each piece of Kinect data K_i the human motion increment must be acquired according to the method in step 14 and simultaneously used as a label.
As shown in fig. 4, fig. 4 shows the neural network used in the present invention. In depth, the method sets the number of first-layer convolution kernels of the real image neural network to 64, and the output of this layer passes through Sigmoid activation, so the size of each piece of data after the first-layer operation is 16 × 128 × 64. Similarly, a maximum pooling operation is then performed, giving an output of size 8 × 64 × 64 per piece of data.
In the second convolution layer of the neural network, the depth of the convolution kernels is set to 128 and the output still passes through Sigmoid activation, giving data of size 4 × 32 × 128; after the maximum pooling operation, the data size is 2 × 16 × 128, which can then be flattened into the fully connected layer.
At this point, for the motion estimation neural network, the input layer size is 8 × 11; this input dimension is clearly small compared with the hidden layer size, so the method performs a deconvolution operation here to raise the dimension to a suitable size. Likewise, to reduce the amount of data computation, convolution kernels with the same parameters as the real image neural network are adopted, and the motion estimation network outputs a result of size 8 × hidden, where the variable hidden is the hidden-layer size preset by the method.
In the subsequent output part of the cross convolution neural network, the hidden-layer size is preset to 1024, and the fully connected layer is constructed by multiplying the pooled output of the real image network with the output of the motion estimation network. The final output layer is constructed accordingly, with size 8 × 11, i.e., the size of the human joint motion increments for each time period.
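The tensor sizes quoted above follow from repeated stride-2 halving of each spatial dimension; the arithmetic can be checked with a short sketch ('same' padding is assumed here, which matches the stated sizes):

```python
def halve(size, stride=2):
    """Spatial size after one stride-2 convolution or pooling ('same' padding)."""
    return (size + stride - 1) // stride

# Real-image branch: one 32 x 256 spectrogram slice passes through
# conv(stride 2) -> pool(stride 2) -> conv(stride 2) -> pool(stride 2)
h, w = 32, 256
sizes = []
for _ in range(4):
    h, w = halve(h), halve(w)
    sizes.append((h, w))
# sizes: (16, 128) -> (8, 64) -> (4, 32) -> (2, 16), matching the
# 16x128x64, 8x64x64, 4x32x128 and 2x16x128 tensors in the text
```

The final 2 × 16 × 128 tensor flattens to 4096 values before entering the fully connected layer.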
As shown in fig. 2, fig. 2 shows the workflow of the neural network in the method. Step 6: the operation result of each section of data from the neural network is compared with the data obtained by actual Kinect sampling and used as the loss source for the weights to train the neural network; the output of the neural network at this point is denoted y_i.
In order to obtain the loss of each weight, the method directly uses the mean square error of the predicted output and the label of the neural network as the loss, and the method is written as follows:
Loss = E[(y_i − K_i)²]
wherein K_i is the processed Kinect data, i.e., the human motion increment per unit time length;
Loss represents the loss between the actual label set and the neural network output value;
E[·] represents the averaging function;
y_i represents the output value of the neural network at the ith moment;
K_i represents the Kinect tag set at the ith moment;
After the loss is obtained, the method records the weights of the neural network as W_ij; the gradient weight update formula of the neural network is then:
Figure BDA0002689471190000151
wherein learning_rate is the learning rate preset for the neural network;
W_ij* is the updated weight of the neural network;
W_ij represents the weight of the neural network to be updated;
Training can begin once sufficient data covering a sufficient number of motion types has been collected. Here, the learning rate employed by the method is 10⁻⁵ and the number of training iterations is 100,000; this allows the characteristics of the Doppler radar images to be fully learned while avoiding the problem of the loss value lingering and being difficult to reduce because the learning rate is too low.
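Both the loss and the weight update above reduce to one-line operations; a minimal numpy sketch (hypothetical function names) illustrates them:

```python
import numpy as np

def mse_loss(y, k):
    """Loss = E[(y_i - K_i)^2] between network outputs and Kinect labels."""
    return np.mean((np.asarray(y) - np.asarray(k)) ** 2)

def sgd_step(w, grad, learning_rate=1e-5):
    """W_ij* = W_ij - learning_rate * dLoss/dW_ij (plain gradient descent)."""
    return w - learning_rate * grad
```

In a full training loop the gradient dLoss/dW_ij would come from backpropagation through the cross convolution network; only the outer update rule is shown here.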
Step 7: in actual tests, since the neural network outputs motion increments per unit time length, the initial phase θ_init of the human motion must be combined with the magnitude of the output increments to determine the actual performance. Specifically, the position of each limb is updated according to the method of step 18 to obtain the reconstructed human body motion.
Example 3
The embodiment also provides a human body action reconstruction system based on the Doppler radar time-frequency image sequence and the cross convolution neural network, which comprises a Doppler radar data acquisition device, a Kinect data acquisition device, a data preprocessing unit, a data dividing unit, a motion increment generating unit, a cross convolution neural network training unit and a cross convolution neural network testing unit;
the Doppler radar data acquisition device is used for acquiring Doppler radar data;
the Kinect data acquisition device is used for acquiring coordinate data of the human body joint;
the data preprocessing unit is used for performing time-frequency analysis processing on the Doppler radar data to obtain a corresponding time-frequency spectrogram;
the data dividing unit is used for dividing a time-frequency spectrogram of the Doppler radar data into a plurality of sections of sub-Doppler radar data and dividing the human body joint coordinate data into a plurality of sections of sub-human body joint coordinate data;
the motion increment generating unit is used for calculating the increment of the corresponding human body part according to the coordinate and the motion speed of different human body parts relative to the original human body;
the cross convolution neural network training unit is used for training according to the coordinate data of the human joint and obtaining network weight and a data label set;
and the cross convolution neural network test unit is used for calculating the human body joint coordinates according to the trained neural network and the sub Doppler radar data.
The weights in the cross convolution neural network training unit are updated according to the following steps:
comparing the operation result of each section of data from the neural network with the data obtained by actual Kinect sampling, and using it as the loss source for the weights to train the neural network, the output of the neural network at this point being denoted y_i;
The mean square error of the predicted output of the neural network and the label is used as a penalty, as follows:
Loss=E[(yi-Ki)2]
wherein K_i is the processed Kinect data, i.e., the human motion increment per unit time length;
Loss represents the loss between the actual label set and the neural network output value;
E[·] represents the averaging function;
y_i represents the output value of the neural network at the ith moment;
K_i represents the Kinect tag set at the ith moment;
After the loss is obtained, the method records the weights of the neural network as W_ij; the gradient weight update formula of the neural network is then:
Figure BDA0002689471190000161
wherein learning_rate is the learning rate preset for the neural network;
W_ij* is the updated weight of the neural network;
W_ij represents the weight of the neural network to be updated.
The above-mentioned embodiments are merely preferred embodiments for fully illustrating the present invention, and the scope of the present invention is not limited thereto. Equivalent substitutions or changes made by those skilled in the art on the basis of the invention all fall within the protection scope of the invention. The protection scope of the invention is subject to the claims.

Claims (10)

1. A human body action reconstruction method based on a Doppler radar time-frequency image sequence and a cross convolution neural network is characterized by comprising the following steps: the method comprises the following steps:
acquiring Doppler radar data of human body movement; acquiring human body target motion data of human body motion;
performing time-frequency analysis processing on the Doppler radar data to obtain a corresponding time-frequency spectrogram; dividing a time-frequency spectrogram of the Doppler radar data into a plurality of sections of sub-Doppler radar data;
inputting the data of each sub Doppler radar to a real image neural network;
dividing the human body target motion data into a plurality of sections of sub human body target motion data; obtaining the motion increment of each limb of the human body through the sub human body target motion data, and manufacturing a tag set of the motion increment;
inputting the motion data of each sub-human body target into a motion estimation neural network for network training processing;
comparing the result output by the motion estimation neural network with the operation result of the real image neural network to obtain the increment value of the human body target in unit time length;
and calculating to obtain the action coordinate of the human body target according to the initial human body action and the incremental value of the human body target.
2. The method of claim 1, wherein: the human body target motion data of the human body motion are Kinect human body motion data acquired through a Kinect camera, and the human body target motion data comprise motion data of human body joints.
3. The method of claim 1, wherein: the time-frequency spectrogram is obtained according to the following steps:
synthesizing Doppler radar data;
using short-time Fourier transform on the synthesized Doppler radar data;
and windowing the processed data to obtain a time-frequency spectrogram.
4. The method of claim 1, wherein: the sub Doppler radar data are equal-duration sub Doppler radar data and the sub human body target motion data are equal-duration sub human body target motion data, segmented according to the following steps: setting a time length t_p, and segmenting the human body target motion data and the Doppler radar data according to the time length t_p.
5. The method of claim 1, wherein: and converting the data of each sub Doppler radar into RGB three channels and inputting the data into an input layer of a real image neural network.
6. The method of claim 1, wherein: the motion estimation neural network adopts a cross convolution neural network, and the cross convolution neural network carries out network training processing according to the following modes:
setting a cross convolution neural network; the middle layer of the cross convolution neural network adopts a Sigmoid function as an activation function, a convolution layer and a pooling layer of the cross convolution neural network are arranged, and a convolution kernel of the convolution layer and a pooling kernel of the pooling layer both move in equal step length;
inputting the motion increment of each limb of the human body into an input layer;
calculating the mean square error of the actual output value of the human motion increment and the neural network;
training by taking the mean square error as a loss function, and updating the weight of the cross convolution neural network;
and the cross convolution neural network outputs an increment value of unit time length.
7. The method of claim 1, wherein: after the network training processing of the motion estimation neural network is completed, the motion coordinates of the human body target are calculated according to the following steps:
combining the motion increment output after the motion estimation neural network processing and the human body target motion data according to an output sequence to obtain combined data of each joint;
and comparing and verifying with the human body target motion data.
8. The method of claim 7, wherein: the combined data is obtained according to the following formula:
Figure FDA0002689471180000021
Figure FDA0002689471180000022
Figure FDA0002689471180000023
Figure FDA0002689471180000024
Figure FDA0002689471180000025
Figure FDA0002689471180000026
Figure FDA0002689471180000027
Figure FDA0002689471180000028
wherein v_pred is the predicted speed output by the neural network, A is the maximum amplitude of each part's rotation,
Figure FDA0002689471180000029
is the angle of rotation of each part,
Figure FDA00026894711800000210
is the total increment value to be predicted by the neural network; Loc is the original coordinate of each part, Loc_i is the coordinate of each part obtained after the ith iteration, and L is the length of the specific limb, where
Figure FDA00026894711800000211
Is the total delta value that the neural network needs to predict.
9. A human body action reconstruction system based on a Doppler radar time-frequency image sequence and a cross convolution neural network is characterized in that: the device comprises a Doppler radar data acquisition device, a Kinect data acquisition device, a data preprocessing unit, a data dividing unit, a motion increment generating unit, a cross convolution neural network training unit and a cross convolution neural network testing unit;
the Doppler radar data acquisition device is used for acquiring Doppler radar data;
the Kinect data acquisition device is used for acquiring coordinate data of the human body joint;
the data preprocessing unit is used for performing time-frequency analysis processing on the Doppler radar data to obtain a corresponding time-frequency spectrogram;
the data dividing unit is used for dividing a time-frequency spectrogram of the Doppler radar data into a plurality of sections of sub-Doppler radar data and dividing the human body joint coordinate data into a plurality of sections of sub-human body joint coordinate data;
the motion increment generating unit is used for calculating the increment of the corresponding human body part according to the coordinate and the motion speed of different human body parts relative to the original human body;
the cross convolution neural network training unit is used for training according to the coordinate data of the human joint and obtaining network weight and a data label set;
and the cross convolution neural network test unit is used for calculating the human body joint coordinates according to the trained neural network and the sub Doppler radar data.
10. The system of claim 9, wherein: the weights in the cross convolution neural network training unit are updated according to the following steps:
comparing the operation result of each section of data from the neural network with the data obtained by actual Kinect sampling, and using it as the loss source for the weights to train the neural network, the output of the neural network at this point being denoted y_i;
The mean square error of the predicted output of the neural network and the label is used as a penalty, as follows:
Loss=E[(yi-Ki)2]
wherein K_i is the processed Kinect data, i.e., the human motion increment per unit time length;
Loss represents the loss between the actual label set and the neural network output value;
E[·] represents the averaging function;
y_i represents the output value of the neural network at the ith moment;
K_i represents the Kinect tag set at the ith moment;
After the loss is obtained, the method records the weights of the neural network as W_ij; the gradient weight update formula of the neural network is then:
Figure FDA0002689471180000031
wherein learning_rate is the learning rate preset for the neural network;
W_ij* is the updated weight of the neural network;
W_ij represents the weight of the neural network to be updated.
CN202010986634.4A 2020-09-18 2020-09-18 Human body action reconstruction method and system based on Doppler radar time-frequency image sequence and cross convolution neural network Active CN112115863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010986634.4A CN112115863B (en) 2020-09-18 2020-09-18 Human body action reconstruction method and system based on Doppler radar time-frequency image sequence and cross convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010986634.4A CN112115863B (en) 2020-09-18 2020-09-18 Human body action reconstruction method and system based on Doppler radar time-frequency image sequence and cross convolution neural network

Publications (2)

Publication Number Publication Date
CN112115863A true CN112115863A (en) 2020-12-22
CN112115863B CN112115863B (en) 2022-10-18

Family

ID=73799813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010986634.4A Active CN112115863B (en) 2020-09-18 2020-09-18 Human body action reconstruction method and system based on Doppler radar time-frequency image sequence and cross convolution neural network

Country Status (1)

Country Link
CN (1) CN112115863B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113985393A (en) * 2021-10-25 2022-01-28 南京慧尔视智能科技有限公司 Target detection method, device and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608053A (en) * 2015-12-28 2016-05-25 大连理工大学 Polynomial phase signal parameter estimating method and system
CN107004139A (en) * 2014-12-01 2017-08-01 欧司朗股份有限公司 By means of the image procossing of cross-correlation
CN108256488A (en) * 2018-01-19 2018-07-06 中国人民解放军陆军装甲兵学院 A kind of radar target identification method based on micro-Doppler feature extraction and deep learning
US20190130566A1 (en) * 2015-04-06 2019-05-02 IDx, LLC Systems and methods for feature detection in retinal images
CN109948532A (en) * 2019-03-19 2019-06-28 桂林电子科技大学 ULTRA-WIDEBAND RADAR human motion recognition method based on depth convolutional neural networks
CN111368930A (en) * 2020-03-09 2020-07-03 成都理工大学 Radar human body posture identification method and system based on multi-class spectrogram fusion and hierarchical learning
CN111523509A (en) * 2020-05-08 2020-08-11 江苏迪赛司自动化工程有限公司 Equipment fault diagnosis and health monitoring method integrating physical and deep expression characteristics

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107004139A (en) * 2014-12-01 2017-08-01 欧司朗股份有限公司 By means of the image procossing of cross-correlation
US20190130566A1 (en) * 2015-04-06 2019-05-02 IDx, LLC Systems and methods for feature detection in retinal images
CN105608053A (en) * 2015-12-28 2016-05-25 大连理工大学 Polynomial phase signal parameter estimating method and system
CN108256488A (en) * 2018-01-19 2018-07-06 中国人民解放军陆军装甲兵学院 A kind of radar target identification method based on micro-Doppler feature extraction and deep learning
CN109948532A (en) * 2019-03-19 2019-06-28 桂林电子科技大学 ULTRA-WIDEBAND RADAR human motion recognition method based on depth convolutional neural networks
CN111368930A (en) * 2020-03-09 2020-07-03 成都理工大学 Radar human body posture identification method and system based on multi-class spectrogram fusion and hierarchical learning
CN111523509A (en) * 2020-05-08 2020-08-11 江苏迪赛司自动化工程有限公司 Equipment fault diagnosis and health monitoring method integrating physical and deep expression characteristics

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐朝阳: "Modeling and Feature Extraction of Moving Human Targets with Millimeter-Wave Radar", China Master's Theses Full-text Database (Information Science and Technology Series) *
蒋留兵: "Radar Human Action Recognition Method Based on Convolutional Neural Networks", Computer Applications and Software *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113985393A (en) * 2021-10-25 2022-01-28 南京慧尔视智能科技有限公司 Target detection method, device and system
CN113985393B (en) * 2021-10-25 2024-04-16 南京慧尔视智能科技有限公司 Target detection method, device and system

Also Published As

Publication number Publication date
CN112115863B (en) 2022-10-18

Similar Documents

Publication Publication Date Title
CN111432733B (en) Apparatus and method for determining motion of an ultrasound probe
CN103340632B (en) Human joint angle measuring method based on feature point space position
CN113543718B (en) Apparatus and method for determining motion of an ultrasound probe including front-to-back directionality
US11826140B2 (en) System and method for human motion detection and tracking
Zerpa et al. The use of microsoft Kinect for human movement analysis
Surer et al. Methods and technologies for gait analysis
CN112115863B (en) Human body action reconstruction method and system based on Doppler radar time-frequency image sequence and cross convolution neural network
Özsoy et al. Reliability and agreement of Azure Kinect and Kinect v2 depth sensors in the shoulder joint range of motion estimation
US20130158403A1 (en) Method for Obtaining a Three-Dimensional Velocity Measurement of a Tissue
Page et al. Optimal average path of the instantaneous helical axis in planar motions with one functional degree of freedom
Barbosa et al. Conception, development and validation of a software interface to assess human’s horizontal intra-cyclic velocity with a mechanical speedo-meter
JP2003265480A (en) Medical exercise analyzing device and method
Crabolu et al. Evaluation of the accuracy in the determination of the center of rotation by magneto-inertial sensors
CN107817492A (en) The imaging method and device of wide angle synthetic aperture radar
Luo et al. Multi-IMU with Online Self-consistency for Freehand 3D Ultrasound Reconstruction
Carletti et al. Analyzing body fat from depth images
Zettinig et al. 3D velocity field and flow profile reconstruction from arbitrarily sampled Doppler ultrasound data
Sun et al. Application of Multi-channel Impedance Measurement Device in Teaching of" Signal Analysis and Processing"
Liu et al. Kinematic Analysis of Intra-Limb Joint Symmetry via Multi-Sensor Fusion
NL2026924B1 (en) Spectro-dynamic magnetic resonance imaging
Van Riel et al. Spectro-dynamic MRI: Characterizing mechanical systems on a millisecond scale
JP2005245476A (en) Method of measuring joint rotation axis and apparatus therefor
de Almeida et al. A Comparison of Time-Delay Estimators for Speckle Tracking Echocardiography
Mohammed et al. Strengths and Weaknesses of 3D Pose Estimation and Inertial Motion Capture System for Movement Therapy
WO2024073418A1 (en) Multi-sensor calibration of portable ultrasound system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant