CN114782987A - Millimeter wave radar attitude identification method based on depth camera supervision - Google Patents

Millimeter wave radar attitude identification method based on depth camera supervision

Info

Publication number
CN114782987A
CN114782987A
Authority
CN
China
Prior art keywords
axis coordinate
knee
data set
learning model
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210314377.9A
Other languages
Chinese (zh)
Other versions
CN114782987B (en)
Inventor
苟先太
周晨晨
魏亚林
黄毅凯
苟瀚文
姚一可
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Bawei Jiuzhang Technology Co ltd
Original Assignee
Sichuan Bawei Jiuzhang Technology Co ltd
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Bawei Jiuzhang Technology Co ltd, Southwest Jiaotong University filed Critical Sichuan Bawei Jiuzhang Technology Co ltd
Priority to CN202210314377.9A priority Critical patent/CN114782987B/en
Publication of CN114782987A publication Critical patent/CN114782987A/en
Application granted granted Critical
Publication of CN114782987B publication Critical patent/CN114782987B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/867 Combination of radar systems with cameras
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a millimeter wave radar attitude identification method based on depth camera supervision. The recognition results produced by a depth camera together with a trained first deep learning model are used as labels for millimeter wave radar data, and the resulting labeled data set is used to train a second deep learning model. After training, a test set can be acquired from the millimeter wave radar in real time to evaluate the second deep learning model, which is considered trained once its accuracy reaches a set threshold. The data acquired by the depth camera does not need to be stored, no manual participation is required throughout, no visual imagery is retained, and the depth camera can be withdrawn once training of the second deep learning model is complete, which effectively resolves user privacy concerns. In addition, the second deep learning model can be flexibly adjusted to a specific environment according to the specific requirements of specific subjects, which addresses the problem of a single application scenario.

Description

Millimeter wave radar attitude identification method based on depth camera supervision
Technical Field
The invention relates to the field of human body posture recognition, in particular to a millimeter wave radar posture recognition method based on depth camera supervision.
Background
With the development of society, people pay increasing attention to physical health, especially that of the elderly. When an elderly person suffers a sudden illness or accident and cannot receive timely intervention and treatment, serious consequences can easily follow. Although the real-time state of the elderly can be monitored with wearable devices or video surveillance equipment, such solutions face problems that are difficult to overcome: contact devices are expensive and inconvenient to wear, and the elderly tend to resist them, while video surveillance raises privacy issues, making these solutions hard to deploy in practice. In addition, different elderly people have different health problems, different health states require different handling of the detection information, and classification and judgment based on a single model leaves traditional non-contact detection schemes with a single application scenario.
Depth cameras are widely used in the field of posture recognition and achieve good recognition results, but they still fail to protect privacy and therefore cannot be used directly in the daily care of the elderly.
Millimeter wave radar is all-weather, non-contact and non-imaging, which makes it well suited to the daily care of the elderly. However, millimeter wave radar is strongly affected by changes in the operating environment, and because it does not produce images, labeling a data set for posture recognition learning is very difficult: the human posture corresponding to the point cloud data is hard to judge by eye, and the learning process consumes a great deal of labour and effort.
Disclosure of Invention
Aiming at the above shortcomings of the prior art, the millimeter wave radar attitude identification method based on depth camera supervision provided here solves two problems: privacy is not protected when posture is recognised and monitored directly with a depth camera, and labeling a data set is difficult in a millimeter wave radar scheme.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
the millimeter wave radar attitude identification method based on depth camera supervision comprises the following steps:
s1, respectively constructing a first deep learning model and a second deep learning model; simultaneously carrying out data acquisition on an acquisition object by a depth camera and a millimeter wave radar; for a first group of collected objects, taking data collected by a depth camera as a first data set, and taking data collected by a millimeter wave radar as a second data set; for a second group of collected objects, taking data collected by the depth camera as a third data set, and taking data collected by the millimeter wave radar as a fourth data set;
s2, performing posture recognition on data at each moment in the first data set through a first deep learning model to obtain a first target posture set; extracting the characteristic parameters of each moment in the second data set to obtain a first characteristic parameter set;
s3, for data at the same moment, performing posture labeling on the first characteristic parameter set by using the first target posture set to obtain data with labels;
s4, training the second deep learning model by taking the data with the labels as a training set to obtain a pre-trained second deep learning model;
s5, performing posture recognition on the data at each moment in the third data set through the first deep learning model to obtain a second target posture set; extracting the characteristic parameter of each moment in the fourth data set to obtain a second characteristic parameter set;
s6, taking the second characteristic parameter set as a test set for the pre-trained second deep learning model, and comparing the output of the pre-trained second deep learning model with the second target posture set to obtain the posture recognition accuracy of the pre-trained second deep learning model;
s7, judging whether the gesture recognition accuracy of the pre-trained second deep learning model reaches a threshold value, if so, taking the current pre-trained second deep learning model as a final gesture recognition model, and entering the step S8; otherwise, modifying the parameters of the pre-trained second deep learning model, and returning to the step S3;
and S8, adopting the millimeter wave radar as a data acquirer of the object to be recognized, adopting the final posture recognition model to perform posture recognition on the data acquired by the millimeter wave radar, and outputting a recognition result.
Further, the specific method for performing gesture recognition on the data at each time in the first data set in step S2 includes the following sub-steps:
s2-1, acquiring position information of key points of human body features for data at each moment in the first data set;
s2-2, calculating human posture characteristic parameters according to the position information of the human characteristic key points;
s2-3, taking the extracted human body posture characteristic parameters as input of the first deep learning model, and obtaining a posture label output by the first deep learning model to obtain a first target posture set.
Further, the key points of the human body characteristics in the step S2-1 include a neck, a right shoulder, a right elbow, a right wrist, a left shoulder, a left elbow, a left wrist, a right hip, a right knee, a right ankle, a left hip, a left knee, and a left ankle.
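For concreteness, a list of these thirteen key points, using the short identifiers assumed by the sketches further below, might look like:

```python
KEYPOINTS = ["neck", "r_shoulder", "r_elbow", "r_wrist", "l_shoulder", "l_elbow",
             "l_wrist", "r_hip", "r_knee", "r_ank", "l_hip", "l_knee", "l_ank"]
```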
Further, the specific method of step S2-2 is:
according to the formula:
[equation image BDA0003568529070000031]
acquiring the height H of the human body; wherein H_top is the distance from the neck to the midpoint of the line connecting the left hip and the right hip; H_bottom is the mean of the left leg length and the right leg length; H_L is the left leg length; H_R is the right leg length; x_neck, y_neck and z_neck are respectively the x-axis, y-axis and z-axis coordinate values of the neck in the three-dimensional coordinate system; x_l-hip, y_l-hip and z_l-hip are the corresponding coordinate values of the left hip; x_r-hip, y_r-hip and z_r-hip those of the right hip; x_l-knee, y_l-knee and z_l-knee those of the left knee; x_r-knee, y_r-knee and z_r-knee those of the right knee; x_l-ank, y_l-ank and z_l-ank those of the left ankle; and x_r-ank, y_r-ank and z_r-ank those of the right ankle;
according to the formula:
H_p = max[(z_neck - z_l-ank), (z_neck - z_r-ank)]
obtaining the maximum value H_p of the height difference between the neck and the left ankle and the height difference between the neck and the right ankle, and taking it as the current height; wherein max[·] is the maximum-value function;
according to the formula:
[equation image BDA0003568529070000041]
obtaining the angle θ_top between the line connecting the neck and the midpoint of the left and right hips and the horizontal direction, and taking it as the upper-body horizontal inclination angle; wherein tan(·) is the tangent function;
according to the formula:
[equation image BDA0003568529070000042]
obtaining the mean value θ_bottom of the angle between the line from the left hip to the left ankle and the horizontal direction and the angle between the line from the right hip to the right ankle and the horizontal direction, and taking it as the lower-body horizontal inclination angle;
according to the formula:
[equation images BDA0003568529070000051 to BDA0003568529070000057]
obtaining the mean value θ_thigh-calf of the angle between the line from the left hip to the left knee and the line from the left knee to the left ankle and the angle between the line from the right hip to the right knee and the line from the right knee to the right ankle, and taking it as the thigh-calf angle;
according to the formula:
[equation images BDA0003568529070000058 to BDA00035685290700000511]
obtaining the angle θ_top-thigh between the line from the neck to the midpoint of the left and right hips and the line from the midpoint of the left and right hips to the midpoint of the left and right knees, and taking it as the upper-body-thigh angle;
according to the formula:
(x_g, y_g, z_g) = ((x_l-hip + x_r-hip)/2, (y_l-hip + y_r-hip)/2, (z_l-hip + z_r-hip)/2)
obtaining the coordinates (x_g, y_g, z_g) of the midpoint of the line connecting the left hip and the right hip, and taking them as the human body center coordinates;
that is, the human body posture characteristic parameters comprise the height H of the human body, the current height H_p, the upper-body horizontal inclination angle θ_top, the lower-body horizontal inclination angle θ_bottom, the thigh-calf angle θ_thigh-calf, the upper-body-thigh angle θ_top-thigh and the human body center coordinates (x_g, y_g, z_g).
Further, in step S2-3, the first deep learning model includes an input layer, a hidden layer, and an output layer, which are connected in sequence; wherein:
the input of the input layer is a characteristic vector formed by human posture characteristic parameters;
the number of nodes N_hidden of the hidden layer is:
N_hidden = √(N_in + N_out) + con
wherein N_in is the number of nodes of the input layer; N_out is the number of nodes of the output layer; con is a constant in [1, 10];
the number of output nodes of the output layer is 5, that is, 5 gesture recognition results are included, which are respectively: standing, falling, sitting, squatting and walking.
Further, the specific method for extracting the feature parameters of each time in the second data set and the fourth data set is the same, and comprises the following steps:
according to the formula:
x_max = max{R_i cosθ sinα}, i ∈ [1, m]
x_min = min{R_i cosθ sinα}, i ∈ [1, m]
y_max = max{R_i cosθ cosα}, i ∈ [1, m]
y_min = min{R_i cosθ cosα}, i ∈ [1, m]
z_max = max{R_i sinθ}, i ∈ [1, m]
z_min = min{R_i sinθ}, i ∈ [1, m]
obtaining the maximum value x_max in the x direction, the minimum value x_min in the x direction, the maximum value y_max in the y direction, the minimum value y_min in the y direction, the maximum value z_max in the z direction and the minimum value z_min in the z direction; wherein max{·} is the maximum-value function; min{·} is the minimum-value function; R_i is the i-th distance parameter in the second data set or the fourth data set, and m is the total number of distance parameters in the second data set or the fourth data set; cos is the cosine function; sin is the sine function; θ is the pitch angle in the second data set or the fourth data set; α is the azimuth angle in the second data set or the fourth data set;
according to the formula:
[equation images BDA0003568529070000071 to BDA0003568529070000073]
obtaining the x-direction velocity V_x, the y-direction velocity V_y and the z-direction velocity V_z; wherein V_i denotes the i-th radial velocity in the second data set or the fourth data set;
according to the formula:
[equation images BDA0003568529070000074 to BDA0003568529070000076]
obtaining the target center coordinates;
that is, the characteristic parameters corresponding to the data collected by the millimeter wave radar comprise the maximum value x_max in the x direction, the minimum value x_min in the x direction, the maximum value y_max in the y direction, the minimum value y_min in the y direction, the maximum value z_max in the z direction, the minimum value z_min in the z direction, the x-direction velocity V_x, the y-direction velocity V_y, the z-direction velocity V_z and the target center coordinates.
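The radar-side feature extraction could be sketched as follows. The bounding extents follow the max/min expressions given above; treating the per-axis velocities as means of the projected radial velocities and the target center as the centroid of the point cloud are assumptions for illustration, since those formulas appear only as equation images in the source.

```python
import numpy as np

def radar_features(R, theta, alpha, v_radial):
    """R, theta, alpha, v_radial: 1-D arrays holding range, pitch angle, azimuth angle
    and radial velocity for the m points of one radar frame (illustrative sketch)."""
    x = R * np.cos(theta) * np.sin(alpha)
    y = R * np.cos(theta) * np.cos(alpha)
    z = R * np.sin(theta)

    # Bounding extents of the point cloud along each axis.
    extents = [x.max(), x.min(), y.max(), y.min(), z.max(), z.min()]

    # Per-axis velocity and target center, taken here as the mean of the projected
    # radial velocities and the centroid of the points (assumptions, see lead-in).
    Vx = float(np.mean(v_radial * np.cos(theta) * np.sin(alpha)))
    Vy = float(np.mean(v_radial * np.cos(theta) * np.cos(alpha)))
    Vz = float(np.mean(v_radial * np.sin(theta)))
    center = [float(x.mean()), float(y.mean()), float(z.mean())]

    return np.array(extents + [Vx, Vy, Vz] + center, dtype=np.float32)  # 12-dimensional vector
```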
Further, in step S4, the second deep learning model comprises a first convolution layer, a first ReLU activation layer, a first dropout layer, a second convolution layer, a second ReLU activation layer, a second dropout layer, a first fully-connected layer, a second fully-connected layer and a softmax layer, which are connected in sequence; wherein:
the first convolution layer and the second convolution layer are both 1-D convolution layers, and the expression is as follows:
[equation image BDA0003568529070000081]
wherein f_conv(k) denotes the output vector of the 1-D convolution layer when the number of sliding steps is k; x(j) denotes the j-th element of the input; n is the input data length; w(k) denotes the convolution kernel when the number of sliding steps is k;
the dropout rates of the first dropout layer and the second dropout layer are both 0.25;
the softmax layer includes 5 output neurons, and the expression of the softmax layer is:
y_g = e^(a_g) / Σ_q e^(a_q), q = 1, ..., 5
wherein a_g denotes the g-th input signal of the softmax layer; y_g denotes the output of the g-th output neuron of the softmax layer; e is a constant; a_q is the q-th output signal, i.e., the q-th posture.
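A minimal PyTorch sketch of this second deep learning model is given below. The channel widths, kernel sizes, fully-connected width and the 12-dimensional input length are illustrative assumptions; the layer order, the 0.25 dropout rates and the five softmax outputs follow the description above.

```python
import torch
import torch.nn as nn

class RadarPostureNet(nn.Module):
    """Conv1d -> ReLU -> Dropout -> Conv1d -> ReLU -> Dropout -> FC -> FC -> Softmax."""
    def __init__(self, in_channels=1, seq_len=12, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, padding=1),  # first 1-D convolution (width assumed)
            nn.ReLU(),
            nn.Dropout(0.25),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),           # second 1-D convolution (width assumed)
            nn.ReLU(),
            nn.Dropout(0.25),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * seq_len, 64),   # first fully-connected layer (64 units assumed)
            nn.Linear(64, n_classes),      # second fully-connected layer
            nn.Softmax(dim=1),             # 5 output neurons, one per posture
        )

    def forward(self, x):                  # x: (batch, 1, seq_len) radar feature vectors
        return self.classifier(self.features(x))

probs = RadarPostureNet()(torch.randn(8, 1, 12))  # example batch of radar feature vectors
```

For training, the final Softmax would typically be dropped and a cross-entropy loss applied to the logits instead; it is kept here to mirror the layer list in the description.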
Further, the threshold value in step S7 is 0.9.
The invention has the following beneficial effects: the method overcomes the difficulty of generating a training set in the posture recognition learning process and saves a large amount of manpower; the depth camera assistance can be removed after training is finished, which solves the privacy problem; learning can be tailored to specific scenes and specific personnel, giving stronger applicability; and the accuracy of millimeter wave radar posture recognition is improved.
Drawings
FIG. 1 is a schematic flow diagram of the process.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the invention by those skilled in the art, but it should be understood that the invention is not limited to the scope of the embodiments; it will be apparent to those skilled in the art that various changes may be made without departing from the spirit and scope of the invention as defined by the appended claims, and all such changes that make use of the inventive concept are intended to be protected.
As shown in fig. 1, the millimeter wave radar attitude identification method based on depth camera supervision comprises the following steps:
s1, respectively constructing a first deep learning model and a second deep learning model; simultaneously carrying out data acquisition on an acquisition object with a depth camera and a millimeter wave radar; for a first group of collected objects, taking the data collected by the depth camera as a first data set and the data collected by the millimeter wave radar as a second data set; for a second group of collected objects, taking the data collected by the depth camera as a third data set and the data collected by the millimeter wave radar as a fourth data set; the depth camera is placed adjacent to the millimeter wave radar, and their fields of view are made approximately the same by adjusting angles and the like;
s2, performing gesture recognition on the data at each moment in the first data set through a first deep learning model to obtain a first target gesture set; extracting the characteristic parameters of each moment in the second data set to obtain a first characteristic parameter set;
s3, for data at the same moment, performing posture labeling on the first characteristic parameter set by using the first target posture set to obtain labeled data;
s4, training the second deep learning model by taking the labeled data as a training set to obtain a pre-trained second deep learning model;
s5, performing gesture recognition on the data at each moment in the third data set through the first deep learning model to obtain a second target gesture set; extracting the characteristic parameter of each moment in the fourth data set to obtain a second characteristic parameter set;
s6, taking the second characteristic parameter set as a test set for the pre-trained second deep learning model, and comparing the output of the pre-trained second deep learning model with the second target posture set to obtain the posture recognition accuracy of the pre-trained second deep learning model;
s7, judging whether the gesture recognition accuracy of the pre-trained second deep learning model reaches a threshold value, if so, taking the currently pre-trained second deep learning model as a final gesture recognition model, and entering S8; otherwise, modifying the pre-trained second deep learning model parameters, and returning to the step S3;
and S8, adopting the millimeter wave radar as a data acquirer of the object to be recognized, adopting the final attitude recognition model to perform attitude recognition on the data acquired by the millimeter wave radar, and outputting a recognition result.
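As a reading aid, the overall supervision loop of steps S1 to S8 can be summarized by the sketch below. The frame containers, the helper functions extract_kp_features and extract_radar_features, and the fit/predict/adjust_hyperparameters methods are hypothetical placeholders, not the patent's implementation.

```python
def train_radar_model(camera_frames, radar_frames, first_model, second_model,
                      extract_kp_features, extract_radar_features, threshold=0.9):
    """camera_frames / radar_frames: synchronized per-moment data, split into two groups.
    Returns the second model once its accuracy on the second group reaches the threshold."""
    (cam_1, cam_2), (rad_1, rad_2) = camera_frames, radar_frames

    while True:
        # S2-S4: label the radar features of group 1 with the camera model's predictions, then train.
        labels_1 = [first_model.predict(extract_kp_features(f)) for f in cam_1]
        feats_1 = [extract_radar_features(f) for f in rad_1]
        second_model.fit(feats_1, labels_1)

        # S5-S6: evaluate on group 2, again taking the camera model's output as the reference.
        labels_2 = [first_model.predict(extract_kp_features(f)) for f in cam_2]
        preds = [second_model.predict(extract_radar_features(f)) for f in rad_2]
        accuracy = sum(p == y for p, y in zip(preds, labels_2)) / len(labels_2)

        # S7: stop once the accuracy threshold is reached, otherwise adjust and repeat.
        if accuracy >= threshold:
            return second_model            # S8: deploy on radar data alone, camera withdrawn
        second_model.adjust_hyperparameters()
```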
The specific method for performing gesture recognition on the data at each moment in the first data set in step S2 includes the following sub-steps:
s2-1, acquiring the position information of the key points of the human body features for the data at each moment in the first data set;
s2-2, calculating human posture characteristic parameters according to the position information of the human characteristic key points;
s2-3, taking the extracted human body posture characteristic parameters as input of the first deep learning model, and obtaining a posture label output by the first deep learning model to obtain a first target posture set.
The key points of the human body characteristics in the step S2-1 include a neck, a right shoulder, a right elbow, a right wrist, a left shoulder, a left elbow, a left wrist, a right hip, a right knee, a right ankle, a left hip, a left knee, and a left ankle.
The specific method of step S2-2 is:
according to the formula:
[equation image BDA0003568529070000101]
acquiring the height H of the human body; wherein H_top is the distance from the neck to the midpoint of the line connecting the left hip and the right hip; H_bottom is the mean of the left leg length and the right leg length; H_L is the left leg length; H_R is the right leg length; x_neck, y_neck and z_neck are respectively the x-axis, y-axis and z-axis coordinate values of the neck in the three-dimensional coordinate system; x_l-hip, y_l-hip and z_l-hip are the corresponding coordinate values of the left hip; x_r-hip, y_r-hip and z_r-hip those of the right hip; x_l-knee, y_l-knee and z_l-knee those of the left knee; x_r-knee, y_r-knee and z_r-knee those of the right knee; x_l-ank, y_l-ank and z_l-ank those of the left ankle; and x_r-ank, y_r-ank and z_r-ank those of the right ankle;
according to the formula:
H_p = max[(z_neck - z_l-ank), (z_neck - z_r-ank)]
obtaining the maximum value H_p of the height difference between the neck and the left ankle and the height difference between the neck and the right ankle, and taking it as the current height; wherein max[·] is the maximum-value function;
according to the formula:
[equation image BDA0003568529070000111]
obtaining the angle θ_top between the line connecting the neck and the midpoint of the left and right hips and the horizontal direction, and taking it as the upper-body horizontal inclination angle; wherein tan(·) is the tangent function;
according to the formula:
[equation image BDA0003568529070000112]
obtaining the mean value θ_bottom of the angle between the line from the left hip to the left ankle and the horizontal direction and the angle between the line from the right hip to the right ankle and the horizontal direction, and taking it as the lower-body horizontal inclination angle;
according to the formula:
[equation images BDA0003568529070000121 to BDA0003568529070000127]
obtaining the mean value θ_thigh-calf of the angle between the line from the left hip to the left knee and the line from the left knee to the left ankle and the angle between the line from the right hip to the right knee and the line from the right knee to the right ankle, and taking it as the thigh-calf angle;
according to the formula:
[equation images BDA0003568529070000128 to BDA0003568529070000132]
obtaining the angle θ_top-thigh between the line from the neck to the midpoint of the left and right hips and the line from the midpoint of the left and right hips to the midpoint of the left and right knees, and taking it as the upper-body-thigh angle;
according to the formula:
(x_g, y_g, z_g) = ((x_l-hip + x_r-hip)/2, (y_l-hip + y_r-hip)/2, (z_l-hip + z_r-hip)/2)
obtaining the coordinates (x_g, y_g, z_g) of the midpoint of the line connecting the left hip and the right hip, and taking them as the human body center coordinates;
that is, the human body posture characteristic parameters comprise the height H of the human body, the current height H_p, the upper-body horizontal inclination angle θ_top, the lower-body horizontal inclination angle θ_bottom, the thigh-calf angle θ_thigh-calf, the upper-body-thigh angle θ_top-thigh and the human body center coordinates (x_g, y_g, z_g).
In the step S2-3, the first deep learning model comprises an input layer, a hidden layer and an output layer which are connected in sequence; wherein:
the input of the input layer is a characteristic vector formed by human posture characteristic parameters;
the number of nodes N_hidden of the hidden layer is:
N_hidden = √(N_in + N_out) + con
wherein N_in is the number of nodes of the input layer; N_out is the number of nodes of the output layer; con is a constant in [1, 10];
the number of output nodes of the output layer is 5, that is, 5 gesture recognition results are included, which are respectively: standing, falling, sitting, squatting and walking.
The specific method for extracting the characteristic parameters of each moment in the second data set and the fourth data set is the same, and comprises the following steps:
according to the formula:
x_max = max{R_i cosθ sinα}, i ∈ [1, m]
x_min = min{R_i cosθ sinα}, i ∈ [1, m]
y_max = max{R_i cosθ cosα}, i ∈ [1, m]
y_min = min{R_i cosθ cosα}, i ∈ [1, m]
z_max = max{R_i sinθ}, i ∈ [1, m]
z_min = min{R_i sinθ}, i ∈ [1, m]
obtaining the maximum value x_max in the x direction, the minimum value x_min in the x direction, the maximum value y_max in the y direction, the minimum value y_min in the y direction, the maximum value z_max in the z direction and the minimum value z_min in the z direction; wherein max{·} is the maximum-value function; min{·} is the minimum-value function; R_i is the i-th distance parameter in the second data set or the fourth data set, and m is the total number of distance parameters in the second data set or the fourth data set; cos is the cosine function; sin is the sine function; θ is the pitch angle in the second data set or the fourth data set; α is the azimuth angle in the second data set or the fourth data set;
according to the formula:
[equation images BDA0003568529070000141 to BDA0003568529070000143]
obtaining the x-direction velocity V_x, the y-direction velocity V_y and the z-direction velocity V_z; wherein V_i denotes the i-th radial velocity in the second data set or the fourth data set;
according to the formula:
[equation images BDA0003568529070000144 to BDA0003568529070000146]
obtaining the target center coordinates;
that is, the characteristic parameters corresponding to the data collected by the millimeter wave radar comprise the maximum value x_max in the x direction, the minimum value x_min in the x direction, the maximum value y_max in the y direction, the minimum value y_min in the y direction, the maximum value z_max in the z direction, the minimum value z_min in the z direction, the x-direction velocity V_x, the y-direction velocity V_y, the z-direction velocity V_z and the target center coordinates.
The characteristic parameters of the same moment are formed into a feature vector (x_max, x_min, y_max, y_min, z_max, z_min, V_x, V_y, V_z and the target center coordinates), which is combined with the posture labels obtained by the first deep learning model to form the training set of the second deep learning model.
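A short sketch of this pairing step, assuming time-aligned lists radar_feature_vectors and camera_posture_labels (hypothetical names):

```python
# Pair each radar feature vector with the posture label predicted by the first deep
# learning model from the depth-camera frame captured at the same moment.
training_set = [
    (vector, label)
    for vector, label in zip(radar_feature_vectors, camera_posture_labels)
]
```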
In step S4, the second deep learning model comprises a first convolution layer, a first ReLU activation layer, a first dropout layer, a second convolution layer, a second ReLU activation layer, a second dropout layer, a first fully-connected layer, a second fully-connected layer and a softmax layer, which are connected in sequence; wherein:
the first convolution layer and the second convolution layer are both 1-D convolution layers, and the expression is as follows:
[equation image BDA0003568529070000153]
wherein f_conv(k) denotes the output vector of the 1-D convolution layer when the number of sliding steps is k; x(j) denotes the j-th element of the input; n is the input data length; w(k) denotes the convolution kernel when the number of sliding steps is k;
the dropout rates of the first dropout layer and the second dropout layer are both 0.25;
the softmax layer comprises 5 output neurons, and the expression of the softmax layer is as follows:
y_g = e^(a_g) / Σ_q e^(a_q), q = 1, ..., 5
wherein a_g denotes the g-th input signal of the softmax layer; y_g denotes the output of the g-th output neuron of the softmax layer; e is a constant; a_q is the q-th output signal, i.e., the q-th posture.
In one embodiment of the invention, the threshold in step S7 is 0.9.
In summary, the method provided by the invention integrates data acquisition, data labeling, model training, model testing, model correction and model application. First, the recognition results of the depth camera and the trained first deep learning model are used as labels for the millimeter wave radar data. The three-dimensional point cloud generated by the millimeter wave radar has no visual posture characteristics, and its posture can hardly be judged by eye, so this step effectively frees up manpower and overcomes the difficulty of manually labeling millimeter wave radar data. The generated labeled data set is then used to train the second deep learning model. After training, a test set is obtained from the millimeter wave radar in real time, and the second deep learning model is evaluated against the recognition results of the first deep learning model; when the accuracy reaches the set threshold, the second deep learning model is considered trained, otherwise the model parameters are adjusted automatically and the process is repeated until training is complete. The data acquired by the depth camera does not need to be stored, no manual participation is required throughout, no visual imagery is retained, and the depth camera can be withdrawn once training of the second deep learning model is complete, which effectively resolves the user privacy problem. In addition, the second deep learning model can be flexibly adjusted in a specific environment according to the specific requirements of specific subjects, which solves the problem of a single model application.

Claims (8)

1. A millimeter wave radar attitude identification method based on depth camera supervision is characterized by comprising the following steps:
s1, respectively constructing a first deep learning model and a second deep learning model; simultaneously carrying out data acquisition on an acquisition object by a depth camera and a millimeter wave radar; for a first group of collected objects, taking data collected by a depth camera as a first data set, and taking data collected by a millimeter wave radar as a second data set; for a second group of collected objects, taking data collected by the depth camera as a third data set, and taking data collected by the millimeter wave radar as a fourth data set;
s2, performing posture recognition on data at each moment in the first data set through a first deep learning model to obtain a first target posture set; extracting the characteristic parameters of each moment in the second data set to obtain a first characteristic parameter set;
s3, for the data at the same moment, adopting the first target posture set to perform posture labeling on the first characteristic parameter set to obtain the labeled data;
s4, training the second deep learning model by taking the labeled data as a training set to obtain a pre-trained second deep learning model;
s5, performing posture recognition on the data at each moment in the third data set through the first deep learning model to obtain a second target posture set; extracting the characteristic parameter of each moment in the fourth data set to obtain a second characteristic parameter set;
s6, taking the second characteristic parameter set as a test set for the pre-trained second deep learning model, and comparing the output of the pre-trained second deep learning model with the second target posture set to obtain the posture recognition accuracy of the pre-trained second deep learning model;
s7, judging whether the gesture recognition accuracy of the pre-trained second deep learning model reaches a threshold value, if so, taking the current pre-trained second deep learning model as a final gesture recognition model, and entering the step S8; otherwise, modifying the parameters of the pre-trained second deep learning model, and returning to the step S3;
and S8, adopting the millimeter wave radar as a data acquirer of the object to be recognized, adopting the final attitude recognition model to perform attitude recognition on the data acquired by the millimeter wave radar, and outputting a recognition result.
2. The depth camera surveillance-based millimeter wave radar attitude recognition method according to claim 1, wherein the specific method for performing attitude recognition on the data at each moment in the first data set in step S2 comprises the following sub-steps:
s2-1, acquiring position information of key points of human body features for data at each moment in the first data set;
s2-2, calculating human posture characteristic parameters according to the position information of the human characteristic key points;
s2-3, taking the extracted human body posture characteristic parameters as input of the first deep learning model, and obtaining a posture label output by the first deep learning model to obtain a first target posture set.
3. The millimeter wave radar posture recognition method based on depth camera surveillance as claimed in claim 2, wherein the key points of the human body features in step S2-1 comprise neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee and left ankle.
4. The millimeter wave radar attitude identification method based on depth camera supervision according to claim 3, characterized in that the specific method of step S2-2 is:
according to the formula:
[equation image FDA0003568529060000021]
acquiring the height H of the human body; wherein H_top is the distance from the neck to the midpoint of the line connecting the left hip and the right hip; H_bottom is the mean of the left leg length and the right leg length; H_L is the left leg length; H_R is the right leg length; x_neck, y_neck and z_neck are respectively the x-axis, y-axis and z-axis coordinate values of the neck in the three-dimensional coordinate system; x_l-hip, y_l-hip and z_l-hip are the corresponding coordinate values of the left hip; x_r-hip, y_r-hip and z_r-hip those of the right hip; x_l-knee, y_l-knee and z_l-knee those of the left knee; x_r-knee, y_r-knee and z_r-knee those of the right knee; x_l-ank, y_l-ank and z_l-ank those of the left ankle; and x_r-ank, y_r-ank and z_r-ank those of the right ankle;
according to the formula:
H_p = max[(z_neck - z_l-ank), (z_neck - z_r-ank)]
obtaining the maximum value H_p of the height difference between the neck and the left ankle and the height difference between the neck and the right ankle, and taking it as the current height; wherein max[·] is the maximum-value function;
according to the formula:
[equation image FDA0003568529060000031]
obtaining the angle θ_top between the line connecting the neck and the midpoint of the left and right hips and the horizontal direction, and taking it as the upper-body horizontal inclination angle; wherein tan(·) is the tangent function;
according to the formula:
[equation image FDA0003568529060000041]
obtaining the mean value θ_bottom of the angle between the line from the left hip to the left ankle and the horizontal direction and the angle between the line from the right hip to the right ankle and the horizontal direction, and taking it as the lower-body horizontal inclination angle;
according to the formula:
[equation images FDA0003568529060000042 to FDA0003568529060000048]
obtaining the mean value θ_thigh-calf of the angle between the line from the left hip to the left knee and the line from the left knee to the left ankle and the angle between the line from the right hip to the right knee and the line from the right knee to the right ankle, and taking it as the thigh-calf angle;
according to the formula:
[equation images FDA0003568529060000051 to FDA0003568529060000054]
obtaining the angle θ_top-thigh between the line from the neck to the midpoint of the left and right hips and the line from the midpoint of the left and right hips to the midpoint of the left and right knees, and taking it as the upper-body-thigh angle;
according to the formula:
(x_g, y_g, z_g) = ((x_l-hip + x_r-hip)/2, (y_l-hip + y_r-hip)/2, (z_l-hip + z_r-hip)/2)
obtaining the coordinates (x_g, y_g, z_g) of the midpoint of the line connecting the left hip and the right hip, and taking them as the human body center coordinates;
that is, the human body posture characteristic parameters comprise the height H of the human body, the current height H_p, the upper-body horizontal inclination angle θ_top, the lower-body horizontal inclination angle θ_bottom, the thigh-calf angle θ_thigh-calf, the upper-body-thigh angle θ_top-thigh and the human body center coordinates (x_g, y_g, z_g).
5. The depth camera surveillance-based millimeter wave radar attitude recognition method according to claim 4, wherein the first depth learning model in step S2-3 comprises an input layer, a hidden layer and an output layer connected in sequence; wherein:
the input of the input layer is a characteristic vector formed by human posture characteristic parameters;
the number of nodes N_hidden of the hidden layer is:
N_hidden = √(N_in + N_out) + con
wherein N_in is the number of nodes of the input layer; N_out is the number of nodes of the output layer; con is a constant in [1, 10];
the number of output nodes of the output layer is 5, that is, 5 gesture recognition results are included, which are respectively: standing, falling, sitting, squatting and walking.
6. The millimeter wave radar posture recognition method based on depth camera supervision according to claim 1, wherein the specific method for extracting the characteristic parameters of each moment in the second data set and the fourth data set is the same, and is:
according to the formula:
x_max = max{R_i cosθ sinα}, i ∈ [1, m]
x_min = min{R_i cosθ sinα}, i ∈ [1, m]
y_max = max{R_i cosθ cosα}, i ∈ [1, m]
y_min = min{R_i cosθ cosα}, i ∈ [1, m]
z_max = max{R_i sinθ}, i ∈ [1, m]
z_min = min{R_i sinθ}, i ∈ [1, m]
obtaining the maximum value x_max in the x direction, the minimum value x_min in the x direction, the maximum value y_max in the y direction, the minimum value y_min in the y direction, the maximum value z_max in the z direction and the minimum value z_min in the z direction; wherein max{·} is the maximum-value function; min{·} is the minimum-value function; R_i is the i-th distance parameter in the second data set or the fourth data set, and m is the total number of distance parameters in the second data set or the fourth data set; cos is the cosine function; sin is the sine function; θ is the pitch angle in the second data set or the fourth data set; α is the azimuth angle in the second data set or the fourth data set;
according to the formula:
[equation images FDA0003568529060000061 to FDA0003568529060000063]
obtaining the x-direction velocity V_x, the y-direction velocity V_y and the z-direction velocity V_z; wherein V_i denotes the i-th radial velocity in the second data set or the fourth data set;
according to the formula:
[equation images FDA0003568529060000071 to FDA0003568529060000073]
obtaining the target center coordinates;
that is, the characteristic parameters corresponding to the data collected by the millimeter wave radar comprise the maximum value x_max in the x direction, the minimum value x_min in the x direction, the maximum value y_max in the y direction, the minimum value y_min in the y direction, the maximum value z_max in the z direction, the minimum value z_min in the z direction, the x-direction velocity V_x, the y-direction velocity V_y, the z-direction velocity V_z and the target center coordinates.
7. The millimeter wave radar posture recognition method based on depth camera supervision according to claim 6, wherein the second deep learning model in step S4 comprises a first convolution layer, a first ReLU activation layer, a first dropout layer, a second convolution layer, a second ReLU activation layer, a second dropout layer, a first fully-connected layer, a second fully-connected layer and a softmax layer which are connected in sequence; wherein:
the first convolution layer and the second convolution layer are both 1-D convolution layers, and the expression is as follows:
[equation image FDA0003568529060000076]
wherein f_conv(k) denotes the output vector of the 1-D convolution layer when the number of sliding steps is k; x(j) denotes the j-th element of the input; n is the input data length; w(k) denotes the convolution kernel when the number of sliding steps is k;
the dropout rates of the first dropout layer and the second dropout layer are both 0.25;
the softmax layer comprises 5 output neurons, and the expression of the softmax layer is as follows:
y_g = e^(a_g) / Σ_q e^(a_q), q = 1, ..., 5
wherein a_g denotes the g-th input signal of the softmax layer; y_g denotes the output of the g-th output neuron of the softmax layer; e is a constant; a_q is the q-th output signal, i.e., the q-th posture.
8. The depth camera surveillance-based millimeter wave radar gesture recognition method of claim 1, wherein the threshold in step S7 is 0.9.
CN202210314377.9A 2022-03-28 2022-03-28 Millimeter wave radar gesture recognition method based on depth camera supervision Active CN114782987B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210314377.9A CN114782987B (en) 2022-03-28 2022-03-28 Millimeter wave radar gesture recognition method based on depth camera supervision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210314377.9A CN114782987B (en) 2022-03-28 2022-03-28 Millimeter wave radar gesture recognition method based on depth camera supervision

Publications (2)

Publication Number Publication Date
CN114782987A true CN114782987A (en) 2022-07-22
CN114782987B CN114782987B (en) 2023-06-20

Family

ID=82424773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210314377.9A Active CN114782987B (en) 2022-03-28 2022-03-28 Millimeter wave radar gesture recognition method based on depth camera supervision

Country Status (1)

Country Link
CN (1) CN114782987B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544563A (en) * 2018-11-12 2019-03-29 北京航空航天大学 A kind of passive millimeter wave image human body target dividing method towards violated object safety check
CN114169355A (en) * 2020-08-19 2022-03-11 北京万集科技股份有限公司 Information acquisition method and device, millimeter wave radar, equipment and storage medium
CN112184626A (en) * 2020-09-02 2021-01-05 珠海格力电器股份有限公司 Gesture recognition method, device, equipment and computer readable medium
CN112097374A (en) * 2020-09-16 2020-12-18 珠海格力电器股份有限公司 Device control method, device and computer readable medium
CN112861624A (en) * 2021-01-05 2021-05-28 哈尔滨工业大学(威海) Human body posture detection method, system, storage medium, equipment and terminal
CN113298152A (en) * 2021-05-26 2021-08-24 深圳市优必选科技股份有限公司 Model training method and device, terminal equipment and computer readable storage medium
CN113283415A (en) * 2021-07-26 2021-08-20 浙江光珀智能科技有限公司 Sedentary and recumbent detection method based on depth camera
CN113625750A (en) * 2021-08-03 2021-11-09 同济大学 Unmanned aerial vehicle keeps away barrier system based on millimeter wave combines with degree of depth vision camera

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
TAO ZHOU et al.: "Human Sleep Posture Recognition Based on Millimeter-Wave Radar" *
元志安 et al.: "RDSNet-based human fall detection method using millimeter-wave radar" (基于RDSNet的毫米波雷达人体跌倒检测方法) *
李宇杰 et al.: "A survey of vision-based 3D object detection algorithms" (基于视觉的三维目标检测算法研究综述) *
许光朋: "Human posture and respiration/heartbeat detection based on millimeter-wave radar" (基于毫米波雷达的人体姿态及呼吸心跳检测) *

Also Published As

Publication number Publication date
CN114782987B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
CN104715493B (en) A kind of method of movement human Attitude estimation
Farooq et al. Dense RGB-D map-based human tracking and activity recognition using skin joints features and self-organizing map
CN112560741A (en) Safety wearing detection method based on human body key points
CN109101865A (en) A kind of recognition methods again of the pedestrian based on deep learning
WO2017133009A1 (en) Method for positioning human joint using depth image of convolutional neural network
CN109949341B (en) Pedestrian target tracking method based on human skeleton structural features
CN107092894A (en) A kind of motor behavior recognition methods based on LSTM models
CN109902565B (en) Multi-feature fusion human behavior recognition method
CN114187665B (en) Multi-person gait recognition method based on human skeleton heat map
Mehrizi et al. A Deep Neural Network-based method for estimation of 3D lifting motions
CN112668531A (en) Motion posture correction method based on motion recognition
CN109993103A (en) A kind of Human bodys' response method based on point cloud data
CN106815855A (en) Based on the human body motion tracking method that production and discriminate combine
CN109766838A (en) A kind of gait cycle detecting method based on convolutional neural networks
CN109447175A (en) In conjunction with the pedestrian of deep learning and metric learning recognition methods again
CN111709365A (en) Automatic human motion posture detection method based on convolutional neural network
CN102156994B (en) Joint positioning method for single-view unmarked human motion tracking
CN113920326A (en) Tumble behavior identification method based on human skeleton key point detection
CN112232184A (en) Multi-angle face recognition method based on deep learning and space conversion network
CN115346272A (en) Real-time tumble detection method based on depth image sequence
CN116524586A (en) Dance scoring algorithm based on CNN and GCN gesture estimation and similarity matching
CN114170686A (en) Elbow bending behavior detection method based on human body key points
CN114782987B (en) Millimeter wave radar gesture recognition method based on depth camera supervision
CN110765925A (en) Carrier detection and gait recognition method based on improved twin neural network
CN114332922A (en) Fall detection method based on image static characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20231019
Address after: 610000 Chengdu, Sichuan Province, China (Sichuan) Free Trade Pilot Zone
Patentee after: Sichuan Bawei Jiuzhang Technology Co.,Ltd.
Address before: 610031 north section of two ring road, Sichuan, Chengdu
Patentee before: SOUTHWEST JIAOTONG University
Patentee before: Sichuan Bawei Jiuzhang Technology Co.,Ltd.