CN114612524B - Motion recognition method based on RGB-D camera - Google Patents

Motion recognition method based on RGB-D camera

Info

Publication number
CN114612524B
Authority
CN
China
Prior art keywords
point cloud
point
rgb
nearest neighbor
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210506139.8A
Other languages
Chinese (zh)
Other versions
CN114612524A (en)
Inventor
苟先太
顾凡
魏峰
蒋晓凤
程丽红
曾开心
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Bawei Jiuzhang Technology Co ltd
Original Assignee
Sichuan Bawei Jiuzhang Technology Co ltd
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Bawei Jiuzhang Technology Co ltd, Southwest Jiaotong University filed Critical Sichuan Bawei Jiuzhang Technology Co ltd
Priority to CN202210506139.8A priority Critical patent/CN114612524B/en
Publication of CN114612524A publication Critical patent/CN114612524A/en
Application granted granted Critical
Publication of CN114612524B publication Critical patent/CN114612524B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/40Tree coding, e.g. quadtree, octree
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a motion recognition method based on an RGB-D camera. The method collects RGB-D images with an RGB-D camera and preprocesses them to obtain a preprocessed point cloud; obtains a motion difference point cloud from the preprocessed point cloud and optimizes the motion difference point cloud set to obtain an optimized motion difference point cloud set; obtains a motion linear point cloud model from the optimized motion difference point cloud set using a point cloud contraction algorithm based on the Laplacian operator; and constructs a human body joint point identification template from the motion linear point cloud model to complete human body motion recognition. Because only the depth image receives subsequent processing and no visible-light view exists in the actual application scene, the privacy of the user is well protected. The K-neighborhood denoising method removes discrete noise points while simplifying the point cloud, which greatly reduces the data processing load, speeds up the algorithm, and provides fast extraction and accurate recognition.

Description

Motion recognition method based on RGB-D camera
Technical Field
The invention relates to the technical field of point cloud recognition, and in particular to a motion recognition method based on an RGB-D camera.
Background
With economic development, the aging of the population continues to intensify, and indoor personnel posture detection based on unmanned equipment plays a large role in the home and nursing fields. Two commonly used human point cloud sensors at present are millimeter-wave radar and the RGB-D depth camera. Millimeter-wave radar has drawbacks such as high cost, long scanning time and poor penetrability, and cannot meet the requirements of most scenes; on the other hand, current indoor personnel motion virtual reconstruction methods based on the RGB-D depth camera suffer from low reconstruction accuracy and high computational power requirements.
Disclosure of Invention
In order to overcome the above defects in the prior art, the invention provides a motion recognition method based on an RGB-D camera that enables fast extraction and accurate recognition.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
in one aspect, a motion recognition method based on an RGB-D camera includes the following steps:
s1, collecting an RGB-D image by using an RGB-D camera, and preprocessing the RGB-D image to obtain a preprocessed point cloud;
s2, obtaining motion difference point cloud according to the preprocessed point cloud;
s3, optimizing the motion difference point cloud set to obtain an optimized motion difference point cloud set;
s4, obtaining a motion linear point cloud model according to the optimized motion difference point cloud set by using a point cloud shrinkage algorithm based on a Laplacian operator;
s5, constructing a human body joint point identification template according to the motion linear point cloud model;
and S6, recognizing the human motion by using the human joint point recognition template.
The invention has the following beneficial effects:
Collecting RGB-D images with an RGB-D camera and preprocessing them yields the preprocessed point cloud; the motion difference point cloud is obtained from the preprocessed point cloud and the motion difference point cloud set is optimized to obtain an optimized motion difference point cloud set; the motion linear point cloud model is obtained from the optimized motion difference point cloud set using the Laplacian-based point cloud contraction algorithm, and the human body joint point identification template is constructed from the motion linear point cloud model to complete human body motion recognition. The method collects RGB-D images with a depth camera and performs subsequent processing only on the depth images; since no visible-light view exists in the actual application scene, the privacy of the user is well protected, and the method has low equipment cost and a short scanning period. The judgment threshold of the K-neighborhood denoising method removes discrete noise points while simplifying the point cloud, giving a simplified yet accurate point cloud of the indoor moving person, which greatly reduces the data processing load, speeds up the algorithm, and provides fast extraction and accurate recognition.
Drawings
FIG. 1 is a flowchart illustrating steps of a RGB-D camera-based motion recognition method according to the present invention;
FIG. 2 is a schematic diagram of the left arm joint angle according to the embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments; to those skilled in the art, various changes are apparent as long as they remain within the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
As shown in fig. 1, a motion recognition method based on an RGB-D camera includes the following steps:
s1, collecting an RGB-D image by using an RGB-D camera, and preprocessing the RGB-D image to obtain a preprocessed point cloud;
preferably, step S1 is specifically:
the method comprises the steps of collecting two groups of RGB-D images at a preset time interval by using an RGB-D camera, converting the RGB-D images into at least one-dimensional point cloud, and obtaining the preprocessed point cloud.
In the embodiment of the invention, an RGB-D camera is used for collecting two groups of RGB-D images respectively at an interval of 50ms, and the two groups of RGB-D images are converted into six-dimensional point cloud to obtain preprocessed point cloud; the method specifically comprises the following steps: firstly, obtaining a group of indoor personnel images of an RGB-D camera A at a time t1, and constructing a scene six-dimensional point cloud PA1 at the time t1 by the group of images; then another group of indoor personnel images of the RGB-D camera A is obtained at t1+50ms, namely at t2, and a scene six-dimensional point cloud PA2 at t2 is constructed by the group of images;
wherein the RGB-D camera a is suspended in a ceiling corner of an indoor room; each group of images comprises an RGB image acquired by a color camera and a depth image acquired by a depth camera; all pixels of the RGB image have color values corresponding to the pixels and are divided into R, G, B channels; the value of each pixel in the depth image corresponds to the depth value of the point, i.e., the distance of the point from the camera focal point.
Mapping the two-dimensional coordinates of depth image pixels into point cloud data, the coordinates of each pixel point in three-dimensional space are calculated. The three-dimensional space coordinate (x, y, z) corresponding to a depth image pixel (u, v, d) and the camera parameters can be calculated by the following formulas:

z = d / s,   x = (u − cx) · z / fx,   y = (v − cy) · z / fy

wherein (u, v, d) is the pixel coordinate and depth value of the pixel point in the depth image, fx and fy are the focal lengths of the camera along the x and y axes, (cx, cy) is the coordinate of the camera aperture center, s is the scaling factor of the depth image, and fx, fy, cx, cy, s are all camera internal parameters.

After the space coordinate of each pixel point is obtained, the (x, y, z) corresponding to each pixel point and the three channel values (r, g, b) of the same pixel in the color image are concatenated to obtain a point p_i = (x_i, y_i, z_i, r_i, g_i, b_i); calculating pixel by pixel yields the scene six-dimensional point cloud data set PA1 = {p_0, p_1, …, p_{n−1}} at time t1 (and likewise PA2 at time t2).
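A minimal NumPy sketch of this back-projection is given below. It is an illustrative sketch, not the patent's implementation: the intrinsics fx, fy, cx, cy, the scale factor s and the variable names are assumed placeholders, and the depth and RGB images are assumed to be aligned pixel-for-pixel.

    import numpy as np

    def depth_rgb_to_point_cloud(depth, rgb, fx, fy, cx, cy, s):
        """Back-project a depth image (H x W) and an aligned RGB image (H x W x 3)
        into a six-dimensional point cloud (N x 6: x, y, z, r, g, b)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
        z = depth / s                                    # depth in metres
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        xyz = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        colors = rgb.reshape(-1, 3).astype(np.float64)
        cloud = np.hstack([xyz, colors])
        return cloud[z.reshape(-1) > 0]                  # drop pixels with no depth

    # Usage with hypothetical intrinsics:
    # PA1 = depth_rgb_to_point_cloud(depth_t1, rgb_t1, 525.0, 525.0, 319.5, 239.5, 1000.0)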
S2, obtaining motion difference point cloud according to the preprocessed point cloud;
preferably, step S2 specifically includes the following sub-steps:
s21, performing intersection processing on the preprocessed point clouds;
preferably, step S21 is specifically:
a1, extracting point cloud data with the same serial number in the preprocessed point clouds, judging whether the coordinates of the point cloud data with the same serial number are consistent, if so, recording the point cloud data with the same coordinates as sub-data of an intersection point cloud, and entering the step A2; otherwise go to step A3;
a2, traversing the preprocessed point clouds, and constructing a point cloud intersection according to the subdata of the intersection point cloud;
and A3, adding 1 to the serial number of the preprocessed point cloud data, returning to the step A1, re-extracting the point cloud data with the same serial number in the preprocessed point cloud until a preset iteration condition is met, and ending the iteration loop.
In the embodiment of the invention, the method specifically comprises the following steps:
1) extract the points with subscript i in PA1 and PA2 (i is the serial number of a single point in the PA1 and PA2 point cloud sets, with initial value 0) and compare whether their coordinates (x, y, z) are consistent; if consistent, record PA3_j = PA1_i (j is the serial number of the point to be stored in the PA3 point cloud set, with initial value 0); otherwise, go to step 2);
2) check whether all points have been compared; if so, the intersection PA3 has been obtained and the loop is exited; otherwise, go to step 3);
3) add 1 to the count values i and j, and return to step 1).
S22, homogenizing the point cloud after intersection processing to obtain an average point cloud data set;
in the embodiment of the invention, the homogenization treatment specifically comprises the following steps:
averaging the (x, y, z) coordinates of corresponding points of PA1 and PA2 one-to-one to obtain the average point cloud data set PA4; the formula is as follows:

PA4_j = ( PA1_i + PA2_i ) / 2

wherein i is the serial number of a single point in the PA1 and PA2 point cloud sets, j is the serial number of the point to be stored in the average point cloud data set, and the initial values are 0.
And S23, performing complement-set processing on the homogenized point cloud data under the preset Euclidean distance threshold to obtain the motion difference point cloud.
Preferably, step S23 is specifically:
extracting point cloud data with the same serial number in the average point cloud data set, judging whether the Euclidean distance of the extracted point cloud data meets a preset Euclidean distance threshold value, if so, rejecting the point cloud data, and otherwise, retaining the point cloud data;
traversing the homogenized point cloud data to obtain a complementary set point cloud, and constructing a motion difference point cloud set according to the complementary set point cloud.
In the embodiment of the invention, the preset Euclidean distance threshold is determined according to the point cloud acquisition effect and is usually taken as 0.05 m.
S3, optimizing the motion difference point cloud set to obtain an optimized motion difference point cloud set;
preferably, step S3 specifically includes the following sub-steps:
b1, segmenting each point cloud in the motion difference point cloud set by using a threshold segmentation method to obtain a segmented point cloud data set;
preferably, step B1 is specifically:
establishing a first preset condition according to the threshold segmentation method, judging whether each point cloud in the motion difference point cloud set meets the first preset condition, and if so, rejecting the point cloud, otherwise keeping the point cloud; traversing the motion difference point cloud set to obtain the segmented point cloud data set, wherein the first preset condition is expressed as:

Condition 1: the distance computed from the point's coordinates (x, y, z) exceeds the distance threshold d_th

wherein (x, y, z) are the coordinates of each point cloud in the motion difference point cloud set, and d_th is the distance threshold.
In the embodiment of the invention, the segmented point cloud data set is a point cloud data set with a background removed.
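A short sketch of step B1, under the assumption that the first preset condition is a plain distance-from-camera cut with threshold d_th; the actual inequality used in the patent is only given as an image, so both the form of the test and the 3.0 m default are hypothetical.

    import numpy as np

    def remove_background(PA5, d_th=3.0):
        """Sketch of B1: keep only points whose distance from the camera origin
        is within d_th (assumed form of the first preset condition)."""
        dist = np.linalg.norm(PA5[:, :3], axis=1)
        return PA5[dist <= d_th]                         # PA6: background removed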
B2, screening the segmented point cloud data set by using a random sampling consistency algorithm to obtain an inner point set;
preferably, step B2 is specifically:
calculating the standard deviation of each point cloud in the segmented point cloud data set, judging whether the standard deviation of each point cloud meets a second preset condition, and if so, retaining the point cloud; otherwise, rejecting the point cloud; traversing the segmented point cloud data set, and constructing a screened point cloud set as an inner point set according to the screened point cloud, wherein the second preset condition is represented as:
Condition 2: d_i < σ

wherein d_i is the Euclidean distance from the i-th point cloud to the plane z, (x_i, y_i, z_i) are the coordinates of the i-th point cloud, the plane z is the plane constructed from the point clouds in the segmented point cloud data set and satisfies z = ax + by + c, a, b and c are the coefficients of the plane z, and σ is the standard deviation of the distances d_i.
In the embodiment of the invention, the method specifically comprises the following steps:
1) randomly select the xyz data of three points (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) from the point cloud PA6;
2) calculate the plane equation z = ax + by + c from the three selected points by solving the linear system
   z1 = a·x1 + b·y1 + c,   z2 = a·x2 + b·y2 + c,   z3 = a·x3 + b·y3 + c;
3) calculate the Euclidean space distance d_i from each remaining point of the point cloud PA6 to the plane z = ax + by + c,
   d_i = | a·x_i + b·y_i + c − z_i | / √( a² + b² + 1 )
   (i is the serial number of a single point in the PA6 point cloud set, with initial value 0, and i is increased by 1 after each calculation);
4) from the Euclidean distances d_i of the point clouds to the plane obtained in step 3), calculate the standard deviation
   σ = √( (1/N)·Σ_i ( d_i − d̄ )² ),   wherein d̄ = (1/N)·Σ_i d_i and N is the number of remaining points;
5) repeat the above steps; according to Condition 2, points meeting the condition are judged to be inner points and points not meeting it are outer points, finally obtaining the inner point set PA7 with the ground point cloud data removed.
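The sketch below mirrors this RANSAC procedure: fit z = ax + by + c to three random points, score the remaining points by their point-to-plane distance against the standard deviation, and return the cloud with the dominant (ground) plane removed. The iteration count and the inlier rule d_i < σ are assumptions consistent with the embodiment, not values stated by the patent.

    import numpy as np

    def remove_ground_plane(PA6, n_iter=100, rng=None):
        """Sketch of B2: RANSAC plane fit on the segmented cloud PA6 (N x 6);
        returns PA7 with the best-supported (ground) plane removed."""
        rng = np.random.default_rng() if rng is None else rng
        xyz = PA6[:, :3]
        best_mask, best_count = None, -1
        for _ in range(n_iter):
            idx = rng.choice(len(xyz), 3, replace=False)
            x, y, z = xyz[idx, 0], xyz[idx, 1], xyz[idx, 2]
            A = np.column_stack([x, y, np.ones(3)])
            try:
                a, b, c = np.linalg.solve(A, z)          # plane z = ax + by + c
            except np.linalg.LinAlgError:
                continue                                  # degenerate sample
            d = np.abs(a * xyz[:, 0] + b * xyz[:, 1] + c - xyz[:, 2])
            d /= np.sqrt(a * a + b * b + 1.0)             # point-to-plane distance
            plane_mask = d < d.std()                      # assumed Condition 2
            if plane_mask.sum() > best_count:
                best_count, best_mask = plane_mask.sum(), plane_mask
        return PA6 if best_mask is None else PA6[~best_mask]   # PA7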
B3, denoising the inner point set by using a K neighborhood denoising method to obtain the optimized motion difference point cloud.
Preferably, step B3 is specifically:
c1, carrying out proximity search on each point cloud in the internal point set to obtain at least one nearest neighbor point of each point cloud;
c2, calculating the average distance from each point cloud to the nearest neighbor point, wherein the calculation formula is as follows:
u_i = (1/k) · Σ_{j=1}^{k} dist( p_i, p_{ij} )

wherein u_i is the average distance from the i-th point cloud to its nearest neighbor points, k is the number of nearest neighbor points of the i-th point cloud, p_i = (x_i, y_i, z_i) is the spatial coordinate of the i-th point cloud, p_{ij} is the spatial coordinate of the j-th nearest neighbor point of the i-th point cloud, and dist(·,·) is the Euclidean space distance formula for three-dimensional data, dist(p, q) = √( (x_p − x_q)² + (y_p − y_q)² + (z_p − z_q)² );
c3, traversing the inner point set, and calculating the average distance and the mean square error of each point cloud in the inner point set according to the average distance from each point cloud to the nearest neighbor point;
in the embodiment of the present invention, the calculation formulas of the average distance and the mean square error are respectively expressed as:
Figure 191539DEST_PATH_IMAGE026
wherein N is the total number of the point clouds in the interior point set and the mean square error
Figure 99452DEST_PATH_IMAGE027
Can be calculated by variance.
C4, establishing a third preset condition according to the average distance and the mean square error of each point cloud in the inner point set, judging whether each point cloud in the inner point set meets the third preset condition, if so, removing the point cloud, otherwise, retaining the point cloud, and obtaining an optimized motion difference point cloud set, wherein the third preset condition is represented as:
Condition 3: u_i > U + α·σ

wherein U is the average distance of the point clouds in the inner point set, α is an adjustable parameter, σ is the mean square error of the point clouds in the inner point set, and u_i is the average distance from the i-th point cloud to its nearest neighbor points.
In the embodiment of the present invention, the preset condition may also be expressed in an alternative form using another adjustable parameter β in place of α.
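Below is a sketch of the K-neighborhood denoising of step B3 using scipy's k-d tree. It implements the reconstructed condition u_i > U + α·σ; the neighbourhood size k = 8 and α = 1.0 are hypothetical tuning values.

    import numpy as np
    from scipy.spatial import cKDTree

    def k_neighbourhood_denoise(PA7, k=8, alpha=1.0):
        """Sketch of B3: remove points whose mean distance to their k nearest
        neighbours exceeds U + alpha * sigma (statistical outlier removal)."""
        xyz = PA7[:, :3]
        tree = cKDTree(xyz)
        # query k+1 neighbours because the closest one is the point itself
        dists, _ = tree.query(xyz, k=k + 1)
        u = dists[:, 1:].mean(axis=1)       # u_i: mean distance to the k neighbours
        U, sigma = u.mean(), u.std()        # average distance and mean square error
        return PA7[u <= U + alpha * sigma]  # optimised motion difference cloud (PA8)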
S4, obtaining a motion linear point cloud model according to the optimized motion difference point cloud set by using a point cloud shrinkage algorithm based on a Laplacian operator;
preferably, step S4 is specifically:
d1, performing proximity search on the optimized motion difference point cloud set by using a spatial binary tree method to obtain at least one nearest neighbor point and obtain a nearest neighbor point set;
d2, calculating to obtain an initial motion linear point cloud model according to the nearest neighbor point set by using a point cloud shrinkage algorithm based on a Laplacian operator; wherein, the calculation formula is expressed as:
[ W_L·L ; W_H ] · PA8′ = [ 0 ; W_H·PA8 ]   (the two blocks are stacked row-wise and the system is solved for PA8′)

wherein PA8 is the point cloud before contraction, PA8′ is the point cloud after contraction, W_L is the diagonal matrix controlling the degree of shrinkage, W_H is the diagonal matrix controlling the degree of point cloud retention, and L is the Laplace matrix with cotangent weights, satisfying:

L_ij = ω_ij·( cot α_ij + cot β_ij )  if j ∈ N(i);   L_ii = −Σ_{k ∈ N(i)} L_ik;   L_ij = 0  otherwise

wherein cot(·) is the cotangent function, L_ij is the element in row i and column j of the Laplace matrix, ω_ij is the residual weight corresponding to the element in row i and column j, α_ij and β_ij are the exterior angle values constructed by the element in row i and column j with its left and right elements (their sum is 180 degrees), and N(i) is the neighborhood element set of the diagonal element in row i and column i;
d3, iteratively updating a diagonal matrix used for controlling the shrinkage degree and a diagonal matrix used for controlling the point cloud retention degree, judging whether a fourth preset condition is met, if so, ending the iterative updating, and obtaining a motion linear point cloud model according to the current iterative result; otherwise, the iterative updating is continued.
Preferably, the updating formulas of the diagonal matrix for controlling the degree of shrinkage and the diagonal matrix for controlling the degree of point cloud retention in step D3 are respectively expressed as:
W_L^{t+1} = s_L · W_L^t,   W_{H,i}^{t+1} = W_{H,i}^0 · √( S_i^0 / S_i^t )

wherein W_L^{t+1} is the diagonal matrix for controlling the degree of shrinkage at iteration number t+1; W_L^t is the diagonal matrix for controlling the degree of shrinkage at the current iteration number t; s_L is a fixed amplification factor applied to the shrink weights at each iteration; S_i^t is the current neighborhood of the i-th contraction point p_i in the initial motion linear point cloud model; S_i^0 is the original neighborhood of the i-th contraction point p_i in the initial motion linear point cloud model; W_{H,i}^{t+1} is the diagonal entry for controlling the point cloud retention degree corresponding to the i-th contraction point p_i at iteration number t+1; and W_{H,i}^0 is the diagonal entry for controlling the point cloud retention degree corresponding to the i-th contraction point p_i at the initial iteration.
Preferably, the fourth preset condition in step D3 is represented as:
Condition 4: the stopping criterion defined on the iteration number threshold t and the latest neighborhood S_i^t of each contraction point

wherein t is the iteration number threshold, and S_i^t is the latest neighborhood of the i-th contraction point p_i in the initial motion linear point cloud model.
In the embodiment of the present invention, the iterative update specifically includes:
1) calculate the latest contracted point cloud according to the current iteration number; the calculation formula is:

[ W_L^t·L^t ; W_H^t ] · PA8^{t+1} = [ 0 ; W_H^t·PA8^t ]

wherein W_L^t is the diagonal matrix for controlling the degree of shrinkage at the current iteration number t; W_H^t is the diagonal matrix for controlling the point cloud retention degree at the current iteration number t; L^t is the Laplace matrix at the current iteration number t; PA8^t is the contracted point cloud at the current iteration number t; PA8^{t+1} is the point cloud searched at the latest iteration number t+1, namely the latest contracted point cloud; t is the iteration number, representing the depth of the binary-tree traversal of the point cloud;

2) according to the update formulas

W_L^{t+1} = s_L·W_L^t,   W_{H,i}^{t+1} = W_{H,i}^0·√( S_i^0 / S_i^t )

update the values of W_L and W_H respectively, wherein S_i^0 and S_i^t are respectively the original neighborhood and the latest neighborhood of the point p_i;

3) judge according to Condition 4; if it is satisfied, stop the iteration; otherwise continue the iteration, recalculating the new contracted point cloud PA8^{t+1} and calculating the new Laplace matrix L^{t+1}.
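The sketch below illustrates the contraction solve of D2–D3 in a strongly simplified form: it builds a k-nearest-neighbour graph Laplacian with uniform weights instead of the patent's cotangent weights, solves the stacked least-squares system per coordinate with scipy, and amplifies the shrink weight each pass as a stand-in for the W_L / W_H update rules; k, s_L and the iteration count are assumptions.

    import numpy as np
    from scipy.spatial import cKDTree
    from scipy import sparse
    from scipy.sparse.linalg import lsqr

    def contract_once(points, k=10, w_l=1.0, w_h=1.0):
        """One contraction step: solve [w_l*L; w_h*I] X' = [0; w_h*X] in least
        squares. Uniform k-NN weights simplify the cotangent Laplacian."""
        points = np.asarray(points, dtype=float)
        n = len(points)
        _, nbrs = cKDTree(points).query(points, k=k + 1)
        rows = np.repeat(np.arange(n), k)
        cols = nbrs[:, 1:].ravel()
        W = sparse.coo_matrix((np.full(n * k, 1.0 / k), (rows, cols)), shape=(n, n))
        L = W - sparse.identity(n)                        # rows sum to zero
        A = sparse.vstack([w_l * L, w_h * sparse.identity(n)]).tocsr()
        b = np.vstack([np.zeros((n, 3)), w_h * points])
        return np.column_stack([lsqr(A, b[:, d])[0] for d in range(3)])

    def contract_to_skeleton(points, iters=5, s_l=2.0):
        """Repeat the contraction, amplifying the shrink weight each iteration
        (a simplified stand-in for the update formulas above)."""
        X, w_l = np.asarray(points, dtype=float), 1.0
        for _ in range(iters):
            X = contract_once(X, w_l=w_l)
            w_l *= s_l
        return X                                          # linear (skeletal) cloud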
And S5, constructing a human body joint point identification template according to the motion linear point cloud model.
Preferably, step S5 specifically includes the following sub-steps:
s51, constructing a human limb endpoint set according to the motion linear point cloud model;
preferably, step S51 specifically includes the following sub-steps:
s511, initializing cloud serial numbers of all points and human body joint point serial numbers in the motion linear point cloud model, respectively screening the point clouds in the motion linear point cloud model to obtain a first nearest neighbor point and a second nearest neighbor point related to the point clouds, and constructing a nearest neighbor point set related to the point clouds according to the first nearest neighbor point and the second nearest neighbor point related to the point clouds;
preferably, step S511 specifically includes:
initializing the serial number of each point cloud and the serial number of the human body joint points in the motion linear point cloud model; for a selected point cloud, searching its nearest neighbors in the motion linear point cloud model to obtain its first nearest neighbor point, then searching the nearest neighbors of that first nearest neighbor point to obtain its own first nearest neighbor point, which is taken as the second nearest neighbor point of the selected point cloud; the first nearest neighbor point and the second nearest neighbor point of the selected point cloud together form the nearest neighbor point set related to that point cloud.
S512, calculating the Euclidean space vector included angle of each nearest neighbor point set according to the nearest neighbor point set related to each point cloud, judging whether the Euclidean space vector included angle meets a fifth preset condition, if so, entering the step S513, otherwise, constructing the nearest neighbor point set related to each point cloud by using the nearest neighbor point of the next level, and returning to the step S511;
in the embodiment of the present invention, the first and second substrates,
let vector quantity
Figure 886197DEST_PATH_IMAGE060
=
Figure 58553DEST_PATH_IMAGE061
(Vector)
Figure 213590DEST_PATH_IMAGE062
The euclidean space vector angle of the three points is expressed as:
Figure 420581DEST_PATH_IMAGE063
(ii) a The method comprises the following steps:
Figure 2872DEST_PATH_IMAGE064
is an included angle of two vectors; and judging whether the included angle of the Euclidean space vectors of the three points is less than 90 degrees, if so, entering the step S513, otherwise, constructing a nearest neighbor point set related to each point cloud by using the nearest neighbor point of the next level, specifically: recording or else reselecting point
Figure 662523DEST_PATH_IMAGE065
As the next nearest neighbor
Figure 621252DEST_PATH_IMAGE066
Returning to step S511;
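A small helper for this test: because arccos is monotonically decreasing, the Euclidean space vector angle is below 90 degrees exactly when the dot product of the two vectors is positive, so the check does not require evaluating arccos. The vector construction (point to first neighbour, first neighbour to second neighbour) follows the reading given above and is an assumption.

    import numpy as np

    def vector_angle(p, p1, p2):
        """Angle in degrees between the vectors p->p1 and p1->p2."""
        v1, v2 = p1 - p, p2 - p1
        cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

    def bends_less_than_90(p, p1, p2):
        """Fifth preset condition: the angle is < 90 deg iff the dot product > 0."""
        return np.dot(p1 - p, p2 - p1) > 0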
s513, performing nearest neighbor point search on the second nearest neighbor point of each point cloud to obtain a third nearest neighbor point of each point cloud, judging whether the third nearest neighbor point of each point cloud meets a sixth preset condition, if so, updating the sequence number in the point cloud in the motion linear point cloud model, and returning to the step S511; otherwise, go to step S514;
in the embodiment of the invention
Figure 430552DEST_PATH_IMAGE067
Linear point cloud model in motion
Figure 449323DEST_PATH_IMAGE068
In the method, neighbor search is carried out to obtain points
Figure 596271DEST_PATH_IMAGE067
Nearest neighbors of
Figure 93111DEST_PATH_IMAGE069
Determine nearest neighbor
Figure 9115DEST_PATH_IMAGE069
Whether or not to satisfy the nearest neighbor
Figure 198788DEST_PATH_IMAGE070
Is 0, i.e.:
Figure 833031DEST_PATH_IMAGE071
if the motion linear point cloud model meets the requirement, updating the point cloud serial number in the motion linear point cloud model, namely: and adding 1 to the point cloud serial number and returning to the step S511.
S514, recording the third nearest neighbor point of the point cloud as a subset of the human body limb endpoint set, and adding 2 to the serial number of the point cloud to serve as an auxiliary value of the subset; judging whether the serial number of the current human body joint point meets a seventh preset condition or not; if so, ending the iteration to obtain a human body limb end point set; otherwise, adding 1 to the serial number of the current human body joint point, and returning to the step S511.
In the embodiment of the invention, the method specifically comprises the following steps: record the point p_i3 as an element of the human limb endpoint set P5, i.e. add p_i3 to P5, and at the same time record the point cloud serial number plus 2 as the depth (auxiliary value) of that element; then judge whether the serial number of the current human body joint point is 5, namely whether the number of subsets in the human body limb endpoint set is reached; if so, end the iteration to obtain the human body limb endpoint set; otherwise, update the serial number of the current human body joint point, namely add 1 to it, and return to step S511.
And S52, constructing a human body joint point identification template according to the human body limb endpoint set.
Step S52 specifically includes:
Traversing the human body limb endpoint set, the subset corresponding to the minimum spatial Z-axis coordinate value is taken as the foot joint point; the subset corresponding to the minimum depth value is taken as the head joint point; the remaining screened subsets are taken as hand joint point data, and an eighth preset condition is constructed to distinguish the hand joint point data, thereby obtaining the human body joint point identification template; wherein the eighth preset condition is expressed as:

Condition 8: a comparison of the spatial coordinates (x_m, y_m, z_m) and (x_n, y_n, z_n) of the two remaining endpoints P5_m and P5_n

wherein (x_m, y_m, z_m) is the spatial coordinate of P5_m in the human body limb endpoint set and (x_n, y_n, z_n) is the spatial coordinate of P5_n in the human body limb endpoint set; if the eighth preset condition is met, P5_m is the left-hand joint point data and P5_n is the right-hand joint point data; otherwise P5_m is the right-hand joint point data and P5_n is the left-hand joint point data.
In the embodiment of the invention, the distribution rule of the data of 15 joint points of the human body can be summarized by inquiring the relevant anatomical data, namely the head, the neck, the right shoulder, the left shoulder, the right elbow, the left elbow, the right hand, the left hand, the trunk, the right hip, the left hip, the right knee, the left knee, the right foot and the left foot, and assuming that the length from the head to the neck is 1.5a, the rule of Euclidean space distances of the other joint points is as follows:
1) the distance from the left foot to the left knee is 2a, and the distance from the right foot to the right knee is 2 a;
2) the distance from the left knee to the left hip is 2.3a, and the distance from the right knee to the right hip is 2.3 a;
3) the distance from the left hand to the left elbow is 2.1a, and the distance from the right hand to the right elbow is 2.1 a;
4) the distance from the left shoulder to the neck is a, and the distance from the right shoulder to the neck is a;
5) the distance from the left shoulder to the left elbow is 1.5a, and the distance from the right shoulder to the right elbow is 1.5a
6) The distance from the trunk to the left hip is 1.2a, and the distance from the trunk to the right hip is 1.2 a;
7) the distance from the trunk to the neck is 2.2 a;
According to the above human body joint point data rules and the known five-limb point cloud data (head, left hand, right hand, left foot, right foot), the human body linear point cloud model is traversed and the human body joint points meeting the rule conditions are selected as available data, yielding the 15-joint human body point cloud data set P15 = {head, neck, right shoulder, left shoulder, right elbow, left elbow, right hand, left hand, trunk, right hip, left hip, right knee, left knee, right foot, left foot}.
And S6, recognizing the human motion by using the human joint point recognition template.
In the embodiment of the invention, 4 upper body angle characteristics and 4 lower body angle characteristics are required to be selected for human motion recognition, and the angle size of the upper body angle characteristics and the angle size of the lower body angle characteristics are taken as characteristic data of 6 standard actions, so that rapid action recognition can be carried out; the selection of 8 angular features is shown in Table 1-1:
Table 1-1. Selection of the angle features between the joint points (four upper-body and four lower-body angle features a1–a8)
Taking fig. 2, the schematic diagram of the left arm joint angle, and taking the left arm joint angle (feature a2) as an example: let the left shoulder joint point be A with spatial coordinate (x_A, y_A, z_A), the left elbow joint point be B with spatial coordinate (x_B, y_B, z_B), and the left hand joint point be C with spatial coordinate (x_C, y_C, z_C). Then the angle θ between the vector BA and the vector BC is solved as:

BA = ( x_A − x_B, y_A − y_B, z_A − z_B ),   BC = ( x_C − x_B, y_C − y_B, z_C − z_B )

θ = arccos( ( BA · BC ) / ( |BA| · |BC| ) )
These angle features determine the motion characteristics of the human body motion, enabling rapid template-matching motion recognition. The angle feature table corresponding to the six reference actions is shown in Table 1-2:
Table 1-2. Angle feature table corresponding to the six reference actions
1) First, define the feature angle set of the first reference action as A = {a1, a2, …, a8} and the input real-time motion feature angle set as B = {b1, b2, …, b8}; take the difference of the two data sets to obtain C = A − B = {c1, c2, …, c8};
2) calculate the normalized angle c of C:

c = Σ_{i=1}^{8} w_i · | c_i |

wherein w_i is the weight of the i-th characteristic angle and satisfies Σ_{i=1}^{8} w_i = 1 (since the angle data do not differ greatly in size, w_i = 1/8 is taken here); record the value of c;
3) replace A with the reference characteristic angle set of the next action and repeat the above steps, executing 6 times in total;
4) put the above 6 values of c into the array D and take out the minimum value c_min;
the reference action corresponding to c_min is regarded as the motion recognition result.
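A sketch of this matching loop: compute the weighted absolute angle difference against each of the six reference actions and return the action with the smallest value. The equal weights w_i = 1/8 follow the embodiment; the reference angle table (Table 1-2) is assumed to be supplied by the caller, since it is only given as an image.

    import numpy as np

    def recognise_action(realtime_angles, reference_table, weights=None):
        """realtime_angles: length-8 array B of measured feature angles.
        reference_table: dict {action name: length-8 array A of reference angles}.
        Returns the action whose normalised weighted angle difference is minimal."""
        b = np.asarray(realtime_angles, dtype=float)
        w = np.full(8, 1.0 / 8) if weights is None else np.asarray(weights, dtype=float)
        scores = {}
        for action, ref in reference_table.items():
            c = np.asarray(ref, dtype=float) - b            # difference set C = A - B
            scores[action] = float(np.sum(w * np.abs(c)))   # normalised angle
        return min(scores, key=scores.get), scores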
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and implementation of the present invention are explained herein with reference to specific embodiments; the description of the embodiments is only intended to help understand the method and core idea of the invention. Meanwhile, for a person skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation of the present invention.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (9)

1. A motion recognition method based on an RGB-D camera is characterized by comprising the following steps:
s1, collecting an RGB-D image by using an RGB-D camera, and preprocessing the RGB-D image to obtain a preprocessed point cloud;
s2, obtaining motion difference point cloud according to the preprocessed point cloud;
s3, optimizing the motion difference point cloud set to obtain an optimized motion difference point cloud set;
s4, obtaining a motion linear point cloud model according to the optimized motion difference point cloud set by using a point cloud shrinkage algorithm based on a Laplacian operator;
s5, constructing a human body joint point identification template according to the motion linear point cloud model;
s6, recognizing human motion by using a human joint point recognition template;
step S3 specifically includes the following sub-steps:
b1, segmenting each point cloud in the motion difference point cloud set by using a threshold segmentation method to obtain a segmented point cloud data set;
b2, screening the segmented point cloud data set by using a random sampling consistency algorithm to obtain an inner point set;
b3, denoising the inner point set by using a K neighborhood denoising method to obtain the optimized motion difference point cloud.
2. The RGB-D camera-based motion recognition method according to claim 1, wherein the step S1 specifically includes:
the method comprises the steps of collecting two groups of RGB-D images at a preset time interval by using an RGB-D camera, converting the RGB-D images into at least one-dimensional point cloud, and obtaining the preprocessed point cloud.
3. The RGB-D camera-based motion recognition method of claim 1, wherein the step S2 includes the following sub-steps:
s21, performing intersection processing on the preprocessed point clouds;
s22, homogenizing the point cloud after intersection processing to obtain an average point cloud data set;
and S23, performing complement-set processing on the homogenized point cloud data by using a preset Euclidean distance threshold value to obtain the motion difference point cloud.
4. The RGB-D camera-based motion recognition method according to claim 3, wherein the step S21 specifically includes:
a1, extracting point cloud data with the same serial number in the preprocessed point clouds, judging whether the coordinates of the point cloud data with the same serial number are consistent, if so, recording the point cloud data with the same coordinates as subdata of an intersection point cloud, and entering the step A2; otherwise go to step A3;
a2, traversing the preprocessed point clouds, and constructing a point cloud intersection according to the subdata of the intersection point cloud;
and A3, adding 1 to the serial number of the preprocessed point cloud data, returning to the step A1, re-extracting the point cloud data with the same serial number in the preprocessed point cloud until a preset iteration condition is met, and ending the iteration loop.
5. The RGB-D camera-based motion recognition method according to claim 3, wherein the step S23 specifically includes:
extracting point cloud data with the same serial number in the average point cloud data set, judging whether the Euclidean distance of the extracted point cloud data meets a preset Euclidean distance threshold value, if so, rejecting the point cloud data, and otherwise, retaining the point cloud data;
traversing the homogenized point cloud data to obtain a complementary set point cloud, and constructing a motion difference point cloud set according to the complementary set point cloud.
6. The RGB-D camera-based motion recognition method according to claim 3, wherein the step B3 includes the following sub-steps:
c1, carrying out proximity search on each point cloud in the internal point set to obtain at least one nearest neighbor point of each point cloud;
c2, calculating the average distance from each point cloud to the nearest neighbor point, wherein the calculation formula is as follows:
u_i = (1/k) · Σ_{j=1}^{k} dist( p_i, p_{ij} )

wherein u_i is the average distance from the i-th point cloud to its nearest neighbor points, k is the number of nearest neighbor points of the i-th point cloud, p_i = (x_i, y_i, z_i) is the spatial coordinate of the i-th point cloud, p_{ij} is the spatial coordinate of the j-th nearest neighbor point of the i-th point cloud, and dist(·,·) is the Euclidean space distance formula for three-dimensional data, dist(p, q) = √( (x_p − x_q)² + (y_p − y_q)² + (z_p − z_q)² );
c3, traversing the inner point set, and calculating the average distance and the mean square error of each point cloud in the inner point set according to the average distance from each point cloud to the nearest neighbor point;
c4, establishing a third preset condition according to the average distance and the mean square error of each point cloud in the inner point set, judging whether each point cloud in the inner point set meets the third preset condition, if so, removing the point cloud, otherwise, retaining the point cloud, and obtaining an optimized motion difference point cloud set, wherein the third preset condition is represented as:
Condition 3: u_i > U + α·σ

wherein U is the average distance of the point clouds in the inner point set, α is an adjustable parameter, σ is the mean square error of the point clouds in the inner point set, and u_i is the average distance from the i-th point cloud to its nearest neighbor points.
7. The RGB-D camera-based motion recognition method according to claim 1, wherein the step S4 specifically includes:
d1, performing proximity search on the optimized motion difference point cloud set by using a spatial binary tree method to obtain at least one nearest neighbor point and obtain a nearest neighbor point set;
d2, calculating to obtain an initial motion linear point cloud model according to the nearest neighbor point set by using a point cloud shrinkage algorithm based on a Laplacian operator; wherein, the calculation formula is expressed as:
[ W_L·L ; W_H ] · PA8′ = [ 0 ; W_H·PA8 ]   (the two blocks are stacked row-wise and the system is solved for PA8′)

wherein PA8 is the point cloud before contraction, PA8′ is the point cloud after contraction, W_L is the diagonal matrix controlling the degree of shrinkage, W_H is the diagonal matrix controlling the degree of point cloud retention, and L is the Laplace matrix with cotangent weights, satisfying:

L_ij = ω_ij·( cot α_ij + cot β_ij )  if j ∈ N(i);   L_ii = −Σ_{k ∈ N(i)} L_ik;   L_ij = 0  otherwise

wherein cot(·) is the cotangent function, L_ij is the element in row i and column j of the Laplace matrix, ω_ij is the residual weight corresponding to the element in row i and column j, α_ij and β_ij are the exterior angle values constructed by the element in row i and column j with its left and right elements, and N(i) is the neighborhood element set of the diagonal element in row i and column i;
d3, iteratively updating a diagonal matrix used for controlling the shrinkage degree and a diagonal matrix used for controlling the point cloud retention degree, judging whether a fourth preset condition is met, if so, ending the iterative updating, and obtaining a motion linear point cloud model according to the current iterative result; otherwise, the iterative updating is continued.
8. The RGB-D camera-based motion recognition method according to claim 1, wherein the step S5 specifically includes the following sub-steps:
s51, constructing a human limb endpoint set according to the motion linear point cloud model;
and S52, constructing a human body joint point identification template according to the human body limb endpoint set.
9. The RGB-D camera-based motion recognition method of claim 8, wherein the step S51 includes the following sub-steps:
s511, initializing cloud serial numbers of all points and human body joint point serial numbers in the motion linear point cloud model, respectively screening the point clouds in the motion linear point cloud model to obtain a first nearest neighbor point and a second nearest neighbor point related to the point clouds, and constructing a nearest neighbor point set related to the point clouds according to the first nearest neighbor point and the second nearest neighbor point related to the point clouds;
s512, calculating the Euclidean space vector included angle of each nearest neighbor point set according to the nearest neighbor point set related to each point cloud, judging whether the Euclidean space vector included angle meets a fifth preset condition, if so, entering the step S513, otherwise, constructing the nearest neighbor point set related to each point cloud by using the nearest neighbor point of the next level, and returning to the step S511;
s513, performing nearest neighbor point search on the second nearest neighbor point of each point cloud to obtain a third nearest neighbor point of each point cloud, judging whether the third nearest neighbor point of each point cloud meets a sixth preset condition, if so, updating the sequence number in the point cloud in the motion linear point cloud model, and returning to the step S511; otherwise, go to step S514;
s514, recording the third nearest neighbor point of the point cloud as a subset of the human body limb endpoint set, and adding 2 to the serial number of the point cloud to serve as an auxiliary value of the subset; judging whether the serial number of the current human body joint point meets a seventh preset condition or not; if so, ending the iteration to obtain a human body limb end point set; otherwise, adding 1 to the serial number of the current human body joint point, and returning to the step S511.
CN202210506139.8A 2022-05-11 2022-05-11 Motion recognition method based on RGB-D camera Active CN114612524B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210506139.8A CN114612524B (en) 2022-05-11 2022-05-11 Motion recognition method based on RGB-D camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210506139.8A CN114612524B (en) 2022-05-11 2022-05-11 Motion recognition method based on RGB-D camera

Publications (2)

Publication Number Publication Date
CN114612524A CN114612524A (en) 2022-06-10
CN114612524B true CN114612524B (en) 2022-07-29

Family

ID=81869634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210506139.8A Active CN114612524B (en) 2022-05-11 2022-05-11 Motion recognition method based on RGB-D camera

Country Status (1)

Country Link
CN (1) CN114612524B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597974A (en) * 2020-05-14 2020-08-28 哈工大机器人(合肥)国际创新研究院 Monitoring method and system based on TOF camera for personnel activities in carriage

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9898856B2 (en) * 2013-09-27 2018-02-20 Fotonation Cayman Limited Systems and methods for depth-assisted perspective distortion correction
US20170273665A1 (en) * 2016-03-28 2017-09-28 Siemens Medical Solutions Usa, Inc. Pose Recovery of an Ultrasound Transducer
JP2018119833A (en) * 2017-01-24 2018-08-02 キヤノン株式会社 Information processing device, system, estimation method, computer program, and storage medium
WO2019006760A1 (en) * 2017-07-07 2019-01-10 深圳市大疆创新科技有限公司 Gesture recognition method and device, and movable platform
CN108573231B (en) * 2018-04-17 2021-08-31 中国民航大学 Human body behavior identification method of depth motion map generated based on motion history point cloud
WO2019217903A1 (en) * 2018-05-11 2019-11-14 Visionairy Health, Inc. Automated screening of medical data
CN110910426A (en) * 2019-11-26 2020-03-24 爱菲力斯(深圳)科技有限公司 Action process and action trend identification method, storage medium and electronic device
CN113011381B (en) * 2021-04-09 2022-09-02 中国科学技术大学 Double-person motion recognition method based on skeleton joint data
CN114038011A (en) * 2021-11-08 2022-02-11 沈阳理工大学 Method for detecting abnormal behaviors of human body in indoor scene

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597974A (en) * 2020-05-14 2020-08-28 哈工大机器人(合肥)国际创新研究院 Monitoring method and system based on TOF camera for personnel activities in carriage

Also Published As

Publication number Publication date
CN114612524A (en) 2022-06-10

Similar Documents

Publication Publication Date Title
Melekhov et al. Dgc-net: Dense geometric correspondence network
CN110378997B (en) ORB-SLAM 2-based dynamic scene mapping and positioning method
CN110084243B (en) File identification and positioning method based on two-dimensional code and monocular camera
CN109544677A (en) Indoor scene main structure method for reconstructing and system based on depth image key frame
WO2015017941A1 (en) Systems and methods for generating data indicative of a three-dimensional representation of a scene
Li et al. Fast visual odometry using intensity-assisted iterative closest point
CN111998862B (en) BNN-based dense binocular SLAM method
CN112734652B (en) Near-infrared blood vessel image projection correction method based on binocular vision
CN111679661A (en) Semantic map construction method based on depth camera and sweeping robot
Ozbay et al. A hybrid method for skeleton extraction on Kinect sensor data: Combination of L1-Median and Laplacian shrinking algorithms
CN111192194A (en) Panoramic image splicing method for curtain wall building vertical face
CN113936210A (en) Anti-collision method for tower crane
CN117078982B (en) Deep learning-based large-dip-angle stereoscopic image alignment dense feature matching method
CN108597589B (en) Model generation method, target detection method and medical imaging system
Zhou Application and Analysis of Computer Vision Algorithms in Graphics and Image Processing
CN114612524B (en) Motion recognition method based on RGB-D camera
CN116563104A (en) Image registration method and image stitching method based on particle swarm optimization
CN115564915A (en) Map construction method and device for environment digital area of transformer substation
Zhang et al. Stereo matching cost computation based on nonsubsampled contourlet transform
Du et al. A high-precision vision-based mobile robot slope detection method in unknown environment
Liu et al. Deep learning of volumetric representation for 3D object recognition
Li et al. A real-time pineapple matching system based on speeded-up robust features
CN113379663B (en) Space positioning method and device
Hu et al. Research and implementation of 3D reconstruction algorithm for multi-angle monocular garment image
Persad et al. Automatic co-registration of pan-tilt-zoom (PTZ) video images with 3D wireframe models

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231013

Address after: 610000 Chengdu, Sichuan Province, China (Sichuan) Free Trade Pilot Zone

Patentee after: Sichuan Bawei Jiuzhang Technology Co.,Ltd.

Address before: 610031 section 1, 2nd Ring Road North, Jinniu District, Chengdu City, Sichuan Province

Patentee before: SOUTHWEST JIAOTONG University

Patentee before: Sichuan Bawei Jiuzhang Technology Co.,Ltd.