CN109785389A - A kind of three-dimension object detection method based on Hash description and iteration closest approach - Google Patents
Abstract
The invention discloses a three-dimensional object detection method based on hash description and the iterative closest point. By detecting the pose of a target object in three-dimensional data, it allows robots and similar power tools to complete richer and more repeatable job tasks, so that production can shift from structured toward unstructured scenarios. A hash description is used to index the three-dimensional data, which reduces matching against invalid data and accelerates feature matching. A point-to-tangent-plane iterative closest point method then makes the matching more reliable and the precision higher.
Description
Technical field
The present invention relates to the field of three-dimensional object recognition in machine vision, and in particular to a three-dimensional object detection method based on hash description and the iterative closest point.
Background art
Methods that perceive the working scene through machine vision are widely used in fields such as parcel sorting, object ranging and automatic assembly. Recognition and localization in two-dimensional images in particular have matured and can satisfy most current automated production requirements. With the advance of Industry 4.0, however, the degree of automation demanded of manufacturing keeps rising, and recognition and localization based on two-dimensional images alone become inadequate. As three-dimensional vision sensor technology matures and its price falls, more and more applications are beginning to acquire the three-dimensional information of the working scene with three-dimensional vision sensors and to detect the position and attitude of targets in the scene from that information, i.e. to perform pose recognition.
Prior-art methods for recognizing the pose of a three-dimensional object need to extract and match a large amount of feature information in the three-dimensional data; recognition is slow and localization accuracy low, so fast and accurate target pose recognition cannot be achieved.
Summary of the invention
The purpose of the present invention is to provide a three-dimensional object detection method based on hash description and the iterative closest point, which reduces the time complexity of target feature search and improves recognition and localization accuracy.
The above purpose of the invention is achieved by the following technical scheme:
A three-dimensional object detection method based on hash description and the iterative closest point, involving a depth camera and a PC, comprises the following steps:
S1. The depth camera acquires the scene data of the object to be detected;
S2. The depth camera emits infrared light into the scene, and the infrared receiver in the depth camera receives the infrared light reflected by the scene to generate the three-dimensional point data of the scene, i.e. the XYZ coordinates of each reflecting point relative to the depth-camera coordinate system;
S3. The depth camera passes the collected scene data to the PC, which saves it in PLY format;
S4. The target model M of the object to be detected is extracted from the PLY scene data obtained in S3;
S5. The PC computes the covariance matrix of each point of the three-dimensional model point cloud and its eigenvectors to obtain the normal vector of each point;
S6. The point-pair features of the target model M are generated from the normal vectors of S5;
S7. All point-pair features of the target model M are converted into hash values to generate the hash description of the target model M;
S8. The scene S is processed with the depth camera and the PC to obtain the hash values of scene S; the hash description of the target model M is accessed to quickly find matching similar pairs, and the matching results are converted into Hough voting values;
S9. The Hough values are counted by a global Hough voting accumulator; the highest-scoring Hough value is taken as the optimal Hough value of scene S, and an initial detection result for the target model M is generated from the optimal Hough value;
S10. The initial detection result is corrected a second time with the iterative closest point algorithm to obtain an accurate object pose detection result.
Further, in step S4 the scene data are defined in the coordinate system of the depth sensor, each point having a coordinate value (x, y, z). The number of occurrences of each z value over all points is counted, and the points whose z value occurs most often are removed as background, which yields the target model M of the object to be detected.
Further, in step S5 the normal vector ni of each three-dimensional data point of the target model M = {m0, m1, m2, ..., mk-1, mk} is obtained. The normal vector of each point can be computed from the neighbouring points around it: to compute the normal of a point, the covariance matrix between that point and its neighbours is needed first. Suppose the normal of the point mi is sought. The covariance matrix C is computed as in formula (1), where R is the radius of a spherical neighbourhood centred on the point mi and the distance di (i ∈ {1, 2, ..., k}) is the Euclidean distance between a neighbouring point mj and the centre point mi. Formula (1) computes the covariance matrix C of mi over all points within the radius R, with m̄ the centroid of those points:

C = Σ_{||mj − mi|| ≤ R} (mj − m̄)(mj − m̄)T (1)

The normal of mi can then be obtained from the covariance matrix: an eigenvalue decomposition of C yields the eigenvalues and eigenvectors of C. For the point cloud of an object surface, the spatial distribution changes least along the normal direction, so the normal ni of the centre point mi is the eigenvector corresponding to the smallest eigenvalue of C. Proceeding in the same way yields the normal vectors of all points.
Further, in step S6 the normal vector of each point is used to compute the point-pair features. A point pi in the three-dimensional point cloud and any other point pj form a point pair, and the normal vectors of the two points together with the Euclidean distance between them form the point-pair feature of the pair, computed as in formula (2):

F(pi, pj) = (||d||, ∠(ni, d), ∠(nj, d), ∠(ni, nj)), with d = pj − pi (2)

where d denotes the direction of the line between the two points, ||d|| the Euclidean distance between them, ∠(ni, d) the angle between the normal vector of pi and the line direction, and ∠(ni, nj) the angle between the two normal vectors. The angle between two vectors a and b is computed as in formula (3):

∠(a, b) = arccos(a·b / (||a|| ||b||)) (3)
Further, in step S7 all point-pair features of the target model M are converted into hash values to generate the hash description of the target model M. For any point pair (pi, pj) ∈ p of the point cloud p, the point-pair feature is given by formula (2); formula (4) converts a point-pair feature into a hash value:

F(p1, p2)_discretized = (⌊f1/d_dist⌋, ⌊f2/d_angle⌋, ⌊f3/d_angle⌋, ⌊f4/d_angle⌋) (4)

where d_dist is the sampling step length, d_angle the sampling angle, d_angle = 2π/N_angle, and N_angle the angle granularity; the user can adjust N_angle according to the actual results, and it is usually set to 30. d_dist and d_angle discretize the point-pair feature so that similar features convert into identical feature values. After the feature F is discretized into F(p1, p2)_discretized = F(f1, f2, f3, f4), its hash value is obtained by formula (5):
Index = P1*f1 + P2*f2 + P3*f3 + f4 (5)
P1, P2 and P3 in formula (5) are three different prime numbers, chosen so that the hash values after conversion are distinct. The hash values of all point pairs are counted, point pairs with identical hash values are stored together through the hash table, and all hash values form the hash table, which is called the hash description of the point cloud data.
Further, in step S8 Hough voting values are generated from the hash description. Let M be the target model and S the three-dimensional point cloud data set of the scene. Using the hash description of the model point cloud generated in step S7, the point pairs of the scene data set S are converted into hash values and the hash description of the target model M is accessed, achieving fast matching of similar features; the matching results are then converted into Hough values for a global Hough vote.
Further, in step S9 the Hough values are counted by the global Hough voting accumulator and the highest-scoring Hough value is taken as the optimal Hough value of scene S, from which the initial detection result for the target model M is generated. In a Hough voting value (mr, α), mr is obtained directly from (mr, mi), while the angle α is computed by formula (6). A world coordinate system W is defined; in formula (6), T_{s→w} and T_{m→w} describe the rigid motions that move the points sr and mr to the origin of the world unit coordinate system and align their normal vectors with the main axis x of that system:

T_{s→w} · si = Rx(α) · T_{m→w} · mi (6)

For point pairs with identical hash values, (mr, mi) ∈ M and (sr, si) ∈ S, the normal vectors of sr and mr are first rotated to be parallel to, and pointing along, the main axis x of the world unit coordinate system W, and the positions of the two points are then moved to coincide with its origin. The pair (mr, mi) then only needs to rotate by the angle α about the x axis of the world unit coordinate system to coincide with the pair (sr, si). Once T_{s→w}·si and T_{m→w}·mi have been computed in the world unit coordinate system, α is solved from formula (6), and the Hough voting value (mr, α) of the scene point sr is complete;
Next the optimal Hough voting value is elected. An n × m global Hough voting accumulator is established to collect the number of votes for each Hough voting value. The rows of the two-dimensional accumulator correspond to the reference points filtered out of the model point cloud, so the number of rows n equals the number of reference points, while the number of columns m is determined by the angle granularity N_angle; when N_angle = 30 the voting accumulator has 30 columns;
A scene reference point sr forms point pairs with the other scene points si, and each scene point pair may retrieve several similar model point pairs (mr, mi); these pairs indicate positions where (sr, si) may occur in the target model M. Computing the angle α between the scene pair (sr, si) and each similar pair (mr, mi) yields several Hough voting values (mr, α) for the scene point sr, and the corresponding position in the voting accumulator is incremented by one. Once the Hough voting values of all pairs formed by the current scene reference point and the other scene points have been computed, the peak of the voting accumulator is taken as the optimal Hough voting value of the current scene point. Substituting the optimal Hough voting result into formula (7) yields the initial pose detection result RigidTrans between the target model M and the scene S:

RigidTrans = T_{s→w}^{-1} · Rx(α) · T_{m→w} (7)
Further, in step S10 the initial detection result is corrected a second time with the iterative closest point algorithm. The initial detection result reflects the pose relation between the target model M and the scene S; since it already satisfies the condition that the reference poses are close, the iterative closest point algorithm can be linearized, so that it runs faster while maintaining precision;
The second correction of the initial detection result uses the point-to-tangent-plane iterative closest point algorithm, whose error function takes as optimization target the minimization of the mean square error of the model points to the tangent planes of the corresponding scene points;
The error function is shown in formula (8):

Mopt = argmin_M Σ_i ((M·mi − si) · ni)² (8)

In formula (8), mi = (mix, miy, miz, 1)T denotes a point of the model point set, si = (six, siy, siz, 1)T denotes the point corresponding to mi in the target point cloud, and ni = (nix, niy, niz, 0)T is the normal vector of the point si;
M is a 4x4 rigid transformation matrix. As formula (8) shows, after the target model M is moved by the transformation matrix M, the moved model point mi is subtracted from the scene point si to obtain a vector describing the displacement difference, and the dot product of this vector with the normal ni of si estimates the distance from the point to the tangent plane. When all model points mi have been moved by the matrix M so that the sum of formula (8) reaches its minimum, the M at hand is Mopt;
Since the pose of the model point set is similar to that of the scene point set, the point-to-tangent-plane iterative closest point algorithm can approximate the nonlinear problem by a linear one, giving the iterative closest point algorithm a simpler mathematical form and a faster running speed;
The transformation matrix M describing displacement and rotation consists of a rotation part R(α, β, γ) and a displacement part T(tx, ty, tz), as shown in formula (9):
M = T(tx, ty, tz) · R(α, β, γ) (9)
Wherein:

R(α, β, γ) = Rz(γ) · Ry(β) · Rx(α) (10)

R(α, β, γ) = [ r11, r12, r13 ; r21, r22, r23 ; r31, r32, r33 ] (11)

Each entry of formula (11) is:
r11 = cosγ cosβ,
r12 = −sinγ cosα + cosγ sinβ sinα,
r13 = sinγ sinα + cosγ sinβ cosα,
r21 = sinγ cosβ,
r22 = cosγ cosα + sinγ sinβ sinα,
r23 = −cosγ sinα + sinγ sinβ cosα,
r31 = −sinβ,
r32 = cosβ sinα,
r33 = cosβ cosα
Rx(α), Ry(β), Rz(γ) denote the rotations about the x, y and z axes respectively. By the small-angle equivalences of the trigonometric functions (sin t ≈ t, cos t ≈ 1) it follows that, when the pose of the model point set is similar to that of the scene point set, the trigonometric functions in the rotation part R can be replaced equivalently, and the rotation part R of M can then be written in the form of formula (12):

R(α, β, γ) ≈ [ 1, −γ, β ; γ, 1, −α ; −β, α, 1 ] (12)

Therefore the transformation matrix M can be expressed as:

M ≈ [ 1, −γ, β, tx ; γ, 1, −α, ty ; −β, α, 1, tz ; 0, 0, 0, 1 ] (13)

At this point formula (8) can also be rewritten as:

Mopt ≈ argmin over (α, β, γ, tx, ty, tz) of Σ_i ((M·mi − si) · ni)² (14)

wherein, for the N relevant point pairs, formula (15) gives one linear equation per pair:

(mi × ni) · (α, β, γ)T + ni · (tx, ty, tz)T = (si − mi) · ni (15)

and the N equations can be written in the form Ax = b, in which:

A = [ (m1 × n1)T n1T ; ... ; (mN × nN)T nNT ], x = (α, β, γ, tx, ty, tz)T (16)

b = ((s1 − m1)·n1, ..., (sN − mN)·nN)T (17)

Solving for Mopt is thus turned into solving for xopt, which is a standard linear optimization problem:

xopt = argmin_x ||A·x − b||² (18)

The solution of formula (18) is completed through an SVD decomposition: decomposing A as A = U Σ VT gives the pseudo-inverse A+ = V Σ+ UT of A, and the linear least-squares solution of formula (18) is:
xopt = A+ · b (19)
This completes the second correction of the estimated spatial pose of the three-dimensional object; xopt is the final recognition result of the spatial pose of the three-dimensional object.
In conclusion, the invention has the following advantages:
1) The hash description is compared by table lookup, which is very fast;
2) The method exploits the respective advantages of the hash description and of the point-to-tangent-plane iterative closest point, compensating for the shortcomings of each, and the originally slow point-to-tangent-plane method is accelerated by linearization.
Detailed description of the invention
In order to more clearly explain the embodiment of the invention or the technical proposal in the existing technology, to embodiment or will show below
There is attached drawing needed in technical description to be briefly described, it should be apparent that, the accompanying drawings in the following description is only this
Some embodiments of invention for those of ordinary skill in the art without creative efforts, can be with
It obtains other drawings based on these drawings.
Fig. 1 shows the measured scene three-dimensional data of the embodiment of the invention and the extracted target point cloud model;
Fig. 2 is a schematic diagram of the point-pair feature of the three-dimensional point cloud of the embodiment;
Fig. 3 is a flow chart of generating the hash description from the point-pair features;
Fig. 4 is a schematic diagram of the world unit coordinate system W of the embodiment;
Fig. 5 is a flow chart of electing the optimal global Hough voting value;
Fig. 6 is a model diagram of the point-to-tangent-plane iterative closest point algorithm;
Fig. 7 shows an example object to be detected;
Fig. 8 shows the example after coarse localization by the hash description;
Fig. 9 shows the example after correction by the iterative closest point.
Specific embodiments
In the following detailed description, numerous details are set forth in order to provide a thorough understanding of the invention. It will be apparent to those skilled in the art, however, that the invention can be practised without some of these details. The following description of the embodiments is provided merely to give a better understanding of the invention by showing examples of it.
The technical solution of the embodiment of the invention is described below with reference to the drawings.
Embodiment:
A three-dimensional object detection method based on hash description and the iterative closest point, involving a depth camera and a PC, comprises the following steps:
S1. The depth camera acquires the scene data of the object to be detected;
S2. The depth camera emits infrared light into the scene, and the infrared receiver in the depth camera receives the infrared light reflected by the scene to generate the three-dimensional point data of the scene, i.e. the XYZ coordinates of each reflecting point relative to the depth-camera coordinate system;
S3. The depth camera passes the collected scene data to the PC, which saves it in PLY format;
S4. The target model M of the object to be detected is extracted from the PLY scene data obtained in S3;
S5. The PC computes the covariance matrix of each point of the three-dimensional model point cloud and its eigenvectors to obtain the normal vector of each point;
S6. The point-pair features of the target model M are generated from the normal vectors of S5;
S7. All point-pair features of the target model M are converted into hash values to generate the hash description of the target model M;
S8. The scene S is processed with the depth camera and the PC to obtain the hash values of scene S; the hash description of the target model M is accessed to quickly find matching similar pairs, and the matching results are converted into Hough voting values;
S9. The Hough values are counted by a global Hough voting accumulator; the highest-scoring Hough value is taken as the optimal Hough value of scene S, and an initial detection result for the target model M is generated from the optimal Hough value;
S10. The initial detection result is corrected a second time with the iterative closest point algorithm to obtain an accurate object pose detection result.
In step S4 the scene data are defined in the coordinate system of the depth sensor, as shown in Fig. 1. Each point has a coordinate value (x, y, z); the number of occurrences of each z value over all points is counted, and the points whose z value occurs most often are removed as background, which yields the target model M of the object to be detected.
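The background-removal rule of step S4 (discard the points whose z value occurs most often) can be sketched as follows; a minimal NumPy illustration, assuming the scene is an N×3 array, with a hypothetical bin width `z_bin` since the method above does not specify how continuous z values are grouped:

```python
import numpy as np

def extract_object(scene, z_bin=0.005):
    """Step S4 sketch: remove the dominant depth plane (e.g. a table top).

    scene : (N, 3) array of XYZ points in the depth-camera frame.
    z_bin : histogram bin width for grouping z values (assumed parameter).
    Returns the points that fall outside the most frequent z bin.
    """
    bins = np.round(scene[:, 2] / z_bin).astype(np.int64)  # discretise z
    values, counts = np.unique(bins, return_counts=True)
    background = values[np.argmax(counts)]                 # most frequent z
    return scene[bins != background]
```

With most scene points lying on a flat support surface, the surviving points form the target model M.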
In step S5 the normal vector ni of each three-dimensional data point of the target model M = {m0, m1, m2, ..., mk-1, mk} is obtained. The normal vector of each point can be computed from the neighbouring points around it: to compute the normal of a point, the covariance matrix between that point and its neighbours is needed first. Suppose the normal of the point mi is sought. The covariance matrix C is computed as in formula (1), where R is the radius of a spherical neighbourhood centred on the point mi and the distance di (i ∈ {1, 2, ..., k}) is the Euclidean distance between a neighbouring point mj and the centre point mi. Formula (1) computes the covariance matrix C of mi over all points within the radius R, with m̄ the centroid of those points:

C = Σ_{||mj − mi|| ≤ R} (mj − m̄)(mj − m̄)T (1)

The normal of mi can then be obtained from the covariance matrix: an eigenvalue decomposition of C yields the eigenvalues and eigenvectors of C. For the point cloud of an object surface, the spatial distribution changes least along the normal direction, so the normal ni of the centre point mi is the eigenvector corresponding to the smallest eigenvalue of C. Proceeding in the same way yields the normal vectors of all points.
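The normal estimation of step S5 can be sketched in NumPy: the covariance of the neighbourhood within radius R is eigen-decomposed and the eigenvector of the smallest eigenvalue is taken as the normal. The brute-force neighbour search below is an illustrative simplification of the radius query (a practical implementation would use a k-d tree):

```python
import numpy as np

def estimate_normal(cloud, i, radius):
    """Estimate the normal of cloud[i] from the covariance of its
    neighbours within `radius`, per formula (1): the normal is the
    eigenvector of C with the smallest eigenvalue."""
    p = cloud[i]
    d = np.linalg.norm(cloud - p, axis=1)
    nbrs = cloud[d <= radius]                 # all points inside the sphere
    centred = nbrs - nbrs.mean(axis=0)
    C = centred.T @ centred / len(nbrs)       # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    return eigvecs[:, 0]                      # smallest-eigenvalue direction
```

The sign of the returned vector is ambiguous; a real pipeline would orient normals consistently, e.g. toward the sensor.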
As shown in Fig. 2, in step S6 the normal vector of each point is used to compute the point-pair features. A point pi in the three-dimensional point cloud and any other point pj form a point pair, and the normal vectors of the two points together with the Euclidean distance between them form the point-pair feature of the pair, computed as in formula (2):

F(pi, pj) = (||d||, ∠(ni, d), ∠(nj, d), ∠(ni, nj)), with d = pj − pi (2)

where d denotes the direction of the line between the two points, ||d|| the Euclidean distance between them, ∠(ni, d) the angle between the normal vector of pi and the line direction, and ∠(ni, nj) the angle between the two normal vectors. The angle between two vectors a and b is computed as in formula (3):

∠(a, b) = arccos(a·b / (||a|| ||b||)) (3)
In step S7 all point-pair features of the target model M are converted into hash values to generate the hash description of the target model M. For any point pair (pi, pj) ∈ p of the point cloud p, the point-pair feature is given by formula (2); formula (4) converts a point-pair feature into a hash value:

F(p1, p2)_discretized = (⌊f1/d_dist⌋, ⌊f2/d_angle⌋, ⌊f3/d_angle⌋, ⌊f4/d_angle⌋) (4)

where d_dist is the sampling step length, d_angle the sampling angle, d_angle = 2π/N_angle, and N_angle the angle granularity, a value set by the user: the larger N_angle is, the smaller d_angle is, i.e. the finer the discrimination between two angles. This value is usually 30. If the sampling angle d_angle is 12°, then angles falling in the same 12° interval, e.g. 0°-12°, are regarded as similar. The user can adjust N_angle according to the actual results; it is usually set to 30. d_dist and d_angle discretize the point-pair feature so that similar features convert into identical feature values. After the feature F is discretized, the hash value of F(p1, p2)_discretized = F(f1, f2, f3, f4) is obtained by formula (5):
Index = P1*f1 + P2*f2 + P3*f3 + f4 (5)
P1, P2, P3 in formula (5) are three different prime numbers, chosen so that the hash values after conversion are distinct, e.g. for F(1, 2, 3, 4) versus F(4, 3, 2, 1). The hash values of all point pairs are counted, point pairs with identical hash values are stored together through the hash table, and all hash values form the hash table, which is called the hash description of the point cloud data, as shown in Fig. 3.
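Steps S6-S7 can be sketched together: compute the point-pair feature of formula (2), discretize it per formula (4), hash it per formula (5), and collect all pairs in a table keyed by the hash value. The three primes below are illustrative values, not taken from the method described above:

```python
import numpy as np
from collections import defaultdict

P1, P2, P3 = 73856093, 19349663, 83492791   # three distinct primes (illustrative)

def ppf(p1, n1, p2, n2):
    """Point-pair feature of formula (2): a distance plus three angles."""
    d = p2 - p1
    f1 = np.linalg.norm(d)
    d = d / f1
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return f1, ang(n1, d), ang(n2, d), ang(n1, n2)

def hash_index(F, d_dist, n_angle=30):
    """Discretise F with steps (d_dist, 2*pi/n_angle), then hash, formula (5)."""
    d_angle = 2 * np.pi / n_angle
    f1 = int(F[0] / d_dist)
    f2, f3, f4 = (int(a / d_angle) for a in F[1:])
    return P1 * f1 + P2 * f2 + P3 * f3 + f4

def build_hash_description(points, normals, d_dist):
    """Hash description of a model: every ordered point pair, keyed by hash."""
    table = defaultdict(list)
    for i in range(len(points)):
        for j in range(len(points)):
            if i != j:
                F = ppf(points[i], normals[i], points[j], normals[j])
                table[hash_index(F, d_dist)].append((i, j))
    return table
```

At query time a scene point pair is hashed the same way and its bucket in the model table yields the candidate similar pairs directly, without scanning all model pairs.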
In step S8, Hough voting values are generated from the hash description. Let M be the target model and S the three-dimensional point cloud data set of the scene. Using the hash description of the model point cloud generated in step S7, the point pairs of the scene data set S are converted into hash values and the hash description of the target model M is accessed, achieving fast matching of similar features; the matching results are then converted into Hough values for the global Hough vote. The Hough value of a matching result is defined as follows:
Some data points in S are chosen at random as reference points. Suppose the target to be detected is present in the scene; then there is a reference point sr ∈ S lying exactly on the surface of the target. In that case there is a point mr ∈ M in the model point cloud data set corresponding to sr ∈ S. Suppose the model point cloud can be translated with mr and rotated about the normal vector of mr. If mr is moved to the position of sr while the normal vectors of mr and sr are made to coincide, the model point cloud then only needs to rotate by α degrees about the normal vector of sr for the scene target and the point cloud template to coincide. Following this reasoning, by aligning the positions and normal vectors of the scene point sr and the model point mr, the translation-rotation transformation taking the point cloud M to the scene S can be represented by the point mr of M together with the angle α. The pair (mr, α) is then called the Hough voting value of the scene reference point sr in the point cloud M.
Suppose a scene point sr lies exactly on the surface of the target object and forms a point pair with another point si on the target surface. The pair (sr, si) is converted into a hash value that indexes the hash description of the target model M, which returns the similar pairs (mr, mi) of (sr, si); from these pairs (mr, mi) the Hough voting values of the point sr can be computed.
In step S9 the Hough values are counted by the global Hough voting accumulator and the highest-scoring Hough value is taken as the optimal Hough value of the scene S, from which the initial detection result for the target model M is generated. In a Hough voting value (mr, α), mr is obtained directly from (mr, mi), while the angle α is computed by formula (6). A world coordinate system W is defined, as shown in Fig. 4; in formula (6), T_{s→w} and T_{m→w} describe the rigid motions that move the points sr and mr to the origin of the world unit coordinate system and align their normal vectors with the main axis x of that system:

T_{s→w} · si = Rx(α) · T_{m→w} · mi (6)

For point pairs with identical hash values, (mr, mi) ∈ M and (sr, si) ∈ S, the normal vectors of sr and mr are first rotated to be parallel to, and pointing along, the main axis x of the world unit coordinate system W, and the positions of the two points are then moved to coincide with its origin. The pair (mr, mi) then only needs to rotate by the angle α about the x axis of the world unit coordinate system to coincide with the pair (sr, si). Once T_{s→w}·si and T_{m→w}·mi have been computed in the world unit coordinate system, α is solved from formula (6), and the Hough voting value (mr, α) of the scene point sr is complete;
Next the optimal Hough voting value is elected, as shown in Fig. 5. An n × m global Hough voting accumulator is established to collect the number of votes for each Hough voting value. The rows of the two-dimensional accumulator correspond to the reference points filtered out of the model point cloud, so the number of rows n equals the number of reference points, while the number of columns m is determined by the angle granularity N_angle; when N_angle = 30 the voting accumulator has 30 columns;
A scene reference point sr forms point pairs with the other scene points si, and each scene point pair may retrieve several similar model point pairs (mr, mi); these pairs indicate positions where (sr, si) may occur in the target model M. Computing the angle α between the scene pair (sr, si) and each similar pair (mr, mi) yields several Hough voting values (mr, α) for the scene point sr, and the corresponding position in the voting accumulator is incremented by one. Once the Hough voting values of all pairs formed by the current scene reference point and the other scene points have been computed, the peak of the voting accumulator is taken as the optimal Hough voting value of the current scene point. Substituting the optimal Hough voting result into formula (7) yields the initial pose detection result RigidTrans between the target model M and the scene S:

RigidTrans = T_{s→w}^{-1} · Rx(α) · T_{m→w} (7)
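The global Hough vote of step S9 can be sketched as a two-dimensional accumulator indexed by model reference point and discretized angle; the flat vote list and the bin-centre recovery of α below are illustrative simplifications:

```python
import numpy as np

def hough_peak(votes, n_ref, n_angle=30):
    """Accumulate (model_ref_index, alpha) votes into an n x m table and
    return the winning (m_r, alpha) cell, as in step S9.

    votes : iterable of (model_ref_index, alpha) pairs, alpha in [0, 2*pi).
    n_ref : number of model reference points (rows n of the accumulator).
    """
    acc = np.zeros((n_ref, n_angle), dtype=np.int64)
    d_angle = 2 * np.pi / n_angle
    for m_r, alpha in votes:
        acc[m_r, int(alpha / d_angle) % n_angle] += 1
    r, c = np.unravel_index(np.argmax(acc), acc.shape)
    return r, (c + 0.5) * d_angle          # bin centre as the alpha estimate
```

The returned pair (m_r, α) is the optimal Hough voting value that is then substituted into formula (7).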
In step S10 the initial detection result is corrected a second time with the iterative closest point algorithm. The initial detection result reflects the pose relation between the target model M and the scene S; since it already satisfies the condition that the reference poses are close, the iterative closest point algorithm can be linearized, so that it runs faster while maintaining precision;
As shown in Fig. 6, the second correction of the initial detection result uses the point-to-tangent-plane iterative closest point algorithm, whose error function takes as optimization target the minimization of the mean square error of the model points to the tangent planes of the corresponding scene points;
The error function is shown in formula (8):

Mopt = argmin_M Σ_i ((M·mi − si) · ni)² (8)

In formula (8), mi = (mix, miy, miz, 1)T denotes a point of the model point set, si = (six, siy, siz, 1)T denotes the point corresponding to mi in the target point cloud, and ni = (nix, niy, niz, 0)T is the normal vector of the point si;
M is a 4x4 rigid transformation matrix. As formula (8) shows, after the target model M is moved by the transformation matrix M, the moved model point mi is subtracted from the scene point si to obtain a vector describing the displacement difference, and the dot product of this vector with the normal ni of si estimates the distance from the point to the tangent plane. When all model points mi have been moved by the matrix M so that the sum of formula (8) reaches its minimum, the M at hand is Mopt;
Since the pose of the model point set is similar to that of the scene point set, the point-to-tangent-plane iterative closest point algorithm can approximate the nonlinear problem by a linear one, giving the iterative closest point algorithm a simpler mathematical form and a faster running speed;
The transformation matrix M describing displacement and rotation consists of a rotation part R(α, β, γ) and a displacement part T(tx, ty, tz), as shown in formula (9):
M = T(tx, ty, tz) · R(α, β, γ) (9)
Wherein:

R(α, β, γ) = Rz(γ) · Ry(β) · Rx(α) (10)

R(α, β, γ) = [ r11, r12, r13 ; r21, r22, r23 ; r31, r32, r33 ] (11)
Each entry of formula (11) is:
r11 = cosγ cosβ,
r12 = −sinγ cosα + cosγ sinβ sinα,
r13 = sinγ sinα + cosγ sinβ cosα,
r21 = sinγ cosβ,
r22 = cosγ cosα + sinγ sinβ sinα,
r23 = −cosγ sinα + sinγ sinβ cosα,
r31 = −sinβ,
r32 = cosβ sinα,
r33 = cosβ cosα
Rx(α), Ry(β) and Rz(γ) denote rotations about the x, y and z axes, respectively. Using the small-angle equivalents of the trigonometric functions (cos θ ≈ 1, sin θ ≈ θ, products of angles ≈ 0), if the pose of the model point set is similar to that of the scene point set, the trigonometric functions in the R part can be replaced equivalently, and the rotation part R of M can then be written in the form of formula (12):
R ≈ [[1, −γ, β], [γ, 1, −α], [−β, α, 1]]  (12)
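The small-angle substitution can be checked numerically: the following sketch builds the full rotation from the r11..r33 entries listed above and compares it with the linearized form of formula (12). Function names are illustrative.

```python
import numpy as np

def rotation_zyx(alpha, beta, gamma):
    """Full rotation R = Rz(gamma) @ Ry(beta) @ Rx(alpha), matching the
    entries r11..r33 listed in formula (11)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    return np.array([
        [cg * cb, -sg * ca + cg * sb * sa,  sg * sa + cg * sb * ca],
        [sg * cb,  cg * ca + sg * sb * sa, -cg * sa + sg * sb * ca],
        [-sb,      cb * sa,                 cb * ca],
    ])

def rotation_linearized(alpha, beta, gamma):
    """Small-angle substitution cos(t) ~ 1, sin(t) ~ t, products of two
    angles ~ 0, giving the linear form of formula (12)."""
    return np.array([
        [1.0,   -gamma,  beta],
        [gamma,  1.0,   -alpha],
        [-beta,  alpha,  1.0],
    ])
```

For angles of a few hundredths of a radian the two matrices agree to second order in the angles, which is why the substitution is valid once the coarse hash-based localization has brought the poses close.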
Therefore, the transformation matrix M can be expressed as:
M ≈ [[1, −γ, β, tx], [γ, 1, −α, ty], [−β, α, 1, tz], [0, 0, 0, 1]]  (13)
At this point, formula (8) can accordingly be rewritten as:
where:
For the N relevant point pairs, formula (15) yields a system of N linear equations, which can be written in the form Ax = b, in which:
Solving for M_opt is thereby converted into solving for x_opt, which is a standard linear optimization problem:
x_opt = argmin_x ||A·x − b||²  (18)
The solution of formula (18) is completed by SVD decomposition: decomposing A gives A = U Σ V^T, from which the pseudoinverse A⁺ = V Σ⁺ U^T is computed; the linear least-squares solution of formula (18) is then:
x_opt = A⁺ · b  (19)
This completes the secondary correction of the initially estimated spatial pose of the three-dimensional object; x_opt is the final recognition result of the three-dimensional object's spatial pose.
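The pseudoinverse solution of formula (19) can be sketched as follows on synthetic data; the 6-vector x = (α, β, γ, tx, ty, tz) would hold the linearized pose parameters, and all values here are illustrative.

```python
import numpy as np

# Solve the standard linear least-squares problem x_opt = argmin ||A x - b||^2
# via the SVD pseudoinverse A+ = V S+ U^T, as in formula (19).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))      # N stacked rows, one per point pair
x_true = np.array([0.01, -0.02, 0.03, 0.1, -0.1, 0.2])
b = A @ x_true                        # consistent right-hand side

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / sigma) @ U.T   # pseudoinverse of A
x_opt = A_pinv @ b                            # x_opt = A+ . b
```

For an overdetermined, full-column-rank A this recovers the least-squares minimizer exactly; `np.linalg.lstsq` computes the same solution in one call.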
The high positioning accuracy comes from the algorithm's use of the hash description method: before the algorithm runs, the hash table of the object model M has already been formed, so in the running phase it is only necessary to compute the hash features of the scene and look them up directly in the hash table of the object model M, quickly finding similar features and thereby locating the object.
In practice, Fig. 7 shows a scene to be measured, and Fig. 8 shows the matching result after hash description. A certain error can be seen, because d_dist and N_angle are discretized when computing the hash description and some precision is lost; however, looking up the hash table achieves fast coarse positioning, i.e. the neighborhood of the target position is reached quickly, and a secondary correction is then applied by iterative closest point. Fig. 9 shows the result after the iterative closest point correction. At the same time, since the hash description method can already locate the approximate position of the target, as in Fig. 8, the point-to-plane iterative closest point algorithm can be used.
It should be noted that there is also a point-to-point iterative closest point method, which is the most common variant. Under normal circumstances the point-to-point method is faster than the point-to-plane method, because before positioning the object model M is usually placed at some position in the scene at a certain distance from the located position, and the point-to-point method, unlike the point-to-plane method, does not require the step of computing the tangent plane at each point.
Because this patent uses the hash description, it has already been able to position near the target, so the nonlinear part of the point-to-plane iterative closest point method can be linearized; this linearizability is a unique property of the point-to-plane iterative closest point method. In effect, this patent fuses the two algorithms so that the advantages of both can be exploited and their disadvantages mutually compensated.
The advantage of the hash description is its very fast positioning; its disadvantage is limited positioning accuracy, because the discretization process in it causes precision loss.
The advantage of the point-to-plane iterative closest point method is that, when the initial position is close to the true position, as shown in Fig. 8, the nonlinear part of the algorithm can be linearized; its disadvantage is that if the initial position is far from the true position, positioning is very slow.
Thus the high positioning accuracy is embodied in using iterative closest point to apply a secondary correction to the positioning result of the hash description. The high speed is embodied in two points:
1) the hash description performs matching by table lookup, which is very fast;
2) combining the hash description with the point-to-plane iterative closest point method exploits the respective advantages of the two methods and compensates for their shortcomings, so that the otherwise slow point-to-plane method is accelerated through linearization.
The above embodiments merely illustrate the technical solutions of the present invention and do not limit its scope of protection. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
Although the invention has been explained in detail with reference to the above embodiments, a person of ordinary skill in the art may still, where no conflict arises and without creative effort, combine, add, delete or otherwise adjust the features of the various embodiments of the present invention as circumstances require, thereby obtaining other technical solutions that do not depart in essence from the concept of the invention; such technical solutions likewise fall within the protection scope of the invention.
Claims (8)
1. A three-dimensional object detection method based on hash description and iterative closest point, characterized by involving a depth camera and a PC and comprising the following steps:
S1. the depth camera collects scene data of the object to be detected;
S2. the depth camera emits infrared rays toward the scene, and the infrared receiver in the depth camera receives the near-infrared reflections of the scene and generates the three-dimensional point data of the scene, i.e. the XYZ coordinates of each reflecting point relative to the depth camera coordinate system;
S3. the depth camera transfers the collected scene data to the PC, where it is saved in PLY format;
S4. the object model M of the detection target is extracted from the PLY-format scene data obtained in S3;
S5. the PC computes the covariance matrix of each point of the three-dimensional model point cloud and the eigenvectors of the covariance matrix, obtaining the normal vector of each point;
S6. the point-pair features of the object model M are generated from the normal vectors of S5;
S7. all point-pair features of the object model M are converted into hash values, generating the hash description of the object model M;
S8. the scene S is processed with the depth camera and the PC to obtain the hash values of scene S; the hash description of the object model M is looked up to quickly find matching similar pairs, and the matching results are converted into Hough voting values;
S9. the Hough values are counted by a global Hough voting accumulator, the highest-scoring Hough value is taken as the optimal Hough value of scene S, and an initial detection result for the object model M is generated from the optimal Hough value;
S10. a secondary correction is applied to the initial detection result by the iterative closest point algorithm, yielding an accurate object pose detection result.
2. The three-dimensional object detection method based on hash description and iterative closest point according to claim 1, characterized in that in step S4 the coordinates of the scene data are defined on the depth sensor, the coordinate values of each data point being (x, y, z); the occurrences of each z value in all the data are counted, and the data points whose z value occurs most frequently are removed, thereby obtaining the object model M of the examined object.
3. The three-dimensional object detection method based on hash description and iterative closest point according to claim 1, characterized in that step S5 obtains the normal vector n_i of each three-dimensional data point in the object model M = {m0, m1, m2, ..., mk-1, mk}. The normal vector of each three-dimensional data point can be computed from the neighboring points around it. To compute the normal vector of a point, the covariance matrix between the point and its neighboring points must first be obtained. Suppose the normal vector of point m_i is required; the covariance matrix C is calculated as shown in formula (1), where R denotes the radius of the spherical neighborhood centered on point m_i, and the distance value d_j (j ∈ {1, 2, ..., k}) denotes the Euclidean distance between a neighboring point m_j and the center point m_i. The covariance matrix C of m_i and all points within radius R is calculated by formula (1):
The normal vector of point m_i can then be computed from the covariance matrix: an eigenvalue decomposition of the covariance matrix C yields its eigenvalues and eigenvectors. The spatial distribution of the point cloud of the object surface changes least along the normal direction, so the normal vector n_i of the center point m_i is the eigenvector corresponding to the smallest eigenvalue of the covariance matrix C. By analogy, the normal vectors of all points are obtained.
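The covariance-and-smallest-eigenvector computation of the claim can be sketched as follows; the function name, neighbor selection and test geometry are illustrative, and the neighborhood is assumed to be already gathered (in practice it would be the points within radius R).

```python
import numpy as np

def estimate_normal(center, neighbors):
    """Normal of a point from the covariance of its neighbors, as in
    formula (1): the eigenvector of the smallest eigenvalue of C is the
    direction of least spatial variation, i.e. the surface normal."""
    diffs = neighbors - center
    C = diffs.T @ diffs / len(neighbors)   # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return eigvecs[:, 0]                   # eigenvector of smallest eigenvalue

# Neighbors lying in the z = 0 plane should yield a normal along +/- z.
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [-1.0, 0.0, 0.0], [0.0, -1.0, 0.0]])
n = estimate_normal(np.zeros(3), pts)
```

Note the eigenvector sign is arbitrary, so in a full pipeline the normals would typically be re-oriented consistently (e.g. toward the sensor).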
4. The three-dimensional object detection method based on hash description and iterative closest point according to claim 1, characterized in that in step S6 the normal vector of each point is used to compute point-pair features. A point p_i in the three-dimensional point cloud and any other point p_j form a point pair; using the normal vectors and the Euclidean distance between the two points, the point-pair feature of the pair is formed, calculated as shown in formula (2):
where the components denote, respectively, the direction of the line connecting the two points, the Euclidean distance between the two points, the angle between the normal vector of point p_i and the connecting line direction, and the angle between the two normal vectors; the angle between vectors is calculated as shown in formula (3):
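The point-pair feature of formulas (2) and (3) can be sketched as below. The exact ordering of the four components in the patent's formula is an assumption here, as are the function names.

```python
import numpy as np

def point_pair_feature(p_i, p_j, n_i, n_j):
    """Four-component point-pair feature: the distance between the points
    and three angles (normal of p_i vs. line, normal of p_j vs. line,
    normal vs. normal), with angles from the arccos of the normalized
    dot product, as in formula (3)."""
    d = p_j - p_i
    dist = np.linalg.norm(d)

    def angle(u, v):
        cosv = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cosv, -1.0, 1.0))  # clip guards rounding

    return dist, angle(n_i, d), angle(n_j, d), angle(n_i, n_j)

# Two points one unit apart along x, both normals along z.
f = point_pair_feature(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
```

Because the feature uses only distances and relative angles, it is invariant to rigid motion, which is what allows model and scene pairs to be matched by value.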
5. The three-dimensional object detection method based on hash description and iterative closest point according to claim 1, characterized in that in step S7 all point-pair features of the object model M are converted into hash values, generating the hash description of the object model M. For any point pair (p_i, p_j) ∈ p in the point cloud p, the point-pair feature is given by the expression in formula (2), and formula (4) converts the point-pair feature into a hash value:
where d_dist denotes the sampling step, d_angle denotes the sampling angle, d_angle = 2π/N_angle, and N_angle denotes the angle granularity, which the user can adjust according to actual results and which is usually set to 30. Discretizing the features by d_dist and d_angle allows similar features to be converted into identical feature values. After the point-pair feature F is discretized, the hash value of the point-pair feature F(p1, p2)_discretized = F(f1, f2, f3, f4) is obtained by formula (5):
index = P1*f1 + P2*f2 + P3*f3 + f4  (5)
P1, P2 and P3 in formula (5) are three distinct primes, chosen so that the converted hash values are unique. The hash values of all point pairs are counted; point pairs with identical hash values can be stored and accessed centrally through a hash table, and all the hash values form the hash table, which is called the hash description of the point cloud data.
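The discretization and hashing of formula (5) can be sketched as follows. The values of d_dist, N_angle and the three primes are illustrative choices, not values fixed by the patent.

```python
import numpy as np
from collections import defaultdict

D_DIST = 0.05                 # sampling step for the distance component
N_ANGLE = 30                  # angle granularity, d_angle = 2*pi / N_ANGLE
P1, P2, P3 = 131, 137, 139    # three distinct primes

def hash_index(dist, a1, a2, a3):
    """Discretize a point-pair feature and map it to a hash key via
    formula (5): index = P1*f1 + P2*f2 + P3*f3 + f4."""
    d_angle = 2.0 * np.pi / N_ANGLE
    f1 = int(dist / D_DIST)
    f2, f3, f4 = (int(a / d_angle) for a in (a1, a2, a3))
    return P1 * f1 + P2 * f2 + P3 * f3 + f4

# The hash description: all point pairs grouped by their hash key.
table = defaultdict(list)
table[hash_index(0.12, 0.3, 0.3, 0.0)].append(("m0", "m7"))
key = hash_index(0.12, 0.3, 0.3, 0.0)
```

Because similar features fall into the same bins, a scene feature can retrieve candidate model pairs with a single table lookup, which is the source of the method's speed.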
6. The three-dimensional object detection method based on hash description and iterative closest point according to claim 1, characterized in that in step S8 Hough voting values are generated using the hash description. Let M be the object model and S the three-dimensional point cloud set of the scene; with the hash description of the object model point cloud generated according to step S7, the point pairs of the scene data set S are converted into hash values and looked up in the hash description generated for the object model M, thereby achieving fast similar-feature matching; the matching results are then converted into Hough values and a global Hough vote is carried out.
7. The three-dimensional object detection method based on hash description and iterative closest point according to claim 1, characterized in that in step S9 the Hough values are counted by the global Hough voting accumulator, the highest-scoring Hough value is taken as the optimal Hough value of scene S, and the initial detection result for the object model M is generated from the optimal Hough value. In a Hough voting value (m_r, α), m_r is obtained directly from the pair (m_r, m_i), while the angle α is calculated by formula (6). A world coordinate system W is defined; the transformations in formula (6) describe the translation that moves the points s_r and m_r to the origin of the world unit coordinate system and the rotation that aligns the normal vectors of the two points with the principal axis x of the world unit coordinate system:
For point pairs with identical hash values (m_r, m_i) ∈ M and (s_r, s_i) ∈ S, the normal vectors of points s_r and m_r are first rotated to be parallel to, and consistent in direction with, the principal axis x of the world unit coordinate system W, and the positions of the two points are then moved to coincide with the origin of the world unit coordinate system. At this time the pair (m_r, m_i) only needs to rotate by an angle α about the x-axis of the world unit coordinate system to coincide with the pair (s_r, s_i). After the transformations are computed in the world unit coordinate system, the angle α can be solved according to formula (6), and the Hough voting value (m_r, α) of the scene point s_r is complete;
Next, the optimal Hough voting value is elected: an n×m global Hough voting accumulator is established to collect the votes of each Hough voting value. The row elements of the two-dimensional accumulator correspond to the reference points filtered out of the model point cloud, the row number n being equal to the number of reference points; the column number m of the accumulator is related to the angle granularity N_angle; when N_angle = 30, the voting accumulator has 30 columns;
A scene reference point s_r can form point pairs with other scene points s_i, and each scene point pair may find several similar model pairs (m_r, m_i); these pairs indicate positions at which (s_r, s_i) may exist in the object model M. Calculating the angle α between the scene pair (s_r, s_i) and each similar pair (m_r, m_i) yields multiple Hough voting values (m_r, α) for the scene point s_r, and the corresponding voting positions in the accumulator are incremented by one. After the Hough voting values of all point pairs formed by the current scene reference point and the other scene points have been calculated, the peak of the voting accumulator is taken as the optimal Hough voting value of the current scene point. Substituting the optimal Hough voting result into formula (7) yields the initial pose detection result RigidTrans between the object model M and the scene S:
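The global Hough voting accumulator described in the claim can be sketched as below: n rows (one per model reference point) by m = N_angle columns (one per discretized α), with each matched pair casting one vote. The vote positions here are synthetic and purely illustrative.

```python
import numpy as np

N_REF, N_ANGLE = 4, 30
accumulator = np.zeros((N_REF, N_ANGLE), dtype=int)  # n x m vote counts

# Synthetic votes: (model reference point index, discretized alpha bin).
votes = [(2, 11), (2, 11), (2, 11), (1, 5), (3, 11)]
for r, a in votes:
    accumulator[r, a] += 1          # "the position in the accumulator adds one"

# The peak of the accumulator is the optimal Hough voting value.
best_ref, best_alpha = np.unravel_index(accumulator.argmax(),
                                        accumulator.shape)
```

The winning (reference point, α) pair is what formula (7) would then convert into the initial rigid transform RigidTrans.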
8. The three-dimensional object detection method based on hash description and iterative closest point according to claim 1, characterized in that in step S10 a secondary correction is applied to the initial detection result using the iterative closest point algorithm. The initial detection result reflects the pose relation between the object model M and the scene S; since the initial detection result already satisfies the condition that the reference-point poses are similar, the iterative closest point algorithm can be linearized, so that it runs faster while its accuracy is preserved;
The point-to-plane iterative closest point algorithm performs the secondary correction on the initial detection result. For point-to-plane iterative closest point, the optimization objective of the error function is to minimize the mean squared error between the model points and the tangent planes of their corresponding scene points;
The error function is shown in formula (8):
M_opt = argmin_M Σ_i ((M · m_i − s_i) · n_i)²  (8)
In formula (8), m_i = (m_ix, m_iy, m_iz, 1)^T denotes a point in the model point set, s_i = (s_ix, s_iy, s_iz, 1)^T denotes the point in the target point cloud corresponding to m_i, and n_i = (n_ix, n_iy, n_iz, 0)^T is the normal vector of point s_i;
M is a 4x4 rigid transformation matrix of rotation and translation. From formula (8), after the object model M is moved according to the transformation matrix M, the moved model point m_i is subtracted from the scene point s_i, giving a vector describing the displacement difference; the dot product of this vector with the normal vector n_i of point s_i estimates the distance from the point to the tangent plane. When, after all model points m_i have been moved by the matrix M, the sum in formula (8) reaches its minimum, the M at that moment is M_opt;
Since the poses of the model point set and the scene point set are similar, the point-to-plane iterative closest point algorithm can approximate the nonlinear problem by a linear one, so that the mathematical form of the algorithm is simpler and it runs faster;
The transformation matrix M describing translation and rotation is composed of a rotation part R(α, β, γ) and a translation part T(tx, ty, tz), as shown in formula (9):
M = T(tx, ty, tz) · R(α, β, γ)  (9)
where T(tx, ty, tz) is the homogeneous translation matrix and R(α, β, γ) = Rz(γ) · Ry(β) · Rx(α) is the rotation matrix, given in formula (10); the entries of R in formula (11) are:
r11=cos γ cos β,
r12=-sin γ cos α+cos γ sin β sin α,
r13=sin γ sin α+cos γ sin β cos α,
r21=sin γ cos β,
r22=cos γ cos α+sin γ sin β sin α,
r23=-cos γ sin α+sin γ sin β cos α,
r31=-sin β
r32=cos β sin α
r33=cos β cos α
Rx(α), Ry(β) and Rz(γ) denote rotations about the x, y and z axes, respectively. Using the small-angle equivalents of the trigonometric functions (cos θ ≈ 1, sin θ ≈ θ, products of angles ≈ 0), if the pose of the model point set is similar to that of the scene point set, the trigonometric functions in the R part can be replaced equivalently, and the rotation part R of M can then be written in the form of formula (12):
R ≈ [[1, −γ, β], [γ, 1, −α], [−β, α, 1]]  (12)
Therefore, the transformation matrix M can be expressed as:
M ≈ [[1, −γ, β, tx], [γ, 1, −α, ty], [−β, α, 1, tz], [0, 0, 0, 1]]  (13)
At this point, formula (8) can accordingly be rewritten as:
where:
For the N relevant point pairs, formula (15) yields a system of N linear equations, which can be written in the form Ax = b, in which:
Solving for M_opt is thereby converted into solving for x_opt, which is a standard linear optimization problem:
x_opt = argmin_x ||A·x − b||²  (18)
The solution of formula (18) is completed by SVD decomposition: decomposing A gives A = U Σ V^T, from which the pseudoinverse A⁺ = V Σ⁺ U^T is computed; the linear least-squares solution of formula (18) is then:
x_opt = A⁺ · b  (19)
This completes the secondary correction of the initially estimated spatial pose of the three-dimensional object; x_opt is the final recognition result of the three-dimensional object's spatial pose.
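One full linearized point-to-plane step, as described in the claim, can be sketched end to end: each correspondence (m_i, s_i, n_i) contributes one row of A x = b with x = (α, β, γ, tx, ty, tz). The row layout [m × n, n] is the standard point-to-plane linearization (cf. the cited Low technical report) and is an assumption here; the data is synthetic.

```python
import numpy as np

def build_system(model_pts, scene_pts, scene_normals):
    """Stack the N linear equations: (m_i x n_i) . r + n_i . t = n_i . (s_i - m_i),
    one row per correspondence, with r = (alpha, beta, gamma) and t = (tx, ty, tz)."""
    A = np.hstack([np.cross(model_pts, scene_normals), scene_normals])
    b = np.einsum('ij,ij->i', scene_normals, scene_pts - model_pts)
    return A, b

rng = np.random.default_rng(1)
m = rng.standard_normal((100, 3))                      # model points
n = rng.standard_normal((100, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)          # unit scene normals
t_true = np.array([0.02, -0.01, 0.03])
s = m + t_true                 # scene = model moved by a small pure translation

A, b = build_system(m, s, n)
x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)          # least-squares solve
```

Recovering a pure translation gives zero rotation parameters and the true offset, confirming that the linearized system encodes the pose update correctly when the poses are already close.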
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910049505.XA CN109785389A (en) | 2019-01-18 | 2019-01-18 | A kind of three-dimension object detection method based on Hash description and iteration closest approach |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109785389A true CN109785389A (en) | 2019-05-21 |
Family
ID=66500979
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910049505.XA Pending CN109785389A (en) | 2019-01-18 | 2019-01-18 | A kind of three-dimension object detection method based on Hash description and iteration closest approach |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109785389A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104040590A (en) * | 2011-12-19 | 2014-09-10 | 三菱电机株式会社 | Method for estimating pose of object |
CN103729643A (en) * | 2012-10-12 | 2014-04-16 | Mv科技软件有限责任公司 | Recognition and pose determination of 3d objects in multimodal scenes |
US20150010202A1 (en) * | 2013-07-03 | 2015-01-08 | Mitsubishi Electric Research Laboratories, Inc. | Method for Determining Object Poses Using Weighted Features |
WO2015002114A1 (en) * | 2013-07-03 | 2015-01-08 | Mitsubishi Electric Corporation | Method and apparatus for determining pose of object in scene |
Non-Patent Citations (3)
Title |
---|
KOK-LIM LOW: "Linear Least-Squares Optimization for Point-to-Plane ICP Surface Registration", Technical Report TR04-004, University of North Carolina at Chapel Hill * |
ZHANG Kailin et al.: "3D Object Recognition and Pose Estimation Based on C-SHOT Features in Complex Scenes", Journal of Computer-Aided Design & Computer Graphics * |
YANG Houyi: "Vision-Based Workpiece Localization and Grasping", China Master's Theses Full-text Database, Information Science and Technology series * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110348310A (en) * | 2019-06-12 | 2019-10-18 | 西安工程大学 | A kind of Hough ballot 3D colour point clouds recognition methods |
CN110456377A (en) * | 2019-08-15 | 2019-11-15 | 中国人民解放军63921部队 | It is a kind of that foreign matter detecting method and system are attacked based on the satellite of three-dimensional laser radar |
CN110456377B (en) * | 2019-08-15 | 2021-07-30 | 中国人民解放军63921部队 | Satellite foreign matter attack detection method and system based on three-dimensional laser radar |
CN110942515A (en) * | 2019-11-26 | 2020-03-31 | 北京迈格威科技有限公司 | Point cloud-based target object three-dimensional computer modeling method and target identification method |
CN110889243A (en) * | 2019-12-20 | 2020-03-17 | 南京航空航天大学 | Aircraft fuel tank three-dimensional reconstruction method and detection method based on depth camera |
CN110889243B (en) * | 2019-12-20 | 2020-08-18 | 南京航空航天大学 | Aircraft fuel tank three-dimensional reconstruction method and detection method based on depth camera |
WO2022087916A1 (en) * | 2020-10-28 | 2022-05-05 | 华为技术有限公司 | Positioning method and apparatus, and electronic device and storage medium |
CN112307971A (en) * | 2020-10-30 | 2021-02-02 | 中科新松有限公司 | Sphere target identification method and identification device based on three-dimensional point cloud data |
CN112307971B (en) * | 2020-10-30 | 2024-04-09 | 中科新松有限公司 | Sphere target identification method and device based on three-dimensional point cloud data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190521 |
RJ01 | Rejection of invention patent application after publication |