CN107403440A - Method and apparatus for determining the pose of an object - Google Patents
Method and apparatus for determining the pose of an object
- Publication number: CN107403440A
- Application number: CN201610329835.0A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
A method and apparatus for determining the pose of an object are provided. The method includes: obtaining a sequence of image frames captured by the object during its motion; detecting features of a current frame in the image frame sequence; determining a correspondence between the features in the current frame and the features in a particular frame preceding the current frame, based on each feature in the particular frame and its corresponding features in at least one frame preceding the particular frame, together with motion parameter information between the particular frame and the at least one preceding frame; performing motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame; and determining the pose of the object based on the result of the motion estimation.
Description
Technical field
The present invention relates to the field of image processing, and more particularly to a method and apparatus for determining the pose of an object.
Background technology
When a scene is reconstructed in three dimensions by image processing, visual measurement is usually required. In visual measurement, the pose of an object such as a robot or a vehicle is determined by analyzing the images captured by an associated camera, the pose comprising the position and orientation of the object.
In one visual measurement method, the correspondence between image features is determined based only on the current frame and the frame immediately preceding it in the image frame sequence. The correspondence obtained in this way is not reliable, so the subsequent motion estimation and visual measurement are not accurate enough and exhibit a noticeable accumulated error.
Summary of the invention
In view of the above, the present invention provides a method and apparatus for determining the pose of an object, which can significantly improve the precision of motion estimation and thereby the precision of the determined pose, reducing the accumulated error.
According to one embodiment of the invention, there is provided a method for determining the pose of an object, including: obtaining a sequence of image frames captured by the object during its motion; detecting features of a current frame in the image frame sequence; determining a correspondence between the features in the current frame and the features in a particular frame preceding the current frame, based on each feature in the particular frame and its corresponding features in at least one frame preceding the particular frame, together with motion parameter information between the particular frame and the at least one preceding frame; performing motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame; and determining the pose of the object based on the result of the motion estimation.
According to another embodiment of the invention, there is provided an apparatus for determining the pose of an object, including: an image obtaining unit that obtains a sequence of image frames captured by the object during its motion; a feature detection unit that detects features of a current frame in the image frame sequence; a relation determination unit that determines a correspondence between the features in the current frame and the features in a particular frame preceding the current frame, based on each feature in the particular frame and its corresponding features in at least one frame preceding the particular frame, together with motion parameter information between the particular frame and the at least one preceding frame; a motion estimation unit that performs motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame; and a pose determination unit that determines the pose of the object based on the result of the motion estimation.
According to another embodiment of the invention, there is provided an object pose determining device, including: an image capture module that captures a sequence of image frames while the object moves; a processor; a memory; and computer program instructions stored in the memory which, when run by the processor, perform the steps of: obtaining the image frame sequence from the image capture module; detecting features of a current frame in the image frame sequence; determining a correspondence between the features in the current frame and the features in a particular frame preceding the current frame, based on each feature in the particular frame and its corresponding features in at least one frame preceding the particular frame, together with motion parameter information between the particular frame and the at least one preceding frame; performing motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame; and determining the pose of the object based on the result of the motion estimation.
According to another embodiment of the invention, there is provided a computer program product, including a computer-readable storage medium on which computer program instructions are stored, the computer program instructions performing the following steps when run by a computer: obtaining a sequence of image frames captured by the object during its motion; detecting features of a current frame in the image frame sequence; determining a correspondence between the features in the current frame and the features in a particular frame preceding the current frame, based on each feature in the particular frame and its corresponding features in at least one frame preceding the particular frame, together with motion parameter information between the particular frame and the at least one preceding frame; performing motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame; and determining the pose of the object based on the result of the motion estimation.
In the method and apparatus for determining the pose of an object according to the embodiments of the present invention, because the image information of at least two frames preceding the current frame is used to determine the feature correspondence and to perform motion estimation, the precision of the motion estimation, and hence of the determined pose, is significantly improved, and the accumulated error is reduced.
Brief description of the drawings
Fig. 1 is a schematic diagram illustrating a scene to which the method and apparatus for determining the pose of an object according to embodiments of the present invention are applied;
Fig. 2 is a flowchart illustrating the main steps of the method for determining the pose of an object according to an embodiment of the present invention;
Fig. 3 is a flowchart illustrating the main processing of determining the correspondence between the features in the current frame and the features in the particular frame in the method according to an embodiment of the present invention;
Fig. 4 is a flowchart illustrating the detailed processing of determining the correspondence between the features in the current frame and the features in the particular frame in the method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram illustrating the determination of the correspondence between the features in the current frame and the features in the particular frame in the method according to an embodiment of the present invention;
Fig. 6 is a block diagram illustrating the main configuration of the apparatus for determining the pose of an object according to an embodiment of the present invention;
Fig. 7 is a block diagram illustrating the detailed configuration of the relation determination unit in the apparatus of Fig. 6;
Fig. 8 is a block diagram illustrating the detailed configuration of the correspondence determining unit in the relation determination unit of Fig. 7; and
Fig. 9 is a structural diagram illustrating the main hardware configuration of the object pose determining device according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
First, a scene to which the method and apparatus according to embodiments of the present invention are applied is described with reference to Fig. 1.
Fig. 1 is a schematic diagram of a scene to which the method and apparatus for determining the pose of an object according to embodiments of the present invention (hereinafter also referred to, where appropriate, as the object pose determining method and the object pose determining device) are applied.
As shown in Fig. 1, the method and apparatus for determining the pose of an object according to embodiments of the present invention are applied to an object 100. The object 100 may, for example, include but is not limited to an intelligent robot, an automobile, a wearable device, and the like. In one example, the object 100 moves by itself within the scene in which it is located. In another example, the object 100 is operated or worn by a user and moves within the scene. Hereinafter, both of these cases are referred to as the motion of the object 100.
The object 100 includes an imaging unit 110. The imaging unit 110 is formed of a photographing element such as a still camera or a video camera. While the object 100 moves within the scene, the imaging unit 110 photographs the scene and thereby captures still images or dynamic images (a sequence of image frames).
Thus, by analyzing the images captured by the imaging unit 110, the pose of the object 100 can be determined. The pose may include the position and orientation of the object 100.
Next, the processing of the method for determining the pose of an object is described in detail with reference to Fig. 2.
Fig. 2 is a flowchart illustrating the main steps of the method for determining the pose of an object according to an embodiment of the present invention.
As shown in Fig. 2, first, in step S1, a sequence of image frames captured by the object during its motion is obtained. Specifically, the method may obtain the image frame sequence in real time as it is captured while the object is moving. Alternatively, the method may obtain the image frame sequence captured during the motion after the motion of the object has ended.
Next, in step S2, features of the current frame in the image frame sequence are detected.
Specifically, the image frame sequence may include frames 0 through t (t is a natural number), i.e., it may be denoted as the image frames (0, 1, ..., t-1, t). Assume that frame t is the frame to be processed (i.e., the current frame); then frames 0 through t-1 precede the current frame and are referred to as historical frames.
More specifically, a feature may include at least one of the positional information and the description information of a feature point. As is known to those skilled in the art, the description information may be any information that characterizes the pixel region around the point, such as gradient-histogram information or grey-level-histogram information. By way of example, the method may detect the features using various feature extraction algorithms such as SIFT (Scale-Invariant Feature Transform) or ORB (Oriented FAST and Rotated BRIEF). Of course, the foregoing are merely examples; those skilled in the art may use other feature detection methods known in the art or developed in the future to detect other kinds of features in the image of the current frame.
Assume that, through the processing of step S2, N features are detected in the current frame t (i = 1, 2, ..., N, where N is a natural number). By way of example, each feature i includes positional information p_t^i and description information d_t^i. Thereafter, the method proceeds to step S3.
In step S3, a correspondence between the features in the current frame and the features in a particular frame preceding the current frame is determined, based on each feature in the particular frame and its corresponding features in at least one frame preceding the particular frame, together with the motion parameter information between the particular frame and the at least one preceding frame.
Specifically, the particular frame is any one of the historical frames. As an example, to make the pose determination more accurate, the particular frame may be a frame temporally close to the current frame; for example, it may be the frame immediately preceding the current frame, i.e., frame t-1.
In addition, the at least one frame preceding the particular frame may be one or more of frames 0 through t-2. As an example, to make the pose determination more accurate, the at least one preceding frame may be each of frames 0 through t-2.
Furthermore, for a given feature in the particular frame, its corresponding feature in a frame preceding the particular frame is the feature determined to correspond to it by the motion estimation processing performed previously. Hereinafter, for convenience of description, features that have been determined to correspond to each other across two frames are sometimes treated as the same feature.
In the following, the description assumes that the particular frame is frame t-1 and that the at least one frame preceding it consists of frames 0 through t-2. However, as noted above, those skilled in the art will appreciate that the method of the embodiments of the present invention is not limited thereto.
Thus, in step S3, the correspondence between the features in frame t and the features in frame t-1 is determined based on the features in frames 0 through t-1 and the motion parameter information among frames 0 through t-1.
Specifically, Fig. 3 shows the processing of determining, in step S3, the correspondence between the features in the current frame and the features in the particular frame.
As shown in Fig. 3, first, in step S31, a characteristic model of each feature in the particular frame is obtained.
Specifically, the characteristic model of each feature is formed based on that feature in the particular frame and its corresponding features in the at least one frame preceding the particular frame. By way of example, the characteristic model of each feature in frame t-1 may be formed based on that feature in frame t-1 and its corresponding features in frames 0 through t-2.
More specifically, as described above, a feature may include at least one of positional information and description information. By way of example, the characteristic model may be formed from the description information of the feature.
For example, assume that after the motion estimation for frame t-1 has ended, M features remain, where M is a natural number. The characteristic model of feature j (j = 1, 2, ..., M) in frame t-1 may then be expressed as F(d_{t-1}^j, d_{t-2}^j, ..., d_0^j), where d_{t-1}^j, ..., d_0^j denote the description information of feature j in frame t-1 and of its corresponding features in frames t-2 down to 0, and F() denotes the characteristic model function, whose concrete form depends on the scene being observed and is not specifically limited here.
It should be pointed out that the characteristic model of a feature in frame t-1 can also be understood as the estimated description information of the corresponding feature in frame t. In other words, the estimated description information d̂_t^j of the feature in frame t corresponding to feature j can be expressed by the following expression (1):
d̂_t^j = F(d_{t-1}^j, d_{t-2}^j, ..., d_0^j)    (1)
It should also be pointed out that, although the characteristic model has been illustrated above by taking the description information as an example, those skilled in the art will appreciate that the method of the embodiments of the present invention is not limited thereto; the characteristic model may instead be formed from other information included in a feature (such as the positional information), which is not described in detail here.
On the other hand, in step S32, a historical pose model of the object is obtained.
The historical pose model of the object is formed based on the motion parameter information between the particular frame and the at least one frame preceding it. By way of example, it may be formed based on the motion parameter information among frames 0 through t-1. The motion parameter information may include the motion parameters of the object in each dimension of three-dimensional space; for example, it may include translation parameters, rotation parameters, and the like. By way of example, the historical pose model of the object is formed from the positional information of the features.
Thus, after frame t-1, the historical pose model of the object may be expressed as g(P_{t-1,t-2}, ..., P_{2,1}, P_{1,0}), where P_{t-1,t-2} denotes the motion parameter information between frame t-2 and frame t-1, P_{2,1} denotes the motion parameter information between frame 1 and frame 2, P_{1,0} denotes the motion parameter information between frame 0 and frame 1, and so on. The motion parameter information may be obtained from the positional information of the features. g() denotes the pose model function, whose concrete form depends on the scene being observed and is not specifically limited here.
It should be noted that, although step S32 is shown after step S31 in Fig. 3, steps S31 and S32 may in fact be performed in any order (for example, concurrently or in reverse order).
Then, in step S33, a current pose model of the object is estimated based on the historical pose model, where the current pose model P̂_{t,t-1} denotes the estimate of the pose model (the motion parameters) between frame t-1 and frame t.
More specifically, in a first example the current pose model may be estimated by a recursive method; in a second example, by a maximum likelihood method; and in a third example, by a maximum a posteriori method. Of course, these are merely examples, and those skilled in the art may, under the teaching herein, adopt other appropriate methods to estimate the current pose model of the object.
In addition, those skilled in the art will appreciate that the current pose model estimated here is a model in three-dimensional space. When calculations on the two-dimensional image plane are involved, the corresponding two-dimensional model can be obtained through a transform function.
Specifically, for example, the positional information p̂_t^j of the feature in frame t corresponding to feature j may be estimated by the following expression (2):
p̂_t^j = h^{-1}(P̂_{t,t-1} · h(p_{t-1}^j))    (2)
where, as described above, P̂_{t,t-1} is the estimated current pose model of the object in three-dimensional space, in other words the estimated motion parameter information between frame t-1 and frame t; p_{t-1}^j is the positional information of feature j in frame t-1 in two-dimensional space (the image plane); and h() is the transform function, which converts the two-dimensional positional information p_{t-1}^j of feature j in frame t-1 into the corresponding three-dimensional positional information. The estimated motion from frame t-1 to frame t is then applied to that three-dimensional position to obtain the estimated three-dimensional position of the corresponding feature in frame t, and the inverse transform h^{-1}() converts the estimated three-dimensional position back into two-dimensional positional information, i.e., p̂_t^j. Such position coordinate conversion and inverse conversion between two-dimensional and three-dimensional space are known to those skilled in the art and are not detailed here.
It should also be pointed out that, although the historical and current pose models have been illustrated above by taking the positional information as an example, those skilled in the art will appreciate that the method of the embodiments of the present invention is not limited thereto; the models may instead be formed from other information included in a feature (such as the description information), which is not described in detail here.
Next, in step S34, the correspondence is determined based on the characteristic models and the estimated current pose model. The processing of determining the correspondence is further described with reference to Fig. 4.
Fig. 4 is a flowchart illustrating the detailed processing of determining the correspondence between the features in the current frame and the features in the particular frame in the method according to an embodiment of the present invention.
As shown in Fig. 4, first, in step S341, a first matching degree between each feature in the current frame and the characteristic model of each feature in the particular frame is calculated.
Specifically, the first matching degree between feature i in frame t and the characteristic model of feature j in frame t-1 may be calculated by the following expression (3):
w1(i, j) = w1(d_t^i, d̂_t^j)    (3)
where d_t^i denotes the description information of feature i as detected in frame t; d̂_t^j denotes the estimated description information of the feature in frame t corresponding to feature j in frame t-1, that is, the characteristic model described above; and w1() is the calculation function of the first matching degree, which may be designed by those skilled in the art as appropriate and is not specifically limited here. By way of example, the closer d_t^i and d̂_t^j are, that is, the better the feature in the current frame conforms to the characteristic model, the larger the calculated first matching degree; the farther apart d_t^i and d̂_t^j are, that is, the worse the feature in the current frame conforms to the characteristic model, the smaller the calculated first matching degree.
Then, in step S342, a second matching degree between each feature in the current frame and the estimated current pose model is calculated.
Specifically, the second matching degree between feature i in frame t and the estimated current pose model may be calculated by the following expression (4):
w2(i, j) = w2(p_t^i, p̂_t^j)    (4)
where p̂_t^j denotes the estimated positional information of the feature in frame t corresponding to feature j, which represents the estimated current pose model as described above; p_t^i denotes the positional information of feature i in frame t; and w2() is the calculation function of the second matching degree, which may be designed by those skilled in the art as appropriate and is not specifically limited here.
By way of example, the closer p_t^i and p̂_t^j are, that is, the better the position of the feature in the current frame conforms to the estimated current pose model, the larger the calculated second matching degree; the farther apart they are, that is, the worse the position conforms to the estimated current pose model, the smaller the calculated second matching degree.
After the first matching degree is obtained through step S341 and the second matching degree is obtained through step S342, in step S343 a comprehensive matching degree of each feature in the current frame is calculated based on the first matching degree and the second matching degree.
By way of example, when at least one of the first matching degree and the second matching degree is larger, the calculated comprehensive matching degree is larger; when both the first matching degree and the second matching degree are smaller, the calculated comprehensive matching degree is smaller. Specifically, in a first example the comprehensive matching degree may be calculated by summing the first matching degree and the second matching degree; in a second example, by a weighted sum of the two. Of course, these are merely examples, and those skilled in the art may design other ways of calculating the comprehensive matching degree on this basis, which are not specifically limited here.
Then, in step S344, the correspondence between the features in the current frame and the features in the particular frame is determined based on the comprehensive matching degrees.
An exemplary way of determining the correspondence is described with reference to Fig. 5.
Fig. 5 is a schematic diagram illustrating the determination of the correspondence between the features in the current frame and the features in the particular frame in the method according to an embodiment of the present invention.
As shown in Fig. 5, X1, X2 and X3 denote features in frame t-1, Y1, Y2 and Y3 denote features in frame t, and w denotes the comprehensive matching degree between a feature in frame t-1 and a feature in frame t. For example, w23 denotes the comprehensive matching degree between X2 and Y3, and so on.
That is, a comprehensive matching degree may be calculated for every combination of a feature in frame t and a feature in frame t-1. Then, for each selectable combination, the overall comprehensive matching degree over all features in frame t may be calculated. For example, when (X1, Y1), (X2, Y2) and (X3, Y3) are selected as a first candidate correspondence, a first overall comprehensive matching degree w11 + w22 + w33 is calculated; when (X1, Y1), (X2, Y3) and (X3, Y2) are selected as a second candidate correspondence, a second overall comprehensive matching degree w11 + w23 + w32 is calculated; and so on. After the overall comprehensive matching degrees of all candidate correspondences have been calculated, the maximum one is selected, thereby obtaining the corresponding correspondence.
Of course, the manner of determining the correspondence described above is merely an example. Those skilled in the art may determine the correspondence between the features in the current frame and the features in the particular frame by various other dynamic programming schemes. For example, again taking Fig. 5 as an example, one may first select the feature with the maximum comprehensive matching degree with one of the features in frame t-1 (for example X1), say Y3; then select, from the remaining combinations, the feature with the maximum comprehensive matching degree with the next feature (for example X2), say Y2; and so on, thereby obtaining the correspondence (X1, Y3), (X2, Y2) and (X3, Y1).
The processing of determining the correspondence in step S3 of Fig. 2 has been described above in detail with reference to Figs. 3-5.
It should be pointed out that the above describes determining the correspondence through a characteristic model and a pose model. However, the method of the embodiments of the present invention is not limited thereto; those skilled in the art may determine the correspondence in other appropriate ways on this basis. For example, the motion parameter information may be obtained directly from the positional information of the features of frames 0 through t-1, and the correspondence between the features in frame t and the features in frame t-1 may then be determined based on the description information of frames 0 through t-1 and the motion parameter information.
It should also be pointed out that, as those skilled in the art will appreciate, for the image frame sequence of frames 0 through t described above, when t = 1 the correspondence is determined based only on the features of frame 0.
Next, returning to Fig. 2, the method for determining the pose of an object according to the embodiment of the present invention is further described. After the correspondence has been determined, the method proceeds to step S4. In step S4, motion estimation is performed based on the correspondence, the image of the current frame, and the image of the particular frame.
Specifically, those skilled in the art may perform the motion estimation using various motion estimation algorithms known or developed in the future, such as 3D-2D algorithms, based on the correspondence, the image of the current frame, and the image of the particular frame, which is not specifically limited here.
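As a compact, self-contained stand-in for this motion-estimation step, the sketch below recovers the rigid motion between two frames from corresponding 3-D points via the Kabsch/SVD method. Note this is a 3D-3D solver chosen only for brevity; the text names 3D-2D algorithms (such as PnP) as the actual option.

```python
import numpy as np

def rigid_motion(A, B):
    """Given corresponding 3-D points A (particular frame) and B (current
    frame), recover rotation R and translation tr with B ~= A @ R.T + tr
    (Kabsch algorithm)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)            # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    tr = cb - R @ ca
    return R, tr
```

The recovered (R, tr) pair is exactly the frame-to-frame motion parameter information P_{t,t-1} that the subsequent pose determination and model updates consume.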
In addition, through the determination of the correspondence described above, the features in the current frame can be divided into a first class of features that correspond to features in the particular frame and a second class of features that do not correspond to any feature in the particular frame. The first class of features may generally be referred to as old features, corresponding for example to the case where the object captures images of a similar view. The second class of features may generally be referred to as new features, or features newly appearing in the current frame, corresponding for example to the case where the object captures images of a substantially different view, such as when the object moves from photographing a building to photographing an open field.
Thus, in the motion estimation of step S4, the second class of features may be disregarded, and the motion estimation may be performed based on the first class of features, the image of the current frame, and the image of the particular frame, thereby obtaining the motion estimation result.
Then, in step S5, the pose of the object is determined based on the result of the motion estimation. Specifically, those skilled in the art may use various algorithms known or developed in the future to determine the pose of the object based on the motion estimation result, which is not described in detail here.
Optionally, in addition, after the result of estimation is obtained, the method for the embodiment of the present invention can also be to character modules
In type and attitude mode any one or both be updated, think that the processing of next frame is ready.
Specifically, the motion estimation result includes, for example, motion parameter information between the current frame and the particular frame. Thus, on the one hand, regarding the update of the pose model, the historical pose model can be updated based on the motion parameter information between the current frame and the particular frame. In other words, a current pose model of the object can be established based on the motion parameter information between the current frame and the particular frame together with the historical pose model; the specific processing is similar to that described above in step S32 and is not repeated here.
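One common way to realize such a pose model update, offered here only as a hedged sketch (the homogeneous-matrix representation is an assumption, not stated in the patent), is to compose the historical pose with the estimated frame-to-frame motion:

```python
import numpy as np

def update_pose_model(T_hist, R, t):
    # Compose the historical pose model (a 4x4 homogeneous transform)
    # with the frame-to-frame motion (R: 3x3 rotation, t: 3-vector)
    # obtained from motion estimation, yielding the current pose model.
    T_step = np.eye(4)
    T_step[:3, :3] = R
    T_step[:3, 3] = t
    return T_hist @ T_step
```

Under this representation, the current pose model is simply the accumulated product of all per-frame motion steps.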
On the other hand, regarding the update of the feature model, first, based on the motion parameter information between the current frame and the particular frame, the first-category features can be subdivided into first-class sub-features and second-class sub-features. The first-class sub-features are those old features, as described above, that are consistent with the motion parameter information between the current frame and the particular frame; they may also be called inlier features, and they represent features for which the prediction based on the historical frames agrees with the features actually detected in the current frame. The second-class sub-features are those old features that are inconsistent with the motion parameter information between the current frame and the particular frame; they may also be called outlier features, and they represent features for which the prediction based on the historical frames disagrees with the features actually detected in the current frame.
Next, on the one hand, for each inlier feature, its feature model can be updated with the motion parameter information between the current frame and the particular frame; the specific processing is similar to that described above in step S31 and is not repeated here. On the other hand, for each outlier feature, its feature model can be discarded.
In addition, in the case where the object captures images of substantially different scenes, there may be features in the particular frame that fail to correspond to any feature in the current frame, i.e. features that have disappeared in the current frame. Accordingly, the feature models of such features in the particular frame may also be discarded.
In addition, for each feature newly appearing in the current frame as described above, a feature model of that feature can be initialized; the specific processing is similar to that described above in step S31 and is not repeated here.
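The bookkeeping described above (update inliers, discard outliers and disappeared features, initialize new features) can be sketched as follows. The dictionary-based data layout and all names are illustrative assumptions; the patent does not prescribe a concrete data structure:

```python
def update_feature_models(models, matches, inlier_ids, new_features):
    # models:       dict prev_id -> list of past descriptors (the feature model)
    # matches:      dict curr_id -> (prev_id, curr_descriptor)  [first-category]
    # inlier_ids:   set of curr_ids consistent with the estimated motion
    # new_features: dict curr_id -> descriptor                  [second-category]
    updated = {}
    for cid, (pid, desc) in matches.items():
        if cid in inlier_ids:
            updated[cid] = models[pid] + [desc]  # inlier: extend the model
        # outlier: its model is discarded (simply not carried over)
    for cid, desc in new_features.items():
        updated[cid] = [desc]                    # new feature: initialize model
    # Models of features that disappeared are dropped automatically,
    # since only matched or new current-frame features are carried over.
    return updated
```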
Above, the method for determining the pose of an object according to the embodiment of the present invention has been described with reference to Figs. 1 to 5. In the object pose determination method of the embodiment of the present invention, since the image information of at least two frames before the current frame (e.g., frame t), for example frame 0 to frame t-1, is used to determine the correspondence between features and to perform motion estimation, the precision of the motion estimation, and therefore the precision of the determined pose of the object, can be significantly improved, reducing accumulated error.
It should be pointed out that although the method for determining the pose of an object according to the embodiment of the present invention has been described above, each step of the method may be appropriately modified, combined, changed, added, or deleted according to the application scenario. For example, when only motion estimation is needed, the method of the embodiment of the present invention may omit step S5. That is, the embodiment of the present invention may provide a motion estimation method including: obtaining a sequence of image frames collected by an object during movement; detecting features of a current frame in the image frame sequence; determining a correspondence between the features in the current frame and features in a particular frame before the current frame, based on each feature in the particular frame and its corresponding feature in at least one frame before the particular frame, together with motion parameter information between the particular frame and the at least one frame before the particular frame; and performing motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame, to obtain a motion estimation result.
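The overall loop of such a motion estimation method could be driven as in the following sketch, where `detect`, `match`, and `estimate_step` are hypothetical callables standing in for the feature detection, correspondence determination, and motion estimation steps described above:

```python
import numpy as np

def run_motion_estimation(frames, detect, match, estimate_step):
    # Per frame: detect features, determine the correspondence with the
    # previous frame's features, estimate the per-step motion, and
    # accumulate it as a 3x3 homogeneous 2D pose (identity at the start).
    pose = np.eye(3)
    prev_feats = None
    for frame in frames:
        feats = detect(frame)
        if prev_feats is not None:
            correspondence = match(prev_feats, feats)
            R, t = estimate_step(prev_feats, feats, correspondence)
            step = np.eye(3)
            step[:2, :2] = R
            step[:2, 2] = t
            pose = pose @ step
        prev_feats = feats
    return pose
```

The 2D homogeneous representation is chosen only to keep the sketch short; the claimed method is not limited to any particular pose parameterization.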
Next, a device for determining the pose of an object according to an embodiment of the present invention is described with reference to Fig. 6.
Fig. 6 is a block diagram illustrating the main configuration of the device for determining the pose of an object according to the embodiment of the present invention.
As shown in Fig. 6, the object pose determination device 600 of the embodiment of the present invention includes: an image acquisition unit 610, a feature detection unit 620, a relation determination unit 630, a motion estimation unit 640, and a pose determination unit 650.
The image acquisition unit 610 obtains the sequence of image frames collected by the object during movement.
The feature detection unit 620 detects features of the current frame in the image frame sequence.
The relation determination unit 630 determines the correspondence between the features in the current frame and the features in the particular frame, based on each feature in the particular frame before the current frame and its corresponding feature in at least one frame before the particular frame, together with the motion parameter information between the particular frame and the at least one frame before the particular frame.
The motion estimation unit 640 performs motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame.
The pose determination unit 650 determines the pose of the object based on the result of the motion estimation.
Next, an exemplary configuration of the relation determination unit 630 in one embodiment is described in detail with reference to Fig. 7.
Fig. 7 is a block diagram illustrating the detailed configuration of the relation determination unit in the device for determining the pose of an object according to Fig. 6.
As shown in Fig. 7, the relation determination unit 630 includes: a feature model acquisition unit 6310, a pose model acquisition unit 6320, a pose model prediction unit 6330, and a correspondence determination unit 6340.
Specifically, the feature model acquisition unit 6310 obtains the feature model of each feature in the particular frame. The feature model of each feature is formed based on that feature in the particular frame and its corresponding feature in at least one frame before the particular frame.
The pose model acquisition unit 6320 obtains the historical pose model of the object. The historical pose model of the object is formed based on the motion parameter information between the particular frame and at least one frame before the particular frame.
The pose model prediction unit 6330 predicts the current pose model of the object based on the historical pose model.
The correspondence determination unit 6340 determines the correspondence based on the feature models and the predicted current pose model.
Next, the detailed configuration of the correspondence determination unit 6340 in one embodiment is described with reference to Fig. 8.
Fig. 8 is a block diagram illustrating the detailed configuration of the correspondence determination unit in the relation determination unit shown in Fig. 7.
As shown in Fig. 8, the correspondence determination unit 6340 includes: a first matching degree calculation unit 6340A, a second matching degree calculation unit 6340B, a comprehensive matching degree calculation unit 6340C, and a feature correspondence determination unit 6340D.
The first matching degree calculation unit 6340A calculates a first matching degree between each feature in the current frame and the feature model of each feature in the particular frame. The second matching degree calculation unit 6340B calculates a second matching degree between each feature in the current frame and the predicted current pose model. The comprehensive matching degree calculation unit 6340C calculates a comprehensive matching degree for each feature in the current frame based on the first matching degree and the second matching degree. The feature correspondence determination unit 6340D determines the correspondence between the features in the current frame and the features in the particular frame based on the comprehensive matching degree.
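A possible concrete form of this two-degree matching is sketched below. The similarity functions, the weighted combination, and the threshold are all assumptions for illustration; the patent only requires that a first matching degree (against the feature models) and a second matching degree (against the predicted pose) be combined into a comprehensive matching degree:

```python
import numpy as np

def match_by_comprehensive_degree(curr_desc, curr_pos, model_desc, pred_pos,
                                  alpha=0.5, thresh=0.5):
    # First matching degree: descriptor similarity to each feature model.
    d1 = np.linalg.norm(curr_desc[:, None, :] - model_desc[None, :, :], axis=2)
    s1 = 1.0 / (1.0 + d1)
    # Second matching degree: consistency with the feature positions
    # predicted from the expected current pose model.
    d2 = np.linalg.norm(curr_pos[:, None, :] - pred_pos[None, :, :], axis=2)
    s2 = 1.0 / (1.0 + d2)
    # Comprehensive matching degree: weighted combination of the two.
    s = alpha * s1 + (1.0 - alpha) * s2
    matches, unmatched = {}, []
    for i in range(curr_desc.shape[0]):
        j = int(np.argmax(s[i]))
        if s[i, j] >= thresh:
            matches[i] = j          # first-category (old) feature
        else:
            unmatched.append(i)     # second-category (new) feature
    return matches, unmatched
```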
In another embodiment, the correspondence determination unit is configured to divide the features in the current frame into first-category features, which correspond to features in the particular frame, and second-category features, which do not correspond to any feature in the particular frame, as the correspondence. Correspondingly, the motion estimation unit is configured to perform motion estimation based on the first-category features, the image of the current frame, and the image of the particular frame.
In yet another embodiment, the result of the motion estimation includes the motion parameter information between the current frame and the particular frame, and the device further includes at least one of the following: a pose model update unit, which establishes the current pose model of the object based on the motion parameter information between the current frame and the particular frame together with the historical pose model; and a feature model update unit, which, based on the motion parameter information between the current frame and the particular frame, subdivides the first-category features into first-class sub-features consistent with the motion parameter information between the current frame and the particular frame and second-class sub-features inconsistent with that motion parameter information, updates the feature models of the first-class sub-features with the motion parameter information between the current frame and the particular frame, and discards the feature models of the second-class sub-features.
The detailed configuration and operation of each unit of the device 600 for determining the pose of an object according to the embodiment of the present invention have been described in detail in the object pose determination method with reference to Figs. 1 to 5 and are not repeated here.
It should be pointed out that although the device for determining the pose of an object according to the embodiment of the present invention has been described above, each unit of the device may be appropriately modified, combined, changed, added, or deleted according to the application scenario. For example, when only motion estimation is needed, the device of the embodiment of the present invention may omit the pose determination unit. That is, the embodiment of the present invention may provide a motion estimation device including: an image acquisition unit that obtains a sequence of image frames collected by an object during movement; a feature detection unit that detects features of a current frame in the image frame sequence; a relation determination unit that determines a correspondence between the features in the current frame and features in a particular frame before the current frame, based on each feature in the particular frame and its corresponding feature in at least one frame before the particular frame, together with motion parameter information between the particular frame and the at least one frame before the particular frame; and a motion estimation unit that performs motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame, to obtain a motion estimation result.
Next, an object pose determination device according to an embodiment of the present invention is described with reference to Fig. 9.
Fig. 9 is a structural diagram illustrating the main hardware configuration of the object pose determination device according to the embodiment of the present invention.
As shown in Fig. 9, the object pose determination device 900 of the embodiment of the present invention mainly includes: one or more processors 910, a memory 920, an image acquisition module 940, and an output module 950, these components being interconnected by a bus system 930 and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the object pose determination device 900 shown in Fig. 9 are merely illustrative and not restrictive; the object pose determination device 900 may have other components and structures as needed.
The image acquisition module 940 can collect the sequence of image frames during the movement of the object. By way of example, the image acquisition module 940 may be composed of various imaging elements such as a still camera or a video camera. The output module 950 can be used to output the result of the object pose determination. By way of example, the output module 950 may be an image output module such as a display, or an audio output module such as a loudspeaker.
The processor 910 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and can control the other components in the object pose determination device 900 to perform the desired functions.
The memory 920 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, or flash memory. One or more computer program instructions may be stored on the computer-readable storage media, and the processor 910 may execute the program instructions to realize the functions of the object pose determination method of the embodiment of the present invention and/or other desired functions. In addition, for example, as shown in Fig. 9, the processor 910 may, by executing the program instructions, invoke the units described above with reference to Fig. 6, i.e. the image acquisition unit 610, the feature detection unit 620, the relation determination unit 630, the motion estimation unit 640, and the pose determination unit 650, to realize the corresponding functions.
By way of example, the processor 910 may execute the program instructions to perform the following processing: obtaining the image frame sequence from the image acquisition module; detecting features of the current frame in the image frame sequence; determining a correspondence between the features in the current frame and features in a particular frame before the current frame, based on each feature in the particular frame and its corresponding feature in at least one frame before the particular frame, together with motion parameter information between the particular frame and the at least one frame before the particular frame; performing motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame; and determining the pose of the object based on the result of the motion estimation.
It should be noted that although the device for determining the pose of an object has been described above with reference to Fig. 9 as realized by a processor invoking program instructions, those skilled in the art will appreciate that this is only an example. The device for determining the pose of an object according to the embodiment of the present invention may also be realized by other hardware circuits, such as an embedded system, which is not detailed here.
Above, the method and device for determining the pose of an object according to the embodiments of the present invention have been described with reference to Figs. 1 to 9.
In the method and device for determining the pose of an object according to the embodiments of the present invention, since the image information of at least two frames before the current frame is used to determine the correspondence between features and to perform motion estimation, the precision of the motion estimation, and therefore the precision of the determined pose of the object, can be significantly improved, reducing accumulated error.
It should be noted that in this specification the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element.
Furthermore, it should be noted that in this specification, expressions such as "first ... unit" and "second ... unit" are used only for convenience of distinction in the description and do not mean that they must be implemented as two or more physically separate units. In fact, the units may, as needed, be implemented entirely as one unit or as multiple units.
Finally, it should be noted that the series of processes described above includes not only processes performed in the temporal order described here but also processes performed in parallel or individually rather than in chronological order.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be realized by software together with the required hardware platform, or entirely by hardware. Based on this understanding, all or part of the contribution of the technical solution of the present invention to the background art can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as a ROM/RAM, magnetic disk, or optical disc, and includes instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention or in certain parts of the embodiments.
In the embodiments of the present invention, units/modules can be realized by software so as to be executed by various types of processors. For example, an identified executable code module can include one or more physical or logical blocks of computer instructions and can, for example, be built as an object, a process, or a function. Nevertheless, the executable code of an identified module need not be physically located together, but can include different instructions stored in different locations which, when logically combined, constitute the unit/module and realize its stated purpose.
While units/modules can be realized by software, considering the level of existing hardware technology, those skilled in the art can, cost aside, also build corresponding hardware circuits to realize the corresponding functions. Such hardware circuits include conventional very-large-scale integration (VLSI) circuits or gate arrays, existing semiconductors such as logic chips and transistors, or other discrete elements. Modules can also be realized with programmable hardware devices, such as field-programmable gate arrays, programmable logic arrays, or programmable logic devices.
The present invention has been described in detail above; specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (10)
1. A method for determining the pose of an object, comprising:
obtaining a sequence of image frames collected by the object during movement;
detecting features of a current frame in the image frame sequence;
determining a correspondence between the features in the current frame and features in a particular frame before the current frame, based on each feature in the particular frame and its corresponding feature in at least one frame before the particular frame, together with motion parameter information between the particular frame and the at least one frame before the particular frame;
performing motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame; and
determining the pose of the object based on the result of the motion estimation.
2. The method of claim 1, wherein the step of determining the correspondence between the features in the current frame and the features in the particular frame comprises:
obtaining a feature model of each feature in the particular frame, wherein the feature model of each feature is formed based on that feature in the particular frame and its corresponding feature in at least one frame before the particular frame;
obtaining a historical pose model of the object, wherein the historical pose model of the object is formed based on the motion parameter information between the particular frame and at least one frame before the particular frame;
predicting a current pose model of the object based on the historical pose model; and
determining the correspondence based on the feature models and the predicted current pose model.
3. The method of claim 2, wherein
the step of determining the correspondence between the features in the current frame and the features in the particular frame comprises:
dividing the features in the current frame into first-category features corresponding to features in the particular frame and second-category features not corresponding to any feature in the particular frame, as the correspondence; and
the step of performing motion estimation comprises:
performing motion estimation based on the first-category features, the image of the current frame, and the image of the particular frame.
4. The method of claim 3, wherein the result of the motion estimation includes motion parameter information between the current frame and the particular frame, and the method further comprises at least one of the following:
establishing a current pose model of the object based on the motion parameter information between the current frame and the particular frame together with the historical pose model; and
based on the motion parameter information between the current frame and the particular frame, subdividing the first-category features into first-class sub-features consistent with the motion parameter information between the current frame and the particular frame and second-class sub-features inconsistent with that motion parameter information, updating the feature models of the first-class sub-features with the motion parameter information between the current frame and the particular frame, and discarding the feature models of the second-class sub-features.
5. The method of claim 2, wherein the step of determining the correspondence between the features in the current frame and the features in the particular frame comprises:
calculating a first matching degree between each feature in the current frame and the feature model of each feature in the particular frame;
calculating a second matching degree between each feature in the current frame and the predicted current pose model;
calculating a comprehensive matching degree of each feature in the current frame based on the first matching degree and the second matching degree; and
determining the correspondence between the features in the current frame and the features in the particular frame based on the comprehensive matching degree.
6. A device for determining the pose of an object, comprising:
an image acquisition unit that obtains a sequence of image frames collected by the object during movement;
a feature detection unit that detects features of a current frame in the image frame sequence;
a relation determination unit that determines a correspondence between the features in the current frame and features in a particular frame before the current frame, based on each feature in the particular frame and its corresponding feature in at least one frame before the particular frame, together with motion parameter information between the particular frame and the at least one frame before the particular frame;
a motion estimation unit that performs motion estimation based on the correspondence, the image of the current frame, and the image of the particular frame; and
a pose determination unit that determines the pose of the object based on the result of the motion estimation.
7. The device of claim 6, wherein the relation determination unit comprises:
a feature model acquisition unit that obtains a feature model of each feature in the particular frame, wherein the feature model of each feature is formed based on that feature in the particular frame and its corresponding feature in at least one frame before the particular frame;
a pose model acquisition unit that obtains a historical pose model of the object, wherein the historical pose model of the object is formed based on the motion parameter information between the particular frame and at least one frame before the particular frame;
a pose model prediction unit that predicts a current pose model of the object based on the historical pose model; and
a correspondence determination unit that determines the correspondence based on the feature models and the predicted current pose model.
8. The device of claim 7, wherein
the correspondence determination unit is configured to:
divide the features in the current frame into first-category features corresponding to features in the particular frame and second-category features not corresponding to any feature in the particular frame, as the correspondence; and
the motion estimation unit is configured to:
perform motion estimation based on the first-category features, the image of the current frame, and the image of the particular frame.
9. The device of claim 8, wherein the result of the motion estimation includes motion parameter information between the current frame and the particular frame, and the device further comprises at least one of the following:
a pose model update unit that establishes a current pose model of the object based on the motion parameter information between the current frame and the particular frame together with the historical pose model; and
a feature model update unit that, based on the motion parameter information between the current frame and the particular frame, subdivides the first-category features into first-class sub-features consistent with the motion parameter information between the current frame and the particular frame and second-class sub-features inconsistent with that motion parameter information, updates the feature models of the first-class sub-features with the motion parameter information between the current frame and the particular frame, and discards the feature models of the second-class sub-features.
10. The device of claim 7, wherein the correspondence determination unit comprises:
a first matching degree calculation unit that calculates a first matching degree between each feature in the current frame and the feature model of each feature in the particular frame;
a second matching degree calculation unit that calculates a second matching degree between each feature in the current frame and the predicted current pose model;
a comprehensive matching degree calculation unit that calculates a comprehensive matching degree of each feature in the current frame based on the first matching degree and the second matching degree; and
a feature correspondence determination unit that determines the correspondence between the features in the current frame and the features in the particular frame based on the comprehensive matching degree.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610329835.0A CN107403440B (en) | 2016-05-18 | 2016-05-18 | Method and apparatus for determining a pose of an object |
JP2017080192A JP6361775B2 (en) | 2016-05-18 | 2017-04-14 | Method and apparatus for identifying target posture |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610329835.0A CN107403440B (en) | 2016-05-18 | 2016-05-18 | Method and apparatus for determining a pose of an object |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107403440A true CN107403440A (en) | 2017-11-28 |
CN107403440B CN107403440B (en) | 2020-09-08 |
Family
ID=60394023
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610329835.0A Active CN107403440B (en) | 2016-05-18 | 2016-05-18 | Method and apparatus for determining a pose of an object |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6361775B2 (en) |
CN (1) | CN107403440B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108664122A (en) * | 2018-04-04 | 2018-10-16 | 歌尔股份有限公司 | A kind of attitude prediction method and apparatus |
CN109118523A (en) * | 2018-09-20 | 2019-01-01 | 电子科技大学 | A kind of tracking image target method based on YOLO |
CN115235500A (en) * | 2022-09-15 | 2022-10-25 | 北京智行者科技股份有限公司 | Lane line constraint-based pose correction method and device and all-condition static environment modeling method and device |
CN115797659A (en) * | 2023-01-09 | 2023-03-14 | 思看科技(杭州)股份有限公司 | Data splicing method, three-dimensional scanning system, electronic device and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116863124B (en) * | 2023-09-04 | 2023-11-21 | 所托(山东)大数据服务有限责任公司 | Vehicle attitude determination method, controller and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101393609A (en) * | 2008-09-18 | 2009-03-25 | 北京中星微电子有限公司 | Target detection tracking method and device |
CN102117487A (en) * | 2011-02-25 | 2011-07-06 | 南京大学 | Scale-direction self-adaptive Mean-shift tracking method aiming at video moving object |
CN103473757A (en) * | 2012-06-08 | 2013-12-25 | 株式会社理光 | Object tracking method in disparity map and system thereof |
CN103646391A (en) * | 2013-09-30 | 2014-03-19 | 浙江大学 | Real-time camera tracking method for dynamically-changed scene |
CN104424648A (en) * | 2013-08-20 | 2015-03-18 | 株式会社理光 | Object tracking method and device |
CN105184803A (en) * | 2015-09-30 | 2015-12-23 | 西安电子科技大学 | Attitude measurement method and device |
-
2016
- 2016-05-18 CN CN201610329835.0A patent/CN107403440B/en active Active
-
2017
- 2017-04-14 JP JP2017080192A patent/JP6361775B2/en not_active Expired - Fee Related
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108664122A (en) * | 2018-04-04 | 2018-10-16 | 歌尔股份有限公司 | Attitude prediction method and apparatus |
CN109118523A (en) * | 2018-09-20 | 2019-01-01 | 电子科技大学 | Image target tracking method based on YOLO |
CN109118523B (en) * | 2018-09-20 | 2022-04-22 | 电子科技大学 | Image target tracking method based on YOLO |
CN115235500A (en) * | 2022-09-15 | 2022-10-25 | 北京智行者科技股份有限公司 | Lane line constraint-based pose correction method and device and all-condition static environment modeling method and device |
CN115797659A (en) * | 2023-01-09 | 2023-03-14 | 思看科技(杭州)股份有限公司 | Data splicing method, three-dimensional scanning system, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP6361775B2 (en) | 2018-07-25 |
CN107403440B (en) | 2020-09-08 |
JP2017208080A (en) | 2017-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109872306B (en) | Medical image segmentation method, device and storage medium | |
CN109241903B (en) | Sample data cleaning method, device, computer equipment and storage medium | |
CN107403440A (en) | Method and apparatus for determining the posture of an object | |
CN110866509B (en) | Action recognition method, device, computer storage medium and computer equipment | |
CN108062526A (en) | Human posture estimation method and mobile terminal | |
Zeng et al. | View-invariant gait recognition via deterministic learning | |
CN108133456A (en) | Face super-resolution reconstruction method, reconstructing apparatus and computer system | |
CN111179419B (en) | Three-dimensional key point prediction and deep learning model training method, device and equipment | |
CN106355195B (en) | System and method for measuring image definition value | |
WO2021044122A1 (en) | Scene representation using image processing | |
CN109785322A (en) | Monocular human pose estimation network training method, image processing method and device | |
CN111126249A (en) | Pedestrian re-identification method and device combining big data and Bayes | |
CN110555526A (en) | Neural network model training method, image recognition method and device | |
CN112419419A (en) | System and method for human body pose and shape estimation | |
CN114387513A (en) | Robot grasping method and device, electronic device and storage medium | |
JP2019046334A (en) | Classification model generation apparatus, image data classification apparatus and program thereof | |
CN114549765A (en) | Three-dimensional reconstruction method and device and computer-readable storage medium | |
CN117037215A (en) | Human body posture estimation model training method, estimation device and electronic equipment | |
Wang et al. | Object counting in video surveillance using multi-scale density map regression | |
CN113158970B (en) | Action identification method and system based on fast and slow dual-flow graph convolutional neural network | |
CN109858326A (en) | Weakly supervised online visual tracking method and system based on classification semantics | |
Zhang et al. | Video extrapolation in space and time | |
CN117237756A (en) | Method for training target segmentation model, target segmentation method and related device | |
JP2023527627A (en) | Inference of joint rotation based on inverse kinematics | |
JP2008134939A (en) | Moving object tracking apparatus, moving object tracking method, moving object tracking program with the method described therein, and recording medium with the program stored therein |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||