CN109255801A - Method, apparatus, device and storage medium for tracking the edges of a three-dimensional object in video - Google Patents
- Publication number: CN109255801A (application CN201810880412.7A)
- Authority
- CN
- China
- Prior art keywords
- point
- frame image
- three-dimensional object
- edge point
- previous frame
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments (under G06T7/00—Image analysis; G06T7/20—Analysis of motion)
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods (under G06T7/00—Image analysis; G06T7/70—Determining position or orientation of objects or cameras)
Abstract
Embodiments of the present application provide a method, apparatus, device and storage medium for tracking the edges of a three-dimensional object in video. The method comprises: obtaining the edge points on the edge contour of a target three-dimensional object in the previous frame image; determining, in the current frame image, the match points corresponding to those edge points; determining incorrect matching point pairs and correct matching point pairs according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between each edge point's corresponding match point and its surrounding match points in the current frame image; and computing the relative pose of the target three-dimensional object between the current frame image and the previous frame image according to the correct matching point pairs and the model information of the target three-dimensional object. This makes the computed relative pose of the target three-dimensional object more accurate, so that its edges can be tracked better.
Description
Technical field
Embodiments of the present application relate to the field of video processing, and in particular to a method, apparatus, device and storage medium for tracking the edges of a three-dimensional object in video.
Background
As video capture devices have matured, more and more fields use video as a means of recording and analyzing information. Determining the position of a specific three-dimensional object in a large amount of video, and quickly locating and continuously tracking it, is the basis for further processing and analysis of the video.
Edge-based tracking of a three-dimensional object in video is more robust than point-based tracking, so edge tracking methods are commonly used to track three-dimensional objects in video.
When tracking a three-dimensional object in video with an edge tracking method, incorrect matching point pairs are sometimes found while searching for the match points between the edge contours of the object in two consecutive frames, for reasons such as illumination or background changes. In the prior art, the relative pose is computed directly from data that still contains these incorrect matching point pairs, without handling them, so the computed relative pose of the three-dimensional object is inaccurate and its edges cannot be tracked well.
Summary of the invention
Embodiments of the present application provide a method, apparatus, device and storage medium for tracking the edges of a three-dimensional object in video, to solve the technical problem in the prior art that the relative pose is computed directly from data containing incorrect matching point pairs, without handling those pairs, so that the computed relative pose of the three-dimensional object is inaccurate and its edges cannot be tracked well.
A first aspect of the embodiments of the present application provides a method for tracking the edges of a three-dimensional object in video, comprising: obtaining the edge points on the edge contour of a target three-dimensional object in the previous frame image; determining, in the current frame image, the match points corresponding to the edge points in the previous frame image; determining incorrect matching point pairs and correct matching point pairs according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between each edge point's corresponding match point and its surrounding match points in the current frame image; and computing the relative pose of the target three-dimensional object between the current frame image and the previous frame image according to the correct matching point pairs and the model information of the target three-dimensional object.
A second aspect of the embodiments of the present application provides an apparatus for tracking the edges of a three-dimensional object in video, comprising: an edge point obtaining module, configured to obtain the edge points on the edge contour of a target three-dimensional object in the previous frame image; a match point determining module, configured to determine, in the current frame image, the match points corresponding to the edge points in the previous frame image; a matching point pair determining module, configured to determine incorrect matching point pairs and correct matching point pairs according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between each edge point's corresponding match point and its surrounding match points in the current frame image; and a relative pose computing module, configured to compute the relative pose of the target three-dimensional object between the current frame image and the previous frame image according to the correct matching point pairs and the model information of the target three-dimensional object.
A third aspect of the embodiments of the present application provides a terminal device, comprising: one or more processors; and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in the first aspect above.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method described in the first aspect above.
Based on the above aspects, the embodiments of the present application obtain the edge points on the edge contour of the target three-dimensional object in the previous frame image; determine, in the current frame image, the match points corresponding to those edge points; determine incorrect and correct matching point pairs according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between each edge point's corresponding match point and its surrounding match points in the current frame image; and compute the relative pose of the target three-dimensional object between the current frame image and the previous frame image according to the correct matching point pairs and the model information of the target three-dimensional object. Because incorrect matching point pairs are eliminated while determining the match points between the edge contours of the target object in the two consecutive frames, and only the correct matching point pairs are retained, the relative pose of the target three-dimensional object is computed from correct matches, which makes the computed relative pose more accurate, so that the edges of the target three-dimensional object can be tracked better.
It should be understood that the content described in this summary is not intended to identify key or essential features of the embodiments of the present application, nor is it intended to limit the scope of the present application. Other features of the present application will become easy to understand through the description below.
Brief description of the drawings
Fig. 1 is a flowchart of the method for tracking the edges of a three-dimensional object in video provided by Embodiment 1 of the present application;
Fig. 2 is a flowchart of the method for tracking the edges of a three-dimensional object in video provided by Embodiment 2 of the present application;
Fig. 3 is a structural schematic diagram of the apparatus for tracking the edges of a three-dimensional object in video provided by Embodiment 3 of the present application;
Fig. 4 is a structural schematic diagram of the apparatus for tracking the edges of a three-dimensional object in video provided by Embodiment 4 of the present application;
Fig. 5 is a structural schematic diagram of the terminal device provided by Embodiment 5 of the present application.
Detailed description of the embodiments
The embodiments of the present application are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present application are shown in the drawings, it should be understood that the present application may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided so that the present application will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present application are for exemplary purposes only and are not intended to limit the protection scope of the present application.
The terms "first", "second", "third", "fourth" and the like (if present) in the specification, claims and drawings of the embodiments of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here. Furthermore, the terms "including" and "having" and any variations of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
The embodiments of the present application are described in detail below with reference to the drawings.
Embodiment one
Fig. 1 is a flowchart of the method for tracking the edges of a three-dimensional object in video provided by Embodiment 1 of the present application. As shown in Fig. 1, the executing entity of this embodiment is an apparatus for tracking the edges of a three-dimensional object in video, which can be integrated in a terminal device. The terminal device may be a computer, a laptop, a video processing device, or the like. The method for tracking the edges of a three-dimensional object in video provided by this embodiment includes the following steps.
Step 101: obtain the edge points on the edge contour of the target three-dimensional object in the previous frame image.
Specifically, in this embodiment, the previous frame image is first obtained from video shot by a camera, and the target three-dimensional object appears in the previous frame image. The model of the target three-dimensional object is projected using the pose of the object in the previous frame image, and the edge contour of the target three-dimensional object is obtained from the projection result. Concretely, the edge contour can be obtained from the projection result as follows: render the target three-dimensional object, obtain the z-buffer image from the rendering tool, binarize the z-buffer image, and finally extract the contour from the binary image. Here the z-buffer is a technique that performs "hidden surface removal" when the target three-dimensional object is shaded, so that the parts of the object hidden behind it are not exposed. Edge points are then obtained from the edge contour of the target three-dimensional object.
The edge points may be collected at equal intervals along the edge contour, or the moving edges algorithm may be used to divide the edge contour into short line segments, with the midpoint of each segment determined to be an edge point of the edge contour. Other methods of obtaining edge points may also be used; this embodiment does not limit them.
Step 102: determine, in the current frame image, the match points corresponding to the edge points in the previous frame image.
In this embodiment, the current frame image, which also contains the target three-dimensional object, is obtained from the video shot by the camera. A point-matching method can be used to determine, in the current frame image, the match point corresponding to each edge point in the previous frame image.
Specifically, in this embodiment, the match points can be determined by pixel-value matching: the edge points in the previous frame image are matched point by point against the current frame image by computing, for each edge point in the previous frame image, its similarity to every pixel in the corresponding search range of the current frame image, and determining the pixel with the highest similarity to be the match point of that edge point.
It can be understood that other methods may also be used to determine the match points corresponding to the edge points in the previous frame image; this embodiment does not limit them.
Step 103: determine incorrect matching point pairs and correct matching point pairs according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between each edge point's corresponding match point and its surrounding match points in the current frame image.
Specifically, in this embodiment, after the edge points of the previous frame image are obtained, the surrounding edge points of each edge point are first obtained according to a preset surrounding-pixel range, and the positional relationship between each edge point and its surrounding edge points is determined, where the positional relationship may be the relative position of the edge point with respect to each surrounding edge point. Next, after the match point of each edge point is obtained in the current frame image, the surrounding match points of each match point are obtained according to the same preset surrounding-pixel range, and the positional relationship between each match point and its surrounding match points is determined, where the positional relationship may be the relative position of the match point with respect to each surrounding match point. Finally, the incorrect and correct matching point pairs are determined from these two sets of positional relationships: if the positional relationship of an edge point to its surrounding edge points and the positional relationship of its match point to the surrounding match points satisfy a preset correct-match condition, then the pair formed by that edge point and its match point, together with the pairs formed by its surrounding edge points and the corresponding surrounding match points, are determined to be correct matching point pairs; otherwise they are determined to be incorrect matching point pairs.
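One way the preset correct-match condition could look in code is sketched below, using "the displacement to each neighbour is preserved by the matches" as an assumed stand-in for the patent's positional-relationship test; the function names, the neighbour map, and the `tol` threshold are all illustrative assumptions:

```python
import numpy as np

def split_matches(edge_pts, match_pts, neighbors, tol=2.0):
    """Split matches into correct/incorrect pairs by checking that the
    relative position of each edge point to its neighbours is preserved
    by the corresponding match points, within tolerance `tol`."""
    correct, incorrect = [], []
    for i, nbrs in neighbors.items():
        ok = True
        for j in nbrs:
            d_edge = np.subtract(edge_pts[j], edge_pts[i])
            d_match = np.subtract(match_pts[j], match_pts[i])
            # A large discrepancy in relative position marks a mismatch.
            if np.linalg.norm(d_edge - d_match) > tol:
                ok = False
                break
        (correct if ok else incorrect).append((edge_pts[i], match_pts[i]))
    return correct, incorrect
```

In the usage below, the third match is an outlier, so it (and its neighbour, whose relative position it also breaks) is rejected while the first pair survives.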
Step 104: compute the relative pose of the target three-dimensional object between the current frame image and the previous frame image according to the correct matching point pairs and the model information of the target three-dimensional object.
Specifically, in this embodiment, the incorrect matching point pairs are rejected to obtain the correct matching point pairs, and the relative pose of the target three-dimensional object between the current frame image and the previous frame image is computed from the correct matching point pairs and the model information of the target three-dimensional object. The correct matching point pairs and the model information of the target three-dimensional object can be input into a visual servoing model, which computes the relative pose of the target three-dimensional object between the two frames from them.
It can be understood that, after the relative pose of the target three-dimensional object between the current frame image and the previous frame image is computed, the current pose of the target three-dimensional object in the current frame image can be computed from the pose of the object in the previous frame image and the relative pose.
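The last remark, obtaining the current pose from the previous pose and the relative pose, amounts to composing two transforms. A minimal sketch, assuming both poses are expressed as 4x4 homogeneous matrices (a common convention, not specified by the patent):

```python
import numpy as np

def compose_pose(prev_pose, relative_pose):
    """Given the object pose in the previous frame and the relative pose
    between the two frames (both 4x4 homogeneous transforms), return the
    pose in the current frame: T_cur = T_rel @ T_prev."""
    return relative_pose @ prev_pose
```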
In the method for tracking the edges of a three-dimensional object in video provided by this embodiment, the edge points on the edge contour of the target three-dimensional object in the previous frame image are obtained; the match points corresponding to those edge points are determined in the current frame image; incorrect and correct matching point pairs are determined according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between each edge point's corresponding match point and its surrounding match points in the current frame image; and the relative pose of the target three-dimensional object between the current frame image and the previous frame image is computed according to the correct matching point pairs and the model information of the target three-dimensional object. Because incorrect matching point pairs are eliminated while determining the match points between the edge contours of the target object in the two consecutive frames, and only the correct matching point pairs are retained, the relative pose is computed from correct matches, which makes the computed relative pose of the target three-dimensional object more accurate, so that its edges can be tracked better.
Embodiment two
Fig. 2 is a flowchart of the method for tracking the edges of a three-dimensional object in video provided by Embodiment 2 of the present application. As shown in Fig. 2, the method provided by this embodiment further refines steps 101 to 104 of the method provided by Embodiment 1. The method for tracking the edges of a three-dimensional object in video provided by this embodiment includes the following steps.
Step 201: divide the edge contour of the target three-dimensional object in the previous frame image into multiple line segments using the moving edges algorithm, and determine the midpoint of each line segment to be an edge point on the edge contour of the target three-dimensional object in the previous frame image.
In this embodiment, step 201 is a further refinement of step 101 of the method provided by Embodiment 1.
Further, in this embodiment, the edge contour of the target three-dimensional object in the previous frame image is obtained, and the moving edges algorithm is then used to divide the edge contour into short line segments, all of which fit the contour as closely as possible. The midpoint of each line segment is extracted and determined to be an edge point on the edge contour of the target three-dimensional object in the previous frame image.
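The segmentation-plus-midpoint idea can be sketched as follows; splitting the ordered contour into fixed-length runs and taking each run's chord midpoint is a simplified stand-in for the moving edges segmentation, and the `seg_len` parameter is an assumption:

```python
import numpy as np

def contour_midpoints(contour, seg_len=8):
    """Split a contour (an (N, 2) array of points in order) into
    consecutive runs of `seg_len` points, approximate each run by the
    chord between its endpoints, and return the chord midpoints as the
    edge points of the contour."""
    mids = []
    for start in range(0, len(contour) - seg_len + 1, seg_len):
        a = contour[start]
        b = contour[start + seg_len - 1]
        mids.append((np.asarray(a) + np.asarray(b)) / 2.0)
    return np.array(mids)
```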
Step 202: compute the similarity between the midpoint of each line segment and each pixel in the corresponding search range of the current frame image, and determine the pixel with the highest similarity to be the match point of the corresponding edge point in the previous frame image.
In this embodiment, step 202 is a further refinement of step 102 of the method provided by Embodiment 1.
Further, in this embodiment, a search range is determined for the midpoint of each line segment in the previous frame image. For example, the search range may be a linear range in the current frame image whose search direction is the normal of the line segment and whose offset is n, where n may be 3, 4 or another value; other search ranges may also be used, and this embodiment does not limit them. The similarity between the midpoint of each line segment and each pixel in its corresponding search range is then computed. The similarity between a segment midpoint and a pixel may be a similarity of pixel values, or a similarity of other features; this embodiment does not limit it. The pixel with the highest similarity is then determined to be the match point of the corresponding edge point in the previous frame image.
Preferably, in this embodiment, computing the similarity between the midpoint of each line segment and each pixel in the corresponding search range of the current frame image comprises:
First, determining a corresponding matching template according to the direction of each line segment.
Second, convolving the matching template of each line segment with each pixel in the corresponding search range of the current frame image, to obtain the convolution value between each segment midpoint and each of the corresponding pixels.
Finally, determining the convolution value between each segment midpoint and each corresponding pixel to be the similarity between them.
Further, in this embodiment, the line segments of each direction have a corresponding matching template, which is first determined from the direction of the segment. Then, for a given line segment, its template is convolved with each pixel in the corresponding search range of the current frame image, and the resulting convolution value of each pixel is determined to be the similarity between the segment midpoint and that pixel. For the midpoint of each segment, the pixel with the largest convolution value among its candidate pixels is the match point of that midpoint.
Step 203: compare the positional relationship between each edge point and its surrounding edge points in the previous frame image with the positional relationship between the corresponding match point and its surrounding match points in the current frame image.
Further, in this embodiment, for a given edge point, the positional relationship between that edge point and each of its surrounding edge points in the previous frame image is first determined, and the positional relationship between its corresponding match point and each surrounding match point is determined; the former is then compared with the latter. Whether the two positional relationships differ within a preset position range is judged, and the correct and incorrect matching point pairs are determined according to the result.
Step 204: judge whether the positional relationship between an edge point and its surrounding edge points in the previous frame image and the positional relationship between the corresponding match point and its surrounding match points in the current frame image differ by more than a preset position range; if not, execute step 205; otherwise, execute step 206.
Step 205: determine that the pair formed by the edge point and its match point, together with the pairs formed by its surrounding edge points and the corresponding surrounding match points, are correct matching point pairs.
Step 206: determine that the pair formed by the edge point and its match point, together with the pairs formed by its surrounding edge points and the corresponding surrounding match points, are incorrect matching point pairs.
In this embodiment, steps 203 to 206 are a further refinement of step 103 of the method provided by Embodiment 1.
Further, in combination with steps 204 to 206: if the positional relationship between an edge point and its surrounding edge points in the previous frame image and the positional relationship between the corresponding match point and its surrounding match points in the current frame image differ by more than the preset position range, then the edge point with its match point, and its surrounding edge points with the corresponding surrounding match points, are incorrect matching point pairs. If the difference is less than or equal to the preset position range, then the edge point with its match point, and its surrounding edge points with the corresponding surrounding match points, are correct matching point pairs.
For example: if the positional relationship between an edge point and its surrounding edge points in the previous frame image is such that the lines connecting the edge point to its surrounding edge points form a convex arc, while the positional relationship between the corresponding match point and its surrounding match points in the current frame image is such that the lines connecting them form a concave arc, then the two positional relationships differ by more than the preset position range, and the edge point with its match point, and its surrounding edge points with the corresponding surrounding match points, are incorrect matching point pairs.
Step 207: input the correct matching point pairs and the model information of the target three-dimensional object into the visual servoing model, to compute the relative pose of the target three-dimensional object between the current frame image and the previous frame image.
Step 208: output the relative pose of the target three-dimensional object between the current frame image and the previous frame image.
In this embodiment, steps 207 and 208 are a further refinement of step 104 of the method provided by Embodiment 1.
Further, in this embodiment, the correct matching point pairs and the model information of the target three-dimensional object are input into the visual servoing model, which computes the relative pose of the target three-dimensional object between the current frame image and the previous frame image from them; after the relative pose is computed, it is output from the visual servoing model.
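The patent does not spell out the internals of the visual servoing model, so as a simplified planar stand-in, the following sketch estimates a 2D rigid transform (rotation plus translation) from correct matching point pairs by the standard least-squares (Kabsch/Procrustes) construction; the real method works on the 3D model and full pose:

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares 2D rigid transform mapping the src points (edge
    points) onto the dst points (their correct matches)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets, then SVD.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

Because only correct matching point pairs are fed in, the least-squares fit is not corrupted by outliers, which mirrors the accuracy argument made throughout this application.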
In the method for tracking the edge of a three-dimensional object in a video provided in this embodiment, the edge contour of the target three-dimensional object in the previous frame image is divided into multiple line segments by using a moving edge algorithm, and the midpoint of each line segment is determined as an edge point on the edge contour of the target three-dimensional object in the previous frame image. The similarity between the midpoint of each line segment and each pixel in the corresponding search range in the current frame image is calculated, and the pixel with the highest similarity is determined as the matching point corresponding to the edge point in the previous frame image. The positional relationship between each edge point and its surrounding edge points in the previous frame image is then compared with the positional relationship between the corresponding matching point and its surrounding matching points in the current frame image. If the difference between the two positional relationships does not exceed a preset position range, the edge point and its corresponding matching point, together with the surrounding edge points and their corresponding surrounding matching points, are determined to be correct matching point pairs; otherwise, they are determined to be mismatching point pairs. The correct matching point pairs and the model information of the target three-dimensional object are input into the visual servo model, which calculates and outputs the relative attitude of the target three-dimensional object in the current frame image and the previous frame image. Because each matching point is determined according to the similarity between the midpoint of a line segment and each pixel in the corresponding search range of the current frame image, the matching point of each edge point is determined more accurately, the correct matching point pairs and the mismatching point pairs are distinguished more accurately, and the accuracy of the calculated relative attitude of the three-dimensional object is further improved.
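The segmentation step above can be sketched in a few lines of Python. This is a hypothetical illustration rather than the patent's implementation: the contour is assumed to be an ordered array of pixel coordinates, and the fixed length of five samples per segment is an arbitrary choice.

```python
import numpy as np

def contour_to_edge_points(contour, points_per_segment=5):
    """Split an ordered edge contour (N x 2 array of pixel coordinates)
    into line segments of `points_per_segment` samples each and return
    the midpoint of every segment. These midpoints play the role of the
    edge points that are matched against the next frame."""
    contour = np.asarray(contour, dtype=float)
    midpoints = []
    for start in range(0, len(contour) - points_per_segment + 1,
                       points_per_segment):
        seg = contour[start:start + points_per_segment]
        # Midpoint of the chord joining the segment's two endpoints.
        midpoints.append((seg[0] + seg[-1]) / 2.0)
    return np.array(midpoints)
```

For a straight contour of ten samples this yields two segments and thus two edge points, one per chord midpoint.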
Embodiment three
Fig. 3 is a structural schematic diagram of the device for tracking the edge of a three-dimensional object in a video provided by Embodiment three of the present application. As shown in Fig. 3, the device 30 for tracking the edge of a three-dimensional object in a video provided in this embodiment includes: an edge point acquisition module 31, a matching point determining module 32, a matching point pair determining module 33 and a relative attitude calculation module 34.
The edge point acquisition module 31 is configured to obtain edge points on the edge contour of the target three-dimensional object in the previous frame image. The matching point determining module 32 is configured to determine, in the current frame image, matching points corresponding to the edge points in the previous frame image. The matching point pair determining module 33 is configured to determine mismatching point pairs and correct matching point pairs according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between the matching point corresponding to each edge point and its surrounding matching points in the current frame image. The relative attitude calculation module 34 is configured to calculate the relative attitude of the target three-dimensional object in the current frame image and the previous frame image according to the correct matching point pairs and the model information of the target three-dimensional object.
The device for tracking the edge of a three-dimensional object in a video provided in this embodiment can execute the technical solution of the method embodiment shown in Fig. 1. Its implementation principle and technical effects are similar, and details are not repeated here.
Embodiment four
Fig. 4 is a structural schematic diagram of the device for tracking the edge of a three-dimensional object in a video provided by Embodiment four of the present application. As shown in Fig. 4, the device 40 provided in this embodiment is based on the device 30 for tracking the edge of a three-dimensional object in a video provided in Embodiment three of the present application; further, the matching point determining module 32 includes: a similarity calculation submodule 321 and a matching point determining submodule 322.
Further, the edge point acquisition module 31 is specifically configured to: divide the edge contour of the target three-dimensional object in the previous frame image into multiple line segments by using a moving edge algorithm; and determine the midpoint of each line segment as an edge point on the edge contour of the target three-dimensional object in the previous frame image.
Further, the similarity calculation submodule 321 is configured to calculate the similarity between the midpoint of each line segment and each pixel in the corresponding search range in the current frame image. The matching point determining submodule 322 is configured to determine the pixel with the highest similarity as the matching point corresponding to the edge point in the previous frame image.
Further, the similarity calculation submodule 321 is specifically configured to: determine a corresponding matching template according to the direction of each line segment; perform a convolution calculation between the matching template corresponding to each line segment and each pixel in the corresponding search range in the current frame image, to obtain the convolution value between the midpoint of each line segment and each corresponding pixel; and determine the convolution value between the midpoint of each line segment and each corresponding pixel as the similarity between the midpoint of the line segment and that pixel.
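The direction-dependent template and convolution similarity can be sketched as follows. The 3×3 Sobel-style kernels and the rule for choosing between them are illustrative assumptions; the patent specifies only that the matching template is determined from the segment direction and that the convolution value serves as the similarity score.

```python
import numpy as np

# 3x3 Sobel-style gradient kernels used as illustrative matching templates.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # responds to horizontal intensity change
SOBEL_Y = SOBEL_X.T                            # responds to vertical intensity change

def template_for_direction(dx, dy):
    """Pick a matching template from the segment direction (dx, dy): a
    mostly vertical segment lies along a horizontal intensity change, so
    the horizontal-gradient kernel responds to it, and vice versa."""
    return SOBEL_Y if abs(dx) >= abs(dy) else SOBEL_X

def similarity(image, x, y, template):
    """Magnitude of the convolution value of the template centred at
    pixel (x, y); used as the similarity between a segment midpoint and
    this candidate pixel."""
    h = template.shape[0] // 2
    patch = image[y - h:y + h + 1, x - h:x + h + 1]
    return float(abs((patch * template).sum()))

def best_match(image, candidates, template):
    """Return the candidate (x, y) with the highest similarity, i.e. the
    matching point within the edge point's search range."""
    return max(candidates, key=lambda p: similarity(image, p[0], p[1], template))
```

On a synthetic image with a vertical step edge, a vertically oriented segment selects the horizontal-gradient template, and the candidate lying on the step responds most strongly.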
Further, the matching point pair determining module 33 is specifically configured to: compare the positional relationship between each edge point and its surrounding edge points in the previous frame image with the positional relationship between the matching point corresponding to each edge point and its surrounding matching points in the current frame image; if the positional relationship between an edge point and its surrounding edge points in the previous frame image differs from the positional relationship between the corresponding matching point and its surrounding matching points in the current frame image by more than a preset position range, determine that the edge point and its corresponding matching point, and the surrounding edge points and their corresponding surrounding matching points, are mismatching point pairs; and if the difference is less than or equal to the preset position range, determine that they are correct matching point pairs.
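A minimal sketch of this consistency check, under the assumption (not stated in the patent) that the "positional relationship" is the pixel offset between a point and its contour neighbours, and that the preset position range is a Euclidean tolerance:

```python
import numpy as np

def filter_matches(edge_pts, match_pts, k=1, tol=3.0):
    """Keep index i only if the offset from edge point i to each of its k
    following neighbours on the contour agrees, within `tol` pixels, with
    the offset between the corresponding matching points. A disagreement
    marks the pair as a mismatch, which is discarded."""
    edge_pts = np.asarray(edge_pts, dtype=float)
    match_pts = np.asarray(match_pts, dtype=float)
    n = len(edge_pts)
    keep = []
    for i in range(n):
        consistent = True
        for d in range(1, k + 1):
            j = (i + d) % n  # neighbouring edge point along the contour
            rel_edge = edge_pts[j] - edge_pts[i]
            rel_match = match_pts[j] - match_pts[i]
            if np.linalg.norm(rel_edge - rel_match) > tol:
                consistent = False
                break
        if consistent:
            keep.append(i)
    return keep
```

A pure translation of all matching points preserves every offset, so nothing is discarded; a single wildly displaced matching point breaks the offsets to its neighbours and is filtered out.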
Further, the relative attitude calculation module 34 is specifically configured to: input the correct matching point pairs and the model information of the target three-dimensional object into the visual servo model to calculate the relative attitude of the target three-dimensional object in the current frame image and the previous frame image; and output the relative attitude of the target three-dimensional object in the current frame image and the previous frame image.
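The visual servo model itself is not detailed in this passage. As a simplified, hypothetical stand-in for the pose update it performs, the underlying least-squares principle can be illustrated in 2D: recover the rigid transform (rotation and translation) that best maps the edge points onto their correct matching points, in closed form via the Kabsch/Procrustes method.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Closed-form least-squares rotation R and translation t such that
    dst[i] ≈ R @ src[i] + t (Kabsch / Procrustes). A 2D stand-in for the
    pose update a visual servo model derives from correct matching pairs."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Eliminating the mismatching pairs first matters here: least-squares estimates of this kind are sensitive to outliers, which is exactly why the patent filters them before the pose calculation.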
The device for tracking the edge of a three-dimensional object in a video provided in this embodiment can execute the technical solution of the method embodiment shown in Fig. 2. Its implementation principle and technical effects are similar, and details are not repeated here.
Embodiment five
Fig. 5 is a structural schematic diagram of a terminal device provided by Embodiment five of the present application. As shown in Fig. 5, the terminal device 50 provided in this embodiment includes: one or more processors 51 and a memory 52.
The memory 52 is configured to store one or more programs. When the one or more programs are executed by the one or more processors 51, the one or more processors 51 implement the method for tracking the edge of a three-dimensional object in a video provided by Embodiment one or Embodiment two of the present application.
For the related description, reference may be made to the corresponding descriptions and effects of the steps in Fig. 1 to Fig. 2, and details are not repeated here.
Embodiment six
This embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the method for tracking the edge of a three-dimensional object in a video provided by Embodiment one or Embodiment two of the present application.
With the computer-readable storage medium provided in this embodiment, the edge points on the edge contour of the target three-dimensional object in the previous frame image are obtained; the matching points corresponding to the edge points in the previous frame image are determined in the current frame image; mismatching point pairs and correct matching point pairs are determined according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between the matching point corresponding to each edge point and its surrounding matching points in the current frame image; and the relative attitude of the target three-dimensional object in the current frame image and the previous frame image is calculated according to the correct matching point pairs and the model information of the target three-dimensional object. Because the mismatching point pairs are eliminated and only the correct matching point pairs are retained when determining the matching points between the edge contours of the target three-dimensional object in the two adjacent frames, the relative attitude calculated from the correct matching point pairs is more accurate, so that the edge of the target three-dimensional object is tracked better.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative. The division into modules is only a division of logical functions; in actual implementation there may be other divisions: multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or modules may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of this application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
The program code for implementing the methods of this application may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer or another programmable data processing device, so that when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be carried out. The program code may be executed entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, device or apparatus. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, device or apparatus, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In addition, although the operations are depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains several specific implementation details, these should not be construed as limitations on the scope of the disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single implementation. Conversely, various features described in the context of a single implementation may also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. On the contrary, the specific features and acts described above are merely example forms of implementing the claims.
Claims (14)
1. A method for tracking the edge of a three-dimensional object in a video, characterized by comprising:
obtaining edge points on the edge contour of a target three-dimensional object in a previous frame image;
determining, in a current frame image, matching points corresponding to the edge points in the previous frame image;
determining mismatching point pairs and correct matching point pairs according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between the matching point corresponding to each edge point and its surrounding matching points in the current frame image; and
calculating the relative attitude of the target three-dimensional object in the current frame image and the previous frame image according to the correct matching point pairs and model information of the target three-dimensional object.
2. The method according to claim 1, characterized in that the obtaining edge points on the edge contour of the target three-dimensional object in the previous frame image comprises:
dividing the edge contour of the target three-dimensional object in the previous frame image into multiple line segments by using a moving edge algorithm; and
determining the midpoint of each line segment as an edge point on the edge contour of the target three-dimensional object in the previous frame image.
3. The method according to claim 2, characterized in that the determining, in the current frame image, matching points corresponding to the edge points in the previous frame image comprises:
calculating the similarity between the midpoint of each line segment and each pixel in the corresponding search range in the current frame image; and
determining the pixel with the highest similarity as the matching point corresponding to the edge point in the previous frame image.
4. The method according to claim 3, characterized in that the calculating the similarity between the midpoint of each line segment and each pixel in the corresponding search range in the current frame image comprises:
determining a corresponding matching template according to the direction of each line segment;
performing a convolution calculation between the matching template corresponding to each line segment and each pixel in the corresponding search range in the current frame image, to obtain the convolution value between the midpoint of each line segment and each corresponding pixel; and
determining the convolution value between the midpoint of each line segment and each corresponding pixel as the similarity between the midpoint of the line segment and that pixel.
5. The method according to claim 1, characterized in that the determining mismatching point pairs and correct matching point pairs according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between the matching point corresponding to each edge point and its surrounding matching points in the current frame image specifically comprises:
comparing the positional relationship between each edge point and its surrounding edge points in the previous frame image with the positional relationship between the matching point corresponding to each edge point and its surrounding matching points in the current frame image;
if the positional relationship between an edge point and its surrounding edge points in the previous frame image differs from the positional relationship between the corresponding matching point and its surrounding matching points in the current frame image by more than a preset position range, determining that the edge point and its corresponding matching point, and the surrounding edge points and their corresponding surrounding matching points, are mismatching point pairs; and
if the difference is less than or equal to the preset position range, determining that they are correct matching point pairs.
6. The method according to any one of claims 1-5, characterized in that the calculating the relative attitude of the target three-dimensional object in the current frame image and the previous frame image according to the correct matching point pairs and the model information of the target three-dimensional object specifically comprises:
inputting the correct matching point pairs and the model information of the target three-dimensional object into a visual servo model to calculate the relative attitude of the target three-dimensional object in the current frame image and the previous frame image; and
outputting the relative attitude of the target three-dimensional object in the current frame image and the previous frame image.
7. A device for tracking the edge of a three-dimensional object in a video, characterized by comprising:
an edge point acquisition module, configured to obtain edge points on the edge contour of a target three-dimensional object in a previous frame image;
a matching point determining module, configured to determine, in a current frame image, matching points corresponding to the edge points in the previous frame image;
a matching point pair determining module, configured to determine mismatching point pairs and correct matching point pairs according to the positional relationship between each edge point and its surrounding edge points in the previous frame image and the positional relationship between the matching point corresponding to each edge point and its surrounding matching points in the current frame image; and
a relative attitude calculation module, configured to calculate the relative attitude of the target three-dimensional object in the current frame image and the previous frame image according to the correct matching point pairs and model information of the target three-dimensional object.
8. The device according to claim 7, characterized in that the edge point acquisition module is specifically configured to:
divide the edge contour of the target three-dimensional object in the previous frame image into multiple line segments by using a moving edge algorithm; and determine the midpoint of each line segment as an edge point on the edge contour of the target three-dimensional object in the previous frame image.
9. The device according to claim 8, characterized in that the matching point determining module specifically comprises:
a similarity calculation submodule, configured to calculate the similarity between the midpoint of each line segment and each pixel in the corresponding search range in the current frame image; and
a matching point determining submodule, configured to determine the pixel with the highest similarity as the matching point corresponding to the edge point in the previous frame image.
10. The device according to claim 9, characterized in that the similarity calculation submodule is specifically configured to:
determine a corresponding matching template according to the direction of each line segment; perform a convolution calculation between the matching template corresponding to each line segment and each pixel in the corresponding search range in the current frame image, to obtain the convolution value between the midpoint of each line segment and each corresponding pixel; and determine the convolution value between the midpoint of each line segment and each corresponding pixel as the similarity between the midpoint of the line segment and that pixel.
11. The device according to claim 7, characterized in that the matching point pair determining module is specifically configured to:
compare the positional relationship between each edge point and its surrounding edge points in the previous frame image with the positional relationship between the matching point corresponding to each edge point and its surrounding matching points in the current frame image; if the positional relationship between an edge point and its surrounding edge points in the previous frame image differs from the positional relationship between the corresponding matching point and its surrounding matching points in the current frame image by more than a preset position range, determine that the edge point and its corresponding matching point, and the surrounding edge points and their corresponding surrounding matching points, are mismatching point pairs; and if the difference is less than or equal to the preset position range, determine that they are correct matching point pairs.
12. The device according to any one of claims 7-11, characterized in that the relative attitude calculation module is specifically configured to:
input the correct matching point pairs and the model information of the target three-dimensional object into a visual servo model to calculate the relative attitude of the target three-dimensional object in the current frame image and the previous frame image; and output the relative attitude of the target three-dimensional object in the current frame image and the previous frame image.
13. A terminal device, characterized by comprising:
one or more processors; and
a memory, configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-6.
14. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810880412.7A CN109255801B (en) | 2018-08-03 | 2018-08-03 | Method, device and equipment for tracking edges of three-dimensional object in video and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109255801A true CN109255801A (en) | 2019-01-22 |
CN109255801B CN109255801B (en) | 2022-02-22 |
Family
ID=65049258
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810880412.7A Active CN109255801B (en) | 2018-08-03 | 2018-08-03 | Method, device and equipment for tracking edges of three-dimensional object in video and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109255801B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109977833A (en) * | 2019-03-19 | 2019-07-05 | 网易(杭州)网络有限公司 | Object tracking method, object tracking device, storage medium and electronic equipment |
CN111275827A (en) * | 2020-02-25 | 2020-06-12 | 北京百度网讯科技有限公司 | Edge-based augmented reality three-dimensional tracking registration method and device and electronic equipment |
CN112435294A (en) * | 2020-11-02 | 2021-03-02 | 中国科学院深圳先进技术研究院 | Six-degree-of-freedom attitude tracking method of target object and terminal equipment |
WO2022048468A1 (en) * | 2020-09-01 | 2022-03-10 | 腾讯科技(深圳)有限公司 | Planar contour recognition method and apparatus, computer device, and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7403634B2 (en) * | 2002-05-23 | 2008-07-22 | Kabushiki Kaisha Toshiba | Object tracking apparatus and method |
CN101251926A (en) * | 2008-03-20 | 2008-08-27 | 北京航空航天大学 | Remote sensing image registration method based on local configuration covariance matrix |
CN103116895A (en) * | 2013-03-06 | 2013-05-22 | 清华大学 | Method and device of gesture tracking calculation based on three-dimensional model |
CN103177269A (en) * | 2011-12-23 | 2013-06-26 | 北京三星通信技术研究有限公司 | Equipment and method used for estimating object posture |
CN103530881A (en) * | 2013-10-16 | 2014-01-22 | 北京理工大学 | Outdoor augmented reality mark-point-free tracking registration method applicable to mobile terminal |
CN105976399A (en) * | 2016-04-29 | 2016-09-28 | 北京航空航天大学 | Moving object detection method based on SIFT (Scale Invariant Feature Transform) feature matching |
CN107016704A (en) * | 2017-03-09 | 2017-08-04 | 杭州电子科技大学 | A kind of virtual reality implementation method based on augmented reality |
CN107330928A (en) * | 2017-06-09 | 2017-11-07 | 北京理工大学 | Based on the Image Feature Matching method for improving Shape context |
CN107833270A (en) * | 2017-09-28 | 2018-03-23 | 浙江大学 | Real-time object dimensional method for reconstructing based on depth camera |
- 2018-08-03 CN CN201810880412.7A patent/CN109255801B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7403634B2 (en) * | 2002-05-23 | 2008-07-22 | Kabushiki Kaisha Toshiba | Object tracking apparatus and method |
CN101251926A (en) * | 2008-03-20 | 2008-08-27 | 北京航空航天大学 | Remote sensing image registration method based on local configuration covariance matrix |
CN103177269A (en) * | 2011-12-23 | 2013-06-26 | 北京三星通信技术研究有限公司 | Equipment and method used for estimating object posture |
CN103116895A (en) * | 2013-03-06 | 2013-05-22 | 清华大学 | Method and device of gesture tracking calculation based on three-dimensional model |
CN103530881A (en) * | 2013-10-16 | 2014-01-22 | 北京理工大学 | Outdoor augmented reality mark-point-free tracking registration method applicable to mobile terminal |
CN105976399A (en) * | 2016-04-29 | 2016-09-28 | 北京航空航天大学 | Moving object detection method based on SIFT (Scale Invariant Feature Transform) feature matching |
CN107016704A (en) * | 2017-03-09 | 2017-08-04 | 杭州电子科技大学 | A kind of virtual reality implementation method based on augmented reality |
CN107330928A (en) * | 2017-06-09 | 2017-11-07 | 北京理工大学 | Based on the Image Feature Matching method for improving Shape context |
CN107833270A (en) * | 2017-09-28 | 2018-03-23 | 浙江大学 | Real-time object dimensional method for reconstructing based on depth camera |
Non-Patent Citations (5)
Title |
---|
ANGELIQUE LOESCH 等: "Generic edgelet-based tracking of 3D objects in real-time", 《2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS)》 * |
GUOFENG WANG 等: "Global optimal searching for textureless 3D object tracking", 《THE VISUAL COMPUTER》 * |
徐畅: "基于FPGA的单目标跟踪系统设计", 《中国优秀硕士学位论文全文数据库信息科技辑》 * |
曾晓奇 等: "基于边的自适应实时三维跟踪", 《计算机应用》 * |
杨亚飞 等: "一种基于多尺度轮廓点空间关系特征的形状匹配方法", 《自动化学报》 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109977833A (en) * | 2019-03-19 | 2019-07-05 | 网易(杭州)网络有限公司 | Object tracking method, object tracking device, storage medium and electronic equipment |
CN109977833B (en) * | 2019-03-19 | 2021-08-13 | 网易(杭州)网络有限公司 | Object tracking method, object tracking device, storage medium, and electronic apparatus |
CN111275827A (en) * | 2020-02-25 | 2020-06-12 | 北京百度网讯科技有限公司 | Edge-based augmented reality three-dimensional tracking registration method and device and electronic equipment |
CN111275827B (en) * | 2020-02-25 | 2023-06-16 | 北京百度网讯科技有限公司 | Edge-based augmented reality three-dimensional tracking registration method and device and electronic equipment |
WO2022048468A1 (en) * | 2020-09-01 | 2022-03-10 | 腾讯科技(深圳)有限公司 | Planar contour recognition method and apparatus, computer device, and storage medium |
CN112435294A (en) * | 2020-11-02 | 2021-03-02 | 中国科学院深圳先进技术研究院 | Six-degree-of-freedom attitude tracking method of target object and terminal equipment |
CN112435294B (en) * | 2020-11-02 | 2023-12-08 | 中国科学院深圳先进技术研究院 | Six-degree-of-freedom gesture tracking method of target object and terminal equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109255801B (en) | 2022-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111402290B (en) | Action restoration method and device based on skeleton key points | |
CN108875524B (en) | Sight estimation method, device, system and storage medium | |
CN109255801A (en) | The method, apparatus, equipment and storage medium of three-dimension object Edge Following in video | |
Wan et al. | Teaching robots to do object assembly using multi-modal 3d vision | |
He et al. | Sparse template-based 6-D pose estimation of metal parts using a monocular camera | |
CN105023010A (en) | Face living body detection method and system | |
CN102971768B (en) | Posture state estimation unit and posture state method of estimation | |
CN110281231B (en) | Three-dimensional vision grabbing method for mobile robot for unmanned FDM additive manufacturing | |
JP2004333422A (en) | Image processing device | |
CN109241844A (en) | Attitude estimation method, apparatus, equipment and the storage medium of three-dimension object | |
CN112348890B (en) | Space positioning method, device and computer readable storage medium | |
Mittrapiyanumic et al. | Calculating the 3d-pose of rigid-objects using active appearance models | |
Chen et al. | Projection-based augmented reality system for assembly guidance and monitoring | |
Van Tran et al. | BiLuNetICP: A deep neural network for object semantic segmentation and 6D pose recognition | |
Ji et al. | An integrated linear technique for pose estimation from different geometric features | |
CN109785444A (en) | Recognition methods, device and the mobile terminal of real plane in image | |
Yoon et al. | A new approach to the use of edge extremities for model-based object tracking | |
Lim et al. | Use of log polar space for foveation and feature recognition | |
CN109872343B (en) | Weak texture object posture tracking method, system and device | |
Duong et al. | Accurate sparse feature regression forest learning for real-time camera relocalization | |
Azad et al. | Accurate shape-based 6-dof pose estimation of single-colored objects | |
Wolnitza et al. | 3D object reconstruction and 6D-pose estimation from 2D shape for robotic grasping of objects | |
KR101900903B1 (en) | Image orientation estimation method based on center of mass of partitioned images, character recognition apparatus and method using thereof | |
US20240046593A1 (en) | Modelling method for making a virtual model of a user's head | |
JP7207396B2 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||