CN105447841B - Image matching method and video processing method - Google Patents
Image matching method and video processing method
- Publication number
- CN105447841B CN105447841B CN201410234637.7A CN201410234637A CN105447841B CN 105447841 B CN105447841 B CN 105447841B CN 201410234637 A CN201410234637 A CN 201410234637A CN 105447841 B CN105447841 B CN 105447841B
- Authority
- CN
- China
- Prior art keywords
- image
- matching
- feature point
- matched feature point
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The present invention relates to an image matching method and a video processing method. The image matching method comprises: dividing an image scene into regions to obtain sub-regions; extracting and matching feature points in a first image and a second image to obtain matched feature points between the first image and the second image; grouping the matched feature points according to the sub-regions; selecting, according to a matching model between the first image and the second image, a subset of the matched feature points, the subset involving as many groups as possible; and fitting the matching model with the subset of matched feature points to obtain an image matching result. The present invention improves the robustness of matching during video stabilization.
Description
Technical field
The present invention relates to the field of image processing, and in particular to an image matching method and a video processing method.
Background technique
In practical camera systems, video captured from a moving platform such as a vehicle, a hand-held device or an aircraft contains not only the deliberate motion of the imaging system but also the random motion of the platform. The unstable video produced by this random motion is tiring to watch and makes it harder to extract useful information. Converting unstable video into stable video is therefore of great practical significance.
Video stabilization, also called video de-jitter, is an important video processing technique. It aims to eliminate video jitter while keeping the picture clear and steady, and it also helps video compression, thereby improving video quality and processing speed.
Video jitter refers to the shaking and blur in a video sequence caused by inconsistent camera motion during shooting. To eliminate it, the processing pipeline must extract the true global motion parameters of the camera and then compensate the camera motion with a suitable transformation, so that the resulting picture is smooth and stable.
Current methods for removing video jitter include the pixel-based method, the block matching method, the phase correlation method and the feature matching method.
The pixel-based method estimates motion from the relationships between pixel gray values, but it is sensitive to noise and requires images with rich information.
The block matching method estimates motion for a block of pixels as a whole and is therefore more robust than the pixel-based method; however, its accuracy and computational complexity depend heavily on the number and size of the blocks, the search range and the search strategy.
The phase correlation method estimates the direction and speed of motion by computing the cross-power spectrum of adjacent frames. It is fairly noise-resistant, but computationally expensive and vulnerable to interference from local motion.
The feature matching method is based on properties of human vision: it estimates the camera's global motion parameters from features extracted and matched between adjacent frames. Compared with the other algorithms, it is closer to how the human visual system processes motion information. However, when the scene contains other moving objects, different regions of the scene exhibit different motion parameters, and the extracted feature points may be confined to a region governed by a single motion parameter. The result is then limited by the feature extraction, which hurts the robustness and accuracy of matching.
Summary of the invention
The technical problem solved by the technical solution of the present invention is how to improve the robustness of matching during video stabilization.
To solve the above technical problem, the technical solution of the present invention provides an image matching method, comprising:
dividing an image scene into regions to obtain sub-regions;
extracting and matching feature points in a first image and a second image to obtain matched feature points between the first image and the second image;
grouping the matched feature points according to the sub-regions;
selecting, according to a matching model between the first image and the second image, a subset of the matched feature points, the subset involving as many groups as possible;
fitting the matching model with the subset of matched feature points to obtain an image matching result.
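The claimed steps above can be sketched as a small skeleton. This is an illustrative reading of the claim, not the patent's implementation; every name (`match_images`, `assign_region`, `fit`, `score`) is an assumption, and the model-fitting and scoring callbacks are left to the caller.

```python
import random

def match_images(matches, assign_region, model_size, fit, score, n_iter=50, seed=0):
    """Toy skeleton of the claimed flow (illustrative only).

    matches       : list of (pt_in_img1, pt_in_img2) matched feature points
    assign_region : pt -> sub-region id (the scene partition of the first step)
    model_size    : number of matches needed to fit the matching model
    fit / score   : fit a model from a minimal sample; score it over all matches
    """
    rng = random.Random(seed)
    # group matches by the sub-region their point in the second image falls into
    groups = {}
    for m in matches:
        groups.setdefault(assign_region(m[1]), []).append(m)
    best = None
    for _ in range(n_iter):
        # draw the sample from as many distinct groups as possible
        picked = rng.sample(list(groups), min(model_size, len(groups)))
        sample = [rng.choice(groups[g]) for g in picked]
        while len(sample) < model_size:            # top up if fewer groups than needed
            sample.append(rng.choice(matches))
        model = fit(sample)
        if best is None or score(model, matches) > score(best, matches):
            best = model
    return best
```

With a pure-translation toy model, the skeleton recovers the global shift even though the callbacks are trivial.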
Optionally, the region division of the image scene is performed on the basis of the spatial positional relationships of the image scene.
Optionally, the region division of the image scene comprises: dividing the image scene evenly.
Optionally, the region division of the image scene comprises: dividing the image scene into at least three equal parts in the horizontal or vertical direction.
Optionally, the region division of the image scene comprises: dividing the image scene into equal angular sectors around the center point of the image scene.
Optionally, the matching model is an affine transformation model.
Optionally, the grouping of the matched feature points corresponds to the division into sub-regions.
Optionally, the matching model is fitted with the RANSAC algorithm.
To solve the above technical problem, the technical solution of the present invention further provides an image matching method, comprising:
extracting and matching feature points in a first image and a second image to obtain matched feature points between the first image and the second image;
dividing the image scene over which the matched feature points are distributed into regions to obtain sub-regions;
grouping the matched feature points according to the sub-regions;
selecting, according to a matching model between the first image and the second image, a subset of the matched feature points, the subset involving as many groups as possible;
fitting the matching model with the subset of matched feature points to obtain an image matching result.
Optionally, the region division of the image scene over which the matched feature points are distributed is performed on the basis of the spatial positional relationships of that image scene.
Optionally, the region division comprises: dividing the image scene over which the matched feature points are distributed into equal angular sectors around its center point.
To solve the above technical problem, the technical solution of the present invention further provides a video processing method, comprising:
fitting a matching model between adjacent frame images using the matching method described above;
performing motion compensation on the frame images based on the matching model.
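The compensation step above can be illustrated with a deliberately simplified, translation-only sketch: inter-frame shifts fitted by the matching model are accumulated into a camera path, the path is smoothed, and each frame is shifted by the deviation from the smoothed path. The function name, the moving-average smoother and the translation-only model are all illustrative assumptions, not the patent's method.

```python
def stabilize_offsets(frame_shifts, radius=1):
    """Given raw per-frame global shifts (dx, dy) fitted between adjacent
    frames, return the compensation (cx, cy) to apply to each frame: the
    deviation of the accumulated camera path from its moving average.
    """
    # accumulate inter-frame shifts into an absolute camera path
    path, x, y = [], 0.0, 0.0
    for dx, dy in frame_shifts:
        x += dx; y += dy
        path.append((x, y))
    comp = []
    for i in range(len(path)):
        lo, hi = max(0, i - radius), min(len(path), i + radius + 1)
        sx = sum(p[0] for p in path[lo:hi]) / (hi - lo)   # smoothed path x
        sy = sum(p[1] for p in path[lo:hi]) / (hi - lo)   # smoothed path y
        comp.append((sx - path[i][0], sy - path[i][1]))
    return comp
```

For a steady pan of one pixel per frame, the compensation is small and symmetric around zero, which is the desired behavior: deliberate motion survives, only the deviation is removed.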
The technical solution of the present invention has at least the following technical effects:
It divides a video-image scene containing two or more regions with different motion parameters, and groups the matched feature points in each divided scene region accordingly, ensuring that the matched feature points used to compute the matching model in each round are evenly distributed over the whole scene, which guarantees the robustness of the final matching model.
It further groups the matched feature points by the spatial positional relationships of the image scene and draws matched feature points group by group, involving as many groups as possible in the selection, so that the estimated matching model fits the global model parameters; the algorithmic cost is very low and little computation is consumed.
When the matching model is fitted with the RANSAC algorithm, the grouped selection of matched feature points ensures that the points chosen in each RANSAC iteration to compute the matching model are spread over as much of the image scene as possible, maximizing coverage of the whole scene and reducing the probability that the algorithm converges to a local optimum. The matching model estimated from spatially distributed matched feature points is thus a better estimate of the global matching model parameters, avoids falling into local matching results, and further guarantees the robustness of the matching algorithm.
Brief description of the drawings
Fig. 1 is a flow diagram of an image matching method provided by the technical solution of the present invention;
Fig. 2 is a schematic diagram of the content of a horizontally shot field of view of an image capture apparatus;
Fig. 3 is a schematic diagram of the result of dividing an image scene in one division mode;
Fig. 4 is a schematic diagram of the result of dividing a horizontally shot image scene;
Fig. 5 is a schematic diagram of the result of dividing a vertically shot image scene;
Fig. 6 is a schematic diagram of the result of dividing an image scene in another division mode;
Fig. 7 is a schematic diagram of the distribution of matched feature points on an image;
Fig. 8 is a schematic diagram of the distribution of matched feature points on the sub-regions obtained by dividing the image scene;
Fig. 9 is a flow diagram of another image matching method provided by the technical solution of the present invention;
Fig. 10 is a schematic diagram of selecting the image scene over which the matched feature points are distributed, under one definition;
Fig. 11 is a schematic diagram of selecting that image scene under another definition;
Fig. 12 is a schematic diagram of selecting that image scene under yet another definition;
Fig. 13 is a schematic diagram of the distribution of matched feature points on the sub-regions obtained by dividing the image scene over which the matched feature points are distributed;
Fig. 14 is a schematic diagram of that distribution when the scene is divided in one way;
Fig. 15 is a schematic diagram of that distribution when the scene is divided in another way;
Fig. 16 is a flow diagram of a video processing method provided by the technical solution of the present invention.
Specific embodiment
In order to keep the purpose of the present invention, feature and effect more obvious and easy to understand, with reference to the accompanying drawing to of the invention
Specific embodiment elaborates.
In the following description, numerous specific details are set forth in order to facilitate a full understanding of the present invention, but the present invention can be with
Implemented using other than the one described here mode, therefore the present invention is not limited by the specific embodiments disclosed below.
Embodiment one
An image matching method, as shown in Fig. 1, comprises the following steps:
Step S100: divide the image scene into regions to obtain sub-regions.
The image scene refers to the picture content that an image capture apparatus such as a video camera can take in within its field of view. In this embodiment, dividing the image scene into regions means dividing the field of view of the image capture apparatus.
Referring to Fig. 2, the field of view 1 of the image capture apparatus is a horizontally shot field of view. The picture content in field of view 1 includes a person, an animal and a static background: the region corresponding to the person is hatched diagonally, the region corresponding to the animal is hatched with dotted lines, and the region corresponding to the static background is unhatched.
In this step, there are several possible division modes.
The first division mode:
The division can correspond to the scene type selected on the capture apparatus.
For example, if the scene type selected on the capture apparatus is a portrait scene, then, considering that a person is usually at the center of the picture, the region division of the image scene comprises: dividing the image scene into equal angular sectors around the center point of the image scene.
Referring to Fig. 3, point A can be taken as the center of the image scene, and the field of view 1 of the image capture apparatus (that is, the image scene) can be divided at 120° intervals to obtain sub-region 11, sub-region 12 and sub-region 13.
Of course, the choice of center point and angle can be customized on the capture apparatus. For a landscape scene, for example, the center point can be chosen on the feature boundary between different objects; for a night scene, the center point may be chosen at the junction between pixels brighter than a certain brightness value and the remaining pixels; and the angle can be 45°, 90° or 180°.
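Assigning a point to one of the equal angular sectors of the first division mode reduces to binning its angle around the center. A minimal sketch (the function name is an illustrative assumption):

```python
import math

def angular_sector(pt, center, n_sectors):
    """Assign a point to one of n equal angular sectors around the scene
    center — e.g. the three 120-degree sectors of Fig. 3 when n_sectors=3.
    Sectors are numbered counterclockwise from the positive x-axis.
    """
    ang = math.atan2(pt[1] - center[1], pt[0] - center[0]) % (2 * math.pi)
    return int(ang // (2 * math.pi / n_sectors))
```

For three sectors around the origin, a point on the positive x-axis lands in sector 0, one on the negative x-axis in sector 1, and one below the center in sector 2.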
The second division mode:
The scene can be divided according to whether the capture apparatus is shooting horizontally (landscape) or vertically (portrait).
For example, if the capture apparatus is shooting horizontally, the region division of the image scene comprises: dividing the image scene into equal parts in the horizontal direction.
Referring to Fig. 4, the field of view 1 of the image capture apparatus is divided into three equal parts in the horizontal direction, yielding sub-region 21, sub-region 22 and sub-region 23.
In the division example shown in Fig. 5, the capture apparatus is shooting vertically, and the region division of the image scene comprises: dividing the image scene into equal parts in the vertical direction. The field of view 2 of the image capture apparatus is a vertically shot field of view; field of view 2 is divided into three equal parts in the vertical direction, yielding sub-region 31, sub-region 32 and sub-region 33.
The above horizontal or vertical division does not limit the number of equal parts, but to account for the global nature of the features, at least three equal parts are needed.
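The band division of the second mode maps a point's coordinate to one of the equal parts. A minimal sketch, with illustrative names:

```python
def band_index(pt, scene_size, n_bands=3, horizontal=True):
    """Assign a point to one of n equal bands of the scene: horizontal
    bands (split along x) for landscape shots as in Fig. 4, vertical
    bands (split along y) for portrait shots as in Fig. 5.
    """
    w, h = scene_size
    pos, extent = (pt[0], w) if horizontal else (pt[1], h)
    # clamp so a point on the far edge still falls in the last band
    return min(n_bands - 1, int(pos * n_bands // extent))
```

A 300x200 scene trisected horizontally puts x=10 in band 0, x=150 in band 1, and x=299 in band 2; the same call with `horizontal=False` bins by y instead.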
The third division mode:
A default division mode can be set when the capture apparatus is initialized.
The default division mode of the capture apparatus is to divide the image scene evenly. Referring to Fig. 6, the field of view 1 of the image capture apparatus (that is, the image scene) can be divided directly into four identical sub-regions, namely sub-region 41, sub-region 42, sub-region 43 and sub-region 44. The number of sub-regions obtained by evenly dividing the image scene can be arbitrary.
In addition, the default behavior set at initialization can also resemble the first division mode: the default center point of the image scene is initialized to the center of the field of view, and the image scene is divided evenly at 90° intervals. After initialization, the center point and the division angle can be adjusted.
All of the above divisions of the image scene are performed on the basis of the spatial positional relationships of the image scene. The image scene can be regarded as a two-dimensional plane whose extent is bounded by the absolute field of view of the image capture apparatus; dividing that bounded region of the plane according to any of the division modes above then yields a division result of several sub-regions.
Continuing with Fig. 1, the image matching method of this embodiment further comprises:
Step S101: extract and match the feature points in the first image and the second image to obtain the matched feature points between the first image and the second image.
The first image and the second image are two adjacent frames of the image sequence captured from the image scene by an image capture apparatus such as a video camera. One frame is the reference image and the other is the current image; in this embodiment the first image defaults to the reference image and the second image to the current image.
The feature points of an image relate to its color features, texture features, shape features and spatial-relationship features.
When the feature points are based on color features:
Color features can be extracted with a color histogram, which concisely describes the global distribution of colors in an image, i.e. the proportion of each color in the whole image. It is well suited to images that are hard to segment automatically and where the spatial position of objects need not be considered. Color feature extraction involves a color space, such as the RGB or HSV color space. Besides the color histogram, color features can also be extracted with descriptors such as color sets, color moments and color coherence vectors, all of which build on the color histogram.
Based on the color histogram, color-feature matching rules include the histogram intersection method, distance methods, the center-distance method, the reference color table method and the cumulative color histogram method.
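Of the matching rules listed above, histogram intersection is the simplest to state: the similarity of two normalized histograms is the sum of their bin-wise minima. A one-function sketch:

```python
def histogram_intersection(h1, h2):
    """Histogram-intersection similarity between two normalized color
    histograms: sum of bin-wise minima, 1.0 for identical histograms,
    0 for histograms with disjoint support.
    """
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Identical histograms score 1.0, disjoint ones score 0, and partial overlap falls in between.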
When the feature points are based on texture features:
Texture features describe the surface properties of the scenery corresponding to an image or image region. Since texture is only a property of an object's surface and cannot fully reflect the object's essential attributes, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not pixel-based: they are computed statistically over a region containing multiple pixels, so they are regional features. In pattern matching, such regional features have a notable advantage in that matching does not fail because of local deviations. Texture features are statistical features; they are often rotation-invariant and fairly resistant to noise.
Texture description methods include: the gray-level co-occurrence matrix method of texture analysis (proposed by Gotlieb and Kreyszig et al.), geometric methods (texture analysis methods built on the theory of texture primitives), model-based methods (including random field models such as the Markov random field model and the Gibbs random field model) and signal processing methods.
Texture extraction and matching can be based on the gray-level co-occurrence matrix, Tamura texture features, the autoregressive texture model, wavelet transforms and the like. Feature extraction and matching with the gray-level co-occurrence matrix depend on four parameters: energy, inertia, entropy and correlation. Tamura texture features, based on psychological studies of human visual perception of texture, comprise six attributes: coarseness, contrast, directionality, line-likeness, regularity and roughness. The autoregressive texture model (simultaneous auto-regressive, SAR) is an application instance of the Markov random field (MRF) model.
When the feature points are based on shape features:
Shape features include contour features (concerning the outer boundary of an object) and region features (concerning the entire shape area). The shape features of an image can be described in the following ways:
The boundary feature method obtains the shape parameters of the image from a description of boundary features; it includes the Hough-transform parallel-line detection method and the edge direction histogram method.
The Fourier shape descriptor method uses the Fourier transform of the object boundary as the shape description, exploiting the closedness and periodicity of the region boundary to turn a two-dimensional problem into a one-dimensional one.
The geometric parameter method describes and matches shapes with simpler region features, for example the shape factor method using quantitative shape measures such as moments, area and perimeter.
The shape invariant moments method uses the moments of the region occupied by the target as shape description parameters.
In addition, shape representation and matching methods include the finite element method (FEM), the turning function and the wavelet descriptor.
When the feature points are based on spatial-relationship features:
Spatial-relationship features of an image can be extracted in two ways. One is to segment the image automatically, delineate the objects or color regions it contains, extract image features from these regions, and build an index. The other is simply to divide the image evenly into regular sub-blocks, extract features from each sub-block, and build an index.
Based on the feature relationships above, feature points can also be extracted with the SIFT (Scale-Invariant Feature Transform) algorithm. After the feature points of the first image and the second image have been generated, then for each feature point in the second image, the first feature point nearest to it in Euclidean distance and the second feature point next-nearest to it are found in the first image; if the ratio of the nearest distance to the second-nearest distance is less than a given threshold, the feature point in the second image is accepted as a matched feature point.
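The nearest/second-nearest acceptance rule described above (often called the ratio test) can be sketched on plain descriptor tuples. This is an illustrative sketch, not the patent's implementation; real SIFT descriptors are 128-dimensional, and the 0.8 default threshold is an assumption.

```python
def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a feature point of image B only if its nearest neighbor in
    image A is clearly closer than the second-nearest, per the rule
    described above. Works on squared Euclidean distances, so the
    threshold is squared too. Returns (index_in_a, index_in_b) pairs.
    """
    def d2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    matches = []
    for j, b in enumerate(desc_b):
        dists = sorted((d2(a, b), i) for i, a in enumerate(desc_a))
        (nearest, i1), (second, _) = dists[0], dists[1]
        if nearest < (ratio ** 2) * second:   # ratio test on squared distances
            matches.append((i1, j))
    return matches
```

With two unambiguous descriptors in image B, both survive the test and pair with their obvious counterparts in image A.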
Continuing with Fig. 1, the image matching method of this embodiment further comprises:
Step S102: group the matched feature points according to the sub-regions.
The grouping of the matched feature points corresponds to the division into sub-regions. From the division of the image scene in step S100, n sub-regions are obtained. Grouping the matched feature points according to the sub-regions in this step means assigning each matched feature point on the second image to one of the n sub-regions according to its coordinate position.
Referring to Fig. 7, Fig. 7 shows the distribution of matched feature points on the second image 3 obtained from the image scene 1 of Fig. 2.
Taking the division mode of Fig. 6 as an example and referring to Fig. 8, n is 4. The second image 3 is divided into sub-image region n1, sub-image region n2, sub-image region n3 and sub-image region n4. The matched feature points in sub-image regions n1 and n3 essentially cover those related to the person, the matched feature points of sub-image region n2 essentially cover those related to the static background, and the matched feature points of sub-image region n4 essentially cover those related to the animal. There are 6 matched feature points in sub-image region n1, 4 in sub-image region n2, 8 in sub-image region n3 and 13 in sub-image region n4.
When grouping the matched feature points as above, each matched feature point can be given a group label according to the sub-image region it falls in, for record keeping. For example, according to the division in Fig. 8, matched feature point p0 can be labeled as belonging to group n1 (or another identifier corresponding to n1), matched feature point p1 as belonging to group n2, matched feature point p2 as belonging to group n3, and matched feature point p3 as belonging to group n4.
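The labeling step can be sketched for the four-region division of Fig. 8. The row-major labeling (n1 top-left, n2 top-right, n3 bottom-left, n4 bottom-right) is an illustrative assumption; the patent's figures may label the quadrants differently.

```python
def group_by_quadrant(points, scene_size):
    """Group matched feature points by the quadrant of an evenly divided
    scene, producing the group labels n1..n4 described for Fig. 8.
    Labeling is row-major and is an assumption, not taken from the figure.
    """
    w, h = scene_size
    groups = {"n1": [], "n2": [], "n3": [], "n4": []}
    for x, y in points:
        col = 0 if x < w / 2 else 1
        row = 0 if y < h / 2 else 1
        groups["n%d" % (2 * row + col + 1)].append((x, y))
    return groups
```

Each point gets exactly one label, so the groups partition the matched feature points just as the sub-regions partition the scene.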
Continuing with Fig. 1, the image matching method of this embodiment further comprises:
Step S103: according to the matching model between the first image and the second image, select a subset of the matched feature points, the subset involving as many groups as possible.
The matching model between the first image and the second image can be customized. It can be an affine transformation model, a perspective transformation model and so on. Fitting an affine transformation model requires choosing 3 pairs of matched feature points, i.e. three matched feature points per frame image; fitting a perspective transformation model requires choosing 4 pairs, i.e. four matched feature points per frame image. The matching model serves to estimate the motion parameters between images. This embodiment uses the RANSAC (Random Sample Consensus) algorithm to compute the matching model between the two images. This involves randomly selecting matched feature points in the image region in order to estimate motion parameters covering the global area (the matching model is suited to estimating motion parameters, so fitting the matching model with the randomly selected matched feature points amounts to estimating the motion parameters with those points).
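As noted above, the affine model has six parameters and is determined by three non-collinear point pairs. A plain-Python solver sketch (Gaussian elimination; illustrative only, the function name and parameter ordering are assumptions):

```python
def affine_from_pairs(pairs):
    """Solve the 6 affine parameters (a, b, tx, c, d, ty) from exactly
    three matched point pairs:
        x' = a*x + b*y + tx ;  y' = c*x + d*y + ty.
    Gauss-Jordan elimination with partial pivoting on the 6x6 system.
    """
    A, rhs = [], []
    for (x, y), (xp, yp) in pairs:
        A.append([x, y, 1, 0, 0, 0]); rhs.append(xp)
        A.append([0, 0, 0, x, y, 1]); rhs.append(yp)
    n = 6
    M = [row[:] + [r] for row, r in zip(A, rhs)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]
```

Three pairs related by a pure translation of (2, 1) recover the identity linear part and that translation.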
The jitter of the image capture apparatus itself (a camera, video camera, etc.), the motion of objects in the scene (the person and animal above) and the rolling shutter effect all cause different regions of one image scene to have different motion parameters. In image scene 1, for example, the person and the animal each have their own motion parameters, while the jitter of the capture apparatus itself and the rolling shutter effect may impart motion parameters to every object in the scene, stationary or moving. During image matching, then, the motion parameters of the different regions of the image are not equal: the motion parameters of the person region differ from those of the animal region, and from those of the static background region.
Within the image region, if the number of matched feature points in some local area is greater than or equal to the number of points to be randomly selected, it is possible for all the randomly selected matched feature points to come from that one local area, and the matching model produced by the RANSAC algorithm may then be computed from matched feature points of a single local area. Moreover, because the selection is random, successive fittings of the matching model may each be based on points randomly drawn from different local areas, making it impossible to estimate the global motion between the two frames accurately and causing the image matching, or the video stabilization, to fail.
Considering the characteristics of an image sequence with motion, it is known that: 1) the image sequence contains not only the motion of the image capture apparatus (e.g. the camera) itself, but also the motion of moving objects in the scene; 2) the purpose of the image matching is image stabilization, and the global motion caused by image jitter is mainly the motion of the image capture apparatus rather than the motion of objects in the scene, so what image stabilization needs to estimate is the matching model describing the global motion, not a matching model describing local motion.
Building on the scene division of step S100 and the grouping of step S102, and to address the problem that randomly selected matched feature points may fall within a local area and fail to estimate the global motion parameters, the way this step randomly selects matched feature points guarantees the global coverage of the selection:
First, the number of matched feature points to select is determined by the matching model. For example, the six degrees of freedom of an affine transformation model require three pairs of matched feature points, so three matched feature points must be selected in the image region.
To guarantee the global coverage of the selected matched feature points and prevent the final inlier set from converging to local matched feature points, the matched feature points chosen to fit the matching model should be as dispersed over the image region as possible. The idea of this embodiment is, based on the scene division of step S100 and the grouping of step S102, to select matched feature points in the random selection of this step according to their spatial positions, ensuring that the selected points come as far as possible from different spatial partitions of the image region.
If the matching model is an affine transform model, three matching feature points are selected at random. Combined with the partitioning and grouping of Fig. 8, the randomly selected matching feature points may come from up to three different sub-image regions. One possible random selection is therefore: matching feature point p0 from sub-image region n1, matching feature point p1 from sub-image region n2, and matching feature point p3 from sub-image region n4. This way of randomly selecting matching feature points realizes a selection that is both random and global in scope.
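One way this group-aware random selection could be sketched (function and data names here are illustrative, not from the patent): pick three distinct sub-region groups, then one matching feature point from each.

```python
import random

def pick_subset(groups, k=3, seed=None):
    """Randomly pick k matching feature points, each from a different
    sub-region group, so the subset spans distinct parts of the scene."""
    rng = random.Random(seed)
    # keep only groups that actually contain matching feature points
    nonempty = [g for g in groups.values() if g]
    chosen_groups = rng.sample(nonempty, k)   # k distinct sub-regions
    return [rng.choice(g) for g in chosen_groups]

# four sub-regions n1..n4, points given as (x, y) coordinates
groups = {
    "n1": [(10, 12), (15, 18)],
    "n2": [(80, 14)],
    "n3": [(12, 76), (20, 90)],
    "n4": [(85, 88)],
}
subset = pick_subset(groups, k=3, seed=0)
print(len(subset))  # → 3 points, each from a different sub-region
```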
Continuing to refer to Fig. 1, the image matching method of this embodiment further includes:
Step S104: fitting the matching model using the matching feature point subset, to obtain an image matching result.
The above fitting process is based on the RANSAC algorithm:
From the set S of matching feature points, a subset S1 containing several matching feature points is randomly selected to initialize the matching model. For example, if the matching model is an affine transform model, S1 contains 3 elements.
The above sampling and model-fitting process is repeated to find, within the set S, the subset S' of points whose error with respect to the matching model is below a threshold t.
After a certain number of sampling rounds, the algorithm fails if no consensus set has been found; otherwise, the largest consensus set obtained over all sampling rounds is taken as the inliers, and the corresponding matching model is the final matching model.
In the image matching algorithm of this embodiment, the iteration stopping condition of RANSAC is usually determined jointly by the size of the largest consensus set found so far and by the number of iterations; finding consensus sets with more elements earlier in the iteration makes the RANSAC algorithm converge faster.
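The sampling/consensus loop described above can be sketched as a generic RANSAC skeleton (the `fit`/`error` callbacks, threshold `t`, and iteration count below are illustrative assumptions):

```python
import random

def ransac(matches, fit, error, n_min, t, n_iter=100, seed=0):
    """Basic RANSAC: repeatedly fit a model to a random minimal subset
    and keep the model with the largest consensus set (the inliers)."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        sample = rng.sample(matches, n_min)
        model = fit(sample)
        inliers = [m for m in matches if error(model, m) < t]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    if best_inliers:
        best_model = fit(best_inliers)  # refit on the maximal consensus set
    return best_model, best_inliers

# toy example: estimate a pure translation between matched point pairs
fit = lambda ms: (sum(b[0] - a[0] for a, b in ms) / len(ms),
                  sum(b[1] - a[1] for a, b in ms) / len(ms))
error = lambda d, m: abs(m[1][0] - m[0][0] - d[0]) + abs(m[1][1] - m[0][1] - d[1])
matches = [((x, y), (x + 5, y + 2)) for x in range(3) for y in range(3)]
matches.append(((0, 0), (40, 40)))           # one outlier
model, inliers = ransac(matches, fit, error, n_min=3, t=1.0)
print(model, len(inliers))                   # → (5.0, 2.0) 9
```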
Based on the above fitting algorithm, the fitting results of random selection with the grouping scheme of this embodiment can be compared with those of fully random selection of matching feature point subsets, for the matching feature point distribution shown in Figs. 7 and 8, with an affine transform model as the matching model:
In a scene of an image sequence with multiple motion parameters, the image has two or more different regions with comparable numbers of matching feature points; with reference to Fig. 8, the number of matching feature points in each of regions n1 to n4 is greater than 3.
If the fully random point-selection scheme of the RANSAC algorithm were used to fit the matching model, 3 matching feature points (i.e., one matching feature point subset) would be randomly selected each time to establish a matching model. If all 3 selected points lie in sub-image region n1, then, following the RANSAC algorithm of this embodiment, the matching feature points in region n1 would be found as the consensus set among all matching feature points of the whole image scene, and in the subsequent RANSAC iterations no consensus set with more elements than that of this model could be found before the algorithm converges; the final matching model would then be computed only from the matching feature points of region n1.
Similarly, if at the start of the iteration 3 points from any one of regions n2 to n4 were selected first, the final matching model would be computed only from the matching feature points of that region.
If this happens for many frames of the image sequence, then, owing to randomness, each frame may end up with a matching model found from a different region. For example, frame n may yield a final matching model based on the matching feature points of region n1, while frame n+1 yields one based on the matching feature points of region n2. Such randomness cannot guarantee consistent estimation of the parameters representing the global motion, and eventually causes image matching and video stabilization to fail.
By contrast, randomly selecting matching feature points with the grouping scheme of this embodiment guarantees that the points randomly selected for fitting the matching model fall in at least 3/4 of the regions of the scene, reducing the probability that the selected matching feature points fall within a local region:
First, the system records the number of matching feature points in each of the four sub-image regions n1 to n4 of the image scene; then, following the RANSAC algorithm, three of the four regions are randomly selected each time for choosing matching feature points.
If the number of matching feature points in each of the three selected regions is non-zero, one matching feature point is randomly selected from each of the three regions to form a matching feature point subset. (If, among the three randomly selected regions, two regions have non-zero matching feature point counts and the third has none, one of the two non-empty regions is chosen at random, two matching feature points are randomly selected from it, and one matching feature point is randomly selected from the other non-empty region; if only one of the three selected regions has a non-zero count and the other two have none, all three matching feature points are randomly selected from that single region.)
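The fallback rule above might be realized as follows (a sketch; the round-robin assignment over non-empty regions is one simple choice, whereas the patent picks the doubled-up region at random):

```python
import random

def pick_with_fallback(regions, k=3, seed=None):
    """Pick k matching feature points from k randomly chosen sub-regions,
    falling back to the non-empty chosen regions when some are empty."""
    rng = random.Random(seed)
    chosen = rng.sample(list(regions), k)        # k of the 4 sub-regions
    nonempty = [r for r in chosen if regions[r]]
    if not nonempty:
        return []                                # nothing to sample from
    picks = []
    for i in range(k):
        r = nonempty[i % len(nonempty)]          # spread picks over the
        picks.append(rng.choice(regions[r]))     # non-empty chosen regions
    return picks

regions = {"n1": [(1, 1), (2, 2)], "n2": [], "n3": [(7, 8)], "n4": [(9, 9)]}
print(len(pick_with_fallback(regions, seed=1)))  # → 3
```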
Random selection of matching feature points with this grouping scheme thus guarantees that the points randomly chosen in each RANSAC iteration are distributed over most regions of the image scene, i.e., it ensures that the matching feature points used to compute matching models such as the affine transform model reflect the global motion of the image. The spatially distributed matching feature points give a better estimate of the global matching model parameters, avoid falling into local matching results as far as possible, reduce the probability that the algorithm converges to a local optimum, and guarantee the robustness of the algorithm.
Embodiment two
An image matching method, as shown in Fig. 9, includes the following steps:
Step S200: extracting and matching the feature points in a first image and a second image, to obtain the matching feature points between the first image and the second image.
For the specific implementation of this step, refer to step S101 of embodiment one.
Step S201: performing region division on the image scene over which the matching feature points are distributed, to obtain sub-regions.
Unlike the scene partitioning of embodiment one, the object divided in this embodiment is the image scene over which the matching feature points are distributed.
Still referring to Fig. 2 in combination with Fig. 7, the extent of the image scene over which the matching feature points are distributed is set based on the spatial positions of the matching feature points.
The image scene over which the matching feature points are distributed may be defined by the minimal image region that covers the matching feature points, set according to their spatial positions. As shown in Fig. 10, s1 is this minimal image region covering the matching feature points, i.e., the image scene over which the matching feature points are distributed; its boundary is delimited by lines connecting the matching feature points, and in Fig. 10 these connections are straight line segments.
The image scene over which the matching feature points are distributed may also be defined by a covering image region set based on the spatial positions of the matching feature points:
As shown in Fig. 11, s1 is again an image region covering the matching feature points, and the boundary of the image scene over which the matching feature points are distributed is also delimited by lines connecting the matching feature points; here, however, the connections between the boundary matching feature points may be custom curves, which may be obtained from a preset function.
As shown in Fig. 12, s2 is likewise an image region covering the matching feature points; here the image scene over which the matching feature points are distributed is a custom regular graphic region based on the boundary matching feature points, such as a square, rectangle or polygon. In Fig. 12, the image scene over which the matching feature points are distributed is a rectangle covering the boundary matching feature points.
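For the rectangular case, the covering region could be computed as a simple axis-aligned bounding rectangle (an illustrative choice; the patent allows other custom regular shapes):

```python
def bounding_rect(points):
    """Axis-aligned rectangle (xmin, ymin, xmax, ymax) covering all
    matching feature points - one simple choice for the scene region."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

pts = [(3, 4), (10, 2), (7, 9)]
print(bounding_rect(pts))  # → (3, 2, 10, 9)
```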
In this step, the image scene over which the matching feature points are distributed can also be divided in several ways:
The third division mode of embodiment one may be applied to divide the image scene over which the matching feature points are distributed; for the division mode, refer to step S100 of embodiment one.
For example, using the third division mode, the division result of the image scene s1 shown in Fig. 11 can be seen in Fig. 13. In this case the sub-regions obtained in this step are sub-region n1', sub-region n2', sub-region n3' and sub-region n4'.
Furthermore, the first division mode of embodiment one may be applied directly to the image scene over which the matching feature points are distributed, where the choice of the central point and the angles may be customized by the capture apparatus.
For example, using the first division mode with central point A1 and an angle of 120°, the division result of the image scene s1 shown in Fig. 11 can be seen in Fig. 14. In this case the sub-regions obtained in this step are sub-region m1, sub-region m2 and sub-region m3.
Likewise, using the first division mode with central point A2 and an angle of 90°, the division result of the image scene s2 shown in Fig. 12 can be seen in Fig. 15. In this case the sub-regions obtained in this step are sub-region m1', sub-region m2', sub-region m3' and sub-region m4'.
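Assigning a matching feature point to an equal-angle sector around a chosen central point might look like this (a sketch with an assumed centre; the patent leaves the centre and angle configurable):

```python
import math

def sector_of(point, center, n_sectors):
    """Index of the equal-angle sector (around `center`) that contains
    `point` - a sketch of angular division with the scene centre as origin."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)        # normalize to [0, 2*pi)
    return int(angle // (2 * math.pi / n_sectors))

center = (50, 50)  # assumed central point, e.g. A2 with 90-degree sectors
print(sector_of((90, 55), center, 4))  # → 0 (first 90-degree sector)
```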
With continued reference to Fig. 9, the image matching method of this embodiment further includes:
Step S202: grouping the matching feature points according to the sub-regions.
For the specific implementation of this step, refer to step S102 of embodiment one.
Step S203: selecting, according to the matching model between the first image and the second image, a matching feature point subset from the matching feature points, the matching feature point subset involving as many groups as possible.
For the specific implementation of this step, refer to step S103 of embodiment one.
Step S204: fitting the matching model using the matching feature point subset, to obtain an image matching result.
For the specific implementation of this step, refer to step S104 of embodiment one.
Embodiment three
A video processing method, as shown in Fig. 16, includes:
Step S300: fitting the matching model between adjacent frame images.
For the process of fitting the matching model between adjacent frame images in this step, refer to the matching method of either embodiment one or embodiment two.
Step S301: performing motion compensation on the frame images based on the matching model.
This embodiment realizes video stabilization through motion compensation. Based on the matching model fitted in step S300, the video sequence (image sequence) may be filtered and fitted to obtain a stable video stream: a reference frame is aligned to the current frame, the difference between the reference frame and the current frame is computed, and the current frame is filled in from the reference frame, thereby realizing the motion compensation.
Furthermore, multiple reference frames may be aligned to the current frame simultaneously, the minimum of the differences between the reference frames and the current frame is taken, and the current frame is filled in using the reference frame with the minimum difference, thereby realizing the motion compensation.
It is also possible to evaluate, based on an evaluation criterion, the smooth motion parameters corresponding to the global motion parameters of the reference frames, and to use the difference between the smooth motion parameters and the global motion parameters as the jitter parameters, thereby realizing the motion compensation.
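One plausible reading of this jitter computation (the moving-average smoother below is an assumption; the patent leaves the evaluation criterion open): smooth the per-frame global motion parameter and take the difference from the raw parameter as the jitter.

```python
def jitter(params, window=3):
    """Per-frame jitter as the difference between the raw global motion
    parameter and its moving-average smoothed version (one common choice
    of smoothing criterion, assumed here for illustration)."""
    out = []
    for i in range(len(params)):
        lo = max(0, i - window // 2)
        hi = min(len(params), i + window // 2 + 1)
        smooth = sum(params[lo:hi]) / (hi - lo)   # local moving average
        out.append(params[i] - smooth)            # raw minus smooth = jitter
    return out

raw = [0.0, 2.0, 0.0, 2.0, 0.0]   # oscillating horizontal shift per frame
print(jitter(raw))                 # alternating-sign jitter values
```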
Although the present invention has been disclosed above through preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the spirit and scope of the present invention, make possible variations and modifications to the technical solution of the present invention using the methods and technical content disclosed above. Therefore, any simple amendments, equivalent changes and modifications made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, fall within the protection scope of the technical solution of the present invention.
Claims (13)
1. An image matching method, characterized by comprising:
performing region division on an image scene, to obtain three or four sub-regions;
extracting and matching feature points in a first image and a second image, to obtain matching feature points between the first image and the second image;
grouping the matching feature points according to the sub-regions;
selecting, according to a matching model between the first image and the second image, a matching feature point subset from the matching feature points, the matching feature point subset involving as many groups as possible, the matching model being an affine transform model or a perspective transform model;
fitting the matching model using the matching feature point subset based on a RANSAC algorithm, to obtain an image matching result.
2. The image matching method according to claim 1, characterized in that the region division of the image scene is performed based on the spatial positional relations of the image scene.
3. The image matching method according to claim 2, characterized in that performing region division on the image scene comprises: dividing the image scene evenly.
4. The image matching method according to claim 2, characterized in that performing region division on the image scene comprises: dividing the image scene into at least three equal parts in the lateral or vertical direction.
5. The image matching method according to claim 2, characterized in that performing region division on the image scene comprises: dividing the image scene at equal angles, with the central point of the image scene as the origin.
6. The image matching method according to claim 1, characterized in that the grouping of the matching feature points corresponds to the division of the sub-regions.
7. An image matching method, characterized by comprising:
extracting and matching feature points in a first image and a second image, to obtain matching feature points between the first image and the second image;
performing region division on the image scene over which the matching feature points are distributed, to obtain three or four sub-regions;
grouping the matching feature points according to the sub-regions;
selecting, according to a matching model between the first image and the second image, a matching feature point subset from the matching feature points, the matching feature point subset involving as many groups as possible, the matching model being an affine transform model or a perspective transform model;
fitting the matching model using the matching feature point subset based on a RANSAC algorithm, to obtain an image matching result.
8. The image matching method according to claim 7, characterized in that the region division of the image scene over which the matching feature points are distributed is performed based on the spatial positional relations of that image scene.
9. The image matching method according to claim 8, characterized in that performing region division on the image scene over which the matching feature points are distributed comprises: dividing the image scene evenly.
10. The image matching method according to claim 8, characterized in that performing region division on the image scene over which the matching feature points are distributed comprises: dividing the image scene into at least three equal parts in the lateral or vertical direction.
11. The image matching method according to claim 8, characterized in that performing region division on the image scene over which the matching feature points are distributed comprises: dividing that image scene at equal angles, with its central point as the origin.
12. The image matching method according to claim 7, characterized in that the grouping of the matching feature points corresponds to the division of the sub-regions.
13. A video processing method, characterized by comprising:
fitting a matching model between adjacent frame images using the matching method according to any one of claims 1 to 12;
performing motion compensation on the frame images based on the matching model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410234637.7A CN105447841B (en) | 2014-05-28 | 2014-05-28 | Image matching method and method for processing video frequency |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105447841A CN105447841A (en) | 2016-03-30 |
CN105447841B true CN105447841B (en) | 2019-06-07 |
Family
ID=55557975
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410234637.7A Active CN105447841B (en) | 2014-05-28 | 2014-05-28 | Image matching method and method for processing video frequency |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105447841B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107316313B (en) * | 2016-04-15 | 2020-12-11 | 株式会社理光 | Scene segmentation method and device |
CN109840457B (en) * | 2017-11-29 | 2021-05-18 | 深圳市掌网科技股份有限公司 | Augmented reality registration method and augmented reality registration device |
CN113020428B (en) * | 2021-03-24 | 2022-06-28 | 北京理工大学 | Progressive die machining monitoring method, device, equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101009021A (en) * | 2007-01-25 | 2007-08-01 | 复旦大学 | Video stabilizing method based on matching and tracking of characteristic |
CN101916445A (en) * | 2010-08-25 | 2010-12-15 | 天津大学 | Affine parameter estimation-based image registration method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101247220B1 (en) * | 2011-03-10 | 2013-03-25 | 서울대학교산학협력단 | Image processing apparatus and method using repetitive patterns |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101009021A (en) * | 2007-01-25 | 2007-08-01 | 复旦大学 | Video stabilizing method based on matching and tracking of characteristic |
CN101916445A (en) * | 2010-08-25 | 2010-12-15 | 天津大学 | Affine parameter estimation-based image registration method |
Non-Patent Citations (2)
Title |
---|
Aerial video image registration based on invariant features and mapping suppression; Yi Meng et al.; Acta Aeronautica et Astronautica Sinica; 2012-10-25; Vol. 33, No. 10, pp. 1872-1880 |
Research on electronic image stabilization technology for video sequences; Yuan Zhongwen; China Masters' Theses Full-text Database; 2012-08-15; abstract and body pp. 17-28 |
Also Published As
Publication number | Publication date |
---|---|
CN105447841A (en) | 2016-03-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||