CN105447841A - Image matching method and video processing method - Google Patents


Info

Publication number
CN105447841A
Authority
CN
China
Prior art keywords
image
matching
characteristic point
matching characteristic
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410234637.7A
Other languages
Chinese (zh)
Other versions
CN105447841B (en)
Inventor
孟春芝
王浩
蔡进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Tianjin Co Ltd
Original Assignee
Spreadtrum Communications Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Tianjin Co Ltd filed Critical Spreadtrum Communications Tianjin Co Ltd
Priority to CN201410234637.7A
Publication of CN105447841A
Application granted
Publication of CN105447841B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an image matching method and a video processing method. The image matching method comprises: partitioning an image scene to obtain sub-regions; extracting and matching feature points in a first image and a second image to obtain matched feature points between the two images; grouping the matched feature points according to the sub-regions; selecting from the matched feature points, according to a matching model between the first image and the second image, a subset that covers as many groups as possible; and fitting the matching model with this subset to obtain an image matching result. The method improves the robustness of matching during video stabilization.

Description

Image matching method and video processing method
Technical field
The present invention relates to the field of image processing, and in particular to an image matching method and a video processing method.
Background
In practical camera systems, video captured from mobile platforms such as vehicles, hand-held devices or aircraft contains not only the intentional motion of the imaging system but also the random motion of the platform. The unstable video produced by this random motion is fatiguing to watch and makes the extraction of useful information difficult. Converting unstable video into stable video is therefore of real significance.
Video de-shaking, also called video stabilization, is a very important video processing technique. It aims to eliminate jitter from video, keeping the video image clear and the picture steady, and also allows the video to be compressed more efficiently, improving video quality and processing speed.
Video jitter refers to the shake and blur of a video sequence caused by unintended camera motion during shooting. To eliminate this jitter, video processing must extract the true global motion parameters of the camera and then apply a suitable transformation to compensate for the camera motion, so that the video picture appears smooth and stable.
Current methods for removing video jitter include pixel-based methods, block matching methods, phase correlation methods and feature matching methods.
Pixel-based methods estimate motion from the relationships between pixel gray values, but they are sensitive to noise and require images with relatively rich information.
Block matching methods estimate motion by treating the pixels in a block as a whole, and are therefore more robust than pixel-based methods; however, the accuracy and computational complexity of the algorithm depend heavily on the number, size, search range and search strategy of the blocks.
Phase correlation methods estimate the direction and speed of motion by computing the cross-power spectrum of consecutive frames. Their noise immunity is relatively strong, but the computation is expensive and they are susceptible to interference from local motion.
Feature matching methods are based on properties of human vision: global camera motion parameters are estimated by extracting and matching features between consecutive frames. Compared with the other algorithms, their treatment of motion information is closer to the human visual system. However, when other moving objects are present in the scene, different regions of the scene exhibit different motion parameters; feature point extraction may then be confined to a region governed by a single motion parameter, the result is limited by the feature extraction, and the robustness and accuracy of matching suffer.
Summary of the invention
The technical problem solved by the technical solution of the present invention is how to improve the robustness of matching during video stabilization.
To solve the above technical problem, the technical solution of the present invention provides an image matching method, comprising:
partitioning an image scene to obtain sub-regions;
extracting and matching feature points in a first image and a second image to obtain matched feature points between the first image and the second image;
grouping the matched feature points according to the sub-regions;
selecting, according to a matching model between the first image and the second image, a subset of matched feature points from the matched feature points, the subset covering as many groups as possible;
fitting the matching model with the subset of matched feature points to obtain an image matching result.
Optionally, the partitioning of the image scene is performed on the basis of the spatial position relationships of the image scene.
Optionally, partitioning the image scene comprises: dividing the image scene evenly.
Optionally, partitioning the image scene comprises: dividing the image scene into at least three equal parts in the horizontal or vertical direction.
Optionally, partitioning the image scene comprises: dividing the image scene into equal angles around the center point of the image scene.
Optionally, the matching model is an affine transformation model.
Optionally, the grouping of the matched feature points corresponds to the division into sub-regions.
Optionally, the matching model is fitted on the basis of the RANSAC algorithm.
To solve the above technical problem, the technical solution of the present invention further provides an image matching method, comprising:
extracting and matching feature points in a first image and a second image to obtain matched feature points between the first image and the second image;
partitioning the image scene over which the matched feature points are distributed, to obtain sub-regions;
grouping the matched feature points according to the sub-regions;
selecting, according to a matching model between the first image and the second image, a subset of matched feature points from the matched feature points, the subset covering as many groups as possible;
fitting the matching model with the subset of matched feature points to obtain an image matching result.
Optionally, the partitioning of the image scene over which the matched feature points are distributed is performed on the basis of the spatial position relationships of that image scene.
Optionally, partitioning the image scene over which the matched feature points are distributed comprises: dividing that image scene into equal angles around its center point.
To solve the above technical problem, the technical solution of the present invention further provides a video processing method, comprising:
fitting a matching model between adjacent frame images with the matching method described above;
performing motion compensation on frame images on the basis of the matching model.
The technical solution of the present invention has at least the following technical effects:
In a video image, the technical solution partitions a scene containing two or more regions with different motion parameters and groups the matched feature points according to the partitioned scene regions, ensuring that the matched feature points used in each computation of the matching model are distributed evenly over the whole scene, which guarantees the robustness of the final matching model.
The technical solution also groups matched feature points according to the spatial position relationships of the image scene, draws matched feature points group by group, and covers as many groups as possible during selection, so that the estimated matching model fits the global model parameters; the algorithmic cost is very low and little computation is consumed.
The technical solution further fits the matching model with the RANSAC algorithm. With the grouped selection of matched feature points, the points chosen in each RANSAC iteration for computing the matching model can be spread over most of the image scene, maximizing the scene coverage of the selected points and reducing the probability that the algorithm converges to a local optimum; the technical solution thus better estimates the global matching model parameters of spatially distributed matched feature points, avoids being trapped in local matching results, and further guarantees the robustness of the matching algorithm.
Brief description of the drawings
Fig. 1 is a flow diagram of an image matching method provided by the technical solution of the present invention;
Fig. 2 is a schematic diagram of the content within the landscape-orientation field of view of an image capture device;
Fig. 3 is a schematic diagram of the result of partitioning an image scene in one dividing mode;
Fig. 4 is a schematic diagram of the result of partitioning an image scene shot in landscape orientation;
Fig. 5 is a schematic diagram of the result of partitioning an image scene shot in portrait orientation;
Fig. 6 is a schematic diagram of the result of partitioning an image scene in another dividing mode;
Fig. 7 is a schematic diagram of the distribution of matched feature points over an image;
Fig. 8 is a schematic diagram of the distribution of matched feature points over the sub-regions obtained by partitioning the image scene;
Fig. 9 is a flow diagram of another image matching method provided by the technical solution of the present invention;
Fig. 10 is a schematic diagram of the image scene over which the matched feature points are distributed, under one definition;
Fig. 11 is a schematic diagram of the image scene over which the matched feature points are distributed, under another definition;
Fig. 12 is a schematic diagram of the image scene over which the matched feature points are distributed, under yet another definition;
Fig. 13 is a schematic diagram of the distribution of matched feature points over the sub-regions obtained by partitioning the image scene over which the matched feature points are distributed;
Fig. 14 is a schematic diagram of the distribution of matched feature points over the sub-regions obtained by partitioning, in one way, the image scene over which the matched feature points are distributed;
Fig. 15 is a schematic diagram of the distribution of matched feature points over the sub-regions obtained by partitioning, in another way, the image scene over which the matched feature points are distributed;
Fig. 16 is a flow diagram of a video processing method provided by the technical solution of the present invention.
Detailed description of the embodiments
To make the objects, features and effects of the present invention more apparent, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention, but the present invention can also be implemented in ways different from those described here; the present invention is therefore not limited by the specific embodiments disclosed below.
Embodiment one
An image matching method, as shown in Fig. 1, comprises the following steps:
Step S100: partition an image scene to obtain sub-regions.
The image scene refers to the picture content that an image capture device such as a video camera can take in within its field of view. In this embodiment, partitioning the image scene means dividing the field of view of the image capture device.
Referring to Fig. 2, the field of view 1 of the image capture device is a landscape-orientation field of view. Suppose the picture content within field of view 1 comprises a person, an animal and a static background, where the region corresponding to the person is hatched diagonally, the region corresponding to the animal is hatched with dotted lines, and the region corresponding to the static background is unhatched.
In this step, several dividing modes are possible:
First dividing mode:
The division can correspond to the scene mode selected in the capture device.
For example, if the scene mode selected in the capture device is a portrait scene, then, considering that the person is usually at the center of the field of view, partitioning the image scene comprises: dividing the image scene into equal angles around the center point of the image scene.
With reference to Fig. 3, point A can be taken as the center point of the image scene, and the field of view 1 of the image capture device (i.e. the image scene) is divided at angles of 120° to obtain sub-region 11, sub-region 12 and sub-region 13.
Of course, the choice of center point and angle can be customized by the capture device. For a landscape scene, for example, the center point can be chosen at a feature boundary between different objects; for a night scene, it can be chosen where pixels brighter than a certain value meet pixels that are not; and the angle can be 45°, 90° or 180°.
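Purely as an illustrative sketch (the patent gives no code), the assignment of an image position to an angular sub-region around a chosen center point, as in this first dividing mode, could be written as follows in Python; the function name and arguments are assumptions:

```python
import math

def angular_sector(point, center, sector_deg=120.0):
    """Index of the angular sector around `center` that `point` falls in,
    for a scene divided into equal angles of `sector_deg` degrees
    (120 degrees gives the three sub-regions of Fig. 3)."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return int(angle // sector_deg)
```

A point is then grouped by the returned sector index, e.g. sub-regions 11, 12 and 13 of Fig. 3 correspond to indices 0, 1 and 2.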
Second dividing mode:
The division can correspond to the landscape or portrait shooting orientation of the capture device.
For example, if the capture device shoots in landscape orientation, partitioning the image scene comprises: dividing the image scene into equal parts horizontally.
With reference to Fig. 4, the field of view 1 of the image capture device is trisected horizontally to obtain sub-region 21, sub-region 22 and sub-region 23.
In the division example shown in Fig. 5, the capture device shoots in portrait orientation, and partitioning the image scene comprises: dividing the image scene into equal parts vertically. The field of view 2 of the image capture device is a portrait-orientation field of view; field of view 2 is trisected vertically to obtain sub-region 31, sub-region 32 and sub-region 33.
The number of equal horizontal or vertical parts is not limited, but for global coverage the division should be into at least three parts.
Third dividing mode:
A default dividing mode can be set when the capture device is initialized.
The default dividing mode of the capture device divides the image scene evenly. With reference to Fig. 6, the field of view 1 of the image capture device (i.e. the image scene) can be divided directly into four identical sub-regions, namely sub-region 41, sub-region 42, sub-region 43 and sub-region 44. The number of sub-regions obtained by even division can be arbitrary.
Alternatively, the default set at initialization can resemble the first dividing mode: the default center point of the image scene is initialized to the center of the field of view, and the scene is divided evenly at angles of 90°. After initialization, the center point and dividing angle can be adjusted by customization.
All of the above divisions of the image scene are based on its spatial position relationships. The image scene can be regarded as a two-dimensional plane: the absolute field of view of the image capture device bounds a region in that plane, and the region is then divided into sub-regions according to any of the above dividing modes.
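As a minimal sketch of the even division (third dividing mode), again with assumed names, a point of the two-dimensional scene plane can be mapped to its grid sub-region as follows; the 2×2 default matches the four sub-regions of Fig. 6:

```python
def grid_cell(point, width, height, cols=2, rows=2):
    """Index of the grid cell that `point` (x, y) falls into when a
    width x height scene is divided evenly into cols x rows cells."""
    col = min(int(point[0] * cols / width), cols - 1)
    row = min(int(point[1] * rows / height), rows - 1)
    return row * cols + col
```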
Continuing with Fig. 1, the image matching method of this embodiment further comprises:
Step S101: extract and match feature points in a first image and a second image to obtain matched feature points between the first image and the second image.
The first image and the second image are two adjacent frames of the image sequence captured, on the basis of the image scene, by an image capture device such as a video camera. One frame is the reference image and the other is the current image; by default in this embodiment the first image is the reference image and the second image is the current image.
The feature points of an image relate to its color features, texture features, shape features and spatial relationship features.
When the feature points are based on color features:
Color features can be extracted from a color histogram. A color histogram simply describes the global distribution of colors in an image, i.e. the proportion of each color in the whole image, and suits images that are hard to segment automatically and in which object positions need not be considered. Color feature extraction involves a color space, such as the RGB color space or the HSV color space. Of course, color features can also be extracted from descriptors built on the color histogram, such as color sets, color moments and color coherence vectors.
Color feature matching methods based on color histograms include the histogram intersection method, the distance method, the center-distance method, the reference color table method and the cumulative color histogram method.
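For instance, the histogram intersection method mentioned above can be sketched in a few lines (assuming normalized histograms; an illustration, not the patent's code):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalized color histograms: the sum of the
    bin-wise minima, equal to 1.0 for identical histograms."""
    return float(np.minimum(h1, h2).sum())
```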
When described unique point is based on textural characteristics:
Because textural characteristics has stated the surface nature of scenery corresponding to image or image-region, and texture is a kind of characteristic of body surface, can not reflect the essential attribute of object completely, so only utilize textural characteristics cannot obtain high-level picture material.Different from color characteristic, textural characteristics is not the feature based on pixel, and it needs to carry out statistical computation in the region comprising multiple pixel, is a kind of provincial characteristics.In pattern match, this zonal feature has larger superiority, can not cannot the match is successful due to the deviation of local.Textural characteristics is a kind of statistical nature, and it often has rotational invariance, and has stronger resistivity for noise.
The describing method of textural characteristics comprises: the analysis of texture method of gray level co-occurrence matrixes people such as (propose) Gotlieb and Kreyszig, geometric method (being based upon a kind of analysis of texture method in texture primitive theoretical foundation), modelling (comprising random field models method, as Markov random field models method and Gibbs random field models method) and signal transacting method.
The extraction of textural characteristics and matching process are based on gray level co-occurrence matrixes, Tamura textural characteristics, autoregression texture model, wavelet transformation etc.Wherein, gray level co-occurrence matrixes feature extracting and matching depends on energy, inertia, entropy and correlativity four parameters.Tamura textural characteristics, based on the visually-perceptible psychological study of the mankind to texture, proposes 6 attribute, that is: roughness, contrast, direction degree, line picture degree, regularity and spend roughly.Autoregression texture model (simultaneousauto-regressive, SAR) is a kind of application example of Markov random field (MRF) model.
When the feature points are based on shape features:
Shape features comprise contour features (concerning the outer boundary of an object) and regional features (concerning the whole shape region). The shape features of an image can be described in the following ways:
Boundary feature method: shape parameters of the image are obtained by describing its boundary features; this includes the Hough transform parallel-line detection method and the edge direction histogram method.
Fourier shape descriptor method: the basic idea is to use the Fourier transform of the object boundary as the shape description, exploiting the closedness and periodicity of the region boundary to reduce a two-dimensional problem to a one-dimensional one.
Geometric parameter method: shape representation and matching use simpler regional feature descriptions, e.g. shape factor methods based on quantitative shape measures (such as moments, area and perimeter).
Shape invariant moments method: the moments of the region occupied by the target are used as the shape description parameters.
In addition, shape representation and matching methods include the finite element method (FEM), the turning function and the wavelet descriptor.
When the feature points are based on spatial relationship features:
Spatial relationship features of an image can be extracted in two ways: one method first segments the image automatically, delineates the objects or color regions it contains, then extracts image features from these regions and builds an index; the other simply divides the image evenly into regular sub-blocks, extracts features from each sub-block and builds an index.
On the basis of the above feature characteristics, extraction can also use the SIFT (Scale-Invariant Feature Transform) algorithm. After the feature points of the first image and the second image are generated, for each feature point in the second image the feature point nearest to it in Euclidean distance and the second-nearest feature point are found in the first image; if the ratio of the nearest distance to the second-nearest distance is below a set threshold, the feature point in the second image is accepted as a matched feature point.
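A sketch of this SIFT extraction and ratio-test matching with OpenCV might read as follows; the 0.75 threshold is an assumption (the patent only speaks of a set threshold):

```python
import cv2

def match_features(img1, img2, ratio=0.75):
    """Match SIFT feature points of the second image against the first;
    a match is kept when the nearest distance is below `ratio` times
    the second-nearest distance (Lowe's ratio test)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des2, des1, k=2)  # two nearest neighbours
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    # matched pairs as ((x1, y1) in first image, (x2, y2) in second image)
    return [(kp1[m.trainIdx].pt, kp2[m.queryIdx].pt) for m in good]
```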
Continuing with Fig. 1, the image matching method of this embodiment further comprises:
Step S102: group the matched feature points according to the sub-regions.
The grouping of the matched feature points corresponds to the division into sub-regions. From the division of the image scene in step S100, n sub-regions are obtained. Grouping the matched feature points according to the sub-regions in this step means assigning the matched feature points on the second image to the n sub-regions according to their coordinate positions.
With reference to Fig. 7, suppose Fig. 7 illustrates the distribution, over the second image 3, of the matched feature points obtained on the basis of the image scene 1 of Fig. 2.
Taking the dividing mode of Fig. 6 and referring to Fig. 8, n is now 4. The second image 3 is divided into sub-image region n1, sub-image region n2, sub-image region n3 and sub-image region n4. The matched feature points in sub-image regions n1 and n3 essentially cover those of the person, the matched feature points of sub-image region n2 essentially cover those of the static background, and the matched feature points of sub-image region n4 essentially cover those of the animal. There are 6 matched feature points in sub-image region n1, 4 in sub-image region n2, 8 in sub-image region n3 and 13 in sub-image region n4.
When grouping these matched feature points, a group label can be set on each matched feature point to record its assignment to a sub-image region. For example, following the division of Fig. 8, matched feature point p0 can be labeled as belonging to group n1 (or another identifier associated with n1), matched feature point p1 as belonging to n2, matched feature point p2 as belonging to n3, and matched feature point p3 as belonging to n4.
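Continuing the grid_cell sketch from step S100 (names again assumed), step S102 can be expressed as grouping each matched pair by the cell of its point on the second image:

```python
def group_matches(matches, width, height, cols=2, rows=2):
    """Group matched pairs ((x1, y1), (x2, y2)) by the grid cell of the
    point on the second image; returns {cell index: list of pairs}."""
    groups = {}
    for pair in matches:
        cell = grid_cell(pair[1], width, height, cols, rows)
        groups.setdefault(cell, []).append(pair)
    return groups
```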
Continuing with Fig. 1, the image matching method of this embodiment further comprises:
Step S103: according to the matching model between the first image and the second image, select a subset of matched feature points from the matched feature points, the subset covering as many groups as possible.
The matching model between the first image and the second image can be user-defined. It can be an affine transformation model, a perspective transformation model, etc.: fitting an affine transformation model requires selecting 3 pairs of matched feature points, i.e. three matched feature points on each frame; fitting a perspective transformation model requires selecting 4 pairs, i.e. four matched feature points on each frame. The matching model is suited to estimating the motion parameters between the images. This embodiment uses the RANSAC (Random Sample Consensus) algorithm to compute the matching model between the two images, which involves randomly drawing matched feature points from the image region to assess motion parameters covering the global region (the matching model is suited to assessing motion parameters, so fitting the matching model with randomly selected matched feature points assesses the motion parameters with those randomly selected points).
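For reference, the six-parameter affine transformation model mentioned above maps a point (x, y) of the first image to (x', y') in the second image as

\[
\begin{pmatrix} x' \\ y' \end{pmatrix}
= \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
+ \begin{pmatrix} t_x \\ t_y \end{pmatrix}.
\]

Each matched pair contributes two equations, so three non-collinear pairs determine the six unknowns, which is why three matched feature points are selected per frame; likewise, the eight unknowns of a perspective (homography) model require four pairs.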
Consider that the shake of image capture devices such as cameras and video cameras, the motion of objects in the scene (the person and animal above) and the rolling shutter effect cause different regions of an image scene to have non-identical motion parameters. In image scene 1, for example, the person and the animal each possess their own motion parameters, and the shake of the capture device itself and the rolling shutter effect may also impose motion parameters on the objects in the scene (static or moving). During image matching, the motion parameters of different regions of the image are then unequal: the motion parameters of the person region differ from those of the animal region, and both differ from those of the static background region.
Within the image region, if the number of matched feature points in a local region is greater than or equal to the number of points drawn at random, the local points may all be drawn at the same time, and the matching model under the RANSAC algorithm may well be computed from points drawn from that local region alone. Because the selection is random, each fit of the matching model may moreover be based on points drawn from a different local region, so the global motion between the two frames cannot be estimated accurately, causing image matching to fail or video stabilization to break down.
Considering the characteristics of an image sequence containing motion, it is known that: 1) an image sequence contains not only the motion of the capture device itself, such as the camera, but also the motion of moving objects in the scene; 2) the aim of image matching here is image stabilization, and image shake stems mainly from the global motion caused by the capture device rather than from the motion of objects in the scene, so what stabilization needs to estimate is a matching model describing the global motion, not one describing local motion.
Building on the scene partition of step S100 and the grouping of step S102, and addressing the problem that randomly selected matched feature points may fall within a local region and thus fail to assess the global motion parameters, the random selection of this step ensures that the selection of matched feature points is global:
First, the number of matched feature points to draw is obtained from the matching model; e.g. for the six degrees of freedom required by an affine transformation model, three pairs of matched feature points are needed, so three matched feature points must be selected in the image region.
To guarantee the global coverage of the selected matched feature points and prevent the final inliers from converging to local matched feature points, the matched feature points used to fit the matching model should be selected as dispersed as possible within the image region.
The idea of this embodiment is, on the basis of the scene partition of step S100 and the grouping of step S102, to select matched feature points in this step's random selection according to spatial position, guaranteeing as far as possible that the selected matched feature points come from separate image regions.
Suppose the matching model is an affine transformation model, so that 3 matched feature points are drawn at random, and take the partition and grouping of Fig. 8. The randomly drawn matched feature points can then come from at most three different sub-image regions, so one possible draw is: matched feature point p0 from sub-image region n1; matched feature point p1 from sub-image region n2; matched feature point p3 from sub-image region n4. Drawn in this way, the random selection is essentially global.
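A sketch of this group-aware random draw (function names assumed; the fallbacks mirror the cases discussed in step S104 below):

```python
import random

def draw_spread_sample(groups, k=3):
    """Draw k distinct matched pairs covering as many groups as possible:
    one pair from each of up to k randomly chosen non-empty groups,
    topping up from the chosen groups when fewer than k are non-empty."""
    pools = [list(pts) for pts in groups.values() if pts]
    random.shuffle(pools)
    pools = pools[:k]
    sample = []
    for pts in pools:                      # one point per chosen group
        p = random.choice(pts)
        pts.remove(p)
        sample.append(p)
    leftovers = [p for pts in pools for p in pts]
    while len(sample) < k and leftovers:   # top up when groups are scarce
        p = random.choice(leftovers)
        leftovers.remove(p)
        sample.append(p)
    return sample
```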
Continuing with Fig. 1, the image matching method of this embodiment further comprises:
Step S104: fit the matching model with the subset of matched feature points to obtain the image matching result.
The fitting procedure is based on the RANSAC algorithm:
From the set S of matched feature points, randomly draw a subset S1 containing a certain number of matched feature points and initialize the matching model. For example, if the matching model is an affine transformation model, the number of elements of S1 is 3.
Repeat the above sampling and model fitting, and from the set S find the subset S' whose error with respect to the matching model is below a specific threshold t.
After a certain number of sampling rounds, if no consensus set has been found, the algorithm fails; otherwise the largest consensus set obtained over the sampling rounds is taken as the inliers, and the corresponding matching model is the final matching model.
In the image matching algorithm of this embodiment, the iteration stop condition of RANSAC is usually determined jointly by the number of elements of the largest consensus set and the number of iterations; the more elements the consensus set found in each iteration has, the faster the RANSAC algorithm converges.
On the basis of the above fitting algorithm, the fitting results of random selection with this embodiment's grouping and of direct random selection of matched feature point subsets can be compared, taking the distribution of matched feature points in Figs. 7 and 8 as the example and an affine transformation model as the matching model:
In an image sequence scene with multiple motion parameters, the numbers of matched feature points in two or more different regions of the image are comparable; with reference to Fig. 8, the number of matched feature points in each of regions n1 to n4 is greater than 3.
Suppose the matching model is fitted with a completely random RANSAC selection scheme, drawing 3 matched feature points each time (i.e. drawing one matched feature point subset) to establish a matching model. If all 3 drawn points lie in sub-image region n1, then according to the RANSAC algorithm of this embodiment the matched feature points in region n1 are found as the consensus set within the set of all matched feature points of the whole image scene; in the subsequent RANSAC iterations no consensus set with more elements than the one corresponding to this model will be found, and by the time the algorithm converges, the final matching model is the one computed from the matched feature points in region n1.
Likewise, if in the very first iterations 3 points are first drawn from any one of regions n2 to n4, the final matching model is the one computed from the matched feature points in that region.
If many frames of the image sequence are in this situation, then by randomness the matching models found for consecutive frames may each be based on a different region; e.g. for frame n the final matching model is found from the matched feature points in region n1, while for frame n+1 it is found from those in region n2. Such randomness cannot guarantee a consistent estimate of the parameters representing the global motion, and ultimately causes image matching and video stabilization to fail.
When, instead, random selection with this embodiment's grouping is used to fit the matching model, the randomly selected matched feature points are guaranteed to fall in at least 3/4 of the scene, reducing the probability that the selected points fall in a local region:
The system first records that the numbers of matched feature points in the four sub-image regions n1 to n4 of the image scene are non-zero, and, under the RANSAC algorithm, randomly selects three of the four regions each time from which to draw matched feature points.
If the point counts of all three regions are non-zero, one matched feature point is drawn at random from each of the three regions to form a matched feature point subset. (Of course, if two of the three randomly chosen regions have non-zero point counts and the third has a count of zero, one of the two regions is chosen at random, two matched feature points are drawn from it at random, and one matched feature point is drawn at random from the other region; if only one of the three chosen regions has a non-zero point count and the other two have counts of zero, three matched feature points are simply drawn at random from that one region.)
In short, random selection with grouping guarantees that the matched feature points drawn in each RANSAC iteration are spread over most of the image scene, i.e. that the matched feature points used to compute the matching model (e.g. an affine transformation model) reflect the global motion of the image; it better estimates the global matching model parameters of spatially distributed matched feature points, avoids being trapped in local matching results as far as possible, reduces the probability of convergence to a local optimum, and guarantees the robustness of the algorithm.
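Putting the pieces together, a sketch of the grouped RANSAC fit of step S104, reusing the draw_spread_sample sketch above (the iteration count, inlier threshold and fixed-iteration stopping rule are assumptions):

```python
import cv2
import numpy as np

def fit_matching_model(groups, iters=500, thresh=3.0):
    """Fit a 2x3 affine matching model by RANSAC, drawing each minimal
    sample with draw_spread_sample so it spans several sub-regions."""
    matches = [m for pts in groups.values() for m in pts]
    src = np.float32([m[0] for m in matches])
    dst = np.float32([m[1] for m in matches])
    best_model, best_inliers = None, 0
    for _ in range(iters):
        sample = draw_spread_sample(groups, k=3)
        p_src = np.float32([s[0] for s in sample])
        p_dst = np.float32([s[1] for s in sample])
        if abs(np.linalg.det(np.c_[p_src, np.ones(3)])) < 1e-6:
            continue  # skip degenerate (collinear) samples
        A = cv2.getAffineTransform(p_src, p_dst)
        proj = src @ A[:, :2].T + A[:, 2]  # apply the affine model
        inliers = int(np.sum(np.linalg.norm(proj - dst, axis=1) < thresh))
        if inliers > best_inliers:         # keep the largest consensus set
            best_model, best_inliers = A, inliers
    return best_model
```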
Embodiment two
An image matching method, as shown in Fig. 9, comprises the following steps:
Step S200: extract and match feature points in a first image and a second image to obtain matched feature points between the first image and the second image.
For the implementation of this step, refer to step S101 of embodiment one.
Step S201: partition the image scene over which the matched feature points are distributed, to obtain sub-regions.
Unlike the scene partition of embodiment one, what is partitioned in this embodiment is the image scene over which the matched feature points are distributed.
Still referring to Fig. 2, in combination with Fig. 7, the extent of the image scene over which the matched feature points are distributed is set on the basis of the spatial position relationships of that scene.
From the spatial positions of the matched feature points, the image scene over which the matched feature points are distributed can be defined as the smallest image region containing the matched feature points. As shown in Fig. 10, s1 is the smallest image region containing the matched feature points, i.e. the image scene over which the matched feature points are distributed; its boundary is delimited by the lines joining the matched feature points, which in Fig. 10 are straight lines.
Other definitions, again set from the spatial positions of the matched feature points, are also possible:
As shown in Fig. 11, s1 is again an image region containing the matched feature points, and the boundary of the image scene over which the matched feature points are distributed is again delimited by lines joining the matched feature points; here, however, the boundary matched feature points can be joined by custom curves obtained from a preset function.
As shown in Fig. 12, s2 is likewise an image region containing the matched feature points; here the image scene over which the matched feature points are distributed is a custom regular geometric region based on the boundary matched feature points, such as a square, rectangle or other polygon. In Fig. 12 it is a rectangle covering the boundary matched feature points.
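The rectangle definition of Fig. 12, for example, is simply the axis-aligned bounding box of the matched feature points (an illustrative sketch, with assumed names; the patent also allows other shapes):

```python
import numpy as np

def bounding_region(points):
    """Smallest axis-aligned rectangle covering all matched feature
    points, returned as (x_min, y_min, x_max, y_max)."""
    pts = np.asarray(points, dtype=np.float32)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return float(x_min), float(y_min), float(x_max), float(y_max)
```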
In this step, the image scene over which the matched feature points are distributed can likewise be partitioned in several ways:
The global image scene 3 can be partitioned, thereby partitioning the image scene over which the matched feature points are distributed; for the dividing modes, refer to step S100 of embodiment one.
For example, with the third dividing mode, the result of partitioning the image scene s1 of Fig. 11 is shown in Fig. 13. The sub-regions obtained in this step are then sub-region n1', sub-region n2', sub-region n3' and sub-region n4'.
In addition, the first dividing mode of embodiment one can be applied directly to the image scene over which the matched feature points are distributed, the choice of center point and angle being customizable by the capture device.
For example, with the first dividing mode, center point A1 and an angle of 120°, the result of partitioning the image scene s1 of Fig. 11 is shown in Fig. 14. The sub-regions obtained in this step are then sub-region m1, sub-region m2 and sub-region m3.
With the first dividing mode, center point A2 and an angle of 90°, the result of partitioning the image scene s2 of Fig. 12 is shown in Fig. 15. The sub-regions obtained in this step are then sub-region m1', sub-region m2', sub-region m3' and sub-region m4'.
Continuing with Fig. 9, the image matching method of this embodiment further comprises:
Step S202: group the matched feature points according to the sub-regions.
For the implementation of this step, refer to step S102 of embodiment one.
Step S203: according to the matching model between the first image and the second image, select a subset of matched feature points from the matched feature points, the subset covering as many groups as possible.
For the implementation of this step, refer to step S103 of embodiment one.
Step S204: fit the matching model with the subset of matched feature points to obtain the image matching result.
For the implementation of this step, refer to step S104 of embodiment one.
Embodiment three
A video processing method, as shown in Fig. 16, comprises:
Step S300: fit the matching model between adjacent frame images.
For the process of fitting the matching model between adjacent frame images in this step, refer to the matching method described in embodiment one or embodiment two.
Step S301: perform motion compensation on the frame images on the basis of the matching model.
This embodiment uses motion compensation to achieve video stabilization. With the matching model obtained by the fitting of step S300, the video sequence (image sequence) can be filtered and matched on the basis of the matching model to obtain a stable video stream: the reference frame is aligned to the current frame, the difference between the reference frame and the current frame is computed, and the current frame is then filled in from the reference frame, achieving the motion compensation.
Alternatively, several reference frames can be aligned to the current frame simultaneously, the minimum of the differences between the reference frames and the current frame taken, and the reference frame giving the minimum difference used to fill in the current frame, achieving the motion compensation.
It is also possible to evaluate, by an evaluation criterion, the smooth motion parameters among the global motion parameters corresponding to the reference frames, and to use the difference between the smooth motion parameters and the global motion parameters as the jitter parameters, achieving the motion compensation.
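As a minimal sketch of step S301 (the inverse-warp formulation and border handling are assumptions, not the patent's prescription), a frame can be compensated with the fitted 2x3 affine model like so:

```python
import cv2

def compensate(frame, model):
    """Warp the current frame with the inverse of the fitted affine
    matching model so that the estimated global (jitter) motion between
    the reference frame and the current frame is cancelled."""
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, model, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP,
                          borderMode=cv2.BORDER_REPLICATE)
```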
Although the present invention is disclosed above by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the spirit and scope of the present invention, make possible variations and modifications to the technical solution of the present invention using the methods and technical content disclosed above. Therefore, any simple modification, equivalent variation and refinement made to the above embodiments according to the technical essence of the present invention, and not departing from the content of the technical solution of the present invention, falls within the protection scope of the technical solution of the present invention.

Claims (17)

1. An image matching method, characterized by comprising:
partitioning an image scene to obtain sub-regions;
extracting and matching feature points in a first image and a second image to obtain matched feature points between the first image and the second image;
grouping the matched feature points according to the sub-regions;
selecting, according to a matching model between the first image and the second image, a subset of matched feature points from the matched feature points, the subset covering as many groups as possible;
fitting the matching model with the subset of matched feature points to obtain an image matching result.
2. The image matching method according to claim 1, characterized in that the partitioning of the image scene is performed on the basis of the spatial position relationships of the image scene.
3. The image matching method according to claim 2, characterized in that partitioning the image scene comprises: dividing the image scene evenly.
4. The image matching method according to claim 2, characterized in that partitioning the image scene comprises: dividing the image scene into at least three equal parts in the horizontal or vertical direction.
5. The image matching method according to claim 2, characterized in that partitioning the image scene comprises: dividing the image scene into equal angles around the center point of the image scene.
6. The image matching method according to claim 1, characterized in that the matching model is an affine transformation model.
7. The image matching method according to claim 1, characterized in that the grouping of the matched feature points corresponds to the division into sub-regions.
8. The image matching method according to claim 1, characterized in that the matching model is fitted on the basis of the RANSAC algorithm.
9. An image matching method, characterized by comprising:
extracting and matching feature points in a first image and a second image to obtain matched feature points between the first image and the second image;
partitioning the image scene over which the matched feature points are distributed, to obtain sub-regions;
grouping the matched feature points according to the sub-regions;
selecting, according to a matching model between the first image and the second image, a subset of matched feature points from the matched feature points, the subset covering as many groups as possible;
fitting the matching model with the subset of matched feature points to obtain an image matching result.
10. The image matching method according to claim 9, characterized in that the partitioning of the image scene over which the matched feature points are distributed is performed on the basis of the spatial position relationships of that image scene.
11. The image matching method according to claim 10, characterized in that partitioning the image scene over which the matched feature points are distributed comprises: dividing that image scene evenly.
12. The image matching method according to claim 10, characterized in that partitioning the image scene over which the matched feature points are distributed comprises: dividing that image scene into at least three equal parts in the horizontal or vertical direction.
13. The image matching method according to claim 10, characterized in that partitioning the image scene over which the matched feature points are distributed comprises: dividing that image scene into equal angles around its center point.
14. The image matching method according to claim 9, characterized in that the matching model is an affine transformation model.
15. The image matching method according to claim 9, characterized in that the grouping of the matched feature points corresponds to the division into sub-regions.
16. The image matching method according to claim 9, characterized in that the matching model is fitted on the basis of the RANSAC algorithm.
17. A video processing method, characterized by comprising:
fitting a matching model between adjacent frame images with the matching method according to any one of claims 1 to 16;
performing motion compensation on frame images on the basis of the matching model.
CN201410234637.7A 2014-05-28 2014-05-28 Image matching method and video processing method Active CN105447841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410234637.7A CN105447841B (en) 2014-05-28 2014-05-28 Image matching method and video processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410234637.7A CN105447841B (en) 2014-05-28 2014-05-28 Image matching method and video processing method

Publications (2)

Publication Number Publication Date
CN105447841A true CN105447841A (en) 2016-03-30
CN105447841B CN105447841B (en) 2019-06-07

Family

ID=55557975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410234637.7A Active CN105447841B (en) 2014-05-28 2014-05-28 Image matching method and method for processing video frequency

Country Status (1)

Country Link
CN (1) CN105447841B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101009021A (en) * 2007-01-25 2007-08-01 复旦大学 Video stabilizing method based on matching and tracking of characteristic
CN101916445A (en) * 2010-08-25 2010-12-15 天津大学 Affine parameter estimation-based image registration method
KR101247220B1 (en) * 2011-03-10 2013-03-25 서울대학교산학협력단 Image processing apparatus and method using repetitive patterns

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
易盟 et al.: "Registration of aerial video images based on invariant features and mapping suppression", Acta Aeronautica et Astronautica Sinica *
远中文: "Research on electronic image stabilization technology for video sequences", China Master's Theses Full-text Database *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316313A (en) * 2016-04-15 2017-11-03 株式会社理光 Scene Segmentation and equipment
CN107316313B (en) * 2016-04-15 2020-12-11 株式会社理光 Scene segmentation method and device
CN109840457A (en) * 2017-11-29 2019-06-04 深圳市掌网科技股份有限公司 Augmented reality register method and augmented reality register device
CN109840457B (en) * 2017-11-29 2021-05-18 深圳市掌网科技股份有限公司 Augmented reality registration method and augmented reality registration device
CN113020428A (en) * 2021-03-24 2021-06-25 北京理工大学 Processing monitoring method, device and equipment of progressive die and storage medium
CN113020428B (en) * 2021-03-24 2022-06-28 北京理工大学 Progressive die machining monitoring method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN105447841B (en) 2019-06-07


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant