WO2014183385A1 - Terminal and image processing method therefor - Google Patents

Terminal and image processing method therefor

Info

Publication number
WO2014183385A1
WO2014183385A1 (PCT/CN2013/085782)
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
matching
feature
module
Prior art date
Application number
PCT/CN2013/085782
Other languages
French (fr)
Chinese (zh)
Inventor
刘冬梅 (Liu Dongmei)
刘凤鹏 (Liu Fengpeng)
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2014183385A1


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 — Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10016 — Video; Image sequence
    • G06T2207/10021 — Stereoscopic video; Stereoscopic image sequence

Definitions

  • The present invention relates to the field of image processing technologies, and in particular to a terminal and a method for implementing image processing.
  • Embodiments of the present invention provide a terminal and a method for implementing image processing on the terminal, so that images captured by the terminal can meet the dual requirements of field of view and resolution.
  • According to one aspect, a method for implementing image processing on a terminal includes:
  • an image acquisition step: acquiring two images with an overlapping region; and
  • an image matching step: registering the two images according to the overlapping region, and synthesizing the two registered images.
  • Registering the two images according to the overlapping region includes: extracting feature points of the two images, extracting matching feature pairs of the two images from the feature points, and registering the two images with the matching feature pairs as alignment points.
  • The feature points include corner points of the image.
  • Extracting the feature points of the two images includes: for each image, convolving the image with a 3 × 3 convolution kernel to obtain the partial derivatives at each pixel, and using the partial derivatives to compute, for each pixel, the symmetric matrix M of the Plessy corner detection algorithm;
  • setting a selection window and the feature-point evaluation function R = Det(M) / (Trace(M) + ε), where Det(M) = λ1·λ2 and Trace(M) = λ1 + λ2, λ1 and λ2 are the eigenvalues of M, and ε is a small value that keeps the denominator non-zero; selecting a detection area on the image according to the selection window, screening out the pixel with the largest R value in that area, and moving the selection window until the whole image has been screened; and
  • setting a feature point decision threshold, and taking, among the screened pixels, those whose R value is greater than the decision threshold as the extracted feature points.
  • Optionally, the method further includes:
  • before feature point extraction, deleting the boundary feature points in the image using a preset boundary template; and/or, after extracting the feature points, extracting the sub-pixel feature points among them and taking the extracted sub-pixel feature points as the final feature points.
  • Extracting the matching feature pairs of the two images from the feature points includes:
  • coarsely matching the feature points of the two images using the bidirectional greatest correlation coefficient (BGCC) algorithm, and then accurately matching the coarsely matched pairs using the random sample consensus (RANSAC) algorithm to obtain accurately extracted matching feature pairs.
  • Before the coarse matching, the method further includes:
  • smoothing the two images with a median filter, and using the difference between the original image and the filtered image as the operand of the coarse matching.
  • Synthesizing the two registered images includes: setting the gray value f(x, y) of each pixel of the two registered images according to a progressive fade-in/fade-out synthesis method.
  • Synthesizing the two registered images further includes:
  • taking the 7 × 7 area around the seam as the seam processing area, and linearly filtering the pixels in the seam processing area with a 3 × 3 template.
  • Between the image acquisition step and the image matching step, the method further includes:
  • an image pre-processing step: processing the two images acquired in the image acquisition step according to a set of pre-processing operations, where the pre-processing operations include one or more of the following: verifying the acquired images, converting the two images into the same coordinate system, smoothing and filtering the two images, and performing initial positioning to obtain a rough overlapping region that is then used as the extraction region for feature points.
  • Optionally, the method further includes:
  • a 3D image generation step: acquiring a composite image and another image that has an overlapping region with the composite image, executing the image matching step to synthesize them again, and repeating the image acquisition and image matching process to obtain a 3D image with depth of field; the other image having an overlapping region with the composite image may be either a non-composite image or a composite image.
  • According to another aspect, a terminal includes:
  • an image acquisition module configured to acquire two images with an overlapping region; and
  • an image matching module configured to register the two images according to the overlapping region and to synthesize the two registered images.
  • The image matching module is configured to extract feature points of the two images, extract matching feature pairs of the two images from the feature points, and register the two images with the matching feature pairs as alignment points.
  • The feature points include corner points of the image.
  • The image matching module further includes: a calculation submodule configured to convolve each image with a 3 × 3 convolution kernel, obtain the partial derivatives at each pixel of the image, and use the partial derivatives to compute, for each pixel, the symmetric matrix M of the Plessy corner detection algorithm;
  • a setting submodule configured to set a selection window and the feature-point evaluation function R = Det(M) / (Trace(M) + ε), where Det(M) = λ1·λ2 and Trace(M) = λ1 + λ2, λ1 and λ2 are the eigenvalues of M, and ε is a small value that keeps the denominator non-zero; a screening submodule configured to select a detection area on the image according to the selection window, screen out the pixel with the largest R value in the detection area, and move the selection window until the whole image has been screened; and
  • an extraction submodule configured to set a feature point decision threshold and take, among the screened pixels, those whose R value is greater than the decision threshold as the extracted feature points.
  • The image matching module further includes: a coarse matching submodule configured to coarsely match the feature points of the two images using the bidirectional greatest correlation coefficient (BGCC) algorithm; and an exact matching submodule configured to accurately match the coarsely matched feature pairs using the random sample consensus (RANSAC) algorithm to obtain accurately extracted matching feature pairs.
  • The image matching module is further configured to set the gray value f(x, y) of each pixel of the two registered images according to a progressive fade-in/fade-out synthesis method.
  • The setting rule is the piecewise definition of formula (11) below: outside the overlap the pixel takes the gray value of the image that covers it; inside the overlap it takes the weighted average d1·f1(x, y) + d2·f2(x, y) when the gray difference of the two sources is below a preset decision threshold, and the gray value before smoothing otherwise, where d1, d2 ∈ (0, 1) with d1 + d2 = 1 are the gradual-transition factors of the two images.
  • The terminal further includes an image pre-processing module and/or a 3D image generation module, where:
  • the image pre-processing module is configured to process the two images acquired by the image acquisition module according to a set of pre-processing operations, the pre-processing operations including one or more of the following: verifying the acquired images, converting the two images into the same coordinate system, smoothing and filtering the two images, and performing initial positioning to obtain a rough overlapping region that is then used as the extraction region for feature points; and
  • the 3D image generation module is configured to acquire a composite image and another image that has an overlapping region with the composite image, trigger the image matching module to synthesize them again, and repeat the image acquisition and image matching process to obtain a 3D image with depth of field; the other image having an overlapping region with the composite image may be either a non-composite image or a composite image.
  • With the terminal and method of the embodiments of the present invention, two sets of images taken at different angles but sharing an overlapping region are captured, feature point parameters are extracted directly from each image, the degree of matching between the images is determined from the feature points, erroneous matching pairs are rejected, and the registered images are blended and synthesized, yielding a wide-field, high-resolution image that greatly improves the user experience.
  • By capturing a series of images at different angles but with overlapping regions and spatially overlapping them, a complete, high-definition new image with a 3D effect is formed: a wide-viewing-angle scene containing the information of the whole image sequence. This not only meets the requirements of a wide field of view and high resolution but also better satisfies users' needs.
  • FIG. 1 is a flowchart of a method for implementing image processing on a terminal according to Embodiment 1 of the present invention;
  • FIG. 2 is a geometric diagram of actual matching points and estimated matching points in Embodiment 2 of the present invention;
  • FIG. 3 is the overall processing framework of image processing in Embodiment 2;
  • FIG. 4 is a flowchart of a method for implementing image processing on a terminal according to Embodiment 3 of the present invention;
  • FIG. 5 is a processing framework diagram of an application example in Embodiment 3 of the present invention;
  • FIG. 6 is a structural block diagram of a terminal according to Embodiment 4 of the present invention; and
  • FIG. 7 is a structural block diagram of a terminal according to Embodiment 5 of the present invention.
  • To address the problem that a single picture captured by current terminal devices cannot simultaneously offer high resolution and a wide field of view, the present invention provides a terminal and a method for implementing image processing on it: two captured images sharing a certain overlapping region are spatially aligned and synthesized, yielding a large-field-of-view scene image without reducing image resolution. The specific implementation process is elaborated below through several embodiments.
  • Embodiment 1. An embodiment of the present invention provides a method for implementing image processing on a terminal. As shown in FIG. 1, the method includes: Step S101: acquiring two images having an overlapping region.
  • In this step, the acquired images may be images stored in a terminal storage module (such as the terminal's internal memory and/or an external expansion memory) or images captured by the terminal in real time; this embodiment does not uniquely limit the way images are acquired.
  • When the terminal acquires images by real-time capture, this embodiment offers a preferred implementation: two sets of rotatable cameras are provided in the terminal. Since the two sets of cameras can shoot at varying angles, images at different angles with a certain overlapping region can be acquired simultaneously.
  • This acquisition method provides strong support for speeding up image processing.
  • Step S102: registering the two images according to the overlapping region.
  • In this step, registering the two images according to the overlapping region includes: extracting feature points of the two images, extracting matching feature pairs of the two images from the feature points, and registering the two images with the matching feature pairs as alignment points.
  • The feature points may be any geometric or grayscale features, extracted according to image properties, that are suitable for image matching.
  • This embodiment preferably uses corner points as the feature points to be extracted.
  • Corner extraction is mainly implemented with a corner detection algorithm. Corner detection algorithms fall into two classes: edge-based and grayscale-based. The former depends heavily on edge extraction: if the detected edge is wrong or the edge line is interrupted (which often happens in practice), the corner extraction result is strongly affected. Grayscale-based algorithms instead detect corners by computing points of sharp local maxima in grayscale and gradient change, require no edge extraction, and have therefore been widely used in practice.
  • The most representative corner detection algorithms are: Moravec operator corner detection, Forstner operator corner detection, the SUSAN detection algorithm, and the Plessy corner detection algorithm.
  • The Plessy corner detection algorithm performs excellently in terms of consistency and effectiveness, and the corners it extracts have proven to be rotation- and translation-invariant and stable.
  • The basic idea of the Plessy corner detection algorithm is to determine corners from the rate of change of image gray levels. The method computes the eigenvalues of a matrix M associated with the autocorrelation function of the image, i.e., the first-order curvatures of the autocorrelation function, to decide whether a point is a corner: if both curvature values are high, the point is considered a corner.
  • The Plessy corner detection algorithm defines the autocorrelation value E(u, v) in an arbitrary direction as:
  • E(u, v) = [u, v] · M · [u, v]^T, where M = G(σ) ⊗ [[Ix², Ix·Iy], [Ix·Iy, Iy²]] (1)
  • Ix and Iy are the gradient values of the image in the x and y directions, σ is a parameter characterizing the width of the Gaussian filter G, and ⊗ denotes the convolution operation.
  • M is a 2 × 2 symmetric matrix, so it necessarily has two eigenvalues λ1 and λ2, and these eigenvalues reflect the characteristics of the image pixels: if a pixel (x, y) is a feature point, the two eigenvalues of the matrix M at this point are both positive, and they are local maxima in the region centered on (x, y). The feature points can then be characterized by the evaluation function:
  • R = Det(M) − k · Trace²(M) (2)
  • where Det(M) = λ1·λ2, Trace(M) = λ1 + λ2, k is an empirical constant, and a point is taken as a corner when its R value exceeds a threshold T.
  • Feature points are generally the pixels with the maximal interest values in a local range; therefore, after the R value of every point has been computed, non-maximum suppression is performed, and all points with the largest local interest values in the original image are extracted.
  • In step S102, the matching feature pairs of the two images are extracted by coarse matching followed by fine matching (the BGCC and RANSAC procedures detailed in Embodiment 2).
  • Step S103: synthesizing the two registered images.
  • The image synthesis strategy selected in this embodiment may be, but is not limited to, a progressive fade-in/fade-out synthesis method.
  • With the method of this embodiment, two sets of images at different angles but with an overlapping region are captured, feature point parameters are extracted directly from each image, the degree of matching between the images is determined from the feature points, erroneous matching pairs are rejected, and the registered images are synthesized, obtaining a wide-field, high-resolution image that greatly improves the user experience.
  • Embodiment 2. This embodiment provides a method for implementing image processing on a terminal. It adds several improvements to the main structure described in Embodiment 1 that further increase the speed and accuracy of image processing. Still referring to FIG. 1, the method includes the following steps:
  • Step S101: acquiring two images having an overlapping region.
  • Step S102: registering the two images according to the overlapping region.
  • In this step, registering the two images according to the overlapping region includes: extracting feature points of the two images, extracting matching feature pairs of the two images from the feature points, and registering the two images with the matching feature pairs as alignment points.
  • For feature point extraction, this embodiment proposes several improvements so as to extract as many accurate feature points from the image as possible while accelerating corner extraction.
  • The implementation process of the improved Plessy corner detection algorithm includes:
  • (1) Convolving the original image with a 3 × 3 convolution kernel to obtain the first-order partial derivatives Ix, Iy at each point of the original image.
  • The 3 × 3 convolution kernel may be, but is not limited to, a simple first-order difference template.
  • (2) In the original evaluation function R, k is an empirical constant; taking it arbitrarily reduces the reliability of corner extraction and, under different picture conditions, easily harms the accuracy of corner extraction. Considering that R is essentially a corner detection signal (a large determinant together with a small trace signals a corner, and vice versa), the improved algorithm computes the feature point evaluation function as R = Det(M) / (Trace(M) + ε), where ε is a small value that keeps the denominator non-zero.
  • (3) The improved Plessy corner detection algorithm filters feature points by in-window non-maximum suppression combined with a threshold.
  • The principle is: select an appropriate window in the image, retain the pixel with the largest R in the window, delete the remaining pixels in the window, and move the window until the pixels of the entire image have been filtered.
  • The number of local extremum points obtained this way is often large.
  • A reasonable threshold is therefore set according to requirements, and among the selected pixels those whose R value exceeds the threshold are taken as the final feature point extraction result.
  • (4) A preset boundary template is used to exclude boundary corner points that do not match well.
  • The sizes of the 'selection window' and the 'decision threshold' can be set flexibly according to actual requirements: the smaller the selection window, the more pixels are selected, and vice versa; and the larger the decision threshold, the fewer feature points are finally extracted, and vice versa.
  • In this embodiment, for example, the decision threshold is 2200 and the non-maximum suppression window is 7 × 7; the selection window and the decision threshold can be set flexibly according to requirements, and this embodiment does not limit their sizes.
  • After the above extraction, a sub-pixel feature point (corner) positioning process may also be performed to extract the feature points even more precisely: where a maximum value point is found, the pixel corresponding to it is taken as the precisely extracted corner (feature point); otherwise, the corresponding candidate corner is deleted.
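The patent does not pin down the sub-pixel positioning procedure. One plausible realization is OpenCV's iterative corner refinement, sketched below; the search window and termination criteria are illustrative assumptions.

```python
import cv2
import numpy as np

def refine_subpixel(gray: np.ndarray, corners_xy: np.ndarray) -> np.ndarray:
    """Refine integer corner locations to sub-pixel accuracy."""
    pts = corners_xy.astype(np.float32).reshape(-1, 1, 2)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    # (5, 5) search window, no dead zone; both are illustrative choices.
    refined = cv2.cornerSubPix(gray.astype(np.float32), pts,
                               (5, 5), (-1, -1), criteria)
    return refined.reshape(-1, 2)
```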
  • In step S102, the following extraction method is preferably used for the matching feature pairs:
  • the matching algorithm proposed in this embodiment has two steps: coarse matching with the bidirectional greatest correlation coefficient (BGCC), followed by purification with the random sample consensus (RANSAC) method to achieve precise image matching.
  • This method can accurately extract the correct matching feature point pairs while removing redundant feature points.
  • The coarse matching uses the bidirectional greatest correlation coefficient (BGCC) method, which establishes the normalized cross-correlation (NCC) as the similarity measure.
  • A match is considered successful only when each of the two corner points attains the largest correlation measure with respect to the other.
  • The correlation coefficient NCC is defined over image windows: an n × n correlation window is selected around a corner in one of the images, a search area is selected in the other image, and for a corner feature point (u, v) the window gray values, centered by the mean gray value of the corner window area, are normalized-cross-correlated with each candidate window.
  • The corner with the largest correlation coefficient is taken as the matching point of the given corner, so that a set of matching points is obtained.
  • To improve robustness, the images are first smoothed with a median filter (such as a 7 × 7 median filter), and the result of subtracting the filtered image from the original image is then taken as the object of the matching operation.
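A minimal sketch of the BGCC coarse matching just described, assuming 8-bit grayscale inputs, a square correlation window, and a bounded search radius (the patent fixes none of these values); the 7 × 7 median-filter detail subtraction is included as in the text.

```python
import cv2
import numpy as np

def _detail(img: np.ndarray) -> np.ndarray:
    """Original minus its 7x7 median-filtered version (8-bit input assumed)."""
    return img.astype(np.float64) - cv2.medianBlur(img, 7).astype(np.float64)

def _ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def bgcc_match(img1, img2, pts1, pts2, win: int = 5, radius: int = 60):
    """Keep a pair (p, q) only if each corner is the other's best NCC match."""
    d1, d2 = _detail(img1), _detail(img2)
    full = (2 * win + 1, 2 * win + 1)

    def window(img, p):
        x, y = int(p[0]), int(p[1])
        return img[y - win:y + win + 1, x - win:x + win + 1]

    def best(src, dst, p, candidates):
        scores = [(_ncc(window(src, p), window(dst, q)), j)
                  for j, q in enumerate(candidates)
                  if abs(q[0] - p[0]) <= radius and abs(q[1] - p[1]) <= radius
                  and window(dst, q).shape == full]
        return max(scores)[1] if scores else None

    matches = []
    for i, p in enumerate(pts1):
        if window(d1, p).shape != full:
            continue                      # too close to the image border
        j = best(d1, d2, p, pts2)
        # Bidirectional check: p must also be the best match of pts2[j].
        if j is not None and best(d2, d1, pts2[j], pts1) == i:
            matches.append((i, j))
    return matches
```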
  • The random sample consensus (RANSAC) method is then used for fine matching.
  • The basic idea of RANSAC is: first, an objective function is designed for the specific problem; then initial values of the function's parameters are estimated by repeatedly sampling minimal point sets, and these initial parameter values are used to divide all the data into 'inliers' (points that satisfy the estimated parameters) and 'outliers' (points that do not); finally, the parameters of the function are recomputed and re-estimated from all the inliers.
  • The so-called minimal point set is sampled from the input data; the parameters to be determined are estimated from the minimal point set obtained in each sampling, and at the same time, according to a certain decision criterion, it is determined which of the input data are consistent with this set of parameters, the 'inliers', and which are inconsistent, the 'outliers'.
  • The estimated parameter values with the highest ratio of inliers in the input data are taken as the final parameter estimate.
  • The procedure is iterative: (1) randomly select n pairs of matching points; (2) estimate the projective transformation matrix H from them; (3) divide the coarse matches into inliers and outliers under H; (4) randomly select new n pairs of matching points, return to step (2), and repeat N times to obtain the most accurate projective transformation matrix H, then apply the projective transformation given by H to each matching point obtained by coarse matching.
  • The inliers are the accurately extracted matching feature pairs.
  • Estimating the transformation matrix H requires at least eight equations, i.e., n (n ≥ 4) feature pairs must be selected in the two adjacent images; these feature pairs are obtained through the corner matching process described above.
  • Under H, a matching pair (p, p′) is projectively transformed in homogeneous coordinates.
  • The inliers are determined according to the principle that an inlier's distance is less than a set distance threshold t. As shown in FIG. 2, let p̂ and p̂′ denote the points estimated for p and p′ in their respective corresponding images; then the geometric distance between the actual matching point of a point and its estimated matching point is defined as:
  • dis_i = d(p_i′, H·p_i) + d(p_i, H⁻¹·p_i′), i = 1, 2, …, n (10)
  • where d(·, ·) denotes the Euclidean distance.
  • If the computed dis is greater than the set distance threshold, the corresponding matching pair is considered an outlier; if it is less than the threshold, the pair is considered an inlier; and only the inliers are suitable for computing the transformation matrix H.
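A sketch of the RANSAC purification step under the symmetric geometric distance of formula (10); the distance threshold t and the iteration count are illustrative. cv2.getPerspectiveTransform estimates H from each minimal four-pair sample, and the final H is re-estimated from all inliers.

```python
import cv2
import numpy as np

def _sym_dist(h: np.ndarray, p, q) -> float:
    """dis = d(q, H*p) + d(p, inv(H)*q), with d the Euclidean distance."""
    def proj(m, pt):
        v = m @ np.array([pt[0], pt[1], 1.0])
        return v[:2] / v[2]
    return (np.linalg.norm(proj(h, p) - np.asarray(q, float)) +
            np.linalg.norm(proj(np.linalg.inv(h), q) - np.asarray(p, float)))

def ransac_homography(pts1, pts2, t: float = 3.0, iters: int = 500):
    """Return (H, inlier indices) maximizing the number of inliers."""
    rng = np.random.default_rng(0)
    n = len(pts1)
    best_h, best_inliers = None, []
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)       # minimal point set
        src = np.float32([pts1[i] for i in idx])
        dst = np.float32([pts2[i] for i in idx])
        try:
            h = cv2.getPerspectiveTransform(src, dst)
            inliers = [i for i in range(n)
                       if _sym_dist(h, pts1[i], pts2[i]) < t]
        except (cv2.error, np.linalg.LinAlgError):
            continue                                 # degenerate sample
        if len(inliers) > len(best_inliers):
            best_h, best_inliers = h, inliers
    if len(best_inliers) >= 4:                       # re-estimate from all inliers
        src = np.float32([pts1[i] for i in best_inliers])
        dst = np.float32([pts2[i] for i in best_inliers])
        best_h, _ = cv2.findHomography(src, dst, 0)  # 0 = plain least squares
    return best_h, best_inliers
```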
  • Step S103: synthesizing the two registered images.
  • In this embodiment, an improved progressive fade-in/fade-out synthesis method is used for image synthesis, as follows:
  • the original progressive fade synthesis method takes the gray value f(x, y) of a pixel in the overlapping region as the weighted average of the gray values f1(x, y) and f2(x, y) of the corresponding pixels in the two images:
  • f(x, y) = d1·f1(x, y) + d2·f2(x, y)
  • where d1, d2 ∈ (0, 1) with d1 + d2 = 1 are the gradual-transition factors of the two images; across the overlap, d1 ramps from 1 to 0 while d2 ramps from 0 to 1,
  • so that f(x, y) transitions slowly and smoothly from f1(x, y) to f2(x, y).
  • Experiments with this algorithm show that although the boundary in the image is eliminated, ghosting and blurring still appear in the overlapping region of the processed image.
  • This happens in particular where the gray values of the two source images differ greatly, so the gray value of the synthesized image jumps at those pixels. To avoid this situation,
  • the improved method introduces a decision threshold, denoted door: for f(x, y) it does not directly take the weighted average of f1(x, y) and f2(x, y), but first computes the difference of the gray values of the corresponding pixels in the two source images;
  • if the difference is less than the threshold, the weighted average is taken as the gray value of the point; otherwise, the gray value before smoothing is taken as the gray value of the point.
  • The pixel f(x, y) synthesized by the corrected algorithm can thus be expressed as:
  • f(x, y) = f1(x, y) for (x, y) in f1 only; f(x, y) = d1·f1(x, y) + d2·f2(x, y) for (x, y) in the overlap with |f1(x, y) − f2(x, y)| < door; f(x, y) = f1(x, y) for (x, y) in the overlap with |f1(x, y) − f2(x, y)| ≥ door and d1 ≥ d2; f(x, y) = f2(x, y) for (x, y) in the overlap with |f1(x, y) − f2(x, y)| ≥ door and d1 < d2; f(x, y) = f2(x, y) for (x, y) in f2 only. (11)
  • Here door is a preset decision threshold. Formula (11) shows that the decision threshold determines which gray-level definition is used for each pixel of the overlapping region; if door is set too large, all pixels of the overlap fall back to the plain weighted average and the correction loses its effect. This embodiment only proposes the concept of the threshold; its specific value is not limited.
  • If the overlap area selected around the seam is too large, problems such as image blurring and weak edge information may occur; if it is too small, the seam phenomenon cannot be eliminated. Therefore, in this embodiment, for the processed image, the 7 × 7 region around the seam is used as the seam processing area, and a 3 × 3 template is used to linearly filter the pixels in the seam area, which gives the best effect.
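A sketch of the corrected progressive fade synthesis, assuming two already registered grayscale images blended across a horizontal overlap band of columns [x0, x1). The threshold door, the rule of keeping the dominant source where the gray values jump, and the 3 × 3 linear filtering of a seven-pixel seam band follow the description above; the numeric values are illustrative.

```python
import cv2
import numpy as np

def blend_overlap(f1: np.ndarray, f2: np.ndarray, x0: int, x1: int,
                  door: float = 40.0) -> np.ndarray:
    """Feathered blend of two aligned grayscale images over columns [x0, x1)."""
    out = f1.astype(np.float64).copy()
    width = max(x1 - x0, 1)
    for x in range(x0, x1):
        d2 = (x - x0) / width            # ramps 0 -> 1 across the overlap
        d1 = 1.0 - d2                    # d1 + d2 == 1
        a = f1[:, x].astype(np.float64)
        b = f2[:, x].astype(np.float64)
        fused = d1 * a + d2 * b
        # Where the two sources disagree by >= door, keep the dominant source
        # instead of averaging, to avoid ghosting and gray-value jumps.
        jump = np.abs(a - b) >= door
        fused[jump] = np.where(d1 >= d2, a, b)[jump]
        out[:, x] = fused
    out[:, x1:] = f2[:, x1:]             # right of the overlap comes from f2
    # Seam post-processing: 3x3 linear (mean) filter on a band around x1.
    band = slice(max(x1 - 3, 0), x1 + 4) # seven columns centered on the seam
    out[:, band] = cv2.blur(out[:, band], (3, 3))
    return np.clip(out, 0, 255).astype(np.uint8)
```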
  • Embodiment 3. This embodiment provides a method for implementing image processing on a terminal. As shown in FIG. 4, the method includes the following steps:
  • Step S401: acquiring two images with an overlapping region.
  • Step S402: pre-processing the two acquired images to ensure the accuracy of the subsequent image blending.
  • The pre-processing includes one or more of the following operations:
  • first, verifying the acquired images; second, converting the two images into the same coordinate system to facilitate the subsequent blending; third, smoothing and filtering the images to provide precision support for the subsequent blending; and fourth, performing initial positioning to obtain an approximate overlapping region, which is then used as the extraction region for feature points.
  • This pre-processing narrows the matching range and improves the image processing speed.
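The pre-processing operations can be sketched as follows. Normalized template matching is used here as one plausible realization of the 'initial positioning' of the rough overlap; the patent does not tie that step to a specific method, and the strip width is an illustrative choice.

```python
import cv2

def preprocess(img1, img2):
    """Grayscale conversion, smoothing, and rough overlap localization."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)   # common intensity space
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    g1 = cv2.GaussianBlur(g1, (3, 3), 0)          # smoothing / noise filtering
    g2 = cv2.GaussianBlur(g2, (3, 3), 0)
    # Initial positioning: slide the leading strip of img2 over img1 to find
    # the approximate overlap, which then bounds feature extraction.
    strip = g2[:, : g2.shape[1] // 4]
    res = cv2.matchTemplate(g1, strip, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, _) = cv2.minMaxLoc(res)          # best-match column in img1
    return g1, g2, x                              # x: rough start of overlap
```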
  • Step S403: extracting feature points of the two images, extracting matching feature pairs of the two images from the feature points, and registering the two images with the matching feature pairs as alignment points.
  • Step S404: synthesizing the two registered images.
  • Step S405: acquiring a composite image and another image that has an overlapping region with the composite image, performing steps S403 and S404 to synthesize them again, and repeating the image acquisition and image matching process to obtain a 3D image with depth of field.
  • The other image having a certain overlapping region with the composite image may be either a non-composite image or a composite image.
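Step S405 then reduces to folding a pairwise stitch over an image sequence; stitch_pair below is a hypothetical stand-in for the registration-plus-synthesis routine of steps S403 and S404, passed in by the caller.

```python
def compose_sequence(images, stitch_pair):
    """Repeatedly fuse the running composite with the next overlapping image.

    `images` is an ordered sequence in which each frame overlaps the current
    composite; `stitch_pair(a, b)` registers and synthesizes two images
    (steps S403-S404) and is assumed to be supplied by the caller.
    """
    composite = images[0]
    for nxt in images[1:]:
        composite = stitch_pair(composite, nxt)   # re-synthesis (step S405)
    return composite
```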
  • A specific application of the method of this embodiment is shown below.
  • Its basic processing framework is shown in FIG. 5, and the implementation process includes the following:
  • the user enables the image processing function through the terminal interface, and two sets of pictures are captured and pre-processed;
  • the two processed sets of pictures enter the picture blending process to generate a composite picture A;
  • the two sets of cameras continue photographing, composite pictures B, C, D, … are synthesized in the same way, and the synthesized pictures are stored locally;
  • the user can directly preview the generated images.
  • By capturing a series of images at different angles but with overlapping regions and spatially overlapping them, a wide-viewing-angle scene containing the information of the whole image sequence is formed.
  • Embodiment 4. This embodiment of the present invention provides a terminal, shown in FIG. 6, that includes:
  • an image acquisition module 610 configured to acquire two images with an overlapping region; and
  • an image matching module 620 configured to register the two images according to the overlapping region and to synthesize the two registered images.
  • Specifically, the image matching module 620 extracts feature points of the two images, extracts matching feature pairs of the two images from the feature points, and registers the two images with the matching feature pairs as alignment points.
  • The feature points used by the image matching module 620 may be any geometric or grayscale features, extracted according to image properties, that are suitable for image blending; this embodiment preferably uses corner points as the feature points to be extracted.
  • The image matching module 620 can perform corner extraction with the following corner detection algorithms: Moravec operator corner detection, Forstner operator corner detection, the SUSAN detection algorithm, and the Plessy corner detection algorithm.
  • The Plessy corner detection algorithm performs excellently in terms of consistency and validity, and the corners it extracts have proven to be rotation- and translation-invariant and stable.
  • In this embodiment, the improved Plessy corner detection algorithm is therefore preferably used for corner extraction.
  • The image matching module 620 includes a calculation submodule 621, a setting submodule 622, a screening submodule 623, and an extraction submodule 624, where:
  • the calculation submodule 621 is configured to convolve each image with a 3 × 3 convolution kernel, obtain the partial derivatives at each pixel of the image, and use the partial derivatives to compute, for each pixel, the symmetric matrix M of the Plessy corner detection algorithm;
  • the setting submodule 622 is configured to set the selection window and the feature-point evaluation function R = Det(M) / (Trace(M) + ε), where Det(M) = λ1·λ2 and Trace(M) = λ1 + λ2, λ1 and λ2 are the eigenvalues of M, and ε is a small value that keeps the denominator non-zero;
  • the screening submodule 623 is configured to select a detection area on the image according to the selection window, screen out the pixel with the largest R value in the detection area, and move the selection window until the whole image has been screened; and
  • the extraction submodule 624 is configured to set a feature point decision threshold and take, among the screened pixels, those whose R value is greater than the decision threshold as the extracted feature points.
  • Optionally, before the feature point extraction by submodules 621 to 624, the boundary feature points in the image are deleted using a preset boundary template;
  • and/or, after the feature points are extracted, the sub-pixel feature points among them are extracted, and the extracted sub-pixel feature points are used as the final extracted feature points.
  • Methods by which the image matching module 620 may extract matching feature pairs include, but are not limited to: the Hausdorff distance method, the relaxation labeling method, the deterministic annealing algorithm, and the iterative closest point (ICP) algorithm.
  • In this embodiment, the matching feature pairs are preferably extracted with the bidirectional greatest correlation coefficient method followed by the random sampling method.
  • Accordingly, the image matching module 620 includes a coarse matching submodule 625 and an exact matching submodule 626, where:
  • the coarse matching submodule 625 is configured to coarsely match the feature points of the two images using the bidirectional greatest correlation coefficient (BGCC) algorithm;
  • optionally, before the coarse matching submodule 625 coarsely matches the feature points of the two images, a median filter is used to smooth the two images, and the result of subtracting the filtered image from the original image is used as the operand of the coarse matching; and
  • the exact matching submodule 626 is configured to accurately match the coarsely matched feature pairs using the random sample consensus (RANSAC) algorithm to obtain accurately extracted matching feature pairs.
  • The image matching module 620, preferably using the improved progressive fade-in/fade-out synthesis method, sets the gray value f(x, y) of each pixel of the two registered images to realize image synthesis; in addition,
  • the 7 × 7 region around the seam is used as the seam processing area, and the pixels in that area are linearly filtered with a 3 × 3 template, which gives the best effect.
  • Embodiment 5. This embodiment provides a terminal, shown in FIG. 7.
  • It includes all the functional modules of Embodiment 4 and is an extension of the solution of Embodiment 4.
  • The terminal includes:
  • an image acquisition module 710 configured to acquire two images having an overlapping region;
  • an image pre-processing module 720 configured to process the two images acquired by the image acquisition module 710 according to a set of pre-processing operations, the pre-processing operations including one or more of the following: verifying the acquired images, converting the two images into the same coordinate system, smoothing and filtering the two images, and performing initial positioning to obtain a rough overlapping region that is then used as the extraction region for feature points;
  • an image matching module 730 configured to register the two images according to the overlapping region and to synthesize the two registered images; specifically, the image matching module 730 extracts feature points of the two images, extracts matching feature pairs of the two images from the feature points, and registers the two images with the matching feature pairs as alignment points; and
  • a 3D image generation module 740 configured to acquire a composite image and another image that has an overlapping region with the composite image, trigger the image matching module 730 to synthesize them again, and repeat the image acquisition and image matching process to obtain a 3D image with depth of field.
  • The other image having a certain overlapping region with the composite image may be either a non-composite image or a composite image.
  • With the terminal of the embodiments of the present invention, two sets of images taken at different angles but sharing an overlapping region are captured, feature point parameters are extracted directly from each image, the degree of matching between the images is determined from the feature points, erroneous matching pairs are rejected, and the registered images are blended and synthesized, obtaining a wide-field, high-resolution image that greatly improves the user experience.
  • The terminal also spatially overlaps a series of images captured at different angles but with overlapping regions, forming a complete, high-definition new image with a 3D effect: a wide-viewing-angle scene containing the information of the whole image sequence. This not only meets the requirements of a wide field of view and high resolution but also better satisfies users' needs.
  • It is intended that the present invention cover the modifications and variations of the invention provided they come within the spirit and scope of the invention and its appended claims.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A terminal and an image processing method therefor. The method comprises: an image acquisition step: obtaining two images with an overlapping region; and an image combination step: registering the two images according to the overlapping region and synthesizing the two registered images. In the method of the present invention, two groups of images shot at different angles and having an overlapping region are captured, a feature point parameter is extracted directly from each image, the degree of matching between the images is determined from the feature points, erroneous matching pairs are removed, and combination processing is performed on the registered images.

Description

Terminal and image processing method therefor

Technical Field

The present invention relates to the field of image processing technologies, and in particular to a terminal and a method for implementing image processing.
Background

With the rapid development of mobile device technology, the functions of mobile devices are becoming more and more powerful, and mobile devices have become an indispensable part of people's lives. As these functions grow, people's demands for auditory and visual enjoyment keep rising, yet current mobile terminal cameras can only capture a single picture. When an ordinary mobile device camera is used in daily life to capture a scene image of the desired field of view, even on high-end devices the user must adjust the camera's focal length and zoom the lens to take in the complete scene, and the resolution of the photo obtained this way is relatively low: the camera's resolution is fixed, so the larger the captured scene, the lower the effective resolution (at the same resolution, a picture of a large scene is blurrier, while a picture of a small scene is clearer). To obtain a high-resolution scene photo, the camera lens has to be zoomed to reduce the captured field of view, but then a complete scene photo cannot be obtained, so a compromise between scene size and resolution is required. It can be seen that the way current terminals capture photos still has certain functional defects and cannot meet users' needs.

Summary of the Invention
Embodiments of the present invention provide a terminal and a method for implementing image processing on it, so that images captured by the terminal can meet the dual requirements of field of view and resolution.
According to one aspect of the embodiments of the present invention, a method for a terminal to implement image processing is provided, including:

an image acquisition step: acquiring two images with an overlapping region; and

an image matching step: registering the two images according to the overlapping region, and synthesizing the two registered images. Optionally, in the method of the embodiment of the present invention, registering the two images according to the overlapping region includes: extracting feature points of the two images, extracting matching feature pairs of the two images from the feature points, and registering the two images with the matching feature pairs as alignment points.
Optionally, in the method of the embodiment of the present invention, the feature points include corner points of the image.
Optionally, in the method of the embodiment of the present invention, extracting the feature points of the two images includes:

for each image, convolving the image with a 3 × 3 convolution kernel to obtain the partial derivatives at each pixel, and using the partial derivatives to compute, for each pixel, the symmetric matrix M of the Plessy corner detection algorithm;

setting a selection window and the feature-point evaluation function R = Det(M) / (Trace(M) + ε), where Det(M) = λ1·λ2 and Trace(M) = λ1 + λ2, λ1 and λ2 are the eigenvalues of M, and ε is a small value that keeps the denominator non-zero; selecting a detection area on the image according to the selection window, screening out the pixel with the largest R value in the detection area, and moving the selection window until the whole image has been screened; and

setting a feature point decision threshold, and taking, among the screened pixels, those whose R value is greater than the decision threshold as the extracted feature points.
Optionally, the method of the embodiment of the present invention further includes:

before feature point extraction, deleting the boundary feature points in the image using a preset boundary template;

and/or, after extracting the feature points in the image, extracting the sub-pixel feature points among them and taking the extracted sub-pixel feature points as the final extracted feature points.
Optionally, in the method of the embodiment of the present invention, extracting the matching feature pairs of the two images from the feature points includes:

coarsely matching the feature points of the two images using the bidirectional greatest correlation coefficient (BGCC) algorithm, and accurately matching the coarsely matched feature pairs using the random sample consensus (RANSAC) algorithm to obtain accurately extracted matching feature pairs.
Optionally, in the method of the embodiment of the present invention, before the feature points of the two images are coarsely matched, the method further includes:

smoothing the two images with a median filter, and using the difference between the original image and the filtered image as the operand of the coarse matching.
Optionally, in the method of the embodiment of the present invention, synthesizing the two registered images includes: setting the gray value f(x, y) of each pixel of the two registered images according to a progressive fade-in/fade-out synthesis method, where the setting rule is:

f(x, y) = f1(x, y), for (x, y) ∈ f1 outside the overlap;
f(x, y) = d1·f1(x, y) + d2·f2(x, y), for (x, y) ∈ (f1 ∩ f2) with |f1(x, y) − f2(x, y)| < door;
f(x, y) = f1(x, y), for (x, y) ∈ (f1 ∩ f2) with |f1(x, y) − f2(x, y)| ≥ door and d1 ≥ d2;
f(x, y) = f2(x, y), for (x, y) ∈ (f1 ∩ f2) with |f1(x, y) − f2(x, y)| ≥ door and d1 < d2;
f(x, y) = f2(x, y), for (x, y) ∈ f2 outside the overlap;

where f1(x, y) and f2(x, y) denote the gray values of the pixel in the two images, d1, d2 ∈ (0, 1) with d1 + d2 = 1 denote the gradual-transition factors of the two images, door is a preset decision threshold, and f1 and f2 denote the two images.

Optionally, in the method of the embodiment of the present invention, synthesizing the two registered images further includes:

taking the 7 × 7 area around the seam as the seam processing area, and linearly filtering the pixels in the seam processing area with a 3 × 3 template.
Optionally, in the method of the embodiment of the present invention, between the image acquisition step and the image matching step, the method further includes:

an image pre-processing step: processing the two images acquired in the image acquisition step according to a set of pre-processing operations, where the pre-processing operations include one or more of the following: verifying the acquired images, converting the two images into the same coordinate system, smoothing and filtering the two images, and performing initial positioning to obtain a rough overlapping region that is then used as the extraction region for feature points.
Optionally, the method of the embodiment of the present invention further includes:

a 3D image generation step: acquiring a composite image and another image having an overlapping region with the composite image, executing the image matching step to synthesize them again, and repeating the image acquisition and image matching process to obtain a 3D image with depth of field, where the other image having an overlapping region with the composite image may be either a non-composite image or a composite image.
According to another aspect of the embodiments of the present invention, a terminal is provided, including:

an image acquisition module configured to acquire two images with an overlapping region; and

an image matching module configured to register the two images according to the overlapping region and to synthesize the two registered images.
Optionally, in the terminal of the embodiment of the present invention, the image matching module is configured to extract feature points of the two images, extract matching feature pairs of the two images from the feature points, and register the two images with the matching feature pairs as alignment points.

Optionally, in the terminal of the embodiment of the present invention, in the image matching module, the feature points include corner points of the image.
Optionally, in the terminal of the embodiment of the present invention, the image matching module further includes:

a calculation submodule configured to convolve each image with a 3 × 3 convolution kernel, obtain the partial derivatives at each pixel of the image, and use the partial derivatives to compute, for each pixel, the symmetric matrix M of the Plessy corner detection algorithm;

a setting submodule configured to set the selection window and the feature-point evaluation function R = Det(M) / (Trace(M) + ε), where Det(M) = λ1·λ2 and Trace(M) = λ1 + λ2, λ1 and λ2 are the eigenvalues of M, and ε is a small value that keeps the denominator non-zero;

a screening submodule configured to select a detection area on the image according to the selection window, screen out the pixel with the largest R value in the detection area, and move the selection window until the whole image has been screened; and

an extraction submodule configured to set a feature point decision threshold and take, among the screened pixels, those whose R value is greater than the decision threshold as the extracted feature points.
Optionally, in the terminal of the embodiment of the present invention, the image matching module further includes:

a coarse matching submodule configured to coarsely match the feature points of the two images using the bidirectional greatest correlation coefficient (BGCC) algorithm; and

an exact matching submodule configured to accurately match the coarsely matched feature pairs using the random sample consensus (RANSAC) algorithm to obtain accurately extracted matching feature pairs.
Optionally, in the terminal of the embodiment of the present invention, the image matching module is further configured to set the gray value f(x, y) of each pixel of the two registered images according to the progressive fade-in/fade-out synthesis method, where the setting rule is:

f(x, y) = f1(x, y), for (x, y) ∈ f1 outside the overlap;
f(x, y) = d1·f1(x, y) + d2·f2(x, y), for (x, y) ∈ (f1 ∩ f2) with |f1(x, y) − f2(x, y)| < door;
f(x, y) = f1(x, y), for (x, y) ∈ (f1 ∩ f2) with |f1(x, y) − f2(x, y)| ≥ door and d1 ≥ d2;
f(x, y) = f2(x, y), for (x, y) ∈ (f1 ∩ f2) with |f1(x, y) − f2(x, y)| ≥ door and d1 < d2;
f(x, y) = f2(x, y), for (x, y) ∈ f2 outside the overlap;

where f1(x, y) and f2(x, y) denote the gray values of the pixel in the two images, d1, d2 ∈ (0, 1) with d1 + d2 = 1 denote the gradual-transition factors of the two images, door is a preset decision threshold, and f1 and f2 denote the two images.

Optionally, the terminal of the embodiment of the present invention further includes an image pre-processing module and/or a 3D image generation module, where:
the image pre-processing module is configured to process the two images acquired by the image acquisition module according to a set of pre-processing operations, the pre-processing operations including one or more of the following: verifying the acquired images, converting the two images into the same coordinate system, smoothing and filtering the two images, and performing initial positioning to obtain a rough overlapping region that is then used as the extraction region for feature points; and

the 3D image generation module is configured to acquire a composite image and another image having an overlapping region with the composite image, trigger the image matching module to synthesize them again, and repeat the image acquisition and image matching process to obtain a 3D image with depth of field, where the other image having an overlapping region with the composite image may be either a non-composite image or a composite image.
The beneficial effects of the embodiments of the present invention are as follows:

with the terminal and method of the embodiments of the present invention, two sets of images taken at different angles but sharing an overlapping region are captured, feature point parameters are extracted directly from each image, the degree of matching between the images is determined from the feature points, erroneous matching pairs are rejected, and the registered images are blended and synthesized, yielding a wide-field, high-resolution image that greatly improves the user experience;

furthermore, by capturing a series of images at different angles but with overlapping regions and spatially overlapping them, a complete, high-definition new image with a 3D effect is formed: a wide-viewing-angle scene containing the information of the whole image sequence, which not only meets the requirements of a wide field of view and high resolution but also better satisfies users' needs.

Brief Description of the Drawings
The accompanying drawings used in the description of the embodiments or the related art are briefly introduced below. Evidently, the drawings described below illustrate only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for implementing image processing on a terminal according to Embodiment 1 of the present invention;

FIG. 2 is a geometric diagram of actual matching points and estimated matching points in Embodiment 2 of the present invention;

FIG. 3 is the overall processing framework of image processing in Embodiment 2 of the present invention;

FIG. 4 is a flowchart of a method for implementing image processing on a terminal according to Embodiment 3 of the present invention;

FIG. 5 is a processing framework diagram of an application example in Embodiment 3 of the present invention;

FIG. 6 is a structural block diagram of a terminal according to Embodiment 4 of the present invention;

FIG. 7 is a structural block diagram of a terminal according to Embodiment 5 of the present invention.
Preferred Embodiments of the Present Invention

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To address the problem that a single picture captured by current terminal devices cannot simultaneously satisfy high resolution and a wide field of view, an embodiment of the present invention provides a terminal and a method for implementing image processing on it. The method spatially aligns and synthesizes two captured images sharing a certain overlapping region, obtaining a large-field-of-view scene image without reducing image resolution. The specific implementation process of the present invention is elaborated in detail below through several specific embodiments.

Embodiment 1
An embodiment of the present invention provides a method for a terminal to implement image processing. As shown in FIG. 1, the method includes:

Step S101: acquiring two images having an overlapping region.

In this step, the acquired images may be images stored in a terminal storage module (such as the terminal's internal memory and/or an external expansion memory) or images captured by the terminal in real time; this embodiment does not uniquely limit the way images are acquired.

When the terminal acquires images by real-time capture, this embodiment offers a preferred implementation: two sets of rotatable cameras are provided in the terminal. Since the two sets of cameras can shoot at varying angles, images at different angles with a certain overlapping region can be acquired simultaneously. This acquisition method provides strong support for speeding up image processing.
Step S102: registering the two images according to the overlapping region.
In this step, registering the two images according to the overlapping region includes: extracting feature points of the two images, extracting matching feature pairs of the two images from the feature points, and registering the two images with the matching feature pairs as alignment points.
The feature points may be any geometric or grayscale features extracted according to the properties of the images and suitable for image blending. The embodiment of the present invention preferably uses corner points as the feature points to be extracted.
Corner points are mainly extracted by a corner detection algorithm. Corner detection algorithms fall into two classes: edge-based and grayscale-based. The former depends heavily on edge extraction; if the detected edge is erroneous or the edge line is interrupted (which frequently happens in practice), the corner extraction result is strongly affected. Grayscale-based algorithms, by contrast, achieve detection by computing the local maxima where the gray level and gradient change sharply, without requiring edge extraction, and are therefore widely used in practice. The most representative corner detection algorithms include Moravec operator corner detection, Forstner operator corner detection, the Susan detection algorithm, and the Plessy corner detection algorithm. The Plessy corner detection algorithm offers excellent performance in both consistency and effectiveness, and the corners it extracts have proven to be rotation- and translation-invariant and highly stable.
The basic idea of the Plessy corner detection algorithm is to identify corner points using the gray-level rate of change of the image. The method computes the eigenvalues of a matrix M associated with the autocorrelation function of the image, i.e., the first-order curvatures of the autocorrelation function, to decide whether a point is a corner: if both curvature values are high, the point is regarded as a corner.
The Plessy corner detection algorithm defines the autocorrelation value $E(u,v)$ in an arbitrary direction as:

$$E(u,v) = \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = G(\sigma) \otimes \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \qquad (1)$$
Here $I_x$ and $I_y$ are the gradient values of the image in the x and y directions respectively, $\sigma$ is a parameter characterizing the width of the Gaussian filter $G$, and $\otimes$ denotes the convolution operation. M is a 2 × 2 symmetric matrix, so it necessarily has two eigenvalues $\lambda_1$ and $\lambda_2$. The eigenvalues reflect the character of the image pixel: if a pixel $(x, y)$ is a feature point, the two eigenvalues of the matrix M at that point are both positive, and they are local maxima within the region centered on $(x, y)$. A feature point can then be characterized by the evaluation function:
$$R = Det(M) - k \cdot Trace^2(M) \qquad (2)$$

where $Det(M) = \lambda_1 \lambda_2$ is the determinant of the matrix and $Trace(M) = \lambda_1 + \lambda_2$ is its trace (the sum of the diagonal elements). A reasonable threshold T is set: when the R actually computed by formula (2) is greater than the threshold, a corner has been found; otherwise it has not. Feature points are generally the pixels corresponding to maxima of the interest value within a local range. Therefore, after the R value of every point has been computed, non-maximum suppression is performed to extract all points of the original image whose local interest value is largest. Here k is an empirical value, generally k = 0.04 ~ 0.06.
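As a concrete illustration of formula (2), the following is a minimal NumPy/SciPy sketch of the Plessy response computation. It assumes simple 3 × 3 difference kernels for the derivatives (the embodiment's own templates appear later, in Embodiment 2) and a Gaussian window of width sigma; the function and parameter names are illustrative, not part of the patent.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def plessy_response(img, sigma=1.0, k=0.04):
    """Corner response R = Det(M) - k * Trace(M)^2 at every pixel (formula (2))."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)  # assumed template
    Ix = convolve(img, kx)        # first-order partial derivative in x
    Iy = convolve(img, kx.T)      # first-order partial derivative in y
    # Entries of the symmetric 2x2 matrix M, Gaussian-smoothed over the window.
    A = gaussian_filter(Ix * Ix, sigma)
    B = gaussian_filter(Iy * Iy, sigma)
    C = gaussian_filter(Ix * Iy, sigma)
    det = A * B - C * C           # Det(M)   = lambda1 * lambda2
    trace = A + B                 # Trace(M) = lambda1 + lambda2
    return det - k * trace ** 2
```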
In step S102, extracting the matching feature pairs of the two images includes the following.
The feature points extracted from the two images contain a considerable number of redundant points; if these redundant feature points are not removed, they cause errors in the matching parameters and may even cause matching to fail. Choosing a suitable point-matching criterion to find corresponding feature pairs is an important guarantee of the correctness and precision of image blending. Commonly used methods for extracting matching feature pairs include, but are not limited to, the Hausdorff distance method, relaxation labeling, deterministic annealing, and the iterative closest point (ICP) algorithm.
Step S103: synthesizing the two registered images.
Specifically, after the two images have been spatially registered, a suitable image synthesis strategy must be selected to complete the blending of the images. Image synthesis means combining the pixels of the source images to generate the pixels of the blended plane, achieving a natural transition between adjacent images. The selected synthesis strategy should minimize the influence of residual distortion and of brightness differences between the images on the merged result, so as to obtain a more accurate, more comprehensive, and more reliable image description of the same scene. Based on these selection criteria, the image synthesis strategy selected in the embodiment of the present invention may be, but is not limited to, the fade-in/fade-out synthesis method.
In summary, the method of this embodiment captures two groups of images taken at different angles but sharing an overlapping region, extracts feature point parameters directly for each image, determines the degree of matching between the images from the feature points, eliminates erroneous matching pairs, and performs blending and synthesis on the registered images, thereby obtaining a wide-field, high-resolution image and greatly improving the user experience.

Embodiment 2

The embodiment of the present invention provides a method for implementing image processing by a terminal. Under the main architecture described in Embodiment 1, this embodiment proposes several improvements that further increase the speed and precision of image processing. Continuing with FIG. 1, the method includes the following steps:
Step S101: acquiring two images having an overlapping region.
This step is implemented in the same way as in Embodiment 1; the details are not repeated here.
Step S102: registering the two images according to the overlapping region.
In this step, registering the two images according to the overlapping region includes: extracting feature points of the two images, extracting matching feature pairs of the two images from the feature points, and registering the two images with the matching feature pairs as alignment points.
In this step, feature points are extracted with an improved Plessy corner detection algorithm, as follows.
To address several shortcomings of the original Plessy corner detection, namely its single fixed threshold, low localization accuracy, and poor real-time performance, this embodiment proposes several improvements so that as many accurately localized feature points as possible are extracted from the image while the speed of corner extraction is increased. Following the implementation of the Plessy corner detection algorithm described in Embodiment 1, the improved Plessy corner detection algorithm is implemented as follows:
1) For every point of the image, compute its first-order partial derivatives Ix and Iy in the horizontal and vertical directions as well as their product IxIy, and use the resulting partial derivative information to compute the symmetric matrix M according to formula (1).
Partial derivatives are not easy to obtain directly in image processing, so this embodiment gives a preferred way of computing them: convolving a 3 × 3 convolution kernel with the original image yields the first-order partial derivatives $I_x$ and $I_y$ at every point of the original image. The 3 × 3 convolution kernel may be, but is not limited to, a pair of templates $o_x$ and $o_y$ for the x and y directions (the template values are shown in the figure of the original application). The first-order partial derivatives are then obtained as

$$I_x = \frac{\partial I}{\partial x} = I \otimes o_x, \qquad I_y = \frac{\partial I}{\partial y} = I \otimes o_y.$$
2) In the original Plessy corner detection algorithm, the value k in the feature point evaluation function R is an empirical constant; its use is rather arbitrary, which reduces the reliability of corner extraction and, under varying image conditions, easily harms the accuracy of corner extraction. Note that R is essentially a corner detection signal: a large determinant with a small trace indicates a corner signal, whereas the opposite indicates an edge signal. In the improved algorithm, the feature point evaluation function is therefore computed by the following ratio method:
$$R = \frac{Det(M)}{Trace(M) + \varepsilon} \qquad (3)$$

Because the trace of the matrix may occasionally be zero, a very small number $\varepsilon$ is added to the denominator. Compared with the evaluation function proposed in the original Plessy corner detection algorithm, this form avoids selecting the parameter k and removes the randomness of that choice; it is practical, reliable, and accurate. Here $\varepsilon$ is an arbitrarily small number greater than zero. Since it only needs to express "arbitrarily close", $\varepsilon$ may take any positive value, but to emphasize this closeness one usually takes $0 < \varepsilon < 1$.
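The ratio form of formula (3) drops into the previous sketch with a one-line change; the following hypothetical helper consumes the det and trace arrays computed there.

```python
def improved_response(det, trace, eps=1e-6):
    # Formula (3): R = Det(M) / (Trace(M) + eps). The small eps (0 < eps < 1)
    # keeps the denominator nonzero when Trace(M) is zero, and the ratio form
    # removes the empirical constant k of formula (2) entirely.
    return det / (trace + eps)
```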
3) For selecting local extremum points, the customary practice is to choose an appropriate threshold and take the pixels whose interest value exceeds that threshold as feature points, filtering out pixels whose interest value is below it. Although simple to implement, a single threshold may cause some feature points of a non-homogeneous image to be filtered out as well. To overcome this defect, the improved Plessy corner detection algorithm screens feature points by suppressing non-maxima within a window of the image, combined with a threshold setting. The principle is: select an appropriate window in the image, keep the pixel with the largest R in the window, delete the remaining pixels in the window, and move the window until the pixels of the whole image have been screened. Since the number of local extremum points is often large, a reasonable threshold is set according to requirements, and the largest of the screened pixels are taken as the final feature point extraction result. Preferably, to speed up extraction, a preset boundary template is used to exclude boundary corner points, which contribute little to matching.
Expressed as steps, the above screening process is:
3.1) Set a selection window; select a detection area on the image according to the selection window; screen out the pixel with the largest R value within that detection area; and move the selection window until the whole image has been screened.
3.2) Set a feature point decision threshold, and take those screened pixels whose R value is greater than the decision threshold as the extracted feature points.
The sizes of the selection window and of the decision threshold can be set flexibly according to actual requirements. In general, a smaller selection window screens out more pixels, and vice versa. For the decision threshold, the larger it is set, the fewer feature points are finally extracted; conversely, more feature points are extracted. During development, the embodiment of the present invention used a decision threshold of 2200 and a 7 × 7 non-maximum suppression window, but the selection window and decision threshold can be set flexibly as needed; this embodiment places no exclusive restriction on their sizes.
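A compact sketch of steps 3.1) and 3.2), using the embodiment's example values (7 × 7 window, threshold 2200); the maximum_filter shortcut is an assumed equivalent of sliding the selection window across the whole image.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def select_corners(R, window=7, threshold=2200.0):
    """Keep only pixels that are the largest R in their window (step 3.1)
    and also exceed the decision threshold (step 3.2)."""
    is_local_max = (R == maximum_filter(R, size=window))
    ys, xs = np.nonzero(is_local_max & (R > threshold))
    return list(zip(xs, ys))   # (x, y) coordinates of extracted corners
```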
Preferably, after the feature points have been extracted, a sub-pixel feature point (corner) localization step can further refine the extracted feature points. The localization is implemented by approximating the feature point evaluation function R with the quadratic polynomial $ax^2 + bxy + cy^2 + dx + ey + f = R(x, y)$ to obtain the sub-pixel-accurate position of the corner. Concretely, the ordinary pixels around an already detected corner are used to build an overdetermined system of equations in the six unknowns a ~ f, which is solved by the least-squares method. The sub-pixel corner corresponds to the maximum point of the quadratic polynomial, and the pixel corresponding to that maximum point is the precisely extracted feature point. In other words, when the pixel corresponding to the maximum point is the corner used in the computation, that corner is the precisely extracted feature point; otherwise, the corner used in the computation is deleted, and the pixel corresponding to the maximum point is taken as the precisely extracted corner (feature point).
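A least-squares sketch of this sub-pixel refinement: nine response samples around a detected corner overdetermine the six polynomial coefficients, and the stationary point of the fitted quadratic gives the sub-pixel position. The neighborhood radius and function name are illustrative assumptions.

```python
import numpy as np

def subpixel_refine(R, x, y, r=1):
    """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to R around (x, y) by least
    squares and return the extremum of the quadratic surface."""
    X, Y, V = [], [], []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            X.append(x + dx); Y.append(y + dy); V.append(R[y + dy, x + dx])
    X, Y = np.array(X, float), np.array(Y, float)
    A = np.stack([X * X, X * Y, Y * Y, X, Y, np.ones_like(X)], axis=1)
    a, b, c, d, e, _ = np.linalg.lstsq(A, np.array(V, float), rcond=None)[0]
    # Stationary point of the quadratic: solve [2a b; b 2c] [x; y] = [-d; -e]
    return tuple(np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]),
                                 np.array([-d, -e])))
```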
In step S102, the matching feature pairs are preferably extracted as follows.
The matching algorithm proposed in this embodiment has two stages: coarse matching with the bidirectional greatest correlative coefficient (BGCC), followed by purification with the random sample consensus (RANSAC) method to achieve precise matching of the images. This method removes redundant feature points while accurately extracting the correct pairs of matching feature points.
Coarse matching uses the bidirectional greatest correlative coefficient (BGCC) method to build a similarity measure NCC; a match is accepted only when each of the two corners has the largest similarity measure with respect to the other. Specifically, the correlation coefficient is defined as:

$$C_{ij} = \frac{\displaystyle\sum_{s=-n}^{n}\sum_{t=-n}^{n}\big[I_1(u_i+s,\,v_i+t)-\bar I_1(u_i,v_i)\big]\big[I_2(u_j+s,\,v_j+t)-\bar I_2(u_j,v_j)\big]}{(2n+1)(2n+1)\,\sigma_1(u_i,v_i)\,\sigma_2(u_j,v_j)} \qquad (4)$$
Here $I_1$ and $I_2$ are the gray levels of the two images; $n \times n$ is the size of the window selected in one image; $l \times h$ is the size of the search region selected in the other image. Let the corners in the first image be $(u_i, v_i)$, i = 1...m, and the corners in the second image be $(u_j, v_j)$, j = 1...n; then $(u_i, v_i)$ and $(u_j, v_j)$ are the i-th and j-th feature points to be matched in the two images. $\bar I(u, v)$ is the average gray value of the corner window area:

$$\bar I(u,v) = \frac{1}{(2n+1)^2}\sum_{s=-n}^{n}\sum_{t=-n}^{n} I(u+s,\,v+t) \qquad (5)$$

and the standard deviation of the window area is

$$\sigma(u,v) = \sqrt{\frac{1}{(2n+1)^2}\sum_{s=-n}^{n}\sum_{t=-n}^{n}\big[I(u+s,\,v+t)-\bar I(u,v)\big]^2} \qquad (6)$$
Coarse matching of the corners with the bidirectional greatest correlation coefficient algorithm proceeds as follows:
1) Centered on an arbitrary corner in image $I_1$, select an $n \times n$ correlation window. In $I_2$, centered on the pixel with the same coordinates as that corner of $I_1$, select a rectangular search region of size $l \times h$. Then compute the correlation coefficient $C_{ij}$ between the corner in $I_1$ and every corner inside the search window region of $I_2$, and take the corner with the largest correlation coefficient as the matching point of the given corner in $I_1$. This yields one set of matching points.
2) Likewise, for an arbitrary corner in image $I_2$, search the corresponding window region in image $I_1$ for the corner with the largest correlation coefficient and take it as the matching point of the given corner. This yields another set of matching points.
3) Finally, search the two resulting sets of matching points for identical matched corner pairs; such corner pairs are considered to match each other. This completes the initial matching of the corners.
In practice, to compensate for illumination differences between the two images, each image is smoothed with a median filter (e.g., a 7 × 7 median filter), and the result of subtracting the filtered image from the original is used as the object of the operation.
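A sketch of the bidirectional matching of steps 1) to 3), under simplifying assumptions: corners are (x, y) tuples far enough from the border for the window slice, and the search runs over all candidate corners rather than only an l × h region; the helper names are illustrative.

```python
import numpy as np

def ncc(img1, img2, p, q, n=4):
    """Correlation coefficient of formula (4) between (2n+1)^2 windows."""
    (x1, y1), (x2, y2) = p, q
    w1 = img1[y1 - n:y1 + n + 1, x1 - n:x1 + n + 1].astype(float)
    w2 = img2[y2 - n:y2 + n + 1, x2 - n:x2 + n + 1].astype(float)
    w1 -= w1.mean(); w2 -= w2.mean()
    denom = (2 * n + 1) ** 2 * w1.std() * w2.std()
    return (w1 * w2).sum() / denom if denom else 0.0

def bgcc_match(img1, img2, corners1, corners2):
    """Keep a pair only when each corner is the other's best match."""
    best12 = {p: max(corners2, key=lambda q: ncc(img1, img2, p, q)) for p in corners1}
    best21 = {q: max(corners1, key=lambda p: ncc(img1, img2, p, q)) for q in corners2}
    return [(p, q) for p, q in best12.items() if best21.get(q) == p]
```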
However, matching with BGCC alone produces erroneous matching pairs, and the proportion of mismatches can sometimes be very high, seriously disturbing the estimation of the transformation matrix and causing the image blending to fail. The feature point pairs must therefore be corrected to remove the erroneous matching pairs. In this embodiment, the random sample consensus (RANSAC) method is used for precise matching.
The basic idea of RANSAC is as follows. First, an objective function is designed for the specific problem; then minimal point sets are repeatedly sampled to estimate initial values of the parameters of that function. Using these initial parameter values, all the data are divided into "inliers" (points that satisfy the estimated parameters) and "outliers" (points that do not), and finally all the inliers are used in turn to recompute and re-estimate the parameters of the function. Concretely, a so-called minimal point set is sampled from the input data, the parameters to be determined are estimated from each sampled minimal point set, and a decision criterion is used to judge which of the input data are consistent with that parameter set (the inliers) and which are not (the outliers). After a certain number of iterations, the estimated parameter values with the highest proportion of inliers among the input data are taken as the final parameter estimate.
Applied to this embodiment, the RANSAC algorithm is implemented as follows:
(1) Randomly select n pairs of matching points (the selected points should be such that no three points of the sample are collinear) and linearly compute the projective transformation matrix H, where n is greater than or equal to 4.
(2) For each matching point, compute the distance between its corresponding matching point and its position after transformation by the projective transformation matrix H. (3) Determine the inliers according to the principle that an inlier's distance is smaller than the set distance threshold t, select the point set containing the most inliers, and re-estimate the projective transformation matrix H on this inlier domain.
(4) Randomly select n pairs of matching points again and return to step (2). After N such repetitions, a fairly precise projective transformation matrix H is obtained; the matching points obtained by coarse matching are projectively transformed according to this matrix H, and the resulting inliers are the precisely extracted matching feature pairs.
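The iteration of steps (1) to (4) can be sketched as below; estimate_H and symmetric_dist are hypothetical helpers sketched after the equations that follow, and the iteration count and threshold are placeholder values.

```python
import random

def ransac_homography(pairs, n=4, t=3.0, iterations=500):
    """Keep the H with the most inliers, then refit H on those inliers."""
    best_inliers, best_H = [], None
    for _ in range(iterations):
        sample = random.sample(pairs, n)          # n >= 4, ideally non-collinear
        H = estimate_H(sample)                    # step (1): linear estimate
        inliers = [pq for pq in pairs             # steps (2)-(3): threshold t
                   if symmetric_dist(H, *pq) < t]
        if len(inliers) > len(best_inliers):
            best_inliers, best_H = inliers, estimate_H(inliers)
    return best_H, best_inliers
```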
Estimating the projective transformation matrix H requires at least 8 equations, i.e., n (≥ 4) pairs of corresponding features must be selected in the two adjacent images; the feature pairs can be obtained through the corner matching process above. The projective transformation between $I_1$ and $I_2$ is (in homogeneous coordinates):

$$X_i' = H X_i = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} X_i \qquad (7)$$

This can be expressed by the cross-product equation $X_i' \times H X_i = 0$, where $X_i' = (x_i', y_i', w_i')^T$. Letting $h^{jT}$ denote the j-th row of H, the cross-product equation can be written as $Ah = 0$, i.e.

$$\begin{bmatrix} 0^T & -w_i' X_i^T & y_i' X_i^T \\ w_i' X_i^T & 0^T & -x_i' X_i^T \\ -y_i' X_i^T & x_i' X_i^T & 0^T \end{bmatrix} \begin{bmatrix} h^1 \\ h^2 \\ h^3 \end{bmatrix} = 0 \qquad (8)$$

In practice, A is decomposed by SVD, and the solution h is the column of V corresponding to the smallest singular value, from which the matrix H is obtained.
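A sketch of the linear estimate via formula (8) and SVD, assuming the usual normalization $w_i' = 1$ so that two of the three cross-product rows per pair are stacked into A:

```python
import numpy as np

def estimate_H(pairs):
    """Direct linear transform: h is the right singular vector of A with the
    smallest singular value; reshape gives the 3x3 matrix H of formula (7)."""
    A = []
    for (x, y), (xp, yp) in pairs:            # (x, y) in I1, (xp, yp) in I2
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)
```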
Computing the inliers according to the principle that an inlier's distance is smaller than the set distance threshold t is done as follows. As shown in FIG. 2, let $\hat p_i$ and $\hat q_i$ be the corresponding points of $p_i$ and $q_i$ estimated in their respective images; then the geometric distance between the actual matching point of a point in an image and its estimated matching point is defined as:
$$d(p_i, \hat p_i) = d(p_i, H^{-1} q_i) = \lVert p_i - H^{-1} q_i \rVert, \qquad d(q_i, \hat q_i) = d(q_i, H p_i) = \lVert q_i - H p_i \rVert \qquad (9)$$

where $\lVert \cdot \rVert$ denotes the Euclidean distance. Taking symmetry into account, the geometric distance decision criterion function is defined as:

$$dis = d(p_i, \hat p_i)^2 + d(q_i, \hat q_i)^2 = \lVert p_i - H^{-1} q_i \rVert^2 + \lVert q_i - H p_i \rVert^2, \qquad i = 1, 2, \ldots, n \qquad (10)$$

If the computed dis is greater than the set distance threshold, the corresponding matching point is regarded as an outlier; if the computed dis is smaller than the set distance threshold, the corresponding matching point is regarded as an inlier, and only inliers are suitable for computing the transformation matrix H.
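The decision criterion of formula (10), as a small helper matching the RANSAC sketch above (names again illustrative):

```python
import numpy as np

def symmetric_dist(H, p, q):
    """dis of formula (10): ||p - H^{-1} q||^2 + ||q - H p||^2."""
    def project(M, pt):
        v = M @ np.array([pt[0], pt[1], 1.0])
        return v[:2] / v[2]                    # back from homogeneous coordinates
    p, q = np.asarray(p, float), np.asarray(q, float)
    return (np.sum((p - project(np.linalg.inv(H), q)) ** 2)
            + np.sum((q - project(H, p)) ** 2))
```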
Step S103: synthesizing the two registered images.
In this step, to keep the blended region smooth and guarantee image quality, an improved fade-in/fade-out synthesis method is used for image synthesis, as follows.
The original fade-in/fade-out synthesis method obtains the gray value f(x, y) of a pixel in the overlapping region of the images as the weighted average of the gray values $f_1(x, y)$ and $f_2(x, y)$ of the corresponding pixels in the two images:
$$f(x, y) = d_1 f_1(x, y) + d_2 f_2(x, y)$$

where $d_1$ and $d_2$ are fade factors whose values are restricted to the range (0, 1) and which satisfy $d_1 + d_2 = 1$. In the overlapping region, moving in the direction from the first image to the second, $d_1$ fades from 1 to 0 and $d_2$ fades from 0 to 1, so that $f_1(x, y)$ transitions slowly and smoothly into $f_2(x, y)$. When this algorithm is used, however, it is found that although the processed image no longer shows a boundary, ghosting and blurring still appear in the overlapping region: where individual corresponding pixels in the overlapping portions of the two images differ greatly in gray value, the gray values of the synthesized image jump at those pixels. To avoid this situation, a threshold door is introduced. For f(x, y), the weighted average of $f_1(x, y)$ and $f_2(x, y)$ is not taken directly; instead, the difference between the gray values of the corresponding pixels of the two images before smoothing is computed first. If this difference is smaller than the threshold, the weighted average is taken as the gray value of the point; otherwise, the gray value before smoothing is taken as the gray value of the point.
The image pixel f(x, y) synthesized by the corrected algorithm can be expressed as:

$$f(x,y) = \begin{cases} f_1(x,y) & (x,y) \in f_1 \\ d_1 f_1(x,y) + d_2 f_2(x,y) & (x,y) \in (f_1 \cap f_2),\ |f_1(x,y) - f_2(x,y)| < door \\ f_1(x,y) & (x,y) \in (f_1 \cap f_2),\ |f_1(x,y) - f_2(x,y)| \ge door,\ d_1 \ge d_2 \\ f_2(x,y) & (x,y) \in (f_1 \cap f_2),\ |f_1(x,y) - f_2(x,y)| \ge door,\ d_1 < d_2 \\ f_2(x,y) & (x,y) \in f_2 \end{cases} \qquad (11)$$
Here $f_1(x, y)$ and $f_2(x, y)$ denote the gray values of the pixels in the two images, $d_1, d_2 \in (0, 1)$ with $d_1 + d_2 = 1$ denote the fade factors of the two images, and $f_1$, $f_2$ denote the two images. door is a preset decision threshold. As formula (11) shows, this decision threshold determines which gray definition is applied to the pixels of the overlapping region. If door is set too large, the value of $|f_1 - f_2|$ may be smaller than door at every pixel, making the final gray setting inaccurate; if door is set too small, the value of $|f_1 - f_2|$ is larger than door at every pixel, which also makes the final gray setting inaccurate. Therefore, when setting door it is advisable to compare the gray values of some overlapping regions in advance and find an empirical value of the gray difference; with this empirical value as a reference, door can be set adjustably. The embodiment of the present invention thus only proposes the concept of door and places no exclusive restriction on its specific value.
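A sketch of formula (11) over a rectangular overlap, with linear ramps for the fade factors and a placeholder value for door; the "pre-smoothing" branch takes the dominant image's gray value, matching the piecewise definition above.

```python
import numpy as np

def blend_overlap(f1, f2, door=30.0):
    """Modified fade-in/fade-out over the overlapping region (formula (11))."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    d1 = np.linspace(1.0, 0.0, f1.shape[1])[None, :]  # fades 1 -> 0 left to right
    d2 = 1.0 - d1                                     # fades 0 -> 1, d1 + d2 = 1
    averaged = d1 * f1 + d2 * f2
    dominant = np.where(d1 >= d2, f1, f2)             # pre-smoothing gray value
    return np.where(np.abs(f1 - f2) < door, averaged, dominant)
```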
Furthermore, during image synthesis, if the selected seam overlap region is too large, problems such as image blur and weakened edge information appear; if the selected seam overlap region is too small, the seam of the image cannot be eliminated. Therefore, in this embodiment, for the processed image, the 7 × 7 region around the seam is taken as the seam processing region, and the pixels inside the seam region are linearly filtered with a 3 × 3 template, which gives the best result.
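A sketch of the seam treatment for a vertical seam, assuming a mean kernel for the 3 × 3 template (the embodiment does not fix the weights) and a seam far enough from the image border:

```python
import numpy as np
from scipy.ndimage import correlate

def smooth_seam(img, seam_x, template=None):
    """Linearly filter the 7-pixel-wide band around column seam_x with a 3x3 template."""
    if template is None:
        template = np.full((3, 3), 1.0 / 9.0)        # assumed mean kernel
    out = np.asarray(img, float).copy()
    band = out[:, seam_x - 3:seam_x + 4]             # the 7-wide seam region
    out[:, seam_x - 3:seam_x + 4] = correlate(band, template)
    return out
```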
The overall processing flowchart of this embodiment is shown in FIG. 3. In summary, on the basis of the implementation architecture of Embodiment 1, this embodiment improves the methods of feature point extraction, matching feature pair extraction, and image synthesis, further increasing the speed and precision of image processing.

Embodiment 3

This embodiment provides a method for implementing image processing by a terminal. As shown in FIG. 4, the method includes the following steps:
Step S401: acquiring two images having an overlapping region.
This step is implemented in the same way as in Embodiment 1; the details are not repeated here.
Step S402: preprocessing the two acquired images to guarantee the precision of the subsequent image blending. The preprocessing includes one or more of the following modes:
Mode 1: verify whether the two acquired images have an overlapping region; if they do, proceed to the next step; otherwise, issue an error prompt.
Mode 2: transform the two images into the same coordinate system, which facilitates the subsequent image blending. Mode 3: apply smoothing filtering to the images, providing precision support for the subsequent image blending. Mode 4: perform rough localization to obtain an approximate overlapping region, and use that overlapping region as the extraction region for feature points; this preprocessing mode narrows the matching range and increases the image processing speed.
The above preprocessing modes are of course only examples rather than an exhaustive list; any operation that a person skilled in the art can readily conceive to support the subsequent image blending falls within the protective concept of the present invention.
Step S403: extracting feature points of the two images, extracting matching feature pairs of the two images from the feature points, and registering the two images with the matching feature pairs as alignment points.
This step can be implemented in the manner described in Embodiment 1 or Embodiment 2.
Step S404: synthesizing the two registered images.
This step can be implemented in the manner described in Embodiment 1 or Embodiment 2.
Step S405: acquiring the composite image and another image that has an overlapping region with the composite image, performing steps S403 and S404 to synthesize again, and repeating the image acquisition and image blending process to obtain a 3D image with depth of field. The other image having a certain overlapping region with the composite image may be a non-composite image or a composite image.
A specific application of the method of this embodiment is given below. The basic processing framework is shown in FIG. 5, and the specific implementation flow includes:
(1) The user can choose to enable the image processing function through the terminal interface.
(2) Initialize the two groups of cameras and adjust their angles. (3) Shoot pictures while guaranteeing a certain overlapping region, obtaining two groups of pictures taken at different angles. (4) Perform image preprocessing on the two groups of pictures.
(5) The two groups of processed pictures enter the picture blending process to generate a composite picture A.
(6) Continue photographing with the two groups of cameras to synthesize B, C, D, ..., and store the composite pictures locally.
(7) Once the user has finished shooting all the desired pictures, the series of composite pictures in the storage module is further blended until pictures with a 3D effect at different depths of field are generated.
(8) The user can directly preview the generated pictures.
In summary, the method of the embodiment of the present invention captures a series of images taken at different angles but having overlapping regions and spatially overlaps the series of images to form a new image: a complete, high-definition, wide-viewing-angle scene image with a 3D effect that contains the information of each image in the sequence. This not only fulfills the requirements of a wide field of view and high resolution but also better satisfies the user's needs.

Embodiment 4

An embodiment of the present invention provides a terminal. As shown in FIG. 6, it specifically includes:
an image acquisition module 610, configured to acquire two images having an overlapping region; and
an image blending module 620, configured to register the two images according to the overlapping region and to synthesize the two registered images.
Specifically, the image blending module 620 extracts feature points of the two images, extracts matching feature pairs of the two images from the feature points, and registers the two images with the matching feature pairs as alignment points.
In the image blending module 620, the feature points may be any geometric or grayscale features extracted according to the properties of the images and suitable for image blending. The embodiment of the present invention preferably uses corner points as the feature points to be extracted.
In this embodiment, the image blending module 620 can extract corners with any of the following corner detection algorithms: Moravec operator corner detection, Forstner operator corner detection, the Susan detection algorithm, or the Plessy corner detection algorithm. Among these, the Plessy corner detection algorithm offers excellent performance in both consistency and effectiveness, and the corners it extracts have proven to be rotation- and translation-invariant and highly stable.
In this embodiment, corners are preferably extracted with the improved Plessy corner detection algorithm. In that case the image blending module 620 includes a calculation submodule 621, a setting submodule 622, a screening submodule 623, and an extraction submodule 624, where:
the calculation submodule 621 is configured to, for each image, convolve a 3 × 3 convolution kernel with the image to obtain the partial derivatives of each pixel of the image, and to use the partial derivatives to calculate the symmetric matrix M in the Plessy corner detection algorithm corresponding to each pixel;
the setting submodule 622 is configured to set the selection window and the feature point evaluation function R, where
$$R = \frac{Det(M)}{Trace(M) + \varepsilon}$$

in which $Det(M) = \lambda_1 \lambda_2$, $Trace(M) = \lambda_1 + \lambda_2$, $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix M, and $\varepsilon$ is a very small value that keeps the denominator from being zero;

the screening submodule 623 is configured to select a detection area on the image according to the selection window, screen out the pixel with the largest R value within that detection area, and move the selection window until the whole image has been screened; and
the extraction submodule 624 is configured to set a feature point decision threshold and to take those screened pixels whose R value is greater than the decision threshold as the extracted feature points.
Preferably, before the feature point extraction performed by modules 621 to 624, the boundary feature points in the image are deleted using a preset boundary template. After the feature point extraction performed by modules 621 to 624, the sub-pixel feature points among the feature points are extracted, and the extracted sub-pixel feature points are taken as the finally extracted feature points.
In this embodiment, the methods by which the image blending module 620 extracts matching feature pairs include, but are not limited to, the Hausdorff distance method, relaxation labeling, deterministic annealing, and the iterative closest point (ICP) algorithm. In this embodiment, to extract the correct matching feature point pairs accurately while removing redundant feature points, the matching feature pairs are preferably extracted by combining the bidirectional greatest correlation coefficient method with the random sampling method. In that case the image blending module 620 includes a coarse matching submodule 625 and an exact matching submodule 626, where:
the coarse matching submodule 625 is configured to coarsely match the feature points in the two images using the coarse-matching bidirectional greatest correlation coefficient (BGCC) algorithm;
preferably, before the coarse matching submodule 625 coarsely matches the feature points in the two images, the two images are smoothed with a median filter, and the result of subtracting the filtered image from the original is used as the operation object of the coarse matching; and
the exact matching submodule 626 is configured to exactly match the matching feature pairs obtained by coarse matching using the random sample RANSAC algorithm, obtaining the precisely extracted matching feature pairs.

In this embodiment, the image blending module 620 preferably uses the improved fade-in/fade-out synthesis method to set the gray value f(x, y) of each pixel of the two registered images and thereby synthesize the images, where the setting is:

$$f(x,y) = \begin{cases} f_1(x,y) & (x,y) \in f_1 \\ d_1 f_1(x,y) + d_2 f_2(x,y) & (x,y) \in (f_1 \cap f_2),\ |f_1(x,y) - f_2(x,y)| < door \\ f_1(x,y) & (x,y) \in (f_1 \cap f_2),\ |f_1(x,y) - f_2(x,y)| \ge door,\ d_1 \ge d_2 \\ f_2(x,y) & (x,y) \in (f_1 \cap f_2),\ |f_1(x,y) - f_2(x,y)| \ge door,\ d_1 < d_2 \\ f_2(x,y) & (x,y) \in f_2 \end{cases}$$
Here $f_1(x, y)$ and $f_2(x, y)$ denote the gray values of the pixels in the two images, $d_1, d_2 \in (0, 1)$ with $d_1 + d_2 = 1$ denote the fade factors of the two images, door is a preset decision threshold, and $f_1$, $f_2$ denote the two images. Furthermore, during image synthesis, if the selected seam overlap region is too large, problems such as image blur and weakened edge information appear; if the selected seam overlap region is too small, the seam of the image cannot be eliminated. Therefore, in this embodiment, for the processed image, the 7 × 7 region around the seam is taken as the seam processing region, and the pixels inside the seam region are linearly filtered with a 3 × 3 template, which gives the best result.
Embodiment 5
This embodiment provides a terminal that contains all the functional modules of Embodiment 4 and extends the scheme of Embodiment 4. As shown in FIG. 7, the terminal includes:
an image acquisition module 710, configured to acquire two images having an overlapping region;
an image preprocessing module 720, configured to process the two images acquired by the image acquisition module 710 according to set preprocessing operations, the preprocessing operations including one or more of the following: verifying the acquired images, transforming the two images into the same coordinate system, applying smoothing filtering to the two images, and performing rough localization to obtain an approximate overlapping region and using that overlapping region as the extraction region for feature points;
an image blending module 730, configured to register the two images according to the overlapping region and to synthesize the two registered images, where specifically the image blending module 730 extracts feature points of the two images, extracts matching feature pairs of the two images from the feature points, and registers the two images with the matching feature pairs as alignment points; and

a 3D image generation module 740, configured to acquire the composite image and another image having an overlapping region with the composite image, to trigger the image blending module 730 to synthesize again, and to repeat the image acquisition and image blending process to obtain a 3D image with depth of field. The other image having a certain overlapping region with the composite image may be a non-composite image or a composite image.
In summary, the terminal of the embodiment of the present invention captures two groups of images taken at different angles but sharing an overlapping region, extracts feature point parameters directly for each image, determines the degree of matching between the images from the feature points, eliminates erroneous matching pairs, and performs blending and synthesis on the registered images, thereby obtaining a wide-field, high-resolution image and greatly improving the user experience.
Furthermore, the terminal of the embodiment of the present invention captures a series of images taken at different angles but having overlapping regions and spatially overlaps the series of images to form a new image: a complete, high-definition, wide-viewing-angle scene image with a 3D effect that contains the information of each image in the sequence. This not only fulfills the requirements of a wide field of view and high resolution but also better satisfies the user's needs. Evidently, a person skilled in the art can make various modifications and variations to the present invention without departing from the spirit and scope of the invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass these modifications and variations.
Industrial Applicability
With the terminal and method of the embodiments of the present invention, a series of images taken at different angles but having overlapping regions is captured and spatially overlapped to form a new image: a complete, high-definition, wide-viewing-angle scene image with a 3D effect that contains the information of each image in the sequence. This not only fulfills the requirements of a wide field of view and high resolution but also better satisfies the user's needs.

Claims

1. A method for implementing image processing by a terminal, comprising:
an image acquisition step: acquiring two images having an overlapping region; and
an image blending step: registering the two images according to the overlapping region, and synthesizing the two registered images.
2. The method according to claim 1, wherein registering the two images according to the overlapping region comprises:
extracting feature points of the two images, extracting matching feature pairs of the two images from the feature points, and registering the two images with the matching feature pairs as alignment points.
3. The method according to claim 2, wherein the feature points comprise corner points of the images.
4. The method according to claim 3, wherein extracting the feature points of the two images comprises:
for each image, convolving a 3 × 3 convolution kernel with the image to obtain the partial derivatives of each pixel of the image, and using the partial derivatives to calculate the symmetric matrix M in the Plessy corner detection algorithm corresponding to each pixel;
setting a selection window and a feature point evaluation function R, where $R = \frac{Det(M)}{Trace(M) + \varepsilon}$, in which $Det(M) = \lambda_1 \lambda_2$, $Trace(M) = \lambda_1 + \lambda_2$, $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix M, and $\varepsilon$ is a very small value that keeps the denominator from being zero;
selecting a detection area on the image according to the selection window, screening out the pixel with the largest R value within the detection area, and moving the selection window until the whole image has been screened; and
setting a feature point decision threshold, and taking those screened pixels whose R value is greater than the decision threshold as the extracted feature points.
5. The method according to claim 2, 3 or 4, wherein the method further comprises:
before the feature point extraction, deleting the boundary feature points in the image using a preset boundary template;
and/or, after extracting the feature points in the image, extracting the sub-pixel feature points among the feature points and taking the extracted sub-pixel feature points as the finally extracted feature points.
6. The method according to claim 2, 3 or 4, wherein extracting the matching feature pairs of the two images from the feature points comprises:
coarsely matching the feature points in the two images using the coarse-matching bidirectional greatest correlation coefficient (BGCC) algorithm, and exactly matching the matching feature pairs obtained by coarse matching using the random sample RANSAC algorithm, to obtain the precisely extracted matching feature pairs.
7. The method according to claim 6, wherein before the feature points in the two images are coarsely matched, the method further comprises:
smoothing the two images with a median filter, and using the result of subtracting the filtered image from the original as the operation object of the coarse matching.
8. The method according to any one of claims 1 to 4, wherein synthesizing the two registered images comprises: setting the gray value f(x, y) of each pixel of the two registered images according to a fade-in/fade-out synthesis method, wherein the setting rule comprises:

$$f(x,y) = \begin{cases} f_1(x,y) & (x,y) \in f_1 \\ d_1 f_1(x,y) + d_2 f_2(x,y) & (x,y) \in (f_1 \cap f_2),\ |f_1(x,y) - f_2(x,y)| < door \\ f_1(x,y) & (x,y) \in (f_1 \cap f_2),\ |f_1(x,y) - f_2(x,y)| \ge door,\ d_1 \ge d_2 \\ f_2(x,y) & (x,y) \in (f_1 \cap f_2),\ |f_1(x,y) - f_2(x,y)| \ge door,\ d_1 < d_2 \\ f_2(x,y) & (x,y) \in f_2 \end{cases}$$

where $f_1(x, y)$ and $f_2(x, y)$ denote the gray values of the pixels in the two images, $d_1, d_2 \in (0, 1)$ with $d_1 + d_2 = 1$ denote the fade factors of the two images, door is a preset decision threshold, and $f_1$, $f_2$ denote the two images.
9. The method according to claim 8, wherein synthesizing the two registered images further comprises:
taking the 7 × 7 region around the seam as the seam processing region, and linearly filtering the pixels inside the seam processing region with a 3 × 3 template.
10. The method according to any one of claims 1 to 4, further comprising, between the image acquisition step and the image blending step:
an image preprocessing step: processing the two images acquired in the image acquisition step according to set preprocessing operations, the preprocessing operations comprising one or more of the following: verifying the acquired images, transforming the two images into the same coordinate system, applying smoothing filtering to the two images, and performing rough localization to obtain a roughly estimated overlapping region.
11. The method according to any one of claims 1 to 4, wherein the method further comprises:
a 3D image generation step: acquiring the composite image and another image having an overlapping region with the composite image, performing the image blending step to synthesize again, and repeating the image acquisition and image blending process to obtain a 3D image with depth of field.
12. A terminal, comprising:
an image acquisition module, configured to acquire two images having an overlapping region; and
an image blending module, configured to register the two images according to the overlapping region and to synthesize the two registered images.
13. The terminal according to claim 12, wherein the image blending module is configured to extract feature points of the two images, extract matching feature pairs of the two images from the feature points, and register the two images with the matching feature pairs as alignment points.
14. The terminal according to claim 13, wherein, in the image blending module, the feature points comprise corner points of the images.
15、 如权利要求 14所述的终端, 其中, 所述图像糅合模块还包括: 计算子模块, 其设置成对于每幅图像, 利用 3 x 3卷积核与图像做卷积, 求得图像各像素点的偏导数, 并利用所述偏导数计算各像素点对应的 Plessy 角点检测算法中的对称矩阵 M;  The terminal according to claim 14, wherein the image matching module further comprises: a calculation sub-module configured to convolve with the image by using a 3 x 3 convolution kernel for each image to obtain an image a partial derivative of the pixel, and using the partial derivative to calculate a symmetric matrix M in the Plessy corner detection algorithm corresponding to each pixel;
设置子模块, 其设置成设置选取窗口、 以及特征点可用评价函数 R; 其 中, R = Det(M、 , Det{M) = l1 , Trace{M) = + l1 , , 分别为矩阵 Setting a sub-module, which is set to set a selection window, and a feature point available evaluation function R; wherein, R = Det(M , , Det{M) = l 1 , Trace{M) = + l 1 , , respectively
Trace(M) + ε  Trace(M) + ε
Μ的特征值, £·为使分母不为零的极小值; 筛选子模块,其设置成按所述选取窗口在所述图像上选取一个检测区域, 在该检测区域内筛选出 R值最大的像素点, 移动选取窗口, 直到筛选完整幅 图; 以及  The characteristic value of Μ, £· is a minimum value that makes the denominator non-zero; the screening sub-module is arranged to select a detection area on the image according to the selection window, and the R value is selected to be the largest in the detection area. Pixels, move the selection window until the full image is filtered;
an extraction submodule, configured to set a feature-point determination threshold, and to take, among the screened pixels, the pixels whose R value is greater than the determination threshold as the extracted feature points.
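The four submodules of claim 15 can be pictured with the following sketch, assuming a grayscale float image. The Prewitt-style 3 × 3 derivative kernels, the 3 × 3 neighbourhood smoothing of the entries of M, and the window size of 8 are assumptions; only the 3 × 3 convolution, the evaluation function R = Det(M)/(Trace(M) + ε), the per-window screening, and the threshold step come from the claim.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def plessey_corners(img, win=8, eps=1e-6, r_thresh=None):
    """Corner extraction along the lines of claim 15; returns (y, x) points."""
    img = img.astype(np.float64)
    # Calculation submodule: 3x3 kernels give the partial derivatives,
    # from which the entries of the symmetric matrix M are built.
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
    ky = kx.T
    ix, iy = convolve(img, kx), convolve(img, ky)
    ixx = uniform_filter(ix * ix, 3)   # M = [[ixx, ixy],
    iyy = uniform_filter(iy * iy, 3)   #      [ixy, iyy]]
    ixy = uniform_filter(ix * iy, 3)
    # Setting submodule: R = Det(M) / (Trace(M) + eps);
    # eps keeps the denominator from being zero on flat regions.
    r = (ixx * iyy - ixy * ixy) / (ixx + iyy + eps)
    # Screening submodule: keep the largest R in each selection window.
    h, w = img.shape
    candidates = []
    for y0 in range(0, h, win):
        for x0 in range(0, w, win):
            block = r[y0:y0 + win, x0:x0 + win]
            by, bx = np.unravel_index(np.argmax(block), block.shape)
            candidates.append((y0 + by, x0 + bx, block[by, bx]))
    # Extraction submodule: apply the determination threshold.
    if r_thresh is None:
        r_thresh = 0.01 * max(c[2] for c in candidates)
    return [(y, x) for y, x, rv in candidates if rv > r_thresh]
```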
16. The terminal according to claim 13, 14 or 15, wherein the image combining module further comprises:
a coarse matching submodule, configured to coarsely match the feature points of the two images using the bidirectional maximum correlation coefficient (BGCC) matching algorithm; and
a fine matching submodule, configured to precisely match, using the random sample consensus (RANSAC) algorithm, the matching feature pairs obtained by the coarse matching, to obtain precisely extracted matching feature pairs.
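A sketch of the two matching stages, assuming corner coordinates as (y, x) tuples as returned by the plessey_corners sketch above. The correlation window size, the normalised cross-correlation formulation of the correlation coefficient, and the use of OpenCV's homography-based RANSAC are assumptions; the claim names only BGCC coarse matching followed by RANSAC refinement.

```python
import cv2
import numpy as np

def patch(img, pt, r=5):
    """(2r+1)x(2r+1) patch around pt=(y, x), or None near the border."""
    y, x = pt
    if r <= y < img.shape[0] - r and r <= x < img.shape[1] - r:
        return img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    return None

def ncc(p, q):
    """Normalised cross-correlation of two equally sized patches."""
    p = p - p.mean()
    q = q - q.mean()
    d = np.sqrt((p * p).sum() * (q * q).sum())
    return (p * q).sum() / d if d > 0 else -1.0

def bgcc_match(img1, pts1, img2, pts2, r=5):
    """Coarse matching: keep (p, q) only when each point is the other's
    maximum-correlation partner (the bidirectional check in BGCC)."""
    def best(src_img, p, dst_img, dst_pts):
        sp = patch(src_img, p, r)
        if sp is None:
            return -1
        scores = []
        for q in dst_pts:
            dp = patch(dst_img, q, r)
            scores.append(ncc(sp, dp) if dp is not None else -2.0)
        return int(np.argmax(scores))
    pairs = []
    for i, p in enumerate(pts1):
        j = best(img1, p, img2, pts2)
        if j >= 0 and best(img2, pts2[j], img1, pts1) == i:
            pairs.append((p, pts2[j]))
    return pairs

def ransac_refine(pairs, reproj_thresh=3.0):
    """Fine matching: RANSAC over a homography model rejects the false
    matches left after coarse matching (needs at least 4 pairs)."""
    src = np.float32([[x, y] for (y, x), _ in pairs])
    dst = np.float32([[x, y] for _, (y, x) in pairs])
    hmat, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    inliers = [pr for pr, keep in zip(pairs, mask.ravel()) if keep]
    return inliers, hmat
```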
17. The terminal according to any one of claims 12 to 15, wherein the image combining module is further configured to set the gray value f(x, y) of each pixel of the two registered images according to a gradual fade-in/fade-out combining method, the setting rule including:

f(x, y) = f1(x, y), for (x, y) ∈ f1;
f(x, y) = f1(x, y), for (x, y) ∈ (f1 ∩ f2), |f1(x, y) − f2(x, y)| > door and d1 > d2;
f(x, y) = d1·f1(x, y) + d2·f2(x, y), for (x, y) ∈ (f1 ∩ f2) and |f1(x, y) − f2(x, y)| ≤ door;
f(x, y) = f2(x, y), for (x, y) ∈ (f1 ∩ f2), |f1(x, y) − f2(x, y)| > door and d1 ≤ d2;
f(x, y) = f2(x, y), for (x, y) ∈ f2;

where f1(x, y) and f2(x, y) denote the gray values of the pixels in the two images respectively, d1, d2 ∈ (0, 1) with d1 + d2 = 1 are the gradual-transition factors of the two images, door is a preset determination threshold, and f1 and f2 denote the two images respectively.
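For illustration (not part of the claims): a sketch of this setting rule for a grayscale pair whose overlap is the vertical band of columns [x0, x1). Here d1 falls linearly from 1 to 0 across the band (so d1 + d2 = 1), which is one common choice the claim itself does not fix; the function name fade_blend and the default door value are likewise assumptions.

```python
import numpy as np

def fade_blend(f1, f2, x0, x1, door=25):
    """Gradual fade-in/fade-out combination over overlap columns [x0, x1).

    f1 and f2 are registered grayscale images of the same size. In the
    overlap, pixels are blended with weights d1 and d2 = 1 - d1; where
    the two gray values differ by more than `door`, the pixel with the
    larger weight is taken outright, which suppresses ghosting.
    """
    out = f1.astype(np.float64).copy()
    out[:, x1:] = f2[:, x1:]            # region covered only by f2
    width = float(x1 - x0)
    for x in range(x0, x1):
        d1 = (x1 - x) / width           # fades out f1, fades in f2
        d2 = 1.0 - d1
        a = f1[:, x].astype(np.float64)
        b = f2[:, x].astype(np.float64)
        col = d1 * a + d2 * b           # |a - b| <= door: blend
        jump = np.abs(a - b) > door     # |a - b| >  door: pick one
        col[jump] = np.where(d1 >= d2, a, b)[jump]
        out[:, x] = col
    return out.astype(f1.dtype)
```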
18. The terminal according to any one of claims 12 to 15, wherein the terminal further comprises an image pre-processing module and/or a 3D image generation module;
the image pre-processing module is configured to process the two images acquired by the image acquisition module according to set pre-processing operations, wherein the pre-processing operations include one or more of the following: verifying the acquired images, converting the two images into the same coordinate system, smoothing and filtering the two images, and coarse positioning to obtain a roughly estimated overlapping region; and
the 3D image generation module is configured to acquire the composite image and another image having an overlapping region with the composite image, trigger the image combining module to combine again, and repeat the image acquisition and image combining process to obtain a 3D image with depth of field.
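Structurally, the repetition described in claims 11 and 18 is just a fold over the image sequence; the sketch below uses a hypothetical stitch function standing in for the registration-and-combining pipeline above, and shows only the iteration, not the depth-of-field rendering itself.

```python
def build_mosaic(images, stitch):
    """Repeat acquisition and combination as in the 3D generation step:
    each new image overlapping the current composite is folded in.

    `stitch(composite, img)` is a placeholder for the registration and
    combining pipeline sketched above; it must return the new composite.
    """
    composite = images[0]
    for img in images[1:]:
        composite = stitch(composite, img)  # re-run the combining step
    return composite
```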
PCT/CN2013/085782 2013-05-17 2013-10-23 Terminal and image processing method therefor WO2014183385A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310185745.5A CN104166972A (en) 2013-05-17 2013-05-17 Terminal and method for realizing image processing
CN201310185745.5 2013-05-17

Publications (1)

Publication Number Publication Date
WO2014183385A1 true WO2014183385A1 (en) 2014-11-20

Family

ID=51897629

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/085782 WO2014183385A1 (en) 2013-05-17 2013-10-23 Terminal and image processing method therefor

Country Status (2)

Country Link
CN (1) CN104166972A (en)
WO (1) WO2014183385A1 (en)


Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN107493412B (en) * 2017-08-09 2019-09-13 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing system and method
CN107395974B (en) * 2017-08-09 2019-09-13 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing system and method
CN107493411B (en) * 2017-08-09 2019-09-13 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing system and method
CN107644411A (en) * 2017-09-19 2018-01-30 Wuhan Zoncare Bio-Medical Electronics Co., Ltd. Ultrasonic wide-scene imaging method and device
CN108322658B (en) * 2018-03-29 2020-04-17 Qingdao Hisense Mobile Communications Technology Co., Ltd. Photographing method and device
CN109035326A (en) * 2018-06-19 2018-12-18 Beijing Institute of Technology High-precision location technique based on sub-pixel image recognition
CN109934809A (en) * 2019-03-08 2019-06-25 Shenhuishi (Shenzhen) Technology Co., Ltd. A kind of paper labels character defect inspection method
CN112132879B (en) * 2019-06-25 2024-03-08 Beijing Wodong Tianjun Information Technology Co., Ltd. Image processing method, device and storage medium
CN110599404A (en) * 2019-09-24 2019-12-20 Shaanxi Shengsi Intelligent Measurement and Control Co., Ltd. Circuit board microscopic image splicing method and device and information data processing terminal
CN112819735B (en) * 2020-12-31 2022-02-01 Sichuan University Real-time large-scale image synthesis algorithm of microscope system


Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN101276465B (en) * 2008-04-17 2010-06-16 Shanghai Jiao Tong University Method for automatically split-jointing wide-angle image
CN101984463A (en) * 2010-11-02 2011-03-09 ZTE Corporation Method and device for synthesizing panoramic image

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN202134044U (en) * 2011-07-06 2012-02-01 Chang'an University An image splicing device based on extracting and matching of angular point blocks

Non-Patent Citations (1)

Title
LIU, Dongmei: "Research on Image Mosaic Algorithm", China Master's Theses Full-text Database, Information Science Series, 15 January 2009, pages 2-72.

Cited By (7)

Publication number Priority date Publication date Assignee Title
CN108496357A (en) * 2017-04-11 2018-09-04 Shenzhen Royole Technologies Co., Ltd. Image processing method and device
WO2018187941A1 (en) * 2017-04-11 2018-10-18 Shenzhen Royole Technologies Co., Ltd. Picture processing method and device
CN108496357B (en) * 2017-04-11 2020-11-24 Shenzhen Royole Technologies Co., Ltd. Picture processing method and device
CN107370951A (en) * 2017-08-09 2017-11-21 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing system and method
CN107370951B (en) * 2017-08-09 2019-12-27 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing system and method
CN108460763A (en) * 2018-03-26 2018-08-28 Shanghai Jiao Tong University A kind of automatic detection recognition method of magnetic powder inspection image
CN108460763B (en) * 2018-03-26 2021-03-30 Shanghai Jiao Tong University Automatic detection and identification method for magnetic powder inspection image

Also Published As

Publication number Publication date
CN104166972A (en) 2014-11-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13884443

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13884443

Country of ref document: EP

Kind code of ref document: A1