WO2021004237A1 - Image registration, fusion and occlusion detection method and apparatus, and electronic device - Google Patents

Image registration, fusion and occlusion detection method and apparatus, and electronic device

Info

Publication number
WO2021004237A1
WO2021004237A1 (PCT/CN2020/096365)
Authority
WO
WIPO (PCT)
Prior art keywords
image
registration
grid
point
displacement
Prior art date
Application number
PCT/CN2020/096365
Other languages
English (en)
French (fr)
Inventor
许姜严
Original Assignee
北京迈格威科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京迈格威科技有限公司
Priority to US17/622,973 priority Critical patent/US20220245839A1/en
Publication of WO2021004237A1 publication Critical patent/WO2021004237A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • The present invention relates to the technical field of image processing, and in particular to an image registration, fusion and occlusion detection method and device, and an electronic device.
  • Image registration is the process of matching and superimposing two or more images acquired at different times, by different sensors (imaging devices), or under different conditions (weather, illuminance, camera position and angle, etc.); it is an indispensable part of scenarios such as face recognition, identity verification, and smart cities.
  • The problem solved by the present invention is how to inspect registered images and identify the misregistration points in them.
  • To this end, an image registration detection method is first provided, which includes:
  • after acquiring registration data of a first image and a second image, performing grid segmentation on the first image and the second image, the registration data including at least registration point coordinates and registration point displacements;
  • determining misregistration points according to the displacement difference, wherein a registration point whose displacement difference with respect to the grid to which it belongs satisfies a preset condition is determined to be a misregistration point.
  • Secondly, an occlusion area detection method is provided, which includes:
  • determining the occlusion area according to the area formed by the misregistration points.
  • An image registration detection device is further provided, which includes:
  • a grid segmentation unit, configured to perform grid segmentation on the first image and the second image after acquiring the registration data of the first image and the second image, the registration data including at least registration point coordinates and registration point displacements;
  • a matrix calculation unit, configured to calculate the homography matrix between each grid in the first image and the corresponding grid in the second image;
  • a displacement calculation unit, configured to calculate, based on the registration data and the homography matrix, the difference between the registration point displacement of each registration point and the homography matrix displacement of the grid to which the registration point belongs, as the displacement difference;
  • a registration determination unit, configured to determine misregistration points according to the displacement difference, wherein a registration point whose displacement difference with respect to the grid to which it belongs satisfies a preset condition is determined to be a misregistration point.
  • An occlusion area detection device is also provided, which includes:
  • the image registration detection device described above, configured to determine misregistration points;
  • an area determination unit, configured to determine the occlusion area according to the area formed by the misregistration points.
  • A multi-camera image fusion device is further provided, which includes:
  • a multi-shot image registration unit, configured to acquire multiple captured images and select two of them as the first image and the second image for registration;
  • the occlusion area detection device described above, configured to determine occlusion areas;
  • a traversal unit, configured to traverse the multiple captured images to determine the occlusion areas in the multiple captured images;
  • an image fusion unit, configured to perform image fusion on the remaining parts of the multiple captured images after the occlusion areas are excluded.
  • Finally, an electronic device is provided, including a processor and a memory;
  • the memory stores a control program;
  • when the control program is executed by the processor, the above image registration detection method, the above occlusion area detection method, or the above multi-shot image fusion method is implemented.
  • A computer-readable storage medium is also provided, which stores instructions that, when loaded and executed by a processor, implement the above image registration detection method, the above occlusion area detection method, or the above multi-shot image fusion method.
  • FIG. 1A is the left-side shot view in a schematic diagram according to an embodiment of the present invention;
  • FIG. 1B is the right-side shot view in the schematic diagram according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of an image registration detection method according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of step 300 of the image registration detection method according to an embodiment of the present invention;
  • FIG. 4 is an example diagram of an image to be registered according to an embodiment of the present invention;
  • FIG. 5 is an example diagram of a reference image according to an embodiment of the present invention;
  • FIG. 6 is an example diagram of dividing an image to be registered into overlapping grids according to an embodiment of the present invention;
  • FIG. 7 is a flowchart of step 400 of the image registration detection method according to an embodiment of the present invention;
  • FIG. 8 is a flowchart of step 200 of the image registration detection method according to an embodiment of the present invention;
  • FIG. 9 is a flowchart of step 220 of the image registration detection method according to an embodiment of the present invention;
  • FIG. 10 is a flowchart of an occlusion area detection method according to an embodiment of the present invention;
  • FIG. 11 is an example diagram of a reference image after dense registration according to an embodiment of the present invention;
  • FIG. 12 is an example diagram of an occlusion area of the reference image according to an embodiment of the present invention;
  • FIG. 13 is a flowchart of a multi-shot image fusion method according to an embodiment of the present invention;
  • FIG. 14 is a structural block diagram of an image registration detection device according to an embodiment of the present invention;
  • FIG. 15 is a structural block diagram of an occlusion area detection device according to an embodiment of the present invention;
  • FIG. 16 is a structural block diagram of a multi-shot image fusion device according to an embodiment of the present invention;
  • FIG. 17 is a structural block diagram of an electronic device according to an embodiment of the present invention;
  • FIG. 18 is a block diagram of another electronic device according to an embodiment of the present invention.
  • Reference numerals: 2-grid segmentation unit, 3-matrix calculation unit, 4-displacement calculation unit, 5-registration determination unit, 6-area determination unit, 7-multi-shot image registration unit, 8-traversal unit, 9-image fusion unit, 800-electronic device, 802-processing component, 804-memory, 806-power component, 808-multimedia component, 810-audio component, 812-input/output (I/O) interface, 814-sensor component, 816-communication component.
  • As shown in FIG. 1A and FIG. 1B, these are two images acquired in a dual-camera mode (they may also be acquired by a single camera shooting from different positions). Because the two cameras are at different positions, parallax appears between the images. Assume that the two cameras are arranged horizontally and that the cylinder is in front of the cuboid, as in FIG. 1A and FIG. 1B. Then the left camera obtains the left-side shot view and the right camera obtains the right-side shot view, as shown in FIG. 1A and FIG. 1B, respectively.
  • In the left-side shot view, more of the left side of the cuboid behind the cylinder is visible, while part of the right side of the cuboid is blocked by the cylinder in front; likewise, in the right-side shot view, part of the left side of the cuboid is blocked by the cylinder, and more of the right side is visible.
  • For dual-camera image fusion, one of the shot views can be selected as the reference view.
  • In the following, the left view is used as the reference view for fusion.
  • First, all points on the right view need to be registered to the left view to obtain the registration points of the two images. Because of parallax, objects farther from the cameras show a smaller parallax between the left and right shot views, while objects closer to the cameras show a larger parallax. As a result, the registration points exhibit a discontinuity in the parallax region; based on this principle, these discontinuities between different parallax levels can be used to detect the occluded areas in the two images.
  • The embodiments of the present disclosure provide an image registration detection method, which can be executed by an image registration detection device; the image registration detection device can be integrated in an electronic device such as a mobile phone.
  • As shown in FIG. 2, which is a flowchart of an image registration detection method according to an embodiment of the present invention, the image registration detection method includes:
  • Step 100: After acquiring registration data of the first image and the second image, perform grid segmentation on the first image and the second image, where the registration data includes at least registration point coordinates and registration point displacements;
  • The first image and the second image may be images of objects or images of people. In this step, the registration data of the first image and the second image may be obtained from the registration process or registration result of an already registered pair, or the first image and the second image may be registered directly to obtain their registration data.
  • The first image and the second image may be registered by relative registration, that is, the first image and the second image are the image to be registered and the reference image, respectively. In this case, the registration point coordinates of the image to be registered and the registration point displacements from the image to be registered to the reference image can be read directly from the registration process. The registration point coordinates and the registration point displacements correspond to each other: the registration point coordinates on the image to be registered, the corresponding registration point coordinates on the reference image, and the registration point displacement between the two (the displacement from the image to be registered to the reference image and the displacement from the reference image to the image to be registered are vectors in opposite directions) are in a correspondence relationship.
  • Given any two of these three data, the third can be calculated through this correspondence. Therefore, it is equally feasible to directly read, from the registration process, the registration point coordinates of the image to be registered and the registration point displacements from the image to be registered to the reference image; or to directly read the registration point coordinates of the reference image and the registration point displacements (in either direction) and calculate the registration point coordinates of the image to be registered through the correspondence; or to directly read the registration point coordinates of the reference image and of the image to be registered and calculate the registration point displacements from the image to be registered to the reference image through the correspondence.
  • The first image and the second image may also be registered by absolute registration, that is, both the first image and the second image are images to be registered.
  • As shown in FIG. 4, which is an example diagram of an image to be registered, and FIG. 5, which is an example diagram of a reference image: FIG. 4 is an image taken by a camera positioned to the left of the doll, and FIG. 5 is an image taken by a camera positioned to its right.
  • In the example shown in FIG. 6, the upper left is a schematic diagram of overlapping grid division; it can be seen from this figure that overlapping grids have overlapping parts between adjacent grids.
  • The first image and the second image may be pictures of the same or similar scene taken from different angles by cameras in the acquisition device 15, pictures of the same or similar scene taken at the same time by cameras at different positions in the same electronic device, or picture information input through a data interface.
  • The first image and the second image may be the only two pictures taken, or any two of a plurality of pictures taken.
  • The first image and the second image are divided into grids, and the size of the grids can be determined according to actual conditions.
  • During division, the grids can be positioned according to registration points, vertices, or other features in the first image and the second image, so as to establish a correspondence between the grids in the first image and the grids in the second image;
  • other methods can also be used so that the registration points in corresponding grids correspond strongly, which facilitates detecting the accuracy of the registration points.
  • For ease of description, the following takes the first image as the image to be registered and the second image as the reference image as an example; based on this description, those skilled in the art can, by a simple exchange, understand the case where the second image is the image to be registered and the first image is the reference image, as well as the image registration process for multiple images.
  • Step 200: Calculate the homography matrix between each grid in the first image and the corresponding grid in the second image;
  • In this step, there is a homography matrix between each grid in the first image and the corresponding grid in the second image. Because of noise in the corresponding registration point coordinates, the homography matrix contains errors; multiple points can be used to form a system of equations for the homography matrix, and the optimal homography matrix is obtained by solving for the optimal solution. The optimal solution can be computed with a direct linear solution, singular value decomposition, the Levenberg-Marquardt (LM) algorithm, or similar optimization methods.
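For illustration, the per-grid optimal homography described above can be estimated from the registration point pairs that fall inside a pair of corresponding grids. Below is a minimal sketch using OpenCV, assuming src_pts and dst_pts hold matched registration point coordinates collected from one grid and its counterpart (the variable names and sample values are illustrative, not taken from the patent):

```python
import numpy as np
import cv2

# Matched registration points from one grid in the image to be registered
# (src) and the corresponding grid in the reference image (dst).
src_pts = np.array([[10, 12], [40, 15], [38, 60], [8, 55], [25, 33]], np.float32)
dst_pts = np.array([[13, 12], [43, 16], [41, 61], [11, 55], [28, 34]], np.float32)

# findHomography solves the over-determined system for the optimal H;
# RANSAC makes the estimate robust to noisy or wrong correspondences.
H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)
print(H)
```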
  • Step 300: Based on the registration data and the homography matrix, calculate the difference between the registration point displacement of each registration point and the homography matrix displacement of the grid to which the registration point belongs, as the displacement difference;
  • There are multiple registration points on the image to be registered, and (once registered) these registration points correspond one to one with registration points on the reference image; the displacement between two registration points in one-to-one correspondence is the registration point displacement of that registration point.
  • The image to be registered is divided into grids, each grid contains multiple registration points, and each grid has a corresponding grid on the reference image; there is a homography matrix between the two corresponding grids.
  • A registration point on the image to be registered, transformed by the homography matrix, has a corresponding third registration point on the reference image (the third registration point is determined by the coordinates of the registration point on the image to be registered and the homography matrix; ideally, the third registration point coincides with the corresponding registration point on the reference image). The displacement between the first registration point and the third registration point is the homography matrix displacement.
  • The difference between the registration point displacement of a registration point and the homography matrix displacement of the grid to which the registration point belongs is the displacement difference to be calculated for each registration point in this step.
  • Step 400: Determine misregistration points according to the displacement difference, wherein a registration point whose displacement difference with respect to the grid to which it belongs satisfies a preset condition is determined to be a misregistration point.
  • If a registration point is registered correctly, its displacement difference with respect to the grid to which it belongs is small and does not satisfy the preset condition (excluding errors and noise interference, the displacement difference is zero); if the registration point is registered incorrectly, its displacement difference with respect to the grid to which it belongs is large and satisfies the preset condition.
  • In this way, through steps 100 to 400, by checking the registration points of the registered images, the misregistration points among them can be identified, so that the accuracy of registration can be further improved on the basis of the original image registration.
  • Optionally, the grid division performed on the first image and the second image in step 100 is overlapping grid division.
  • With overlapping grids, the same registration point can be allocated to two or more grids, so that the displacement differences of the same registration point in multiple grids can be calculated separately. Two or more displacement differences are thus obtained, and whether the registration of the point is correct or wrong is determined by a comprehensive judgment over these displacement differences. This way of judging can reduce or even eliminate inaccurate judgments, caused by noise or errors, of whether a registration point is correct, further improving the accuracy of the registration judgment.
  • Optionally, the overlapping area of adjacent grids in the first image and the second image is at least 1/2 of the area of a single grid.
  • In this way, except for a very small number of registration points at the image edges, all remaining registration points are allocated to at least two grids, so they can be judged through multiple grids.
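To make the overlapping division concrete, the following sketch enumerates the overlapping cells that contain a given point when the grid stride is half the cell size, so that interior points belong to up to four cells. The cell size, stride, and function name are illustrative assumptions, not values from the patent:

```python
def grids_containing(x, y, img_w, img_h, cell=64, stride=32):
    """Top-left corners of every overlapping grid cell containing (x, y).

    Cell origins lie on multiples of `stride`; with stride = cell / 2,
    adjacent cells overlap by half their area, so an interior point
    falls into up to four cells (two per axis)."""
    # Cell index i contains x iff i*stride <= x < i*stride + cell.
    ix_lo = max(0, (int(x) - cell) // stride + 1)
    ix_hi = min((img_w - cell) // stride, int(x) // stride)
    iy_lo = max(0, (int(y) - cell) // stride + 1)
    iy_hi = min((img_h - cell) // stride, int(y) // stride)
    return [(ix * stride, iy * stride)
            for ix in range(ix_lo, ix_hi + 1)
            for iy in range(iy_lo, iy_hi + 1)]

# An interior point belongs to four overlapping cells:
print(grids_containing(100, 100, 640, 480))  # [(64, 64), (64, 96), (96, 64), (96, 96)]
```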
  • Optionally, in step 100, when the first image and the second image are grid-segmented after their registration data is obtained, the registration of the first image and the second image is dense registration.
  • Dense registration is an image registration method that matches the image point by point: the offsets of all points on the image are calculated to form a dense optical flow field. With this dense optical flow field, pixel-level image registration can be performed, so the result after registration is better and more accurate.
  • In this way, the first image and the second image can be registered more precisely, thereby improving the accuracy of the registration.
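As one way to obtain such a dense optical flow field, an off-the-shelf dense flow method can be used; the sketch below applies OpenCV's Farneback algorithm (one possible choice, not necessarily the method used in the patent) to obtain a per-pixel registration point displacement. The file names are illustrative:

```python
import cv2

# Load the image to be registered and the reference image as grayscale.
img_to_register = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)

# Farneback dense optical flow: flow[y, x] = (dx, dy) is the registration
# point displacement of pixel (x, y) toward the reference image.
flow = cv2.calcOpticalFlowFarneback(
    img_to_register, reference, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

print(flow.shape)  # (H, W, 2): one displacement vector per pixel
```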
  • FIG. 6 is an example diagram of dividing an image to be registered into overlapping grids according to an embodiment of the present invention. As shown in FIG. 3, in step 300, calculating, based on the registration data and the homography matrix, the difference between the registration point displacement of each registration point and the homography matrix displacement of the grid to which the registration point belongs as the displacement difference includes:
  • Step 310: Determine the reference image during registration according to the registration data of the first image and the second image;
  • in this embodiment, the second image is the reference image.
  • Step 320: Obtain the homography matrix of each grid in the image to be registered, together with the registration point coordinates and registration point displacements of the registration points contained in the grid; the image to be registered is the one of the first image and the second image that is not the reference image;
  • in this embodiment, the first image is the image to be registered.
  • Step 330: Calculate the homography matrix displacement of the grid to which each registration point belongs, according to the homography matrix of the grid in the image to be registered and the registration point coordinates of the registration points contained in the grid;
  • Step 340: Calculate the difference between the registration point displacement of the registration point and the homography matrix displacement of the grid to which the registration point belongs, as the displacement difference.
  • To distinguish the registration points involved, the registration point on the image to be registered is called the first registration point, the registration point on the reference image registered with the first registration point is called the second registration point, and the corresponding point calculated from the first registration point and the homography matrix is called the third registration point.
  • The registration point displacement of a registration point is the displacement between the first registration point and the second registration point;
  • the homography matrix displacement of the registration point with respect to the grid to which it belongs is the displacement between the first registration point and the third registration point;
  • the difference between the registration point displacement of the registration point and the homography matrix displacement of the grid to which it belongs is the difference between the above two displacements.
  • The specific calculation process of the displacement difference between a registration point and the grid to which it belongs may be as follows:
  • the coordinates of the third registration point are calculated from the first registration point and the homography matrix, which gives the homography matrix displacement of the first registration point; the displacement difference is then calculated from the registration point displacement and the homography matrix displacement of the first registration point.
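Written compactly, the third registration point is the image of the first registration point under the grid's homography, and the displacement difference is the gap between the actual registration displacement and the homography-predicted displacement. A minimal sketch, assuming H is one grid's 3x3 homography and the arrays are illustrative:

```python
import numpy as np
import cv2

def displacement_difference(first_pts, reg_disp, H):
    """first_pts: (N, 2) first registration points in the image to be registered.
    reg_disp:  (N, 2) registration point displacements (first -> second point).
    H:         (3, 3) homography of the grid these points belong to.
    Returns one scalar displacement difference per point."""
    # Third registration points: first points mapped through the homography.
    third_pts = cv2.perspectiveTransform(
        first_pts.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
    # Homography matrix displacement: first -> third.
    homog_disp = third_pts - first_pts
    # Displacement difference: ||registration displacement - homography displacement||,
    # i.e. the distance between the second and third registration points.
    return np.linalg.norm(reg_disp - homog_disp, axis=1)
```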
  • As shown in FIG. 7, in step 400, determining the misregistration points according to the displacement difference includes:
  • Step 410: Obtain the displacement differences of the same registration point with respect to the different grids to which it belongs, where the same registration point belongs to at least two grids;
  • A registration point is contained by a grid on the image (the registration point is located inside that grid), and that grid is a grid to which the registration point belongs.
  • The first image and the second image are divided into overlapping grids; after division, two adjacent grids on the same image overlap.
  • A registration point located in the overlapping part therefore belongs to both of the two adjacent grids.
  • A registration point may also belong to three or more grids.
  • When a registration point belongs to more than one grid, it has a corresponding displacement difference for each of those grids.
  • In this step, the multiple displacement differences of the multiple grids to which the registration point belongs are obtained.
  • Step 420: Determine whether the displacement differences of the registration point with respect to all the grids to which it belongs are greater than a preset threshold;
  • the preset threshold is the dividing line between small displacement differences (which do not satisfy the preset condition) and large ones (which do); it separates the two so that whether the registration is correct can be determined.
  • The preset threshold can be determined from actual conditions; for example, the calculated displacement differences between registration points and the grids to which they belong can be collected statistically to find where the small and large displacement differences separate, and a dividing value in the middle can be selected as the preset threshold.
  • The preset threshold can also be obtained in other ways.
  • Step 430: If all of the displacement differences are greater than the preset threshold, the registration point is a misregistration point.
  • For the same grid, the calculated homography matrix may differ depending on which registration points are selected. If wrong registration points are selected, the calculated homography matrix deviates greatly from the actual one, so the displacement differences calculated for correct registration points in that grid also become significantly larger. Therefore, a registration point cannot be determined to be a misregistration point directly from a single displacement difference exceeding the preset threshold; it is determined to be a misregistration point only when its displacement differences with respect to all the grids to which it belongs exceed the preset threshold.
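Putting steps 410 to 430 together, a registration point is flagged only when every grid it belongs to agrees. A minimal sketch of this decision rule (the data layout is an illustrative assumption):

```python
def is_misregistered(diffs_per_grid, threshold):
    """diffs_per_grid: displacement differences of one registration point,
    one value per grid it belongs to (at least two with overlapping grids).
    The point is a misregistration point only if ALL of its displacement
    differences exceed the preset threshold (step 430); a single large
    value may just mean one grid's homography was corrupted by outliers."""
    return len(diffs_per_grid) > 0 and all(d > threshold for d in diffs_per_grid)

print(is_misregistered([7.9, 8.4, 6.1], threshold=5.0))  # True  -> misregistration point
print(is_misregistered([7.9, 1.2, 0.8], threshold=5.0))  # False -> kept
```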
  • As shown in FIG. 8, which is a flowchart of step 200 of the image registration detection method according to an embodiment of the present invention: in step 200, calculating the homography matrix between each grid in the first image and the corresponding grid in the second image includes:
  • Step 210: Obtain the registration data of each grid in the first image and the corresponding grid in the second image, respectively;
  • the registration data in this step includes at least the registration point coordinates and the registration point displacements.
  • Step 220: Screen the registration data;
  • Before screening, the first registration points in a grid of the first image and the second registration points in the corresponding grid of the second image are not necessarily in one-to-one correspondence: there may be a first registration point in the grid of the first image whose corresponding second registration point is not in the corresponding grid of the second image, and there may be a second registration point in the grid of the second image whose corresponding first registration point is not in the corresponding grid of the first image.
  • Call the grid of the first image the first grid, the grid corresponding to the first grid in the second image the second grid, a registration point in the first image a first registration point, and the registration point in the second image corresponding to the first registration point a second registration point.
  • The registration data is then filtered as follows: if the second registration point corresponding to a first registration point in the first grid is not in the second grid, the first registration point is filtered out, and only the first registration points whose corresponding second registration points are in the second grid are retained; on this basis, if the first registration point corresponding to a second registration point in the second grid is not in the first grid, the second registration point is filtered out, and only the second registration points whose corresponding first registration points are in the first grid are retained.
  • In this way, after the two rounds of screening, the retained first registration points in the first grid and the retained second registration points in the second grid correspond to each other exactly (first registration points without a second registration point in the second grid, and second registration points without a first registration point in the first grid, have all been filtered out).
  • Step 230: Calculate the homography matrix between the grid in the first image and the corresponding grid in the second image according to the screened registration data.
  • The first and second registration points retained by the screening are more likely to be accurately registered, so the accuracy of the calculated homography matrix is also higher.
  • As shown in FIG. 9, in step 220, screening the registration data includes:
  • Step 221: According to the registration point coordinates and the registration point displacements in the registration data, determine the belonging relationship between the registration point coordinates and the grids in the first image and the corresponding grids in the second image;
  • the belonging relationship is whether a registration point is located in the grid in the first image or in the grid in the second image.
  • Step 222: Screen the registration points according to the belonging relationship, so that for each registered pair retained in the registration data, the two registration points are located in a grid in the first image and the corresponding grid in the second image, respectively.
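A minimal sketch of the two-way screening of steps 221 and 222, assuming registered pairs are given as (first point, second point) coordinates and a grid pair is identified by its top-left corner and cell size (all names and values are illustrative):

```python
def in_cell(pt, origin, cell):
    """True if point pt = (x, y) lies inside the cell at `origin`."""
    return (origin[0] <= pt[0] < origin[0] + cell and
            origin[1] <= pt[1] < origin[1] + cell)

def screen_pairs(first_pts, second_pts, origin_a, origin_b, cell=64):
    """Keep only the registered pairs whose first point lies in the grid
    of the first image AND whose second point lies in the corresponding
    grid of the second image (the two-way screening of step 220)."""
    return [(p, q) for p, q in zip(first_pts, second_pts)
            if in_cell(p, origin_a, cell) and in_cell(q, origin_b, cell)]

pairs = screen_pairs(
    first_pts=[(70, 70), (10, 10)], second_pts=[(75, 71), (80, 80)],
    origin_a=(64, 64), origin_b=(64, 64))
print(pairs)  # [((70, 70), (75, 71))] - the second pair is filtered out
```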
  • The embodiments of the present disclosure further provide an occlusion area detection method, which may be executed by an occlusion area detection device; the occlusion area detection device may be integrated in an electronic device such as a mobile phone.
  • As shown in FIG. 10, which is a flowchart of an occlusion area detection method according to an embodiment of the present invention, the occlusion area detection method includes:
  • determining misregistration points according to the image registration detection method described above; for the specific content of determining misregistration points, refer to the description of the image registration detection method, which is not repeated here.
  • Step 500: Determine the occlusion area according to the area formed by the misregistration points.
  • The set of misregistration points (the misregistration points may also be referred to as occlusion points) constitutes the occlusion area.
  • In this way, the occlusion area can be detected, and on this basis, the remaining part of the image to be registered excluding the occlusion area can be registered and fused with the reference image (or two or more images can be fused excluding their occlusion areas), so that a larger fusion area can be obtained while artifacts and chromatic aberration are reduced.
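Because the misregistration points are scattered samples, one plausible way (not necessarily the patent's exact procedure) to turn them into an occlusion area is to rasterize them into a binary mask and close the gaps morphologically:

```python
import numpy as np
import cv2

def occlusion_mask(misreg_pts, img_h, img_w, kernel_size=15):
    """Build a binary occlusion mask from misregistration (occlusion) points.
    misreg_pts: iterable of (x, y) misregistration point coordinates."""
    mask = np.zeros((img_h, img_w), np.uint8)
    for x, y in misreg_pts:
        mask[int(y), int(x)] = 255
    # Morphological closing merges nearby occlusion points into a region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```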
  • For two captured images, the reference image can be specified first, and then registration and fusion are performed after the occlusion area in the image to be registered is excluded.
  • For multiple captured images, a reference image can be specified first, and then the other images are registered and fused one by one after their occlusion areas are excluded. In this way, a fused image with fewer artifacts and smaller chromatic aberration can be obtained after fusion.
  • The embodiments of the present disclosure further provide a multi-camera image fusion method, which can be executed by a multi-camera image fusion device; the multi-camera image fusion device can be integrated in an electronic device such as a mobile phone.
  • As shown in FIG. 13, which is a flowchart of a multi-shot image fusion method according to an embodiment of the present invention, the multi-shot image fusion method includes:
  • Step 000: Acquire multiple captured images, and select two of them as the first image and the second image for registration;
  • the multiple captured images can be acquired simultaneously from multiple cameras, received as image data from a data interface, captured from multiple positions by one camera, or obtained by other methods.
  • In this step, two images are selected from the multiple captured images as the first image and the second image.
  • The two images can be selected randomly; alternatively, one of the multiple captured images can be designated as the reference image, all the remaining images are taken as images to be registered, and one image to be registered together with the reference image are taken as the first image and the second image.
  • Step 600: Traverse the multiple captured images to determine the occlusion areas in the multiple captured images;
  • Step 700: Perform image fusion on the remaining parts of the multiple captured images after the occlusion areas are excluded.
  • For the specific content of determining the occlusion areas according to the occlusion area detection method, refer to the description of the occlusion area detection method, which is not repeated here.
  • The occlusion areas can be detected by the occlusion area detection method.
  • On this basis, the remaining part of each image to be registered excluding its occlusion area can be registered and fused with the reference image (or two or more images can be fused excluding their occlusion areas), so that a larger fusion area can be obtained while artifacts and chromatic aberration are reduced.
  • For two captured images, the reference image can be specified first, and then registration and fusion are performed after the occlusion area in the image to be registered is excluded.
  • For multiple captured images, a reference image can be specified first, and then the other images are registered and fused one by one after their occlusion areas are excluded. In this way, a fused image with fewer artifacts and smaller chromatic aberration can be obtained after fusion.
  • Optionally, the number of captured images is two.
  • In this case, registration and fusion are performed after the occlusion area in the image to be registered is excluded, so that a fused image with fewer artifacts and smaller chromatic aberration can be obtained after fusion.
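As one possible realization of the fusion step, the occlusion mask from the detection stage can gate the blend so that occluded pixels are taken from the reference image alone. A minimal sketch, assuming the registered (warped) image, the reference image, and the occlusion mask are already aligned; the names are illustrative:

```python
import numpy as np

def fuse_excluding_occlusion(reference, warped, occlusion_mask):
    """Average the reference image and the registered (warped) image,
    but fall back to the reference alone wherever the warped image is
    occluded. Inputs are HxWx3 uint8; occlusion_mask is HxW (255 = occluded)."""
    ref = reference.astype(np.float32)
    wrp = warped.astype(np.float32)
    blended = 0.5 * ref + 0.5 * wrp             # simple average fusion
    occluded = (occlusion_mask > 0)[..., None]  # broadcast over channels
    fused = np.where(occluded, ref, blended)    # exclude occluded pixels
    return fused.astype(np.uint8)
```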
  • The embodiments of the present disclosure further provide an image registration detection device for executing the image registration detection method described above.
  • The image registration detection device is described in detail below.
  • As shown in FIG. 14, which is a structural block diagram of an image registration detection device according to an embodiment of the present invention, the image registration detection device includes:
  • the grid segmentation unit 2, configured to perform grid segmentation on the first image and the second image after acquiring the registration data of the first image and the second image, the registration data including at least registration point coordinates and registration point displacements;
  • the matrix calculation unit 3, configured to calculate the homography matrix between each grid in the first image and the corresponding grid in the second image;
  • the displacement calculation unit 4, configured to calculate, based on the registration data and the homography matrix, the difference between the registration point displacement of each registration point and the homography matrix displacement of the grid to which the registration point belongs, as the displacement difference;
  • the registration determination unit 5, configured to determine misregistration points according to the displacement difference, wherein a registration point whose displacement difference with respect to the grid to which it belongs satisfies a preset condition is determined to be a misregistration point.
  • In this way, the misregistration points can be identified, so that the accuracy of registration can be further improved on the basis of the original image registration.
  • Optionally, the grid segmentation performed on the first image and the second image is overlapping grid segmentation.
  • Optionally, the overlapping area of adjacent grids in the first image and the second image is at least 1/2 of the area of a single grid.
  • Optionally, the registration of the first image and the second image is dense registration.
  • Optionally, the displacement calculation unit 4 is further configured to: determine the reference image during registration according to the registration data of the first image and the second image; obtain the homography matrix of each grid in the image to be registered, together with the registration point coordinates and registration point displacements of the registration points contained in the grid, where the image to be registered is the one of the first image and the second image that is not the reference image; calculate the homography matrix displacement of the grid to which each registration point belongs; and take the difference between the registration point displacement of the registration point and the homography matrix displacement of the grid to which it belongs as the displacement difference.
  • Optionally, the registration determination unit 5 is further configured to: obtain the displacement differences of the same registration point with respect to the different grids to which it belongs, where the same registration point belongs to at least two grids; determine whether the displacement differences with respect to all the grids are greater than the preset threshold; and, if they are all greater than the preset threshold, determine the registration point to be a misregistration point.
  • Optionally, the matrix calculation unit 3 is further configured to: obtain the registration data of each grid in the first image and the corresponding grid in the second image, respectively; screen the registration data; and calculate the homography matrix between the grid in the first image and the corresponding grid in the second image according to the screened registration data.
  • Optionally, the matrix calculation unit 3 is further configured to: determine, according to the registration point coordinates and the registration point displacements in the registration data, the belonging relationship between the registration point coordinates and the grids in the first image and the corresponding grids in the second image, the belonging relationship being whether a registration point is located in the grid in the first image or in the grid in the second image; and screen the registration points according to the belonging relationship, so that the two registration points of each registered pair retained in the registration data are located in a grid in the first image and the corresponding grid in the second image, respectively.
  • The embodiments of the present disclosure further provide an occlusion area detection device for executing the occlusion area detection method described above.
  • The occlusion area detection device is described in detail below.
  • As shown in FIG. 15, which is a structural block diagram of an occlusion area detection device according to an embodiment of the present invention, the occlusion area detection device includes:
  • the image registration detection device described above, configured to determine misregistration points;
  • the area determination unit 6, configured to determine the occlusion area according to the area formed by the misregistration points.
  • In this way, the occlusion area can be detected, and on this basis, the remaining part of the image to be registered excluding the occlusion area can be registered and fused with the reference image (or two or more images can be fused excluding their occlusion areas), so that a larger fusion area can be obtained while artifacts and chromatic aberration are reduced.
  • For the specific content of using the image registration detection device to determine misregistration points, refer to the description of the image registration detection device, which is not repeated here.
  • The embodiments of the present disclosure further provide a multi-camera image fusion device for executing the multi-camera image fusion method described above.
  • The multi-camera image fusion device is described in detail below.
  • As shown in FIG. 16, which is a structural block diagram of a multi-camera image fusion device according to an embodiment of the present invention, the multi-camera image fusion device includes:
  • the multi-shot image registration unit 7, configured to acquire multiple captured images and select two of them as the first image and the second image for registration;
  • the occlusion area detection device described above, configured to determine occlusion areas;
  • the traversal unit 8, configured to traverse the multiple captured images and determine the occlusion areas in the multiple captured images;
  • the image fusion unit 9, configured to perform image fusion on the remaining parts of the multiple captured images after the occlusion areas are excluded.
  • For two captured images, the reference image can be specified first, and then registration and fusion are performed after the occlusion area in the image to be registered is excluded.
  • For multiple captured images, a reference image can be specified first, and then the other images are registered and fused one by one after their occlusion areas are excluded. In this way, a fused image with fewer artifacts and smaller chromatic aberration can be obtained after fusion.
  • The device embodiments described above are only illustrative.
  • The division into units is only a logical functional division, and there may be other division methods in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through communication interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • The image registration detection device, the occlusion area detection device, and the multi-camera image fusion device described above may each be implemented as an electronic device including a processor and a memory, where the memory stores a control program;
  • when the control program is executed by the processor, the above image registration detection method, the above occlusion area detection method, or the above multi-shot image fusion method is implemented.
  • Fig. 18 is a block diagram showing another electronic device according to an embodiment of the present invention.
  • the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, and a sensor component 814 , And communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support the operation of the device 800. Examples of such data include instructions for any application software or method to operate on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
  • The memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
  • the power component 806 provides power for various components of the electronic device 800.
  • the power component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC).
  • the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
  • For example, the sensor component 814 can detect the on/off status of the device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800.
  • The sensor component 814 can also detect position changes of the electronic device 800 or a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • The electronic device 800 can be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components to perform the above methods.
  • the misregistration points can be judged, so that the accuracy of the registration can be further improved on the basis of the original image registration.
  • The embodiments of the present disclosure further provide a computer-readable storage medium that stores instructions that, when loaded and executed by a processor, implement the aforementioned image registration detection method, the aforementioned occlusion area detection method, or the aforementioned multi-shot image fusion method.
  • The computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (read-only memory), RAM (random access memory), EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), flash memory, magnetic cards, or optical cards. That is, a readable storage medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer).
  • the misregistration points can be judged, so that the accuracy of the registration can be further improved on the basis of the original image registration.
  • The technical solutions of the embodiments of the present invention, in essence, or the parts thereof that contribute to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present invention.
  • The aforementioned storage media include media that can store program code, such as USB flash drives, mobile hard disks, ROM, RAM, magnetic disks, or optical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

An image registration, fusion and occlusion detection method and apparatus, and an electronic device. The image registration detection method includes: after acquiring registration data of a first image and a second image, performing grid segmentation on the first image and the second image (100), the registration data including at least registration point coordinates and registration point displacements; calculating the homography matrix between each grid in the first image and the corresponding grid in the second image (200); calculating the difference between the registration point displacement of each registration point and the homography matrix displacement of the grid to which the registration point belongs, as the displacement difference; and determining misregistration points according to the displacement difference (400), wherein a registration point whose displacement difference with respect to the grid to which it belongs satisfies a preset condition is determined to be a misregistration point. In this way, by checking the registration points of the registered images, the misregistration points among them can be identified, so that the accuracy of registration can be further improved on the basis of the original image registration.

Description

Image registration, fusion and occlusion detection method and apparatus, and electronic device
This application claims priority to the Chinese patent application No. 201910603555.8, entitled "Image registration, fusion and occlusion detection method and apparatus, and electronic device", filed with the Chinese Patent Office on July 5, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of image processing, and in particular to an image registration, fusion and occlusion detection method and device, and an electronic device.
Background Art
Image registration is the process of matching and superimposing two or more images acquired at different times, by different sensors (imaging devices), or under different conditions (weather, illuminance, camera position and angle, etc.); it is an indispensable part of scenarios such as face recognition, identity verification, and smart cities.
How to perform image registration effectively and improve its accuracy is one of the most important research directions at present. On the basis of existing image registration, inspecting the registered images and identifying the misregistration points in them can further improve the accuracy of registration.
However, there is currently no effective method for inspecting image registration.
Summary of the Invention
The problem solved by the present invention is how to inspect registered images and identify the misregistration points in them.
To solve the above problem, the present invention first provides an image registration detection method, which includes:
after acquiring registration data of a first image and a second image, performing grid segmentation on the first image and the second image, the registration data including at least registration point coordinates and registration point displacements;
calculating the homography matrix between each grid in the first image and the corresponding grid in the second image;
calculating, according to the registration data and the homography matrix, the difference between the registration point displacement of each registration point and the homography matrix displacement of the grid to which the registration point belongs, as the displacement difference;
determining misregistration points according to the displacement difference, wherein a registration point whose displacement difference with respect to the grid to which it belongs satisfies a preset condition is determined to be a misregistration point.
Secondly, an occlusion area detection method is provided, which includes:
determining misregistration points according to the image registration detection method described above;
determining the occlusion area according to the area formed by the misregistration points.
An image registration detection device is further provided, which includes:
a grid segmentation unit, configured to perform grid segmentation on the first image and the second image after acquiring the registration data of the first image and the second image, the registration data including at least registration point coordinates and registration point displacements;
a matrix calculation unit, configured to calculate the homography matrix between each grid in the first image and the corresponding grid in the second image;
a displacement calculation unit, configured to calculate, according to the registration data and the homography matrix, the difference between the registration point displacement of each registration point and the homography matrix displacement of the grid to which the registration point belongs, as the displacement difference;
a registration determination unit, configured to determine misregistration points according to the displacement difference, wherein a registration point whose displacement difference with respect to the grid to which it belongs satisfies a preset condition is determined to be a misregistration point.
An occlusion area detection device is also provided, which includes:
the image registration detection device described above, configured to determine misregistration points;
an area determination unit, configured to determine the occlusion area according to the area formed by the misregistration points.
A multi-shot image fusion device is further provided, which includes:
a multi-shot image registration unit, configured to acquire multiple captured images and select two of them as the first image and the second image for registration;
the occlusion area detection device described above, configured to determine occlusion areas; and a traversal unit, configured to traverse the multiple captured images to determine the occlusion areas in the multiple captured images;
an image fusion unit, configured to perform image fusion on the remaining parts of the multiple captured images after the occlusion areas are excluded.
Finally, an electronic device is provided, including a processor and a memory, where the memory stores a control program that, when executed by the processor, implements the image registration detection method described above, the occlusion area detection method described above, or the multi-shot image fusion method described above.
In addition, a computer-readable storage medium is provided, which stores instructions that, when loaded and executed by a processor, implement the image registration detection method described above, the occlusion area detection method described above, or the multi-shot image fusion method described above.
The above description is only an overview of the technical solutions of the present invention. In order to understand the technical means of the present invention more clearly so that they can be implemented according to the contents of the specification, and to make the above and other objects, features and advantages of the present invention more apparent and understandable, specific embodiments of the present invention are set forth below.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1A is the left-side shot view in a schematic diagram according to an embodiment of the present invention;
FIG. 1B is the right-side shot view in the schematic diagram according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image registration detection method according to an embodiment of the present invention;
FIG. 3 is a flowchart of step 300 of the image registration detection method according to an embodiment of the present invention;
FIG. 4 is an example diagram of an image to be registered according to an embodiment of the present invention;
FIG. 5 is an example diagram of a reference image according to an embodiment of the present invention;
FIG. 6 is an example diagram of dividing an image to be registered into overlapping grids according to an embodiment of the present invention;
FIG. 7 is a flowchart of step 400 of the image registration detection method according to an embodiment of the present invention;
FIG. 8 is a flowchart of step 200 of the image registration detection method according to an embodiment of the present invention;
FIG. 9 is a flowchart of step 220 of the image registration detection method according to an embodiment of the present invention;
FIG. 10 is a flowchart of an occlusion area detection method according to an embodiment of the present invention;
FIG. 11 is an example diagram of a reference image after dense registration according to an embodiment of the present invention;
FIG. 12 is an example diagram of an occlusion area of the reference image according to an embodiment of the present invention;
FIG. 13 is a flowchart of a multi-shot image fusion method according to an embodiment of the present invention;
FIG. 14 is a structural block diagram of an image registration detection device according to an embodiment of the present invention;
FIG. 15 is a structural block diagram of an occlusion area detection device according to an embodiment of the present invention;
FIG. 16 is a structural block diagram of a multi-shot image fusion device according to an embodiment of the present invention;
FIG. 17 is a structural block diagram of an electronic device according to an embodiment of the present invention;
FIG. 18 is a block diagram of another electronic device according to an embodiment of the present invention.
Description of reference numerals:
2-grid segmentation unit, 3-matrix calculation unit, 4-displacement calculation unit, 5-registration determination unit, 6-area determination unit, 7-multi-shot image registration unit, 8-traversal unit, 9-image fusion unit, 800-electronic device, 802-processing component, 804-memory, 806-power component, 808-multimedia component, 810-audio component, 812-input/output (I/O) interface, 814-sensor component, 816-communication component.
Detailed Description of the Embodiments
To make the above objects, features and advantages of the present invention more apparent and understandable, specific embodiments of the present invention are described in detail below with reference to the drawings.
Obviously, the described embodiments are some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.
For ease of understanding, the technical problem addressed by the present invention is elaborated here in detail.
In existing image processing, dual-camera or multi-camera captured images often need to be fused to achieve a better shooting effect. Because the viewpoints of dual-camera or multi-camera captured images are inconsistent, parallax exists between them. This parallax causes occlusion regions in the captured images: during registration, the points in an occlusion region have no matching points in the other images and are therefore registered incorrectly, which in turn causes artifacts, colour casts and similar defects in the fused image.
For ease of understanding, the technical principle underlying the solution is explained here:
As shown in FIG. 1A and FIG. 1B, two images are acquired in a dual-camera manner (they may also be acquired by one camera shooting from different positions). Because the cameras differ in position, parallax arises in the images. Assume that the two cameras are arranged horizontally and that the cylinder in FIG. 1A and FIG. 1B is positioned in front of the cuboid. The left camera then obtains the left captured view and the right camera obtains the right captured view, as shown in FIG. 1A and FIG. 1B respectively.
In FIG. 1A and FIG. 1B, the left captured view shows more of the left-hand region of the cuboid behind the cylinder, while part of the right-hand region of the cuboid is occluded by the cylinder in front; likewise, in the right captured view, part of the left-hand region of the cuboid is occluded by the cylinder while more of the right-hand region is visible.
To fuse the dual-camera images, one of the captured views can be selected as the base view. In the following, fusion is performed with the left view as the base view: first, all points of the right view need to be registered to the left view, yielding the registration points of the two images. Owing to parallax, objects far from the cameras exhibit small parallax between the left and right captured views, while objects close to the cameras exhibit large parallax. Consequently, the registration points show discontinuities in parallax regions; based on this principle, the discontinuities between different parallaxes can be used to detect the occlusion regions in the two images.
As shown in FIG. 1A and FIG. 1B, suppose registration points a0 and a1 have been registered, and b0 and b1 have been registered; but for point c1 in the occlusion region of the right captured view, no true matching point can be found in the left view. Such a point can only be assigned an incorrect registration position according to the registration rule, and this registration point is both a mis-registered point and an occlusion point.
An embodiment of the present disclosure provides an image registration detection method, which can be executed by an image registration detection apparatus that can be integrated in an electronic device such as a mobile phone. FIG. 2 is a flowchart of the image registration detection method according to an embodiment of the present invention. The image registration detection method includes:
Step 100: after acquiring registration data of a first image and a second image, perform grid segmentation on the first image and the second image, the registration data including at least registration point coordinates and registration point displacements.
The first image and the second image may be images of an object or images of a person. In this step, acquiring the registration data of the first image and the second image may mean obtaining the registration data from the registration process or registration result of the already registered first and second images, or directly registering the first image and the second image to obtain their registration data.
The first image and the second image may be registered by relative registration, that is, the first image and the second image are respectively the image to be registered and the reference image. In this case, the registration point coordinates of the image to be registered and the registration point displacements from the image to be registered to the reference image can be read directly from the registration process, the registration point coordinates and the registration point displacements corresponding to each other. Moreover, the registration point coordinates in the image to be registered, the corresponding registration point coordinates in the reference image, and the registration point displacement between the two (the displacement from the image to be registered to the reference image and the displacement from the reference image to the image to be registered being vectors of opposite direction) are in a fixed correspondence: once any two of these quantities are known, the third can be computed from the correspondence. Therefore, the following are all feasible, equivalent schemes: directly reading the registration point coordinates of the image to be registered and the registration point displacements from the image to be registered to the reference image; or directly reading the registration point coordinates of the reference image and the registration point displacements from the image to be registered to the reference image (or from the reference image to the image to be registered), and computing the registration point coordinates of the image to be registered from the correspondence; or directly reading the registration point coordinates of the reference image and of the image to be registered, and computing the registration point displacements from the image to be registered to the reference image from the correspondence.
The first image and the second image may also be registered by absolute registration, that is, both the first image and the second image are images to be registered. In this case, the registration point coordinates of the two images to be registered and their registration point displacements to a control grid (which is defined in advance) can be read directly from the registration process, and the registration point coordinates and registration point displacements from the first image to the second image (or from the second image to the first image) can then be computed from these data.
To illustrate the above: FIG. 4 is an example diagram of the image to be registered and FIG. 5 is an example diagram of the reference image, where FIG. 4 is captured with the camera positioned towards the left of the toy doll and FIG. 5 is captured with the camera positioned towards its right.
Most regions of the images in FIG. 4 and FIG. 5 correspond to each other one-to-one; only some regions are occluded owing to the different shooting angles. For example, part of the region at the left ear of the toy doll in FIG. 4 has no corresponding position in FIG. 5, i.e. it is occluded.
In the example shown in FIG. 6, the upper-left part is a schematic diagram of overlapping grid division, from which it can be seen that in an overlapping grid adjacent cells share an overlapping part.
The first image and the second image may be identical or similar scene pictures captured from different angles by the camera in the acquisition apparatus 15, or identical or similar scene pictures captured at the same time by cameras at different positions of the same electronic device, or picture information input through a data input interface.
The first image and the second image may be two captured pictures, or any two of multiple captured pictures.
Grid segmentation is performed on the first image and the second image, and the grid size can be determined according to the actual situation. During segmentation, the grids can first be positioned according to the registration points, the vertices or other features of the first and second images, so as to establish the correspondence between the grids in the first image and the grids in the second image; other means can also be used to make the registration points in corresponding grids correspond strongly, facilitating the checking of registration accuracy.
To facilitate the description of the specific embodiments of the present invention, in the following the first image is taken as the image to be registered and the second image as the reference image. Based on this description, those skilled in the art can, by simple substitution, understand the image registration detection process in which the second image is the image to be registered and the first image is the reference image, or in which multiple images are involved.
Step 200: compute a homography matrix between each grid in the first image and the corresponding grid in the second image.
In this step, each pair of corresponding grids, one in the first image and one in the second image, has a homography matrix between them. Because of noise in the corresponding registration point coordinates, the homography matrix carries an error; a system of equations for the homography matrix can therefore be formed from multiple points, and the optimal homography matrix obtained by solving for the optimal solution. The optimal solution can be computed using the direct linear transform, singular value decomposition, the Levenberg-Marquardt (LM) algorithm or similar optimization methods.
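As an illustration of this step, the following is a minimal sketch (not the patent's reference implementation) of estimating the homography of one grid pair with OpenCV; the function name `grid_homography`, the point-array layout and the 3.0-pixel RANSAC reprojection threshold are assumptions made for illustration.

```python
import cv2
import numpy as np

def grid_homography(pts_a, pts_b):
    """Estimate the homography mapping the registration points of a cell in
    the first image onto the corresponding cell of the second image.

    RANSAC rejects noisy or mis-registered correspondences; OpenCV then
    refines the inlier solution internally (Levenberg-Marquardt style).
    """
    if len(pts_a) < 4:  # a homography needs at least 4 point pairs
        return None
    src = np.asarray(pts_a, dtype=np.float32).reshape(-1, 1, 2)
    dst = np.asarray(pts_b, dtype=np.float32).reshape(-1, 1, 2)
    H, _inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```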
Step 300: compute, according to the registration data and the homography matrices, the difference between the registration point displacement of each registration point and the homography displacement of the grid to which the registration point belongs, as a displacement difference.
The image to be registered has multiple registration points, and these registration points correspond one-to-one to registration points in the reference image (in the registered case); the displacement between two registration points in one-to-one correspondence is the registration point displacement of the registration point.
The image to be registered carries grids, each grid contains multiple registration points, and each grid has a corresponding grid in the reference image; a homography matrix exists between the two corresponding grids. A registration point in the image to be registered, transformed by the homography matrix, has a corresponding third registration point in the reference image (the third registration point is determined by the coordinates of the registration point in the image to be registered and the homography matrix; ideally, this third registration point coincides with the corresponding registration point in the reference image). The displacement between the registration point and its third registration point is the homography displacement.
The difference between the registration point displacement of a registration point and the homography displacement of the grid to which the registration point belongs is the displacement difference to be computed for each registration point in this step.
Step 400: determine mis-registered points according to the displacement differences, wherein a registration point whose displacement difference with respect to the grid it belongs to satisfies a preset condition is judged to be a mis-registered point.
If a registration point is registered correctly, its displacement difference with respect to each grid it belongs to is small / does not satisfy the preset condition (excluding error and noise interference, this displacement difference is zero); if a registration point is registered incorrectly, its displacement difference with respect to each grid it belongs to is large / satisfies the preset condition.
In this way, through steps 100 to 400, by checking the registration points of already registered images, the mis-registered points among them can be identified, so that registration accuracy can be further improved on the basis of the original image registration.
Optionally, the grid segmentation performed on the first image and the second image in step 100 is overlapping grid segmentation. With overlapping grids, the same registration point can be assigned to two or more grids, so that its displacement differences in the multiple grids can be computed separately. In this way, two or more displacement differences are obtained, and whether the registration of the point is correct is determined by jointly judging them. This manner of judgment can reduce or even eliminate inaccurate judgments of registration correctness caused by noise or error, further improving the accuracy of the registration judgment.
Optionally, in the overlapping grid segmentation of the first image and the second image, the overlapping area of adjacent grids in the first image and in the second image is at least 1/2 of the area of a single grid. This guarantees that, apart from registration points at a very small portion of edge positions, every registration point is assigned to at least two grids, so that apart from those edge points, every registration point can be judged jointly through the displacement differences of multiple grids, improving the accuracy of the registration judgment. A sketch of such a grid layout is given below.
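A minimal sketch of generating such an overlapping grid, assuming square cells and a stride of half the cell size, so that horizontally or vertically adjacent cells share 1/2 of their area; the default cell size of 64 pixels is an assumed value, not one taken from the patent.

```python
def overlapping_cells(width, height, cell=64):
    """Yield (x0, y0, x1, y1) cells over a width x height image.

    The stride is half the cell size, so adjacent cells overlap by half a
    cell and almost every registration point falls into at least two cells.
    """
    step = cell // 2
    for y in range(0, max(height - cell, 0) + 1, step):
        for x in range(0, max(width - cell, 0) + 1, step):
            yield (x, y, x + cell, y + cell)
```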
Optionally, in step 100, in which grid segmentation is performed on the first image and the second image after their registration data are acquired, the registration of the first image and the second image is dense registration.
Dense registration is an image registration method that matches images point by point: it computes the offset of every point in the image, forming a dense optical-flow field. With this dense optical-flow field, pixel-level image registration can be performed, so the registered result is better and more accurate.
In this way, through dense registration, the first image and the second image can be registered more accurately, improving registration accuracy. One common way to obtain such a dense displacement field is sketched below.
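A minimal sketch of one way to obtain a dense displacement field; the patent does not prescribe a particular dense registration algorithm, and Farneback optical flow is used here purely as a stand-in, with assumed parameter values.

```python
import cv2

def dense_displacements(src_gray, ref_gray):
    """Return an (H, W, 2) float array: for every pixel of the image to be
    registered, its displacement towards the reference image, i.e. a dense
    field of registration point displacements."""
    return cv2.calcOpticalFlowFarneback(
        src_gray, ref_gray, None,
        0.5,   # pyramid scale between levels
        3,     # number of pyramid levels
        15,    # averaging window size
        3,     # iterations per pyramid level
        5,     # neighbourhood size for the polynomial expansion
        1.2,   # Gaussian sigma for the expansion
        0)     # flags
```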
FIG. 6 is an example diagram of the image to be registered divided into overlapping grids according to an embodiment of the present invention. Step 300, computing, according to the registration data and the homography matrices, the difference between the registration point displacement of each registration point and the homography displacement of the grid to which the registration point belongs as the displacement difference, includes:
Step 310: determine, according to the registration data of the first image and the second image, the reference image for registration.
According to the above example, the second image is the reference image.
Step 320: acquire the homography matrices of the grids in the image to be registered, together with the registration point coordinates and registration point displacements of the registration points contained in the grids; the image to be registered is the one of the first image and the second image other than the reference image.
According to the above example, the first image is the image to be registered.
Step 330: compute, according to the homography matrix of a grid in the image to be registered and the registration point coordinates of the registration points contained in the grid, the homography displacement of the grid to which each registration point belongs.
Step 340: compute the difference between the registration point displacement of the registration point and the homography displacement of the grid to which the registration point belongs, as the displacement difference.
To facilitate the description of the registration points in the reference image and in the image to be registered, and of the registration points corresponding to the homography matrices, the registration point in the image to be registered is called the first registration point, the registration point in the reference image registered with the first registration point is called the second registration point, and the corresponding registration point computed from the first registration point and the homography matrix is called the third registration point.
Thus, in steps 330 and 340, the registration point displacement of a registration point is the displacement between the first and second registration points; the homography displacement of the point with respect to the grid it belongs to is the displacement between the first and third registration points; and the difference between the registration point displacement and the homography displacement of the grid it belongs to is the displacement difference between these two displacements.
A concrete computation of the displacement difference between a registration point and the grid it belongs to can be as follows:
directly acquire the registration displacement of the first registration point, or first acquire the coordinates of the first registration point and then the coordinates of the second registration point and compute the registration displacement of the first registration point; compute the coordinates of the third registration point from the coordinates of the first registration point and the homography matrix, and from these compute the homography displacement of the first registration point; then compute the displacement difference from the registration displacement and the homography displacement of the first registration point.
In addition, on the basis of steps 330 and 340, an improved concrete computation of the displacement difference between a registration point and the grid it belongs to can be proposed:
directly acquire the coordinates of the second registration point, or first acquire the coordinates of the first registration point and its registration displacement and compute the coordinates of the second registration point; compute the coordinates of the third registration point from the coordinates of the first registration point and the homography matrix; then compute the displacement difference from the coordinates of the second and third registration points, the displacement difference being the displacement between the coordinates of the second registration point and those of the third registration point.
The above two concrete computations can also be varied to a limited extent through the correspondence among the first, second and third registration points, yielding new concrete computations; such varied processes still fall within the protection scope of the present invention.
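The improved computation above can be sketched as follows; `p1`, `d` and `H` denote the first registration point, its registration point displacement and the homography of one cell containing the point, and the helper name is an assumption for illustration.

```python
import numpy as np

def displacement_diff(p1, d, H):
    """Displacement difference of one registration point in one cell.

    p1: first registration point (in the image to be registered);
    d:  its registration point displacement;
    H:  3x3 homography matrix of a cell containing p1.
    """
    v = H @ np.array([p1[0], p1[1], 1.0])
    p3 = v[:2] / v[2]                     # third registration point
    p2 = np.asarray(p1) + np.asarray(d)   # second registration point
    # |registration displacement - homography displacement| = |p2 - p3|
    return float(np.linalg.norm(p2 - p3))
```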
FIG. 7 is a flowchart of step 400 of the image registration detection method according to an embodiment of the present invention. Step 400, determining mis-registered points according to the displacement differences, includes:
Step 410: acquire, for the same registration point, the displacement differences with respect to the different grids to which the registration point belongs; the same registration point belongs to at least two grids.
A registration point is contained by a grid in the image (the registration point lies inside the grid), and that grid is a grid to which the registration point belongs.
When overlapping grid segmentation is performed on the first image and the second image, two adjacent grids in the same image share an overlapping part after segmentation, and a registration point located inside the overlapping part thus belongs to the two adjacent grids.
Similarly, a registration point may also belong to three or more grids.
In this way, a registration point belongs to more than one grid and accordingly has more than one displacement difference; this step acquires the multiple displacement differences of the multiple grids to which the registration point belongs.
Step 420: judge whether the displacement differences between the registration point and the different grids to which it belongs are all greater than a preset threshold.
The preset threshold is the dividing line between small and large displacement differences (not satisfying or satisfying the preset condition) between a registration point and the grids it belongs to; it separates small displacement differences (not satisfying the preset condition) from large ones (satisfying the preset condition), so that the correctness of the registration can be judged.
The preset threshold can be determined from the actual situation; for example, the computed displacement differences between registration points and their multiple grids can be collected statistically to find the dividing line between small and large displacement differences, and a dividing line in the middle portion is selected as the preset threshold. The preset threshold can also be obtained in other ways.
Step 430: if all are greater than the preset threshold, the registration point is a mis-registered point.
In the process of computing the homography matrix of two grids, different choices of registration points may lead to different computed homography matrices. If the chosen registration points are mis-registered points, the computed homography matrix will differ greatly from the actual one, and the computed displacement difference will then be noticeably large (even for correctly registered points). Therefore, a registration point cannot be judged to be a mis-registered point merely because its displacement difference exceeds the preset threshold.
However, although such a (correctly registered) registration point may exhibit a large displacement difference in a single grid it belongs to, the probability that its displacement differences in the remaining grids it belongs to are also large is extremely small. Therefore, a registration point whose displacement differences are greater than the preset threshold in all the grids it belongs to is judged to be a mis-registered point.
In this way, the accuracy of the judgment can be further improved, reducing the probability of judging a correct registration point to be a mis-registered point. The decision rule is sketched below.
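A minimal sketch of this decision rule, assuming the displacement differences of one point have already been computed for every grid containing it; the function name is illustrative.

```python
def is_misregistered(diffs, threshold):
    """diffs: displacement differences of one registration point, one per
    grid it belongs to. The point is flagged only if every grid agrees, so a
    single badly estimated cell homography cannot condemn a correctly
    registered point."""
    return bool(diffs) and all(diff > threshold for diff in diffs)
```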
FIG. 8 is a flowchart of step 200 of the image registration detection method according to an embodiment of the present invention. Step 200, computing the homography matrix between each grid in the first image and the corresponding grid in the second image, includes:
Step 210: respectively acquire the registration data of a grid in the first image and of the corresponding grid in the second image.
The registration data in this step include at least the registration point coordinates and the registration point displacements.
Step 220: filter the registration data.
Owing to the existence of mis-registered points, the first registration points inside a grid of the first image and the second registration points inside the corresponding grid of the second image are not in one-to-one correspondence. That is, there may be a first registration point inside the grid of the first image whose corresponding second registration point is not inside the corresponding grid of the second image, or a second registration point inside the grid of the second image whose corresponding first registration point is not inside the corresponding grid of the first image.
For ease of exposition, the grid of the first image is called the first grid and the grid in the second image corresponding to the first grid is called the second grid; a registration point in the first image is a first registration point, and the registration point in the second image corresponding to the first registration point is a second registration point. Filtering the registration data then means: if the second registration point corresponding to a first registration point inside the first grid is not inside the second grid, that first registration point is filtered out, keeping only first registration points that have a corresponding second registration point inside the second grid; on this basis, if the first registration point corresponding to a second registration point inside the second grid is not inside the first grid, that second registration point is filtered out, keeping only second registration points that have a corresponding first registration point inside the first grid. After these two rounds of filtering, the retained first registration points inside the first grid and second registration points inside the second grid correspond to each other (points with only a first registration point and no second registration point, or only a second registration point and no first registration point, have all been filtered out).
Step 230: compute the homography matrix between the grid in the first image and the corresponding grid in the second image according to the filtered registration data.
In this way, the corresponding first and second registration points in the filtered registration data are more likely to be accurately registered, so the computed homography matrix is also more accurate.
FIG. 9 is a flowchart of step 220 of the image registration detection method according to an embodiment of the present invention. Step 220, filtering the registration data, includes:
Step 221: determine, according to the registration point coordinates and registration point displacements in the registration data, the membership relation between the registration point coordinates and the grid in the first image as well as the corresponding grid in the second image; the membership relation is whether the registration point is located inside the grid in the first image or inside the grid in the second image.
Step 222: filter the registration points according to the membership relation, so that in the retained registration data the two registered points of each pair are located inside the grid in the first image and inside the corresponding grid in the second image, respectively.
In this way, filtering the registration points further improves the accuracy of the computed homography matrix. A sketch of such a two-way filter is given below.
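A minimal sketch of the two-way filter of steps 221 and 222, assuming grids are axis-aligned (x0, y0, x1, y1) boxes; the helper names are illustrative.

```python
def inside(p, box):
    """True if point p = (x, y) lies inside the half-open box."""
    x0, y0, x1, y1 = box
    return x0 <= p[0] < x1 and y0 <= p[1] < y1

def filter_pairs(pairs, cell_a, cell_b):
    """pairs: iterable of (p1, p2) registered point pairs. Keep a pair only
    when p1 lies inside the first cell AND p2 lies inside the corresponding
    cell of the second image, so the retained points of the two cells are in
    mutual correspondence."""
    return [(p1, p2) for p1, p2 in pairs
            if inside(p1, cell_a) and inside(p2, cell_b)]
```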
An embodiment of the present disclosure provides an occlusion region detection method, which can be executed by an occlusion region detection apparatus that can be integrated in an electronic device such as a mobile phone. FIG. 10 is a flowchart of the occlusion region detection method according to an embodiment of the present invention. The occlusion region detection method includes:
determining mis-registered points according to the image registration detection method described above; for the specific content of determining mis-registered points according to the image registration detection method, reference can be made to the specific description of the image registration detection method, which is not repeated here;
Step 500: determine the occlusion region according to the region formed by the mis-registered points.
The set of mis-registered points (mis-registered points may also be called occlusion points) is the occlusion region. In this way, the occlusion region can be detected; on this basis, the remainder of the image to be registered excluding the occlusion region can be registered to the reference image (or two or more images excluding the occlusion regions can be fused), so that a larger fusion region can be obtained while reducing artifacts and colour casts.
Regarding artifacts and colour casts after image fusion: FIG. 11 in the example shows the reference image after registration with the image to be registered. Because the image to be registered has an occlusion region at the left ear of the toy doll (see FIG. 12 in the example), artifacts, colour casts and distortion appear at the left ear of the toy doll in the registered (fused) reference image. The part enclosed by the box in FIG. 12 is the detected occlusion region (a schematic only; the actual occlusion region is irregular).
On the basis of the occlusion region detected above, for dual-camera image fusion, the reference image can first be specified, and registration and fusion performed after excluding the occlusion region of the image to be registered. For multi-camera image fusion, the reference image can first be specified, and the other images among the multi-camera images registered and fused one by one after their occlusion regions are excluded. In this way, a fused image with fewer artifacts and less colour cast can be obtained after fusion.
By traversing all the registration points of the image to be registered, whether each registration point is a mis-registered point can be judged one by one, and the occlusion region is then determined from the mis-registered points. A sketch of assembling the occlusion region follows.
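A minimal sketch of turning the set of mis-registered (occlusion) points into an occlusion region: rasterising them into a binary mask and dilating it is one plausible way to obtain a contiguous region; the elliptical kernel and its size are assumptions, not values from the patent. The sketch assumes all points lie inside the image bounds.

```python
import cv2
import numpy as np

def occlusion_mask(bad_points, image_shape, kernel_size=9):
    """Rasterise mis-registered points into a binary mask (255 = occluded)
    and dilate it so the detected occlusion region becomes contiguous."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    for x, y in bad_points:
        mask[int(round(y)), int(round(x))] = 255
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.dilate(mask, kernel)
```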
An embodiment of the present disclosure provides a multi-camera image fusion method, which can be executed by a multi-camera image fusion apparatus that can be integrated in an electronic device such as a mobile phone. FIG. 13 is a flowchart of the multi-camera image fusion method according to an embodiment of the present invention. The multi-camera image fusion method includes:
Step 000: acquire multiple captured images, and select two of them as the first image and the second image for registration.
The multiple captured images may be acquired simultaneously from multiple cameras, received as image data through a data interface, captured by one camera from multiple positions, or acquired in other ways.
In addition, selecting two of the multiple captured images as the first image and the second image may be done by randomly extracting two images, or by designating one of the multiple captured images as the reference image, taking all remaining images as images to be registered, and extracting one of them together with the reference image as the first image and the second image.
The occlusion regions are determined according to the occlusion region detection method described above.
Step 600: traverse the multiple captured images and determine the occlusion regions in the multiple captured images.
Step 700: perform image fusion on the remaining parts of the multiple captured images after the occlusion regions are excluded.
In this multi-camera image fusion method, for the specific content of determining the occlusion regions according to the occlusion region detection method, reference can be made to the specific description of the occlusion region detection method, which is not repeated here.
In this way, the occlusion regions can be detected by the occlusion region detection method; on this basis, the remainder of each image to be registered excluding its occlusion region can be registered to the reference image (or two or more images excluding their occlusion regions can be fused), so that a larger fusion region can be obtained while reducing artifacts and colour casts.
The example of FIG. 11 and FIG. 12 described above for artifacts, colour casts and distortion after fusion applies here as well.
On the basis of the occlusion regions detected above, for multi-camera image fusion, the reference image can first be specified, and the other images among the multi-camera images registered and fused one by one after their occlusion regions are excluded. In this way, a fused image with fewer artifacts and less colour cast, and without distortion, can be obtained after fusion.
Optionally, the number of captured images is two. This is then a dual-camera image fusion method: on the basis of the occlusion region detected above, registration and fusion are performed after excluding the occlusion region of the image to be registered, so that a fused image with fewer artifacts and less colour cast can be obtained after fusion. A sketch of such mask-aware fusion is given below.
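A minimal sketch of dual-camera fusion that excludes the occlusion region; inside the mask only the reference image contributes, while the equal-weight blending used elsewhere is an assumed choice for illustration, not the patent's prescribed fusion rule.

```python
import numpy as np

def fuse_pair(reference, registered, mask):
    """reference / registered: HxWx3 arrays of the same dtype;
    mask: HxW, non-zero where the registered image is occluded.
    Occluded pixels come from the reference only; the rest are blended."""
    occluded = (mask > 0)[..., None]
    blended = 0.5 * reference.astype(np.float32) \
            + 0.5 * registered.astype(np.float32)
    return np.where(occluded, reference.astype(np.float32),
                    blended).astype(reference.dtype)
```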
An embodiment of the present disclosure provides an image registration detection apparatus for executing the image registration detection method described above; the image registration detection apparatus is described in detail below.
FIG. 14 is a structural block diagram of the image registration detection apparatus according to an embodiment of the present invention. The image registration detection apparatus includes:
a grid segmentation unit 2, configured to perform grid segmentation on the first image and the second image after acquiring registration data of a first image and a second image, the registration data including at least registration point coordinates and registration point displacements;
a matrix computation unit 3, configured to compute a homography matrix between each grid in the first image and the corresponding grid in the second image;
a displacement computation unit 4, configured to compute, according to the registration data and the homography matrices, the difference between the registration point displacement of each registration point and the homography displacement of the grid to which the registration point belongs, as a displacement difference;
a registration determination unit 5, configured to determine mis-registered points according to the displacement differences, wherein a registration point whose displacement difference with respect to the grid it belongs to satisfies a preset condition is judged to be a mis-registered point.
In this way, by checking the registration points of already registered images, the mis-registered points among them can be identified, so that registration accuracy can be further improved on the basis of the original image registration.
Optionally, in the grid segmentation unit 2, the grid segmentation performed on the first image and the second image is overlapping grid segmentation.
Optionally, in the grid segmentation unit 2, in the overlapping grid segmentation of the first image and the second image, the overlapping area of adjacent grids in the first image and in the second image is at least 1/2 of the area of a single grid.
Optionally, in the grid segmentation unit 2, the registration of the first image and the second image is dense registration.
Optionally, the displacement computation unit 4 is further configured to: determine, according to the registration data of the first image and the second image, the reference image for registration; acquire the homography matrices of the grids in the image to be registered, together with the registration point coordinates and registration point displacements of the registration points contained in the grids, the image to be registered being the one of the first image and the second image other than the reference image; compute, according to the homography matrix of a grid in the image to be registered and the registration point coordinates of the registration points contained in the grid, the homography displacement of the grid to which each registration point belongs; and compute the difference between the registration point displacement of the registration point and the homography displacement of the grid to which it belongs, as the displacement difference.
Optionally, the registration determination unit 5 is further configured to: acquire, for the same registration point, the displacement differences with respect to the different grids to which it belongs, the same registration point belonging to at least two grids; judge whether the displacement differences between the registration point and the grids it belongs to are all greater than a preset threshold; and, if all are greater than the preset threshold, judge the registration point to be a mis-registered point.
Optionally, the matrix computation unit 3 is further configured to: respectively acquire the registration data of a grid in the first image and of the corresponding grid in the second image; filter the registration data; and compute the homography matrix between the grid in the first image and the corresponding grid in the second image according to the filtered registration data.
Optionally, the matrix computation unit 3 is further configured to: determine, according to the registration point coordinates and registration point displacements in the registration data, the membership relation between the registration point coordinates and the grid in the first image as well as the corresponding grid in the second image, the membership relation being whether the registration point is located inside the grid in the first image or inside the grid in the second image; and filter the registration points according to the membership relation, so that in the retained registration data the two registered points of each pair are located inside the grid in the first image and inside the corresponding grid in the second image, respectively.
An embodiment of the present disclosure provides an occlusion region detection apparatus for executing the occlusion region detection method described above; the occlusion region detection apparatus is described in detail below.
FIG. 15 is a structural block diagram of the occlusion region detection apparatus according to an embodiment of the present invention. The occlusion region detection apparatus includes:
the image registration detection apparatus described above, configured to determine mis-registered points;
a region determination unit 6, configured to determine the occlusion region according to the region formed by the mis-registered points.
In this way, the occlusion region can be detected; on this basis, the remainder of the image to be registered excluding the occlusion region can be registered to the reference image (or two or more images excluding the occlusion regions can be fused), so that a larger fusion region can be obtained while reducing artifacts and colour casts.
In this occlusion region detection apparatus, for the specific content of the image registration detection apparatus configured to determine mis-registered points, reference can be made to the specific description of the image registration detection apparatus, which is not repeated here.
An embodiment of the present disclosure provides a multi-camera image fusion apparatus for executing the multi-camera image fusion method described above; the multi-camera image fusion apparatus is described in detail below.
FIG. 16 is a structural block diagram of the multi-camera image fusion apparatus according to an embodiment of the present invention. The multi-camera image fusion apparatus includes:
a multi-camera image registration unit 7, configured to acquire multiple captured images and select two of them as the first image and the second image for registration;
the occlusion region detection apparatus described above, configured to determine occlusion regions;
a traversal unit 8, configured to traverse the multiple captured images and determine the occlusion regions in the multiple captured images;
an image fusion unit 9, configured to perform image fusion on the remaining parts of the multiple captured images after the occlusion regions are excluded.
On the basis of the occlusion regions detected above, for multi-camera image fusion, the reference image can first be specified, and the other images among the multi-camera images registered and fused one by one after their occlusion regions are excluded. In this way, a fused image with fewer artifacts and less colour cast, and without distortion, can be obtained after fusion.
It should be noted that the apparatus embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and other divisions are possible in actual implementation; as another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The internal functions and structures of the image registration detection apparatus, the occlusion region detection apparatus and the multi-camera image fusion apparatus have been described above. As shown in FIG. 17, in practice these apparatuses can be implemented as an electronic device including a processor and a memory, the memory storing a control program which, when executed by the processor, implements the image registration detection method described above, or implements the occlusion region detection method described above, or implements the multi-camera image fusion method described above.
FIG. 18 is a block diagram of another electronic device according to an embodiment of the present invention. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, etc.
Referring to FIG. 18, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814 and a communication component 816.
The processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operations and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the device 800. Examples of such data include instructions of any application or method operated on the electronic device 800, contact data, phonebook data, messages, pictures, videos, etc. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disc.
The power component 806 provides power to the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touchscreen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in an operating mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 also includes a loudspeaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to, a home button, volume buttons, a start button and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect the on/off state of the device 800 and the relative positioning of components (e.g. the display and keypad of the electronic device 800); it may also detect a change in position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic elements for executing the methods described above.
Applying the embodiments of the present application has at least the following beneficial effect:
by checking the registration points of already registered images, the mis-registered points among them can be identified, so that registration accuracy can be further improved on the basis of the original image registration.
An embodiment of the present disclosure provides a computer-readable storage medium storing instructions which, when loaded and executed by a processor, implement the image registration detection method described above, or implement the occlusion region detection method described above.
The computer-readable storage medium provided by the embodiments of the present application includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical discs, CD-ROMs and magneto-optical disks), ROM (read-only memory), RAM (random access memory), EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), flash memory, magnetic cards or optical cards. That is, a readable storage medium includes any medium that stores or transmits information in a form readable by a device (e.g. a computer).
Applying the embodiments of the present application has at least the following beneficial effect:
by checking the registration points of already registered images, the mis-registered points among them can be identified, so that registration accuracy can be further improved on the basis of the original image registration.
The technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage media include media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
Those skilled in the art can understand that the steps, measures and schemes in the various operations, methods and flows already discussed in this application can be alternated, changed, combined or deleted. Further, other steps, measures and schemes in the various operations, methods and flows already discussed in this application can also be alternated, changed, rearranged, decomposed, combined or deleted. Further, steps, measures and schemes in the prior art corresponding to the various operations, methods and flows disclosed in this application can also be alternated, changed, rearranged, decomposed, combined or deleted.
The above are only some embodiments of this application. It should be pointed out that for those of ordinary skill in the art, several improvements and refinements can be made without departing from the principles of this application, and these improvements and refinements should also be regarded as falling within the protection scope of this application.

Claims (16)

  1. An image registration detection method, characterized by comprising:
    after acquiring registration data of a first image and a second image, performing grid segmentation on the first image and the second image, the registration data comprising at least registration point coordinates and registration point displacements;
    computing a homography matrix between each grid in the first image and the corresponding grid in the second image;
    computing, according to the registration data and the homography matrices, the difference between the registration point displacement of each registration point and the homography displacement of the grid to which the registration point belongs, as a displacement difference;
    determining mis-registered points according to the displacement differences, wherein a registration point whose displacement difference with respect to the grid it belongs to satisfies a preset condition is judged to be a mis-registered point.
  2. The image registration detection method according to claim 1, characterized in that, in performing grid segmentation on the first image and the second image after acquiring the registration data of the first image and the second image, the grid segmentation performed on the first image and the second image is overlapping grid segmentation.
  3. The image registration detection method according to claim 2, characterized in that, in the overlapping grid segmentation of the first image and the second image, the overlapping area of adjacent grids in the first image and in the second image is at least 1/2 of the area of a single grid.
  4. The image registration detection method according to any one of claims 1-3, characterized in that, in performing grid segmentation on the first image and the second image after acquiring the registration data of the first image and the second image, the registration of the first image and the second image is dense registration.
  5. The image registration detection method according to claim 1, characterized in that the computing, according to the registration data and the homography matrices, the difference between the registration point displacement of each registration point and the homography displacement of the grid to which the registration point belongs, as a displacement difference, comprises:
    determining, according to the registration data of the first image and the second image, the reference image for registration;
    acquiring the homography matrices of the grids in the image to be registered, together with the registration point coordinates and registration point displacements of the registration points contained in the grids, the image to be registered being the one of the first image and the second image other than the reference image;
    computing, according to the homography matrix of a grid in the image to be registered and the registration point coordinates of the registration points contained in the grid, the homography displacement of the grid to which each registration point belongs;
    computing the difference between the registration point displacement of the registration point and the homography displacement of the grid to which the registration point belongs, as the displacement difference.
  6. The image registration detection method according to any one of claims 1-3 or 5, characterized in that the determining mis-registered points according to the displacement differences comprises:
    acquiring, for the same registration point, the displacement differences with respect to the different grids to which the registration point belongs, wherein the same registration point belongs to at least two grids;
    judging whether the displacement differences between the registration point and the different grids to which the registration point belongs are all greater than a preset threshold;
    if all are greater than the preset threshold, the registration point is a mis-registered point.
  7. The image registration detection method according to any one of claims 1-3 or 5, characterized in that the computing the homography matrix between each grid in the first image and the corresponding grid in the second image comprises:
    respectively acquiring the registration data of a grid in the first image and of the corresponding grid in the second image;
    filtering the registration data;
    computing the homography matrix between the grid in the first image and the corresponding grid in the second image according to the filtered registration data.
  8. The image registration detection method according to claim 7, characterized in that the filtering the registration data comprises:
    determining, according to the registration point coordinates and the registration point displacements in the registration data, the membership relation between the registration point coordinates and the grid in the first image as well as the corresponding grid in the second image, the membership relation being whether the registration point is located inside the grid in the first image or inside the grid in the second image;
    filtering the registration points according to the membership relation, wherein in the retained registration data the two registered registration points are located inside the grid in the first image and inside the corresponding grid in the second image, respectively.
  9. An occlusion region detection method, characterized by comprising:
    determining mis-registered points according to the image registration detection method of any one of claims 1-8;
    determining an occlusion region according to the region formed by the mis-registered points.
  10. A multi-camera image fusion method, characterized by comprising:
    acquiring multiple captured images, and selecting two of them as the first image and the second image for registration;
    determining occlusion regions according to the occlusion region detection method of claim 9;
    traversing the multiple captured images and determining the occlusion regions in the multiple captured images;
    performing image fusion on the remaining parts of the multiple captured images after the occlusion regions are excluded.
  11. The multi-camera image fusion method according to claim 10, characterized in that the number of captured images is two.
  12. An image registration detection apparatus, characterized by comprising:
    a grid segmentation unit (2), configured to perform grid segmentation on the first image and the second image after acquiring registration data of a first image and a second image, the registration data comprising at least registration point coordinates and registration point displacements;
    a matrix computation unit (3), configured to compute a homography matrix between each grid in the first image and the corresponding grid in the second image;
    a displacement computation unit (4), configured to compute, according to the registration data and the homography matrices, the difference between the registration point displacement of each registration point and the homography displacement of the grid to which the registration point belongs, as a displacement difference;
    a registration determination unit (5), configured to determine mis-registered points according to the displacement differences, wherein a registration point whose displacement difference with respect to the grid it belongs to satisfies a preset condition is judged to be a mis-registered point.
  13. An occlusion region detection apparatus, characterized by comprising:
    the image registration detection apparatus of claim 12, configured to determine mis-registered points;
    a region determination unit (6), configured to determine an occlusion region according to the region formed by the mis-registered points.
  14. A multi-camera image fusion apparatus, characterized by comprising:
    a multi-camera image registration unit (7), configured to acquire multiple captured images and select two of them as the first image and the second image for registration;
    the occlusion region detection apparatus of claim 13, configured to determine occlusion regions; a traversal unit (8), configured to traverse the multiple captured images and determine the occlusion regions in the multiple captured images;
    an image fusion unit (9), configured to perform image fusion on the remaining parts of the multiple captured images after the occlusion regions are excluded.
  15. An electronic device, comprising a processor and a memory, characterized in that the memory stores a control program which, when executed by the processor, implements the image registration detection method of any one of claims 1-8, or implements the occlusion region detection method of claim 9, or implements the multi-camera image fusion method of any one of claims 10-11.
  16. A computer-readable storage medium storing instructions, characterized in that the instructions, when loaded and executed by a processor, implement the image registration detection method of any one of claims 1-8, or implement the occlusion region detection method of claim 9, or implement the multi-camera image fusion method of any one of claims 10-11.
PCT/CN2020/096365 2019-07-05 2020-06-16 Image registration, fusion and occlusion detection method and apparatus, and electronic device WO2021004237A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/622,973 US20220245839A1 (en) 2019-07-05 2020-06-16 Image registration, fusion and shielding detection methods and apparatuses, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910603555.8A CN110458870B (zh) 2019-07-05 2019-07-05 Image registration, fusion and occlusion detection method and apparatus, and electronic device
CN201910603555.8 2019-07-05

Publications (1)

Publication Number Publication Date
WO2021004237A1 true WO2021004237A1 (zh) 2021-01-14

Family

ID=68482275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096365 WO2021004237A1 (zh) 2019-07-05 2020-06-16 Image registration, fusion and occlusion detection method and apparatus, and electronic device

Country Status (3)

Country Link
US (1) US20220245839A1 (zh)
CN (1) CN110458870B (zh)
WO (1) WO2021004237A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458870B (zh) * 2019-07-05 2020-06-02 北京迈格威科技有限公司 Image registration, fusion and occlusion detection method and apparatus, and electronic device
CN112637515B (zh) * 2020-12-22 2023-02-03 维沃软件技术有限公司 Photographing method and apparatus, and electronic device
CN112927276B (zh) * 2021-03-10 2024-03-12 杭州海康威视数字技术股份有限公司 Image registration method and apparatus, electronic device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150101806A (ko) * 2014-02-27 2015-09-04 동의대학교 산학협력단 Around-view monitoring system and method using automatic recognition of a grid pattern
CN105389787A (zh) * 2015-09-30 2016-03-09 华为技术有限公司 Panoramic image stitching method and apparatus
CN105574838A (zh) * 2014-10-15 2016-05-11 上海弘视通信技术有限公司 Image registration and stitching method and apparatus for multi-lens cameras
CN105631850A (zh) * 2014-11-21 2016-06-01 奥多比公司 Aligning multi-view scans
CN106504277A (zh) * 2016-11-18 2017-03-15 辽宁工程技术大学 Improved automatic ICP point cloud registration method
CN107274337A (zh) * 2017-06-20 2017-10-20 长沙全度影像科技有限公司 Image stitching method based on improved optical flow
CN107945113A (zh) * 2017-11-17 2018-04-20 北京天睿空间科技股份有限公司 Method for correcting misalignment in local image stitching
CN110458870A (zh) * 2019-07-05 2019-11-15 北京迈格威科技有限公司 Image registration, fusion and occlusion detection method and apparatus, and electronic device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6879731B2 (en) * 2003-04-29 2005-04-12 Microsoft Corporation System and process for generating high dynamic range video
CN100588269C (zh) * 2008-09-25 2010-02-03 浙江大学 Camera array calibration method based on matrix decomposition
WO2015085008A1 (en) * 2013-12-03 2015-06-11 Viewray Incorporated Single-and multi-modality alignment of medical images in the presence of non-rigid deformations using phase correlation
US9998666B2 (en) * 2015-08-26 2018-06-12 Duke University Systems and methods for burst image deblurring
CN105389815B (zh) * 2015-10-29 2022-03-01 武汉联影医疗科技有限公司 Breast image registration method and apparatus
JP6515039B2 (ja) * 2016-01-08 2019-05-15 Kddi株式会社 Program, apparatus and method for calculating the normal vector of a planar object appearing in successive captured images
CN105761254B (zh) * 2016-02-04 2019-01-01 浙江工商大学 Fundus image registration method based on image features
CN107784623B (zh) * 2016-08-31 2023-04-14 通用电气公司 Image processing method and apparatus for an X-ray imaging device
US10366501B2 (en) * 2016-11-07 2019-07-30 The Boeing Company Method and apparatus for performing background image registration
CN107369168B (zh) * 2017-06-07 2021-04-02 安徽师范大学 Method for purifying registration points against a heavily contaminated background
US10726599B2 (en) * 2017-08-17 2020-07-28 Adobe Inc. Realistic augmentation of images and videos with graphics
CN107590234B (zh) * 2017-09-07 2020-06-09 哈尔滨工业大学 RANSAC-based method for reducing redundant information in an indoor visual positioning database
CN107734268A (zh) * 2017-09-18 2018-02-23 北京航空航天大学 Structure-preserving wide-baseline video stitching method
CN107767339B (zh) * 2017-10-12 2021-02-02 深圳市未来媒体技术研究院 Binocular stereoscopic image stitching method
CN109934858B (zh) * 2019-03-13 2021-06-22 北京旷视科技有限公司 Image registration method and apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150101806A (ko) * 2014-02-27 2015-09-04 동의대학교 산학협력단 Around-view monitoring system and method using automatic recognition of a grid pattern
CN105574838A (zh) * 2014-10-15 2016-05-11 上海弘视通信技术有限公司 Image registration and stitching method and apparatus for multi-lens cameras
CN105631850A (zh) * 2014-11-21 2016-06-01 奥多比公司 Aligning multi-view scans
CN105389787A (zh) * 2015-09-30 2016-03-09 华为技术有限公司 Panoramic image stitching method and apparatus
CN106504277A (zh) * 2016-11-18 2017-03-15 辽宁工程技术大学 Improved automatic ICP point cloud registration method
CN107274337A (zh) * 2017-06-20 2017-10-20 长沙全度影像科技有限公司 Image stitching method based on improved optical flow
CN107945113A (zh) * 2017-11-17 2018-04-20 北京天睿空间科技股份有限公司 Method for correcting misalignment in local image stitching
CN110458870A (zh) * 2019-07-05 2019-11-15 北京迈格威科技有限公司 Image registration, fusion and occlusion detection method and apparatus, and electronic device

Also Published As

Publication number Publication date
US20220245839A1 (en) 2022-08-04
CN110458870B (zh) 2020-06-02
CN110458870A (zh) 2019-11-15

Similar Documents

Publication Publication Date Title
WO2021004237A1 (zh) Image registration, fusion and occlusion detection method and apparatus, and electronic device
KR102194094B1 (ko) Method, apparatus, program and recording medium for compositing virtual and real objects
CN106651955B (zh) Method and apparatus for locating a target object in a picture
TWI554976B (zh) Surveillance system and image processing method thereof
US8315443B2 (en) Viewpoint detector based on skin color area and face area
KR101677607B1 (ko) Video browsing method, apparatus, program and recording medium
WO2021115179A1 (zh) Image processing method, image processing apparatus, storage medium and terminal device
CN110958401B (zh) Super night scene image colour correction method and apparatus, and electronic device
WO2016192325A1 (zh) Identification processing method and apparatus for video files
KR20150050172A (ko) Apparatus and method for dynamic selection of multiple cameras for tracking an object of interest
KR20170020736A (ko) Method, apparatus, terminal device, program and computer-readable recording medium for determining spatial parameters from an image
CN109902725A (zh) Moving target detection method and apparatus, electronic device and storage medium
KR102367648B1 (ko) Method and apparatus for synthesizing omnidirectional parallax images, and storage medium
KR20150058871A (ko) Imaging apparatus and captured-image stitching method
WO2016029465A1 (zh) Image processing method and apparatus, and electronic device
CN106339705A (zh) Picture acquisition method and apparatus
CN106982327A (zh) Image processing method and apparatus
CN114422687B (zh) Preview image switching method and apparatus, electronic device and storage medium
CN112330717B (zh) Target tracking method and apparatus, electronic device and storage medium
CN111885371A (zh) Image occlusion detection method and apparatus, electronic device and computer-readable medium
WO2023225825A1 (zh) Position difference map generation method and apparatus, electronic device, chip and medium
CN114581867B (zh) Target detection method, device, storage medium and program product
CN112070681B (zh) Image processing method and apparatus
CN114445501A (zh) Multi-camera calibration method, multi-camera calibration apparatus and storage medium
CN118118783A (zh) Camera anti-shake detection method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20836024

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20836024

Country of ref document: EP

Kind code of ref document: A1