CN116363238A - Parallax map generation method, device, equipment and computer readable storage medium - Google Patents

Info

Publication number
CN116363238A
Authority
CN
China
Prior art keywords
image
reference image
target
offset
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111627116.4A
Other languages
Chinese (zh)
Inventor
向超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan TCL Group Industrial Research Institute Co Ltd
Original Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd filed Critical Wuhan TCL Group Industrial Research Institute Co Ltd
Priority to CN202111627116.4A priority Critical patent/CN116363238A/en
Publication of CN116363238A publication Critical patent/CN116363238A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/60 - Rotation of a whole image or part thereof
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/37 - Determination of transform parameters for the alignment of images, i.e. image registration using transform domain methods
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20228 - Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides a disparity map generation method, apparatus, device and computer readable storage medium, wherein the method comprises the following steps: acquiring a target image, and a first reference image and a second reference image corresponding to the target image; determining an image offset between the target image and the first reference image; determining a target reference image according to the image offset, the first reference image and the second reference image; and generating a disparity map of the target image according to the target image and the target reference image. Because the method considers the image offset between the target image and the reference image, it can use that offset to determine the reference image better suited to estimating the depth information of the target image, and a more accurate disparity map can subsequently be obtained with that reference image.

Description

Parallax map generation method, device, equipment and computer readable storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, and in particular to a parallax map generation method, apparatus, device and computer readable storage medium.
Background
In order to achieve a depth-of-field effect on a mobile terminal similar to that of a single-lens reflex (SLR) imaging device, it is often necessary to determine a parallax value for each pixel point in the image. Currently, the determination of the parallax value is mainly implemented by installing two or more cameras on a mobile terminal and computing the value from the images acquired by those cameras.
However, in consideration of cost and other factors, the cameras installed on a mobile terminal generally differ in performance, which results in the unsatisfactory effect of parallax value determination in the prior art.
Disclosure of Invention
The embodiment of the application provides a parallax map generation method, apparatus, device and computer readable storage medium, aiming to solve the technical problem that the effect of parallax value determination in the prior art is not ideal.
In one aspect, an embodiment of the present application provides a disparity map generating method, including:
acquiring a target image, and a first reference image and a second reference image corresponding to the target image;
determining an image offset between the target image and the first reference image;
determining a target reference image according to the image offset, the first reference image and the second reference image;
and generating a parallax map of the target image according to the target image and the target reference image.
On the other hand, the embodiment of the application also provides a parallax map generating device, which comprises:
the acquisition module is used for acquiring a target image, and a first reference image and a second reference image corresponding to the target image;
a determining module for determining an image offset between the target image and the first reference image;
The image determining module is used for determining a target reference image according to the image offset, the first reference image and the second reference image;
and the generating module is used for generating a parallax image of the target image according to the target image and the target reference image.
On the other hand, the embodiment of the application also provides a parallax map generating device, which comprises a processor, a memory and a parallax map generating program stored in the memory and capable of running on the processor, wherein the processor executes the parallax map generating program to realize the steps in the parallax map generating method.
On the other hand, the embodiment of the application also provides a computer readable storage medium, on which a disparity map generating program is stored, the disparity map generating program being executed by a processor to implement the steps in the above-described disparity map generating method.
According to the parallax image generation method provided by the embodiment of the present application, after the target image and its corresponding first reference image and second reference image are acquired, the offset between the target image and the first reference image is used to judge whether the first reference image can serve as the reference image for estimating the depth information of the target image. In this way, the reference image better suited to estimating the depth information of the target image can be determined from the first reference image and the second reference image, and a more accurate parallax image of the target image can subsequently be obtained using that reference image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an implementation scenario of a disparity map generating method according to an embodiment of the present application;
fig. 2 is a step flowchart of a disparity map generating method provided in an embodiment of the present application;
FIG. 3 is a flowchart of steps for determining a target reference image according to an embodiment of the present application;
FIG. 4 is a flowchart of the steps for transforming a reference image provided in an embodiment of the present application;
FIG. 5 is a flowchart of the steps for determining an offset provided by an embodiment of the present application;
FIG. 6 is a flowchart of steps for determining an image offset based on a coordinate difference according to an embodiment of the present application;
FIG. 7 is a flowchart of the steps provided by an embodiment of the present application for further rotating a reference image;
FIG. 8 is a flowchart of a step of rotating a reference image according to an embodiment of the present application;
FIG. 9 is a flowchart of steps for acquiring a target image and a reference image according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a parallax map generating apparatus provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a disparity map generating apparatus according to an embodiment of the present invention.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be encompassed by the present invention.
In the embodiments of the present application, the term "exemplary" is used to mean "serving as an example, instance, or illustration." Any embodiment described as "exemplary" in this application is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the invention. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the invention with unnecessary detail. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed in the embodiments of the present application.
The embodiment of the application provides a parallax map generation method, apparatus, device and computer readable storage medium, which are respectively described in detail below.
Fig. 1 is a schematic view of an implementation scenario of a disparity map generating method according to an embodiment of the present application. This implementation scenario includes a first image acquisition apparatus 100, a second image acquisition apparatus 200, and a parallax map generation apparatus 300. The first image acquisition apparatus 100 and the second image acquisition apparatus 200 are mainly used to capture the target image and the reference images from which the parallax map is later generated; for example, in one feasible scenario, they are respectively the main camera and the sub camera on a mobile terminal device. The parallax map generation apparatus 300 is mainly used to process the images acquired by the first image acquisition apparatus 100 and the second image acquisition apparatus 200 to obtain the parallax map of the target image. Specifically, the parallax map generation apparatus 300 may be installed, together with the first image acquisition apparatus 100 and the second image acquisition apparatus 200, on a mobile terminal device, or it may be disposed on a remote server; in the latter case, the first image acquisition apparatus 100 and the second image acquisition apparatus 200 transmit images to the parallax map generation apparatus 300 by remote communication.
It should be noted that, the schematic implementation scenario of the parallax map generating method shown in fig. 1 is only an example, and the scenario described in the embodiment of the present invention is for more clearly describing the technical solution of the embodiment of the present invention, and does not constitute a limitation on the technical solution provided by the embodiment of the present invention.
Based on the implementation scene of the parallax map generation method, an embodiment of the parallax map generation method is provided.
As shown in fig. 2, fig. 2 is a step flowchart of a disparity map generating method according to an embodiment of the present application. Specifically, the method comprises the steps 201 to 204:
and 201, acquiring a target image, and a first reference image and a second reference image corresponding to the target image.
In this embodiment of the present application, the target image is the image whose disparity map is to be determined. It is generally acquired by the first image acquisition device or the second image acquisition device in the implementation scenario shown in fig. 1, and the first reference image and second reference image corresponding to the target image are likewise acquired by the first or second image acquisition device. Taking a mobile terminal device with dual cameras as an example, the first image acquisition device is the main camera on the mobile terminal device and the second image acquisition device is the sub camera. It should be emphasized that on most mobile terminal devices, for cost reasons, the imaging quality of the main camera is better than that of the sub camera; this large difference in imaging quality leads to inaccurate binocular depth estimation. On this basis, the present application provides a disparity map generation method that obtains a more accurate binocular depth estimate from the images captured by the main camera and the sub camera. Specifically, the target image and the second reference image are a pair of frames captured by the dual cameras: the target image is captured by the main camera with the better imaging quality, the second reference image is captured by the sub camera, and the first reference image is the frame captured by the main camera immediately before the target image. For a clearer understanding of the complete process of acquiring the target image and its first and second reference images, refer to fig. 9 and its explanation.
An image offset between the target image and the first reference image is determined 202.
In this embodiment of the present application, it is known from the foregoing description that the target image and the first reference image are captured by the same camera, namely the main camera. If there is enough deviation between them, the depth information of the target image can be determined from two images of identical imaging quality, and the resulting depth information is more accurate than depth information estimated from images of different imaging quality. Accordingly, the image offset between the target image and the first reference image is determined first, to judge whether the deviation between them is sufficient for estimating the depth information of the target image.
As an alternative embodiment of the present application, the image offset may be obtained by matching the target image and the first reference image. Specifically, the target image and the first reference image are matched to obtain a plurality of matching point pairs, where the two matching points in each pair are corresponding points in the target image and the first reference image, i.e., the positions of the same object in the two images. The image offset between the target image and the first reference image can then be determined based on the distances between the matching points in each pair; see fig. 5 and its explanation below.
And 203, determining a target reference image according to the image offset, the first reference image and the second reference image.
In this embodiment of the present application, it is known from the foregoing description that the image offset describes the distance the main camera moved during shooting. Therefore, the reference image better suited to the target image can be determined, according to the image offset (that is, the movement of the main camera), from the first reference image acquired by the main camera and the second reference image acquired by the sub camera; for the step of determining the target reference image, refer to fig. 3 and its explanation below.
And 204, generating a parallax image of the target image according to the target image and the target reference image.
In the embodiment of the present application, after the target reference image of the target image is obtained, parallax matching is performed on the target image and the target reference image, in line with the related background art indicated above, so as to obtain the disparity map of the target image.
Specifically, the parallax matching of the target image and the target reference image may adopt a block matching algorithm (Block Matching), a semi-global matching algorithm (Semi-Global Matching), a graph-cut based stereo matching algorithm (Graph Cut), or the like.
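As a hedged illustration of the block-matching idea named above (a minimal pure-Python sketch, not the patent's implementation): for a rectified pair, each pixel of a target row is compared against horizontally shifted windows of the reference row, and the shift with the smallest sum of absolute differences (SAD) is taken as the disparity. The window size and search range below are illustrative.

```python
def row_disparity(target_row, reference_row, block=3, max_disp=3):
    """Per-pixel disparity along one rectified image row via SAD block matching."""
    half = block // 2
    disparities = []
    for x in range(half, len(target_row) - half):
        ref_block = target_row[x - half:x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp + 1):
            if x - d - half < 0:          # candidate window would leave the image
                break
            cand = reference_row[x - d - half:x - d + half + 1]
            cost = sum(abs(a - b) for a, b in zip(ref_block, cand))
            if cost < best_cost:          # keep the shift with the lowest SAD
                best_cost, best_d = cost, d
        disparities.append(best_d)
    return disparities
```

For example, a bright feature at column 3 of the target row that appears at column 1 of the reference row yields a disparity of 2 at that pixel.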
According to the parallax image generation method provided by the embodiment of the present application, after the target image and its corresponding first reference image and second reference image are acquired, the offset between the target image and the first reference image is used to judge whether the first reference image can serve as the reference image for estimating the depth information of the target image. In this way, the reference image better suited to estimating the depth information of the target image can be determined from the first reference image and the second reference image, and a more accurate parallax image of the target image can subsequently be obtained using that reference image.
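Taken together, steps 201 to 204 reduce to the following skeleton; the function name and the injected `offset_fn`/`match_fn` helpers are illustrative assumptions, not the patent's code:

```python
def generate_disparity_map(target, first_ref, second_ref, *,
                           offset_fn, match_fn, threshold):
    """Steps 201-204: measure the main camera's inter-frame offset, pick the
    better reference image, then stereo-match against it."""
    offset = offset_fn(target, first_ref)                        # step 202
    reference = first_ref if offset > threshold else second_ref  # step 203
    return match_fn(target, reference)                           # step 204
```

Any offset estimator (fig. 5) and any stereo matcher (step 204) can be plugged in for `offset_fn` and `match_fn`.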
As shown in fig. 3, a flowchart of a step of determining a target reference image according to an embodiment of the present application is provided, which is described in detail below.
In this embodiment of the present application, a specific implementation manner for determining a target reference image based on an image offset and two reference images is provided, which specifically includes steps 301 to 303:
and 301, comparing the image offset with a preset offset threshold value, and judging whether the image offset is larger than the preset offset threshold value. If yes, go to step 302; if not, go to step 303.
In this embodiment of the present application, since the image offset describes the movement amount of the main camera, the image offset and the preset offset threshold may be compared to determine whether there is a sufficient deviation between the target image and the first reference image to be used for implementing estimation of depth information of the target image. In other words, the preset offset threshold may be equally understood as the minimum amount of offset between two images that enables estimation of depth information, the specific value of which is associated with the imaging parameters of the main camera, and therefore, in general, it is necessary to obtain the preset offset threshold by experimental measurement in advance.
Specifically, if the image offset is greater than the preset offset threshold, there is sufficient parallax between the first reference image and the target image, both captured by the main camera. Because the imaging quality of the main camera is higher than that of the sub camera, the first reference image has better imaging quality than the second reference image captured by the sub camera, so taking the first reference image as the target reference image of the target image yields a better disparity map. Conversely, if the image offset is not greater than the preset offset threshold, the main camera moved only a short distance; the first reference image and the target image then cannot form an effective parallax, and an accurate disparity map is difficult to obtain from them, so the second reference image must be taken as the target reference image of the target image to ensure the quality of the generated disparity map.
The first reference picture is determined 302 as the target reference picture.
In this embodiment of the present application, when the image offset is greater than a preset offset threshold, that is, there is enough deviation between the target image and the first reference image to implement estimation of depth information of the target image, the first reference image acquired by the main camera may be determined as the target reference image, so as to improve the effect of the generated parallax map.
As an alternative embodiment of the present application, since there is a certain deviation between the first reference image and the target image, in order to further improve the effect of the subsequently generated disparity map, it is necessary to perform binocular correction on the first reference image and the target image first, and specifically, refer to fig. 4 and the explanation thereof.
The second reference picture is determined 303 as the target reference picture.
In this embodiment of the present application, when the image offset is smaller than or equal to a preset offset threshold, that is, there is not enough deviation between the target image and the first reference image to be used for realizing the estimation of the depth information of the target image, the second reference image acquired by the secondary camera needs to be determined as the target reference image, so as to ensure the effect of the generated parallax map.
The embodiment of the application provides a specific implementation mode for determining a target reference image based on an image offset and two reference images, specifically, the moving distance of a main camera is determined by utilizing the magnitude relation between the image offset and a preset offset threshold, so that the image acquired by the main camera or a secondary camera is correspondingly regarded as the target reference image, and the subsequent generation of a parallax image with the best effect is facilitated.
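A minimal sketch of the comparison in steps 301 to 303, assuming the image offset and threshold are scalar values measured in pixels:

```python
def choose_target_reference(image_offset, first_ref, second_ref, threshold):
    """Step 302: enough main-camera motion -> reuse its previous frame.
    Step 303: otherwise fall back to the sub-camera frame."""
    if image_offset > threshold:
        return "first", first_ref
    return "second", second_ref
```

Note that an offset exactly equal to the threshold falls to the second reference image, matching the "smaller than or equal" case of step 303.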
As shown in fig. 4, a flowchart of steps for transforming a reference image is provided in an embodiment of the present application. Detailed description is as follows.
In this embodiment of the present application, considering that there is a certain deviation between the reference image and the target image obtained by capturing with the main camera, before determining the parallax map, binocular correction needs to be performed on the reference image and the target image, and specifically includes steps 401 to 403:
and 401, acquiring a target pixel point in the target image and a matched pixel point corresponding to the target pixel point in the first reference image.
In this embodiment of the present application, the target pixel point in the target image and the matching pixel point in the first reference image may include a plurality of matching pixel points in a one-to-one correspondence relationship, that is, the target pixel point and the matching pixel point may form a plurality of matching point pairs.
It should be noted that step 202, i.e. determining the image offset between the target image and the first reference image, is itself implemented by matching the target image with the first reference image to obtain matching point pairs. Therefore, in step 401 provided in the embodiment of the present application, the target pixel points and matching pixel points may be taken directly from the matching point pairs obtained in the step of determining the image offset. Since fig. 5 below details the step of determining the image offset between the target image and the first reference image, the specific way of obtaining the target pixel points and matching pixel points is given there and is not repeated here.
And 402, determining an alignment transformation matrix according to the target pixel point and the corresponding matched pixel point.
In this embodiment of the present application, for the extracted matching point pairs, that is, the target pixel points and their corresponding matching pixel points, a 3×3 transform matrix F can be obtained based on the least squares method or random sample consensus (Random sample consensus, RANSAC):

F = | f11 f12 f13 |
    | f21 f22 f23 |
    | f31 f32 f33 |

such that the sum of the absolute values of the differences between the coordinates of the target pixel points after transformation by F and the coordinates of the corresponding matching pixel points is minimized.
It should be noted that the coordinates include an abscissa and an ordinate. In the embodiment of the present application, whether the sum of absolute differences of the abscissas or of the ordinates is minimized depends on whether row-aligned or column-aligned binocular correction is performed on the target image and the first reference image; the specific binocular correction rule is also related to the image offset between the target image and the first reference image, so a further explanation is given with fig. 5 below.
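A small worked illustration of the minimization criterion in step 402: if the alignment transform is restricted to a pure translation along one axis (a simplifying assumption; the patent's F is a full 3×3 matrix), the shift minimizing the sum of absolute coordinate differences is the median of the per-pair differences.

```python
import statistics

def fit_alignment_shift(target_coords, matched_coords):
    """L1-optimal translation: the median of per-pair coordinate differences
    minimizes sum(|t + shift - m|) over all matching point pairs."""
    diffs = [m - t for t, m in zip(target_coords, matched_coords)]
    return statistics.median(diffs)
```

For instance, target ordinates [1, 2, 3] matched to [3, 4, 6] give a shift of 2, whose residual sum of 1 no other shift beats.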
And 403, transforming the first reference image according to the alignment transformation matrix to obtain a transformed first reference image.
In this embodiment, further, after the alignment transformation matrix is determined from the target pixel points and their corresponding matching pixel points, each pixel point in the first reference image is processed with the alignment transformation matrix to obtain the transformed first reference image. At this point, any point in the three-dimensional scene has equal horizontal (or vertical) coordinates in the transformed first reference image and in the target image, and the difference in its vertical (or horizontal) coordinates reflects the depth information of the real three-dimensional scene. This makes it convenient to subsequently perform parallax matching on the target image and the transformed first reference image with a binocular depth estimation algorithm to obtain the disparity map of the target image.
The embodiment of the application provides an implementation manner for performing binocular correction on a first reference image and a target image so as to facilitate subsequent determination of a disparity map, and particularly relates to determining an alignment transformation matrix by using matching point pairs in the first reference image and the target image, and processing each pixel point in the reference image by using the alignment transformation matrix to obtain a transformed first reference image. At this time, any point in the three-dimensional scene can ensure that the horizontal (vertical) coordinates are equal in the target image and the transformed first reference image, and at this time, the difference between the vertical (horizontal) coordinates corresponding to the point can be used for subsequent parallax matching to generate a parallax image.
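Step 403's per-pixel mapping can be sketched as follows, under the common assumption that the 3×3 matrix F acts on homogeneous pixel coordinates (the patent does not spell out the convention):

```python
def apply_alignment(F, x, y):
    """Map pixel (x, y) through a 3x3 alignment matrix F in homogeneous
    coordinates and dehomogenize the result."""
    hx = F[0][0] * x + F[0][1] * y + F[0][2]
    hy = F[1][0] * x + F[1][1] * y + F[1][2]
    w = F[2][0] * x + F[2][1] * y + F[2][2]
    return hx / w, hy / w
```

Applying this to every pixel of the first reference image produces the transformed image of step 403; for example, a pure translation F = [[1, 0, 2], [0, 1, -1], [0, 0, 1]] moves pixel (3, 4) to (5, 3).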
As shown in fig. 5, a flowchart of steps for determining an offset is provided in an embodiment of the present application. Detailed description is as follows.
In the embodiment of the present application, a specific implementation scheme for determining an offset between a target image and a first reference image is provided, which specifically includes steps 501 to 503:
and 501, sampling the target image to obtain sampling pixel points.
In this embodiment of the present application, a certain number of sampling pixel points can be obtained by sampling the target image. Various sampling rules are possible, including but not limited to uniform sampling and corner detection sampling; the specific rule for sampling the target image is not limited here, and any rule that samples a certain number of pixel points should be considered within the scope of protection claimed in this application.
As an alternative embodiment of the application, corner detection sampling, such as FAST corner detection or Harris corner detection, can be selected. The sampled pixel points then have distinct corner features, which makes the subsequent matching of the target image and the first reference image more convenient.
And 502, determining a reference pixel point corresponding to the sampling pixel point in the first reference image according to the sampling pixel point.
In this embodiment of the present application, after a plurality of sampling pixel points are extracted from the target image, the target image and the first reference image are matched; that is, the pixel point matching each sampling pixel point is found in the first reference image, and that pixel point is the reference pixel point. Various matching implementations are possible, for example optical flow tracking, nearest neighbor matching, and brute-force matching; the specific matching rule for matching the target image and the first reference image is not limited here, and any matching rule capable of extracting reference pixel points corresponding to the sampling pixel points from the first reference image should be considered within the scope of protection claimed in this application.
It should be noted that the sampling pixel points obtained by sampling the target image here are the target pixel points in the target image in the foregoing step 401, and the reference pixel points in the first reference image are the matching pixel points in the first reference image in the foregoing step 401.
503, determining the image offset between the target image and the first reference image according to the coordinate difference between each sampling pixel point and its corresponding reference pixel point.
In the embodiment of the present application, the coordinate difference between a sampling pixel point and its corresponding reference pixel point can be understood as the offset of that pixel. Because pixel offsets reflect the image offset to a certain extent, the overall image offset between the target image and the first reference image can be obtained by aggregating the coordinate differences between the sampling pixel points and their corresponding reference pixel points. For the step of determining the image offset from these coordinate differences, refer to fig. 6 and its explanation below.
As shown in fig. 6, fig. 6 is a flowchart illustrating a step of determining an image offset according to a coordinate difference according to an embodiment of the present application. Detailed description is as follows.
In the embodiment of the present application, a specific implementation manner of determining an image offset according to a coordinate difference is provided, specifically, the method includes steps 601 to 603:
601, determining a horizontal coordinate difference value and a vertical coordinate difference value between each sampling pixel point and a corresponding reference pixel point.
In the embodiment of the application, for each sampling pixel point and its corresponding reference pixel point, the difference of the abscissas and the difference of the ordinates are determined respectively, yielding a plurality of abscissa differences and a plurality of ordinate differences.
602, determining the average value of the horizontal coordinate difference value and the average value of the vertical coordinate difference value respectively to obtain an average difference of the horizontal coordinate and an average difference of the vertical coordinate.
In this embodiment, for the plurality of abscissa differences and the plurality of ordinate differences determined in step 601, the mean of the abscissa differences and the mean of the ordinate differences are computed respectively; the former is the abscissa mean difference and the latter is the ordinate mean difference.
603, whichever of the abscissa mean difference and the ordinate mean difference has the larger absolute value is set as the image offset between the target image and the first reference image.
In the embodiment of the application, the absolute value of the abscissa mean difference and the absolute value of the ordinate mean difference are compared, and the one with the larger absolute value is set as the image offset between the target image and the first reference image. That is, if the absolute value of the abscissa mean difference is greater than the absolute value of the ordinate mean difference, the absolute value of the abscissa mean difference is set as the image offset; if it is smaller, the absolute value of the ordinate mean difference is set as the image offset. Of course, if the two absolute values are equal, either may be set as the image offset.
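Steps 601 to 603 reduce to a few lines of NumPy. The function name and the sign convention (matched minus sampled coordinates) below are illustrative assumptions:

```python
import numpy as np

def image_offset(sampled, matched):
    """Steps 601-603 sketch: mean abscissa/ordinate differences between
    matched reference points and sampled target points; the image offset
    is whichever mean difference has the larger absolute value."""
    diffs = matched.astype(float) - sampled.astype(float)
    dx_mean, dy_mean = diffs[:, 0].mean(), diffs[:, 1].mean()
    offset = max(abs(dx_mean), abs(dy_mean))
    return dx_mean, dy_mean, offset
```

The signed means are retained because the later rotation rules (steps 801 to 803) depend on the sign as well as the magnitude.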
In this embodiment of the present application, it should be noted that the magnitude relation between the absolute value of the abscissa mean difference and the absolute value of the ordinate mean difference also determines how the first reference image is transformed, that is, whether the alignment transformation matrix in the foregoing step 402 minimizes the sum of the absolute values of the abscissa differences or the sum of the absolute values of the ordinate differences. Specifically, if the absolute value of the abscissa mean difference is greater, the main camera has moved farther in the horizontal direction than in the vertical direction; binocular correction in row-alignment mode should then be performed with the horizontal movement of the main camera as the binocular baseline distance, that is, an alignment transformation matrix minimizing the sum of the absolute values of the abscissa differences is determined. Conversely, if the absolute value of the ordinate mean difference is greater, the main camera has moved farther in the vertical direction; binocular correction should then be performed with the vertical movement of the main camera as the binocular baseline distance, that is, an alignment transformation matrix minimizing the sum of the absolute values of the ordinate differences is determined.
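The alignment transformation of step 402 can be approximated, under simplifying assumptions, by an ordinary least-squares affine fit embedded in a 3×3 matrix. The patent itself minimizes a sum of absolute coordinate differences (an L1 criterion) and also allows RANSAC, so the sketch below is only an L2 stand-in with an assumed affine parameterization:

```python
import numpy as np

def fit_alignment_matrix(p1, p0):
    """Least-squares 3x3 alignment matrix F (affine, last row [0, 0, 1])
    mapping points p1 from the reference frame onto points p0 in the
    target frame. Both inputs are (N, 2) arrays of (x, y) points."""
    n = len(p1)
    A = np.hstack([p1.astype(float), np.ones((n, 1))])  # rows [x, y, 1]
    # Solve A @ X ~= p0; each column of X gives one output coordinate
    X, *_ = np.linalg.lstsq(A, p0.astype(float), rcond=None)
    F = np.eye(3)
    F[:2, :] = X.T
    return F
```

For matched point sets related by a pure translation, the fit recovers the identity rotation with the translation in the last column, which is the simplest case the alignment step must handle.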
As an optional embodiment of the present application, when the image offset is determined to be greater than the preset offset threshold, that is, when the first reference image is to be used as the target reference image, in addition to performing binocular alignment on the target image and the first reference image, the first reference image is further rotated according to the abscissa mean difference and the ordinate mean difference to facilitate the subsequent determination of the alignment transformation matrix; refer to fig. 7 and its explanation below.
As shown in fig. 7, fig. 7 is a flowchart of steps for further rotating a reference image according to an embodiment of the present application. Specifically, the method comprises the steps 701 to 703:
701, comparing the image offset with a preset offset threshold, and judging whether the image offset is greater than the preset offset threshold. If yes, go to step 702; if no, go to other steps.
In this embodiment of the present application, comparing the image offset with the preset offset threshold and judging whether the image offset is greater than the threshold serves the same purpose as the foregoing step 301, that is, judging whether the main camera has moved a sufficient distance. This is not described in detail again here.
702, performing rotation processing on the first reference image according to the abscissa mean difference and the ordinate mean difference to obtain a rotated first reference image.
In this embodiment of the present application, as described above, when the image offset is greater than the preset offset threshold, there is enough deviation between the target image and the first reference image to estimate the depth information of the target image. In this case, the first reference image may be determined as the target reference image; before that, however, the first reference image may be rotated according to the abscissa mean difference and the ordinate mean difference to obtain a rotated first reference image, such that the common area between the rotated first reference image and the target image is maximized.
It should be noted that, while the first reference image is rotated, the target image is also rotated synchronously; that is, the disparity map of the target image is generated from the rotated target image and the target reference image.
As an optional embodiment of the present application, for the specific implementation of rotating the first reference image according to the abscissa mean difference and the ordinate mean difference to obtain the rotated first reference image, refer to fig. 8 and its explanation below.
703, generating a parallax map of the target image according to the image offset, the rotated first reference image and the second reference image.
In this embodiment, the rule for determining the target reference image is the same as that shown in step 203, i.e., according to the image offset, the first reference image and the second reference image; the difference is that the first reference image has been rotated, and the rotated first reference image is used as the first reference image when determining the target reference image. This is not described in detail again here.
As shown in fig. 8, fig. 8 is a flowchart illustrating a step of rotating a reference image according to an embodiment of the present application. Specifically, the method comprises the steps 801 to 803:
801, if the absolute value of the mean difference of the abscissa is greater than the absolute value of the mean difference of the ordinate and the mean difference of the abscissa is negative, rotating the first reference image clockwise by a first angle to obtain a rotated first reference image.
In this embodiment of the present application, if the absolute value of the abscissa mean difference is greater than the absolute value of the ordinate mean difference, the camera has moved farther in the horizontal direction than in the vertical direction, and the horizontal movement of the camera should be taken as the binocular baseline distance. In this case, if the abscissa mean difference is negative, the first reference image is rotated 180 degrees clockwise; per the overall scheme described later, the first angle is therefore 180 degrees.
Correspondingly, if the abscissa mean difference is positive, no rotation of the first reference image is required.
In this embodiment, the first reference image is rotated clockwise by a first angle, and the target image is rotated clockwise by the first angle synchronously.
802, if the absolute value of the abscissa mean difference is smaller than the absolute value of the ordinate mean difference and the ordinate mean difference is positive, rotating the first reference image clockwise by a second angle to obtain the rotated first reference image.
In this embodiment, if the absolute value of the abscissa mean difference is smaller than the absolute value of the ordinate mean difference, the vertical movement of the camera should be taken as the binocular baseline distance; the image is then rotated so that the vertical offset direction becomes horizontal. Specifically, if the ordinate mean difference is positive, the first reference image is rotated 270 degrees clockwise, i.e., the second angle is 270 degrees.
Of course, in the embodiment of the present application, the first reference image is rotated clockwise by the second angle, and the target image is rotated clockwise by the second angle synchronously.
803, if the absolute value of the abscissa mean difference is smaller than the absolute value of the ordinate mean difference and the ordinate mean difference is negative, rotating the first reference image clockwise by a third angle to obtain a rotated first reference image.
In this embodiment, as in the foregoing step 802, if the absolute value of the abscissa mean difference is smaller than the absolute value of the ordinate mean difference, the vertical movement of the camera should be taken as the binocular baseline distance, and the image is rotated so that the vertical offset direction becomes horizontal. When the ordinate mean difference is negative, the first reference image is rotated 90 degrees clockwise, i.e., the third angle is 90 degrees.
Of course, in the embodiment of the present application, the first reference image is rotated clockwise by a third angle, and the target image is rotated clockwise by a third angle synchronously.
The above rotation of the first reference image and the target image serves only to maximize the common area between the rotated first reference image and the rotated target image, thereby improving the subsequent determination of the alignment transformation matrix.
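The rotation rules of steps 801 to 803 can be sketched as a small lookup plus a quarter-turn rotation. The function names are illustrative; the tie case (equal absolute mean differences) is resolved arbitrarily toward the horizontal branch here, since the patent allows either:

```python
import numpy as np

def rotation_for_offsets(dx_mean, dy_mean):
    """Clockwise rotation angle (degrees) chosen from the abscissa and
    ordinate mean differences, following steps 801-803: 180 deg for a
    negative horizontal baseline, 270/90 deg to turn a vertical
    baseline into a horizontal one."""
    if abs(dx_mean) >= abs(dy_mean):   # horizontal movement dominates
        return 180 if dx_mean < 0 else 0
    # vertical movement dominates
    return 270 if dy_mean > 0 else 90

def rotate_cw(img, degrees):
    """Rotate an image clockwise by a multiple of 90 degrees."""
    # np.rot90 rotates counter-clockwise, so negate the quarter turns
    return np.rot90(img, k=(-degrees // 90) % 4)
```

Both the first reference image and the target image would be passed through `rotate_cw` with the same angle, matching the synchronous rotation described above.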
As shown in fig. 9, fig. 9 is a flowchart of a step of acquiring a target image and a reference image according to an embodiment of the present application. Detailed description is as follows.
In the embodiment of the application, the implementation scheme for acquiring the target image and the reference image by using the image acquisition device specifically includes the following steps 901 to 903:
901, acquiring a first output frame and a second output frame acquired by a first image acquisition device and a reference output frame acquired by a second image acquisition device.
In this embodiment of the present application, as can be seen from the implementation scenario of the parallax map generation method shown in fig. 1, the first image acquisition device, that is, the main camera, may continuously acquire multiple frames of images, including a first output frame and a second output frame. Since the images are acquired continuously, for convenience of description the first output frame may be considered the earlier acquired image and the second output frame the later acquired image. The reference output frame is the output frame acquired by the second image acquisition device when the first image acquisition device acquires the first output frame or the second output frame.
902, if the first output frame is set as the target image, the second output frame and the reference output frame are set as the first reference image and the second reference image corresponding to the target image, respectively.
In this embodiment of the present application, if the earlier acquired image, that is, the first output frame, is taken as the target image, then the reference output frame of the second image acquisition device, that is, the secondary camera, is the second reference image; meanwhile, the first image acquisition device further acquires the second output frame, which is used as the first reference image of the target image.
903, if the second output frame is set as the target image, the first output frame and the reference output frame are set as the first reference image and the second reference image corresponding to the target image, respectively.
In this embodiment of the present application, if the later acquired image, that is, the second output frame, is taken as the target image, the disparity map generating device takes the first output frame previously acquired by the first image acquisition device as the first reference image of the target image, and takes the reference output frame of the second image acquisition device as the second reference image.
It will be appreciated that the first reference image and the target image are always images acquired by the same image acquisition device at different moments, while the second reference image is an image acquired by the other image acquisition device.
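The role assignment of steps 901 to 903 amounts to a simple selection. The function name and the `target` switch below are illustrative, not from the patent:

```python
def assign_images(first_frame, second_frame, ref_frame, target="second"):
    """Steps 901-903 sketch. first_frame/second_frame come from the
    first image acquisition device (main camera), ref_frame from the
    second (secondary camera). Returns
    (target_image, first_reference_image, second_reference_image)."""
    if target == "first":
        # Earlier main-camera frame is the target (step 902)
        return first_frame, second_frame, ref_frame
    # Later main-camera frame is the target (step 903)
    return second_frame, first_frame, ref_frame
```

Either way, the first reference image comes from the same camera as the target image and the second reference image from the other camera, as noted above.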
For easy understanding, the implementation scheme of the disparity map generating method proposed in the present application will be fully described below, specifically as follows:
The parallax map generation method is applied to a mobile terminal device provided with two cameras, a main camera and a secondary camera. After the shooting function is started, the main camera and the secondary camera each capture frames. When the frames shot by the main and secondary cameras are received for the first time, the image shot by the main camera has no previous frame as a reference, so the parallax map can only be determined from the image shot by the main camera and the image shot by the secondary camera; the image shot by the main camera is, however, recorded as reference frame A. When the next pair of frames shot by the main and secondary cameras is received, if the output frame B of the main camera is taken as the target image, the recorded reference frame A is the first reference image and the output frame C of the secondary camera is the second reference image. For convenience of description, the target image, the first reference image, and the second reference image are referred to below simply as frame B, frame A, and frame C, respectively.
1) Sample frame B using any sampling method such as uniform sampling, FAST corner detection sampling, or Harris corner detection sampling, obtaining a set of points P0.
2) Using any matching method such as optical flow tracking, nearest-neighbor matching, or brute-force matching, find the points in frame A corresponding to P0, denoted P1.
3) Compute the mean DX of the abscissa differences and the mean DY of the ordinate differences between each point of P0 and its corresponding point of P1.
4) Judge whether the absolute value of DX and the absolute value of DY are both smaller than a preset threshold D0. If so, the main camera has not moved enough between frame B and frame A, so parallax matching can only be performed between frame B and frame C to recover depth information and obtain the parallax map of frame B; the parallax matching scheme specifically includes, but is not limited to, the block matching algorithm (Block Matching), the semi-global matching algorithm (Semi-Global Matching), the graph-cut-based stereo matching algorithm (Graph-Cut), and the like. Frame B is then recorded as the new reference frame A, and the next pair of frames shot by the main and secondary cameras is received, that is, return to step 1;
5) If the absolute value of DX or the absolute value of DY is greater than the preset threshold D0, compare |DX| with |DY|. If |DX| is greater than |DY|, the camera is considered to have moved farther horizontally than vertically, and binocular correction in row-alignment mode is performed with the horizontal movement of the camera as the binocular baseline distance; otherwise, binocular correction in column-alignment mode is performed with the vertical movement of the camera as the binocular baseline distance. Specifically, denote by D1 whichever of DX and DY has the larger absolute value, and denote the selected binocular-correction alignment mode by RL. If RL is row alignment and D1 > 0, no additional processing is needed; if RL is row alignment and D1 < 0, both frame A and frame B need to be rotated 180 degrees clockwise; if RL is column alignment and D1 > 0, frame A and frame B need to be rotated 270 degrees clockwise; if RL is column alignment and D1 < 0, frame A and frame B need to be rotated 90 degrees clockwise;
6) Rotate frame A and frame B, and apply the same coordinate rotation to the point sets P0 and P1;
7) Using the processed P0 and P1, compute a 3×3 alignment transformation matrix F by the least squares method or the RANSAC method, such that the sum of the absolute values of the differences between the horizontal (vertical) coordinates of the points of P1 after transformation by F and the horizontal (vertical) coordinates of the corresponding points of P0 is minimized. The alignment transformation matrix F has the following form:
F = | f11  f12  f13 |
    | f21  f22  f23 |
    | f31  f32  f33 |
8) Process frame A with the alignment transformation matrix F to obtain a transformed frame A, and perform parallax matching between the transformed frame A and frame B to complete the recovery of depth information and obtain the parallax map of frame B; the parallax matching scheme specifically includes, but is not limited to, the block matching algorithm (Block Matching), the semi-global matching algorithm (Semi-Global Matching), the graph-cut-based stereo matching algorithm (Graph-Cut), and the like. Meanwhile, frame B is recorded as the new reference frame A, and the next pair of frames shot by the main and secondary cameras is received, that is, return to step 1.
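Of the parallax matching schemes listed in the pipeline above, plain block matching is the simplest; a deliberately naive NumPy sketch follows (the function name, patch size, and disparity range are illustrative assumptions, and semi-global or graph-cut matching would give smoother results):

```python
import numpy as np

def block_match_disparity(target, reference, max_disp=8, patch=5):
    """Naive block-matching sketch for row-aligned images: for each
    pixel of the target image, search leftward in the reference image
    for the patch with the smallest sum of squared differences."""
    r = patch // 2
    H, W = target.shape
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(r, H - r):
        for x in range(r, W - r):
            tpl = target[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            best_ssd, best_d = np.inf, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = reference[y - r:y + r + 1,
                                 x - d - r:x - d + r + 1].astype(float)
                ssd = float(np.sum((tpl - cand) ** 2))
                if ssd < best_ssd:
                    best_ssd, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```

For a reference image that is a pure horizontal shift of the target, the recovered disparity is constant and equal to the shift, which is the degenerate case the row-alignment correction above is designed to produce.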
In order to better implement the parallax map generation method in the embodiment of the present application, on the basis of the parallax map generation method, a parallax map generation device is further provided in the embodiment of the present application, as shown in fig. 10, fig. 10 is a schematic structural diagram of the parallax map generation device provided in the embodiment of the present application, including:
the acquiring module 1001 is configured to acquire a target image, and a first reference image and a second reference image corresponding to the target image.
A determining module 1002 is configured to determine an image offset between the target image and the first reference image.
The image determining module 1003 is configured to determine a target reference image according to the image offset, the first reference image and the second reference image.
The generating module 1004 is configured to generate a disparity map of the target image according to the target image and the target reference image.
In one embodiment of the present application, the image determining module includes:
the comparison sub-module is used for comparing the image offset with a preset offset threshold;
the first determining submodule is used for determining the first reference image as a target reference image if the image offset is larger than a preset offset threshold value;
a second determining sub-module for determining the second reference image as the target reference image if the image offset is less than or equal to the preset offset threshold.
In an embodiment of the present application, the first determining submodule further includes:
the matching point pair acquisition unit is used for acquiring target pixel points in the target image and matching pixel points corresponding to the target pixel points in the first reference image;
the alignment transformation matrix determining unit is used for determining an alignment transformation matrix according to the target pixel points and the corresponding matched pixel points;
and the transformation unit is used for transforming the first reference image according to the alignment transformation matrix to obtain a transformed first reference image.
In one embodiment of the present application, the determining module includes:
the sampling sub-module is used for sampling the target image to obtain sampling pixel points;
the matching sub-module is used for determining a reference pixel point corresponding to the sampling pixel point in the first reference image according to the sampling pixel point;
and the offset determining submodule is used for determining the image offset between the target image and the first reference image according to the coordinate difference value between each sampling pixel point and the corresponding reference pixel point.
In one embodiment of the present application, the offset determining submodule includes:
the coordinate difference value determining unit is used for determining a horizontal coordinate difference value and a vertical coordinate difference value between each sampling pixel point and a corresponding reference pixel point;
the mean difference determining unit is used for respectively determining the mean value of the horizontal coordinate difference value and the mean value of the vertical coordinate difference value to obtain the mean difference of the horizontal coordinate and the mean difference of the vertical coordinate;
and an image offset setting unit configured to set whichever of the abscissa mean difference and the ordinate mean difference has the larger absolute value as the image offset between the target image and the first reference image.
In an embodiment of the present application, the offset determining submodule further includes:
The comparison unit is used for comparing the image offset with a preset offset threshold;
and the rotation unit is used for carrying out rotation processing on the first reference image according to the average difference of the horizontal coordinates and the average difference of the vertical coordinates if the image offset is larger than the preset offset threshold value, so as to obtain the rotated first reference image.
In one embodiment of the present application, the rotating unit includes:
the first rotation subunit is configured to rotate the first reference image clockwise by a first angle if the absolute value of the mean difference of the abscissa is greater than the absolute value of the mean difference of the ordinate and the mean difference of the abscissa is negative, so as to obtain a rotated first reference image;
the second rotation subunit is configured to rotate the first reference image clockwise by a second angle if the absolute value of the mean difference of the abscissa is smaller than the absolute value of the mean difference of the ordinate and the mean difference of the ordinate is positive, so as to obtain a rotated first reference image;
and the third rotation subunit is used for rotating the first reference image clockwise by a third angle if the absolute value of the average difference of the abscissa is smaller than the absolute value of the average difference of the ordinate and the average difference of the ordinate is negative, so as to obtain the rotated first reference image.
In one embodiment of the present application, the acquiring module includes:
The image acquisition sub-module is used for acquiring a first output frame and a second output frame acquired by the first image acquisition device and a reference output frame acquired by the second image acquisition device;
the first setting submodule is used for setting the second output frame and the reference output frame as a first reference image and a second reference image corresponding to the target image respectively if the first output frame is set as the target image;
and the second setting submodule is used for setting the first output frame and the reference output frame as a first reference image and a second reference image corresponding to the target image respectively if the second output frame is set as the target image.
The embodiment of the invention also provides a parallax map generating device, as shown in fig. 11, and fig. 11 is a schematic structural diagram of the parallax map generating device provided by the embodiment of the invention.
The disparity map generating apparatus includes a memory, a processor, and a disparity map generating program stored in the memory and executable on the processor, the processor implementing the steps in the disparity map generating method in any of the embodiments when executing the disparity map generating program.
Specifically, the disparity map generating apparatus may include a processor 1101 with one or more processing cores, a memory 1102 with one or more computer-readable storage media, a power supply 1103, an input unit 1104, and other components. Those skilled in the art will appreciate that the structure shown in fig. 11 does not constitute a limitation of the disparity map generating apparatus, which may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:
The processor 1101 is a control center of the parallax map generating apparatus, connects respective portions of the entire parallax map generating apparatus using various interfaces and lines, and performs various functions and processes data of the parallax map generating apparatus by running or executing software programs and/or modules stored in the memory 1102 and calling data stored in the memory 1102, thereby performing overall monitoring of the parallax map generating apparatus. Optionally, the processor 1101 may include one or more processing cores; preferably, the processor 1101 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., and a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1101.
The memory 1102 may be used to store software programs and modules, and the processor 1101 executes various functional applications and data processing by running the software programs and modules stored in the memory 1102. The memory 1102 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created from the use of the disparity map generating apparatus, or the like. In addition, the memory 1102 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 1102 may also include a memory controller to provide the processor 1101 with access to the memory 1102.
The disparity map generating apparatus further includes a power supply 1103 that supplies power to the respective components, and preferably, the power supply 1103 can be logically connected to the processor 1101 through a power management system, so that functions of managing charging, discharging, power consumption management, and the like are realized through the power management system. The power supply 1103 may also include one or more of any of a direct current or alternating current power supply, recharging system, power failure detection circuit, power converter or inverter, power status indicator, etc.
The disparity map generating apparatus may further include an input unit 1104, and the input unit 1104 may be used to receive input numerical or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the disparity map generating apparatus may further include a display unit or the like, and will not be described here. In particular, in this embodiment, the processor 1101 in the disparity map generating apparatus loads executable files corresponding to the processes of one or more application programs into the memory 1102 according to the following instructions, and the processor 1101 runs the application programs stored in the memory 1102, so as to implement the steps in any disparity map generating method provided by the embodiments of the present invention.
To this end, an embodiment of the present invention provides a computer-readable storage medium, on which a disparity map generation program is stored, which, when executed by a processor, implements the steps in any of the disparity map generation methods provided by the embodiments of the present invention. In particular, the computer-readable storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
Each of the foregoing embodiments is described with its own emphasis; for the parts not detailed in one embodiment, reference may be made to the detailed descriptions of the other embodiments, which are not repeated here.
In implementation, each unit or structure may be implemented as an independent entity, or may be combined arbitrarily and implemented as one or several entities; for the implementation of each unit or structure, reference may be made to the foregoing method embodiments, which are not repeated here.
For the specific implementation of each of the above operations, reference may be made to the foregoing embodiments, which are not repeated here.
The parallax map generating method provided by the embodiments of the present application has been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and core ideas of the present invention. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope in light of the ideas of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (11)

1. A disparity map generation method, comprising:
acquiring a target image, and a first reference image and a second reference image corresponding to the target image;
determining an image offset between the target image and the first reference image;
determining a target reference image according to the image offset, the first reference image and the second reference image;
and generating a parallax map of the target image according to the target image and the target reference image.
2. The parallax map generation method according to claim 1, characterized in that the step of determining a target reference image from the image offset, the first reference image, and the second reference image includes:
comparing the image offset with a preset offset threshold;
if the image offset is greater than the preset offset threshold, determining the first reference image as a target reference image;
and if the image offset is smaller than or equal to the preset offset threshold, determining the second reference image as a target reference image.
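The selection rule of claim 2 can be sketched as a small Python function. This is an illustrative sketch only: the function and argument names, and the default threshold value, are assumptions, since the claims leave the concrete threshold unspecified.

```python
def select_target_reference(image_offset, first_ref, second_ref,
                            offset_threshold=5.0):
    """Pick the target reference image per claim 2.

    If the image offset exceeds the preset threshold, the first
    reference image (the previous frame) is used; otherwise the
    second reference image (the paired binocular frame) is used.
    The threshold default here is a placeholder assumption.
    """
    if abs(image_offset) > offset_threshold:
        return first_ref
    return second_ref
```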
3. The disparity map generation method according to claim 2, wherein before the step of determining the first reference image as a target reference image, the method further comprises:
acquiring a target pixel point in the target image and a matched pixel point corresponding to the target pixel point in the first reference image;
determining an alignment transformation matrix according to the target pixel points and the corresponding matched pixel points;
transforming the first reference image according to the alignment transformation matrix to obtain a transformed first reference image;
the determining the first reference image as a target reference image includes:
and determining the transformed first reference image as a target reference image.
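The alignment step of claim 3 can be illustrated with a least-squares affine fit between the matched pixel points. The claims do not specify how the alignment transformation matrix is computed (a homography estimated with RANSAC would be another common choice); the affine formulation and all names below are assumptions for illustration.

```python
import numpy as np

def estimate_alignment_matrix(ref_pts, tgt_pts):
    """Least-squares 2x3 affine matrix A mapping reference-image
    points onto their matched target-image points: tgt ~= A @ [x, y, 1].

    ref_pts, tgt_pts: N x 2 arrays of corresponding pixel coordinates.
    """
    ref = np.asarray(ref_pts, dtype=float)
    tgt = np.asarray(tgt_pts, dtype=float)
    ones = np.ones((ref.shape[0], 1))
    X = np.hstack([ref, ones])                    # N x 3 design matrix
    M, *_ = np.linalg.lstsq(X, tgt, rcond=None)   # 3 x 2 solution
    return M.T                                    # 2 x 3 affine matrix
```

The transformed first reference image would then be obtained by warping with this matrix (e.g., with an image-warping routine), which is the "transformed first reference image" the claim determines as the target reference image.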
4. The parallax map generation method according to claim 1, characterized in that the step of determining an image offset between the target image and the first reference image includes:
sampling the target image to obtain sampling pixel points;
determining a reference pixel point corresponding to the sampling pixel point in the first reference image according to the sampling pixel point;
and determining the image offset between the target image and the first reference image according to the coordinate difference value between each sampling pixel point and the corresponding reference pixel point.
5. The parallax map generation method according to claim 4, wherein the step of determining the image offset between the target image and the first reference image based on the coordinate difference between each sampling pixel and its corresponding reference pixel includes:
determining a horizontal coordinate difference value and a vertical coordinate difference value between each sampling pixel point and a corresponding reference pixel point;
respectively determining the mean of the horizontal coordinate differences and the mean of the vertical coordinate differences to obtain an abscissa mean difference and an ordinate mean difference;
and taking whichever of the abscissa mean difference and the ordinate mean difference has the larger absolute value as the image offset between the target image and the first reference image.
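The offset computation of claims 4 and 5 can be sketched as follows. This is a hedged illustration: the function name and the point-list representation are assumptions, and the sampling and matching of pixel points (how correspondences are found) is left abstract, as in the claims.

```python
import numpy as np

def image_offset(sample_pts, ref_pts):
    """Image offset per claims 4-5.

    sample_pts: coordinates of sampled pixels in the target image.
    ref_pts:    coordinates of their matches in the first reference image.
    Computes the abscissa and ordinate mean differences and returns the
    one with the larger absolute value.
    """
    d = np.asarray(sample_pts, dtype=float) - np.asarray(ref_pts, dtype=float)
    dx_mean, dy_mean = d.mean(axis=0)   # abscissa / ordinate mean difference
    return dx_mean if abs(dx_mean) >= abs(dy_mean) else dy_mean
```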
6. The parallax map generation method according to claim 5, characterized in that after the step of taking whichever of the abscissa mean difference and the ordinate mean difference has the larger absolute value as the image offset between the target image and the first reference image, the method further comprises:
if the image offset is larger than the preset offset threshold, rotating the first reference image according to the abscissa mean difference and the ordinate mean difference to obtain a rotated first reference image;
the step of determining a target reference image from the image offset, the first reference image and the second reference image comprises:
and determining a target reference image according to the image offset, the rotated first reference image and the second reference image.
7. The parallax map generation method according to claim 6, wherein the step of rotating the first reference image according to the abscissa mean difference and the ordinate mean difference to obtain a rotated first reference image includes:
if the absolute value of the mean difference of the abscissa is larger than the absolute value of the mean difference of the ordinate and the mean difference of the abscissa is negative, rotating the first reference image clockwise by a first angle to obtain a rotated first reference image; and/or
if the absolute value of the mean difference of the abscissa is smaller than the absolute value of the mean difference of the ordinate and the mean difference of the ordinate is positive, rotating the first reference image clockwise by a second angle to obtain a rotated first reference image; and/or
and if the absolute value of the mean difference of the abscissa is smaller than the absolute value of the mean difference of the ordinate and the mean difference of the ordinate is negative, rotating the first reference image clockwise by a third angle to obtain a rotated first reference image.
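The case analysis of claim 7 reduces to selecting a clockwise rotation angle from the two mean differences. The claims do not give concrete values for the first, second, and third angles, so the defaults below are placeholder assumptions; only the branching logic follows the claim.

```python
def rotation_angle(dx_mean, dy_mean, first=90, second=180, third=270):
    """Clockwise rotation angle for the first reference image (claim 7).

    dx_mean, dy_mean: abscissa and ordinate mean differences.
    first/second/third: the claim's "first/second/third angle";
    concrete values are unspecified in the patent, so the defaults
    here are illustrative placeholders.
    """
    if abs(dx_mean) > abs(dy_mean) and dx_mean < 0:
        return first
    if abs(dx_mean) < abs(dy_mean) and dy_mean > 0:
        return second
    if abs(dx_mean) < abs(dy_mean) and dy_mean < 0:
        return third
    return 0  # remaining cases are not covered by the claim
```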
8. The parallax map generation method according to any one of claims 1 to 7, characterized in that the target image and the second reference image are a pair of frame images captured by a preset binocular image acquisition device; the target image is captured by the main image acquisition device of the binocular image acquisition device, the second reference image is captured by the secondary image acquisition device of the binocular image acquisition device, and the first reference image is the frame captured by the main image acquisition device immediately before the target image.
9. A parallax map generation apparatus, comprising:
the acquisition module is used for acquiring a target image, and a first reference image and a second reference image corresponding to the target image;
a determining module for determining an image offset between the target image and the first reference image;
an image determining module for determining a target reference image according to the image offset, the first reference image and the second reference image;
and the generation module is used for generating a parallax image of the target image according to the target image and the target reference image.
10. A disparity map generating apparatus, characterized in that it comprises a processor, a memory, and a disparity map generating program stored in the memory and executable on the processor, the processor executing the disparity map generating program to implement the steps in the disparity map generating method according to any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a disparity map generation program that is executed by a processor to realize the steps in the disparity map generation method according to any one of claims 1 to 8.
CN202111627116.4A 2021-12-28 2021-12-28 Parallax map generation method, device, equipment and computer readable storage medium Pending CN116363238A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111627116.4A CN116363238A (en) 2021-12-28 2021-12-28 Parallax map generation method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111627116.4A CN116363238A (en) 2021-12-28 2021-12-28 Parallax map generation method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116363238A true CN116363238A (en) 2023-06-30

Family

ID=86926048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111627116.4A Pending CN116363238A (en) 2021-12-28 2021-12-28 Parallax map generation method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116363238A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173157A (en) * 2023-10-24 2023-12-05 粤芯半导体技术股份有限公司 Patterning process quality detection method, patterning process quality detection device, patterning process quality detection equipment and storage medium
CN117173157B (en) * 2023-10-24 2024-02-13 粤芯半导体技术股份有限公司 Patterning process quality detection method, patterning process quality detection device, patterning process quality detection equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109313799B (en) Image processing method and apparatus
CN108470356B (en) Target object rapid ranging method based on binocular vision
WO2019221013A4 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
CN111355884B (en) Monitoring method, device, system, electronic equipment and storage medium
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
CN112261387B (en) Image fusion method and device for multi-camera module, storage medium and mobile terminal
KR20140015892A (en) Apparatus and method for alignment of images
US20160093028A1 (en) Image processing method, image processing apparatus and electronic device
CN111340737B (en) Image correction method, device and electronic system
WO2023236508A1 (en) Image stitching method and system based on billion-pixel array camera
CN113838151B (en) Camera calibration method, device, equipment and medium
CN116363238A (en) Parallax map generation method, device, equipment and computer readable storage medium
CN112419424B (en) Gun-ball linkage calibration method and device and related equipment
Sun et al. Rolling shutter distortion removal based on curve interpolation
JP6178646B2 (en) Imaging apparatus and image shake correction processing method
CN112804444B (en) Video processing method and device, computing equipment and storage medium
KR20110133677A (en) Method and apparatus for processing 3d image
CN111630569B (en) Binocular matching method, visual imaging device and device with storage function
KR101220003B1 (en) Generating method for disparity map
EP3624050B1 (en) Method and module for refocusing at least one plenoptic video
CN111524087B (en) Image processing method and device, storage medium and terminal
CN113706429B (en) Image processing method, device, electronic equipment and storage medium
CN107086033B (en) Cloud computing system
He et al. Enhancing RAW-to-sRGB with Decoupled Style Structure in Fourier Domain
WO2020181509A1 (en) Image processing method, apparatus and system

Legal Events

Date Code Title Description
PB01 Publication