WO2020253805A1 - Disparity map correction method and device, terminal, and non-transitory computer-readable storage medium - Google Patents
- Publication number
- WO2020253805A1 (PCT/CN2020/096963)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- disparity
- pixel
- error point
- point
- value
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Definitions
- the present disclosure relates to the field of image processing, and in particular to a disparity map correction method, a disparity map correction device, a terminal, and a non-transitory computer-readable storage medium.
- Binocular stereo matching has always been a research hotspot of binocular vision.
- the binocular camera captures the left and right viewpoint images of the same scene, uses a stereo matching algorithm to obtain the disparity map, and then obtains the depth map.
- the depth map has a wide range of applications: because it records the distance between the objects in the scene and the camera, it can be used for measurement, three-dimensional reconstruction, and synthesis of virtual viewpoints.
- the embodiments of the present disclosure provide a disparity map correction method, a disparity map correction device, a terminal, and a non-transitory computer-readable storage medium.
- a first aspect of the present disclosure provides a disparity map correction method, the method includes: acquiring a reference view for generating a disparity map and generating a disparity map to be corrected; extracting the contour of a target in the reference view; determining, according to the difference between the disparity values of each pixel corresponding to the contour and its adjacent pixels in the disparity map, whether each pixel corresponding to the contour in the disparity map is a starting error point; for any one of the pixels corresponding to the contour, when it is determined that the pixel is the starting error point, detecting the position of a boundary error point located in the same row as the starting error point; and acquiring the disparity values of a preset number of pixels that are located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point, and correcting, according to the disparity values of the preset number of pixels, the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point.
- the determining, according to the difference between the disparity values of each pixel corresponding to the contour and its adjacent pixels in the disparity map, whether each pixel corresponding to the contour is the starting error point includes: for any pixel corresponding to the contour, extracting from the disparity map a first disparity value corresponding to the pixel, a second disparity value corresponding to the adjacent pixel on one side of the pixel, and a third disparity value corresponding to the adjacent pixel on the other side of the pixel, wherein the adjacent pixels on both sides are located in the same row as the pixel; determining whether the difference between the second disparity value and the first disparity value is less than a preset threshold and whether the difference between the third disparity value and the first disparity value is less than the preset threshold; and when both differences are less than the preset threshold, determining that the pixel is the starting error point.
- the detecting the position of the boundary error point located in the same row as the starting error point includes: taking the starting error point as a starting point and sequentially extracting, along a preset direction, the disparity values corresponding to the pixels located in the same row as the starting error point from the disparity map; determining whether the difference between each extracted disparity value and the first disparity value is greater than the preset threshold; and if the difference is greater than the preset threshold, stopping the extraction and taking the pixel corresponding to that disparity value as the boundary error point.
- the correcting, according to the disparity values of the preset number of pixels, the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point includes: extracting the disparity values corresponding to the preset number of pixels from the disparity map; calculating the average value of the disparity values corresponding to the preset number of pixels; and replacing the disparity values of the starting error point, the boundary error point, and each pixel between the starting error point and the boundary error point with the average value.
- the preset number is three.
- the preset threshold is 3.
- the reference view is a left view or a right view; in a case where the reference view is the left view, the preset direction is a horizontal, rightward direction; in a case where the reference view is the right view, the preset direction is a horizontal, leftward direction.
- a second aspect of the present disclosure provides a disparity map correction device, the device includes: an acquisition unit, an extractor, a judgment unit, a detector, and a corrector; wherein,
- the acquiring unit is configured to acquire a reference view for generating a disparity map and generate a disparity map to be corrected;
- the extractor is used to extract the contour of the target in the reference view
- the determining unit is used to determine, according to the difference between the disparity values of each pixel corresponding to the contour and its adjacent pixels in the disparity map, whether each pixel corresponding to the contour is an initial error point;
- the detector is used to detect, when it is determined that a pixel is the initial error point, the position of the boundary error point located in the same row as the initial error point;
- the corrector is used to obtain the disparity values of a preset number of pixels located in the same row as the boundary error point and on the side of the boundary error point away from the initial error point, and to correct, according to the disparity values of the preset number of pixels, the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point.
- the judgment unit includes: a first extraction unit and a first judgment unit;
- the first extraction unit is configured to extract, from the disparity map, the first disparity value corresponding to the pixel, the second disparity value corresponding to the adjacent pixel located on one side of the pixel, and the third disparity value corresponding to the adjacent pixel located on the other side of the pixel, wherein the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel;
- the first determining unit is configured to determine whether the pixel is the initial error point according to the first disparity value, the second disparity value, the third disparity value, and a preset threshold; when the difference between the second disparity value and the first disparity value is less than the preset threshold and the difference between the third disparity value and the first disparity value is less than the preset threshold, it is determined that the pixel is the initial error point.
- the detector includes: a second extraction unit and a second judgment unit;
- the second extraction unit is configured to take the starting error point as a starting point and sequentially extract, along a preset direction, the disparity values corresponding to the pixels located in the same row as the starting error point from the disparity map; and
- the second determining unit is used to determine whether the difference between each extracted disparity value and the first disparity value is greater than a preset threshold; if the difference is greater than the preset threshold, the extraction of disparity values is stopped, and the pixel corresponding to that disparity value is taken as the boundary error point.
- the corrector includes: a third extraction unit, a calculation unit, and a correction unit;
- the third extraction unit is configured to extract the disparity value corresponding to the preset number of pixels from the disparity map
- the calculation unit is configured to calculate the average value of the disparity values corresponding to the preset number of pixels.
- the correction unit is configured to replace the initial error point, the boundary error point, and the disparity value of each pixel point between the initial error point and the boundary error point with the average value.
- the preset number is three.
- the preset threshold is 3.
- the reference view is a left view or a right view; in a case where the reference view is the left view, the preset direction is a horizontal, rightward direction; in a case where the reference view is the right view, the preset direction is a horizontal, leftward direction.
- a third aspect of the present disclosure provides a terminal, including: at least one processor and a memory; wherein,
- the memory stores computer executable instructions
- the at least one processor executes the computer executable instructions stored in the memory, so that the terminal executes the disparity map correction method according to any one of the embodiments of the first aspect of the present disclosure.
- a fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium having computer-executable instructions stored thereon, and when the computer-executable instructions are executed by a processor, The disparity map correction method according to any one of the embodiments of the first aspect of the present disclosure is implemented.
- FIG. 1 is a flowchart of a disparity map correction method provided by an embodiment of the disclosure
- FIG. 2 is a schematic diagram of the position of the error point provided by the embodiment of the disclosure.
- FIG. 3 is a flowchart of determining whether a pixel point is an initial error point according to an embodiment of the disclosure
- FIG. 4 is a flowchart of detecting boundary error points provided by an embodiment of the disclosure
- FIG. 5 is a schematic diagram of a disparity map correction device provided by an embodiment of the disclosure.
- FIG. 6 shows a schematic diagram of the hardware structure of an electronic device for performing a disparity map correction method provided by an embodiment of the present disclosure.
- the inventor of the inventive concept found that, in the process of obtaining the disparity map from the left and right viewpoint images, inconsistency of the information at the target contours in the left and right viewpoint images distorts the disparity map, which seriously affects binocular vision applications.
- FIG. 1 is a flowchart of the disparity map correction method provided by the embodiment of the disclosure.
- the disparity map correction method may include the following steps S110 to S160.
- Step S110 Obtain a reference view for generating a disparity map and generate a disparity map to be corrected.
- the left view and the right view are acquired by the binocular vision sensor.
- the left view is used as the reference view
- the right view is used as the auxiliary view
- a disparity map corresponding to the left view is generated, and the subsequent correction then operates on this disparity map and the left view.
- the right view can also be used as the reference view, depending on actual needs.
- Step S120 Extract the contour of the target in the reference view.
- the left view is segmented so that each object in the left view is easy to distinguish.
- the contour of the target is extracted from the segmented left view, and the coordinates of the pixels on the contour are saved. It should be noted that when the left view is segmented, the segmentation can also be performed based on pixel-level processing of image morphology.
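To make step S120 concrete, the following is a minimal sketch in Python using NumPy. It is an illustration, not the prescribed implementation: it assumes the segmentation has already produced a label image with one integer segment id per pixel, and the function name `contour_columns` and the row-wise representation are assumptions introduced here.

```python
import numpy as np

def contour_columns(labels):
    """For each row of a 2-D label image (one segment id per pixel of the
    segmented reference view), return the columns where the label changes
    between horizontally adjacent pixels -- a stand-in for the contour
    pixels whose coordinates are saved in step S120."""
    # True where a pixel's label differs from its left neighbour
    change = labels[:, 1:] != labels[:, :-1]
    return [list(np.flatnonzero(change[y]) + 1) for y in range(labels.shape[0])]
```

Only horizontal label changes are detected here, which matches the row-wise processing of the later steps.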
- Step S130 Determine whether each pixel corresponding to the contour in the disparity map is an initial error point according to the difference between the disparity values of each pixel corresponding to the contour and its neighboring pixels in the disparity map.
- the disparity value of each pixel corresponding to the contour, and the disparity values of the pixels adjacent to it, can be extracted from the disparity map.
- here, a pixel adjacent to a given pixel specifically refers to a pixel that is adjacent to it and located in the same row.
- the disparity values of the pixels on the same object are similar. Therefore, a preset threshold can be set, and the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels can be calculated. When the difference between the disparity value of a pixel corresponding to the contour and the disparity value of an adjacent pixel is less than the preset threshold, the two pixels can be considered to belong to the same object; when the difference is greater than the preset threshold, they can be considered to belong to different objects.
- the embodiments of the present disclosure extract disparity values from the disparity map at the contour of the target in the reference view. If the disparity map were not distorted, the two pixels on either side of any pixel on the contour would correspond to different objects; when the above calculation indicates that a pixel corresponding to the contour belongs to the same object as the adjacent pixels on both of its sides, that pixel is determined to be the starting error point.
- Step S140 For any pixel (for example, any one of the pixels corresponding to the contour in the disparity map), when it is determined that the pixel is the initial error point, detect the position of the boundary error point located in the same row as the initial error point.
- FIG. 2 is a schematic diagram of the position of the error point provided by the embodiment of the disclosure.
- the boundary error point is located in the same row as the starting error point, among the pixels of the disparity map corresponding to the contour region.
- the boundary of the error must be found to determine the area to be corrected (for example, area A in FIG. 2). Therefore, on the row where the initial error point is located, taking the initial error point as the starting point and moving along the preset direction, each pixel is checked point by point to see whether it is a normal pixel, that is, whether the pixel and the initial error point correspond to different objects; when a normal pixel is found, it is taken as the boundary error point.
- Step S150 Obtain the disparity value of the preset number of pixel points located on the same row as the boundary error point and on the side of the boundary error point away from the initial error point.
- when the boundary error point is found, the area to be corrected, which consists of the initial error point, the boundary error point, and the pixels between the initial error point and the boundary error point, is obtained.
- the disparity value of the preset number of pixel points outside the area to be corrected is extracted, and used in the correction.
- Step S160 Correct the disparity values of the initial error point, the boundary error point, and the pixel points located between the initial error point and the boundary error point according to the disparity values of the preset number of pixel points.
- the disparity values of the preset number of pixels outside the area to be corrected can be taken as the disparity values that the pixels inside the area to be corrected should have. Therefore, the disparity value of each pixel in the area to be corrected is replaced accordingly to perform the correction.
- the disparity map is obtained by matching the left view and the right view, which can cause the disparity map to be distorted at the contour boundary of the target.
- the starting error point and the boundary error point in the disparity map are determined according to the contour of the target in the reference view, so as to determine the pixel points that need to be corrected. Then, according to the disparity value of the normal pixel point close to the pixel point that needs to be corrected, the pixel point that needs to be corrected is corrected, so as to correct the distorted disparity map and improve the accuracy of forming the disparity map.
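Steps S130 to S160, applied to a single row of the disparity map, can be sketched as follows. This is a hedged illustration: the function name `correct_row`, the list-based row representation, and the parameter names are assumptions introduced here; the threshold of 3, the scan-direction convention, and the 3-pixel average follow the embodiments described later in the text.

```python
def correct_row(disparity_row, contour_cols, threshold=3, direction=1, n=3):
    """Sketch of steps S130-S160 for one row of the disparity map: for each
    contour column, test whether it is a start error point, scan for the
    boundary error point, and overwrite the region with the average of `n`
    pixels beyond the boundary. `direction` is +1 when the left view is the
    reference, -1 when the right view is."""
    row = list(disparity_row)
    for x in contour_cols:
        if not (0 < x < len(row) - 1):
            continue
        d = row[x]
        # start error point: both horizontal neighbours look like the same object
        if abs(row[x - 1] - d) >= threshold or abs(row[x + 1] - d) >= threshold:
            continue
        # scan for the boundary error point
        b = None
        i = x + direction
        while 0 <= i < len(row):
            if abs(row[i] - d) > threshold:
                b = i
                break
            i += direction
        if b is None:
            continue
        # average of n normal pixels just beyond the boundary error point
        ref = [row[b + direction * k] for k in range(1, n + 1)
               if 0 <= b + direction * k < len(row)]
        if not ref:
            continue
        avg = sum(ref) / len(ref)
        for j in range(x, b + direction, direction):
            row[j] = avg
    return row
```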
- FIG. 3 is a flowchart of determining whether a pixel point is an initial error point according to an embodiment of the disclosure. As shown in FIG. 3, when determining whether any pixel point corresponding to the contour in the disparity map is the initial error point, the following steps S131 to S133 may be included.
- Step S131 Extract from the disparity map the first disparity value corresponding to the pixel, the second disparity value corresponding to the adjacent pixel on one side of the pixel, and the third disparity value corresponding to the adjacent pixel on the other side of the pixel. The adjacent pixels on both sides of the pixel are located in the same row as the pixel.
- for example, for a pixel with coordinates (x, y), the first disparity value d of the pixel is extracted from the disparity map, together with the second disparity value d1 corresponding to the adjacent pixel on one side of the pixel and the third disparity value d2 corresponding to the adjacent pixel on the other side; the coordinates of the two adjacent pixels are (x-1, y) and (x+1, y) respectively.
- Step S132 Determine whether the difference between the second disparity value and the first disparity value is less than a preset threshold and whether the difference between the third disparity value and the first disparity value is less than the preset threshold.
- the preset threshold T is a small value, and is generally set to 3 based on experience.
- Step S133 Determine the pixel point as the initial error point.
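Steps S131 to S133 amount to the following check. This is a minimal Python sketch under stated assumptions: the row of the disparity map is given as a list, the threshold defaults to the empirical value 3, and the function name is introduced here for illustration.

```python
def is_start_error_point(row, x, threshold=3):
    """Return True if the pixel at column x is a start (initial) error
    point: both horizontal neighbours have disparity values within
    `threshold` of the pixel's own value, i.e. the contour pixel and the
    pixels on both of its sides appear to belong to the same object."""
    if x - 1 < 0 or x + 1 >= len(row):
        return False  # no neighbour on one side: the test cannot be applied
    d, d1, d2 = row[x], row[x - 1], row[x + 1]  # first, second, third disparity values
    return abs(d1 - d) < threshold and abs(d2 - d) < threshold
```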
- detecting the position of the boundary error point located in the same row as the initial error point in step S140 includes: taking the initial error point as the starting point and, along a preset direction, sequentially extracting from the disparity map the disparity values of the pixels located in the same row as the initial error point, and determining whether the pixel corresponding to each extracted disparity value is a boundary error point.
- FIG. 4 is a flowchart of detecting boundary error points provided by an embodiment of the disclosure. As shown in FIG. 4, during detection, the coordinates of the initial error point are denoted (x, y), and the corresponding first disparity value is d.
- the abscissa of each pixel can represent the number of the column where the pixel is located, and the ordinate can represent the number of the row where the pixel is located.
- i is a non-zero integer.
- step S140 shown in FIG. 1 may include the following steps S141 to S145.
- Step S141 Set the initial value of i to 1 or -1.
- Step S142 Extract the disparity value of the pixel with coordinates (x+i, y) from the disparity map; this disparity value is denoted d_{x+i,y}. That is, extract the disparity value of a pixel located in the same row as the initial error point.
- Step S143 Determine whether the difference between the disparity value d_{x+i,y} and the first disparity value d corresponding to the initial error point is greater than a preset threshold.
- in step S142, the disparity value of the pixel with coordinates (x+i, y) is d_{x+i,y}, where i is an integer whose initial value is 1 or -1.
- the embodiments of the present disclosure may also perform the extraction along a single direction only, for example, only with an initial value of i of 1 or only with an initial value of i of -1, as determined by actual needs.
- in step S143, it is judged whether d and d_{x+i,y} satisfy |d_{x+i,y} - d| > T.
- Step S144 Use the pixel point corresponding to the disparity value as the boundary error point.
- the coordinates of the boundary error point are (x+i,y).
- the initial value of i, and the way the value of i is updated after step S143 determines that the pixel corresponding to the disparity value d_{x+i,y} is an error pixel, depend on the preset direction. The abscissa of a pixel increases gradually from the left of the image to the right. If the preset direction is horizontal to the right, the initial value of i is 1, and after step S143 determines that the pixel corresponding to d_{x+i,y} is an error pixel, the value of i is increased by 1; if the preset direction is horizontal to the left, the initial value of i is -1, and after step S143 determines that the pixel corresponding to d_{x+i,y} is an error pixel, the value of i is decreased by 1.
- area A is the area where the information of the pixels in the left view and the right view is inconsistent.
- when the images are acquired, the acquisition angles of the left view and the right view differ, so when part of the target in one view is occluded by obstacles, the information in the left view and the right view is not exactly the same, and the disparity map obtained by matching the left view and the right view is distorted at the contour boundary of the target.
- the preset direction may be determined according to whether the reference view for generating the disparity map is the left view or the right view. For example, when the disparity map is generated using the left view as the reference view, the preset direction is horizontal to right; when the disparity map is generated using the right view as the reference view, the preset direction is horizontal to the left.
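Steps S141 to S144 can be sketched as a simple scan. This Python fragment is illustrative only: the function name and the `direction` argument (+1 when the left view is the reference, -1 when the right view is) are assumptions introduced here.

```python
def find_boundary_error_point(row, x0, threshold=3, direction=1):
    """Starting from the start error point at column x0, step along
    `direction` and return the column of the first pixel whose disparity
    differs from the start point's by more than `threshold` -- the
    boundary error point. Returns None if no such pixel exists."""
    d = row[x0]       # first disparity value of the start error point
    i = direction     # initial value of i is 1 or -1 (step S141)
    while 0 <= x0 + i < len(row):
        if abs(row[x0 + i] - d) > threshold:  # |d_{x+i,y} - d| > T (step S143)
            return x0 + i
        i += direction
    return None
```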
- step S160 shown in FIG. 1 may include the following steps S161 to S163.
- Step S161 Extract disparity values corresponding to a preset number of pixels from the disparity map.
- Step S162 Calculate the average value of the disparity values corresponding to the preset number of pixels.
- Step S163 Replace the start error point, the boundary error point, and the disparity value of each pixel between the start error point and the boundary error point with the average value.
- for example, the preset number is set to 3. Then the 3 pixels with coordinates (x+i+1, y), (x+i+2, y), and (x+i+3, y) are taken; the disparity values corresponding to these 3 pixels are d3, d4, and d5 respectively, and the average value d6 of d3, d4, and d5 is calculated. The disparity values of the start error point, the boundary error point, and each pixel between the start error point and the boundary error point are replaced with d6 to perform the correction. It should be understood that the preset number is not limited to three; in actual applications, the preset number may be less than or greater than three.
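Steps S161 to S163 reduce to an average-and-replace operation. A minimal Python sketch, under the assumption that the region to be corrected runs from the start error point at column `x_start` to the boundary error point at column `x_boundary` within one row:

```python
def correct_region(row, x_start, x_boundary, preset_number=3):
    """Replace the disparity values of the start error point, the boundary
    error point, and every pixel between them with the average of
    `preset_number` pixels lying just beyond the boundary error point, on
    its side away from the start error point. Modifies `row` in place."""
    step = 1 if x_boundary >= x_start else -1
    ref = [row[x_boundary + step * k] for k in range(1, preset_number + 1)
           if 0 <= x_boundary + step * k < len(row)]
    avg = sum(ref) / len(ref)  # corresponds to d6 in the example above
    for x in range(x_start, x_boundary + step, step):
        row[x] = avg
    return row
```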
- FIG. 5 is a schematic diagram of a disparity map correction device (which may be an electronic device) provided by an embodiment of the disclosure.
- the disparity map correction apparatus may include: an acquisition unit 510, an extractor 520, a judgment unit 530, a detector 540, and a corrector 550.
- the obtaining unit 510 is used for obtaining a reference view for generating a disparity map and for generating a disparity map to be corrected.
- the extractor 520 is used to extract the contour of the target in the reference view.
- the determining unit 530 is configured to determine whether each pixel corresponding to the contour is an initial error point according to the difference between the disparity values of each pixel corresponding to the contour and its neighboring pixels in the disparity map.
- the detector 540 is used to detect, when it is determined that a pixel is the initial error point, the position of the boundary error point located in the same row as the initial error point.
- the corrector 550 is used to obtain the disparity values of a preset number of pixels located in the same row as the boundary error point and on the side of the boundary error point away from the initial error point, and to correct, according to the disparity values of the preset number of pixels, the disparity values of the initial error point, the boundary error point, and the pixels located between the initial error point and the boundary error point.
- the starting error point and the boundary error point in the disparity map are determined, so as to identify the pixels that need to be corrected; then, according to the disparity values of the normal pixels close to the pixels that need to be corrected, those pixels are corrected, thereby correcting the distorted disparity map and improving the accuracy of forming the disparity map.
- the judgment unit 530 includes: a first extraction unit 531 and a first judgment unit 532.
- the first extraction unit 531 is configured to extract, from the disparity map, the first disparity value corresponding to the pixel, the second disparity value corresponding to the adjacent pixel on one side of the pixel, and the third disparity value corresponding to the adjacent pixel on the other side of the pixel; the adjacent pixels on both sides of the pixel are located in the same row as the pixel.
- the first determining unit 532 is configured to determine whether the pixel point is the initial error point according to the first disparity value, the second disparity value, the third disparity value, and the preset threshold. When the difference between the second disparity value and the first disparity value is less than the preset threshold, and the difference between the third disparity value and the first disparity value is less than the preset threshold, the pixel point is determined to be the initial error point.
- the detector 540 includes: a second extraction unit 541 and a second judgment unit 542.
- the second extracting unit 541 is configured to take the initial error point as a starting point and sequentially extract the disparity values corresponding to the pixel points in the same row as the initial error point from the disparity map along a preset direction.
- the second determining unit 542 is used to determine whether the difference between the disparity value and the first disparity value is greater than a preset threshold. If the difference between the disparity value and the first disparity value is greater than the preset threshold value, stop extracting the disparity value, and use the pixel point corresponding to the disparity value as the boundary error point.
- the corrector 550 includes a third extraction unit 551, a calculation unit 552, and a correction unit 553.
- the third extracting unit 551 is configured to extract the disparity value corresponding to a preset number of pixels from the disparity map.
- the calculation unit 552 is configured to calculate the average value of the disparity values corresponding to the preset number of pixels.
- the correction unit 553 is used to replace the starting error point, the boundary error point, and the disparity value of each pixel between the starting error point and the boundary error point with the average value.
- the disparity map correction device may further include a memory 560.
- the memory 560 may be connected to the acquiring unit 510, for example, and is used for storing the reference view, the disparity map, the correspondence between each pixel of the reference view and each pixel of the disparity map, the coordinates of each pixel in the disparity map and its corresponding disparity value, the starting error point, the boundary error point, and other related data and computer programs.
- each component of the disparity map correction device shown in FIG. 5 may be implemented in a hardware manner, or may be implemented in a combination of hardware and software.
- each component of the disparity map correction device shown in FIG. 5 may be a central processing unit (CPU), an application processor (AP), a digital signal processor (DSP), a field-programmable gate array (FPGA), a microcontroller unit (MCU), an integrated circuit (IC), or an application-specific integrated circuit (ASIC).
- the various components of the disparity map correction device shown in FIG. 5 can be implemented by a combination of a processor, a memory, and a computer program.
- the computer program is stored in the memory; the processor reads the computer program from the memory and executes it, thereby serving as the various components of the disparity map correction device shown in FIG. 5.
- the embodiment of the present disclosure also provides a terminal (for example, an electronic device), and the terminal may include: at least one processor and a memory.
- the memory is used to store computer executable instructions.
- the at least one processor executes the computer executable instructions stored in the memory, so that the terminal can execute the above-mentioned disparity map correction method.
- the terminal may be a mobile phone, a notebook computer, a personal computer, a server, etc.
- FIG. 6 shows a schematic diagram of the hardware structure of an electronic device for performing a disparity map correction method provided by an embodiment of the present disclosure.
- the electronic device may include one or more processors 610 and a memory 620.
- one processor 610 is taken as an example.
- the processor 610 and the memory 620 may be connected through a bus or in other ways. In FIG. 6, the connection through a bus is taken as an example.
- the memory 620 can be used to store software programs, computer-executable programs/modules (such as the program instructions/modules corresponding to the disparity map correction method in the embodiments of the present disclosure), and related data (as mentioned above).
- the processor 610 executes various functional applications and data processing herein by running software programs, instructions, and modules stored in the memory 620, that is, realizes the disparity map correction method in the above-mentioned embodiment of the invention.
- the memory 620 may include a program storage area and a data storage area.
- the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to any of the above methods.
- the memory 620 may include a high-speed random access memory, and may also include a non-transitory memory, such as a magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
- the memory 620 may optionally include a memory remotely provided with respect to the processor 610, and these remote memories may be connected to a processor running any of the above methods through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
- One or more computer program modules may be stored in the memory 620, and when executed by one or more processors 610, implement the above-mentioned disparity map correction method.
- the embodiments of the present disclosure also provide a non-transitory computer-readable storage medium, on which computer-executable instructions are stored; when the computer-executable instructions are executed by a processor, the aforementioned disparity map correction method is implemented.
- the disclosed device and method may be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of components is only a logical function division. In actual implementation, there may be other division methods.
- multiple components may be combined or integrated into another system, or some features may be ignored, or some steps may be omitted.
- the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some communication interfaces, or through indirect coupling or communication connections between devices or units, and may be in electrical, mechanical, or other forms.
- the components described as separate components may or may not be physically separated, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the components may be selected according to actual needs to implement the method or device of the embodiments of the present disclosure.
- various functional components in the embodiments of the present disclosure may be integrated into one processing component, or each component may exist alone physically, or two or more components may be integrated into one component.
- if the method is implemented in the form of a computer program and sold or used as an independent product, it can be stored in a non-transitory computer-readable storage medium.
- the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods in the various embodiments of the present disclosure.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
- the disparity map correction method, the disparity map correction device, the terminal, and the non-transitory computer-readable storage medium provided by the above-mentioned embodiments of the present disclosure can at least achieve the following beneficial technical effect: distortion in disparity maps formed in the prior art is corrected.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (16)
- A disparity map correction method, the method comprising: acquiring a reference view used for generating a disparity map and generating a disparity map to be corrected; extracting a contour of a target object in the reference view; determining, according to differences between the disparity value of each pixel in the disparity map corresponding to the contour and the disparity values of its adjacent pixels, whether each pixel in the disparity map corresponding to the contour is a starting error point; for any one of the pixels in the disparity map corresponding to the contour, when it is determined that the pixel is the starting error point, detecting the position of a boundary error point located in the same row as the starting error point; and acquiring disparity values of a preset number of pixels that are located in the same row as the boundary error point and on a side of the boundary error point away from the starting error point, and correcting, according to the disparity values of the preset number of pixels, the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point.
- The disparity map correction method according to claim 1, wherein the determining, according to differences between the disparity value of each pixel in the disparity map corresponding to the contour and the disparity values of its adjacent pixels, whether each pixel in the disparity map corresponding to the contour is a starting error point comprises: for any pixel in the disparity map corresponding to the contour, extracting from the disparity map a first disparity value corresponding to the pixel, a second disparity value corresponding to an adjacent pixel on one side of the pixel, and a third disparity value corresponding to an adjacent pixel on the other side of the pixel, wherein the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel; determining whether the difference between the second disparity value and the first disparity value is less than a preset threshold and whether the difference between the third disparity value and the first disparity value is less than the preset threshold; and when the difference between the second disparity value and the first disparity value is less than the preset threshold and the difference between the third disparity value and the first disparity value is less than the preset threshold, determining that the pixel is a starting error point.
- The disparity map correction method according to claim 1 or 2, wherein the detecting the position of the boundary error point located in the same row as the starting error point comprises: taking the starting error point as a start point, sequentially extracting from the disparity map, along a preset direction, disparity values of pixels located in the same row as the starting error point; for each extracted disparity value, determining whether the difference between the disparity value and a first disparity value corresponding to the starting error point is greater than a preset threshold; and if the difference between the disparity value and the first disparity value corresponding to the starting error point is greater than the preset threshold, taking the pixel corresponding to the disparity value as the boundary error point.
- The disparity map correction method according to any one of claims 1 to 3, wherein the correcting, according to the disparity values of the preset number of pixels, the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point comprises: extracting, from the disparity map, the disparity values corresponding to the preset number of pixels; calculating an average of the disparity values corresponding to the preset number of pixels; and replacing, with the average, the disparity value of each of the starting error point, the boundary error point, and every pixel located between the starting error point and the boundary error point.
- The disparity map correction method according to any one of claims 1 to 4, wherein the preset number is 3.
- The disparity map correction method according to claim 2 or 3, wherein the preset threshold is 3.
- The disparity map correction method according to claim 3, wherein the reference view is a left view or a right view; and when the reference view is the left view, the preset direction is horizontally to the right; when the reference view is the right view, the preset direction is horizontally to the left.
- A disparity map correction device, the device comprising: an acquisition unit, an extractor, a determination unit, a detector, and a corrector; wherein the acquisition unit is configured to acquire a reference view used for generating a disparity map and generate a disparity map to be corrected; the extractor is configured to extract a contour of a target object in the reference view; the determination unit is configured to determine, according to differences between the disparity value of each pixel in the disparity map corresponding to the contour and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour is a starting error point; for any one of the pixels in the disparity map corresponding to the contour, the detector is configured to detect, when it is determined that the pixel is the starting error point, the position of a boundary error point located in the same row as the starting error point; and the corrector is configured to acquire disparity values of a preset number of pixels that are located in the same row as the boundary error point and on a side of the boundary error point away from the starting error point, and to correct, according to the disparity values of the preset number of pixels, the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point.
- The disparity map correction device according to claim 8, wherein the determination unit comprises: a first extraction unit and a first determination unit; for any pixel in the disparity map corresponding to the contour, the first extraction unit is configured to extract from the disparity map a first disparity value corresponding to the pixel, a second disparity value corresponding to an adjacent pixel on one side of the pixel, and a third disparity value corresponding to an adjacent pixel on the other side of the pixel, wherein the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel; and the first determination unit is configured to determine, according to the first disparity value, the second disparity value, the third disparity value, and a preset threshold, whether the pixel is a starting error point; when the difference between the second disparity value and the first disparity value is less than the preset threshold and the difference between the third disparity value and the first disparity value is less than the preset threshold, the pixel is determined to be a starting error point.
- The disparity map correction device according to claim 8 or 9, wherein the detector comprises: a second extraction unit and a second determination unit; the second extraction unit is configured to sequentially extract from the disparity map, taking the starting error point as a start point and along a preset direction, disparity values of pixels located in the same row as the starting error point; and for each extracted disparity value, the second determination unit is configured to determine whether the difference between the disparity value and the first disparity value is greater than a preset threshold; if the difference between the disparity value and the first disparity value is greater than the preset threshold, extraction of disparity values is stopped, and the pixel corresponding to the disparity value is taken as the boundary error point.
- The disparity map correction device according to any one of claims 8 to 10, wherein the corrector comprises: a third extraction unit, a calculation unit, and a correction unit; the third extraction unit is configured to extract, from the disparity map, the disparity values corresponding to the preset number of pixels; the calculation unit is configured to calculate an average of the disparity values corresponding to the preset number of pixels; and the correction unit is configured to replace, with the average, the disparity value of each of the starting error point, the boundary error point, and every pixel located between the starting error point and the boundary error point.
- The disparity map correction device according to any one of claims 8 to 11, wherein the preset number is 3.
- The disparity map correction device according to claim 9 or 10, wherein the preset threshold is 3.
- The disparity map correction device according to claim 10, wherein the reference view is a left view or a right view; and when the reference view is the left view, the preset direction is horizontally to the right; when the reference view is the right view, the preset direction is horizontally to the left.
- A terminal, comprising: at least one processor and a memory; wherein the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the terminal performs the disparity map correction method according to any one of claims 1 to 7.
- A non-transitory computer-readable storage medium, wherein computer-executable instructions are stored on the non-transitory computer-readable storage medium, and when the computer-executable instructions are executed by a processor, the disparity map correction method according to any one of claims 1 to 7 is implemented.
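The method claims above describe a row-wise procedure: flag a contour pixel as a starting error point when the expected disparity jump is missing on both sides, scan along the row for the boundary error point where the disparity finally differs from the starting value by more than the threshold, then overwrite the whole error segment with the average of the preset number of pixels beyond the boundary. The following is a minimal sketch of that logic, assuming the disparity map is a 2D NumPy array; all names (`correct_disparity_row`, `contour_cols`, etc.) are illustrative and do not appear in the patent.

```python
import numpy as np

def correct_disparity_row(disp, row, contour_cols,
                          threshold=3.0, preset_n=3, direction=1):
    """Correct one row of a disparity map `disp` (2D float array).

    contour_cols: columns in `row` that lie on the target contour.
    direction: +1 scans right (left reference view), -1 scans left.
    threshold=3 and preset_n=3 follow the dependent claims.
    """
    _, w = disp.shape
    for c in sorted(contour_cols):
        if not (0 < c < w - 1):
            continue
        d1 = float(disp[row, c])        # first disparity value
        d2 = float(disp[row, c - 1])    # neighbour on one side
        d3 = float(disp[row, c + 1])    # neighbour on the other side
        # Starting error point: the disparity jump expected at an object
        # contour is absent on both sides (both differences below threshold).
        if not (abs(d2 - d1) < threshold and abs(d3 - d1) < threshold):
            continue
        # Scan along the preset direction for the boundary error point:
        # the first pixel whose disparity differs from d1 by more than
        # the threshold.
        boundary, col = None, c + direction
        while 0 <= col < w:
            if abs(float(disp[row, col]) - d1) > threshold:
                boundary = col
                break
            col += direction
        if boundary is None:
            continue
        # Average the preset number of pixels on the far side of the
        # boundary error point and overwrite the whole error segment.
        far = [boundary + direction * (k + 1) for k in range(preset_n)]
        far = [f for f in far if 0 <= f < w]
        if len(far) < preset_n:
            continue
        mean = sum(float(disp[row, f]) for f in far) / preset_n
        lo, hi = min(c, boundary), max(c, boundary)
        disp[row, lo:hi + 1] = mean
    return disp
```

For example, in a row `[20, 20, 20, 20, 20, 20, 10, 10, ...]` with the contour at column 4, the flat neighbourhood marks column 4 as a starting error point, the scan finds the boundary at column 6, and columns 4 to 6 are replaced by the average of columns 7 to 9. A contour pixel that already sits on a genuine disparity jump is left untouched.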
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910533023.1 | 2019-06-19 | ||
CN201910533023.1A CN112116660B (zh) | 2019-06-19 | 2019-06-19 | Disparity map correction method, device, terminal and computer-readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020253805A1 true WO2020253805A1 (zh) | 2020-12-24 |
Family
ID=73796654
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/096963 WO2020253805A1 (zh) | 2020-06-19 | Disparity map correction method, device, terminal, and non-transitory computer-readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112116660B (zh) |
WO (1) | WO2020253805A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114440775B (zh) * | 2021-12-29 | 2024-08-20 | 全芯智造技术有限公司 | Method and device for calculating offset error of feature size, storage medium, and terminal |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104252701A (zh) * | 2013-06-28 | 2014-12-31 | Ricoh Co., Ltd. | Method and system for correcting disparity map |
CN104915927A (zh) * | 2014-03-11 | 2015-09-16 | Ricoh Co., Ltd. | Disparity image optimization method and apparatus |
CN105023263A (zh) * | 2014-04-22 | 2015-11-04 | Nanjing University of Science and Technology | Occlusion detection and disparity correction method based on region growing |
CN105631887A (zh) * | 2016-01-18 | 2016-06-01 | Wuhan University of Technology | Two-step disparity refinement method and system based on adaptive support-weight matching algorithm |
US20170163960A1 (en) * | 2014-06-12 | 2017-06-08 | Toyota Jidosha Kabushiki Kaisha | Disparity image generating device, disparity image generating method, and image |
CN108401460A (zh) * | 2017-09-29 | 2018-08-14 | SZ DJI Technology Co., Ltd. | Method, system, storage medium and computer program product for generating a disparity map |
CN109215044A (zh) * | 2017-06-30 | 2019-01-15 | BOE Technology Group Co., Ltd. | Image processing method and system, storage medium, and mobile system |
CN109859253A (zh) * | 2018-12-17 | 2019-06-07 | Shenzhen Autel Intelligent Aviation Technology Co., Ltd. | Stereo matching method and apparatus, and electronic device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180016823A (ko) * | 2016-08-08 | 2018-02-20 | Electronics and Telecommunications Research Institute | Image correction apparatus and method |
CN107909036B (zh) * | 2017-11-16 | 2020-06-23 | Hisense Group Co., Ltd. | Road detection method and device based on disparity map |
-
2019
- 2019-06-19 CN CN201910533023.1A patent/CN112116660B/zh active Active
-
2020
- 2020-06-19 WO PCT/CN2020/096963 patent/WO2020253805A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN112116660B (zh) | 2024-03-29 |
CN112116660A (zh) | 2020-12-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105374019B (zh) | Multi-depth map fusion method and device | |
CN111066065B (zh) | System and method for hybrid depth regularization | |
US9070042B2 (en) | Image processing apparatus, image processing method, and program thereof | |
US20180300937A1 (en) | System and a method of restoring an occluded background region | |
WO2019102442A1 (en) | Systems and methods for 3d facial modeling | |
CN107316326B (zh) | Edge-based disparity map calculation method and device applied to binocular stereo vision | |
CN107392958A (zh) | Method and device for determining object volume based on binocular stereo camera | |
CN102523464A (zh) | Depth image estimation method for binocular stereo video | |
WO2020119467A1 (zh) | Method and device for generating high-precision dense depth image | |
CN111160232B (zh) | Frontal face reconstruction method, device and system | |
TWI553591B (zh) | Depth image processing method and depth image processing system | |
CN111882655A (zh) | Three-dimensional reconstruction method, device, system, computer equipment and storage medium | |
WO2022142139A1 (zh) | Projection surface selection and projection image correction method, device, projector and medium | |
CN112802081A (zh) | Depth detection method and device, electronic equipment and storage medium | |
CN116029996A (zh) | Stereo matching method and device, and electronic equipment | |
CN106558038B (zh) | Water-sky line detection method and device | |
CN107155100B (zh) | Image-based stereo matching method and device | |
WO2020253805A1 (zh) | Disparity map correction method, device, terminal, and non-transitory computer-readable storage medium | |
CN108805841B (zh) | Color-image-guided depth map restoration and viewpoint synthesis optimization method | |
CN110800020B (zh) | Image information acquisition method, image processing device and computer storage medium | |
CN111833441A (zh) | Face three-dimensional reconstruction method and device based on multi-camera system | |
CN111383185A (zh) | Hole filling method based on dense disparity map and vehicle-mounted device | |
CN112053434B (zh) | Disparity map generation method, three-dimensional reconstruction method and related devices | |
CN112233164B (zh) | Disparity map error point identification and correction method | |
CN111630569A (zh) | Binocular matching method, visual imaging device, and device with storage function |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20825661 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20825661 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.07.2022) |
|