WO2020253805A1 - Disparity map correction method, device, terminal and non-transitory computer-readable storage medium - Google Patents

Disparity map correction method, device, terminal and non-transitory computer-readable storage medium

Info

Publication number
WO2020253805A1
Authority
WO
WIPO (PCT)
Prior art keywords
disparity
pixel
error point
point
value
Prior art date
Application number
PCT/CN2020/096963
Other languages
English (en)
French (fr)
Inventor
孙一郎
侯一凡
Original Assignee
京东方科技集团股份有限公司
北京京东方光电科技有限公司
Priority date
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.) and 北京京东方光电科技有限公司 (Beijing BOE Optoelectronics Technology Co., Ltd.)
Publication of WO2020253805A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration

Definitions

  • the present disclosure relates to the field of image processing, and in particular to a disparity map correction method, a disparity map correction device, a terminal, and a non-transitory computer-readable storage medium.
  • Binocular stereo matching has always been a research hotspot of binocular vision.
  • A binocular camera captures left and right viewpoint images of the same scene; a stereo matching algorithm is used to obtain a disparity map, from which a depth map is then obtained.
  • Depth maps have a very wide range of applications. Because a depth map records the distance between the objects in the scene and the camera, it can be used for measurement, three-dimensional reconstruction, and synthesis of virtual viewpoints.
  • the embodiments of the present disclosure provide a disparity map correction method, a disparity map correction device, a terminal, and a non-transitory computer-readable storage medium.
  • a first aspect of the present disclosure provides a disparity map correction method, the method including: acquiring a reference view used for generating a disparity map and generating a disparity map to be corrected; extracting a contour of a target in the reference view; determining, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour in the disparity map is a starting error point; for any one of the pixels corresponding to the contour, when it is determined that the pixel is the starting error point, detecting the position of a boundary error point located in the same row as the starting error point; and acquiring the disparity values of a preset number of pixels located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point, and correcting the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels.
  • determining whether each pixel corresponding to the contour in the disparity map is the starting error point, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, includes: for any pixel corresponding to the contour in the disparity map, extracting from the disparity map a first disparity value corresponding to the pixel, a second disparity value corresponding to the adjacent pixel on one side of the pixel, and a third disparity value corresponding to the adjacent pixel on the other side of the pixel, wherein the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel; determining whether the difference between the second disparity value and the first disparity value is less than a preset threshold and whether the difference between the third disparity value and the first disparity value is less than the preset threshold; and when both differences are less than the preset threshold, determining that the pixel is the starting error point.
  • detecting the position of the boundary error point located in the same row as the starting error point includes: taking the starting error point as a starting point and, along a preset direction, sequentially extracting from the disparity map the disparity values of the pixels located in the same row as the starting error point; for each extracted disparity value, determining whether the difference between that disparity value and the first disparity value corresponding to the starting error point is greater than a preset threshold; and if the difference is greater than the preset threshold, taking the pixel corresponding to that disparity value as the boundary error point.
  • correcting the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels includes: extracting from the disparity map the disparity values corresponding to the preset number of pixels; calculating the average value of the disparity values corresponding to the preset number of pixels; and replacing the disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point with the average value.
  • the preset number is three.
  • the preset threshold is 3.
  • the reference view is a left view or a right view; and
  • in a case where the reference view is the left view, the preset direction is the horizontal rightward direction; in a case where the reference view is the right view, the preset direction is the horizontal leftward direction.
  • a second aspect of the present disclosure provides a disparity map correction device, the device includes: an acquisition unit, an extractor, a judgment unit, a detector, and a corrector; wherein,
  • the acquiring unit is configured to acquire a reference view for generating a disparity map and generate a disparity map to be corrected;
  • the extractor is used to extract the contour of the target in the reference view
  • the judgment unit is configured to determine, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour is a starting error point;
  • for any one of the pixels corresponding to the contour in the disparity map, the detector is configured to detect, when it is determined that the pixel is the starting error point, the position of the boundary error point located in the same row as the starting error point; and
  • the corrector is configured to acquire the disparity values of a preset number of pixels which are located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point, and to correct the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels.
  • the judgment unit includes: a first extraction unit and a first judgment unit;
  • for any pixel corresponding to the contour in the disparity map, the first extraction unit is configured to extract from the disparity map the first disparity value corresponding to the pixel, the second disparity value corresponding to the adjacent pixel on one side of the pixel, and the third disparity value corresponding to the adjacent pixel on the other side of the pixel; wherein the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel; and
  • the first judgment unit is configured to determine whether the pixel is the starting error point according to the first disparity value, the second disparity value, the third disparity value, and a preset threshold; when the difference between the second disparity value and the first disparity value is less than the preset threshold, and the difference between the third disparity value and the first disparity value is less than the preset threshold, it is determined that the pixel is the starting error point.
  • the detector includes: a second extraction unit and a second judgment unit;
  • the second extraction unit is configured to take the starting error point as a starting point and, along a preset direction, sequentially extract from the disparity map the disparity values corresponding to the pixels located in the same row as the starting error point; and
  • for each extracted disparity value, the second judgment unit is configured to determine whether the difference between that disparity value and the first disparity value is greater than a preset threshold; if the difference is greater than the preset threshold, extraction of disparity values is stopped, and the pixel corresponding to that disparity value is taken as the boundary error point.
  • the corrector includes: a third extraction unit, a calculation unit, and a correction unit;
  • the third extraction unit is configured to extract the disparity value corresponding to the preset number of pixels from the disparity map
  • the calculation unit is configured to calculate the average value of the disparity values corresponding to the preset number of pixels.
  • the correction unit is configured to replace the disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point with the average value.
  • the preset number is three.
  • the preset threshold is 3.
  • the reference view is a left view or a right view; and
  • in a case where the reference view is the left view, the preset direction is the horizontal rightward direction; in a case where the reference view is the right view, the preset direction is the horizontal leftward direction.
  • a third aspect of the present disclosure provides a terminal, including: at least one processor and a memory; wherein,
  • the memory stores computer executable instructions
  • the at least one processor executes the computer executable instructions stored in the memory, so that the terminal executes the disparity map correction method according to any one of the embodiments of the first aspect of the present disclosure.
  • a fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium having computer-executable instructions stored thereon, and when the computer-executable instructions are executed by a processor, The disparity map correction method according to any one of the embodiments of the first aspect of the present disclosure is implemented.
  • FIG. 1 is a flowchart of a disparity map correction method provided by an embodiment of the disclosure
  • FIG. 2 is a schematic diagram of the position of the error point provided by the embodiment of the disclosure.
  • FIG. 3 is a flowchart of determining whether a pixel is a starting error point according to an embodiment of the disclosure;
  • FIG. 4 is a flowchart of detecting a boundary error point according to an embodiment of the disclosure; and
  • FIG. 5 is a schematic diagram of a disparity map correction device provided by an embodiment of the disclosure.
  • FIG. 6 shows a schematic diagram of the hardware structure of an electronic device for performing a disparity map correction method provided by an embodiment of the present disclosure.
  • The inventors of the present inventive concept found that, in the process of obtaining a disparity map from the left and right viewpoint images, inconsistency of the information at the contour of the target in the left and right viewpoint images causes the disparity map to be distorted, which seriously affects binocular vision applications.
  • FIG. 1 is a flowchart of the disparity map correction method provided by the embodiment of the disclosure.
  • the disparity map correction method may include the following steps S110 to S160.
  • Step S110 Obtain a reference view for generating a disparity map and generate a disparity map to be corrected.
  • In this step, a left view and a right view are acquired by a binocular vision sensor. In the embodiment of the present disclosure, the left view is used as the reference view and the right view is used as the auxiliary view; a disparity map corresponding to the left view is generated, and then the disparity map and the left view are extracted. Of course, in the process of generating the disparity map, the right view may also be used as the reference view, depending on actual needs.
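  • For illustration only, the following is a minimal Python sketch of this step. The disclosure does not prescribe any particular sensor, stereo matcher, or library; the use of OpenCV's StereoSGBM matcher and the function name below are assumptions.

```python
# Minimal sketch (not part of the disclosure): obtain a left/right view pair
# and a disparity map to be corrected, using OpenCV's StereoSGBM matcher.
import cv2
import numpy as np

def acquire_reference_view_and_disparity(left_path: str, right_path: str):
    """Return (reference_view, disparity_map) with the left view as the reference."""
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)    # reference view
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)  # auxiliary view

    # Semi-global block matching; these parameters are illustrative only.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)

    # StereoSGBM returns disparities scaled by 16 as int16; convert to pixel units.
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    return left, disparity
```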
  • Step S120 Extract the contour of the target in the reference view.
  • In this step, based on a pixel-level image segmentation algorithm, the left view is segmented so that the objects in the left view are easy to distinguish. The contour of the target is extracted from the segmented left view, and the coordinates of the pixels on the contour of the target are saved. It should be noted that, when the left view is segmented, the segmentation may also be performed based on pixel-level processing of image morphology.
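  • As an illustrative sketch of this step only, assuming OpenCV for segmentation and contour extraction (the disclosure only requires a pixel-level segmentation of the reference view followed by contour extraction; any segmentation method could be substituted):

```python
# Minimal sketch (assumption: OpenCV 4.x): segment the reference view and save
# the coordinates of the pixels on the target's contour.
import cv2
import numpy as np

def extract_target_contour_pixels(reference_view: np.ndarray) -> np.ndarray:
    """Return an (N, 2) array of (x, y) coordinates of pixels on the target contour."""
    # A simple pixel-level segmentation; a morphology-based segmentation could be
    # used instead, as noted in the disclosure.
    _, mask = cv2.threshold(reference_view, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=int)
    largest = max(contours, key=cv2.contourArea)   # take the target's contour
    return largest.reshape(-1, 2)                  # saved as (x, y) coordinates
```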
  • Step S130 Determine whether each pixel corresponding to the contour in the disparity map is an initial error point according to the difference between the disparity values of each pixel corresponding to the contour and its neighboring pixels in the disparity map.
  • In this step, according to the contour of the target, the disparity value of each pixel corresponding to the contour, as well as the disparity values of the pixels adjacent to that pixel, can be extracted from the corresponding positions in the disparity map. For any pixel corresponding to the contour in the disparity map, the adjacent pixels here refer in particular to the pixels that are adjacent to it and located in the same row.
  • For the same object, the disparity values of the pixels on the object are similar. Therefore, a preset threshold can be set, and the difference between the disparity value of a pixel corresponding to the contour in the disparity map and the disparity value of an adjacent pixel can be calculated. When the difference between the two disparity values is less than the preset threshold, the pixels can be considered to belong to the same object; when the difference is greater than the preset threshold, they can be considered to belong to different objects.
  • Since the embodiment of the present disclosure extracts disparity values from the disparity map based on the contour of the target in the reference view, if the disparity map were not distorted, the two pixels on the two sides of any pixel on the contour should correspond to different objects; when the above calculation indicates that a certain pixel corresponding to the contour belongs to the same object as the adjacent pixels on both of its sides, that pixel is determined to be the starting error point.
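  • A minimal sketch of this determination, using the first, second, and third disparity values d, d1, d2 and the preset threshold T introduced in the steps below (the function name is illustrative only):

```python
# Minimal sketch (not part of the disclosure): decide whether a contour pixel at
# (x, y) is a starting error point. d1 and d2 are the disparities of its two
# same-row neighbours; T is the preset threshold (the disclosure suggests T = 3).
import numpy as np

def is_starting_error_point(disparity: np.ndarray, x: int, y: int, T: float = 3.0) -> bool:
    h, w = disparity.shape
    if x - 1 < 0 or x + 1 >= w:
        return False                      # no neighbour on one side: treat as normal
    d = disparity[y, x]                   # first disparity value
    d1 = disparity[y, x - 1]              # second disparity value (one side)
    d2 = disparity[y, x + 1]              # third disparity value (other side)
    # Belonging to the same object on both sides of a contour pixel indicates distortion.
    return abs(d - d1) < T and abs(d - d2) < T
```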
  • Step S140 For any pixel (for example, any one of the pixels corresponding to the contour in the disparity map), when it is determined that the pixel is the starting error point, detect the position of the boundary error point located in the same row as the starting error point.
  • FIG. 2 is a schematic diagram of the position of the error point provided by the embodiment of the disclosure.
  • the boundary error point may be located in the same row as the starting error point and correspond to the pixel points of the contour of the disparity map.
  • After the starting error point is determined, the boundary of the error must still be found in order to determine the area to be corrected (for example, area A in FIG. 2). Therefore, on the row where the starting error point is located, taking the starting error point as the starting point and moving along the preset direction, each pixel is checked point by point to see whether it is a normal pixel, that is, whether the pixel and the starting error point correspond to different objects; when such a normal pixel is found, it is taken as the boundary error point.
  • Step S150 Obtain the disparity value of the preset number of pixel points located on the same row as the boundary error point and on the side of the boundary error point away from the initial error point.
  • In this step, when the boundary error point is found, the area to be corrected, which is composed of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point, is obtained.
  • Along the direction from the starting error point to the boundary error point, the disparity values of the preset number of pixels outside the area to be corrected are extracted and used in the correction.
  • Step S160 Correct the disparity values of the initial error point, the boundary error point, and the pixel points located between the initial error point and the boundary error point according to the disparity values of the preset number of pixel points.
  • In this step, after the area to be corrected has been determined, the disparity values of the preset number of pixels extracted outside the area to be corrected can be considered to be the disparity values that the pixels in the area to be corrected should have. Therefore, the disparity value of each pixel in the area to be corrected is replaced according to these disparity values so as to perform the correction.
  • When images are captured, the acquisition angles of the left view and the right view are different; when the target in one of the views is occluded by an obstacle, the information contained in the left view and the right view will not be exactly the same. In the prior art, the disparity map obtained by matching the left view and the right view is therefore distorted at the boundary of the contour of the target.
  • the starting error point and the boundary error point in the disparity map are determined according to the contour of the target in the reference view, so as to determine the pixel points that need to be corrected. Then, according to the disparity value of the normal pixel point close to the pixel point that needs to be corrected, the pixel point that needs to be corrected is corrected, so as to correct the distorted disparity map and improve the accuracy of forming the disparity map.
  • FIG. 3 is a flowchart of determining whether a pixel point is an initial error point according to an embodiment of the disclosure. As shown in FIG. 3, when determining whether any pixel point corresponding to the contour in the disparity map is the initial error point, the following steps S131 to S133 may be included.
  • Step S131 Extract from the disparity map the first disparity value corresponding to the pixel, the second disparity value corresponding to the adjacent pixel on one side of the pixel, and the third disparity value corresponding to the adjacent pixel on the other side of the pixel. For example, the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel.
  • In this step, for any pixel corresponding to the contour in the disparity map, according to the coordinates (x, y) of the pixel, the first disparity value d of the pixel is extracted from the disparity map, together with the second disparity value d1 corresponding to the adjacent pixel on one side of the pixel and the third disparity value d2 corresponding to the adjacent pixel on the other side of the pixel. For example, the coordinates of the two adjacent pixels are (x-1, y) and (x+1, y), respectively.
  • Step S132 Determine whether the difference between the second disparity value and the first disparity value is less than a preset threshold and whether the difference between the third disparity value and the first disparity value is less than the preset threshold.
  • In this step, it is determined whether d, d1, and d2 satisfy |d - d1| < T and |d - d2| < T; if the result of the determination is yes, step S133 is performed; if the result is no, the pixel is considered to be a normal pixel. In one embodiment, the preset threshold T is a small value, and is generally set to 3 based on experience.
  • Step S133 Determine the pixel point as the initial error point.
  • In an exemplary embodiment, detecting the position of the boundary error point located in the same row as the starting error point in step S140 includes: taking the starting error point as the starting point and, along the preset direction, sequentially extracting from the disparity map the disparity values of the pixels located in the same row as the starting error point, and determining whether the pixel corresponding to each extracted disparity value is the boundary error point.
  • FIG. 4 is a flowchart of detecting a boundary error point provided by an embodiment of the disclosure. As shown in FIG. 4, for example, during detection, the coordinates of the starting error point are denoted as (x, y), and the corresponding first disparity value is d.
  • the abscissa of each pixel can represent the number of the column where the pixel is located, and the ordinate can represent the number of the row where the pixel is located.
  • i is a non-zero integer.
  • step S140 shown in FIG. 1 may include the following steps S141 to S145.
  • Step S141 Set the initial value of i to 1 or -1.
  • Step S142 Extract from the disparity map the disparity value of the pixel with coordinates (x+i, y); this disparity value is denoted as d x+i,y . That is, extract the disparity value of a pixel located in the same row as the starting error point.
  • Step S143 Determine whether the difference between the disparity value d x+i,y and the first disparity value d corresponding to the initial error point is greater than a preset threshold.
  • In this step, suppose the disparity value of the pixel with coordinates (x+i, y) is d x+i,y , where i is an integer and the initial value of i is 1 or -1. On the row where the starting error point is located, taking the starting error point as the starting point, disparity values are extracted pixel by pixel toward the left side and the right side of the starting error point, respectively, and the determination below is performed after each extraction.
  • the embodiment of the present disclosure may also perform extraction along a certain direction, for example, along the direction where the initial value of i is only 1 or along the direction where the initial value of i is only -1, which is specifically determined according to actual needs.
  • In this step, it is determined whether d and d x+i,y satisfy |d - d x+i,y | > T. If yes, step S144 is performed; if no, the pixel corresponding to the disparity value d x+i,y is determined to be an error pixel, and in step S145 the value of i is increased or decreased by 1 and the process returns to step S142; this is repeated until the boundary error point is found.
  • Step S144 Use the pixel point corresponding to the disparity value as the boundary error point.
  • the coordinates of the boundary error point are (x+i,y).
  • In step S140, the initial value of i, and whether the value of i is increased or decreased by 1 after it is determined in step S143 that the pixel corresponding to the disparity value d x+i,y is an error pixel, may be determined according to the preset direction. For example, the abscissa of a pixel gradually increases from the left side of the image to the right side. If the preset direction is horizontally to the right, the initial value of i is 1, and after it is determined in step S143 that the pixel corresponding to the disparity value d x+i,y is an error pixel, the value of i is increased by 1; if the preset direction is horizontally to the left, the initial value of i is -1, and after it is determined in step S143 that the pixel corresponding to the disparity value d x+i,y is an error pixel, the value of i is decreased by 1.
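  • A minimal sketch of steps S141 to S145 (not part of the disclosure; the function and parameter names are assumptions, and the step argument plays the role of the initial value of i):

```python
# Minimal sketch: scan from the starting error point (x, y) along the preset
# direction (step = +1 when the left view is the reference, -1 when the right
# view is) until a pixel whose disparity differs from d by more than T is found;
# that pixel is the boundary error point.
import numpy as np

def find_boundary_error_point(disparity: np.ndarray, x: int, y: int,
                              step: int = 1, T: float = 3.0):
    """Return the x coordinate of the boundary error point, or None if not found."""
    d = disparity[y, x]          # first disparity value of the starting error point
    w = disparity.shape[1]
    i = step                     # step S141: initial value of i is 1 or -1
    while 0 <= x + i < w:        # step S142: extract d at (x+i, y)
        if abs(d - disparity[y, x + i]) > T:
            return x + i         # step S144: boundary error point found
        i += step                # step S145: still an error pixel, keep scanning
    return None                  # reached the image border without finding it
```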
  • As shown in FIG. 2, area A is the area in which the information of the pixels in the left view and the right view is inconsistent.
  • When the images are acquired, because the acquisition angles of the left view and the right view are different, if the target in one of the views is occluded by an obstacle, the information in the left view and the right view will not be exactly the same, so that the disparity map obtained by matching the left view and the right view is distorted at the contour boundary of the target.
  • the preset direction may be determined according to whether the reference view for generating the disparity map is the left view or the right view. For example, when the disparity map is generated using the left view as the reference view, the preset direction is horizontal to right; when the disparity map is generated using the right view as the reference view, the preset direction is horizontal to the left.
  • step S160 shown in FIG. 1 may include the following steps S161 to S163.
  • Step S161 Extract disparity values corresponding to a preset number of pixels from the disparity map.
  • Step S162 Calculate the average value of the disparity values corresponding to the preset number of pixels.
  • Step S163 Replace the start error point, the boundary error point, and the disparity value of each pixel between the start error point and the boundary error point with the average value.
  • Specifically, since the coordinates of the boundary error point are (x+i, y), suppose the preset number is set to 3. Then the 3 pixels (x+i+1, y), (x+i+2, y), and (x+i+3, y) are taken; the disparity values corresponding to these 3 pixels are d3, d4, and d5, respectively, and the average value d6 of d3, d4, and d5 is obtained. The disparity value of the starting error point, the boundary error point, and each pixel between the starting error point and the boundary error point is replaced with d6 for correction. It should be understood that the preset number is not limited to 3; in actual applications, the preset number may be less than or greater than 3.
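  • A minimal sketch of steps S161 to S163 under the same assumptions (the function and parameter names are illustrative only). With the preset number 3, d6 below is the mean of d3, d4, and d5 at (x+i+1, y), (x+i+2, y), and (x+i+3, y), matching the example above.

```python
# Minimal sketch: correct the row span between the starting error point x0 and
# the boundary error point xb (both in row y) with the average disparity of the
# preset number of pixels lying just beyond the boundary error point, on its
# side away from the starting error point.
import numpy as np

def correct_error_span(disparity: np.ndarray, y: int, x0: int, xb: int,
                       preset_number: int = 3) -> None:
    step = 1 if xb >= x0 else -1
    # Pixels (xb + step), (xb + 2*step), ... lie outside the area to be corrected.
    sample_xs = [xb + k * step for k in range(1, preset_number + 1)]
    sample_xs = [x for x in sample_xs if 0 <= x < disparity.shape[1]]
    if not sample_xs:
        return                                       # nothing to sample near the border
    d6 = float(np.mean([disparity[y, x] for x in sample_xs]))   # average of d3, d4, d5
    lo, hi = min(x0, xb), max(x0, xb)
    disparity[y, lo:hi + 1] = d6                     # replace the span, endpoints included
```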
  • FIG. 5 is a schematic diagram of a disparity map correction device (which may be an electronic device) provided by an embodiment of the disclosure.
  • the disparity map correction apparatus may include: an acquisition unit 510, an extractor 520, a judgment unit 530, a detector 540, and a corrector 550.
  • the obtaining unit 510 is used for obtaining a reference view for generating a disparity map and for generating a disparity map to be corrected.
  • the extractor 520 is used to extract the contour of the target in the reference view.
  • the determining unit 530 is configured to determine whether each pixel corresponding to the contour is an initial error point according to the difference between the disparity values of each pixel corresponding to the contour and its neighboring pixels in the disparity map.
  • for any one of the pixels corresponding to the contour in the disparity map, the detector 540 is used to detect, when it is determined that the pixel is the starting error point, the position of the boundary error point located in the same row as the starting error point.
  • the corrector 550 is used to acquire the disparity values of a preset number of pixels located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point, and to correct, according to the disparity values of the preset number of pixels, the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point.
  • With the device of the embodiment of the present disclosure, the starting error point and the boundary error point in the disparity map are determined according to the contour of the target in the reference view, so as to determine the pixels that need to be corrected; then, according to the disparity values of the normal pixels close to the pixels that need to be corrected, the pixels that need to be corrected are corrected, so that the distorted disparity map is corrected and the accuracy of the formed disparity map is improved.
  • the judgment unit 530 includes: a first extraction unit 531 and a first judgment unit 532.
  • for any pixel corresponding to the contour in the disparity map, the first extraction unit 531 is configured to extract from the disparity map the first disparity value corresponding to the pixel, the second disparity value corresponding to the adjacent pixel on one side of the pixel, and the third disparity value corresponding to the adjacent pixel on the other side of the pixel. For example, the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel.
  • the first judgment unit 532 is configured to determine whether the pixel is the starting error point according to the first disparity value, the second disparity value, the third disparity value, and a preset threshold. When the difference between the second disparity value and the first disparity value is less than the preset threshold and the difference between the third disparity value and the first disparity value is less than the preset threshold, the pixel is determined to be the starting error point.
  • the detector 540 includes: a second extraction unit 541 and a second judgment unit 542.
  • the second extracting unit 541 is configured to take the initial error point as a starting point and sequentially extract the disparity values corresponding to the pixel points in the same row as the initial error point from the disparity map along a preset direction.
  • the second determining unit 542 is used to determine whether the difference between the disparity value and the first disparity value is greater than a preset threshold. If the difference between the disparity value and the first disparity value is greater than the preset threshold value, stop extracting the disparity value, and use the pixel point corresponding to the disparity value as the boundary error point.
  • the corrector 550 includes a third extraction unit 551, a calculation unit 552, and a correction unit 553.
  • the third extracting unit 551 is configured to extract the disparity value corresponding to a preset number of pixels from the disparity map.
  • the calculation unit 552 is configured to calculate the average value of the disparity values corresponding to the preset number of pixels.
  • the correction unit 553 is used to replace the disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point with the average value.
  • the disparity map correction device may further include a memory 560.
  • the memory 560 may, for example, be connected to the acquiring unit 510 and be used to store the reference view, the disparity map, the correspondence between the pixels of the reference view and the pixels of the disparity map, the coordinates of each pixel in the disparity map and its corresponding disparity value, the starting error point, the boundary error point, and other related data and computer programs.
  • each component of the disparity map correction device shown in FIG. 5 may be implemented in a hardware manner, or may be implemented in a combination of hardware and software.
  • each component of the disparity map correction device shown in FIG. 5 may be a central processing unit (CPU), an application processor (AP), a digital signal processor (DSP), a field-programmable gate array (FPGA), a microcontroller unit (MCU), an integrated circuit (IC), or an application-specific integrated circuit (ASIC) having the corresponding functions described in the embodiments of the present disclosure.
  • the various components of the disparity map correction device shown in FIG. 5 can be implemented by a combination of a processor, a memory, and a computer program.
  • the computer program is stored in the memory, and the processor reads the computer program from the memory and executes it, thereby serving as the various components of the disparity map correction device shown in FIG. 5.
  • the embodiment of the present disclosure also provides a terminal (for example, an electronic device), and the terminal may include: at least one processor and a memory.
  • the memory is used to store computer executable instructions.
  • the at least one processor executes the computer executable instructions stored in the memory, so that the terminal can execute the above-mentioned disparity map correction method.
  • the terminal may be a mobile phone, a notebook computer, a personal computer, a server, etc.
  • FIG. 6 shows a schematic diagram of the hardware structure of an electronic device for performing a disparity map correction method provided by an embodiment of the present disclosure.
  • the electronic device may include one or more processors 610 and a memory 620.
  • one processor 610 is taken as an example.
  • the processor 610 and the memory 620 may be connected through a bus or in other ways. In FIG. 6, the connection through a bus is taken as an example.
  • the memory 620 can be used to store software programs, computer-executable programs/modules (such as program instructions/modules corresponding to the disparity map correction method in the embodiment of the present disclosure), and related data (As mentioned above).
  • the processor 610 executes various functional applications and data processing herein by running software programs, instructions, and modules stored in the memory 620, that is, realizes the disparity map correction method in the above-mentioned embodiment of the invention.
  • the memory 620 may include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to any of the above methods.
  • the memory 620 may include a high-speed random access memory, and may also include a non-transitory memory, such as a magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
  • the memory 620 may optionally include a memory remotely provided with respect to the processor 610, and these remote memories may be connected to a processor running any of the above methods through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • One or more computer program modules may be stored in the memory 620, and when executed by one or more processors 610, implement the above-mentioned disparity map correction method.
  • the embodiments of the present disclosure also provide a non-transitory computer-readable storage medium, wherein computer-executable instructions are stored on the non-transitory computer-readable storage medium, and when the computer-executable instructions are executed by a processor, the aforementioned disparity map correction method is implemented.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of components is only a logical functional division; in actual implementation, there may be other division methods.
  • For example, multiple components may be combined or integrated into another system, or some features may be ignored, or some steps may not be performed.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be through some communication interfaces, indirect coupling or communication connection of the devices or units, and may be in electrical, mechanical or other forms.
  • the components described as separate components may or may not be physically separated, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the components may be selected according to actual needs to implement the method or device of the embodiments of the present disclosure.
  • various functional components in the embodiments of the present disclosure may be integrated into one processing component, or each component may exist alone physically, or two or more components may be integrated into one component.
  • if the method is implemented in the form of a computer program and sold or used as an independent product, it can be stored in a non-transitory computer-readable storage medium.
  • the technical solution of the present disclosure essentially or the part that contributes to the prior art or the part of the technical solution can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods in the various embodiments of the present disclosure.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • the disparity map correction method, the disparity map correction device, the terminal, and the non-transitory computer-readable storage medium provided by the above-mentioned embodiments of the present disclosure can at least achieve the following beneficial technical effect: the distortion produced when a disparity map is formed in the prior art is corrected.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Provided are a disparity map correction method, a disparity map correction device, a terminal, and a non-transitory computer-readable storage medium. The disparity map correction method includes: acquiring a reference view used for generating a disparity map and generating a disparity map to be corrected (S110); extracting a contour of a target in the reference view (S120); determining whether each pixel corresponding to the contour in the disparity map is a starting error point (S130); when it is determined that a pixel is the starting error point, detecting the position of a boundary error point located in the same row as the starting error point (S140); acquiring the disparity values of a preset number of pixels which are located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point (S150); and correcting the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels (S160).

Description

Disparity map correction method, device, terminal and non-transitory computer-readable storage medium
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201910533023.1, filed on June 19, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image processing, and in particular to a disparity map correction method, a disparity map correction device, a terminal, and a non-transitory computer-readable storage medium.
Background
Binocular stereo matching has always been a research hotspot of binocular vision. A binocular camera captures left and right viewpoint images of the same scene, a stereo matching algorithm is used to obtain a disparity map, and a depth map is then obtained. Depth maps have a very wide range of applications; because a depth map can record the distance between the objects in the scene and the camera, it can be used for measurement, three-dimensional reconstruction, synthesis of virtual viewpoints, and the like.
Summary
Embodiments of the present disclosure provide a disparity map correction method, a disparity map correction device, a terminal, and a non-transitory computer-readable storage medium.
A first aspect of the present disclosure provides a disparity map correction method, the method including:
acquiring a reference view used for generating a disparity map and generating a disparity map to be corrected;
extracting a contour of a target in the reference view;
determining, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour in the disparity map is a starting error point;
for any one of the pixels corresponding to the contour in the disparity map, when it is determined that the pixel is the starting error point, detecting the position of a boundary error point located in the same row as the starting error point; and
acquiring the disparity values of a preset number of pixels which are located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point, and correcting the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels.
In one embodiment, determining, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour in the disparity map is a starting error point includes:
for any pixel corresponding to the contour in the disparity map, extracting from the disparity map a first disparity value corresponding to the pixel, a second disparity value corresponding to the adjacent pixel on one side of the pixel, and a third disparity value corresponding to the adjacent pixel on the other side of the pixel, wherein the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel;
determining whether the difference between the second disparity value and the first disparity value is less than a preset threshold and whether the difference between the third disparity value and the first disparity value is less than the preset threshold; and
when the difference between the second disparity value and the first disparity value is less than the preset threshold and the difference between the third disparity value and the first disparity value is less than the preset threshold, determining that the pixel is the starting error point.
In one embodiment, detecting the position of the boundary error point located in the same row as the starting error point includes:
taking the starting error point as a starting point and, along a preset direction, sequentially extracting from the disparity map the disparity values of the pixels located in the same row as the starting error point;
for each extracted disparity value, determining whether the difference between that disparity value and the first disparity value corresponding to the starting error point is greater than a preset threshold; and
if the difference between that disparity value and the first disparity value corresponding to the starting error point is greater than the preset threshold, taking the pixel corresponding to that disparity value as the boundary error point.
In one embodiment, correcting the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels includes:
extracting from the disparity map the disparity values corresponding to the preset number of pixels;
calculating the average of the disparity values corresponding to the preset number of pixels; and
replacing the disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point with the average.
In one embodiment, the preset number is 3.
In one embodiment, the preset threshold is 3.
In one embodiment, the reference view is a left view or a right view; and
in a case where the reference view is the left view, the preset direction is the horizontal rightward direction; in a case where the reference view is the right view, the preset direction is the horizontal leftward direction.
A second aspect of the present disclosure provides a disparity map correction device, the device including an acquisition unit, an extractor, a judgment unit, a detector, and a corrector, wherein:
the acquisition unit is configured to acquire a reference view used for generating a disparity map and to generate a disparity map to be corrected;
the extractor is configured to extract a contour of a target in the reference view;
the judgment unit is configured to determine, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour is a starting error point;
for any one of the pixels corresponding to the contour in the disparity map, the detector is configured to detect, when it is determined that the pixel is the starting error point, the position of a boundary error point located in the same row as the starting error point; and
the corrector is configured to acquire the disparity values of a preset number of pixels which are located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point, and to correct the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels.
In one embodiment, the judgment unit includes a first extraction unit and a first judgment unit;
for any pixel corresponding to the contour in the disparity map, the first extraction unit is configured to extract from the disparity map a first disparity value corresponding to the pixel, a second disparity value corresponding to the adjacent pixel on one side of the pixel, and a third disparity value corresponding to the adjacent pixel on the other side of the pixel, wherein the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel; and
the first judgment unit is configured to determine whether the pixel is the starting error point according to the first disparity value, the second disparity value, the third disparity value, and a preset threshold; when the difference between the second disparity value and the first disparity value is less than the preset threshold and the difference between the third disparity value and the first disparity value is less than the preset threshold, the pixel is determined to be the starting error point.
In one embodiment, the detector includes a second extraction unit and a second judgment unit;
the second extraction unit is configured to take the starting error point as a starting point and, along a preset direction, sequentially extract from the disparity map the disparity values corresponding to the pixels located in the same row as the starting error point; and
for each extracted disparity value, the second judgment unit is configured to determine whether the difference between that disparity value and the first disparity value is greater than a preset threshold; if the difference is greater than the preset threshold, extraction of disparity values is stopped, and the pixel corresponding to that disparity value is taken as the boundary error point.
In one embodiment, the corrector includes a third extraction unit, a calculation unit, and a correction unit;
the third extraction unit is configured to extract from the disparity map the disparity values corresponding to the preset number of pixels;
the calculation unit is configured to calculate the average of the disparity values corresponding to the preset number of pixels; and
the correction unit is configured to replace the disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point with the average.
In one embodiment, the preset number is 3.
In one embodiment, the preset threshold is 3.
In one embodiment, the reference view is a left view or a right view; and
in a case where the reference view is the left view, the preset direction is the horizontal rightward direction; in a case where the reference view is the right view, the preset direction is the horizontal leftward direction.
A third aspect of the present disclosure provides a terminal, including at least one processor and a memory, wherein:
the memory stores computer-executable instructions; and
the at least one processor executes the computer-executable instructions stored in the memory, so that the terminal performs the disparity map correction method according to any one of the embodiments of the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium having computer-executable instructions stored thereon; when the computer-executable instructions are executed by a processor, the disparity map correction method according to any one of the embodiments of the first aspect of the present disclosure is implemented.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute a part of the specification; together with the following detailed description, they serve to explain the present disclosure, but do not limit the present disclosure. In the drawings:
FIG. 1 is a flowchart of a disparity map correction method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of the positions of error points provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of determining whether a pixel is a starting error point provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of detecting a boundary error point provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a disparity map correction device provided by an embodiment of the present disclosure; and
FIG. 6 is a schematic diagram of the hardware structure of an electronic device for performing a disparity map correction method provided by an embodiment of the present disclosure.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to illustrate and explain the present disclosure, and are not intended to limit the present disclosure.
The inventors of the present inventive concept found that, in the process of obtaining a disparity map from left and right viewpoint images, inconsistency of the information at the contour of the target in the left and right viewpoint images causes the disparity map to be distorted, which seriously affects binocular vision applications.
An embodiment of the present disclosure provides a disparity map correction method. FIG. 1 is a flowchart of the disparity map correction method provided by the embodiment of the present disclosure. As shown in FIG. 1, the disparity map correction method may include the following steps S110 to S160.
Step S110: acquire a reference view used for generating a disparity map and generate a disparity map to be corrected.
In this step, a left view and a right view are acquired by a binocular vision sensor. In the embodiment of the present disclosure, the left view is used as the reference view and the right view is used as the auxiliary view; a disparity map corresponding to the left view is generated, and then the disparity map and the left view are extracted. Of course, in the process of generating the disparity map, the right view may also be used as the reference view, depending on actual needs.
Step S120: extract the contour of the target in the reference view.
In this step, based on a pixel-level image segmentation algorithm, the left view is segmented so that the objects in the left view are easy to distinguish; the contour of the target is extracted from the segmented left view, and the coordinates of the pixels on the contour of the target are saved. It should be noted that, when the left view is segmented, the segmentation may also be performed based on pixel-level processing of image morphology.
Step S130: determine, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour in the disparity map is a starting error point.
In this step, according to the contour of the target, the disparity value of each pixel corresponding to the contour, as well as the disparity values of the pixels adjacent to that pixel, can be extracted from the corresponding positions in the disparity map. For any pixel corresponding to the contour in the disparity map, the pixels adjacent to it refer in particular to the pixels that are adjacent to it and located in the same row. For the same object, the disparity values of the pixels on the object are similar. Therefore, a preset threshold can be set, and the difference between the disparity value of a pixel corresponding to the contour in the disparity map and the disparity value of an adjacent pixel can be calculated; when the difference is less than the preset threshold, the two pixels can be considered to belong to the same object, and when the difference is greater than the preset threshold, they can be considered to belong to different objects. Since the embodiment of the present disclosure extracts disparity values from the disparity map based on the contour of the target in the reference view, if the disparity map were not distorted, the two pixels on the two sides of any pixel on the contour should correspond to different objects; when the above calculation indicates that a certain pixel corresponding to the contour belongs to the same object as the adjacent pixels on both of its sides, that pixel is determined to be the starting error point.
Step S140: for any one pixel (for example, any one of the pixels corresponding to the contour in the disparity map), when it is determined that the pixel is the starting error point, detect the position of the boundary error point located in the same row as the starting error point.
FIG. 2 is a schematic diagram of the positions of error points provided by an embodiment of the present disclosure. As shown in FIG. 2, in this step, the boundary error point may be a pixel that is located in the same row as the starting error point and corresponds to the contour in the disparity map. After the starting error point is determined, the boundary of the error must still be found in order to determine the area to be corrected (for example, area A in FIG. 2). Therefore, on the row where the starting error point is located, taking the starting error point as the starting point and moving along the preset direction, each pixel is checked point by point to see whether it is a normal pixel, that is, whether the pixel and the starting error point correspond to different objects; when such a normal pixel is found, it is taken as the boundary error point.
Step S150: acquire the disparity values of a preset number of pixels which are located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point.
In this step, when the boundary error point is found, the area to be corrected, which is composed of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point, is obtained. Along the direction from the starting error point to the boundary error point, the disparity values of the preset number of pixels outside the area to be corrected are extracted and used in the correction.
Step S160: correct the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels.
In this step, after the area to be corrected has been determined, the disparity values of the preset number of pixels extracted outside the area to be corrected, along the direction from the starting error point to the boundary error point, can be considered to be the disparity values that the pixels in the area to be corrected should have. Therefore, the disparity value of each pixel in the area to be corrected is replaced according to these disparity values so as to perform the correction.
When images are captured, the acquisition angles of the left view and the right view are different; when the target in one of the views is occluded by an obstacle, the information contained in the left view and the right view will not be exactly the same. In the prior art, the disparity map obtained by matching the left view and the right view is therefore distorted at the boundary of the contour of the target. In the disparity map correction method according to the embodiment of the present disclosure, the starting error point and the boundary error point in the disparity map are determined according to the contour of the target in the reference view, so as to determine the pixels that need to be corrected; then, according to the disparity values of the normal pixels close to the pixels that need to be corrected, the pixels that need to be corrected are corrected, so that the distorted disparity map is corrected and the accuracy of the formed disparity map is improved.
FIG. 3 is a flowchart of determining whether a pixel is a starting error point provided by an embodiment of the present disclosure. As shown in FIG. 3, determining whether any pixel corresponding to the contour in the disparity map is the starting error point may include the following steps S131 to S133.
Step S131: extract from the disparity map the first disparity value corresponding to the pixel, the second disparity value corresponding to the adjacent pixel on one side of the pixel, and the third disparity value corresponding to the adjacent pixel on the other side of the pixel. For example, the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel.
In this step, for any pixel corresponding to the contour in the disparity map, according to the coordinates (x, y) of the pixel, the disparity value d of the pixel is extracted from the disparity map, together with the second disparity value d1 corresponding to the adjacent pixel on one side of the pixel and the third disparity value d2 corresponding to the adjacent pixel on the other side of the pixel. For example, the coordinates of the two adjacent pixels are (x-1, y) and (x+1, y), respectively.
Step S132: determine whether the difference between the second disparity value and the first disparity value is less than a preset threshold and whether the difference between the third disparity value and the first disparity value is less than the preset threshold.
In this step, it is determined whether d, d1, and d2 satisfy |d - d1| < T and |d - d2| < T; if the result of the determination is yes, step S133 is performed; if the result is no, the pixel is considered to be a normal pixel. In one embodiment, the preset threshold T is a small value, and is generally set to 3 based on experience.
Step S133: determine that the pixel is the starting error point.
In an exemplary embodiment, detecting the position of the boundary error point located in the same row as the starting error point in step S140 includes: taking the starting error point as the starting point and, along the preset direction, sequentially extracting from the disparity map the disparity values of the pixels located in the same row as the starting error point, and determining whether the pixel corresponding to each extracted disparity value is the boundary error point. FIG. 4 is a flowchart of detecting a boundary error point provided by an embodiment of the present disclosure. As shown in FIG. 4, for example, during detection the coordinates of the starting error point are denoted as (x, y), and its corresponding first disparity value is d. For example, the abscissa of each pixel may represent the number of the column in which the pixel is located, and the ordinate may represent the number of the row in which the pixel is located. In addition, i is a non-zero integer.
As shown in FIG. 4, step S140 shown in FIG. 1 may include the following steps S141 to S145.
Step S141: set the initial value of i to 1 or -1.
Step S142: extract from the disparity map the disparity value of the pixel with coordinates (x+i, y); this disparity value is denoted as d x+i,y . That is, extract the disparity value of a pixel located in the same row as the starting error point.
Step S143: determine whether the difference between the disparity value d x+i,y and the first disparity value d corresponding to the starting error point is greater than a preset threshold.
In this step, suppose the disparity value of the pixel with coordinates (x+i, y) is d x+i,y , where i is an integer and the initial value of i is 1 or -1. On the row where the starting error point is located, taking the starting error point as the starting point, disparity values are extracted pixel by pixel toward the left side and the right side of the starting error point, respectively, and the determination is performed after each extraction. Of course, the embodiment of the present disclosure may also perform the extraction along a single direction, for example, along the direction where the initial value of i is only 1 or along the direction where the initial value of i is only -1, which is specifically determined according to actual needs.
In this step, it is determined whether d and d x+i,y satisfy |d - d x+i,y | > T. If yes, step S144 is performed; if no, the pixel corresponding to the disparity value d x+i,y is determined to be an error pixel, and in step S145 the value of i is increased or decreased by 1 and the process returns to step S142. This process is repeated until the boundary error point is found.
Step S144: take the pixel corresponding to that disparity value as the boundary error point. For example, the coordinates of the boundary error point are (x+i, y).
In step S140, the initial value of i, and whether the value of i is increased or decreased by 1 after it is determined in step S143 that the pixel corresponding to the disparity value d x+i,y is an error pixel, may be determined according to the preset direction. For example, the abscissa of a pixel gradually increases from the left side of the image to the right side. If the preset direction is horizontally to the right, the initial value of i is 1, and after it is determined in step S143 that the pixel corresponding to the disparity value d x+i,y is an error pixel, the value of i is increased by 1; if the preset direction is horizontally to the left, the initial value of i is -1, and after it is determined in step S143 that the pixel corresponding to the disparity value d x+i,y is an error pixel, the value of i is decreased by 1.
As shown in FIG. 2, area A is the area in which the information of the pixels in the left view and the right view is inconsistent. When images are captured, because the acquisition angles of the left view and the right view are different, if the target in one of the views is occluded by an obstacle, the information in the left view and the right view will not be exactly the same, so that the disparity map obtained by matching the left view and the right view is distorted at the contour boundary of the target. For example, when the disparity map is generated with the left view as the reference view, if a certain pixel Pa (not shown in the figure) located on the left side of the target in the left view and the pixel Pb (not shown in the figure) corresponding to the pixel Pa in the right view have inconsistent information, then, on the row where the pixel Pa is located in the left view, a pixel Pc (not shown in the figure) with a relatively high similarity to the pixel Pa is searched for by traversing toward the left side of the pixel Pa, and the disparity value of the pixel Pc is used as the disparity value of the pixel Pa. After each pixel has gone through the above process, the distorted disparity map shown in FIG. 2 is obtained.
Therefore, in the embodiment of the present disclosure, the preset direction may be determined according to whether the reference view used for generating the disparity map is the left view or the right view. For example, when the disparity map is generated with the left view as the reference view, the preset direction is the horizontal rightward direction; when the disparity map is generated with the right view as the reference view, the preset direction is the horizontal leftward direction.
In an exemplary embodiment, step S160 shown in FIG. 1 may include the following steps S161 to S163.
Step S161: extract from the disparity map the disparity values corresponding to the preset number of pixels.
Step S162: calculate the average of the disparity values corresponding to the preset number of pixels.
Step S163: replace the disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point with the average.
Specifically, since the coordinates of the boundary error point are (x+i, y), suppose the preset number is 3; then the 3 pixels (x+i+1, y), (x+i+2, y), and (x+i+3, y) are taken, the disparity values corresponding to these 3 pixels are d3, d4, and d5, respectively, and the average value d6 of d3, d4, and d5 is obtained. The disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point is replaced with d6 so as to perform the correction. It should be understood that the preset number is not limited to 3; in practical applications, the preset number may be less than or greater than 3.
Based on the same inventive concept, an embodiment of the present disclosure further provides a disparity map correction device. FIG. 5 is a schematic diagram of the disparity map correction device (which may be an electronic device) provided by the embodiment of the present disclosure. As shown in FIG. 5, the disparity map correction device may include an acquisition unit 510, an extractor 520, a judgment unit 530, a detector 540, and a corrector 550.
The acquisition unit 510 is configured to acquire a reference view used for generating a disparity map and to generate a disparity map to be corrected.
The extractor 520 is configured to extract the contour of the target in the reference view.
The judgment unit 530 is configured to determine, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour is a starting error point.
For any one pixel (for example, any one of the pixels corresponding to the contour in the disparity map), the detector 540 is configured to detect, when it is determined that the pixel is the starting error point, the position of the boundary error point located in the same row as the starting error point.
The corrector 550 is configured to acquire the disparity values of a preset number of pixels which are located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point, and to correct the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels.
With the device of the embodiment of the present disclosure, the starting error point and the boundary error point in the disparity map are determined according to the contour of the target in the reference view, so as to determine the pixels that need to be corrected; then, according to the disparity values of the normal pixels close to the pixels that need to be corrected, the pixels that need to be corrected are corrected, so that the distorted disparity map is corrected and the accuracy of the formed disparity map is improved.
In an exemplary embodiment, the judgment unit 530 includes a first extraction unit 531 and a first judgment unit 532.
For any pixel corresponding to the contour in the disparity map, the first extraction unit 531 is configured to extract from the disparity map the first disparity value corresponding to the pixel, the second disparity value corresponding to the adjacent pixel on one side of the pixel, and the third disparity value corresponding to the adjacent pixel on the other side of the pixel. For example, the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel.
The first judgment unit 532 is configured to determine whether the pixel is the starting error point according to the first disparity value, the second disparity value, the third disparity value, and a preset threshold. When the difference between the second disparity value and the first disparity value is less than the preset threshold and the difference between the third disparity value and the first disparity value is less than the preset threshold, the pixel is determined to be the starting error point.
In an exemplary embodiment, the detector 540 includes a second extraction unit 541 and a second judgment unit 542.
The second extraction unit 541 is configured to take the starting error point as a starting point and, along a preset direction, sequentially extract from the disparity map the disparity values corresponding to the pixels located in the same row as the starting error point.
For each extracted disparity value, the second judgment unit 542 is configured to determine whether the difference between that disparity value and the first disparity value is greater than a preset threshold. If the difference is greater than the preset threshold, extraction of disparity values is stopped, and the pixel corresponding to that disparity value is taken as the boundary error point.
In an exemplary embodiment, the corrector 550 includes a third extraction unit 551, a calculation unit 552, and a correction unit 553.
The third extraction unit 551 is configured to extract from the disparity map the disparity values corresponding to the preset number of pixels.
The calculation unit 552 is configured to calculate the average of the disparity values corresponding to the preset number of pixels.
The correction unit 553 is configured to replace the disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point with the average.
In addition, the disparity map correction device may further include a memory 560. The memory 560 may, for example, be connected to the acquisition unit 510 and be used to store the reference view, the disparity map, the correspondence between the pixels of the reference view and the pixels of the disparity map, the coordinates of each pixel in the disparity map and its corresponding disparity value, the starting error point, the boundary error point, and other related data and computer programs.
It should be understood that the components of the disparity map correction device shown in FIG. 5 may be implemented in hardware, or in a combination of hardware and software. For example, each component of the disparity map correction device shown in FIG. 5 may be a central processing unit (CPU), an application processor (AP), a digital signal processor (DSP), a field-programmable gate array (FPGA), a microcontroller unit (MCU), an integrated circuit (IC), or an application-specific integrated circuit (ASIC) having the corresponding functions described in the embodiments of the present disclosure. For example, the components of the disparity map correction device shown in FIG. 5 may be implemented by a combination of a processor, a memory, and a computer program, where the computer program is stored in the memory, and the processor reads the computer program from the memory and executes it, thereby serving as the components of the disparity map correction device shown in FIG. 5.
An embodiment of the present disclosure further provides a terminal (for example, an electronic device), and the terminal may include at least one processor and a memory.
The memory is configured to store computer-executable instructions.
The at least one processor executes the computer-executable instructions stored in the memory, so that the terminal can perform the disparity map correction method described above. For example, the terminal may be a mobile phone, a notebook computer, a personal computer, a server, or the like.
FIG. 6 is a schematic diagram of the hardware structure of an electronic device for performing the disparity map correction method provided by an embodiment of the present disclosure. As shown in FIG. 6, the electronic device may include one or more processors 610 and a memory 620; one processor 610 is taken as an example in FIG. 6.
The processor 610 and the memory 620 may be connected through a bus or in other ways; connection through a bus is taken as an example in FIG. 6.
As a non-transitory computer-readable storage medium, the memory 620 can be used to store software programs, computer-executable programs/modules (such as the program instructions/modules corresponding to the disparity map correction method in the embodiments of the present disclosure), and related data (as described above). The processor 610 performs various functional applications and data processing by running the software programs, instructions, and modules stored in the memory 620, that is, implements the disparity map correction method in the embodiments described above.
The memory 620 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of any of the above methods, and the like. In addition, the memory 620 may include a high-speed random access memory, and may also include a non-transitory memory, such as a magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices. In an exemplary embodiment, the memory 620 may optionally include memories provided remotely with respect to the processor 610, and these remote memories may be connected through a network to a processor running any of the above methods. Examples of such networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
One or more computer program modules may be stored in the memory 620 and, when executed by the one or more processors 610, implement the disparity map correction method described above.
An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium, wherein computer-executable instructions are stored on the non-transitory computer-readable storage medium, and when the computer-executable instructions are executed by a processor, the disparity map correction method described above is implemented.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the device and its components described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here. In other words, the embodiments of the present disclosure may be combined with one another where no obvious conflict exists.
In the embodiments of the present disclosure, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of components is only a logical functional division, and there may be other division methods in actual implementation; for another example, multiple components may be combined or integrated into another system, or some features may be ignored, or some steps may not be performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, devices or units, and may be electrical, mechanical, or in other forms.
The components described as separate components may or may not be physically separated; that is, they may be located in one place or distributed over multiple network units. Some or all of the components may be selected according to actual needs to implement the method or device of the embodiments of the present disclosure.
In addition, the functional components in the embodiments of the present disclosure may be integrated into one processing component, or each component may exist alone physically, or two or more components may be integrated into one component.
If the method is implemented in the form of a computer program and sold or used as an independent product, it may be stored in a non-transitory computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure in essence, or the part thereof that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods in the embodiments of the present disclosure. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The disparity map correction method, the disparity map correction device, the terminal, and the non-transitory computer-readable storage medium provided by the above embodiments of the present disclosure can at least achieve the following beneficial technical effect: the distortion produced when a disparity map is formed in the prior art is corrected.
Finally, it should be noted that the above embodiments are only specific implementations of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features within the technical scope disclosed in the present disclosure; these modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the technical scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope defined by the claims.

Claims (16)

  1. A disparity map correction method, the method comprising:
    acquiring a reference view used for generating a disparity map and generating a disparity map to be corrected;
    extracting a contour of a target in the reference view;
    determining, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour in the disparity map is a starting error point;
    for any one of the pixels corresponding to the contour in the disparity map, when it is determined that the pixel is the starting error point, detecting the position of a boundary error point located in the same row as the starting error point; and
    acquiring the disparity values of a preset number of pixels which are located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point, and correcting the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels.
  2. The disparity map correction method according to claim 1, wherein determining, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour in the disparity map is a starting error point comprises:
    for any pixel corresponding to the contour in the disparity map, extracting from the disparity map a first disparity value corresponding to the pixel, a second disparity value corresponding to the adjacent pixel on one side of the pixel, and a third disparity value corresponding to the adjacent pixel on the other side of the pixel, wherein the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel;
    determining whether the difference between the second disparity value and the first disparity value is less than a preset threshold and whether the difference between the third disparity value and the first disparity value is less than the preset threshold; and
    when the difference between the second disparity value and the first disparity value is less than the preset threshold and the difference between the third disparity value and the first disparity value is less than the preset threshold, determining that the pixel is the starting error point.
  3. The disparity map correction method according to claim 1 or 2, wherein detecting the position of the boundary error point located in the same row as the starting error point comprises:
    taking the starting error point as a starting point and, along a preset direction, sequentially extracting from the disparity map the disparity values of the pixels located in the same row as the starting error point;
    for each extracted disparity value, determining whether the difference between that disparity value and the first disparity value corresponding to the starting error point is greater than a preset threshold; and
    if the difference between that disparity value and the first disparity value corresponding to the starting error point is greater than the preset threshold, taking the pixel corresponding to that disparity value as the boundary error point.
  4. The disparity map correction method according to any one of claims 1 to 3, wherein correcting the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels comprises:
    extracting from the disparity map the disparity values corresponding to the preset number of pixels;
    calculating the average of the disparity values corresponding to the preset number of pixels; and
    replacing the disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point with the average.
  5. The disparity map correction method according to any one of claims 1 to 4, wherein the preset number is 3.
  6. The disparity map correction method according to claim 2 or 3, wherein the preset threshold is 3.
  7. The disparity map correction method according to claim 3, wherein the reference view is a left view or a right view; and
    in a case where the reference view is the left view, the preset direction is the horizontal rightward direction; in a case where the reference view is the right view, the preset direction is the horizontal leftward direction.
  8. A disparity map correction device, the device comprising an acquisition unit, an extractor, a judgment unit, a detector, and a corrector, wherein:
    the acquisition unit is configured to acquire a reference view used for generating a disparity map and to generate a disparity map to be corrected;
    the extractor is configured to extract a contour of a target in the reference view;
    the judgment unit is configured to determine, according to the difference between the disparity value of each pixel corresponding to the contour in the disparity map and the disparity values of its adjacent pixels, whether each pixel corresponding to the contour is a starting error point;
    for any one of the pixels corresponding to the contour in the disparity map, the detector is configured to detect, when it is determined that the pixel is the starting error point, the position of a boundary error point located in the same row as the starting error point; and
    the corrector is configured to acquire the disparity values of a preset number of pixels which are located in the same row as the boundary error point and on the side of the boundary error point away from the starting error point, and to correct the disparity values of the starting error point, the boundary error point, and the pixels located between the starting error point and the boundary error point according to the disparity values of the preset number of pixels.
  9. The disparity map correction device according to claim 8, wherein the judgment unit comprises a first extraction unit and a first judgment unit;
    for any pixel corresponding to the contour in the disparity map, the first extraction unit is configured to extract from the disparity map a first disparity value corresponding to the pixel, a second disparity value corresponding to the adjacent pixel on one side of the pixel, and a third disparity value corresponding to the adjacent pixel on the other side of the pixel, wherein the adjacent pixel on one side of the pixel and the adjacent pixel on the other side of the pixel are both located in the same row as the pixel; and
    the first judgment unit is configured to determine whether the pixel is the starting error point according to the first disparity value, the second disparity value, the third disparity value, and a preset threshold; when the difference between the second disparity value and the first disparity value is less than the preset threshold and the difference between the third disparity value and the first disparity value is less than the preset threshold, the pixel is determined to be the starting error point.
  10. The disparity map correction device according to claim 8 or 9, wherein the detector comprises a second extraction unit and a second judgment unit;
    the second extraction unit is configured to take the starting error point as a starting point and, along a preset direction, sequentially extract from the disparity map the disparity values corresponding to the pixels located in the same row as the starting error point; and
    for each extracted disparity value, the second judgment unit is configured to determine whether the difference between that disparity value and the first disparity value is greater than a preset threshold; if the difference is greater than the preset threshold, extraction of disparity values is stopped, and the pixel corresponding to that disparity value is taken as the boundary error point.
  11. The disparity map correction device according to any one of claims 8 to 10, wherein the corrector comprises a third extraction unit, a calculation unit, and a correction unit;
    the third extraction unit is configured to extract from the disparity map the disparity values corresponding to the preset number of pixels;
    the calculation unit is configured to calculate the average of the disparity values corresponding to the preset number of pixels; and
    the correction unit is configured to replace the disparity value of the starting error point, the boundary error point, and each pixel located between the starting error point and the boundary error point with the average.
  12. The disparity map correction device according to any one of claims 8 to 11, wherein the preset number is 3.
  13. The disparity map correction device according to claim 9 or 10, wherein the preset threshold is 3.
  14. The disparity map correction device according to claim 10, wherein the reference view is a left view or a right view; and
    in a case where the reference view is the left view, the preset direction is the horizontal rightward direction; in a case where the reference view is the right view, the preset direction is the horizontal leftward direction.
  15. A terminal, comprising at least one processor and a memory, wherein:
    the memory stores computer-executable instructions; and
    the at least one processor executes the computer-executable instructions stored in the memory, so that the terminal performs the disparity map correction method according to any one of claims 1 to 7.
  16. A non-transitory computer-readable storage medium having computer-executable instructions stored thereon, wherein, when the computer-executable instructions are executed by a processor, the disparity map correction method according to any one of claims 1 to 7 is implemented.
PCT/CN2020/096963 2019-06-19 2020-06-19 视差图校正的方法、装置、终端及非暂时性计算机可读存储介质 WO2020253805A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910533023.1 2019-06-19
CN201910533023.1A CN112116660B (zh) 2019-06-19 2019-06-19 视差图校正方法、装置、终端及计算机可读介质

Publications (1)

Publication Number Publication Date
WO2020253805A1 true WO2020253805A1 (zh) 2020-12-24

Family

ID=73796654

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096963 WO2020253805A1 (zh) 2019-06-19 2020-06-19 视差图校正的方法、装置、终端及非暂时性计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN112116660B (zh)
WO (1) WO2020253805A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114440775B (zh) * 2021-12-29 2024-08-20 全芯智造技术有限公司 特征尺寸的偏移误差计算方法及装置、存储介质、终端

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252701A (zh) * 2013-06-28 2014-12-31 株式会社理光 校正视差图的方法和系统
CN104915927A (zh) * 2014-03-11 2015-09-16 株式会社理光 视差图像优化方法及装置
CN105023263A (zh) * 2014-04-22 2015-11-04 南京理工大学 一种基于区域生长的遮挡检测和视差校正的方法
CN105631887A (zh) * 2016-01-18 2016-06-01 武汉理工大学 基于自适应支持权重匹配算法的两步视差改良方法及系统
US20170163960A1 (en) * 2014-06-12 2017-06-08 Toyota Jidosha Kabushiki Kaisha Disparity image generating device, disparity image generating method, and image
CN108401460A (zh) * 2017-09-29 2018-08-14 深圳市大疆创新科技有限公司 生成视差图的方法、系统、存储介质和计算机程序产品
CN109215044A (zh) * 2017-06-30 2019-01-15 京东方科技集团股份有限公司 图像处理方法和系统、存储介质和移动系统
CN109859253A (zh) * 2018-12-17 2019-06-07 深圳市道通智能航空技术有限公司 一种立体匹配方法、装置和电子设备

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180016823A (ko) * 2016-08-08 2018-02-20 한국전자통신연구원 영상 보정 장치 및 방법
CN107909036B (zh) * 2017-11-16 2020-06-23 海信集团有限公司 一种基于视差图的道路检测方法及装置

Also Published As

Publication number Publication date
CN112116660B (zh) 2024-03-29
CN112116660A (zh) 2020-12-22

Similar Documents

Publication Publication Date Title
CN105374019B (zh) 一种多深度图融合方法及装置
CN111066065B (zh) 用于混合深度正则化的系统和方法
US9070042B2 (en) Image processing apparatus, image processing method, and program thereof
US20180300937A1 (en) System and a method of restoring an occluded background region
WO2019102442A1 (en) Systems and methods for 3d facial modeling
CN107316326B (zh) 应用于双目立体视觉的基于边的视差图计算方法和装置
CN107392958A (zh) 一种基于双目立体摄像机确定物体体积的方法及装置
CN102523464A (zh) 一种双目立体视频的深度图像估计方法
WO2020119467A1 (zh) 高精度稠密深度图像的生成方法和装置
CN111160232B (zh) 正面人脸重建方法、装置及系统
TWI553591B (zh) 深度影像處理方法及深度影像處理系統
CN111882655A (zh) 三维重建的方法、装置、系统、计算机设备和存储介质
WO2022142139A1 (zh) 投影面选取和投影图像校正方法、装置、投影仪及介质
CN112802081B (zh) 一种深度检测方法、装置、电子设备及存储介质
CN116029996A (zh) 立体匹配的方法、装置和电子设备
CN106558038B (zh) 一种水天线检测方法及装置
CN107155100B (zh) 一种基于图像的立体匹配方法及装置
WO2020253805A1 (zh) 视差图校正的方法、装置、终端及非暂时性计算机可读存储介质
CN108805841B (zh) 一种基于彩色图引导的深度图恢复及视点合成优化方法
CN110800020B (zh) 一种图像信息获取方法、图像处理设备及计算机存储介质
CN111833441A (zh) 一种基于多相机系统的人脸三维重建方法和装置
CN111383185A (zh) 一种基于稠密视差图的孔洞填充方法及车载设备
CN112053434B (zh) 视差图的生成方法、三维重建方法及相关装置
CN112233164B (zh) 一种视差图错误点识别与校正方法
CN111630569A (zh) 双目匹配的方法、视觉成像装置及具有存储功能的装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20825661

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20825661

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.07.2022)
