WO2021195940A1 - An image processing method and movable platform (一种图像处理方法及可移动平台) - Google Patents


Info

Publication number: WO2021195940A1
Authority: WO (WIPO, PCT)
Application number: PCT/CN2020/082360
Prior art keywords: area, target, disparity, disparity map, initial
Other languages: English (en), French (fr)
Inventors: 刘洁 (Liu Jie), 周游 (Zhou You), 杨振飞 (Yang Zhenfei)
Original assignee: 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2020/082360 (WO2021195940A1/zh)
Publication of WO2021195940A1

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 3/00 Geometric image transformations in the plane of the image › G06T 3/60 Rotation of whole images or parts thereof
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration › G06T 7/33 Image registration using feature-based methods
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/50 Depth or shape recovery › G06T 7/55 Depth or shape recovery from multiple images
    • H ELECTRICITY › H04 ELECTRIC COMMUNICATION TECHNIQUE › H04N PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • the invention relates to the technical field of image processing, in particular to an image processing method and a movable platform.
  • Computer vision relies on imaging systems instead of visual organs as input.
  • the most commonly used imaging system is a camera.
  • the imaging system may be a basic vision system composed of two cameras, called stereo vision; the two cameras can capture two images at the same time, which are used to generate a depth map.
  • the imaging system may also be a single camera: two images taken by the single camera at a preset time interval (for example, at two successive working moments) can likewise generate a depth map.
  • in order to generate a depth map, the two images described above are usually obtained first, and the initial disparity map is then obtained based on the two images, for example through the semi-global matching (SGM) algorithm; depth information is determined from the initial disparity map, and the depth map is obtained according to the depth information.
  • SGM: semi-global matching.
  • the obtained initial disparity map usually contains a lot of noise and erroneous observations. For example, the sky area in the image should be black, indicating that there is no observation at infinity, but in the initial disparity map actually obtained, the sky area appears to contain observations.
  • the initial disparity map is therefore usually filtered further, for example with the commonly used Speckle Filter algorithm.
  • after such filtering, the initial disparity map becomes smoother, but some small objects (such as wires and branches) may also be treated as noise and filtered out. It can be seen that this filtering method handles detail poorly: it cannot identify small objects, and local useful information is lost.
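The speckle-filtering behavior described above can be sketched as follows. This is a minimal illustrative implementation (a plain flood fill over 4-connected regions of similar disparity, with small regions invalidated), not the exact filter used by the patent; the thresholds are illustrative:

```python
import numpy as np
from collections import deque

def speckle_filter(disp, max_diff=1.0, min_region_size=10, invalid=0.0):
    """Remove small connected regions ("speckles") from a disparity map.

    Pixels are grouped into 4-connected components whose neighboring
    disparities differ by at most max_diff; components smaller than
    min_region_size are set to the invalid value.
    """
    h, w = disp.shape
    visited = np.zeros((h, w), dtype=bool)
    out = disp.copy()
    for sy in range(h):
        for sx in range(w):
            if visited[sy, sx] or disp[sy, sx] == invalid:
                continue
            # Flood-fill one connected component of similar disparity.
            comp = [(sy, sx)]
            visited[sy, sx] = True
            q = deque(comp)
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                            and disp[ny, nx] != invalid
                            and abs(disp[ny, nx] - disp[y, x]) <= max_diff):
                        visited[ny, nx] = True
                        comp.append((ny, nx))
                        q.append((ny, nx))
            if len(comp) < min_region_size:
                # A thin wire or branch would be erased here along with noise.
                for y, x in comp:
                    out[y, x] = invalid
    return out
```

This illustrates the drawback the text describes: any small valid region, such as a wire only a few pixels wide, falls below the size threshold and is discarded together with the noise.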
  • the embodiment of the invention discloses an image processing method, device, and movable platform, which can effectively identify small objects while filtering out the noise in the disparity map, thereby retaining the observation of small obstacles in the disparity map and avoiding the loss of local useful information.
  • the first aspect of the embodiments of the present invention discloses an image processing method, which is applied to a movable platform, wherein the movable platform includes a photographing device, and the method includes:
  • according to the second initial disparity map, it is determined whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • the second aspect of the embodiments of the present invention discloses a movable platform, including: a processor, a photographing device, and a memory, wherein:
  • the memory is used to store program instructions
  • according to the second initial disparity map, it is determined whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • an image processing device which is applied to a movable platform, wherein the movable platform includes a photographing device, and the image processing device includes:
  • An acquisition module for acquiring an image output by the photographing device
  • the acquiring module is further configured to acquire a first initial disparity map and a second initial disparity map according to the image;
  • a determining module configured to determine the filtered target disparity area in the first initial disparity map after filtering the first initial disparity map
  • the filling module is configured to determine whether to fill the target disparity area into the first initial disparity map after the filtering process according to the second initial disparity map.
  • a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the image processing method described in the first aspect is implemented.
  • the embodiment of the present invention can acquire the image output by the camera of the movable platform, acquire the first initial disparity map and the second initial disparity map according to the images, determine the filtered-out target disparity area in the first initial disparity map after filtering it, and determine, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map, that is, whether to retain an area that should not have been filtered out. In this way, while the noise in the disparity map is filtered out, small objects can be effectively identified, the observation of small obstacles in the disparity map is retained, and local useful information is not lost.
  • FIG. 1 is a schematic flowchart of an image processing method disclosed in an embodiment of the present invention
  • Fig. 2a is a schematic diagram of filtering a disparity map according to an embodiment of the present invention
  • Fig. 2b is a schematic diagram of determining projection pixels according to an embodiment of the present invention.
  • Fig. 2c is a schematic flowchart of another image processing method disclosed in an embodiment of the present invention.
  • Fig. 3 is a schematic structural diagram of an image processing device disclosed in an embodiment of the present invention.
  • Fig. 4 is a schematic structural diagram of a movable platform disclosed in an embodiment of the present invention.
  • the movable platform in the embodiment of the present invention may include an aerial photography aircraft or another vehicle with a single camera or multiple cameras, such as an unmanned vehicle, a drone, VR/AR glasses, a dual-camera mobile phone, or a smart car with a vision system, which is not limited in the embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • the image processing method described in this embodiment can be applied to a movable platform that includes a photographing device, and the image processing method includes the following steps:
  • the photographing device may include a dual camera or a single camera.
  • the first initial disparity map and the second initial disparity map may be acquired according to the images.
  • the second initial disparity map may be determined based on two images taken before the two images used to determine the first initial disparity map, or based on two images taken after them; there is no specific limitation here.
  • the second initial disparity map may be one frame or multiple frames.
  • when there are multiple frames, at least one frame of the second initial disparity map may be determined based on two images taken before the two images used to determine the first initial disparity map, and at least one frame may be determined based on two images taken after them.
  • the movable platform can use dual cameras to collect environmental images. Each time a photo is taken, two images are obtained: one image taken by the left-eye camera of the dual camera and another image taken by the right-eye camera.
  • the mobile platform can use these two images to calculate the initial disparity map of the current shooting environment.
  • the movable platform can use the camera to take multiple pictures: two images are obtained at time t0, and the first initial disparity map is generated based on the two images at time t0; two images are obtained at time t1, and the second initial disparity map is generated based on the two images at time t1.
  • the following takes the second initial disparity map as the reference disparity map of the first initial disparity map as an example.
  • alternatively, the first initial disparity map may also be used as the reference disparity map of the second initial disparity map.
  • the movable platform may first obtain the two images, and then use the SGM algorithm to process the two images to obtain the initial disparity map.
  • the image obtained by any one of the cameras can be used as the reference.
  • the image obtained by the left-eye camera can be used as the reference, or the image obtained by the right-eye camera can be used as the reference, which is not limited in the embodiment of the present invention.
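The patent obtains the initial disparity map with the SGM algorithm. As a hedged stand-in, the sketch below uses naive SAD block matching, which is much simpler than SGM (SGM additionally aggregates smoothness costs along scanlines), just to show how two rectified images yield an initial disparity map; the window size and search range are illustrative:

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=16, win=2):
    """Naive SAD block matching on rectified grayscale images.

    For each pixel in the left (reference) image, search up to max_disp
    pixels to the left in the right image and pick the shift with the
    lowest sum of absolute differences over a (2*win+1)^2 window. This is
    only a stand-in for SGM, which adds scanline smoothness constraints.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                cand = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left image shifted by a constant amount, the recovered interior disparity equals that shift.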
  • the aforementioned photographing device may also be a single camera.
  • a single camera takes three images at t0, t1, and t2, and generates a first initial disparity map based on the two images at time t0 and t1, and generates a second initial disparity map based on the two images at time t1 and t2.
  • t1 and t2 can be the time after t0.
  • or, a single camera takes three images at t1, t2, and t0, generates the first initial disparity map based on the two images at t2 and t0, and generates the second initial disparity map based on the two images at t1 and t2, where t1 and t2 can be times before t0.
  • the mobile platform can filter the initial disparity map.
  • the second initial disparity map is used as the reference disparity map of the first initial disparity map as an example.
  • the Speckle Filter algorithm can be used to filter the first initial disparity map.
  • the first initial disparity map is the pre-filtered disparity map.
  • the Speckle Filter algorithm can be used to filter out the small blocks represented by the oval frames in Fig. 2a (i.e., image areas). It can be seen that the noise area in the upper right is effectively filtered out, but at the same time small objects such as the wires in the middle are also filtered out; if this were directly used as the final result, local useful information would obviously be lost.
  • the mobile platform can first determine the target disparity area after performing the Speckle Filter on the first initial disparity map; the target disparity area may be all of the filtered-out image areas, for example, the image areas represented by the oval frames in Fig. 2a.
  • the target disparity area may be a partial area of all areas that are filtered out.
  • the mobile platform can first determine all image areas (denoted as the initial image area) filtered out of the first initial disparity map after the filtering, and then determine the target disparity area from the initial image area, where the number of pixels in the target disparity area is greater than or equal to a preset number threshold; the preset number threshold may be, for example, 10.
  • denote the first initial disparity map as D0, all the filtered-out image regions as Ms0, and the first initial disparity map after filtering as Df0.
  • the number of pixels in each image area in the initial image area can be obtained, and then an image area with the number of pixels greater than or equal to a preset number threshold is used as the target parallax area.
  • the initial image area is the image area represented by 5 oval frames.
  • the number of pixels in the image area represented by each oval frame is compared with the preset number threshold, and the image areas whose pixel count is greater than or equal to the preset number threshold are taken as target disparity areas.
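The target-area selection described above can be sketched as follows: split the filtered-out mask Ms0 into 4-connected regions and keep those whose pixel count reaches the preset number threshold. The connectivity choice and threshold default are assumptions for illustration:

```python
import numpy as np
from collections import deque

def target_disparity_areas(filtered_mask, min_pixels=10):
    """Return the target disparity areas from the filtered-out mask.

    filtered_mask: boolean array, True where the filter removed a valid
    disparity (Ms0: valid in D0 but invalid in Df0). The mask is split
    into 4-connected regions, and regions with at least min_pixels
    pixels (the preset number threshold) are returned as lists of
    (row, col) coordinates.
    """
    h, w = filtered_mask.shape
    visited = np.zeros((h, w), dtype=bool)
    areas = []
    for sy in range(h):
        for sx in range(w):
            if visited[sy, sx] or not filtered_mask[sy, sx]:
                continue
            comp = [(sy, sx)]
            visited[sy, sx] = True
            q = deque(comp)
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not visited[ny, nx] and filtered_mask[ny, nx]):
                        visited[ny, nx] = True
                        comp.append((ny, nx))
                        q.append((ny, nx))
            if len(comp) >= min_pixels:  # preset number threshold, e.g. 10
                areas.append(comp)
    return areas
```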
  • the movable platform may perform secondary verification on the target disparity area according to the second initial disparity map, to determine whether the target disparity area should not have been filtered out and thus whether to fill it into the first initial disparity map after the filtering process; if it is an area that should not be filtered out, the target disparity area is filled into the filtered first initial disparity map.
  • if it is determined to fill the target disparity area into the filtered first initial disparity map, the movable platform determines the depth information according to the filled first initial disparity map; if it is determined not to fill it, the movable platform determines the depth information according to the filtered first initial disparity map. The depth map of the current shooting environment is then generated based on the depth information.
  • the movable platform can acquire the image output by the camera, acquire the first initial disparity map and the second initial disparity map according to the images, determine the filtered-out target disparity area in the first initial disparity map after filtering it, and determine, according to the second initial disparity map, whether to fill the target disparity area into the first initial disparity map after the filtering process. This improves the current disparity-map filtering represented by the Speckle Filter, effectively retaining areas with higher reliability while eliminating unreliable noise areas. Compared with a simple Speckle Filter, the embodiments of the present invention can effectively identify small objects, keep the observation of small obstacles in the disparity map, and avoid losing local useful information.
  • the movable platform determines whether to fill the target disparity area into the first initial disparity map after the filtering process according to the second initial disparity map, including: the movable platform calculates the spatial points corresponding to the pixels in the target disparity area Project to the second initial disparity map to determine the projected pixel points in the second initial disparity map, and determine whether to fill the target disparity area to the second after the filtering process according to the disparity of the pixel points in the target disparity area and the disparity of the projected pixel points An initial disparity map.
  • the movable platform can project the spatial point corresponding to any pixel in the target disparity area onto the second initial disparity map to obtain the corresponding projected pixel coordinates, determine the projection pixel corresponding to that pixel in the second initial disparity map according to the projected pixel coordinates, acquire the disparity of the pixel and the disparity of the corresponding projection pixel, and, according to the acquired disparities, determine whether the target disparity area is an area that needs to be reserved, thereby determining whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • the projected pixel points may include multiple pixel points around the projected pixel coordinates in the second initial disparity map.
  • the 4 integer coordinates closest to the projected pixel coordinates (x, y) can be found, namely (x1, y1), (x2, y1), (x1, y2), (x2, y2), and one or more of the pixels corresponding to these 4 integer coordinates can be used as projection pixels.
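Since the projected coordinates (x, y) are generally non-integer, the four surrounding integer pixel coordinates can be computed as below; this is a small illustrative helper, with names chosen here, not from the patent:

```python
import math

def nearest_integer_neighbors(x, y):
    """Return the four integer pixel coordinates surrounding the
    (generally non-integer) projected coordinates (x, y):
    (x1, y1), (x2, y1), (x1, y2), (x2, y2)."""
    x1, y1 = math.floor(x), math.floor(y)
    x2, y2 = x1 + 1, y1 + 1
    return [(x1, y1), (x2, y1), (x1, y2), (x2, y2)]
```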
  • the movable platform determines whether to fill the target disparity area into the first initial disparity map after the filtering process according to the disparity of the pixels in the target disparity area and the disparity of the projected pixels, as follows: the movable platform determines the deviation between the disparity of a pixel in the target disparity area and the disparity of its projected pixel, and determines according to the deviation whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • the movable platform can calculate the deviation between the parallax of the pixel in the target parallax area and the parallax of the projected pixel.
  • the deviation can be the absolute value of the difference between the parallax of the pixel and the parallax of the corresponding projected pixel.
  • a large deviation means that the pixel and the corresponding projection pixel do not match, and a small deviation means that they match; according to the deviation, it can therefore be determined whether the target disparity area needs to be reserved, and thus whether to fill it into the first initial disparity map after the filtering process.
  • the movable platform determines whether to fill the target disparity area into the first initial disparity map after the filtering process according to the deviation, which includes: the movable platform compares the deviation with a preset deviation threshold, and determines that the target disparity area includes The number of pixels in which the corresponding deviation is less than or equal to the preset deviation threshold, and according to the number, it is determined whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • the movable platform compares the deviation corresponding to each pixel included in the target disparity area with a preset deviation threshold, and counts the number of pixels whose deviation is less than or equal to the preset deviation threshold, that is, the number of pixels in the target disparity area that match their corresponding projected pixels. If this number is greater than or equal to a preset number threshold (for example, 50), or the ratio of this number to the number of pixels in the target disparity area is greater than or equal to a preset ratio threshold (for example, 60%), it can be determined that the target disparity area needs to be reserved.
  • the target disparity area can be filled into the first initial disparity map after filtering, so that the target disparity area in the disparity map finally obtained after filtering is not filtered out.
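The count-or-ratio decision rule can be sketched as follows; the count (50) and ratio (60%) defaults are the example values from the text, while the deviation threshold is an illustrative assumption:

```python
def should_fill(deviations, max_deviation=1.0, min_count=50, min_ratio=0.6):
    """Decide whether a target disparity area should be filled back.

    deviations: |disparity(pixel) - disparity(projected pixel)| for every
    pixel in the area. The area is kept if enough pixels match their
    projections, either by absolute count or by ratio.
    """
    matched = sum(1 for d in deviations if d <= max_deviation)
    return matched >= min_count or matched / len(deviations) >= min_ratio
```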
  • if the disparity of the projection pixel corresponding to a target pixel is an invalid value, the movable platform cannot calculate the deviation between the disparity of the target pixel and the disparity of the corresponding projection pixel; a second weight can be determined for such target pixels.
  • for pixels whose projection pixels have valid disparities, the movable platform can calculate the deviation between the disparity of the pixel and the disparity of the corresponding projection pixel; if the deviation is less than or equal to the preset deviation threshold, a first weight can be determined for these pixels. The first weight is greater than the second weight, and the first weight and the second weight are used to determine whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • specifically, the mobile platform can calculate the sum of all the first weights and second weights, and when the sum of the weights is greater than or equal to a preset weight threshold, it can be determined that the target disparity area is an area that needs to be reserved and can be filled into the first initial disparity map after the filtering process.
  • for example, the first weight can be set to 1 and the second weight to 0.2. Suppose the disparity of the projection pixel corresponding to a first pixel in the target disparity area is an invalid value, while the projection pixel corresponding to a second pixel has a valid disparity; the first pixel then contributes the second weight, and the second pixel contributes the first weight when its deviation is within the threshold.
  • through the first weight and the second weight, it is possible to fully and evenly consider whether each pixel in the target disparity area has a valid disparity at its projection pixel in the second initial disparity map.
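The weighted variant of the decision can be sketched as follows; the weight values (1 and 0.2) follow the text's example, while the deviation threshold, weight threshold, and data representation are assumptions for illustration:

```python
def weighted_fill_decision(pixel_pairs, max_deviation=1.0,
                           w_valid=1.0, w_invalid=0.2, weight_threshold=5.0):
    """Weighted fill decision for a target disparity area.

    pixel_pairs: (pixel_disparity, projected_disparity) per pixel, where
    projected_disparity is None when the projection has no valid
    disparity. A matching pixel contributes the first weight (w_valid);
    a pixel whose projection is invalid contributes the smaller second
    weight (w_invalid). The area is filled back when the accumulated
    weight reaches weight_threshold.
    """
    total = 0.0
    for disp, proj in pixel_pairs:
        if proj is None:
            total += w_invalid           # deviation cannot be computed
        elif abs(disp - proj) <= max_deviation:
            total += w_valid             # pixel matches its projection
    return total >= weight_threshold
```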
  • the movable platform projects the spatial points corresponding to the pixel points in the target disparity area onto the second initial disparity map to determine the projected pixel points in the second initial disparity map, including: the movable platform determines The position coordinates of the spatial points corresponding to the pixel points in the target disparity area, and the spatial points are projected to the second initial disparity map according to the position coordinates of the spatial points, so as to determine the projection pixel points in the second initial disparity map.
  • the movable platform may first obtain the spatial point of the pixel in the camera coordinate system at the shooting time corresponding to the first initial disparity map; then, according to the pose relationship between the shooting time corresponding to the first initial disparity map and the shooting time corresponding to the second initial disparity map, together with the camera intrinsic matrix, the spatial point is converted to the camera-plane physical coordinate system at the shooting time corresponding to the second initial disparity map and projected onto the second initial disparity map, so that the projection pixel corresponding to the pixel can be found in the second initial disparity map.
  • the first initial disparity map corresponds to the shooting time t0
  • the second initial disparity map corresponds to the shooting time t1.
  • let f be the focal length, b the binocular baseline, and K the camera intrinsic matrix; for a pixel p (in homogeneous coordinates) with disparity disp, the depth is d = f·b/disp.
  • the 3D point corresponding to the pixel p in the camera coordinate system at time t0 (that is, the above-mentioned spatial point) is d·K⁻¹p.
  • with the pose (R, t) from t0 to t1, the 3D point is converted to the camera coordinate system at t1 to obtain R(d·K⁻¹p) + t; projecting this point [x, y, z] through the camera intrinsic matrix K gives the homogeneous pixel coordinates K(R(d·K⁻¹p) + t), from which the projected pixel coordinates are obtained by dividing by the third component.
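The projection above can be written out as a short numerical sketch. The symbols (p, disp, K, R, t, f, b) follow the text; the homogeneous-coordinate handling and function name are assumptions:

```python
import numpy as np

def project_to_other_frame(p, disp, K, R, t, f, b):
    """Project a pixel from the t0 disparity map into the t1 camera.

    p: homogeneous pixel coordinates [u, v, 1] at t0; disp: its
    disparity; K: 3x3 camera intrinsic matrix; (R, t): pose of the t1
    camera relative to t0; f: focal length; b: stereo baseline.
    Computes depth d = f*b/disp, the 3D point d*K^-1*p, and the
    projection K(R(d*K^-1*p) + t), dehomogenized by its z component.
    """
    d = f * b / disp                       # disparity -> depth
    X = d * (np.linalg.inv(K) @ p)         # 3D point in the t0 camera frame
    Xc = R @ X + t                         # same point in the t1 camera frame
    q = K @ Xc                             # homogeneous pixel at t1
    return q[:2] / q[2]                    # projected pixel coordinates
```

With an identity pose the pixel projects onto itself, which is a quick sanity check on the formula chain.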
  • the foregoing second initial disparity map may be multiple initial disparity maps corresponding to multiple shooting moments.
  • the above-mentioned second initial disparity map includes two initial disparity maps corresponding to the two shooting moments as an example.
  • the two initial disparity maps are recorded as the third initial disparity map at time t1 and the fourth initial disparity map at time t2.
  • the third initial disparity map and the fourth initial disparity map are both used as reference disparity maps of the first initial disparity map at time t0; after the movable platform determines the filtered-out target disparity area in the filtered first initial disparity map, it needs to perform secondary verification on the target disparity area according to the third initial disparity map and the fourth initial disparity map to determine whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • the movable platform may project the spatial points corresponding to the pixels in the target disparity area onto the third initial disparity map and the fourth initial disparity map respectively, to determine the first projection pixel in the third initial disparity map and the second projection pixel in the fourth initial disparity map; the deviations between the disparity of the pixel and the disparities of the two projection pixels can then be used to determine whether the pixel matches the two projection pixels, where a match may be determined when the deviation is less than or equal to a preset deviation threshold.
  • the third initial disparity map and the fourth initial disparity map each correspond to a weight coefficient: the third initial disparity map corresponds to the first weight coefficient, the fourth initial disparity map corresponds to the second weight coefficient, and the sum of the first and second weight coefficients can be 1.
  • the movable platform may perform probabilistic fusion on the matching results of a pixel in the third initial disparity map and the fourth initial disparity map: if the pixel matches both its first projection pixel in the third initial disparity map and its second projection pixel in the fourth initial disparity map, the number of matched pixels in the target disparity area increases by 1 (that is, the sum of the first and second weight coefficients); if the pixel matches the first projection pixel but not the second, the count increases by the first weight coefficient; if the pixel matches the second projection pixel but not the first, the count increases by the second weight coefficient; and if the pixel matches neither projection pixel, the count does not increase.
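The per-pixel fusion rule above can be sketched as follows; the weight values default to the text's example (0.7 and 0.3, summing to 1), and the function name and input representation are assumptions:

```python
def fused_match_count(matches_t1, matches_t2, w1=0.7, w2=0.3):
    """Fuse per-pixel match results against two reference disparity maps.

    matches_t1[i] / matches_t2[i]: whether pixel i of the target area
    matches its projection in the third (t1) and fourth (t2) initial
    disparity maps. A pixel matching both contributes w1 + w2 (= 1),
    matching only one contributes that map's weight, and matching
    neither contributes 0. Returns the fused matched-pixel count.
    """
    total = 0.0
    for m1, m2 in zip(matches_t1, matches_t2):
        total += (w1 if m1 else 0.0) + (w2 if m2 else 0.0)
    return total
```

The fused count can then be compared against the preset number threshold or ratio threshold exactly as in the single-reference case.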
  • after all pixels in the target disparity area have been projected onto the third initial disparity map and the fourth initial disparity map, if the number of matched pixels in the target disparity area is greater than or equal to the preset number threshold (for example, 50), or the ratio of that number to the number of pixels in the target disparity area is greater than or equal to the preset ratio threshold (for example, 60%), it can be determined that the target disparity area is an area that needs to be reserved, and it can be filled into the filtered first initial disparity map, so that the target disparity area in the finally obtained disparity map is not filtered out.
  • the relative magnitude of the first weight coefficient corresponding to the third initial disparity map and the second weight coefficient corresponding to the fourth initial disparity map may be related to the corresponding shooting moments; specifically, the smaller the shooting-time interval to the first initial disparity map, the larger the weight coefficient.
  • the photographing time interval between the third initial disparity map and the first initial disparity map is smaller than the photographing time interval between the fourth initial disparity map and the first initial disparity map
  • the first weight coefficient is greater than the second weight coefficient; for example, the first weight coefficient can be set to 0.7 and the second weight coefficient to 0.3.
  • t0 can represent the current moment
  • t1 and t2 represent moments adjacent to t0, which can be previous moments (t0-1, t0-2), subsequent moments (t0+1, t0+2), or moments straddling t0 (t0-1, t0+1), which is not limited in the embodiment of the present invention.
  • FIG. 2c is a schematic flowchart of another image processing method disclosed in an embodiment of the present invention.
  • the image processing method may specifically include the following steps:
  • the pre-filtering disparity map D0 at time t0 is the aforementioned first initial disparity map
  • the pre-filtering disparity map Dn at time tn is the aforementioned reference disparity map.
  • after filtering, the filtered-out area Ms can be determined; Ms includes one or more connected domains. Each connected domain in Ms can be screened according to the principle of preserving large areas, obtaining the screened connected domain S0 (that is, the above-mentioned target disparity area). Then, according to the camera pose relationship, each pixel in the connected domain S0 is projected into the camera coordinate system of the pre-filtering disparity map at each other time, and its disparity is compared with that of the projected pixel in the pre-filtering disparity map at the corresponding time, filtering out mismatched pixels according to the disparity deviation.
  • the matched pixels corresponding to the pre-filtering disparity map at each time form an area, and these areas are combined by probabilistic fusion to obtain the matching area S0' (that is, the area that needs to be reserved); S0' is then added to the filtered disparity map Df to obtain the final filtered disparity map at time t0. This improves the current disparity-map filtering represented by the Speckle Filter: untrustworthy noise areas are eliminated while reliable areas are effectively retained.
  • therefore, the embodiment of the present invention can effectively identify small objects, retain the observation of small obstacles in the disparity map, and avoid losing local useful information.
  • FIG. 3 is an image processing device provided by an embodiment of the present invention, which is applied to a movable platform, wherein the movable platform includes a photographing device, and the image processing device includes:
  • the acquiring module 301 is used to acquire the image output by the photographing device.
  • the acquiring module 301 is further configured to acquire a first initial disparity map and a second initial disparity map according to the image.
  • the determining module 302 is configured to determine the filtered target disparity area in the first initial disparity map after filtering the first initial disparity map.
  • the filling module 303 is configured to determine, according to the second initial disparity map, whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • the filling module 303 is also used for:
  • the target disparity area is filled into the first initial disparity map after the filtering process.
  • the determining module 302 is further configured to:
  • the depth information is determined according to the filled first initial disparity map.
  • the determining module 302 is further configured to:
  • the depth information is determined according to the first initial disparity map after the filtering process.
  • the filling module 303 is specifically used for:
  • when the projected pixel coordinates of the spatial point on the second initial disparity map include fractional coordinates, the projected pixel points are multiple pixels around the projected pixel coordinates in the second initial disparity map.
  • the filling module 303 is specifically used for:
  • the filling module 303 is specifically used for:
  • according to the number, it is determined whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • the filling module 303 is specifically used for:
  • when the number is greater than or equal to a preset number threshold, or the ratio of the number to the number of pixels in the target disparity area is greater than or equal to a preset ratio threshold, it is determined to fill the target disparity area into the first initial disparity map after the filtering process.
  • the determining module 302 is further configured to:
  • a second weight value is determined for the target pixel point in the target disparity area.
  • the determining module 302 is specifically configured to:
  • the deviation between the disparity of the pixel in the target disparity area and the disparity of the projection pixel is determined.
  • the filling module 303 is specifically used for:
  • the filling module 303 is specifically used for:
  • the sum of the first weight and the second weight is determined.
  • the filling module 303 is specifically used for:
  • the position coordinates of the spatial point corresponding to the pixel point in the target disparity area are determined.
  • the determining module 302 is specifically configured to:
  • a target disparity area is determined from the initial image area, wherein the number of pixels in the target disparity area is greater than or equal to a preset number threshold.
  • FIG. 4 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
  • the movable platform described in this embodiment includes a processor 401, a memory 402, and a photographing device 403.
  • the aforementioned processor 401, memory 402, and photographing device 403 are connected via a bus.
  • the aforementioned processor 401 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor, or any conventional processor.
  • the aforementioned photographing device 403 may be a binocular camera or a monocular camera.
  • the aforementioned memory 402 may include a read-only memory and a random access memory, and provides program instructions and data to the processor 401.
  • a part of the memory 402 may also include a non-volatile random access memory.
  • the image output by the photographing device 403 is acquired.
  • according to the second initial disparity map, it is determined whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • the processor 401 is further configured to:
  • the target disparity area is filled into the first initial disparity map after the filtering process.
  • the processor 401 is further configured to:
  • the depth information is determined according to the filled first initial disparity map.
  • the processor 401 is further configured to:
  • the depth information is determined according to the first initial disparity map after the filtering process.
  • the processor 401 is specifically configured to:
  • when the projected pixel coordinates of the spatial point on the second initial disparity map include fractional coordinates, the projected pixel points are multiple pixels around the projected pixel coordinates in the second initial disparity map.
  • the processor 401 is specifically configured to:
  • the processor 401 is specifically configured to:
  • according to the number, it is determined whether to fill the target disparity area into the first initial disparity map after the filtering process.
  • the processor 401 is specifically configured to:
  • when the number is greater than or equal to a preset number threshold, or the ratio of the number to the number of pixels in the target disparity area is greater than or equal to a preset ratio threshold, it is determined to fill the target disparity area into the first initial disparity map after the filtering process.
  • the processor 401 is further configured to:
  • a second weight value is determined for the target pixel in the target disparity area.
  • the processor 401 is specifically configured to:
  • the deviation between the disparity of the pixel in the target disparity area and the disparity of the projection pixel is determined.
  • the processor 401 is specifically configured to:
  • the sum of the first weight and the second weight is determined.
  • the processor 401 is specifically configured to:
  • the position coordinates of the spatial point corresponding to the pixel point in the target disparity area are determined.
  • the processor 401 is specifically configured to:
  • a target disparity area is determined from the initial image area, wherein the number of pixels in the target disparity area is greater than or equal to a preset number threshold.
  • the processor 401, the memory 402, and the photographing device 403 described in the embodiment of the present invention can perform the implementations described in the image processing method provided in FIG. 1 and in the image processing device described in FIG. 3, which will not be repeated here.
  • the embodiment of the present invention also provides a computer storage medium in which program instructions are stored; when the program is executed, it may include part or all of the steps of the image processing method in the embodiment corresponding to FIG. 1.
  • the program can be stored in a computer-readable storage medium, and the storage medium can include: a flash disk, read-only memory (Read-Only Memory, ROM), random-access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, etc.


Abstract

An image processing method and a movable platform. The image processing method includes: acquiring an image output by a photographing device of the movable platform; obtaining a first initial disparity map and a second initial disparity map from the image; determining the target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing; and determining, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map, i.e., retaining areas that should not have been filtered out. Thus, while noise in the disparity map is removed, small objects can be effectively identified, observations of small obstacles in the disparity map are retained, and the loss of locally useful information is avoided.

Description

Image processing method and movable platform — Technical Field
The present invention relates to the technical field of image processing, and in particular to an image processing method and a movable platform.
Background
Computer vision relies on an imaging system in place of a visual organ as its input; the most common imaging system is the camera. For example, the imaging system may consist of two cameras forming a basic vision system known as stereo vision (Stereo Vision), in which the two cameras capture two images at the same moment to generate a depth map (Depth Map). As another example, the imaging system may be a single camera: two images captured by the single camera at a preset time interval (e.g., at two successive working moments) can also be used to generate a depth map.
To generate a depth map, the two images described above are usually acquired first, and an initial disparity map is then obtained from them, for example by the Semi-Global Matching (SGM) algorithm; depth information is determined from the initial disparity map, and the depth map is obtained from the depth information. However, the resulting initial disparity map usually contains a large amount of noise and erroneous observations. For example, the sky region in an image should be black, indicating no observation at infinity, yet in the actual initial disparity map the sky region appears to contain observations. At present, the initial disparity map is usually filtered further, for example with the commonly used Speckle Filter algorithm. Filtering makes the initial disparity map smoother, but small objects (such as power lines or branches) are likely to be removed as noise as well. Evidently, such filtering handles details poorly: it cannot identify small objects and loses locally useful information.
Summary of the Invention
Embodiments of the present invention disclose an image processing method, device, and movable platform, which can effectively identify small objects while removing noise from the disparity map, thereby retaining observations of small obstacles in the disparity map and avoiding the loss of locally useful information.
A first aspect of the embodiments of the present invention discloses an image processing method applied to a movable platform, wherein the movable platform includes a photographing device, and the method includes:
acquiring an image output by the photographing device;
obtaining a first initial disparity map and a second initial disparity map from the image;
determining a target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
determining, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map.
A second aspect of the embodiments of the present invention discloses a movable platform, including: a processor, a photographing device, and a memory, wherein:
the memory is configured to store program instructions;
the processor, when invoking the program instructions, is configured to execute:
acquiring an image output by the photographing device;
obtaining a first initial disparity map and a second initial disparity map from the image;
determining a target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
determining, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map.
A third aspect of the embodiments of the present invention discloses an image processing device applied to a movable platform, wherein the movable platform includes a photographing device, and the image processing device includes:
an acquiring module, configured to acquire an image output by the photographing device;
the acquiring module being further configured to obtain a first initial disparity map and a second initial disparity map from the image;
a determining module, configured to determine a target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
a filling module, configured to determine, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image processing method of the first aspect.
In embodiments of the present invention, an image output by the photographing device of the movable platform can be acquired, a first initial disparity map and a second initial disparity map can be obtained from the image, the target disparity area filtered out of the first initial disparity map after filtering can be determined, and whether to fill the target disparity area into the filtered first initial disparity map can be determined according to the second initial disparity map, i.e., areas that should not have been filtered out are retained. Thus, while noise in the disparity map is removed, small objects can be effectively identified, observations of small obstacles in the disparity map are retained, and the loss of locally useful information is avoided.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an image processing method disclosed in an embodiment of the present invention;
FIG. 2a is a schematic diagram of filtering a disparity map disclosed in an embodiment of the present invention;
FIG. 2b is a schematic diagram of determining projected pixel points disclosed in an embodiment of the present invention;
FIG. 2c is a schematic flowchart of another image processing method disclosed in an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image processing device disclosed in an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a movable platform disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
The movable platform in the embodiments of the present invention may include an aerial photography aircraft or another vehicle equipped with one or more cameras, such as a driverless car, an unmanned aerial vehicle, VR/AR glasses, a dual-camera mobile phone, or a smart cart with a vision system; the embodiments of the present invention are not limited in this respect.
Referring to FIG. 1, which is a schematic flowchart of an image processing method provided by an embodiment of the present invention. The image processing method described in this embodiment can be applied to a movable platform that includes a photographing device, and includes the following steps:
101. Acquire an image output by the photographing device.
102. Obtain a first initial disparity map and a second initial disparity map from the image.
Specifically, the photographing device may include dual cameras or a single camera. After the image output by the photographing device is acquired, the first initial disparity map and the second initial disparity map can be obtained from the image. The second initial disparity map may be determined from two images captured before the two images used to determine the first initial disparity map, or from two images captured after them; no specific limitation is made here. In some cases, the second initial disparity map may be one frame or multiple frames. When the second initial disparity map consists of multiple frames, at least one frame may be determined from two images captured before the two images used for the first initial disparity map, and at least one frame may be determined from two images captured after them.
If the photographing device has dual cameras, the movable platform can use them to capture images of the environment; each shot yields two images, one from the left camera and one from the right camera. From these two images, the movable platform can compute an initial disparity map of the currently photographed environment. The movable platform can take multiple shots: at time t0 it captures two images and generates the first initial disparity map from them; at time t1 it captures two images and generates the second initial disparity map from them. Here the second initial disparity map serves as the reference disparity map for the first initial disparity map; of course, the first initial disparity map could equally serve as the reference for the second.
In some feasible implementations, for the two images obtained from each shot, the movable platform may first acquire the two images and then process them with the SGM algorithm to obtain an initial disparity map.
When the two images are used to compute the initial disparity map of the current environment, the image from either camera may serve as the reference: for example, the image from the left camera or the image from the right camera; the embodiments of the present invention are not limited in this respect.
In some feasible implementations, the photographing device may also be a single camera. In that case, two images from two successive shots are needed to compute each initial disparity map of the current environment. For example, the single camera captures three images at times t0, t1, and t2; the first initial disparity map is generated from the images at t0 and t1, and the second from the images at t1 and t2, where t1 and t2 may be later than t0. In some cases, the single camera captures three images at times t1, t2, and t0; the first initial disparity map is generated from the images at t2 and t0, and the second from the images at t1 and t2, where t1 and t2 may be earlier than t0.
103. Determine the target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing.
Specifically, since the initial disparity map usually contains a large amount of noise and erroneous observations, the movable platform can filter it. Taking the second initial disparity map as the reference for the first initial disparity map, the Speckle Filter algorithm may be applied to the first initial disparity map. Taking FIG. 2a as an example, the first initial disparity map is the pre-filtered disparity map; the Speckle Filter removes the small blocks (i.e., image areas) marked by the ellipses in FIG. 2a. The noise area at the upper right is effectively removed, but small objects such as the power lines in the middle are removed as well; taking this directly as the final result would clearly lose locally useful information.
To effectively identify small objects and avoid losing locally useful information, the movable platform may first determine the target disparity area removed from the first initial disparity map by the Speckle Filter. The target disparity area may be all removed image areas, for example the image areas marked by the ellipses in FIG. 2a.
In some feasible implementations, the target disparity area may be a subset of all removed areas. The movable platform may first determine all image areas removed from the first initial disparity map by filtering (denoted the initial image areas), and then determine the target disparity area from them, where the number of pixels in the target disparity area is greater than or equal to a preset number threshold, for example 10. Denoting the first initial disparity map as D0, all removed image areas as Ms0, and the filtered first initial disparity map as Df0, this can be understood as D0 - Ms0 = Df0.
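The screening described above can be sketched as follows. This is an illustrative Python sketch rather than part of the original disclosure: it finds the connected domains of a binary mask of removed pixels and keeps only those with at least a preset number of pixels. The function name and the choice of 4-connectivity are our assumptions.

```python
from collections import deque

def screen_removed_regions(mask, min_pixels=10):
    """Return the connected domains (4-connectivity) of a binary 'removed'
    mask whose pixel count is >= min_pixels.
    mask: 2D list of 0/1, where 1 marks a pixel removed by the speckle filter."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # flood-fill one connected domain starting at (i, j)
                q, domain = deque([(i, j)]), []
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    domain.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(domain) >= min_pixels:  # keep only sufficiently large domains
                    regions.append(domain)
    return regions
```

For example, with a preset number threshold of 3, a 4-pixel removed block would be kept as a target disparity area while an isolated removed pixel would be discarded.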
Specifically, the number of pixels in each of the initial image areas can be obtained, and the image areas whose pixel counts are greater than or equal to the preset number threshold are taken as target disparity areas. Taking FIG. 2a as an example, the initial image areas are the areas marked by the five ellipses; the pixel count of each is compared with the preset number threshold, and those areas whose count is greater than or equal to the threshold are taken as target disparity areas.
104. Determine, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map.
Specifically, for each target disparity area, with the second initial disparity map as the reference for the first initial disparity map, the movable platform can use the second initial disparity map to re-verify the target disparity area, determine whether it is an area that should not have been removed, and decide whether to fill it into the filtered first initial disparity map; if it is an area that should not have been removed, the target disparity area is filled into the filtered first initial disparity map.
Further, if it is determined that the target disparity area should be filled into the filtered first initial disparity map, the movable platform determines depth information from the filled first initial disparity map; otherwise, it determines depth information from the filtered first initial disparity map. The depth map of the currently photographed environment can then be generated from the depth information.
In the embodiments of the present invention, the movable platform can acquire the image output by the photographing device, obtain the first and second initial disparity maps from it, determine the target disparity area removed from the first initial disparity map by filtering, and decide from the second initial disparity map whether to fill the target disparity area into the filtered first initial disparity map, thereby retaining areas that should not have been removed. This improves current disparity-map filters, typified by the Speckle Filter: untrustworthy noise areas are eliminated while highly credible areas are effectively retained. Compared with a plain Speckle Filter, the embodiments of the present invention can effectively identify small objects, retain observations of small obstacles in the disparity map, and avoid losing locally useful information.
In some feasible implementations, the movable platform's determining, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map includes: projecting the spatial points corresponding to pixels in the target disparity area into the second initial disparity map so as to determine projected pixel points there, and determining, from the disparities of the pixels in the target disparity area and the disparities of the projected pixel points, whether to fill the target disparity area into the filtered first initial disparity map.
Specifically, the movable platform can project the spatial point corresponding to any pixel in the target disparity area into the second initial disparity map to obtain the corresponding projected pixel coordinates, determine from those coordinates the projected pixel point corresponding to that pixel in the second initial disparity map, obtain the disparity of the pixel and that of the corresponding projected pixel point, and determine from these disparities whether the target disparity area is an area to be retained, thereby deciding whether to fill it into the filtered first initial disparity map.
In some feasible implementations, when the projected pixel coordinates of the spatial point on the second initial disparity map include fractional values, the projected pixel points may include multiple pixels around the projected pixel coordinates in the second initial disparity map. As shown in FIG. 2b, if the coordinate values x and/or y of the projected pixel coordinates (x, y) are fractional, the four integer coordinates nearest to (x, y) can be found: (x1, y2), (x2, y2), (x1, y1), (x2, y1), and one or more of the pixels at these four integer coordinates can be taken as the projected pixel points.
In some feasible implementations, the movable platform's determining, from the disparities of the pixels in the target disparity area and the disparities of the projected pixel points, whether to fill the target disparity area into the filtered first initial disparity map includes: determining the deviation between the disparity of a pixel in the target disparity area and the disparity of the projected pixel point, and deciding from the deviation whether to fill the target disparity area into the filtered first initial disparity map.
Specifically, the movable platform can compute the deviation between the disparity of a pixel in the target disparity area and that of the corresponding projected pixel point; the deviation may be the absolute value of the difference between the two disparities. A large deviation means the pixel does not match its corresponding projected pixel point, while a small deviation means it does. From the deviation it can thus be determined whether the target disparity area is an area to be retained, and hence whether to fill it into the filtered first initial disparity map.
In some feasible implementations, deciding from the deviation whether to fill the target disparity area into the filtered first initial disparity map includes: comparing the deviation with a preset deviation threshold, determining the number of pixels in the target disparity area whose deviation is less than or equal to the preset deviation threshold, and deciding from this number whether to fill the target disparity area into the filtered first initial disparity map.
Specifically, the movable platform compares the deviation of each pixel in the target disparity area with the preset deviation threshold and counts the pixels whose deviation is less than or equal to it, i.e., the number of pixels in the target disparity area that match their corresponding projected pixel points. If this number is greater than or equal to a preset number threshold (e.g., 50), or the ratio of this number to the number of pixels in the target disparity area is greater than or equal to a preset ratio threshold (e.g., 60%), the target disparity area can be determined to be an area to be retained and can be filled into the filtered first initial disparity map, so that the target disparity area is not removed from the final disparity map.
In some feasible implementations, if the disparity of the projected pixel point corresponding to a target pixel in the target disparity area is an invalid value, for example an infinite disparity, the movable platform cannot compute the deviation between the target pixel's disparity and that of its projected pixel point; for such target pixels, a second weight can be determined. If the disparity of the projected pixel point corresponding to a pixel is a valid value, the movable platform can compute the deviation between the pixel's disparity and that of the corresponding projected pixel point; if the deviation is less than or equal to the preset deviation threshold, a first weight can be determined for these pixels, the first weight being greater than the second weight, and whether to fill the target disparity area into the filtered first initial disparity map is determined from the first and second weights.
Specifically, after obtaining the first or second weight of each pixel in the target disparity area, the movable platform can compute the sum of all first and second weights; when this sum is greater than or equal to a preset weight threshold, the target disparity area can be determined to be an area to be retained and filled into the filtered first initial disparity map. For example, the first weight may be set to 1 and the second to 0.2. Suppose the projected pixel point of the first pixel in the target disparity area has an invalid disparity, while that of the second pixel has a valid disparity and the second pixel's deviation is less than or equal to the preset deviation threshold; then the first pixel is assigned the second weight (0.2) and the second pixel the first weight (1), so the sum of weights is 1 + 0.2 = 1.2. This can be understood as the two pixels together contributing 1.2 to the count of matched pixels, which is increased by 1.2. Setting a first and a second weight takes full and balanced account of whether each pixel in the target disparity area has a disparity at its projected pixel point in the second initial disparity map.
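The weighted accumulation described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: `None` stands in for an invalid (e.g., infinite) projected disparity, and the default weights 1 and 0.2 follow the example values in the text.

```python
def weighted_match_count(pixel_disps, proj_disps, dev_threshold=1.0,
                         w_valid=1.0, w_invalid=0.2):
    """Accumulate the matched-pixel count of a target disparity area.
    pixel_disps: disparities of pixels in the target disparity area.
    proj_disps: disparities of the corresponding projected pixel points,
    with None marking an invalid projected disparity.
    A pixel with a valid projected disparity that deviates by at most
    dev_threshold contributes the first weight; a pixel with an invalid
    projected disparity contributes the smaller second weight."""
    total = 0.0
    for d, pd in zip(pixel_disps, proj_disps):
        if pd is None:
            total += w_invalid            # second weight: no disparity to compare
        elif abs(d - pd) <= dev_threshold:
            total += w_valid              # first weight: matched within threshold
    return total
```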
In some feasible implementations, the movable platform's projecting the spatial points corresponding to pixels in the target disparity area into the second initial disparity map so as to determine projected pixel points includes: determining the position coordinates of the spatial point corresponding to a pixel in the target disparity area, and projecting the spatial point into the second initial disparity map according to those position coordinates so as to determine the projected pixel point in the second initial disparity map.
Specifically, for any pixel in the target disparity area, the movable platform can first obtain the spatial point of that pixel in the camera coordinate system at the capture time of the first initial disparity map, then, using the pose relationship between the capture times of the first and second initial disparity maps together with the camera intrinsic matrix, transform the spatial point into the camera-plane physical coordinate system at the capture time of the second initial disparity map, and project it into the corresponding camera-plane pixel coordinate system, thereby finding the projected pixel point of that pixel in the second initial disparity map.
Taking a pixel p in the target disparity area as an example, with the first initial disparity map captured at time t0 and the second at time t1, and the disparity of p denoted disp, its depth is computed as d = f·b/disp, where f is the focal length and b is the binocular baseline. From the camera intrinsics K and the depth d, the 3D point (the spatial point above) corresponding to p in the camera coordinate system at t0 is d·K⁻¹p. Using the pose relationship between t0 and t1, this 3D point is transformed into the camera coordinate system at t1 to give R(d·K⁻¹p)+t, and the intrinsics K then give the corresponding point [x, y, z]ᵀ = K(R(d·K⁻¹p)+t), from which the projected pixel point of p in the second initial disparity map is computed.
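The projection chain for pixel p can be sketched in a few lines of numpy. This is a minimal illustration under a pinhole-camera assumption; the function name and argument layout are ours, and K, R, t are assumed known from calibration and pose estimation.

```python
import numpy as np

def project_to_reference(px, py, disp, K, R, t, f, b):
    """Project pixel (px, py) with disparity disp from the t0 view into the
    reference (t1) view: depth d = f*b/disp, back-project with K^-1, apply
    the t0->t1 pose (R, t), then re-project with K."""
    d = f * b / disp                        # depth from disparity
    p = np.array([px, py, 1.0])             # homogeneous pixel coordinates
    X0 = d * np.linalg.inv(K) @ p           # 3D point in the t0 camera frame: d*K^-1*p
    X1 = R @ X0 + t                         # 3D point in the t1 camera frame
    x, y, z = K @ X1                        # [x, y, z]^T = K(R(d*K^-1*p) + t)
    return x / z, y / z                     # projected pixel coordinates in the t1 view
```

With an identity pose (R = I, t = 0), a pixel projects back onto itself, which is a convenient sanity check.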
In some feasible implementations, the second initial disparity map above may be multiple initial disparity maps corresponding to multiple capture times. Taking the case of two capture times as an example, the two initial disparity maps are denoted the third initial disparity map at t1 and the fourth initial disparity map at t2; the third and fourth then jointly serve as reference disparity maps for the first initial disparity map at t0. After determining the target disparity area removed from the filtered first initial disparity map, the movable platform needs to re-verify the target disparity area against the third and fourth initial disparity maps to decide whether to fill it into the filtered first initial disparity map.
Specifically, the movable platform can project the spatial points corresponding to pixels in the target disparity area into the third and fourth initial disparity maps respectively, determining a first projected pixel point in the third initial disparity map and a second projected pixel point in the fourth. Whether a pixel matches the two projected pixel points can be determined from the deviations between its disparity and theirs, a match being declared when the deviation is less than or equal to the preset deviation threshold. The third and fourth initial disparity maps each correspond to a weight coefficient: the third to a first weight coefficient and the fourth to a second weight coefficient, whose sum may be 1.
Further, the movable platform can probabilistically fuse the matching results of the pixel in the third and fourth initial disparity maps: if the pixel matches both its first and its second projected pixel point, the count of matched pixels in the target disparity area increases by 1 (the sum of the first and second weight coefficients); if it matches the first projected pixel point but not the second, the count increases by the first weight coefficient; if it matches the second but not the first, by the second weight coefficient; and if it matches neither, by 0.
After all pixels in the target disparity area have been projected into the third and fourth initial disparity maps, if the resulting count of matched pixels in the target disparity area is greater than or equal to a preset number threshold (e.g., 50), or its ratio to the number of pixels in the target disparity area is greater than or equal to a preset ratio threshold (e.g., 60%), the target disparity area can be determined to be an area to be retained and filled into the filtered first initial disparity map, so that it is not removed from the final disparity map. Probabilistic fusion over multiple reference disparity maps yields a more accurate count of matched pixels for deciding whether to fill the target disparity area back into the filtered first initial disparity map, which helps further improve the accuracy of identifying small objects, accurately retain observations of small obstacles in the disparity map, and avoid losing locally useful information.
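The probabilistic fusion over multiple reference disparity maps can be sketched as a weighted sum. This is an illustrative sketch (names and data layout are ours): each pixel carries one boolean per reference map, and each reference map carries a weight coefficient, e.g., 0.7 for the nearer-in-time map and 0.3 for the farther one.

```python
def fused_match_count(matches_per_pixel, ref_weights):
    """Probabilistically fuse per-reference match results.
    matches_per_pixel: list over pixels, each a list of booleans, one per
    reference disparity map (True = matched in that reference).
    ref_weights: weight coefficient per reference map (summing to 1).
    A pixel matched in every reference contributes 1; matched in a subset,
    the sum of those references' weight coefficients; matched in none, 0."""
    return sum(w
               for flags in matches_per_pixel
               for matched, w in zip(flags, ref_weights)
               if matched)
```

For example, with weights (0.7, 0.3), a pixel matched in both maps contributes 1, one matched only in the first contributes 0.7, and one matched in neither contributes 0.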
In some feasible implementations, the relative sizes of the first weight coefficient (corresponding to the third initial disparity map) and the second weight coefficient (corresponding to the fourth initial disparity map) may depend on the corresponding capture times; specifically, the smaller the capture-time interval from the first initial disparity map, the larger the weight coefficient. For example, if the interval between the third and first initial disparity maps is smaller than that between the fourth and first, the first weight coefficient is greater than the second; they may be set to 0.7 and 0.3 respectively.
In some feasible implementations, t0 may denote the current time, and t1 and t2 denote times adjacent to t0: both earlier (t0-1, t0-2), both later (t0+1, t0+2), or one on each side (t0-1, t0+1); the embodiments of the present invention are not limited in this respect.
In some feasible implementations, as shown in FIG. 2c, which is a schematic flowchart of another image processing method disclosed in an embodiment of the present invention, the image processing method may specifically include the following steps:
Taking t0 as the current time and t1 to tn as reference times, for the two images captured by the dual cameras at each time, the binocular grayscale images are first acquired and the SGM algorithm is then used to obtain the pre-filtered disparity map: D0 at t0 is the first initial disparity map above, and D1 at t1, ..., Dn at tn are the reference disparity maps above. Speckle Filter processing is applied to the pre-filtered disparity map D0 at t0 to obtain the filtered disparity map Df; from D0 and Df the removed region Ms can be determined. Ms comprises one or more connected domains, which can be screened on the principle of retaining large regions to obtain the screened connected domain S0 (the target disparity area above). Each pixel in S0 is then projected, via the camera pose relationships, into the camera coordinate system of the pre-filtered disparity map at each other time, and its disparity is compared with that of the projected pixel point in the corresponding pre-filtered disparity map; the pixels matched against the pre-filtered disparity map at each time form a region, and these regions are probabilistically fused to obtain the matching region S0' (the region to be retained), which is added back to the filtered disparity map Df to obtain the final filtered disparity map at t0. This improves current disparity-map filters, typified by the Speckle Filter: untrustworthy noise regions are eliminated while highly credible regions are effectively retained. Compared with a plain Speckle Filter, the embodiments of the present invention can effectively identify small objects, retain observations of small obstacles in the disparity map, and avoid losing locally useful information.
Referring to FIG. 3, which shows an image processing device provided by an embodiment of the present invention, applied to a movable platform, wherein the movable platform includes a photographing device, and the image processing device includes:
an acquiring module 301, configured to acquire an image output by the photographing device.
The acquiring module 301 is further configured to obtain a first initial disparity map and a second initial disparity map from the image.
A determining module 302, configured to determine the target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing.
A filling module 303, configured to determine, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, the filling module 303 is further configured to:
if so, fill the target disparity area into the filtered first initial disparity map.
Optionally, the determining module 302 is further configured to:
determine depth information according to the filled first initial disparity map.
Optionally, the determining module 302 is further configured to:
if not, determine depth information according to the filtered first initial disparity map.
Optionally, the filling module 303 is specifically configured to:
project the spatial points corresponding to pixels in the target disparity area into the second initial disparity map, so as to determine projected pixel points in the second initial disparity map;
determine, according to the disparities of the pixels in the target disparity area and the disparities of the projected pixel points, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, when the projected pixel coordinates of the spatial point on the second initial disparity map include fractional coordinates, the projected pixel points are multiple pixels around the projected pixel coordinates in the second initial disparity map.
Optionally, the filling module 303 is specifically configured to:
determine the deviation between the disparity of a pixel in the target disparity area and the disparity of the projected pixel point;
determine, according to the deviation, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, the filling module 303 is specifically configured to:
determine the number of pixels in the target disparity area whose deviation is less than or equal to a preset deviation threshold;
determine, according to the number, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, the filling module 303 is specifically configured to:
when the number is greater than or equal to a preset number threshold, or the ratio of the number to the number of pixels in the target disparity area is greater than or equal to a preset ratio threshold, determine to fill the target disparity area into the filtered first initial disparity map.
Optionally, the determining module 302 is further configured to:
when the disparity of the projected pixel point corresponding to a pixel in the target disparity area is an invalid value, determine a second weight for the target pixel in the target disparity area.
The determining module 302 is specifically configured to:
when the disparity of the projected pixel point corresponding to a pixel in the target disparity area is a valid value, determine the deviation between the disparity of the pixel in the target disparity area and the disparity of the projected pixel point.
The filling module 303 is specifically configured to:
determine the pixels in the target disparity area whose deviation is less than or equal to the preset deviation threshold, and determine a first weight for those pixels, the first weight being greater than the second weight;
determine, according to the first weight and the second weight, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, the filling module 303 is specifically configured to:
determine the sum of the first weights and the second weights;
when the sum of the first weights and the second weights is greater than or equal to a preset weight threshold, determine to fill the target disparity area into the filtered first initial disparity map.
Optionally, the filling module 303 is specifically configured to:
determine the position coordinates of the spatial point corresponding to a pixel in the target disparity area;
project the spatial point into the second initial disparity map according to the position coordinates of the spatial point, so as to determine the projected pixel point in the second initial disparity map.
Optionally, the determining module 302 is specifically configured to:
determine the initial image areas that are filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
determine the target disparity area from the initial image areas, wherein the number of pixels in the target disparity area is greater than or equal to a preset number threshold.
It can be understood that the functions of the functional modules of the image processing device described in this embodiment of the present invention can be specifically implemented according to the method in the method embodiment of FIG. 1; for the specific implementation process, reference may be made to the relevant description of that method embodiment, which will not be repeated here.
Referring to FIG. 4, which is a schematic structural diagram of a movable platform provided by an embodiment of the present invention. The movable platform described in this embodiment includes: a processor 401, a memory 402, and a photographing device 403, connected via a bus.
The processor 401 may be a central processing unit (Central Processing Unit, CPU); the processor may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or any conventional processor.
The photographing device 403 may be a binocular camera or a monocular camera.
The memory 402 may include read-only memory and random-access memory, and provides program instructions and data to the processor 401. Part of the memory 402 may also include non-volatile random-access memory. The processor 401, when invoking the program instructions, is configured to execute:
acquiring an image output by the photographing device 403;
obtaining a first initial disparity map and a second initial disparity map from the image;
determining the target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
determining, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, the processor 401 is further configured to:
if so, fill the target disparity area into the filtered first initial disparity map.
Optionally, the processor 401 is further configured to:
determine depth information according to the filled first initial disparity map.
Optionally, the processor 401 is further configured to:
if not, determine depth information according to the filtered first initial disparity map.
Optionally, the processor 401 is specifically configured to:
project the spatial points corresponding to pixels in the target disparity area into the second initial disparity map, so as to determine projected pixel points in the second initial disparity map;
determine, according to the disparities of the pixels in the target disparity area and the disparities of the projected pixel points, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, when the projected pixel coordinates of the spatial point on the second initial disparity map include fractional coordinates, the projected pixel points are multiple pixels around the projected pixel coordinates in the second initial disparity map.
Optionally, the processor 401 is specifically configured to:
determine the deviation between the disparity of a pixel in the target disparity area and the disparity of the projected pixel point;
determine, according to the deviation, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, the processor 401 is specifically configured to:
determine the number of pixels in the target disparity area whose deviation is less than or equal to a preset deviation threshold;
determine, according to the number, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, the processor 401 is specifically configured to:
when the number is greater than or equal to a preset number threshold, or the ratio of the number to the number of pixels in the target disparity area is greater than or equal to a preset ratio threshold, determine to fill the target disparity area into the filtered first initial disparity map.
Optionally, the processor 401 is further configured to:
when the disparity of the projected pixel point corresponding to a pixel in the target disparity area is an invalid value, determine a second weight for the target pixel in the target disparity area.
The processor 401 is specifically configured to:
when the disparity of the projected pixel point corresponding to a pixel in the target disparity area is a valid value, determine the deviation between the disparity of the pixel in the target disparity area and the disparity of the projected pixel point;
determine the pixels in the target disparity area whose deviation is less than or equal to the preset deviation threshold, and determine a first weight for those pixels, the first weight being greater than the second weight;
determine, according to the first weight and the second weight, whether to fill the target disparity area into the filtered first initial disparity map.
Optionally, the processor 401 is specifically configured to:
determine the sum of the first weights and the second weights;
when the sum of the first weights and the second weights is greater than or equal to a preset weight threshold, determine to fill the target disparity area into the filtered first initial disparity map.
Optionally, the processor 401 is specifically configured to:
determine the position coordinates of the spatial point corresponding to a pixel in the target disparity area;
project the spatial point into the second initial disparity map according to the position coordinates of the spatial point, so as to determine the projected pixel point in the second initial disparity map.
Optionally, the processor 401 is specifically configured to:
determine the initial image areas that are filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
determine the target disparity area from the initial image areas, wherein the number of pixels in the target disparity area is greater than or equal to a preset number threshold.
In specific implementations, the processor 401, memory 402, and photographing device 403 described in this embodiment of the present invention can perform the implementations described in the image processing method provided in FIG. 1, as well as the implementation of the image processing device described in FIG. 3, which will not be repeated here.
An embodiment of the present invention also provides a computer storage medium storing program instructions which, when executed, may include part or all of the steps of the image processing method in the embodiment corresponding to FIG. 1.
It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as a series of combinations of actions; however, those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, read-only memory (Read-Only Memory, ROM), random-access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, etc.
The image processing method, device, and movable platform provided by the embodiments of the present invention have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (26)

  1. An image processing method, applied to a movable platform, wherein the movable platform includes a photographing device, characterized in that the method includes:
    acquiring an image output by the photographing device;
    obtaining a first initial disparity map and a second initial disparity map from the image;
    determining a target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
    determining, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map.
  2. The method according to claim 1, characterized in that the method further includes:
    if so, filling the target disparity area into the filtered first initial disparity map.
  3. The method according to claim 2, characterized in that the method further includes:
    determining depth information according to the filled first initial disparity map.
  4. The method according to any one of claims 1-3, characterized in that the method further includes:
    if not, determining depth information according to the filtered first initial disparity map.
  5. The method according to any one of claims 1-4, characterized in that the determining, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map includes:
    projecting spatial points corresponding to pixels in the target disparity area into the second initial disparity map, so as to determine projected pixel points in the second initial disparity map;
    determining, according to the disparities of the pixels in the target disparity area and the disparities of the projected pixel points, whether to fill the target disparity area into the filtered first initial disparity map.
  6. The method according to claim 5, characterized in that when the projected pixel coordinates of the spatial point on the second initial disparity map include fractional coordinates, the projected pixel points are multiple pixels around the projected pixel coordinates in the second initial disparity map.
  7. The method according to claim 5 or 6, characterized in that the determining, according to the disparities of the pixels in the target disparity area and the disparities of the projected pixel points, whether to fill the target disparity area into the filtered first initial disparity map includes:
    determining the deviation between the disparity of a pixel in the target disparity area and the disparity of the projected pixel point;
    determining, according to the deviation, whether to fill the target disparity area into the filtered first initial disparity map.
  8. The method according to claim 7, characterized in that the determining, according to the deviation, whether to fill the target disparity area into the filtered first initial disparity map includes:
    determining the number of pixels in the target disparity area whose deviation is less than or equal to a preset deviation threshold;
    determining, according to the number, whether to fill the target disparity area into the filtered first initial disparity map.
  9. The method according to claim 8, characterized in that the determining, according to the number, whether to fill the target disparity area into the filtered first initial disparity map includes:
    when the number is greater than or equal to a preset number threshold, or the ratio of the number to the number of pixels in the target disparity area is greater than or equal to a preset ratio threshold, determining to fill the target disparity area into the filtered first initial disparity map.
  10. The method according to claim 7, characterized in that the method further includes:
    when the disparity of the projected pixel point corresponding to a pixel in the target disparity area is an invalid value, determining a second weight for the target pixel in the target disparity area;
    the determining the deviation between the disparity of a pixel in the target disparity area and the disparity of the projected pixel point includes:
    when the disparity of the projected pixel point corresponding to a pixel in the target disparity area is a valid value, determining the deviation between the disparity of the pixel in the target disparity area and the disparity of the projected pixel point;
    the determining, according to the deviation, whether to fill the target disparity area into the filtered first initial disparity map includes:
    determining the pixels in the target disparity area whose deviation is less than or equal to the preset deviation threshold, and determining a first weight for those pixels, the first weight being greater than the second weight;
    determining, according to the first weight and the second weight, whether to fill the target disparity area into the filtered first initial disparity map.
  11. The method according to claim 10, characterized in that the determining, according to the first weight and the second weight, whether to fill the target disparity area into the filtered first initial disparity map includes:
    determining the sum of the first weights and the second weights;
    when the sum of the first weights and the second weights is greater than or equal to a preset weight threshold, determining to fill the target disparity area into the filtered first initial disparity map.
  12. The method according to claim 5, characterized in that the projecting spatial points corresponding to pixels in the target disparity area into the second initial disparity map, so as to determine projected pixel points in the second initial disparity map, includes:
    determining the position coordinates of the spatial point corresponding to a pixel in the target disparity area;
    projecting the spatial point into the second initial disparity map according to the position coordinates of the spatial point, so as to determine the projected pixel point in the second initial disparity map.
  13. The method according to any one of claims 1-12, characterized in that the determining a target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing includes:
    determining the initial image areas that are filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
    determining the target disparity area from the initial image areas, wherein the number of pixels in the target disparity area is greater than or equal to a preset number threshold.
  14. A movable platform, characterized by including: a processor, a photographing device, and a memory, wherein:
    the memory is configured to store program instructions;
    the processor, when invoking the program instructions, is configured to execute:
    acquiring an image output by the photographing device;
    obtaining a first initial disparity map and a second initial disparity map from the image;
    determining a target disparity area that is filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
    determining, according to the second initial disparity map, whether to fill the target disparity area into the filtered first initial disparity map.
  15. The movable platform according to claim 14, characterized in that the processor is further configured to:
    if so, fill the target disparity area into the filtered first initial disparity map.
  16. The movable platform according to claim 15, characterized in that the processor is further configured to:
    determine depth information according to the filled first initial disparity map.
  17. The movable platform according to any one of claims 14-16, characterized in that the processor is further configured to:
    if not, determine depth information according to the filtered first initial disparity map.
  18. The movable platform according to any one of claims 14-17, characterized in that the processor is specifically configured to:
    project spatial points corresponding to pixels in the target disparity area into the second initial disparity map, so as to determine projected pixel points in the second initial disparity map;
    determine, according to the disparities of the pixels in the target disparity area and the disparities of the projected pixel points, whether to fill the target disparity area into the filtered first initial disparity map.
  19. The movable platform according to claim 18, characterized in that when the projected pixel coordinates of the spatial point on the second initial disparity map include fractional coordinates, the projected pixel points are multiple pixels around the projected pixel coordinates in the second initial disparity map.
  20. The movable platform according to claim 18 or 19, characterized in that the processor is specifically configured to:
    determine the deviation between the disparity of a pixel in the target disparity area and the disparity of the projected pixel point;
    determine, according to the deviation, whether to fill the target disparity area into the filtered first initial disparity map.
  21. The movable platform according to claim 20, characterized in that the processor is specifically configured to:
    determine the number of pixels in the target disparity area whose deviation is less than or equal to a preset deviation threshold;
    determine, according to the number, whether to fill the target disparity area into the filtered first initial disparity map.
  22. The movable platform according to claim 21, characterized in that the processor is specifically configured to:
    when the number is greater than or equal to a preset number threshold, or the ratio of the number to the number of pixels in the target disparity area is greater than or equal to a preset ratio threshold, determine to fill the target disparity area into the filtered first initial disparity map.
  23. The movable platform according to claim 20, characterized in that the processor is further configured to:
    when the disparity of the projected pixel point corresponding to a pixel in the target disparity area is an invalid value, determine a second weight for the target pixel in the target disparity area;
    the processor is specifically configured to:
    when the disparity of the projected pixel point corresponding to a pixel in the target disparity area is a valid value, determine the deviation between the disparity of the pixel in the target disparity area and the disparity of the projected pixel point;
    determine the pixels in the target disparity area whose deviation is less than or equal to the preset deviation threshold, and determine a first weight for those pixels, the first weight being greater than the second weight;
    determine, according to the first weight and the second weight, whether to fill the target disparity area into the filtered first initial disparity map.
  24. The movable platform according to claim 23, characterized in that the processor is specifically configured to:
    determine the sum of the first weights and the second weights;
    when the sum of the first weights and the second weights is greater than or equal to a preset weight threshold, determine to fill the target disparity area into the filtered first initial disparity map.
  25. The movable platform according to claim 18, characterized in that the processor is specifically configured to:
    determine the position coordinates of the spatial point corresponding to a pixel in the target disparity area;
    project the spatial point into the second initial disparity map according to the position coordinates of the spatial point, so as to determine the projected pixel point in the second initial disparity map.
  26. The movable platform according to any one of claims 14-25, characterized in that the processor is specifically configured to:
    determine the initial image areas that are filtered out of the first initial disparity map after the first initial disparity map is subjected to filtering processing;
    determine the target disparity area from the initial image areas, wherein the number of pixels in the target disparity area is greater than or equal to a preset number threshold.
PCT/CN2020/082360 2020-03-31 2020-03-31 Image processing method and movable platform WO2021195940A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/082360 WO2021195940A1 (zh) 2020-03-31 2020-03-31 Image processing method and movable platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/082360 WO2021195940A1 (zh) 2020-03-31 2020-03-31 Image processing method and movable platform

Publications (1)

Publication Number Publication Date
WO2021195940A1 true WO2021195940A1 (zh) 2021-10-07

Family

ID=77927092

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082360 WO2021195940A1 (zh) Image processing method and movable platform

Country Status (1)

Country Link
WO (1) WO2021195940A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102547338A (zh) * 2011-12-05 2012-07-04 四川虹微技术有限公司 DIBR system suitable for 3D television
CN104680510A (zh) * 2013-12-18 2015-06-03 北京大学深圳研究生院 Radar disparity map optimization method, and stereo matching disparity map optimization method and system
CN104915927A (zh) * 2014-03-11 2015-09-16 株式会社理光 Disparity image optimization method and apparatus
US20170019654A1 (en) * 2015-07-15 2017-01-19 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN106355570A (zh) * 2016-10-21 2017-01-25 昆明理工大学 Binocular stereo vision matching method combining depth features
CN108401460A (zh) * 2017-09-29 2018-08-14 深圳市大疆创新科技有限公司 Method, system, storage medium and computer program product for generating a disparity map
CN108564536A (zh) * 2017-12-22 2018-09-21 洛阳中科众创空间科技有限公司 Global optimization method for depth maps
CN110533701A (zh) * 2018-05-25 2019-12-03 杭州海康威视数字技术股份有限公司 Image disparity determination method, apparatus and device


Similar Documents

Publication Publication Date Title
US11830211B2 (en) Disparity map acquisition method and apparatus, device, control system and storage medium
CN106960454B (zh) Depth-of-field obstacle avoidance method and device, and unmanned aerial vehicle
JP2017112602A (ja) Image calibration, stitching and depth reconstruction method for a panoramic fisheye camera, and system therefor
JP2019510234A (ja) Depth information acquisition method and apparatus, and image acquisition device
JP7227969B2 (ja) Three-dimensional reconstruction method and three-dimensional reconstruction apparatus
US20220156954A1 (en) Stereo matching method, image processing chip and mobile vehicle
CN110602474B (zh) Method, apparatus and device for determining image disparity
CN113129241B (zh) Image processing method and apparatus, computer-readable medium, and electronic device
US9619886B2 (en) Image processing apparatus, imaging apparatus, image processing method and program
CN109658497B (zh) Three-dimensional model reconstruction method and apparatus
CN110458952B (zh) Trinocular-vision-based three-dimensional reconstruction method and apparatus
CN115035235A (zh) Three-dimensional reconstruction method and apparatus
CN112184603B (zh) Point cloud fusion method and apparatus, electronic device, and computer storage medium
JP5824953B2 (ja) Image processing apparatus, image processing method, and imaging apparatus
CN110852979A (zh) Point cloud registration and fusion method based on phase information matching
CN112150518B (zh) Attention-mechanism-based image stereo matching method and binocular device
WO2021104308A1 (zh) Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera
CN110782507A (zh) Texture map generation method and system based on a face mesh model, and electronic device
CN111882655A (zh) Three-dimensional reconstruction method, apparatus, system, computer device, and storage medium
CN114419568A (zh) Multi-view pedestrian detection method based on feature fusion
WO2020181510A1 (zh) Image data processing method, apparatus, and system
WO2021193672A1 (ja) Three-dimensional model generation method and three-dimensional model generation apparatus
CN111160233B (zh) Face liveness detection method, medium and system based on three-dimensional imaging assistance
WO2021195940A1 (zh) Image processing method and movable platform
CN107845108B (zh) Optical flow value calculation method, apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20928147

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20928147

Country of ref document: EP

Kind code of ref document: A1