WO2019033777A1 - Method and device for improving 3D image depth information, and unmanned aerial vehicle - Google Patents

Method and device for improving 3D image depth information, and unmanned aerial vehicle

Info

Publication number
WO2019033777A1
WO2019033777A1 · PCT/CN2018/084072 · CN2018084072W
Authority
WO
WIPO (PCT)
Prior art keywords
exposure
image
exposure time
effective pixel
pixel points
Prior art date
Application number
PCT/CN2018/084072
Other languages
English (en)
French (fr)
Inventor
简羽鹏
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司
Priority to EP18846940.7A priority Critical patent/EP3663791B1/en
Publication of WO2019033777A1 publication Critical patent/WO2019033777A1/zh
Priority to US16/793,931 priority patent/US11030762B2/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/933Lidar systems specially adapted for specific applications for anti-collision purposes of aircraft or spacecraft
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • G01S7/486Receivers
    • G01S7/4868Controlling received signal intensity or exposure of sensor
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491Details of non-pulse systems
    • G01S7/4912Receivers
    • G01S7/4918Controlling received signal intensity, gain or exposure of sensor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • B64U2101/30UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U30/00Means for producing lift; Empennages; Arrangements thereof
    • B64U30/20Rotors; Rotor supports
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/32Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S17/36Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10144Varying exposure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • Embodiments of the present invention relate to the field of drone technology, and in particular, to a method, device, and drone for improving 3D image depth information.
  • During flight, the drone uses a 3D depth camera to capture 3D images; depth-of-field information is obtained from the 3D images, the position of obstacles is determined according to the depth of field, and ranging and obstacle avoidance are performed.
  • However, in a frame captured by an existing 3D depth camera, some pixel regions are poorly exposed because of differing depth layers or the low reflectance of reflective surfaces, resulting in incomplete images or low confidence in local pixel points.
  • TOF is the abbreviation of Time of Flight.
  • The technical problem to be solved by the embodiments of the present invention is to provide a method and device for improving the depth information of a 3D image, and a drone, which can solve the prior-art problem that poor exposure in pictures captured by a 3D depth camera results in poor image quality.
  • a technical solution adopted by an embodiment of the present invention is to provide a method for improving depth information of a 3D image.
  • The method includes:
  • the effective pixel points are calibrated and corrected to generate a 3D depth information map.
  • Before the acquiring a frame of original 3D image, performing the first exposure on the original 3D image, and generating the first exposure image, the method further includes:
  • An exposure time gradient array is preset; the gradient array contains a number of exposure times arranged from the shortest to the longest, where the longest exposure time is the exposure time threshold.
  • the acquiring a frame of the original 3D image, and performing the first exposure on the original 3D image to generate the first exposure image includes:
  • the first exposure is performed on the original 3D image according to the first exposure time to generate the first exposure image.
  • If the number of effective pixel points in the first exposure image does not satisfy the preset condition, a second exposure time that is adjacent to and greater than the first exposure time is acquired from the exposure time gradient array, and the original 3D image is subjected to a second exposure to generate a second exposure image.
  • the exposure is continued and the corresponding exposure image is generated until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image generated by the exposure meets a preset condition, including:
  • If the number of effective pixel points in the second exposure image does not satisfy the preset condition, a third exposure time that is adjacent to and greater than the second exposure time is acquired from the exposure time gradient array, and the original 3D image is subjected to a third exposure to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition.
  • the acquiring the effective pixel points in the first exposure image includes:
  • acquiring the pixel points of the first exposure image and determining whether each pixel point satisfies a preset signal quality parameter; if a pixel point satisfies the preset signal quality parameter, the pixel point is marked as an effective pixel point, and all the effective pixel points in the first exposure image are acquired.
  • Another technical solution adopted by the embodiments of the present invention is to provide an apparatus for improving 3D image depth information, where the apparatus includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executed by the processor, the computer program implements the following steps:
  • The effective pixel points in the exposure image are extracted.
  • the effective pixel points are calibrated and corrected to generate a 3D depth information map.
  • When executed by the processor, the computer program further implements the following steps:
  • An exposure time gradient array is preset; the gradient array contains a number of exposure times arranged from the shortest to the longest, where the longest exposure time is the exposure time threshold.
  • When executed by the processor, the computer program further implements the following steps:
  • the first exposure is performed on the original 3D image according to the first exposure time to generate the first exposure image.
  • When executed by the processor, the computer program further implements the following steps:
  • If the number of effective pixel points in the first exposure image does not satisfy the preset condition, a second exposure time that is adjacent to and greater than the first exposure time is acquired from the exposure time gradient array, and the original 3D image is subjected to a second exposure to generate a second exposure image.
  • When executed by the processor, the computer program further implements the following steps:
  • If the number of effective pixel points in the second exposure image does not satisfy the preset condition, a third exposure time that is adjacent to and greater than the second exposure time is acquired from the exposure time gradient array, and the original 3D image is subjected to a third exposure to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition.
  • When executed by the processor, the computer program further implements the following steps:
  • acquiring the pixel points of the first exposure image and determining whether each pixel point satisfies a preset signal quality parameter; if a pixel point satisfies the preset signal quality parameter, the pixel point is marked as an effective pixel point, and all the effective pixel points in the first exposure image are acquired.
  • Another technical solution adopted by the embodiments of the present invention is to provide a drone, which includes:
  • a propeller comprising a hub and a blade, the hub being coupled to a rotating shaft of the motor, and driving the blade to rotate when the rotating shaft of the motor rotates to generate a force for moving the drone;
  • a camera mounted on the housing
  • a processor configured to perform the foregoing method for improving 3D image depth information.
  • Another embodiment of the present invention provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method of improving 3D image depth information as described above.
  • Another embodiment of the present invention provides a non-transitory computer readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the method of improving 3D image depth information as described above.
  • The embodiments of the invention provide a method and device for improving the depth information of a 3D image, and a drone. By performing multiple exposures on the image captured by the 3D camera, and extracting and then calibrating and correcting the effective pixel points of the exposure image that meets the preset condition, a 3D depth information map is generated.
  • The embodiments of the present invention can effectively improve the quality of the 3D depth image and better restore the real 3D scene, which facilitates obstacle avoidance by the drone.
  • FIG. 1 is a schematic flowchart of a method for improving depth information of a 3D image according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of the effective pixel point acquisition in a method for improving 3D image depth information according to another embodiment of the present invention;
  • FIG. 3 is a hardware structural diagram of an apparatus for improving 3D image depth information according to another embodiment of the present invention.
  • FIG. 4 is a schematic diagram of the program modules of an apparatus for improving 3D image depth information according to an embodiment of the present invention;
  • FIG. 5 is a structural block diagram of a drone according to another embodiment of the present invention.
  • When capturing images, the existing 3D camera of a drone often uses a single exposure. A single exposure can leave some regions of the 3D depth information in a frame with underexposed pixel points or reflections that are too weak, causing misjudgment of distance information and endangering the flight safety of the drone.
  • During flight, the drone needs to obtain the distance between itself and obstacles in real time.
  • The drone captures an original 3D image through a 3D camera and exposes it. Because the quality of the exposure image directly affects the generation of the 3D depth information map, a single exposure may yield an image signal that is too weak or too strong: a signal that is too weak indicates that the reflected signal is too weak, and a signal that is too strong indicates that the reflected light is too strong. In either case the final 3D depth information map is affected.
  • In the embodiments of the present invention, a 3D camera refers to a device that measures depth of field based on TOF technology. The generated image encodes the distance between the emission point and the reflection point; different colors represent different distances, reflecting different levels of 3D depth information in the scene.
  • the 3D camera acquires the depth of field information in the 3D image by using the TOF technology, and determines the position of the obstacle according to the depth of field, thereby avoiding obstacles.
  • TOF is an abbreviation of Time of Flight. The sensor emits modulated near-infrared light or laser light; after the light is reflected by an object, the distance to the photographed scene is calculated from the time difference or phase difference between light emission and reflection, generating depth distance information.
  • Therefore, a frame of original 3D image is acquired and subjected to a first exposure to generate a first exposure image. If the number of effective pixel points in the first exposure image satisfies a preset condition, the effective pixel points in the first exposure image are extracted.
  • The effective pixel points are calibrated and corrected to generate a 3D depth information map; the preset condition includes, but is not limited to, a threshold number of effective pixel points.
  • the exposure time of the first exposure is recorded as the first exposure time.
  • If the number of effective pixel points in the first exposure image does not satisfy the preset condition, a second exposure is performed on the captured original 3D image to generate a second exposure image, where the second exposure time is longer than the first exposure time.
  • If the number of effective pixel points in the second exposure image satisfies the preset condition, the effective pixel points in the second exposure image are extracted, calibrated, and corrected to generate a 3D depth information map.
  • If the number of effective pixel points in the second exposure image does not satisfy the preset condition, the exposure is continued and a corresponding exposure image is generated, each exposure time being longer than the previous one, until the exposure time reaches the preset exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition.
  • At that point the effective pixel points in the corresponding image are extracted, calibrated, and corrected to generate a 3D depth information map. In this way the effective pixel points of the best-qualified exposure image among the multiple exposures are calibrated and corrected, effectively improving the quality of the 3D depth image and better restoring the real 3D scene.
  • Referring to FIG. 1, FIG. 1 is a schematic flowchart of a method for improving depth information of a 3D image according to an embodiment of the present invention. This embodiment includes:
  • Step S100: Acquire a frame of original 3D image, and perform a first exposure on the original 3D image to generate a first exposure image.
  • In a specific implementation, the drone captures real-time video through the 3D camera during flight, processes each frame of the video, and acquires a 3D depth image to determine the distance between the drone and obstacles. A frame of original 3D image is acquired and a first exposure is performed on it; the exposure time of the first exposure may be preset by the user, and the first exposure image generated by the first exposure is obtained.
  • Step S200: Acquire the effective pixel points of the first exposure image, and determine whether the number of effective pixel points satisfies a preset condition; if yes, execute step S300, and if no, execute step S400.
  • If the number of effective pixel points satisfies the preset condition, the effective pixel points in the first exposure image are extracted; otherwise, the original 3D image is subjected to a second exposure to generate a second exposure image.
  • In a specific implementation, the first exposure image generated in step S100 is obtained, and the effective pixel points in the first exposure image are acquired, where an effective pixel point is a pixel point that satisfies a preset signal quality parameter.
  • The signal quality parameters include, but are not limited to, signal amplitude and signal-to-noise ratio.
  • The preset signal quality parameter may be a preset signal amplitude interval or a preset signal-to-noise ratio interval.
  • Step S300: If the number of effective pixel points satisfies the preset condition, extract the effective pixel points in the first exposure image.
  • In a specific implementation, the preset condition includes, but is not limited to, a threshold on the number of effective pixel points. For example, if the number of effective pixel points in the first exposure image reaches the preset threshold, the effective pixel points in the first exposure image are extracted.
  • Step S400: If the number of effective pixel points does not satisfy the preset condition, perform a second exposure on the original 3D image to generate a second exposure image.
  • In a specific implementation, if the number of effective pixel points in the first exposure image does not reach the preset threshold, a second exposure is performed on the original 3D image; the second exposure time differs from, and is preferably longer than, the first exposure time, and the second exposure image generated by the second exposure is acquired.
  • Step S500: Determine whether the number of effective pixel points in the second exposure image satisfies the preset condition; if yes, execute step S600, and if no, execute step S700.
  • Step S600: If the number of effective pixel points in the second exposure image satisfies the preset condition, extract the effective pixel points in the second exposure image.
  • In a specific implementation, the second exposure image generated in step S400 is obtained, and if the number of effective pixel points in the second exposure image satisfies the preset condition, the effective pixel points in the second exposure image are extracted.
  • Step S700: If the number of effective pixel points in the second exposure image does not satisfy the preset condition, continue to expose and generate corresponding exposure images, and determine whether the number of effective pixel points of each exposure image satisfies the preset condition; when the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, extract the effective pixel points in that exposure image.
  • In a specific implementation, if the number of effective pixel points in the second exposure image does not satisfy the preset condition, the exposure is continued and a corresponding exposure image is generated; it is then determined again whether the number of effective pixel points of the exposure image satisfies the preset condition, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, after which the effective pixel points of the corresponding exposure image are extracted.
  • The preset condition is that the number of effective pixel points produced by the exposure exceeds a certain trigger count, or that it equals the total number of pixels in the image.
  • The trigger count is a preset threshold smaller than the number of pixels in the exposure image. For example, if the exposure image contains 200 pixels and the trigger count is 180, then detecting more than 180 effective pixel points means that the effective pixel points in the exposure image satisfy the preset condition.
  • Step S800: After calibrating and correcting the effective pixel points, generate a 3D depth information map.
  • In a specific implementation, the effective pixel points extracted in step S300, step S600, or step S700 are calibrated and corrected in turn to generate a high-quality frame of the 3D depth information map.
  • The calibration and correction are performed according to the parameters of the 3D camera.
  • Optionally, before step S100, the method further includes the following step:
  • Step S10: Preset an exposure time gradient array, where the gradient array contains a number of exposure times arranged from the shortest to the longest, and the longest exposure time is the exposure time threshold.
  • In a specific implementation, the exposure time gradient array is an array of known exposure durations arranged from shortest to longest. After each exposure, whether to perform the next exposure is determined by whether the number of effective pixel points satisfies the preset condition; if not, the next larger gradient value in the exposure time gradient array is selected as the exposure time.
  • Optionally, step S100 includes:
  • Step S101: Acquire a frame of original 3D image, and obtain one exposure time of the exposure time gradient array as the first exposure time;
  • Step S102: Perform a first exposure on the original 3D image according to the first exposure time to generate a first exposure image.
  • In a specific implementation, the 3D camera captures a frame of original 3D image and obtains an exposure time from the preset exposure time gradient array as the first exposure time, preferably the shortest exposure time in the array; a first exposure is then performed on the acquired original 3D image according to the first exposure time to generate a first exposure image.
  • Optionally, step S400 includes:
  • If the number of effective pixel points in the first exposure image does not satisfy the preset condition, a second exposure is performed on the original 3D image according to the second exposure time to generate a second exposure image, where the second exposure time is the gradient value following the first exposure time in the exposure time gradient array.
  • Optionally, in step S700, continuing to expose and generating the corresponding exposure image until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition specifically includes:
  • If the number of effective pixel points in the second exposure image does not satisfy the preset condition, a third exposure time that is adjacent to and greater than the second exposure time is obtained from the exposure time gradient array, and a third exposure is performed on the original 3D image to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition.
  • In a specific implementation, if the third exposure time is the exposure time threshold of the exposure time gradient array, the acquisition of effective pixel points ends regardless of whether the marked effective pixel points of the third exposure image satisfy the preset condition, and the effective pixel points in the third exposure image are extracted.
  • Referring to FIG. 2, FIG. 2 is a schematic flowchart of the effective pixel point acquisition in a method for improving depth information of a 3D image according to another embodiment of the present invention. As shown in FIG. 2, acquiring the effective pixel points in the first exposure image in step S200 includes:
  • Step S201: Acquire the pixel points of the first exposure image, and determine whether each pixel point satisfies the preset signal quality parameter;
  • Step S202: If a pixel point satisfies the preset signal quality parameter, mark the pixel point as an effective pixel point, and acquire all effective pixel points of the first exposure image;
  • Step S203: Otherwise, perform no processing on the pixel point.
  • In a specific implementation, all pixel points of the first exposure image are obtained, and each pixel point's signal quality parameter is checked in turn against the preset signal quality parameter. If a pixel point's signal quality parameter satisfies the preset parameter, the pixel point is marked as an effective pixel point; all pixel points in the first exposure image are marked according to this rule, and all marked effective pixel points of the first exposure image are acquired. If a pixel point's signal quality parameter does not satisfy the preset parameter, no processing is performed on that pixel point.
  • The signal quality parameters include, but are not limited to, signal amplitude and signal-to-noise ratio. This embodiment takes the signal amplitude as an example. Denote the signal amplitude as amp: if amp < 100, the reflected signal is too weak and the data are unreliable and should be discarded; if amp > 100 and amp < 1000, the data are normal and valid; if amp > 1000, the pixel is saturated and the data are invalid and should be discarded. Determining whether a pixel point is effective therefore amounts to determining whether its signal amplitude lies between 100 and 1000.
  • FIG. 3 is a hardware structural diagram of an apparatus for improving 3D image depth information according to an embodiment of the present invention.
  • In this embodiment, the device 10 for improving 3D image depth information includes a memory 101 and a processor 102.
  • The device 10 for improving 3D image depth information may be a standalone electronic device for processing 3D images, such as a 3D camera, or an accessory device attached to, or cooperating with, a photographing device such as a camera to achieve 3D image processing.
  • The memory 101 includes at least one type of readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, or an optical disk.
  • The processor 102 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip.
  • FIG. 4 is a schematic diagram of a program module of an apparatus for improving depth information of a 3D image according to another embodiment of the present invention.
  • The apparatus 10 for improving 3D image depth information includes an exposure module 100, a first effective pixel point extraction module 200, a second effective pixel point extraction module 300, and a calibration and correction module 400.
  • The modules are configured to be executed by one or more processors (the processor 102 in this embodiment) to carry out the present invention.
  • A module in the present invention is a computer program segment that performs a particular function.
  • The memory 101 is used to store data such as the program code of the device 10 for improving 3D image depth information.
  • The processor 102 is used to execute the program code stored in the memory 101.
  • the exposure module 100 is configured to acquire a frame of the original 3D image, perform a first exposure on the original 3D image, and generate a first exposure image;
  • The first effective pixel point extraction module 200 is configured to acquire the effective pixel points of the first exposure image, and if the number of effective pixel points satisfies the preset condition, extract the effective pixel points in the first exposure image; otherwise, subject the original 3D image to a second exposure to generate a second exposure image;
  • The second effective pixel point extraction module 300 is configured to extract the effective pixel points in the second exposure image if the number of effective pixel points in the second exposure image satisfies the preset condition; otherwise, to continue exposing and generating corresponding exposure images and determining whether the number of effective pixel points of each exposure image satisfies the preset condition, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, and then extract the effective pixel points in the corresponding image;
  • The calibration and correction module 400 is configured to calibrate and correct the effective pixel points to generate a 3D depth information map.
  • In a specific implementation, for example, the drone captures real-time video through the 3D camera during flight, and after processing each frame of the video, a 3D depth image needs to be acquired to determine the distance between the drone and obstacles.
  • the exposure module 100 is configured to acquire a frame of the original 3D image, and perform the first exposure on the original 3D image, wherein the exposure time of the first exposure may be preset by the user to obtain the first exposure image generated by the first exposure.
  • The first effective pixel point extraction module 200 is configured to obtain the first exposure image acquired by the exposure module 100, and to acquire the effective pixel points in the first exposure image, where an effective pixel point is a pixel point that satisfies a preset signal quality parameter.
  • The signal quality parameters include, but are not limited to, signal amplitude and signal-to-noise ratio.
  • The preset signal quality parameter may be a preset signal amplitude interval or a preset signal-to-noise ratio interval.
  • The preset condition includes, but is not limited to, a threshold on the number of effective pixel points. For example, if the number of effective pixel points in the first exposure image reaches the preset threshold, the effective pixel points in the first exposure image are extracted.
  • If the number of effective pixel points in the first exposure image does not reach the preset threshold, a second exposure is performed on the original 3D image; the second exposure time differs from, and is preferably longer than, the first exposure time, and the second exposure image generated by the second exposure is acquired.
  • The second effective pixel point extraction module 300 is configured to obtain the second exposure image from the first effective pixel point extraction module 200, and if the number of effective pixel points in the second exposure image satisfies the preset condition, extract the effective pixel points in the second exposure image.
  • If the number of effective pixel points in the second exposure image does not satisfy the preset condition, the exposure is continued and a corresponding exposure image is generated; it is then determined again whether the number of effective pixel points of the exposure image satisfies the preset condition, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, after which the effective pixel points of the corresponding exposure image are extracted.
  • The preset condition is that the number of effective pixel points produced by the exposure exceeds a certain trigger count, or that it equals the total number of pixels in the image.
  • The trigger count is a preset threshold smaller than the number of pixels in the exposure image. For example, if the exposure image contains 200 pixels and the trigger count is 180, then detecting more than 180 effective pixel points means that the effective pixel points in the exposure image satisfy the preset condition.
  • The calibration and correction module 400 is configured to calibrate and correct the extracted effective pixel points in turn to generate a high-quality frame of the 3D depth information map.
  • The calibration and correction are performed according to the parameters of the 3D camera.
  • Optionally, the device further includes:
  • a presetting module, configured to preset an exposure time gradient array, where the gradient array contains a number of exposure times arranged from the shortest to the longest, and the longest exposure time is the exposure time threshold.
  • In a specific implementation, if the exposure time were allowed to extend without limit after the 3D camera captures an image, the pixel points could not be calibrated and corrected, which would hinder the drone's real-time acquisition of the 3D depth information map. If the exposure time is too short, most of the pixels in the image are underexposed, which easily causes misjudgment of distance information. Therefore, an exposure time gradient array needs to be preset to ensure the quality of the image exposure without prolonging the acquisition time of the 3D depth information map.
  • The presetting module is used to preset the exposure time gradient array, which is an array of known exposure durations arranged from shortest to longest. After each exposure, whether to perform the next exposure is determined by whether the number of effective pixel points satisfies the preset condition; if not, the next larger gradient value in the exposure time gradient array is selected as the exposure time.
  • Optionally, the exposure module 100 is further configured to:
  • acquire a frame of original 3D image, obtain one exposure time of the exposure time gradient array as the first exposure time, and perform a first exposure on the original 3D image according to the first exposure time to generate a first exposure image.
  • In a specific implementation, the exposure module 100 is further configured to: after the 3D camera captures a frame of original 3D image, obtain an exposure time from the preset exposure time gradient array as the first exposure time, preferably the shortest exposure time in the array, and perform a first exposure on the acquired original 3D image according to the first exposure time to generate a first exposure image.
  • Optionally, the first effective pixel point extraction module 200 is further configured to:
  • In a specific implementation, the first effective pixel point extraction module 200 is further configured to: if the number of effective pixel points in the first exposure image does not satisfy the preset condition, obtain from the exposure time gradient array a second exposure time that is adjacent to and greater than the first exposure time, and perform a second exposure on the original 3D image according to the second exposure time to generate a second exposure image, where the second exposure time is the gradient value following the first exposure time in the exposure time gradient array.
  • Optionally, the second effective pixel point extraction module 300 is further configured to:
  • if the number of effective pixel points in the second exposure image does not satisfy the preset condition, obtain from the exposure time gradient array a third exposure time that is adjacent to and greater than the second exposure time, and perform a third exposure on the original 3D image to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition.
  • In a specific implementation, the second effective pixel point extraction module 300 is further configured to: if the number of effective pixel points in the second exposure image does not satisfy the preset condition, obtain from the exposure time gradient array a third exposure time that is adjacent to and greater than the second exposure time, and perform a third exposure on the original 3D image to generate a third exposure image. If the third exposure time is the exposure time threshold of the exposure time gradient array, the acquisition of effective pixel points ends regardless of whether the marked effective pixel points of the third exposure image satisfy the preset condition, and the effective pixel points in the third exposure image are extracted.
  • Optionally, the first effective pixel point extraction module 200 is further configured to:
  • acquire the pixel points of the first exposure image, determine whether each pixel point satisfies the preset signal quality parameter, and if a pixel point satisfies the preset signal quality parameter, mark the pixel point as an effective pixel point and acquire all effective pixel points of the first exposure image.
  • In a specific implementation, the first effective pixel point extraction module 200 is further configured to acquire the pixel points of the first exposure image and determine whether each pixel point's signal quality parameter satisfies the preset signal quality parameter; if it does, the pixel point is marked as an effective pixel point, all pixel points in the first exposure image are marked according to this rule, and all marked effective pixel points of the first exposure image are acquired.
  • The signal quality parameters include, but are not limited to, signal amplitude and signal-to-noise ratio. This embodiment takes the signal amplitude as an example. Denote the signal amplitude as amp: if amp < 100, the reflected signal is too weak and the data are unreliable and should be discarded; if amp > 100 and amp < 1000, the data are normal and valid; if amp > 1000, the pixel is saturated and the data are invalid and should be discarded. Determining whether a pixel point is effective therefore amounts to determining whether its signal amplitude lies between 100 and 1000.
  • As shown in FIG. 5, another embodiment of the present invention provides a drone 600.
  • The drone 600 includes a housing 610; an arm 620; a motor 630 mounted on the arm 620; and a propeller 640 including a hub 641 and blades 642, the hub 641 being coupled to the rotating shaft of the motor 630.
  • When the rotating shaft of the motor 630 rotates, it drives the blades 642 to rotate so as to generate a force that moves the drone 600; a camera 650 is mounted on the housing 610.
  • The drone 600 also includes a processor 660 configured to perform method steps S100 to S800 in FIG. 1 and method steps S201 to S203 in FIG. 2, implementing the functions of modules 100-400 in FIG. 4.
  • Embodiments of the present invention provide a non-transitory computer readable storage medium storing computer-executable instructions that are executed by one or more processors, for example, to perform method steps S100 to S800 in FIG. 1 and method steps S201 to S203 in FIG. 2 described above, implementing the functions of modules 100-400 in FIG. 4.
  • Another embodiment of the present invention provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method of improving 3D image depth information as described above, for example, performing method steps S100 to S800 in FIG. 1 and method steps S201 to S203 in FIG. 2, implementing the functions of modules 100-400 in FIG. 4.
  • The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • Through the description of the above embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by means of software plus a general-purpose hardware platform, and of course can also be implemented by hardware.
  • Based on this understanding, the above technical solution, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The computer software product may be stored in a computer readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform the methods described in the various embodiments or in portions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for improving 3D image depth information, the method comprising: acquiring a frame of original 3D image, and performing a first exposure on the original 3D image to generate a first exposure image; if the number of effective pixel points in the first exposure image satisfies a preset condition, extracting the effective pixel points in the first exposure image; otherwise, continuing to expose the original 3D image and generating corresponding exposure images, and determining whether the number of effective pixel points in each exposure image satisfies the preset condition, until the exposure time reaches an exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition, and then extracting the effective pixel points in the corresponding image; and calibrating and correcting the effective pixel points to generate a 3D depth information map. Also provided are a device for improving 3D image depth information and a drone. By extracting, through multiple exposures, the effective pixel points of a qualifying exposure image to generate the 3D depth information map, the quality of the 3D image is improved.

Description

Method and device for improving 3D image depth information, and unmanned aerial vehicle
This application claims priority to Chinese Patent Application No. 201710714161.0, filed on August 18, 2017 and entitled "Method and device for improving 3D image depth information, and unmanned aerial vehicle", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present invention relate to the field of unmanned aerial vehicle (drone) technology, and in particular to a method and device for improving 3D image depth information, and a drone.
Background
During flight, a drone uses a 3D depth camera to capture 3D images; depth-of-field information is obtained from the 3D images, the position of obstacles is determined according to the depth of field, and ranging and obstacle avoidance are performed. However, in a frame captured by an existing 3D depth camera, some pixel regions are poorly exposed because of differing depth layers or the low reflectance of reflective surfaces, resulting in incomplete images or low confidence in local pixel points.
In prior-art solutions for TOF ranging, the above problem is usually addressed by increasing the exposure time or adding illumination, where TOF is the abbreviation of Time of Flight. Although these methods can improve the signal quality of weakly reflective regions to some extent, they may saturate strongly reflective regions and cause overexposure. For drone area-array obstacle avoidance applications, overexposure may cause obstacles to be misjudged or missed, leading to collisions between the drone and obstacles.
Summary
The technical problem mainly solved by the embodiments of the present invention is to provide a method and device for improving 3D image depth information, and a drone, which can solve the prior-art problem that poor exposure in pictures captured by a 3D depth camera results in poor image quality.
To solve the above technical problem, one technical solution adopted by the embodiments of the present invention is to provide a method for improving 3D image depth information.
The method includes:
acquiring a frame of original 3D image, and performing a first exposure on the original 3D image to generate a first exposure image;
acquiring effective pixel points in the first exposure image, and if the number of effective pixel points satisfies a preset condition, extracting the effective pixel points in the first exposure image; otherwise, performing a second exposure on the original 3D image to generate a second exposure image;
if the number of effective pixel points in the second exposure image satisfies the preset condition, extracting the effective pixel points in the second exposure image; otherwise, continuing to expose and generating corresponding exposure images, and determining whether the number of effective pixel points of each exposure image satisfies the preset condition, until the exposure time reaches an exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, and then extracting the effective pixel points in that exposure image; and
calibrating and correcting the effective pixel points to generate a 3D depth information map.
Optionally, before the acquiring a frame of original 3D image, performing a first exposure on the original 3D image, and generating a first exposure image, the method further includes:
presetting an exposure time gradient array, where the gradient array contains a number of exposure times arranged from the shortest to the longest, and the longest exposure time is the exposure time threshold.
Optionally, the acquiring a frame of original 3D image and performing a first exposure on the original 3D image to generate a first exposure image includes:
acquiring a frame of original 3D image, and obtaining one exposure time of the exposure time gradient array as a first exposure time; and
performing the first exposure on the original 3D image according to the first exposure time to generate the first exposure image.
Optionally, the performing a second exposure on the original 3D image to generate a second exposure image includes:
if the number of effective pixel points in the first exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a second exposure time that is adjacent to and greater than the first exposure time, and performing the second exposure on the original 3D image to generate the second exposure image.
Optionally, the continuing to expose and generating corresponding exposure images until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition includes:
if the number of effective pixel points in the second exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a third exposure time that is adjacent to and greater than the second exposure time, and performing a third exposure on the original 3D image to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition.
Optionally, the acquiring effective pixel points in the first exposure image specifically includes:
acquiring the pixel points of the first exposure image, and determining whether each pixel point satisfies a preset signal quality parameter; and
if a pixel point satisfies the preset signal quality parameter, marking the pixel point as an effective pixel point, and acquiring all the effective pixel points in the first exposure image.
To solve the above technical problem, another technical solution adopted by the embodiments of the present invention is to provide a device for improving 3D image depth information, where the device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executed by the processor, the computer program implements the following steps:
acquiring a frame of original 3D image, and performing a first exposure on the original 3D image to generate a first exposure image;
acquiring effective pixel points in the first exposure image, and if the number of effective pixel points satisfies a preset condition, extracting the effective pixel points in the first exposure image; otherwise, performing a second exposure on the original 3D image to generate a second exposure image;
if the number of effective pixel points in the second exposure image satisfies the preset condition, extracting the effective pixel points in the second exposure image; otherwise, continuing to expose and generating corresponding exposure images, and determining whether the number of effective pixel points of each exposure image satisfies the preset condition, until the exposure time reaches an exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, and then extracting the effective pixel points in that exposure image; and
calibrating and correcting the effective pixel points to generate a 3D depth information map.
Optionally, when executed by the processor, the computer program further implements the following steps:
presetting an exposure time gradient array, where the gradient array contains a number of exposure times arranged from the shortest to the longest, and the longest exposure time is the exposure time threshold.
Optionally, when executed by the processor, the computer program further implements the following steps:
acquiring a frame of original 3D image, and obtaining one exposure time of the exposure time gradient array as a first exposure time; and
performing the first exposure on the original 3D image according to the first exposure time to generate the first exposure image.
Optionally, when executed by the processor, the computer program further implements the following steps:
if the number of effective pixel points in the first exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a second exposure time that is adjacent to and greater than the first exposure time, and performing the second exposure on the original 3D image to generate the second exposure image.
Optionally, when executed by the processor, the computer program further implements the following steps:
if the number of effective pixel points in the second exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a third exposure time that is adjacent to and greater than the second exposure time, and performing a third exposure on the original 3D image to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition.
Optionally, when executed by the processor, the computer program further implements the following steps:
acquiring the pixel points of the first exposure image, and determining whether each pixel point satisfies a preset signal quality parameter; and
if a pixel point satisfies the preset signal quality parameter, marking the pixel point as an effective pixel point, and acquiring all the effective pixel points in the first exposure image.
To solve the above technical problem, another technical solution adopted by the embodiments of the present invention is to provide a drone, which includes:
a housing;
an arm;
a motor mounted on the arm;
a propeller including a hub and blades, the hub being coupled to the rotating shaft of the motor, such that when the rotating shaft of the motor rotates, the blades are driven to rotate to generate a force that moves the drone;
a camera mounted on the housing; and
a processor configured to perform the above method for improving 3D image depth information.
Another embodiment of the present invention provides a computer program product, the computer program product including a computer program stored on a non-transitory computer readable storage medium, the computer program including program instructions that, when executed by a processor, cause the processor to perform the above method for improving 3D image depth information.
Another embodiment of the present invention provides a non-transitory computer readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the above method for improving 3D image depth information.
Embodiments of the present invention provide a method and device for improving 3D image depth information, and a drone. By performing multiple exposures on the image captured by the 3D camera, extracting the effective pixel points of the exposure image that meets the preset condition, and calibrating and correcting them, a 3D depth information map is generated. Different from the prior art, the embodiments of the present invention can effectively improve the quality of the 3D depth image and better restore the real 3D scene, which facilitates obstacle avoidance by the drone.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a method for improving 3D image depth information according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of the effective pixel point acquisition in a method for improving 3D image depth information according to another embodiment of the present invention;
FIG. 3 is a hardware structural diagram of a device for improving 3D image depth information according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of the program modules of a device for improving 3D image depth information according to an embodiment of the present invention;
FIG. 5 is a structural block diagram of a drone according to another embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
When capturing images, the existing 3D camera of a drone often uses a single exposure. A single exposure can leave some regions of the 3D depth information in a frame with underexposed pixel points or reflections that are too weak, causing misjudgment of distance information and thus endangering the flight safety of the drone.
During flight, the drone needs to obtain the distance between itself and obstacles in real time. The drone captures an original 3D image through a 3D camera and exposes it. Because the quality of the exposure image directly affects the generation of the 3D depth information map, a single exposure may yield an image signal that is too weak or too strong: a signal that is too weak indicates that the reflected signal is too weak, and a signal that is too strong indicates that the reflected light is too strong. In either case the final 3D depth information map is affected.
In the embodiments of the present invention, a 3D camera refers to a device that measures depth of field based on TOF technology. The generated image encodes the distance between the emission point and the reflection point; different colors represent different distances, reflecting different levels of 3D depth information in the scene. The 3D camera acquires the depth-of-field information in the 3D image using TOF technology and determines the position of obstacles according to the depth of field, thereby avoiding them. TOF is the abbreviation of Time of Flight: the sensor emits modulated near-infrared light or laser light, which is reflected after hitting an object; by calculating the time difference or phase difference between light emission and reflection, the distance to the photographed scene is computed to produce depth distance information.
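To illustrate the distance calculation described above, the following is a minimal sketch in Python of how a TOF distance can be recovered from the measured time difference or phase difference. It is an illustration under assumptions, not part of this disclosure: the function names and the 20 MHz modulation frequency are hypothetical.

```python
# Sketch of TOF distance recovery. Both helpers are illustrative:
# the text only states that distance is computed from the time
# difference or phase difference between emission and reflection.
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_time(delta_t_s: float) -> float:
    """Distance from the round-trip time difference (pulsed TOF)."""
    return C * delta_t_s / 2.0

def distance_from_phase(delta_phi_rad: float, f_mod_hz: float = 20e6) -> float:
    """Distance from the phase difference of a modulated wave (CW TOF)."""
    # A full 2*pi of phase corresponds to one modulation period, i.e. an
    # unambiguous range of C / (2 * f_mod), here about 7.5 m at 20 MHz.
    return (C / (2.0 * f_mod_hz)) * (delta_phi_rad / (2.0 * math.pi))

# Example: a phase shift of pi/4 at 20 MHz corresponds to roughly 0.94 m.
print(distance_from_phase(math.pi / 4))
```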
Therefore, a frame of original 3D image needs to be acquired and subjected to a first exposure to generate a first exposure image. If the number of effective pixel points in the first exposure image satisfies a preset condition, the effective pixel points in the first exposure image are extracted, calibrated, and corrected to generate a 3D depth information map; the preset condition includes, but is not limited to, a threshold number of effective pixel points. The exposure time of the first exposure is recorded as the first exposure time.
If the number of effective pixel points in the first exposure image does not satisfy the preset condition, a second exposure is performed on the captured original 3D image to generate a second exposure image, where the second exposure time is longer than the first exposure time.
If the number of effective pixel points in the second exposure image satisfies the preset condition, the effective pixel points in the second exposure image are extracted, calibrated, and corrected to generate a 3D depth information map.
If the number of effective pixel points in the second exposure image does not satisfy the preset condition, the exposure is continued and corresponding exposure images are generated, each exposure time being longer than the previous one, until the exposure time reaches the preset exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition. At that point the effective pixel points in the corresponding image are extracted, calibrated, and corrected to generate a frame of the 3D depth information map. In this way the effective pixel points of the best-qualified exposure image among the multiple exposures are calibrated and corrected, effectively improving the quality of the 3D depth image and better restoring the real 3D scene.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a method for improving 3D image depth information according to an embodiment of the present invention. This embodiment includes:
Step S100: Acquire a frame of original 3D image, and perform a first exposure on the original 3D image to generate a first exposure image.
In a specific implementation, the drone captures real-time video through the 3D camera during flight, processes each frame of the video, and acquires a 3D depth image to determine the distance between the drone and obstacles. A frame of original 3D image is acquired and a first exposure is performed on it; the exposure time of the first exposure may be preset by the user, and the first exposure image generated by the first exposure is obtained.
Step S200: Acquire the effective pixel points of the first exposure image, and determine whether the number of effective pixel points satisfies a preset condition; if yes, execute step S300, and if no, execute step S400.
If the number of effective pixel points satisfies the preset condition, the effective pixel points in the first exposure image are extracted; otherwise, a second exposure is performed on the original 3D image to generate a second exposure image.
In a specific implementation, the first exposure image obtained in step S100 is acquired, and the effective pixel points in the first exposure image are acquired, where an effective pixel point is a pixel point that satisfies a preset signal quality parameter. The signal quality parameters include, but are not limited to, signal amplitude and signal-to-noise ratio. The preset signal quality parameter may be a preset signal amplitude interval or a preset signal-to-noise ratio interval.
Step S300: If the number of effective pixel points satisfies the preset condition, extract the effective pixel points in the first exposure image.
In a specific implementation, the preset condition includes, but is not limited to, a threshold on the number of effective pixel points. For example, if the number of effective pixel points in the first exposure image reaches the preset threshold, the effective pixel points in the first exposure image are extracted.
Step S400: If the number of effective pixel points does not satisfy the preset condition, perform a second exposure on the original 3D image to generate a second exposure image.
In a specific implementation, if the number of effective pixel points in the first exposure image does not reach the preset threshold, a second exposure is performed on the original 3D image; the second exposure time differs from, and is preferably longer than, the first exposure time, and the second exposure image generated by the second exposure is acquired.
Step S500: Determine whether the number of effective pixel points in the second exposure image satisfies the preset condition; if yes, execute step S600, and if no, execute step S700.
Step S600: If the number of effective pixel points in the second exposure image satisfies the preset condition, extract the effective pixel points in the second exposure image.
In a specific implementation, the second exposure image generated in step S400 is obtained, and if the number of effective pixel points in the second exposure image satisfies the preset condition, the effective pixel points in the second exposure image are extracted.
Step S700: If the number of effective pixel points in the second exposure image does not satisfy the preset condition, continue to expose and generate corresponding exposure images, and determine whether the number of effective pixel points of each exposure image satisfies the preset condition; when the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, extract the effective pixel points in that exposure image.
In a specific implementation, if the number of effective pixel points in the second exposure image does not satisfy the preset condition, the exposure is continued and a corresponding exposure image is generated; it is then determined again whether the number of effective pixel points of the exposure image satisfies the preset condition, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, after which the effective pixel points of the corresponding exposure image are extracted.
The preset condition is that the number of effective pixel points produced by the exposure exceeds a certain trigger count, or that it equals the total number of pixels in the image. The trigger count is a preset threshold smaller than the number of pixels in the exposure image. For example, if the exposure image contains 200 pixels and the trigger count is 180, then detecting more than 180 effective pixel points means that the effective pixel points in the exposure image satisfy the preset condition.
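The preset condition described in the preceding paragraph can be stated compactly in code. The following Python sketch is illustrative only; the function name and parameters are assumptions, while the 200-pixel and 180-trigger figures reproduce the example from the text.

```python
def meets_preset_condition(effective_count: int, trigger_count: int,
                           total_pixels: int) -> bool:
    """Preset condition from the text: the effective-pixel count exceeds
    the trigger count, or equals the total pixel count of the image."""
    return effective_count > trigger_count or effective_count == total_pixels

# Example from the text: a 200-pixel exposure image with trigger count 180.
print(meets_preset_condition(181, 180, 200))  # True
print(meets_preset_condition(150, 180, 200))  # False
```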
Step S800: After calibrating and correcting the effective pixel points, generate a 3D depth information map.
In a specific implementation, the effective pixel points extracted in step S300, step S600, or step S700 are calibrated and corrected in turn to generate a high-quality frame of the 3D depth information map. The calibration and correction are performed according to the parameters of the 3D camera.
Optionally, before step S100, the method further includes:
Step S10: Preset an exposure time gradient array, where the gradient array contains a number of exposure times arranged from the shortest to the longest, and the longest exposure time is the exposure time threshold.
In a specific implementation, if the exposure time were allowed to extend without limit after the 3D camera captures an image, the pixel points could not be calibrated and corrected, which would hinder the drone's real-time acquisition of the 3D depth information map. If the exposure time is too short, most pixels in the image are underexposed, which easily causes misjudgment of distance information. Therefore, an exposure time gradient array needs to be preset to ensure the quality of the image exposure without prolonging the acquisition time of the 3D depth information map. The exposure time gradient array is an array of known exposure durations arranged from shortest to longest. After each exposure, whether to perform the next exposure is determined by whether the number of effective pixel points satisfies the preset condition; if not, the next larger gradient value in the exposure time gradient array is selected as the exposure time.
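To make the selection loop over the exposure time gradient array concrete, the following Python sketch is provided. It is a sketch under assumptions: capture_exposure and count_effective_pixels are hypothetical stand-ins for the camera driver and the signal quality test, and the gradient values are illustrative rather than taken from this disclosure.

```python
# Illustrative exposure time gradient array (milliseconds), arranged from
# shortest to longest; the last entry plays the role of the threshold.
EXPOSURE_GRADIENT_MS = [0.5, 1.0, 2.0, 4.0, 8.0]

def acquire_best_exposure(capture_exposure, count_effective_pixels,
                          trigger_count):
    """Expose with increasing exposure times until the effective-pixel
    count satisfies the preset condition or the threshold is reached."""
    image = None
    for t_ms in EXPOSURE_GRADIENT_MS:
        image = capture_exposure(t_ms)  # hypothetical camera call
        if count_effective_pixels(image) > trigger_count:
            break  # preset condition met: keep this exposure image
        # Otherwise select the next larger gradient value; when t_ms is
        # the last entry (the exposure time threshold), the loop ends and
        # this final exposure image is used regardless of the count.
    return image
```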
Optionally, step S100 includes:
Step S101: Acquire a frame of original 3D image, and obtain one exposure time of the exposure time gradient array as the first exposure time;
Step S102: Perform a first exposure on the original 3D image according to the first exposure time to generate a first exposure image.
In a specific implementation, the 3D camera captures a frame of original 3D image and obtains an exposure time from the preset exposure time gradient array as the first exposure time, preferably the shortest exposure time in the array; a first exposure is then performed on the acquired original 3D image according to the first exposure time to generate a first exposure image.
Optionally, step S400 includes:
if the number of effective pixel points in the first exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a second exposure time that is adjacent to and greater than the first exposure time, and performing a second exposure on the original 3D image to generate a second exposure image.
In a specific implementation, if the number of effective pixel points in the first exposure image does not satisfy the preset condition, a second exposure time that is adjacent to and greater than the first exposure time is obtained from the exposure time gradient array, and a second exposure is performed on the original 3D image according to the second exposure time to generate a second exposure image, where the second exposure time is the gradient value following the first exposure time in the exposure time gradient array.
Optionally, in step S700, continuing to expose and generating corresponding exposure images until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition specifically includes:
if the number of effective pixel points in the second exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a third exposure time that is adjacent to and greater than the second exposure time, and performing a third exposure on the original 3D image to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition.
In a specific implementation, if the number of effective pixel points in the second exposure image does not satisfy the preset condition, a third exposure time that is adjacent to and greater than the second exposure time is obtained from the exposure time gradient array, and a third exposure is performed on the original 3D image to generate a third exposure image. If the third exposure time is the exposure time threshold of the exposure time gradient array, the acquisition of effective pixel points ends regardless of whether the marked effective pixel points of the third exposure image satisfy the preset condition, and the effective pixel points in the third exposure image are extracted.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of the effective pixel point acquisition in a method for improving 3D image depth information according to another embodiment of the present invention. As shown in FIG. 2, acquiring the effective pixel points in the first exposure image in step S200 includes:
Step S201: Acquire the pixel points of the first exposure image, and determine whether each pixel point satisfies the preset signal quality parameter;
Step S202: If a pixel point satisfies the preset signal quality parameter, mark the pixel point as an effective pixel point, and acquire all effective pixel points of the first exposure image;
Step S203: Otherwise, perform no processing on the pixel point.
In a specific implementation, all pixel points of the first exposure image are acquired, and each pixel point's signal quality parameter is checked in turn against the preset signal quality parameter. If a pixel point's signal quality parameter satisfies the preset parameter, the pixel point is marked as an effective pixel point; all pixel points in the first exposure image are marked according to this rule, and all marked effective pixel points of the first exposure image are acquired. If a pixel point's signal quality parameter does not satisfy the preset parameter, no processing is performed on that pixel point.
The signal quality parameters include, but are not limited to, signal amplitude and signal-to-noise ratio. This embodiment takes the signal amplitude as an example. The conventions on signal amplitude are as follows. Denote the signal amplitude as amp: if amp < 100, the reflected signal is too weak and the data are unreliable and should be discarded; if amp > 100 and amp < 1000, the data are normal and valid; if amp > 1000, the pixel is saturated and the data are invalid and should be discarded. Determining whether a pixel point is effective therefore amounts to determining whether its signal amplitude lies between 100 and 1000.
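The amplitude rule above maps directly onto an array operation. The following NumPy sketch is illustrative; how exact boundary values (amp equal to 100 or 1000) are treated is not specified in the text and is an assumption here.

```python
# Effective-pixel test from the amplitude rule: amp < 100 is too weak,
# amp > 1000 is saturated, and values strictly in between are valid.
import numpy as np

AMP_MIN, AMP_MAX = 100, 1000

def effective_pixel_mask(amp: np.ndarray) -> np.ndarray:
    """Boolean mask marking the effective pixels of an amplitude image."""
    return (amp > AMP_MIN) & (amp < AMP_MAX)

# Example: count effective pixels in fake data and test a trigger count.
amp_image = np.random.randint(0, 1500, size=(10, 20))  # placeholder data
mask = effective_pixel_mask(amp_image)
print(int(mask.sum()), bool(mask.sum() > 180))
```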
Referring to FIG. 3, FIG. 3 is a hardware structural diagram of a device for improving 3D image depth information according to an embodiment of the present invention.
In this embodiment, the device 10 for improving 3D image depth information includes a memory 101 and a processor 102. The device 10 may be a standalone electronic device for processing 3D images, such as a photographing device like a 3D camera, or an accessory device attached to, or cooperating with, a photographing device such as a camera to achieve 3D image processing, for example an electronic device equipped with a 3D camera or a drone equipped with a 3D camera. The memory 101 includes at least one type of readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, or an optical disk. The processor 102 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip.
Referring to FIG. 4, FIG. 4 is a schematic diagram of the program modules of a device for improving 3D image depth information according to another embodiment of the present invention.
The device 10 for improving 3D image depth information includes an exposure module 100, a first effective pixel point extraction module 200, a second effective pixel point extraction module 300, and a calibration and correction module 400. The modules are configured to be executed by one or more processors (the processor 102 in this embodiment) to carry out the present invention. A module in the present invention is a computer program segment that performs a particular function. The memory 101 is used to store data such as the program code of the device 10 for improving 3D image depth information, and the processor 102 is used to execute the program code stored in the memory 101.
The functions implemented by each program module are described in detail below.
The exposure module 100 is configured to acquire a frame of original 3D image and perform a first exposure on the original 3D image to generate a first exposure image;
the first effective pixel point extraction module 200 is configured to acquire the effective pixel points of the first exposure image, and if the number of effective pixel points satisfies the preset condition, extract the effective pixel points in the first exposure image; otherwise, perform a second exposure on the original 3D image to generate a second exposure image;
the second effective pixel point extraction module 300 is configured to extract the effective pixel points in the second exposure image if the number of effective pixel points in the second exposure image satisfies the preset condition; otherwise, continue to expose and generate corresponding exposure images, and determine whether the number of effective pixel points of each exposure image satisfies the preset condition, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, and then extract the effective pixel points in the corresponding image;
the calibration and correction module 400 is configured to calibrate and correct the effective pixel points to generate a 3D depth information map.
In a specific implementation, for example, the drone captures real-time video through the 3D camera during flight, and after processing each frame of the video, a 3D depth image needs to be acquired to determine the distance between the drone and obstacles. The exposure module 100 is configured to acquire a frame of original 3D image and perform a first exposure on it; the exposure time of the first exposure may be preset by the user, and the first exposure image generated by the first exposure is obtained.
The first effective pixel point extraction module 200 is configured to obtain the first exposure image acquired by the exposure module 100 and acquire the effective pixel points in the first exposure image, where an effective pixel point is a pixel point that satisfies a preset signal quality parameter. The signal quality parameters include, but are not limited to, signal amplitude and signal-to-noise ratio. The preset signal quality parameter may be a preset signal amplitude interval or a preset signal-to-noise ratio interval.
The preset condition includes, but is not limited to, a threshold on the number of effective pixel points. For example, if the number of effective pixel points in the first exposure image reaches the preset threshold, the effective pixel points in the first exposure image are extracted.
If the number of effective pixel points in the first exposure image does not reach the preset threshold, a second exposure is performed on the original 3D image; the second exposure time differs from, and is preferably longer than, the first exposure time, and the second exposure image generated by the second exposure is acquired.
The second effective pixel point extraction module 300 is configured to obtain the second exposure image from the first effective pixel point extraction module 200, and if the number of effective pixel points in the second exposure image satisfies the preset condition, extract the effective pixel points in the second exposure image.
If the number of effective pixel points in the second exposure image does not satisfy the preset condition, the exposure is continued and a corresponding exposure image is generated; it is then determined again whether the number of effective pixel points of the exposure image satisfies the preset condition, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding exposure image satisfies the preset condition, after which the effective pixel points of the corresponding exposure image are extracted.
The preset condition is that the number of effective pixel points produced by the exposure exceeds a certain trigger count, or that it equals the total number of pixels in the image. The trigger count is a preset threshold smaller than the number of pixels in the exposure image. For example, if the exposure image contains 200 pixels and the trigger count is 180, then detecting more than 180 effective pixel points means that the effective pixel points in the exposure image satisfy the preset condition.
The calibration and correction module 400 is configured to calibrate and correct the extracted effective pixel points in turn to generate a high-quality frame of the 3D depth information map. The calibration and correction are performed according to the parameters of the 3D camera.
Optionally, the device further includes:
a presetting module, configured to preset an exposure time gradient array, where the gradient array contains a number of exposure times arranged from the shortest to the longest, and the longest exposure time is the exposure time threshold.
In a specific implementation, if the exposure time were allowed to extend without limit after the 3D camera captures an image, the pixel points could not be calibrated and corrected, which would hinder the drone's real-time acquisition of the 3D depth information map. If the exposure time is too short, most pixels in the image are underexposed, which easily causes misjudgment of distance information. Therefore, an exposure time gradient array needs to be preset to ensure the quality of the image exposure without prolonging the acquisition time of the 3D depth information map. The presetting module is used to preset the exposure time gradient array, which is an array of known exposure durations arranged from shortest to longest. After each exposure, whether to perform the next exposure is determined by whether the number of effective pixel points satisfies the preset condition; if not, the next larger gradient value in the exposure time gradient array is selected as the exposure time.
Optionally, the exposure module 100 is further configured to:
acquire a frame of original 3D image, and obtain one exposure time of the exposure time gradient array as the first exposure time; and
perform a first exposure on the original 3D image according to the first exposure time to generate a first exposure image.
In a specific implementation, the exposure module 100 is further configured to: after the 3D camera captures a frame of original 3D image, obtain an exposure time from the preset exposure time gradient array as the first exposure time, preferably the shortest exposure time in the array, and perform a first exposure on the acquired original 3D image according to the first exposure time to generate a first exposure image.
Optionally, the first effective pixel point extraction module 200 is further configured to:
if the number of effective pixel points in the first exposure image does not satisfy the preset condition, obtain from the exposure time gradient array a second exposure time that is adjacent to and greater than the first exposure time, and perform a second exposure on the original 3D image to generate a second exposure image.
In a specific implementation, the first effective pixel point extraction module 200 is further configured to: if the number of effective pixel points in the first exposure image does not satisfy the preset condition, obtain from the exposure time gradient array a second exposure time that is adjacent to and greater than the first exposure time, and perform a second exposure on the original 3D image according to the second exposure time to generate a second exposure image, where the second exposure time is the gradient value following the first exposure time in the exposure time gradient array.
Optionally, the second effective pixel point extraction module 300 is further configured to:
if the number of effective pixel points in the second exposure image does not satisfy the preset condition, obtain from the exposure time gradient array a third exposure time that is adjacent to and greater than the second exposure time, and perform a third exposure on the original 3D image to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of effective pixel points in the corresponding image satisfies the preset condition.
In a specific implementation, the second effective pixel point extraction module 300 is further configured to: if the number of effective pixel points in the second exposure image does not satisfy the preset condition, obtain from the exposure time gradient array a third exposure time that is adjacent to and greater than the second exposure time, and perform a third exposure on the original 3D image to generate a third exposure image. If the third exposure time is the exposure time threshold of the exposure time gradient array, the acquisition of effective pixel points ends regardless of whether the marked effective pixel points of the third exposure image satisfy the preset condition, and the effective pixel points in the third exposure image are extracted.
Optionally, the first valid pixel point extraction module 200 is further configured to:
acquire the pixel points of the first exposure image, and determine whether each pixel point satisfies the preset signal quality parameter;
if a pixel point satisfies the preset signal quality parameter, mark the pixel point as a valid pixel point, and acquire all valid pixel points of the first exposure image.
In specific implementation, the first valid pixel point extraction module 200 is further configured to acquire the pixel points of the first exposure image and determine whether the signal quality parameter of each pixel point satisfies the preset signal quality parameter; if the signal quality parameter of a pixel point satisfies the preset signal quality parameter, the pixel point is marked as a valid pixel point. All pixel points of the first exposure image are marked according to this rule, and all marked valid pixel points of the first exposure image are acquired.
The signal quality parameters include, but are not limited to, signal amplitude and signal-to-noise ratio. In this embodiment, the signal amplitude is taken as an example. The signal amplitude, denoted amp, is interpreted as follows: if amp < 100, the reflected signal is too weak, the data is unreliable and should be discarded; if 100 < amp < 1000, the data is normal and valid; if amp > 1000, the pixel is saturated, the data is invalid and should be discarded. Therefore, determining whether a pixel point is a valid pixel point amounts to determining whether its signal amplitude lies between 100 and 1000.
As shown in FIG. 5, another embodiment of the present invention provides an unmanned aerial vehicle 600. The unmanned aerial vehicle 600 includes a housing 610; an arm 620; a motor 630 mounted on the arm 620; a propeller 640 including a hub 641 and blades 642, the hub 641 being connected to the shaft of the motor 630, so that when the shaft of the motor 630 rotates, the blades 642 are driven to rotate to generate the force that moves the unmanned aerial vehicle 600; a camera 650 mounted on the housing 610; and a processor 660 configured to execute method steps S100 to S800 in FIG. 1 and method steps S201 to S203 in FIG. 2, and to implement the functions of modules 100-400 in FIG. 4.
An embodiment of the present invention provides a non-volatile computer-readable storage medium storing computer-executable instructions that are executed by one or more processors, for example, to execute method steps S100 to S800 in FIG. 1 and method steps S201 to S203 in FIG. 2 described above, and to implement the functions of modules 100-400 in FIG. 4.
Another embodiment of the present invention provides a computer program product, the computer program product including a computer program stored on a non-volatile computer-readable storage medium, the computer program including program instructions that, when executed by a processor, cause the processor to execute the above method for improving depth information of a 3D image, for example, to execute method steps S100 to S800 in FIG. 1 and method steps S201 to S203 in FIG. 2 described above, and to implement the functions of modules 100-400 in FIG. 4.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the above embodiments, those skilled in the art can clearly understand that each embodiment may be implemented by software plus a general-purpose hardware platform, and certainly also by hardware. Based on this understanding, the above technical solution, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The computer software product may reside in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments or in some parts of the embodiments.
The above are merely embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included in the patent protection scope of the present invention.

Claims (14)

  1. A method for improving depth information of a 3D image, wherein the method comprises:
    acquiring a frame of an original 3D image, and performing a first exposure on the original 3D image to generate a first exposure image;
    acquiring valid pixel points in the first exposure image and, if the number of the valid pixel points satisfies a preset condition, extracting the valid pixel points in the first exposure image; otherwise, performing a second exposure on the original 3D image to generate a second exposure image;
    if the number of valid pixel points in the second exposure image satisfies the preset condition, extracting the valid pixel points in the second exposure image; otherwise, continuing to expose and generate corresponding exposure images, and determining whether the number of valid pixel points of each exposure image satisfies the preset condition, until the exposure time reaches an exposure time threshold or the number of valid pixel points in a generated exposure image satisfies the preset condition, and then extracting the valid pixel points in that exposure image;
    calibrating and correcting the valid pixel points to generate a 3D depth information map.
  2. The method for improving depth information of a 3D image according to claim 1, wherein, before acquiring a frame of an original 3D image and performing a first exposure on the original 3D image to generate a first exposure image, the method further comprises:
    presetting an exposure time gradient array, the gradient array containing several exposure times arranged from shortest to longest, wherein the longest exposure time is the exposure time threshold.
  3. The method for improving depth information of a 3D image according to claim 2, wherein acquiring a frame of an original 3D image and performing a first exposure on the original 3D image to generate a first exposure image comprises:
    acquiring a frame of an original 3D image, and taking one exposure time of the exposure time gradient array as a first exposure time;
    performing the first exposure on the original 3D image according to the first exposure time to generate the first exposure image.
  4. The method for improving depth information of a 3D image according to claim 3, wherein performing a second exposure on the original 3D image to generate a second exposure image comprises:
    if the number of valid pixel points in the first exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a second exposure time that is adjacent to and greater than the first exposure time, and performing the second exposure on the original 3D image to generate the second exposure image.
  5. The method for improving depth information of a 3D image according to claim 4, wherein continuing to expose and generate corresponding exposure images until the exposure time reaches the exposure time threshold or the number of valid pixel points in a generated image satisfies the preset condition comprises:
    if the number of valid pixel points in the second exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a third exposure time that is adjacent to and greater than the second exposure time, and performing a third exposure on the original 3D image to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of valid pixel points in a generated image satisfies the preset condition.
  6. The method for improving depth information of a 3D image according to any one of claims 1-5, wherein acquiring the valid pixel points in the first exposure image specifically comprises:
    acquiring the pixel points of the first exposure image, and determining whether the pixel points satisfy a preset signal quality parameter;
    if a pixel point satisfies the preset signal quality parameter, marking the pixel point as a valid pixel point, and acquiring all the valid pixel points in the first exposure image.
  7. An apparatus for improving depth information of a 3D image, wherein the apparatus comprises: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the following steps:
    acquiring a frame of an original 3D image, and performing a first exposure on the original 3D image to generate a first exposure image;
    acquiring valid pixel points in the first exposure image and, if the number of valid pixel points satisfies a preset condition, extracting the valid pixel points in the first exposure image; otherwise, performing a second exposure on the original 3D image to generate a second exposure image;
    if the number of valid pixel points in the second exposure image satisfies the preset condition, extracting the valid pixel points in the second exposure image; otherwise, continuing to expose and generate corresponding exposure images, and determining whether the number of valid pixel points of each exposure image satisfies the preset condition, until the exposure time reaches an exposure time threshold or the number of valid pixel points in a generated exposure image satisfies the preset condition, and then extracting the valid pixel points in that exposure image;
    calibrating and correcting the valid pixel points to generate a 3D depth information map.
  8. The apparatus for improving depth information of a 3D image according to claim 7, wherein the computer program, when executed by the processor, further implements the following step:
    presetting an exposure time gradient array, the gradient array containing several exposure times arranged from shortest to longest, wherein the longest exposure time is the exposure time threshold.
  9. The apparatus for improving depth information of a 3D image according to claim 8, wherein the computer program, when executed by the processor, further implements the following steps:
    acquiring a frame of an original 3D image, and taking one exposure time of the exposure time gradient array as a first exposure time;
    performing the first exposure on the original 3D image according to the first exposure time to generate the first exposure image.
  10. The apparatus for improving depth information of a 3D image according to claim 9, wherein the computer program, when executed by the processor, further implements the following step:
    if the number of valid pixel points in the first exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a second exposure time that is adjacent to and greater than the first exposure time, and performing a second exposure on the original 3D image to generate a second exposure image.
  11. The apparatus for improving depth information of a 3D image according to claim 10, wherein the computer program, when executed by the processor, further implements the following step:
    if the number of valid pixel points in the second exposure image does not satisfy the preset condition, obtaining from the exposure time gradient array a third exposure time that is adjacent to and greater than the second exposure time, and performing a third exposure on the original 3D image to generate a third exposure image, until the exposure time reaches the exposure time threshold or the number of valid pixel points in a generated image satisfies the preset condition.
  12. The apparatus for improving depth information of a 3D image according to any one of claims 7-11, wherein the computer program, when executed by the processor, further implements the following steps:
    acquiring the pixel points of the first exposure image, and determining whether the pixel points satisfy a preset signal quality parameter;
    if a pixel point satisfies the preset signal quality parameter, marking the pixel point as a valid pixel point, and acquiring all the valid pixel points in the first exposure image.
  13. An unmanned aerial vehicle, comprising:
    a housing;
    an arm;
    a motor mounted on the arm;
    a propeller, comprising a hub and blades, the hub being connected to the shaft of the motor, wherein when the shaft of the motor rotates, the blades are driven to rotate to generate a force that moves the unmanned aerial vehicle;
    a camera mounted on the housing; and
    a processor configured to execute the method according to any one of claims 1-6.
  14. A non-volatile computer-readable storage medium, wherein the non-volatile computer-readable storage medium stores computer-executable instructions that, when executed by one or more processors, cause the one or more processors to execute the method according to any one of claims 1-6.
PCT/CN2018/084072 2017-08-18 2018-04-23 Method and device for improving depth information of 3D image, and unmanned aerial vehicle WO2019033777A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18846940.7A EP3663791B1 (en) 2017-08-18 2018-04-23 Method and device for improving depth information of 3d image, and unmanned aerial vehicle
US16/793,931 US11030762B2 (en) 2017-08-18 2020-02-18 Method and apparatus for improving 3D image depth information and unmanned aerial vehicle

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710714161.0A CN109398731B (zh) 2017-08-18 2017-08-18 Method and device for improving depth information of 3D image, and unmanned aerial vehicle
CN201710714161.0 2017-08-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/793,931 Continuation US11030762B2 (en) 2017-08-18 2020-02-18 Method and apparatus for improving 3D image depth information and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2019033777A1 true WO2019033777A1 (zh) 2019-02-21

Family

ID=65362068

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/084072 WO2019033777A1 (zh) 2017-08-18 2018-04-23 Method and device for improving depth information of 3D image, and unmanned aerial vehicle

Country Status (4)

Country Link
US (1) US11030762B2 (zh)
EP (1) EP3663791B1 (zh)
CN (1) CN109398731B (zh)
WO (1) WO2019033777A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111982291A (zh) * 2019-05-23 2020-11-24 杭州海康机器人技术有限公司 Fire point positioning method, apparatus and system based on unmanned aerial vehicle
CN116184364A (zh) * 2023-04-27 2023-05-30 上海杰茗科技有限公司 Multi-camera interference detection and removal method and apparatus for iToF cameras

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150529B (zh) * 2019-06-28 2023-09-01 北京地平线机器人技术研发有限公司 Method and apparatus for determining depth information of image feature points
CN110428381B (zh) * 2019-07-31 2022-05-06 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, mobile terminal and storage medium
CN110475078B (zh) * 2019-09-03 2020-12-15 河北科技大学 Camera exposure time adjustment method and terminal device
JP7436428B2 (ja) 2021-06-25 2024-02-21 株式会社日立エルジーデータストレージ Distance measuring device, distance measuring system, and interference avoidance method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104853080A (zh) * 2014-02-13 2015-08-19 宏达国际电子股份有限公司 Image processing apparatus
CN105894567A (zh) * 2011-01-07 2016-08-24 索尼互动娱乐美国有限责任公司 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
WO2016171913A1 (en) * 2015-04-21 2016-10-27 Microsoft Technology Licensing, Llc Time-of-flight simulation of multipath light phenomena
CN106461763A (zh) * 2014-06-09 2017-02-22 松下知识产权经营株式会社 Distance measuring device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9807322B2 (en) * 2013-03-15 2017-10-31 Duelight Llc Systems and methods for a digital image sensor
JP6296401B2 (ja) * 2013-06-27 2018-03-20 パナソニックIpマネジメント株式会社 Distance measuring device and solid-state imaging element
CN104823437A (zh) * 2014-06-12 2015-08-05 深圳市大疆创新科技有限公司 Picture processing method and apparatus
CN104994309B (zh) * 2015-07-07 2017-09-19 广东欧珀移动通信有限公司 Method and system for eliminating camera flare
US9594381B1 (en) * 2015-09-24 2017-03-14 Kespry, Inc. Enhanced distance detection system
CN105611185B (zh) * 2015-12-18 2017-10-31 广东欧珀移动通信有限公司 Image generation method and apparatus, and terminal device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894567A (zh) * 2011-01-07 2016-08-24 索尼互动娱乐美国有限责任公司 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
CN104853080A (zh) * 2014-02-13 2015-08-19 宏达国际电子股份有限公司 Image processing apparatus
CN106461763A (zh) * 2014-06-09 2017-02-22 松下知识产权经营株式会社 Distance measuring device
WO2016171913A1 (en) * 2015-04-21 2016-10-27 Microsoft Technology Licensing, Llc Time-of-flight simulation of multipath light phenomena

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3663791A4

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111982291A (zh) * 2019-05-23 2020-11-24 杭州海康机器人技术有限公司 Fire point positioning method, apparatus and system based on unmanned aerial vehicle
CN111982291B (zh) * 2019-05-23 2022-11-04 杭州海康机器人技术有限公司 Fire point positioning method, apparatus and system based on unmanned aerial vehicle
CN116184364A (zh) * 2023-04-27 2023-05-30 上海杰茗科技有限公司 Multi-camera interference detection and removal method and apparatus for iToF cameras
CN116184364B (zh) * 2023-04-27 2023-07-07 上海杰茗科技有限公司 Multi-camera interference detection and removal method and apparatus for iToF cameras

Also Published As

Publication number Publication date
EP3663791B1 (en) 2023-05-31
CN109398731A (zh) 2019-03-01
US20200184666A1 (en) 2020-06-11
EP3663791A1 (en) 2020-06-10
CN109398731B (zh) 2020-09-08
US11030762B2 (en) 2021-06-08
EP3663791A4 (en) 2020-06-17

Similar Documents

Publication Publication Date Title
WO2019033777A1 (zh) Method and device for improving depth information of 3D image, and unmanned aerial vehicle
CN109376667B (zh) Object detection method and apparatus, and electronic device
EP3640892B1 (en) Image calibration method and device applied to three-dimensional camera
US9916689B2 (en) Apparatus and method for estimating camera pose
US11196919B2 (en) Image processing method, electronic apparatus, and computer-readable storage medium
KR102443551B1 (ko) Point cloud fusion method and apparatus, electronic device, and computer storage medium
WO2018227576A1 (zh) Ground form detection method and system, unmanned aerial vehicle landing method, and unmanned aerial vehicle
CN107517346B (zh) Structured-light-based photographing method and apparatus, and mobile device
CN111680574B (zh) Face detection method and apparatus, electronic device, and storage medium
CN111080784A (zh) Ground three-dimensional reconstruction method and apparatus based on ground image texture
US11182951B2 (en) 3D object modeling using scale parameters and estimated distance
CN111160233B (zh) Face liveness detection method, medium and system assisted by three-dimensional imaging
CN113128430B (zh) Crowd gathering detection method and apparatus, electronic device, and storage medium
US20230057655A1 (en) Three-dimensional ranging method and device
EP3073441B1 (fr) Method for correcting an image of at least one object presented at a distance in front of an imager and illuminated by a lighting system, and imaging system for implementing said method
US10721419B2 (en) Ortho-selfie distortion correction using multiple image sensors to synthesize a virtual image
KR102439142B1 (ko) Method and apparatus for acquiring images of infrastructure facilities
CN116709035B (zh) Exposure adjustment method and apparatus for image frames, and computer storage medium
JP2018066655A (ja) Specular surface information acquisition device, specular surface measurement method, and computer program
EP4171015A1 (en) Handling blur in multi-view imaging
WO2024195127A1 (ja) Image processing device, image processing method, and non-transitory computer-readable medium
WO2024166355A1 (ja) Image analysis device, imaging system, image analysis method, and recording medium
CN116664774A (zh) Three-dimensional simulation image generation method and apparatus, electronic device, and storage medium
JP2024099222A (ja) Three-dimensional information generation device and three-dimensional information generation method
CN115797995A (zh) Face liveness detection method, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18846940

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018846940

Country of ref document: EP

Effective date: 20200304