WO2019037088A1 - Exposure control method and device, and unmanned aerial vehicle - Google Patents

Exposure control method and device, and unmanned aerial vehicle

Info

Publication number
WO2019037088A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
depth
target object
determining
exposure parameter
Prior art date
Application number
PCT/CN2017/099069
Other languages
English (en)
French (fr)
Inventor
周游
杜劼熹
蔡剑钊
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201780004476.4A priority Critical patent/CN108401457A/zh
Priority to PCT/CN2017/099069 priority patent/WO2019037088A1/zh
Publication of WO2019037088A1 publication Critical patent/WO2019037088A1/zh
Priority to US16/748,973 priority patent/US20200162655A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141Control of illumination
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/72Combination of two or more compensation controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C39/00Aircraft not otherwise provided for
    • B64C39/02Aircraft not otherwise provided for characterised by special use
    • B64C39/024Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0094Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12Target-seeking control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/557Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • B64U2101/30UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • Embodiments of the present invention relate to the field of control, and in particular, to a method, a device, and a drone for controlling exposure.
  • At present, acquiring a depth image with a depth sensor and recognizing and tracking the target object using the depth image is an important means of target object detection.
  • However, when the target object is in a high-dynamic scene, the exposure control method of the depth sensor in the prior art may cause the target object to be overexposed or underexposed, causing some of the depth values in the depth image acquired by the depth sensor to become invalid, which in turn causes detection and recognition of the target object to fail.
  • The embodiments of the invention provide an exposure control method and device and a drone, to eliminate overexposure or underexposure of the target object, so that the depth image acquired by the depth sensor is more accurate and the detection success rate for the target object is improved.
  • a first aspect of the embodiments of the present invention provides a method for controlling exposure, which includes:
  • acquiring an image output by a depth sensor according to a current exposure parameter;
  • determining an image of a target object from the image;
  • determining a first exposure parameter according to the brightness of the image of the target object, wherein the first exposure parameter is used to control the next automatic exposure of the depth sensor.
  • a second aspect of the embodiments of the present invention provides an apparatus for controlling exposure, comprising: a memory and a processor, wherein
  • the memory is configured to store program instructions
  • the processor invokes the program code and, when the program code is executed, is configured to perform the following operations:
  • acquiring an image output by a depth sensor according to a current exposure parameter;
  • determining an image of a target object from the image;
  • determining a first exposure parameter according to the brightness of the image of the target object, wherein the first exposure parameter is used to control the next automatic exposure of the depth sensor.
  • a third aspect of an embodiment of the present invention provides a drone, characterized in that it comprises a control device for exposure as described in the second aspect.
  • The embodiments of the present invention provide an exposure control method and device and an unmanned aerial vehicle, which determine an image of a target object from an image output by the depth sensor and, according to the brightness of the image of the target object, determine a first exposure parameter for controlling the next automatic exposure of the depth sensor. This can effectively eliminate overexposure or underexposure of the target object, so that the depth image acquired by the depth sensor is more accurate and the detection success rate for the target object is improved.
  • FIG. 1 is a flowchart of a method for controlling exposure according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of determining an image of a target object from an image according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for controlling exposure according to another embodiment of the present invention.
  • FIG. 4 is a flowchart of a method for controlling exposure according to another embodiment of the present invention.
  • FIG. 5 is a flowchart of a method for controlling exposure according to another embodiment of the present invention.
  • FIG. 6 is a structural diagram of an apparatus for controlling exposure according to an embodiment of the present invention.
  • FIG. 7 is a structural diagram of a drone according to an embodiment of the present invention.
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component or an intervening component may be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component or an intervening component may be present at the same time.
  • At present, the exposure strategy of a depth sensor is based on the global brightness within its detection range; that is, the exposure time, exposure gain, and the like are adjusted according to the global brightness to achieve a desired brightness. Consequently, when the target object is in a high-dynamic environment, using the global brightness to adjust the exposure parameters of the depth sensor may cause the target object to be overexposed or underexposed, which may make the depth image acquired by the depth sensor inaccurate.
  • Some of the depth values in the depth image may be invalid, so that the target object cannot be detected, or is detected incorrectly, using the depth image.
  • Determining the brightness of the target object from the image output by the depth sensor and using it to adjust the exposure parameter of the depth sensor can effectively prevent the target object from being overexposed or underexposed, so that the depth image output by the depth sensor is accurate.
  • the method for controlling the exposure provided by the embodiment of the present invention is described in detail below.
  • FIG. 1 is a flowchart of a method for controlling exposure according to an embodiment of the present invention. As shown in FIG. 1, the method in the embodiment of the present invention may include:
  • Step S101 Acquire an image output by the depth sensor according to the current exposure parameter
  • The execution body of the control method may be an exposure control device, and further may be a processor of the control device, where the processor may be a dedicated processor or a general-purpose processor.
  • The depth sensor performs automatic exposure on its detection range based on the current exposure parameters. When shooting, an image of the surrounding environment is obtained; when the target object (for example, a user) is within the detection range of the depth sensor, the captured image also includes an image of the target object, from which the target object can be identified.
  • the processor can be electrically connected to the depth sensor, and the processor acquires an image of the depth sensor output.
  • The depth sensor may be any sensor that can output a depth image, or an image from which a depth image can be computed; specifically, it may be one or more of a binocular camera, a monocular camera, an RGB camera, a TOF camera, and an RGB-D camera. Accordingly, the image may be a grayscale image or an RGB image; the exposure parameter includes one or more of an exposure time, an exposure gain, and an aperture value.
  • Step S102 determining an image of the target object from the image
  • The processor determines the image corresponding to the target object from the image. For example, as shown in FIG. 2, when a gesture of the user is recognized by means of the depth sensor, the image corresponding to the user may be determined from the entire image.
  • Step S103 Determine a first exposure parameter according to a brightness of an image of the target object, wherein the first exposure parameter is used to control a next automatic exposure of the depth sensor.
  • Specifically, the brightness information of the image of the target object may be acquired, and the first exposure parameter is determined according to the brightness information of the image of the target object.
  • The first exposure parameter is the exposure parameter for controlling the next automatic exposure of the depth sensor; that is, at the next exposure, the first exposure parameter becomes the current exposure parameter.
  • The exposure control method provided by the embodiment of the present invention determines an image of the target object from the image output by the depth sensor and, according to the brightness of the image of the target object, determines a first exposure parameter for controlling the next automatic exposure of the depth sensor. The phenomenon that the target object in the image is overexposed or underexposed is thereby effectively prevented, so that the depth image acquired by the depth sensor is better suited for detecting and recognizing the target object, improving the accuracy with which the depth sensor detects the target object.
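  • The three steps above (acquire an image with the current exposure parameter, segment the target object, derive the first exposure parameter from the target's brightness) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `sensor` interface, the `segment_target` callback, and the proportional update rule are assumptions introduced for the example.

```python
def control_exposure(sensor, segment_target, target_brightness=128.0):
    """One iteration of target-driven exposure control (hedged sketch).

    `sensor` is assumed to expose a `.capture(exposure)` method returning
    an image (list of pixel rows) and an `.exposure` attribute holding the
    current exposure parameter; `segment_target` returns the list of pixel
    values belonging to the target object (steps S102/S103's segmentation
    is treated as a black box here).
    """
    image = sensor.capture(sensor.exposure)            # step S101
    target_pixels = segment_target(image)              # step S102
    avg = sum(target_pixels) / len(target_pixels)      # target brightness
    # Step S103: a simple proportional rule (an assumption, not the
    # patent's specific formula) sets the next automatic exposure.
    sensor.exposure = sensor.exposure * target_brightness / avg
    return sensor.exposure
```

In practice the update rule, metering weights, and clamping of the exposure range would be tuned per sensor; only the structure (target-only metering driving the next automatic exposure) follows the method described above.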
  • FIG. 3 is a flowchart of a method for controlling exposure according to an embodiment of the present invention. As shown in FIG. 3, on the basis of the embodiment shown in FIG. 1, the method in the embodiment of the present invention may include:
  • Step S301 Acquire an image output by the depth sensor according to the current exposure parameter
  • step S301 and step S101 are the same, and are not described here.
  • Step S302 acquiring a depth image corresponding to the image
  • the processor may acquire a depth image corresponding to the image, wherein the depth image may be used for detecting and identifying the target object.
  • the obtaining the depth image corresponding to the image may be implemented in the following manners:
  • In a feasible implementation manner, the depth image corresponding to the image output by the depth sensor is acquired directly.
  • Specifically, some depth sensors output a corresponding depth image in addition to an image.
  • For example, a TOF camera outputs, in addition to a grayscale image, a depth image corresponding to that grayscale image, and the processor can acquire both the image and its corresponding depth image.
  • In another feasible implementation manner, the acquiring the grayscale image output by the depth sensor comprises: acquiring at least two frames of images output by the depth sensor; and the acquiring the depth image corresponding to the grayscale image comprises: acquiring the depth image according to the at least two frames of images.
  • Specifically, some depth sensors cannot directly output a depth image; in that case, the depth image is determined from the images output by the depth sensor.
  • the depth sensor is a binocular camera
  • The binocular camera outputs two frames of grayscale images at the same time (a grayscale image output by the left eye and a grayscale image output by the right eye), and the processor can calculate the depth image from these two frames of grayscale images.
  • Alternatively, the depth sensor may be a monocular camera, and the processor may acquire two consecutive frames of grayscale images output by the monocular camera and determine the depth image according to the two consecutive frames of grayscale images.
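  • For the binocular case, the depth image is commonly obtained from the per-pixel disparity between the two rectified grayscale images via the standard pinhole-stereo relation depth = f * B / d, with focal length f in pixels and baseline B in meters. The sketch below assumes the disparity map has already been computed (the block-matching step is omitted); non-positive disparities are mapped to an invalid value, matching the notion of invalid depth values above.

```python
def disparity_to_depth(disparity, focal_px, baseline_m, invalid=0.0):
    """Convert a disparity map (list of rows of disparities in pixels)
    to a depth map in meters using depth = f * B / d.

    Pixels with zero or negative disparity produce `invalid`, mirroring
    the invalid depth values discussed in the text.  The calibration
    values (focal_px, baseline_m) are assumed to be known.
    """
    return [[focal_px * baseline_m / d if d > 0 else invalid for d in row]
            for row in disparity]
```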
  • Step S303 determining an image of the target object from the image according to the depth image
  • an image of the target object may be determined from the image according to the depth image, that is, an image belonging to the target object is determined from the entire image.
  • determining the image of the target object from the image based on the depth image comprises determining a grayscale image of the target object from one of the at least two frames of grayscale images based on the depth image.
  • the depth sensor outputs at least two frames of images
  • the processor may acquire the depth image according to the at least two frames of images, and the processor may further determine the image of the target object from one of the at least two frames of images according to the depth image.
  • For example, when the depth sensor is a binocular camera, the binocular camera outputs two frames of grayscale images at the same time (one from the left eye and one from the right eye). When the depth image is calculated, the grayscale image output by the right eye can be mapped onto the grayscale image output by the left eye to calculate the depth image, and the image of the target object may then be determined from the grayscale image output by the left eye according to the depth image.
  • the determining, according to the depth image, the image of the target object from the image comprises: determining, according to the depth image, a first target area of the target object in the image, determining from the image according to the first target area The image of the target object.
  • Specifically, the first target area of the target object in the image may be determined according to the depth image, where the first target area is the area occupied by the target object in the image; that is, which area of the image the target object occupies is determined. After the first target area is determined, the image of the target object can be acquired from the first target area.
  • Further, determining, according to the depth image, the first target area of the target object in the image may be implemented by: determining a second target area of the target object in the depth image; and determining, according to the second target area, the first target area of the target object in the grayscale image.
  • Specifically, the area occupied by the target object in the depth image, that is, the second target area, may be determined first. Because the depth image and its corresponding image have a mapping relationship, after the second target area in the depth image is acquired, the area occupied by the target object in the image, that is, the first target area, may be determined according to the second target area.
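  • Because the depth image and its corresponding grayscale image are assumed to be pixel-aligned (the mapping relationship mentioned above), carrying the second target area over to the first target area can be as simple as reusing the same pixel coordinates. A minimal sketch, with the second target area expressed as an assumed depth interval:

```python
def extract_target_pixels(image, depth, near, far):
    """Return the image pixels whose depth falls inside [near, far].

    The depth interval defines the second target area in the depth map;
    since both maps are assumed pixel-aligned, the same coordinates give
    the first target area in the image.  The interval-based mask is an
    illustrative stand-in for the patent's connected-domain selection.
    """
    return [image[r][c]
            for r in range(len(depth))
            for c in range(len(depth[r]))
            if near <= depth[r][c] <= far]
```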
  • Further, the determining the second target area of the target object in the depth image comprises: determining connected domains in the depth image; and determining a connected domain that satisfies a preset requirement as the second target area of the target object in the depth image.
  • the connected domain in the depth image may be determined, wherein the area occupied by the target object in the image is one or more of the obtained connected domains.
  • the processor can detect the characteristics of each connected domain, and determine the connected domain that meets the preset requirement as the second target area.
  • Further, the determining the connected domain that meets the preset requirement as the second target area of the target object in the depth image comprises: determining an average depth of each of the connected domains; and determining a connected domain whose number of pixels is greater than or equal to a pixel-number threshold corresponding to its average depth as the second target area of the target object in the depth image.
  • For example, the upper-body portion of a general user has an area of about 0.4 square meters (those skilled in the art can adjust this according to actual conditions).
  • The size of the area occupied by the target object in the depth image is related to the distance between the target object and the depth sensor; that is, the number of pixels corresponding to the target object in the depth image is related to that distance. The closer the target object is to the depth sensor, the larger the number of pixels corresponding to the target object in the depth image; the farther away it is, the smaller that number of pixels.
  • Further, the determining the connected domain whose number of pixels is greater than or equal to the pixel-number threshold corresponding to the average depth as the second target area of the target object in the depth image includes: determining the connected domain whose number of pixels is greater than or equal to the pixel-number threshold corresponding to the average depth and whose average depth is smallest as the second target area of the target object in the depth image.
  • When the processor filters the connected domains, the search may start from the connected domain with the smallest average depth; once a connected domain whose number of pixels is greater than or equal to the pixel-number threshold corresponding to its average depth is found, the search may be stopped, and the processor determines that connected domain as the second target area of the target object in the depth image.
  • Since the user being detected is generally closest to the depth sensor, the connected domain whose number of pixels is greater than or equal to the pixel-number threshold corresponding to its average depth and whose average depth is smallest is determined as the second target area of the target object in the depth image.
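  • The connected-domain filtering described above can be sketched as follows. The pixel-count threshold here shrinks with the square of the average depth, a pinhole-model assumption (expected pixels on target of physical area A is roughly A * f^2 / depth^2); the concrete formula, the 0.4 m2 area, and the focal length are illustrative and not taken from the patent.

```python
def select_target_region(components, focal_px=400.0, area_m2=0.4):
    """Pick the second target area from candidate connected domains.

    Each component is given as the list of depth values (meters) of its
    pixels.  A component qualifies if its pixel count meets a threshold
    that decreases with the square of its average depth (pinhole-model
    assumption); among qualifying components, the one with the smallest
    average depth wins, matching the closest-object heuristic above.
    Returns None if no component qualifies.
    """
    best = None
    for comp in components:
        avg_depth = sum(comp) / len(comp)
        threshold = area_m2 * focal_px ** 2 / avg_depth ** 2
        if len(comp) >= threshold:
            if best is None or avg_depth < best[0]:
                best = (avg_depth, comp)
    return None if best is None else best[1]
```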
  • Step S304 Determine a first exposure parameter according to a brightness of an image of the target object, wherein the first exposure parameter is used to control a next automatic exposure of the depth sensor.
  • step S304 and step S103 are the same, and are not described here.
  • FIG. 4 is a flowchart of a method for controlling exposure according to an embodiment of the present invention. As shown in FIG. 4, on the basis of the embodiments described in FIG. 1 and FIG. 3, the method in the embodiment of the present invention may include:
  • Step S401 Acquire an image output by the depth sensor according to the current exposure parameter
  • step S401 and step S101 are the same, and details are not described herein again.
  • Step S402 determining an image of the target object from the image
  • step S402 and step S102 are the same, and are not described here.
  • Step S403 determining an average brightness of the image of the target object, and determining a first exposure parameter according to the average brightness, wherein the first exposure parameter is used to control the next automatic exposure of the depth sensor.
  • the average brightness of the target object can be determined, and the first exposure parameter is determined according to the average brightness.
  • the determining the first exposure parameter according to the average brightness comprises: determining the first exposure parameter according to the average brightness and the preset brightness. Specifically, a difference between the average brightness and the preset brightness may be determined, and when the difference is greater than or equal to the preset brightness threshold, the first exposure parameter is determined according to the difference.
  • the average brightness is an average brightness of an image corresponding to the target object in the current image
  • The preset brightness may be the desired average brightness of the target object. If the average brightness of the target object in the current image differs from the preset brightness, the depth image acquired by the depth sensor may not be usable for detection and recognition of the target object; the first exposure parameter may then be determined according to the difference, and the first exposure parameter controls the next automatic exposure of the depth sensor. When the difference is less than the preset brightness threshold, the average brightness of the target object in the image has converged, or nearly converged, to the preset brightness, and the exposure parameter of the next automatic exposure of the depth sensor need not be adjusted.
  • The first exposure parameter is used to control the next exposure of the depth sensor; specifically, when the next automatic exposure is performed, the first exposure parameter is used as the current exposure parameter.
  • When the difference is less than the preset brightness threshold, determination of the first exposure parameter is stopped, and the current exposure parameter is locked as the final exposure parameter of the depth sensor; the exposure of the depth sensor is then controlled using the final exposure parameter during subsequent automatic exposures.
  • For example, when detection of the target object or a part of the target object begins (for example, the target object may be a user whose gesture is to be detected from the depth image acquired by the processor through the depth sensor), the exposure control method of the foregoing embodiment can be used to quickly converge the average brightness of the user in the image to the preset brightness; the current exposure parameter can then be locked as the final exposure parameter, and that exposure parameter can be used to control subsequent exposures of the depth sensor.
  • When detection is performed again, the exposure parameters of the depth sensor are re-determined using the exposure method of the foregoing embodiment.
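  • The converge-then-lock behaviour described above can be sketched as an iteration: keep deriving a first exposure parameter while the difference between the target's average brightness and the preset brightness is at or above the threshold, and lock the current exposure parameter as the final one once the difference falls below it. The linear brightness model and the proportional update in this sketch are assumptions for illustration.

```python
def converge_and_lock(measure_brightness, exposure, preset=128.0,
                      threshold=4.0, max_iters=20):
    """Iterate automatic exposure until the target brightness converges.

    `measure_brightness(exposure)` is assumed to return the target
    object's average brightness when shooting with that exposure.  Once
    |avg - preset| < threshold, the current exposure is locked and
    returned as the final exposure parameter; otherwise a proportional
    update (an assumed rule) produces the first exposure parameter for
    the next automatic exposure.
    """
    for _ in range(max_iters):
        avg = measure_brightness(exposure)
        if abs(avg - preset) < threshold:
            return exposure           # locked: final exposure parameter
        exposure *= preset / avg      # next shot's exposure parameter
    return exposure                   # give up after max_iters
```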
  • FIG. 6 is a structural diagram of a control device for exposure according to an embodiment of the present invention.
  • the device 600 in the embodiment of the present invention may include: a memory and a processor, where
  • the memory 601 is configured to store program instructions
  • the processor 602 invokes the program instructions and, when the program instructions are executed, is configured to perform the following operations:
  • the first exposure parameter is determined according to the brightness of the image of the target object, wherein the first exposure parameter is used to control the next automatic exposure of the depth sensor.
  • the processor 602 is further configured to acquire a depth image corresponding to the grayscale image
  • An image of the target object is determined from the image based on the depth image.
  • the processor 602 determines an image of the target object from the image according to the depth image
  • the processor 602 is configured to:
  • An image of the target object is determined from the image based on the first target area.
  • when the processor 602 determines, according to the depth image, the first target area of the target object in the image, it is specifically configured to:
  • when the processor 602 determines the second target area of the target object in the depth image,
  • the processor is configured to:
  • the connected domain that satisfies the preset requirement is determined as the second target area of the target object in the depth image.
  • the processor 602 determines whether the connected domain meets a preset requirement, specifically, the processor is configured to:
  • a connected domain having a pixel number greater than or equal to a pixel number threshold corresponding to the average depth is determined as a second target region of the target object in the depth image.
  • when the processor 602 determines the connected domain whose number of pixels is greater than or equal to the pixel-number threshold corresponding to the average depth as the second target area of the target object in the depth image, it is specifically configured to:
  • a connected domain having a pixel number greater than or equal to a pixel threshold corresponding to the average depth and having a minimum average depth is determined as a second target region of the target object in the depth image.
  • the processor 602 acquires the grayscale image output by the depth sensor
  • the processor is configured to:
  • when the processor 602 acquires the depth image corresponding to the image, it is specifically configured to:
  • the depth image is acquired according to the at least two frames of images.
  • when the processor 602 determines the image of the target object from the image according to the depth image,
  • the processor 602 is configured to:
  • when the processor 602 determines the first exposure parameter according to the brightness of the image of the target object, it is specifically configured to:
  • a first exposure parameter is determined based on the average brightness.
  • when determining the first exposure parameter according to the average brightness, the processor 602 is specifically configured to:
  • the first exposure parameter is determined according to the average brightness and the preset brightness.
  • when the processor 602 determines the first exposure parameter according to the average brightness and the preset brightness, it is specifically configured to:
  • the first exposure parameter is determined based on the difference.
  • the processor 602 is further configured to:
  • the depth sensor comprises at least one of a binocular camera and a TOF camera.
  • the exposure parameter includes at least one of an exposure time, an exposure gain, and an aperture.
  • FIG. 7 is a structural diagram of a drone according to an embodiment of the present invention.
  • the drone 700 in this embodiment may include the exposure control device 701 according to any one of the preceding embodiments.
  • The drone may further include a depth sensor 702, wherein the exposure control device 701 may be communicatively connected with the depth sensor 702 to control automatic exposure of the depth sensor 702, and the drone further includes:
  • a fuselage 703 and a power system 704 disposed on the fuselage 703, wherein the power system is used to provide flight power for the drone.
  • The drone further includes a carrying member 705 disposed on the body 703, wherein the carrying member 705 may be a two-axis or three-axis gimbal; the depth sensor may be mounted on the body, or may be mounted on the carrying member 705.
  • A depth sensor mounted on the body is shown here as a schematic illustration.
  • The carrying member 705 is used to carry the shooting device 706 of the drone; the user can control the drone through a control terminal and receive the images taken by the shooting device 706.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of the units is only a logical function division; in actual implementation there may be other ways of dividing. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • The above software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods of the various embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Signal Processing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

An exposure control method and device, and an unmanned aerial vehicle. The method includes: obtaining an image output by a depth sensor according to a current exposure parameter; determining an image of a target object from the image; and determining a first exposure parameter according to the brightness of the image of the target object, where the first exposure parameter is used to control the next automatic exposure of the depth sensor.

Description

Exposure control method and device, and unmanned aerial vehicle. Technical Field
Embodiments of the present invention relate to the field of control, and in particular to an exposure control method and device, and an unmanned aerial vehicle.
Background
At present, obtaining a depth image through a depth sensor and using the depth image to recognize and track a target object is an important means of target object detection. However, when the target object is in a high-dynamic-range scene, for example, when a user wearing white clothes stands in front of a black curtain and the user's gesture needs to be recognized, the exposure control methods of existing depth sensors may cause the target object to be overexposed or underexposed, so that some depth values in the depth image obtained by the depth sensor become invalid, which in turn causes detection and recognition of the target object to fail.
Summary of the Invention
Embodiments of the present invention provide an exposure control method and device, and an unmanned aerial vehicle, so as to eliminate overexposure or underexposure of a target object, make the depth image obtained by the depth sensor more accurate, and improve the success rate of detecting the target object.
A first aspect of the embodiments of the present invention provides an exposure control method, including:
obtaining an image output by a depth sensor according to a current exposure parameter;
determining an image of a target object from the image; and
determining a first exposure parameter according to the brightness of the image of the target object, where the first exposure parameter is used to control the next automatic exposure of the depth sensor.
A second aspect of the embodiments of the present invention provides an exposure control device, including a memory and a processor, where
the memory is configured to store program instructions; and
the processor invokes the program instructions and, when the program instructions are executed, is configured to:
obtain an image output by a depth sensor according to a current exposure parameter;
determine an image of a target object from the image; and
determine a first exposure parameter according to the brightness of the image of the target object, where the first exposure parameter is used to control the next automatic exposure of the depth sensor.
A third aspect of the embodiments of the present invention provides an unmanned aerial vehicle, including the exposure control device according to the second aspect.
According to the exposure control method and device and the unmanned aerial vehicle provided by the embodiments of the present invention, an image of a target object is determined from the image output by a depth sensor, and a first exposure parameter used to control the next automatic exposure of the depth sensor is determined according to the brightness of the image of the target object. This can effectively eliminate overexposure or underexposure of the target object, make the depth image obtained by the depth sensor more accurate, and improve the success rate of detecting the target object.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an exposure control method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of determining an image of a target object from an image according to an embodiment of the present invention.
FIG. 3 is a flowchart of an exposure control method according to another embodiment of the present invention.
FIG. 4 is a flowchart of an exposure control method according to another embodiment of the present invention.
FIG. 5 is a flowchart of an exposure control method according to another embodiment of the present invention.
FIG. 6 is a structural diagram of an exposure control device according to an embodiment of the present invention.
FIG. 7 is a structural diagram of an unmanned aerial vehicle according to an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that when a component is described as being "fixed to" another component, it may be directly on the other component, or an intermediate component may exist between them. When a component is considered to be "connected to" another component, it may be directly connected to the other component, or an intermediate component may exist at the same time.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments described below, and the features in those embodiments, may be combined with each other as long as no conflict arises.
At present, the exposure strategy of a depth sensor performs exposure according to the global brightness within the detection range, that is, exposure parameters such as exposure time and exposure gain are adjusted according to the global brightness to reach a desired brightness. In this case, when the target object is in a high-dynamic environment (for example, a scene with drastic changes between light and dark), using the global brightness to adjust the exposure parameters of the depth sensor causes the target object to be overexposed or underexposed. The depth image obtained by the depth sensor is then inaccurate and some of its depth values may be invalid, so that the target object cannot be detected, or is detected incorrectly, using that depth image. In the embodiments of the present invention, the brightness of the target object determined from the image output by the depth sensor is used to adjust the exposure parameters of the depth sensor, which can effectively prevent the target object from being overexposed or underexposed and makes the depth image output by the depth sensor accurate. The exposure control method provided by the embodiments of the present invention is described in detail below.
An embodiment of the present invention provides an exposure control method. FIG. 1 is a flowchart of an exposure control method according to an embodiment of the present invention. As shown in FIG. 1, the method in this embodiment may include:
Step S101: obtaining an image output by a depth sensor according to a current exposure parameter.
Specifically, the method may be executed by an exposure control device, and further by a processor of the control device, where the processor may be a dedicated processor or a general-purpose processor. The depth sensor automatically exposes according to the current exposure parameter and photographs the environment within its detection range to obtain an image of the surrounding environment. If the target object (for example, a user) is within the detection range of the depth sensor, the captured image also includes the image of the target object, and the target object may be an object that needs to be recognized. The processor may be electrically connected to the depth sensor, and the processor obtains the image output by the depth sensor. The depth sensor may be any sensor that can output a depth image, or from whose output images a depth image can be obtained, and specifically may be one or more of a binocular camera, a monocular camera, an RGB camera, a TOF camera, and an RGB-D camera. Accordingly, the image may be a grayscale image or an RGB image, and the exposure parameter includes one or more of exposure time, exposure gain, and aperture value.
Step S102: determining an image of a target object from the image.
Specifically, after obtaining the image output by the depth sensor, the processor determines the image corresponding to the target object from the image. For example, as shown in FIG. 2, when a user's gesture is recognized according to the depth sensor, the image corresponding to the user may be determined from the whole image.
Step S103: determining a first exposure parameter according to the brightness of the image of the target object, where the first exposure parameter is used to control the next automatic exposure of the depth sensor.
Specifically, after the image of the target object is determined from the image, the brightness information of the image of the target object may be further obtained, and the first exposure parameter is determined according to that brightness information, where the first exposure parameter is used to control the next automatic exposure of the depth sensor. Further, the first exposure parameter is the exposure parameter that controls the next automatic exposure of the depth sensor, that is, at the next exposure, the first exposure parameter becomes the current exposure parameter.
According to the exposure control method provided by this embodiment of the present invention, the image of the target object is determined from the image output by the depth sensor, and the first exposure parameter used to control the next automatic exposure of the depth sensor is determined according to the brightness of the image of the target object. This can effectively prevent the target object in the image from being overexposed or underexposed, makes the depth image obtained by the depth sensor more suitable for detecting and recognizing the target object, and improves the accuracy with which the depth sensor detects the target object.
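The disclosure does not fix how step S103 maps the measured brightness to a new exposure parameter. A minimal proportional-update sketch is given below; the target brightness of 128 and the clamp bounds are illustrative assumptions, not values from the embodiments.

```python
def next_exposure(current_exposure, mean_brightness, target_brightness=128.0):
    """Proportional auto-exposure update: scale the exposure parameter by the
    ratio of desired to measured brightness of the target object's image."""
    ratio = target_brightness / max(mean_brightness, 1e-6)  # avoid divide-by-zero
    ratio = min(max(ratio, 0.25), 4.0)  # clamp to damp oscillation (assumed bounds)
    return current_exposure * ratio
```

The same update applies equally to exposure time or exposure gain, since both scale the captured brightness approximately linearly.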
An embodiment of the present invention provides an exposure control method. FIG. 3 is a flowchart of an exposure control method according to an embodiment of the present invention. As shown in FIG. 3, on the basis of the embodiment described in FIG. 1, the method in this embodiment may include:
Step S301: obtaining an image output by a depth sensor according to a current exposure parameter.
The specific method and principle of step S301 are the same as those of step S101 and are not repeated here.
Step S302: obtaining a depth image corresponding to the image.
Specifically, the processor may obtain a depth image corresponding to the image, where the depth image may be used to detect and recognize the target object. The depth image corresponding to the image may be obtained in the following ways:
One feasible implementation: obtaining the depth image that the depth sensor outputs together with the image. Specifically, some depth sensors output a corresponding depth image in addition to the image; for example, a TOF camera outputs, in addition to a grayscale image, the depth image corresponding to that grayscale image, and the processor may obtain the depth image corresponding to the image.
Another feasible implementation: obtaining the image output by the depth sensor includes obtaining at least two frames of images output by the depth sensor, and obtaining the depth image corresponding to the image includes obtaining the depth image from the at least two frames of images. Specifically, some depth sensors cannot directly output a depth image, and the depth image is determined from the images output by the depth sensor. For example, when the depth sensor is a binocular camera, the binocular camera outputs two grayscale images at the same moment (one from the left lens and one from the right lens), and the processor may compute the depth image from the two grayscale images. In addition, the depth sensor may be a monocular camera; the processor may obtain two consecutive frames of grayscale images output by the monocular camera and determine the depth image from those two consecutive frames.
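For the binocular case, once a disparity map has been matched between the left and right grayscale frames, depth follows from stereo triangulation. A sketch is given below; the focal length and baseline values are illustrative assumptions, not parameters from the disclosure.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px=350.0, baseline_m=0.1):
    """Convert a stereo disparity map (in pixels) to depth in meters via
    Z = f * B / d. Pixels with non-positive disparity are invalid (depth 0),
    which corresponds to the invalid depth values mentioned in the text."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```

The disparity map itself would come from a stereo-matching step (block matching or similar), which is abstracted away here.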
Step S303: determining an image of a target object from the image according to the depth image.
Specifically, after the depth image corresponding to the image is obtained, the image of the target object may be determined from the image according to the depth image, that is, the part of the whole image that belongs to the target object is determined.
In some embodiments, determining the image of the target object from the image according to the depth image includes: determining the grayscale image of the target object from one of the at least two frames of grayscale images according to the depth image. Specifically, as described above, the depth sensor outputs at least two frames of images, the processor may obtain the depth image from the at least two frames of images, and the processor may further determine the image of the target object from one of the at least two frames according to the depth image. For example, when the depth sensor is a binocular camera, it outputs two grayscale images at the same moment (one from the left lens and one from the right lens); when computing the depth image, the right image may be mapped onto the left image to obtain the depth image, and the image of the target object may then be determined from the left grayscale image according to the depth image.
Further, determining the image of the target object from the image according to the depth image includes: determining a first target region of the target object in the image according to the depth image, and determining the image of the target object from the image according to the first target region. Specifically, the first target region of the target object in the image may be determined according to the depth image; the first target region is the region occupied by the target object in the image, that is, it identifies which region of the image the target object occupies. After the first target region is determined, the image of the target object can be obtained from the first target region.
Further, determining the first target region of the target object in the image according to the depth image may be implemented as follows: determining a second target region of the target object in the depth image, and determining the first target region of the target object in the image according to the second target region. Specifically, after the depth image is obtained, since the depth image facilitates target detection and recognition, the region occupied by the target object in the depth image, namely the second target region, may be determined first. Since the depth image has a mapping relationship with its corresponding image, after the second target region of the target object in the depth image is obtained, the region occupied by the target object in the image, namely the first target region, can be determined according to the second target region.
Further, determining the second target region of the target object in the depth image includes: determining connected regions in the depth image, and determining a connected region that satisfies a preset requirement as the second target region of the target object in the depth image. Specifically, since the depth information of the target object generally varies continuously, the connected regions in the depth image can be determined, and the region occupied by the target object in the image is one or more of the obtained connected regions. The processor may examine the features of each connected region and determine a connected region that satisfies the preset requirement as the second target region.
Further, determining a connected region that satisfies the preset requirement as the second target region of the target object in the depth image includes: determining the average depth of each of the connected regions, and determining a connected region whose pixel count is greater than or equal to a pixel-count threshold corresponding to its average depth as the second target region of the target object in the depth image.
Specifically, the size of the target object, or of the relevant part of the target object, is roughly fixed; for example, when the target object is a user, the area of a typical user's upper body is about 0.4 square meters (a person skilled in the art may adjust this according to the actual situation). For a target object of fixed area, the area it occupies in the depth image should be related to the distance between the target object and the depth sensor. That is, the number of pixels corresponding to the target object in the depth image is related to that distance: the closer the target object is to the depth sensor, the more pixels it occupies, and the farther it is, the fewer pixels it occupies. For example, when the user is 0.5 m from the depth sensor, the user should correspond to about 12250 pixels in the depth image (at a resolution of 320×240 and a focal length of about f = 350); when the user is 1 m from the depth sensor, the user should correspond to about 3062 pixels. Therefore, different pixel-count thresholds may be set for different distances, with each distance corresponding to one pixel-count threshold. The processor filters the connected regions and determines the average depth of each connected region; when the pixel count of a connected region is greater than or equal to the pixel-count threshold corresponding to the average depth of that region, the connected region is determined as the second target region of the target object in the depth image.
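For a fixed-size target, the expected pixel footprint scales with the inverse square of the distance, so the distance-dependent threshold above can be derived from a single reference measurement. A sketch using the reference values from this paragraph (12250 pixels at 0.5 m):

```python
def pixel_count_threshold(mean_depth_m, ref_pixels=12250.0, ref_depth_m=0.5):
    """Expected minimum pixel footprint of the target at a given mean depth,
    scaled from a reference measurement by the inverse-square law."""
    return ref_pixels * (ref_depth_m / mean_depth_m) ** 2
```

At 1 m this gives about 3062 pixels, matching the figure quoted in the text.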
Further, determining a connected region whose pixel count is greater than or equal to the pixel threshold corresponding to the average depth as the second target region of the target object in the depth image includes: determining the connected region whose pixel count is greater than or equal to the pixel threshold corresponding to its average depth and whose average depth is the smallest as the second target region of the target object in the depth image. Specifically, when filtering the connected regions, the processor may start searching from the connected region with the smallest average depth and stop once it finds a connected region whose pixel count is greater than or equal to the pixel threshold corresponding to its average depth; the processor then determines that connected region as the second target region of the target object in the depth image. Usually, when detecting the target object (for example, when detecting a user or a user's gesture), the distance between the user and the depth sensor should be the smallest, so the connected region whose pixel count is greater than or equal to the pixel threshold corresponding to its average depth and whose average depth is the smallest is determined as the second target region of the target object in the depth image.
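The filtering rule above (keep connected regions whose pixel count meets the depth-dependent threshold, then take the nearest one) can be sketched as follows; the per-region statistics are assumed to be precomputed by an earlier connected-component step:

```python
def select_target_region(regions, ref_pixels=12250.0, ref_depth_m=0.5):
    """regions: list of (mean_depth_m, pixel_count) tuples, one per connected
    region of the depth image. Returns the qualifying region with the smallest
    mean depth, or None if no region passes the threshold."""
    def threshold(depth):
        # inverse-square scaling of the reference pixel count with distance
        return ref_pixels * (ref_depth_m / depth) ** 2
    qualifying = [r for r in regions if r[1] >= threshold(r[0])]
    return min(qualifying, key=lambda r: r[0], default=None)
```

Returning None corresponds to the detection-failure case, in which the exposure search of the later embodiments would be restarted.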
Step S304: determining a first exposure parameter according to the brightness of the image of the target object, where the first exposure parameter is used to control the next automatic exposure of the depth sensor.
The specific method and principle of step S304 are the same as those of step S103 and are not repeated here.
An embodiment of the present invention provides an exposure control method. FIG. 4 is a flowchart of an exposure control method according to an embodiment of the present invention. As shown in FIG. 4, on the basis of the embodiments described in FIG. 1 and FIG. 3, the method in this embodiment may include:
Step S401: obtaining an image output by a depth sensor according to a current exposure parameter.
The specific method and principle of step S401 are the same as those of step S101 and are not repeated here.
Step S402: determining an image of a target object from the image.
The specific method and principle of step S402 are the same as those of step S102 and are not repeated here.
Step S403: determining the average brightness of the image of the target object, and determining a first exposure parameter according to the average brightness, where the first exposure parameter is used to control the next automatic exposure of the depth sensor.
Specifically, after the image of the target object is determined, the average brightness of the target object can be determined, and the first exposure parameter is determined according to the average brightness.
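Once the first target region is available as a mask over the image, averaging the brightness over only the target object's pixels, rather than the whole frame, is straightforward. A minimal sketch:

```python
import numpy as np

def region_mean_brightness(gray, mask):
    """Mean brightness of the pixels selected by a boolean target-region mask.
    gray: HxW grayscale image; mask: HxW boolean array (the first target region)."""
    gray = np.asarray(gray, dtype=np.float64)
    mask = np.asarray(mask, dtype=bool)
    return float(gray[mask].mean())
```

It is this region-restricted mean, not the global mean, that makes the method robust in high-dynamic scenes.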
Further, determining the first exposure parameter according to the average brightness includes: determining the first exposure parameter according to the average brightness and a preset brightness. Specifically, the difference between the average brightness and the preset brightness may be determined; when the difference is greater than or equal to a preset brightness threshold, the first exposure parameter is determined according to the difference. Here, the average brightness is the average brightness of the image corresponding to the target object in the current image, and the preset brightness may be the desired average brightness of the target object. If the average brightness of the target object in the current image differs significantly from the preset brightness, the depth image obtained by the depth sensor may not be suitable for detecting and recognizing the target object; the first exposure parameter may then be determined according to the difference and used to control the next automatic exposure of the depth sensor. When the difference is smaller than the preset brightness threshold, the average brightness of the target object in the image has converged, or nearly converged, to the preset brightness, and the exposure parameter for the next automatic exposure of the depth sensor need no longer be adjusted.
At the next automatic exposure of the depth sensor, the first exposure parameter is taken as the current exposure parameter to control the automatic exposure of the depth sensor, and the above steps are repeated until the difference is smaller than the preset brightness threshold, at which point the current exposure parameter is locked as the final exposure parameter controlling the automatic exposure of the depth sensor. Specifically, as shown in FIG. 5, when the first exposure parameter is determined, it is used to control the next exposure of the depth sensor. At the next automatic exposure, the first exposure parameter is taken as the current exposure parameter, the depth sensor exposes automatically according to the current exposure parameter, the processor obtains the image output by the depth sensor, determines the image of the target object from it, determines the average brightness of the image of the target object, and further determines whether the difference between the average brightness and the preset brightness is greater than the preset brightness threshold. When the difference is greater than the preset brightness threshold, a new first exposure parameter is determined according to the difference and the above steps are repeated. When the difference is smaller than the preset brightness threshold, determination of the first exposure parameter stops and the current exposure parameter is locked as the final exposure parameter of the depth sensor; in subsequent automatic exposures of the depth sensor, this final exposure parameter is used to control the exposure.
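The iterate-until-converged behavior of FIG. 5 can be sketched as a simple feedback loop. The proportional update rule, tolerance, and iteration cap below are assumptions for illustration; the disclosure only requires that the loop stop once the brightness difference falls below the threshold.

```python
def converge_exposure(measure, exposure, target=128.0, tol=5.0, max_iters=10):
    """measure(exposure) -> average brightness of the target object's image
    captured under that exposure (capture and segmentation abstracted away).
    Returns the locked final exposure parameter."""
    for _ in range(max_iters):
        mean = measure(exposure)
        if abs(mean - target) <= tol:
            break  # converged: lock the current exposure parameter
        exposure *= target / max(mean, 1e-6)  # new first exposure parameter
    return exposure
```

With a roughly linear brightness response, the loop typically converges in one or two iterations, which matches the "quick convergence" behavior described below.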
In practical applications, when detection of the target object, or of a part of the target object, is started (for example, the target object may be a user, and detection of the user's gesture is started, that is, the processor detects the user's gesture through the depth image obtained by the depth sensor), the exposure control method of the foregoing embodiments allows the average brightness of the user in the image to converge quickly to the preset brightness, so that the current exposure parameter can be locked as the final exposure parameter and used to control subsequent exposures of the depth sensor. When detection of the target object fails, the exposure method of the foregoing embodiments is used to re-determine the exposure parameter of the depth sensor.
An embodiment of the present invention provides an exposure control device. FIG. 6 is a structural diagram of an exposure control device according to an embodiment of the present invention. As shown in FIG. 6, the device 600 in this embodiment may include a memory 601 and a processor 602, where
the memory 601 is configured to store program instructions; and
the processor 602 invokes the program instructions and, when the program instructions are executed, is configured to:
obtain an image output by a depth sensor according to a current exposure parameter;
determine an image of a target object from the image; and
determine a first exposure parameter according to the brightness of the image of the target object, where the first exposure parameter is used to control the next automatic exposure of the depth sensor.
Optionally, the processor 602 is further configured to obtain a depth image corresponding to the image; and when determining the image of the target object from the image, the processor is specifically configured to: determine the image of the target object from the image according to the depth image.
Optionally, when determining the image of the target object from the image according to the depth image, the processor 602 is specifically configured to: determine a first target region of the target object in the image according to the depth image; and determine the image of the target object from the image according to the first target region.
Optionally, when determining the first target region of the target object in the image according to the depth image, the processor 602 is specifically configured to: determine a second target region of the target object in the depth image; and determine the first target region of the target object in the image according to the second target region.
Optionally, when determining the second target region of the target object in the depth image, the processor 602 is specifically configured to: determine connected regions in the depth image; and determine a connected region that satisfies a preset requirement as the second target region of the target object in the depth image.
Optionally, when determining whether the connected regions satisfy the preset requirement, the processor 602 is specifically configured to: determine the average depth of each of the connected regions; and determine a connected region whose pixel count is greater than or equal to the pixel-count threshold corresponding to its average depth as the second target region of the target object in the depth image.
Optionally, when determining a connected region whose pixel count is greater than or equal to the pixel-count threshold corresponding to the average depth as the second target region of the target object in the depth image, the processor 602 is specifically configured to: determine the connected region whose pixel count is greater than or equal to the pixel threshold corresponding to its average depth and whose average depth is the smallest as the second target region of the target object in the depth image.
Optionally, when obtaining the image output by the depth sensor, the processor 602 is specifically configured to: obtain at least two frames of images output by the depth sensor; and when obtaining the depth image corresponding to the image, the processor 602 is specifically configured to: obtain the depth image from the at least two frames of images.
Optionally, when determining the image in the target region from the image according to the depth image, the processor 602 is specifically configured to: determine the image of the target object from one of the at least two frames of images according to the depth image.
Optionally, when determining the first exposure parameter according to the brightness of the image of the target object, the processor 602 is specifically configured to: determine the average brightness of the image of the target object; and determine the first exposure parameter according to the average brightness.
Optionally, when determining the first exposure parameter according to the average brightness, the processor 602 is specifically configured to: determine the first exposure parameter according to the average brightness and a preset brightness.
Optionally, when determining the first exposure parameter according to the average brightness and the preset brightness, the processor 602 is specifically configured to: determine the difference between the average brightness and the preset brightness; and when the difference is greater than a brightness threshold, determine the first exposure parameter according to the difference.
Optionally, the processor 602 is further configured to: take the first exposure parameter as the current exposure parameter and repeat the above steps until the difference is smaller than or equal to the brightness threshold; and lock the current exposure parameter as the final exposure parameter controlling the automatic exposure of the depth sensor.
Optionally, the depth sensor includes at least one of a binocular camera and a TOF camera.
Optionally, the exposure parameter includes at least one of exposure time, exposure gain, and aperture.
An embodiment of the present invention provides an unmanned aerial vehicle. FIG. 7 is a structural diagram of an unmanned aerial vehicle according to an embodiment of the present invention. As shown in FIG. 7, the UAV 700 in this embodiment may include the exposure control device 701 according to any of the foregoing embodiments. Specifically, the UAV may also include a depth sensor 702, where the exposure control device 701 may be communicatively connected to the depth sensor 702 and is used to control the automatic exposure of the depth sensor 702. The UAV further includes a fuselage 703 and a power system 704 disposed on the fuselage 703, where the power system provides flight power for the UAV. In addition, the UAV includes a carrier 705 disposed on the fuselage 703, where the carrier 705 may be a two-axis or three-axis gimbal. The depth sensor may be mounted on the fuselage, or it may be mounted on the carrier 705; for purposes of illustration, the depth sensor is shown here as mounted on the fuselage. When the depth sensor is mounted on the fuselage, the carrier 705 is used to carry the photographing device 706 of the UAV; a user may control the UAV through a control terminal and receive the images captured by the photographing device 706.
In the several embodiments provided in the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other ways of dividing, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
A person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (31)

  1. An exposure control method, comprising:
    obtaining an image output by a depth sensor according to a current exposure parameter;
    determining an image of a target object from the image; and
    determining a first exposure parameter according to the brightness of the image of the target object, wherein the first exposure parameter is used to control the next automatic exposure of the depth sensor.
  2. The method according to claim 1, further comprising:
    obtaining a depth image corresponding to the image;
    wherein determining the image of the target object from the image comprises:
    determining the image of the target object from the image according to the depth image.
  3. The method according to claim 2, wherein
    determining the image of the target object from the image according to the depth image comprises:
    determining a first target region of the target object in the image according to the depth image; and
    determining the image of the target object from the image according to the first target region.
  4. The method according to claim 3, wherein
    determining the first target region of the target object in the image according to the depth image comprises:
    determining a second target region of the target object in the depth image; and
    determining the first target region of the target object in the image according to the second target region.
  5. The method according to claim 4, wherein
    determining the second target region of the target object in the depth image comprises:
    determining connected regions in the depth image; and
    determining a connected region that satisfies a preset requirement as the second target region of the target object in the depth image.
  6. The method according to claim 5, wherein
    determining whether the connected regions satisfy the preset requirement comprises:
    determining the average depth of each of the connected regions; and
    determining a connected region whose pixel count is greater than or equal to a pixel-count threshold corresponding to the average depth as the second target region of the target object in the depth image.
  7. The method according to claim 6, wherein
    determining a connected region whose pixel count is greater than or equal to the pixel threshold corresponding to the average depth as the second target region of the target object in the depth image comprises:
    determining the connected region whose pixel count is greater than or equal to the pixel threshold corresponding to the average depth and whose average depth is the smallest as the second target region of the target object in the depth image.
  8. The method according to any one of claims 2 to 7, wherein
    obtaining the image output by the depth sensor comprises:
    obtaining at least two frames of images output by the depth sensor; and
    obtaining the depth image corresponding to the image comprises:
    obtaining the depth image from the at least two frames of images.
  9. The method according to claim 8, wherein
    determining the image in the target region from the image according to the depth image comprises:
    determining the image of the target object from one of the at least two frames of images according to the depth image.
  10. The method according to any one of claims 1 to 9, wherein
    determining the first exposure parameter according to the brightness of the target image comprises:
    determining the average brightness of the image of the target object; and
    determining the first exposure parameter according to the average brightness.
  11. The method according to claim 10, wherein
    determining the first exposure parameter according to the average brightness comprises:
    determining the first exposure parameter according to the average brightness and a preset brightness.
  12. The method according to claim 11, wherein
    determining the first exposure parameter according to the average brightness and the preset brightness comprises:
    determining the difference between the average brightness and the preset brightness; and
    when the difference is greater than a brightness threshold, determining the first exposure parameter according to the difference.
  13. The method according to claim 12, further comprising:
    taking the first exposure parameter as the current exposure parameter and repeating the above steps until the difference is smaller than or equal to the brightness threshold; and
    locking the current exposure parameter as the final exposure parameter controlling the automatic exposure of the depth sensor.
  14. The control method according to any one of claims 1 to 13, wherein
    the depth sensor comprises one or more of a binocular camera, a monocular camera, an RGB camera, a TOF camera, and an RGB-D camera.
  15. The method according to any one of claims 1 to 14, wherein
    the exposure parameter comprises at least one of exposure time, exposure gain, and aperture value.
  16. An exposure control device, comprising a memory and a processor, wherein
    the memory is configured to store program instructions; and
    the processor invokes the program instructions and, when the program instructions are executed, is configured to:
    obtain an image output by a depth sensor according to a current exposure parameter;
    determine an image of a target object from the image; and
    determine a first exposure parameter according to the brightness of the image of the target object, wherein the first exposure parameter is used to control the next automatic exposure of the depth sensor.
  17. The device according to claim 16, wherein
    the processor is further configured to obtain a depth image corresponding to the image; and
    when determining the image of the target object from the image, the processor is specifically configured to:
    determine the image of the target object from the image according to the depth image.
  18. The device according to claim 17, wherein
    when determining the image of the target object from the image according to the depth image, the processor is specifically configured to:
    determine a first target region of the target object in the image according to the depth image; and
    determine the image of the target object from the image according to the first target region.
  19. The device according to claim 18, wherein
    when determining the first target region of the target object in the image according to the depth image, the processor is specifically configured to:
    determine a second target region of the target object in the depth image; and
    determine the first target region of the target object in the image according to the second target region.
  20. The device according to claim 19, wherein
    when determining the second target region of the target object in the depth image, the processor is specifically configured to:
    determine connected regions in the depth image; and
    determine a connected region that satisfies a preset requirement as the second target region of the target object in the depth image.
  21. The device according to claim 20, wherein
    when determining whether the connected regions satisfy the preset requirement, the processor is specifically configured to:
    determine the average depth of each of the connected regions; and
    determine a connected region whose pixel count is greater than or equal to the pixel-count threshold corresponding to the average depth as the second target region of the target object in the depth image.
  22. The device according to claim 21, wherein
    when determining a connected region whose pixel count is greater than or equal to the pixel-count threshold corresponding to the average depth as the second target region of the target object in the depth image, the processor is specifically configured to:
    determine the connected region whose pixel count is greater than or equal to the pixel threshold corresponding to the average depth and whose average depth is the smallest as the second target region of the target object in the depth image.
  23. The device according to any one of claims 17 to 22, wherein
    when obtaining the image output by the depth sensor, the processor is specifically configured to:
    obtain at least two frames of images output by the depth sensor; and
    when obtaining the depth image corresponding to the image, the processor is specifically configured to:
    obtain the depth image from the at least two frames of images.
  24. The device according to claim 23, wherein
    when determining the image in the target region from the image according to the depth image, the processor is specifically configured to:
    determine the image of the target object from one of the at least two frames of images according to the depth image.
  25. The device according to any one of claims 16 to 24, wherein
    when determining the first exposure parameter according to the brightness of the target image, the processor is specifically configured to:
    determine the average brightness of the image of the target object; and
    determine the first exposure parameter according to the average brightness.
  26. The device according to claim 25, wherein
    when determining the first exposure parameter according to the average brightness, the processor is specifically configured to:
    determine the first exposure parameter according to the average brightness and a preset brightness.
  27. The device according to claim 26, wherein
    when determining the first exposure parameter according to the average brightness and the preset brightness, the processor is specifically configured to:
    determine the difference between the average brightness and the preset brightness; and
    when the difference is greater than a brightness threshold, determine the first exposure parameter according to the difference.
  28. The device according to claim 27, wherein
    the processor is further configured to:
    take the first exposure parameter as the current exposure parameter and repeat the above steps until the difference is smaller than or equal to the brightness threshold; and
    lock the current exposure parameter as the final exposure parameter controlling the automatic exposure of the depth sensor.
  29. The device according to any one of claims 16 to 28, wherein
    the depth sensor comprises one or more of a binocular camera, a monocular camera, an RGB camera, a TOF camera, and an RGB-D camera.
  30. The device according to any one of claims 16 to 29, wherein
    the exposure parameter comprises at least one of exposure time, exposure gain, and aperture value.
  31. An unmanned aerial vehicle, comprising the exposure control device according to any one of claims 16 to 30.
PCT/CN2017/099069 2017-08-25 2017-08-25 Exposure control method and device, and unmanned aerial vehicle WO2019037088A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780004476.4A CN108401457A (zh) 2017-08-25 2017-08-25 Exposure control method and device, and unmanned aerial vehicle
PCT/CN2017/099069 WO2019037088A1 (zh) 2017-08-25 2017-08-25 Exposure control method and device, and unmanned aerial vehicle
US16/748,973 US20200162655A1 (en) 2017-08-25 2020-01-22 Exposure control method and device, and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/099069 WO2019037088A1 (zh) 2017-08-25 2017-08-25 Exposure control method and device, and unmanned aerial vehicle

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/748,973 Continuation US20200162655A1 (en) 2017-08-25 2020-01-22 Exposure control method and device, and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2019037088A1 true WO2019037088A1 (zh) 2019-02-28

Family

ID=63094897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/099069 WO2019037088A1 (zh) 2017-08-25 2017-08-25 Exposure control method and device, and unmanned aerial vehicle

Country Status (3)

Country Link
US (1) US20200162655A1 (zh)
CN (1) CN108401457A (zh)
WO (1) WO2019037088A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115379128A (zh) * 2022-08-15 2022-11-22 Oppo广东移动通信有限公司 曝光控制方法及装置、计算机可读介质和电子设备

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7418340B2 (ja) * 2018-03-13 2024-01-19 マジック リープ, インコーポレイテッド 機械学習を使用した画像増強深度感知
CN111491108B (zh) * 2019-01-28 2022-12-09 杭州海康威视数字技术股份有限公司 一种曝光参数的调整方法及装置
CN109903324B (zh) * 2019-04-08 2022-04-15 京东方科技集团股份有限公司 一种深度图像获取方法及装置
CN110095998B (zh) * 2019-04-28 2020-09-15 苏州极目机器人科技有限公司 一种自动控制设备的控制方法及装置
CN111713096A (zh) * 2019-06-20 2020-09-25 深圳市大疆创新科技有限公司 增益系数的获取方法和装置
CN110287672A (zh) * 2019-06-27 2019-09-27 深圳市商汤科技有限公司 验证方法及装置、电子设备和存储介质
CN114556048B (zh) * 2019-10-24 2023-09-26 华为技术有限公司 测距方法、测距装置及计算机可读存储介质
CN111084632B (zh) * 2019-12-09 2022-06-03 深圳圣诺医疗设备股份有限公司 基于蒙版的自动曝光控制方法、装置、存储介质和电子设备
CN111083386B (zh) * 2019-12-24 2021-01-22 维沃移动通信有限公司 图像处理方法及电子设备
CN111416936B (zh) * 2020-03-24 2021-09-17 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及存储介质
CN111885311B (zh) * 2020-03-27 2022-01-21 东莞埃科思科技有限公司 红外摄像头曝光调节的方法、装置、电子设备及存储介质
CN111586312B (zh) * 2020-05-14 2022-03-04 Oppo(重庆)智能科技有限公司 自动曝光的控制方法及装置、终端、存储介质
CN112040091B (zh) * 2020-09-01 2023-07-21 先临三维科技股份有限公司 相机增益的调整方法和装置、扫描系统
CN112361990B (zh) * 2020-10-29 2022-06-28 深圳市道通科技股份有限公司 激光图案提取方法、装置、激光测量设备和系统
CN113727030A (zh) * 2020-11-19 2021-11-30 北京京东乾石科技有限公司 获取图像的方法、装置、电子设备和计算机可读介质
WO2022140913A1 (zh) * 2020-12-28 2022-07-07 深圳市大疆创新科技有限公司 Tof测距装置及其控制方法
CN114979498B (zh) * 2021-02-20 2023-06-30 Oppo广东移动通信有限公司 曝光处理方法、装置、电子设备及计算机可读存储介质
CN113038028B (zh) * 2021-03-24 2022-09-23 浙江光珀智能科技有限公司 一种图像生成方法及系统
WO2023077421A1 (zh) * 2021-11-05 2023-05-11 深圳市大疆创新科技有限公司 可移动平台的控制方法、装置、可移动平台及存储介质
CN115334250B (zh) * 2022-08-09 2024-03-08 阿波罗智能技术(北京)有限公司 一种图像处理方法、装置及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103428439A (zh) * 2013-08-22 2013-12-04 浙江宇视科技有限公司 一种成像设备自动曝光控制方法及装置
CN103679743A (zh) * 2012-09-06 2014-03-26 索尼公司 目标跟踪装置和方法,以及照相机
CN103795934A (zh) * 2014-03-03 2014-05-14 联想(北京)有限公司 一种图像处理方法及电子设备
US20150163414A1 (en) * 2013-12-06 2015-06-11 Jarno Nikkanen Robust automatic exposure control using embedded data
CN104853107A (zh) * 2014-02-19 2015-08-19 联想(北京)有限公司 信息处理的方法及电子设备
CN106454090A (zh) * 2016-10-09 2017-02-22 深圳奥比中光科技有限公司 基于深度相机的自动对焦方法及系统

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4259879B2 (ja) * 2001-05-17 2009-04-30 ゼノジェン コーポレイション 身体領域内の標的の深さ、輝度およびサイズを決定するための方法および装置
CN101247480B (zh) * 2008-03-26 2011-11-23 北京中星微电子有限公司 一种基于图像中目标区域的自动曝光方法
CN101247479B (zh) * 2008-03-26 2010-07-07 北京中星微电子有限公司 一种基于图像中目标区域的自动曝光方法
CN101304489B (zh) * 2008-06-20 2010-12-08 北京中星微电子有限公司 一种自动曝光方法及装置
US8224176B1 (en) * 2011-01-10 2012-07-17 Eastman Kodak Company Combined ambient and flash exposure for improved image quality
CN106131449B (zh) * 2016-07-27 2019-11-29 维沃移动通信有限公司 一种拍照方法及移动终端

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679743A (zh) * 2012-09-06 2014-03-26 索尼公司 目标跟踪装置和方法,以及照相机
CN103428439A (zh) * 2013-08-22 2013-12-04 浙江宇视科技有限公司 一种成像设备自动曝光控制方法及装置
US20150163414A1 (en) * 2013-12-06 2015-06-11 Jarno Nikkanen Robust automatic exposure control using embedded data
CN104853107A (zh) * 2014-02-19 2015-08-19 联想(北京)有限公司 信息处理的方法及电子设备
CN103795934A (zh) * 2014-03-03 2014-05-14 联想(北京)有限公司 一种图像处理方法及电子设备
CN106454090A (zh) * 2016-10-09 2017-02-22 深圳奥比中光科技有限公司 基于深度相机的自动对焦方法及系统

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115379128A (zh) * 2022-08-15 2022-11-22 Oppo广东移动通信有限公司 曝光控制方法及装置、计算机可读介质和电子设备

Also Published As

Publication number Publication date
CN108401457A (zh) 2018-08-14
US20200162655A1 (en) 2020-05-21

Similar Documents

Publication Publication Date Title
WO2019037088A1 (zh) Exposure control method and device, and unmanned aerial vehicle
US10997696B2 (en) Image processing method, apparatus and device
EP3422699B1 (en) Camera module and control method
US11461910B2 (en) Electronic device for blurring image obtained by combining plural images based on depth information and method for driving the electronic device
EP3198852B1 (en) Image processing apparatus and control method thereof
KR102143456B1 (ko) 심도 정보 취득 방법 및 장치, 그리고 이미지 수집 디바이스
WO2020237565A1 (zh) 一种目标追踪方法、装置、可移动平台及存储介质
WO2020038255A1 (en) Image processing method, electronic apparatus, and computer-readable storage medium
TWI709110B (zh) 攝像頭校準方法和裝置、電子設備
US11258962B2 (en) Electronic device, method, and computer-readable medium for providing bokeh effect in video
KR20160038460A (ko) 전자 장치와, 그의 제어 방법
CN113301320B (zh) 图像信息处理方法、装置和电子设备
EP4297395A1 (en) Photographing exposure method and apparatus for self-walking device
WO2018219274A1 (zh) 降噪处理方法、装置、存储介质及终端
US10306198B2 (en) Method and electronic device for detecting wavelength spectrum of incident light
WO2022109855A1 (en) Foldable electronic device for multi-view image capture
US9906724B2 (en) Method and device for setting a focus of a camera
US11166005B2 (en) Three-dimensional information acquisition system using pitching practice, and method for calculating camera parameters
WO2015141185A1 (ja) 撮像制御装置、撮像制御方法および記録媒体
US11032462B2 (en) Method for adjusting focus based on spread-level of display object and electronic device supporting the same
CN114005026A (zh) 机器人的图像识别方法、装置、电子设备及存储介质
WO2019134513A1 (zh) 拍照对焦方法、装置、存储介质及电子设备
CN114500870B (zh) 图像处理方法、装置及电子设备
JP2019129469A (ja) 画像処理装置
CN117710467B (zh) 无人机定位方法、设备及飞行器

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17922089

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17922089

Country of ref document: EP

Kind code of ref document: A1