WO2021097848A1 - Image processing method, image acquisition device, movable platform and storage medium - Google Patents


Info

Publication number
WO2021097848A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
acquisition device
image acquisition
brightness
target
Prior art date
Application number
PCT/CN2019/120416
Other languages
English (en)
French (fr)
Inventor
李号
周游
王小明
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980040207.2A priority Critical patent/CN112335228A/zh
Priority to PCT/CN2019/120416 priority patent/WO2021097848A1/zh
Publication of WO2021097848A1 publication Critical patent/WO2021097848A1/zh

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 — Circuitry for compensating brightness variation in the scene
    • H04N23/73 — Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 — Circuitry for compensating brightness variation in the scene
    • H04N23/71 — Circuitry for evaluating the brightness variation

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method, an image acquisition device, a movable platform and a storage medium.
  • The exposure parameters of an image acquisition device have an important influence on the images it collects.
  • An exposure control algorithm is generally used to automatically adjust the exposure parameters of the image acquisition device.
  • When the image acquisition device is blocked, the exposure control algorithm adjusts its exposure parameters to their maximum values.
  • When the image acquisition device then changes from blocked to unblocked, it takes a while to recover from overexposure to normal exposure. During this overexposed period the image quality is poor, which degrades the results of computer vision computations based on the images.
  • the present application provides an image processing method, an image acquisition device, a movable platform, and a storage medium, aiming to improve image quality.
  • This application provides an image processing method applied to an image acquisition device, the image acquisition device comprising a first image acquisition device and a second image acquisition device, the first image acquisition device and the second image acquisition device having different lighting directions. The method includes:
  • acquiring an image collected by the first image acquisition device, and determining the current ambient light brightness according to the image;
  • acquiring the image target brightness corresponding to the second image acquisition device, and determining the target exposure parameter corresponding to the second image acquisition device according to the current ambient light brightness and the image target brightness;
  • controlling the second image acquisition device to generate, based on the target exposure parameter, a target image having the image target brightness.
  • The present application also provides an image acquisition device comprising a first image acquisition device and a second image acquisition device, the first image acquisition device and the second image acquisition device having different lighting directions; the image acquisition device also includes a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • the second image acquisition device is controlled to generate, based on the target exposure parameter, a target image having the image target brightness.
  • the present application also provides a movable platform, the movable platform includes an image acquisition device, the image acquisition device includes a first image acquisition device and a second image acquisition device, the first image acquisition device Different from the lighting direction of the second image acquisition device;
  • the first image acquisition device is used to acquire an image, and determine the current ambient light brightness according to the image
  • the second image capture device is used to obtain the corresponding image target brightness and the current ambient light brightness, and to determine the target exposure parameter corresponding to the second image capture device according to the current ambient light brightness and the image target brightness;
  • the second image acquisition device is further configured to generate, based on the target exposure parameter, a target image having the image target brightness.
  • The present application also provides a movable platform that includes an image acquisition device and a processor, the image acquisition device including a first image acquisition device and a second image acquisition device; the image acquisition device is used to communicate with the processor, and the first image acquisition device and the second image acquisition device have different lighting directions; the processor is used to execute the image processing method described above.
  • This application also provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the processor implements the steps of the image processing method described above.
  • the embodiments of the application provide an image processing method, an image acquisition device, a movable platform, and a storage medium.
  • The ambient brightness is determined from the image acquired by the image acquisition device with better lighting, and the image acquisition device with poor lighting determines the target exposure parameters based on the image target brightness and the ambient brightness and generates the corresponding image based on those parameters, so that the generated image is normally exposed, which effectively improves the image quality and ensures the quality of image-based computer vision computation.
  • FIG. 1 is a schematic diagram of a scene in which a mobile platform uses an image acquisition device in an embodiment of the present application
  • FIG. 2 is a schematic flowchart of steps of an image processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of sub-steps of the image processing method in FIG. 2;
  • FIG. 4 is a schematic diagram of a curve of brightness measurement values of an image acquisition device that has not been calibrated for equivalent sensitivity in an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a curve of the brightness of the integrating sphere in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a curve of the brightness measurement value of the rear right visual sensor in the embodiment of the present application.
  • FIG. 7 is a schematic diagram of a curve of the brightness measurement value of the right visual sensor in the embodiment of the present application.
  • FIG. 8 is a schematic flowchart of steps of another image processing method provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the first image area and the second image area in an embodiment of the present application.
  • FIG. 10 is a schematic block diagram of an image acquisition device provided by an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • FIG. 12 is another schematic block diagram of a movable platform provided by an embodiment of the present application.
  • FIG. 1 shows an application scenario. Referring to FIG. 1, FIG. 1 is a schematic diagram of a scenario in which a movable platform uses an image acquisition device in an embodiment of the present application.
  • the movable platform 100 includes a platform body 101, a first image acquisition device 102, and a second image acquisition device 103.
  • The first image acquisition device 102 can be installed on at least one of the front, left, right, or rear area of the platform body 101 to capture images of at least one of the front, left, right, or rear of the movable platform, and the second image acquisition device 103 may be installed on the bottom area of the platform body 101 to capture images directly below the movable platform 100.
  • When the second image acquisition device 103 is blocked, the light in its lighting direction becomes too dark, while the first image acquisition device 102 is not blocked and can be used normally.
  • the first image acquisition device can assist the second image acquisition device to acquire images.
  • The movable platform 100 may be an unmanned aerial vehicle, a handheld gimbal, an electric vehicle, an unmanned vehicle, a gimbal vehicle, or the like.
  • the unmanned aerial vehicle may have one or more propulsion units to allow the unmanned aerial vehicle to move in the air.
  • The one or more propulsion units can enable the unmanned aerial vehicle to move with one or more, two or more, three or more, four or more, five or more, or six or more degrees of freedom.
  • the unmanned aerial vehicle can rotate around one, two, three, or more rotation axes.
  • the rotation axes can be perpendicular to each other.
  • the rotation axis can be maintained perpendicular to each other during the entire flight of the unmanned aerial vehicle.
  • the rotation axis may include a pitch axis, a roll axis, and/or a yaw axis.
  • the unmanned aerial vehicle can move in one or more dimensions.
  • an unmanned aerial vehicle can move upward due to the lifting force generated by one or more rotors.
  • the unmanned aerial vehicle can move along the Z axis (which can be upward relative to the direction of the unmanned aerial vehicle), the X axis and/or the Y axis (which can be laterally).
  • the unmanned aerial vehicle can move along one, two or three axes that are perpendicular to each other.
  • the unmanned aerial vehicle may be a rotorcraft.
  • the unmanned aerial vehicle may be a multi-rotor aerial vehicle that may include multiple rotors. Multiple rotors can rotate to generate lift for the unmanned aerial vehicle.
  • the rotor can be a propulsion unit that allows the unmanned aerial vehicle to move freely in the air.
  • the rotor can rotate at the same rate and/or can generate the same amount of lift or thrust.
  • The rotors can rotate at different rates, generate different amounts of lift or thrust, and/or allow the unmanned aerial vehicle to rotate.
  • one, two, three, four, five, six, seven, eight, nine, ten or more rotors can be provided on the unmanned aerial vehicle.
  • These rotors can be arranged such that their rotation axes are parallel to each other.
  • the rotation axis of the rotors can be at any angle relative to each other, which can affect the movement of the UAV.
  • the unmanned aerial vehicle may have multiple rotors.
  • the rotor may be connected to the main body of the unmanned aerial vehicle, and the main body may include a control unit, an inertial measurement unit (IMU), a processor, a battery, a power supply, and/or other sensors.
  • the rotor may be connected to the body by one or more arms or extensions branching from the central part of the body.
  • one or more arms may extend radially from the central body of the UAV, and may have rotors at or near the end of the arms.
  • the mobile platform 100 may communicate with a terminal device or a server, where the terminal device may be a smart phone, a tablet computer, or a control terminal.
  • the first image acquisition device may be a vision sensor, such as a camera or other shooting device.
  • the photographing device may be a monocular camera (Monocular), a binocular camera (Stereo) or a depth camera (RGB-D).
  • the second image acquisition device may also be a vision sensor, such as a camera or other shooting device.
  • the photographing device may be a monocular camera (Monocular), a binocular camera (Stereo) or a depth camera (RGB-D).
  • the first image acquisition device and the second image acquisition device may be the same or different.
  • The first image acquisition device and the second image acquisition device can be set at different positions on the movable platform 100, and their lighting directions can differ, so that the intensity of the light collected by each device can be different.
  • The first image acquisition device and the second image acquisition device can communicate with the terminal device or the server, so that the terminal device or the server can acquire the images collected by the first image acquisition device and the second image acquisition device.
  • FIG. 2 is a schematic flowchart of steps of an image processing method provided by an embodiment of the present application.
  • the image processing method can be applied to an image acquisition device for acquiring images.
  • the image acquisition device includes a first image acquisition device and a second image acquisition device, and the first image acquisition device and the second image acquisition device have different lighting directions.
  • the image processing method includes step S101 to step S103.
  • S101 Acquire an image acquired by the first image acquisition device, and determine the current ambient light brightness according to the image.
  • Because the lighting directions differ, the ambient light brightness acquired by the first image acquisition device and the ambient light brightness acquired by the second image acquisition device are also different.
  • The ambient light brightness acquired by the second image acquisition device is obtained, and it is determined whether this ambient light brightness is less than or equal to a preset threshold; if it is, the image acquired by the first image acquisition device is obtained, and the current ambient light brightness is determined according to that image.
  • The preset threshold may be set based on actual conditions, which is not specifically limited in this application. The illumination in the lighting direction of the second image acquisition device is usually too dark because the second image acquisition device is blocked.
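The fallback logic described above can be sketched in a few lines of Python (an illustrative sketch only — the function name, argument order, and threshold value are assumptions, not taken from the application):

```python
# Illustrative sketch of the source-selection logic: when the ambient light
# brightness seen by the second image acquisition device is at or below a
# preset threshold (e.g. the device is blocked), fall back to the brightness
# determined from the first image acquisition device's image.

PRESET_THRESHOLD = 10.0  # assumed value; "set based on actual conditions"

def select_ambient_brightness(second_device_brightness: float,
                              first_device_brightness: float) -> float:
    """Return the ambient light brightness to use for exposure control."""
    if second_device_brightness <= PRESET_THRESHOLD:
        # Second device's lighting direction is too dark: trust the first device.
        return first_device_brightness
    return second_device_brightness
```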
  • step S101 includes sub-steps S1011 and S1012.
  • the exposure parameter is acquired through an automatic exposure program, the image is acquired according to the acquired exposure parameter, and the exposure parameter of the acquired image is recorded.
  • the exposure parameter includes at least one of exposure duration, aperture value, sensitivity value, and exposure gain.
  • The exposure duration refers to the length of time for which the image sensor is exposed.
  • The brightness value of each pixel in the image is acquired, and the brightness value of each pixel is processed according to a preset inverse response curve corresponding to the first image acquisition device; the number of pixels in the image is counted, and the target brightness value of the image is determined according to the number of pixels and the processed brightness value of each pixel.
  • The preset inverse response curve is determined by calibrating the inverse response curve of the first image acquisition device.
  • the calibration method of the inverse response curve of the first image acquisition device is specifically: using an integrating sphere or a uniform light board to calibrate the inverse response curve of the first image acquisition device.
  • the brightness of the central area of the image is uniform.
  • the inverse response curve of the first image acquisition device can be obtained by polynomial function fitting. It is understandable that in the same way, the inverse response curve corresponding to the second image acquisition device can be calibrated, and the specific calibration process refers to the calibration process of the inverse response curve of the first image acquisition device, which will not be repeated here.
  • The target brightness value of the image can be expressed as: B̄ = (1/N) · Σ f⁻¹(B_i), where N is the number of pixels in the image, B_i is the brightness value of the i-th pixel, and f⁻¹ is the preset inverse response curve.
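The averaging step above can be illustrated with a short Python sketch (hypothetical: the polynomial inverse response curve and its coefficients are stand-ins for a real calibration):

```python
# Sketch: apply the calibrated inverse response curve to every pixel's
# brightness value, then divide the summed result by the pixel count to get
# the target brightness value of the image.
import numpy as np

def inverse_response(pixel_values: np.ndarray, poly_coeffs) -> np.ndarray:
    # The inverse response curve is modeled here as a fitted polynomial
    # (coefficients would come from integrating-sphere calibration).
    return np.polyval(poly_coeffs, pixel_values)

def target_brightness(image: np.ndarray, poly_coeffs) -> float:
    processed = inverse_response(image.astype(np.float64), poly_coeffs)
    return float(processed.sum() / processed.size)  # sum / number of pixels
```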
  • The first preset equivalent sensitivity is determined according to the equivalent sensitivity calibration of the first image acquisition device, and the equivalent sensitivity is calibrated with a uniform light board. The calibration method of the equivalent sensitivity of the first image acquisition device is specifically: at high and low color temperatures, the illuminance of the uniform light board is calibrated with an illuminance meter as BL_measure,l and BL_measure,h respectively. The relationship between the equivalent sensitivity, BL_measure,l and BL_measure,h is shown in the following formulas: k′_l = BL_measure,l / (G · T · L_l) and k′_h = BL_measure,h / (G · T · L_h), where:
  • k1′ is the equivalent sensitivity of the first image acquisition device
  • k′_l is the equivalent sensitivity at low color temperature
  • k′_h is the equivalent sensitivity at high color temperature
  • G is the exposure gain
  • T is the exposure duration
  • L_l is the ambient light brightness at low color temperature
  • L_h is the ambient light brightness at high color temperature
  • During calibration, the host computer reads the value of k1′ through the SDK; with k1′ provisionally set to 1, the uniform light board is set to high illuminance and then to low illuminance, the photometry results are obtained and read through the SDK, the value of k1′ is calculated according to the above formulas, and the calculated value of k1′ is written into the calibration file to obtain the equivalent sensitivity of the first image acquisition device.
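The two-point procedure can be sketched as follows (illustrative only: with k1′ provisionally set to 1, each photometry result equals B / (G · T), so the single-point sensitivity is the device reading divided by the illuminance-meter reference; the way the two color-temperature points are combined into k1′ is an assumption here, with a simple average as one plausible choice):

```python
# Two-point equivalent-sensitivity calibration sketch: each single-point
# sensitivity is the device's photometry reading divided by the reference
# brightness from the illuminance meter at that color temperature.
def calibrate_k1(meas_low: float, ref_low: float,
                 meas_high: float, ref_high: float) -> float:
    k_low = meas_low / ref_low     # low color-temperature point
    k_high = meas_high / ref_high  # high color-temperature point
    return (k_low + k_high) / 2.0  # assumed combination: simple average
```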
  • the illuminance meter is used to measure the degree to which an object is illuminated, that is, the ratio of the luminous flux obtained on the surface of the object to the illuminated area.
  • the illuminance meter is composed of a selenium photovoltaic cell or a silicon photovoltaic cell with a filter and a microammeter.
  • the current ambient light brightness can be calculated through the exposure equation and the exposure parameters, the target brightness value and the equivalent sensitivity of the first image acquisition device.
  • The calculation formula of the ambient light brightness is: L = B̄ / (G · T · k1′), where B̄ is the target brightness value of the image, G is the exposure gain, T is the exposure duration, and k1′ is the equivalent sensitivity of the first image acquisition device.
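As a concrete illustration of this exposure equation (a sketch with made-up numbers, not calibration data):

```python
# L = B / (G * T * k1'): ambient light brightness from the image's target
# brightness value B, exposure gain G, exposure duration T, and the first
# device's calibrated equivalent sensitivity k1'.
def ambient_light_brightness(B: float, G: float, T: float, k1: float) -> float:
    return B / (G * T * k1)
```

For example, a target brightness value of 120 with G = 2, T = 0.5, and k1′ = 0.5 gives an ambient brightness of 240 in the same device-relative units.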
  • the image target brightness is the image brightness set in advance.
  • the second image acquisition device may generate suitable target exposure parameters based on the current ambient light brightness and the image target brightness to generate a suitable image.
  • the mobile platform can perform at least one of positioning, obstacle avoidance, hovering and other algorithms based on the image. It is understandable that the target brightness of the image can be set based on actual conditions, which is not specifically limited in this application.
  • the equivalent ambient light brightness is determined according to the second preset equivalent sensitivity and the current ambient light brightness; the target exposure parameter corresponding to the second image acquisition device is determined based on the image target brightness and the equivalent ambient light brightness.
  • The second preset equivalent sensitivity is determined according to the equivalent sensitivity calibration of the second image acquisition device; the calibration process for the equivalent sensitivity of the second image acquisition device may refer to that of the first image acquisition device and will not be repeated here.
  • Exemplarily, if the equivalent sensitivity of the second image acquisition device is k2′ and the current ambient light brightness is L̂, then the equivalent ambient light brightness is k2′ · L̂. Assuming that the image target brightness is L′, the target exposure parameter corresponding to the second image acquisition device is: G_t · T_t = L′ / (k2′ · L̂) = (L′ · k1′ · G · T) / (k2′ · B̄), where G and T are the exposure parameters used when the first image acquisition device acquired the image.
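Under the assumption that the same exposure relation B = k′ · G · T · L holds for both devices, the target-exposure computation can be sketched as follows (names are illustrative, not from the application):

```python
# Given the current ambient brightness L_hat (measured via the first device),
# the second device's equivalent sensitivity k2', and the desired image
# target brightness L_prime, the required gain-time product follows from
# L_prime = k2' * (G*T)_target * L_hat.
def target_exposure_product(L_prime: float, k2: float, L_hat: float) -> float:
    return L_prime / (k2 * L_hat)  # (G * T)_target
```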
  • By calibrating the equivalent sensitivities, the accuracy with which the first image acquisition device and the second image acquisition device measure the ambient light can be improved.
  • The following experimental data can be obtained by performing photometry experiments on the image acquisition devices of the movable platform before and after the equivalent sensitivity is calibrated.
  • These image acquisition devices are the down-view sensor and the front-left, front-right, rear-left, rear-right, left, and right vision sensors.
  • Table 1 is the experimental data table of the brightness of the integrating sphere measured by the image acquisition devices before the equivalent sensitivity is calibrated.
  • Table 2 lists the calibrated equivalent sensitivity values of the image acquisition devices.
  • Table 3 is the experimental data table of the brightness of the integrating sphere measured by the image acquisition devices after the equivalent sensitivity is calibrated.
  • From the brightness measurement values in Table 1, the graph shown in FIG. 4 can be obtained, and the brightness of the integrating sphere can be plotted as shown in FIG. 5.
  • The curve shown in FIG. 6 can be obtained from the brightness measurement values of the rear-right vision sensor in Table 3.
  • The curve shown in FIG. 7 can be obtained from the brightness measurement values of the right vision sensor in Table 3.
  • The second image acquisition device is controlled to generate the target image based on the target exposure parameter; that is, the target exposure parameter is sent to the second image acquisition device, and the second image acquisition device generates the target image based on the target exposure parameter.
  • the brightness of the target image is the target brightness of the image.
  • the target image is sent to the controller, so that the controller executes a corresponding algorithm based on the target image.
  • the executed algorithm includes at least one of a positioning algorithm, an obstacle avoidance algorithm, a hovering algorithm, an exposure control algorithm, and an aerial photography algorithm. Based on the target image with normal exposure, executing the corresponding algorithm can improve the calculation effect of the algorithm.
  • the environmental brightness is determined by the image collected by the image acquisition device with better lighting, and the image acquisition device with poor lighting determines the target exposure parameter based on the image target brightness and the environmental brightness, and based on the The target exposure parameter generates a corresponding image, so that the exposure of the generated image is normal, which effectively improves the image quality, and can ensure the calculation effect of computer vision based on the image.
  • FIG. 8 is a schematic flowchart of the steps of another image processing method provided by an embodiment of the present application.
  • the image processing method includes steps S201 to S203.
  • S201 Acquire an image acquired by the first image acquisition device
  • Because the lighting directions differ, the ambient light brightness acquired by the first image acquisition device and the ambient light brightness acquired by the second image acquisition device are also different.
  • the image includes a first image area and a second image area.
  • The brightness of the first image area is significantly lower than the brightness of the second image area; the first image area is an area with poor light, and the second image area is an area with good light.
  • the first image area includes a non-sky area
  • the second image area includes a sky area.
  • FIG. 9 is a schematic diagram of the first image area and the second image area in an embodiment of the present application. As shown in FIG. 9, area A is the first image area, and area B is the second image area.
  • The first image area in the image is extracted, that is, the non-sky area in the image is extracted, and the current ambient light brightness is determined according to the first image area.
  • the image is processed according to the trained region segmentation model to obtain the first image region.
  • the region segmentation model involved in the embodiment of the present application may include, but is not limited to: Convolutional Neural Networks (CNN).
  • The input data of the region segmentation model is an image, and the output result can be the first image region with weaker brightness in the input image. Therefore, it is only necessary to input the image obtained in step S201 into the region segmentation model and extract the first image area from the image output by the model.
  • Alternatively, the neural network model can process the image and output a mask of the first image region with weaker brightness. In this case, it is only necessary to input the image obtained in step S201 into the region segmentation model, obtain the image mask output by the model, and apply the mask to the input image to extract the first image region.
  • any of the mentioned neural network models can be trained on the basic model based on the sample data before the implementation of this solution.
  • The basic model has the same input, output, and internal logic design as the neural network model, but its convolution kernel parameters are preset. Through training on sample data, the convolution kernel parameters of the basic model are continuously adjusted so that the prediction results output by the basic model approach the real results.
  • When the gap between the two is less than the preset threshold, the basic model trained at this point is used as the neural network model, and the region segmentation model is obtained once the model training process is complete. It is understandable that the aforementioned model training process can be completed in advance; when the solution is specifically implemented, there is no need to repeat the model training process, which saves processing steps and time and improves processing efficiency.
  • the image can also be processed through a neural network model to obtain the brightness value of each pixel in the image.
  • the input data of the neural network model is an image
  • the output data is the brightness value of each pixel in the image.
  • The brightness value of each pixel can be compared with a preset brightness threshold, so that pixels with brightness higher than the preset brightness threshold are taken as pixels of the second image area and pixels with brightness lower than the preset brightness threshold are taken as pixels of the first image area, thereby extracting the first image area from the image.
  • multiple brightness thresholds may be designed to form a brightness interval, and the first image area and the second image area may be determined according to the brightness interval.
  • After comparing the brightness value of each pixel with the preset brightness threshold, pixels with higher brightness and pixels with lower brightness are obtained. When further determining the first image area and the second image area, the pixels whose brightness difference from adjacent pixels is greater than a preset difference threshold can be identified, and these pixels can be used to form the dividing line between the first image area and the second image area.
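The per-pixel thresholding described above can be sketched with a boolean mask (illustrative; the threshold value of 200 is an assumption):

```python
# Split an image's brightness map into the second (brighter, e.g. sky) area
# and the first (darker, e.g. non-sky) area by a preset brightness threshold.
import numpy as np

def split_areas(brightness: np.ndarray, threshold: float = 200.0):
    second_area_mask = brightness > threshold   # brighter pixels
    first_area_mask = ~second_area_mask         # darker pixels
    return first_area_mask, second_area_mask
```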
  • The brightness value of each pixel in the first image area is processed according to the preset inverse response curve corresponding to the first image acquisition device; the number of pixels in the first image area is counted; the target brightness value of the image is determined according to the number of pixels and each processed brightness value; and the current ambient light brightness is determined according to the exposure parameter, the target brightness value, and the first preset equivalent sensitivity.
  • the specific process of processing the brightness value of the pixel point can refer to the above-mentioned embodiment, which will not be repeated here.
  • S203 Acquire the image target brightness corresponding to the second image acquisition device, and determine the target exposure parameter corresponding to the second image acquisition device according to the current ambient light brightness and the image target brightness.
  • the image target brightness is the image brightness set in advance. It is understandable that the image target brightness can be set based on actual conditions. This application does not specifically limit this.
  • the second image acquisition device is controlled to generate the target image based on the target exposure parameter, that is, the target exposure parameter is sent to the second image acquisition device, and the second image acquisition device is based on the target exposure parameter.
  • the target exposure parameters generate the target image.
  • the brightness of the target image is the target brightness of the image.
  • The image processing method provided by the above embodiment extracts the non-sky area in the image and determines the current ambient light brightness based on the non-sky area while ignoring the sky area. Since the sky itself has no texture and can be regarded as infinitely far away, ignoring the sky area allows the ambient light brightness to be determined accurately and quickly. The target exposure parameters are then determined based on the ambient light brightness and the image target brightness, and the corresponding image is generated based on the target exposure parameters, so that the generated image is normally exposed, which effectively improves the image quality and ensures the quality of image-based computer vision computation.
  • FIG. 10 is a schematic block diagram of an image acquisition device provided by an embodiment of the present application.
  • The image acquisition apparatus 300 includes a first image acquisition device 301, a second image acquisition device 302, a processor 303, and a memory 304. The first image acquisition device 301, the second image acquisition device 302, the processor 303, and the memory 304 are connected through a bus 305, which is, for example, an I2C (Inter-Integrated Circuit) bus.
  • the first image acquisition device 301 and the second image acquisition device 302 have different lighting directions.
  • the processor 303 may be a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP), etc.
  • The memory 304 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk.
  • The processor 303 is configured to run a computer program stored in the memory 304 and, when executing the computer program, implement the following steps: acquiring the image obtained by the first image acquisition device, and determining the current ambient light brightness according to the image; acquiring the image target brightness corresponding to the second image acquisition device, and determining the target exposure parameter corresponding to the second image acquisition device according to the current ambient light brightness and the image target brightness; and controlling the second image acquisition device to generate, based on the target exposure parameter, a target image of the image target brightness.
  • Further, when determining the current ambient light brightness according to the image, the processor is configured to: acquire the exposure parameter used by the first image acquisition device when capturing the image, and determine the target brightness value of the image; and determine the current ambient light brightness according to the exposure parameter, the target brightness value, and the first preset equivalent sensitivity.
  • Further, when determining the target brightness value of the image, the processor is configured to: acquire the brightness value of each pixel in the image, and process the brightness value of each pixel according to the preset reverse response curve corresponding to the first image acquisition device; and count the number of pixels of the image, and determine the target brightness value of the image according to the number of pixels and the processed brightness values.
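The averaging step described above (map each pixel's brightness through the calibrated reverse response curve, then average over the pixel count) might be sketched as follows; the quadratic `toy_reverse_response` is a stand-in assumption, not the application's actual calibrated curve:

```python
# Hedged sketch: linearize each pixel brightness with the reverse response
# curve f, then average the processed values over the pixel count.

def target_brightness(pixels, f):
    processed = [f(p) for p in pixels]
    return sum(processed) / len(processed)

def toy_reverse_response(v):
    # Assumed polynomial standing in for a calibrated reverse response curve.
    return 0.001 * v ** 2 + v

pixels = [10, 20, 30, 40]
print(target_brightness(pixels, toy_reverse_response))
```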
  • Further, when determining the target exposure parameter corresponding to the second image acquisition device according to the current ambient light brightness and the image target brightness, the processor is configured to: determine an equivalent ambient light brightness according to the second preset equivalent sensitivity and the current ambient light brightness; and determine the target exposure parameter corresponding to the second image acquisition device according to the image target brightness and the equivalent ambient light brightness.
  • Further, the first preset equivalent sensitivity is determined by performing equivalent sensitivity calibration on the first image acquisition device, the second preset equivalent sensitivity is determined by performing equivalent sensitivity calibration on the second image acquisition device, and the preset reverse response curve is determined by performing reverse response curve calibration on the first image acquisition device.
  • Further, the image includes a first image area and a second image area, the first image area includes a non-sky area, and the second image area includes a sky area; when determining the current ambient light brightness according to the image, the processor is configured to:
  • the first image area in the image is extracted, and the current ambient light brightness is determined according to the first image area.
  • Further, when extracting the first image area from the image, the processor is configured to:
  • the image is processed according to the trained region segmentation model to obtain the first image region.
  • the region segmentation model includes: a convolutional neural network model.
  • Further, the brightness of the first image area is less than the brightness of the second image area; when extracting the first image area from the image, the processor is configured to:
  • the brightness value of each pixel in the image is acquired, and the first image area in the image is extracted according to the brightness value of each pixel in the image.
  • Further, when determining the current ambient light brightness according to the first image area, the processor is configured to: process the brightness value of each pixel in the first image area according to the preset reverse response curve corresponding to the first image acquisition device; count the number of pixels of the first image area, and determine the target brightness value of the image according to the number of pixels and the processed brightness values; and determine the current ambient light brightness according to the exposure parameter, the target brightness value, and the first preset equivalent sensitivity.
  • Further, before acquiring the image obtained by the first image acquisition device and determining the current ambient light brightness according to the image, the processor is further configured to: acquire the ambient light brightness obtained by the second image acquisition device, and determine whether the ambient light brightness is less than or equal to a preset threshold; and if the ambient light brightness is less than or equal to the preset threshold, acquire the image obtained by the first image acquisition device and determine the current ambient light brightness according to the image.
  • Optionally, after controlling the second image acquisition device to generate, based on the target exposure parameter, the target image of the image target brightness, the processor is further configured to: execute a corresponding algorithm based on the target image.
  • Further, when executing a corresponding algorithm based on the target image, the processor is configured to:
  • At least one of the following algorithms is executed: positioning algorithm, obstacle avoidance algorithm, hovering algorithm, exposure control algorithm, and aerial photography algorithm.
  • FIG. 11 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • the movable platform 400 includes a platform main body 401 and an image capture device 402.
  • the image capture device 402 includes a first image capture device 4021 and a second image capture device 4022.
  • The lighting directions of the first image acquisition device 4021 and the second image acquisition device 4022 are different;
  • the first image acquisition device 4021 is used to acquire an image, and determine the current ambient light brightness according to the image;
  • the second image acquisition device 4022 is used to acquire the corresponding image target brightness and the current ambient light brightness, and determine the target exposure parameter corresponding to the second image acquisition device according to the current ambient light brightness and the image target brightness;
  • the second image acquisition device 4022 is further configured to generate a target image of the image target brightness based on the target exposure parameter.
  • Further, the first image acquisition device 4021 is further configured to: acquire the exposure parameter used when capturing the image, and determine the target brightness value of the image; and determine the current ambient light brightness according to the exposure parameter, the target brightness value, and the first preset equivalent sensitivity.
  • Further, the first image acquisition device 4021 is further configured to: acquire the brightness value of each pixel in the image, and process the brightness value of each pixel according to the preset reverse response curve corresponding to the first image acquisition device; and count the number of pixels of the image, and determine the target brightness value of the image according to the number of pixels and the processed brightness values.
  • Further, the second image acquisition device 4022 is further configured to: determine an equivalent ambient light brightness according to the second preset equivalent sensitivity and the current ambient light brightness; and determine the target exposure parameter corresponding to the second image acquisition device according to the image target brightness and the equivalent ambient light brightness.
  • Further, the first preset equivalent sensitivity is determined by performing equivalent sensitivity calibration on the first image acquisition device 4021, the second preset equivalent sensitivity is determined by performing equivalent sensitivity calibration on the second image acquisition device 4022, and the preset reverse response curve is determined by performing reverse response curve calibration on the first image acquisition device 4021.
  • the image includes a first image area and a second image area, the first image area includes a non-sky area, and the second image area includes a sky area; the first image acquisition device 4021 is further configured to:
  • the first image area in the image is extracted, and the current ambient light brightness is determined according to the first image area.
  • the first image acquisition device 4021 is also used for:
  • the image is processed according to the trained region segmentation model to obtain the first image region.
  • the region segmentation model includes: a convolutional neural network model.
  • the brightness of the first image area is less than the brightness of the second image area; the first image acquisition device 4021 is also used for:
  • the brightness value of each pixel in the image is acquired, and the first image area in the image is extracted according to the brightness value of each pixel in the image.
  • Further, the first image acquisition device 4021 is further configured to: process the brightness value of each pixel in the first image area according to the corresponding preset reverse response curve; count the number of pixels of the first image area, and determine the target brightness value of the image according to the number of pixels and the processed brightness values; and determine the current ambient light brightness according to the exposure parameter, the target brightness value, and the first preset equivalent sensitivity.
  • Further, the first image acquisition device 4021 is further configured to: acquire the ambient light brightness obtained by the second image acquisition device, and determine whether the ambient light brightness is less than or equal to a preset threshold; and if the ambient light brightness is less than or equal to the preset threshold, acquire an image and determine the current ambient light brightness according to the image.
  • Further, the movable platform further includes a processor, and the processor is configured to: acquire the target image generated by the second image acquisition device, and execute a corresponding algorithm according to the target image.
  • Further, when executing a corresponding algorithm according to the target image, the processor is configured to:
  • At least one of the following algorithms is executed: positioning algorithm, obstacle avoidance algorithm, hovering algorithm, exposure control algorithm, and aerial photography algorithm.
  • FIG. 12 is another schematic block diagram of a movable platform provided by an embodiment of the present application.
  • the movable platform 500 includes a processor 501 and an image acquisition device 502.
  • the image acquisition device 502 is configured to communicate with the processor 501.
  • The image acquisition device 502 includes a first image acquisition device 5021 and a second image acquisition device 5022, and the lighting directions of the first image acquisition device 5021 and the second image acquisition device 5022 are different.
  • the processor 501 is configured to execute the image processing method described above.
  • The embodiments of the present application further provide a computer-readable storage medium storing a computer program; the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the image processing method provided in the foregoing embodiments.
  • The computer-readable storage medium may be an internal storage unit of the movable platform described in any of the foregoing embodiments, for example, a hard disk or memory of the movable platform.
  • The computer-readable storage medium may also be an external storage device of the movable platform, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the movable platform.


Abstract

An image processing method, an image acquisition apparatus, a movable platform, and a storage medium. The method includes: acquiring an image obtained by a first image acquisition device, and determining the current ambient light brightness according to the image (S101); acquiring an image target brightness corresponding to a second image acquisition device, and determining, according to the current ambient light brightness and the image target brightness, a target exposure parameter corresponding to the second image acquisition device (S102); and controlling the second image acquisition device to generate, based on the target exposure parameter, a target image of the image target brightness (S103). The method improves image quality.

Description

图像处理方法、图像采集装置、可移动平台及存储介质 技术领域
本申请涉及图像处理技术领域,尤其涉及一种图像处理方法、图像采集装置、可移动平台及存储介质。
背景技术
图像采集装置的曝光参数对图像采集装置采集到的图像有着重要影响,曝光过度时,图像采集装置采集到的图像的亮度太亮,而曝光不足时,图像采集装置采集到的图像的亮度太暗,在图像的亮度太亮或者太暗时,图像中物体的成像不清晰。为了提高成像清晰度,主要通过曝光控制算法控制光圈和快门速度以自动调整图像采集装置的曝光参数。然而,图像采集装置被遮挡时,曝光控制算法会将该图像采集装置的曝光参数调整到最大值,而当该图像采集装置由被遮挡变化为未被遮挡时,需要经过一段时间才能从过曝恢复到正常曝光,在过曝的时间内,图像质量不好,导致以该图像为基础的计算机视觉的计算效果较差。
发明内容
基于此,本申请提供了一种图像处理方法、图像采集装置、可移动平台及存储介质,旨在提高图像质量。
第一方面,本申请提供了一种图像处理方法,应用于图像采集装置,所述图像采集装置包括第一图像采集装置和第二图像采集装置,所述第一图像采集装置与所述第二图像采集装置的采光方向不同,所述方法包括:
获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度;
获取所述第二图像采集装置对应的图像目标亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数;
控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
第二方面,本申请还提供了一种图像采集装置,所述图像采集装置第一图像采集装置和第二图像采集装置,所述第一图像采集装置与所述第二图像采集 装置的采光方向不同,所述图像采集装置还包括存储器和处理器;
所述存储器用于存储计算机程序;
所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如下步骤:
获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度;
获取所述第二图像采集装置对应的图像目标亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数;
控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
第三方面,本申请还提供了一种可移动平台,所述可移动平台包括图像采集装置,所述图像采集装置包括第一图像采集装置和第二图像采集装置,所述第一图像采集装置与所述第二图像采集装置的采光方向不同;
所述第一图像采集装置用于获取图像,并根据所述图像确定当前环境光亮度;
所述第二图像采集装置用于获取对应的图像目标亮度以及所述当前环境光亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数;
所述第二图像采集装置还用于基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
第四方面,本申请还提供了一种可移动平台,所述可移动平台包括图像采集装置和处理器,所述图像采集装置包括第一图像采集装置和第二图像采集装置;其中,所述图像采集装置用于与所述处理器通信连接,所述第一图像采集装置与所述第二图像采集装置的采光方向不同;所述处理器用于执行如上所述的图像处理方法。
第五方面,本申请还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时使所述处理器实现如上所述的图像处理方法的步骤。
本申请实施例提供了一种图像处理方法、图像采集装置、可移动平台及存储介质,通过采光较好的图像采集装置采集到的图像确定环境亮度,并由采光较差的图像采集装置基于图像目标亮度和该环境亮度确定目标曝光参数,且基于该目标曝光参数,生成对应的图像,使得生成的图像的曝光正常,有效的提 高图像质量,可以保证以该图像为基础的计算机视觉的计算效果。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本申请。
附图说明
为了更清楚地说明本申请实施例技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例中可移动平台使用图像采集装置的一场景示意图;
图2是本申请一实施例提供的一种图像处理方法的步骤示意流程图;
图3是图2中的图像处理方法的子步骤示意流程图；
图4是本申请实施例中未标定等效灵敏度的图像采集装置的亮度测量值的一曲线示意图;
图5是本申请实施例中积分球亮度的一曲线示意图;
图6是本申请实施例中后右视觉传感器的亮度测量值的一曲线示意图;
图7是本申请实施例中右视觉传感器的亮度测量值的一曲线示意图;
图8是本申请一实施例提供的另一种图像处理方法的步骤示意流程图;
图9是本申请实施例中第一图像区域与第二图像区域的一示意图;
图10是本申请一实施例提供的图像采集装置的示意性框图;
图11是本申请一实施例提供的可移动平台的一示意性框图;
图12是本申请一实施例提供的可移动平台的另一示意性框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
附图中所示的流程图仅是示例说明,不是必须包括所有的内容和操作/步骤,也不是必须按所描述的顺序执行。例如,有的操作/步骤还可以分解、组合或部分合并,因此实际执行的顺序有可能根据实际情况改变。
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下, 下述的实施例及实施例中的特征可以相互组合。
基于上述问题,本申请实施例提供了一种图像处理方法,图1是一个应用场景,请参照图1,图1是本申请实施例中可移动平台使用图像采集装置的一场景示意图,如图1所示,所述可移动平台100包括平台主体101、第一图像采集装置102和第二图像采集装置103,所述第一图像采集装置102可以安装在平台主体101的前侧区域、左侧区域、右侧区域、或后侧区域中的至少一处,用于拍摄可移动平台前方、左方、右方、或后方的至少一处的图像,第二图像采集装置103可以安装在平台主体101的底部区域,用于拍摄可移动平台101正下方的图像,可移动平台起飞或者降落时,第二图像采集装置103被遮挡,导致第二图像采集装置的采光方向的光照太暗,而第一图像采集装置102未被遮挡,可以正常使用,可以通过第一图像采集装置辅助第二图像采集装置获取图像。可移动平台100可以是无人飞行器、手持云台、电动汽车、无人车、云台车等。无人飞行器可具有一个或多个推进单元,以允许无人飞行器可在空中移动。该一个或多个推进单元可使得无人飞行器以一个或多个、两个或多个、三个或多个、四个或多个、五个或多个、六个或多个自由角度移动。在某些情形下,无人飞行器可以绕一个、两个、三个或多个旋转轴旋转。旋转轴可彼此垂直。旋转轴在无人飞行器的整个飞行过程中可维持彼此垂直。旋转轴可包括俯仰轴、横滚轴和/或偏航轴。无人飞行器可沿一个或多个维度移动。例如,无人飞行器能够因一个或多个旋翼产生的提升力而向上移动。在某些情形下,无人飞行器可沿Z轴(可相对无人飞行器方向向上)、X轴和/或Y轴(可为横向)移动。无人飞行器可沿彼此垂直的一个、两个或三个轴移动。
无人飞行器可以是旋翼飞机。在某些情形下,无人飞行器可以是可包括多个旋翼的多旋翼飞行器。多个旋翼可旋转而为无人飞行器产生提升力。旋翼可以是推进单元,可使得无人飞行器在空中自由移动。旋翼可按相同速率旋转和/或可产生相同量的提升力或推力。旋翼可按不同的速率随意地旋转,产生不同量的提升力或推力和/或允许无人飞行器旋转。在某些情形下,在无人飞行器上可提供一个、两个、三个、四个、五个、六个、七个、八个、九个、十个或更多个旋翼。这些旋翼可布置成其旋转轴彼此平行。在某些情形下,旋翼的旋转轴可相对于彼此呈任意角度,从而可影响无人飞行器的运动。
无人飞行器可具有多个旋翼。旋翼可连接至无人飞行器的本体,本体可包含控制单元、惯性测量单元(inertial measuringunit,IMU)、处理器、电池、电源和/或其他传感器。旋翼可通过从本体中心部分分支出来的一个或多个臂或 延伸而连接至本体。例如,一个或多个臂可从无人飞行器的中心本体放射状延伸出来,而且在臂末端或靠近末端处可具有旋翼。
可移动平台100可以与终端设备或服务器通信,其中该终端设备可以为智能手机、平板电脑或者控制终端等。
第一图像采集装置可以是视觉传感器,例如相机等拍摄装置。该拍摄装置可以是单目相机(Monocular)、双目相机(Stereo)或者深度相机(RGB-D)。第二图像采集装置也可以是视觉传感器,例如相机等拍摄装置。该拍摄装置可以是单目相机(Monocular)、双目相机(Stereo)或者深度相机(RGB-D)。第一图像采集装置与第二图像采集装置可以相同,也可以不同。
第一图像采集装置与第二图像采集装置可以设于可移动平台100的不同位置,第一图像采集装置与第二图像采集装置的采光方向可以不同,从而第一图像采集装置与第二图像采集装置采集的光的光照强度可以是不同的。
第一图像采集装置与第二图像采集装置可以和终端设备或服务器通信,从而终端设备或服务器通信可以获取第一图像采集装置与第二图像采集装置获取的图像。
请参阅图2,图2是本申请一实施例提供的一种图像处理方法的步骤示意流程图。该图像处理方法可以应用在图像采集装置中,用于获取图像。其中,该图像采集装置包括第一图像采集装置和第二图像采集装置,所述第一图像采集装置与所述第二图像采集装置的采光方向不同。
具体地，如图2所示，该图像处理方法包括步骤S101至步骤S103。
S101、获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度。
其中,由于第一图像采集装置与第二图像采集装置的采光方向不同,因此,在第一图像采集装置与第二图像采集装置的采光方向的光照强度不同时,第一图像采集装置获取到的环境光亮度以及第二图像采集装置获取到的环境光亮度也不相同。
在一实施例中,获取第二图像采集装置获取到的环境光亮度,并确定该环境光亮度是否小于或等于预设阈值;若所述环境光亮度小于或等于预设阈值,则获取第一图像采集装置获取到的图像,并根据图像确定当前环境光亮度。可以理解的是,上述预设阈值可基于实际情况进行设置,本申请对此不作具体限定。第二图像采集装置的采光方向的光照太暗,通常是因为第二图像采集装置被遮挡引起的。
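The threshold check in this paragraph can be sketched as below: only when the second (e.g. downward-facing, possibly occluded) camera's ambient reading is at or below a preset threshold does the method fall back to the first camera. The threshold value, units, and function names are illustrative assumptions, not values from the application:

```python
# Hedged sketch of the fallback gate: use the first camera's image for
# ambient-light estimation only when the second camera reads too dark.

PRESET_THRESHOLD = 5.0  # assumed brightness units, not from the application

def should_use_first_camera(second_camera_ambient, threshold=PRESET_THRESHOLD):
    # True when the second camera's light path is too dark (e.g. occluded
    # during take-off or landing).
    return second_camera_ambient <= threshold

print(should_use_first_camera(1.2), should_use_first_camera(80.0))
```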
在一实施例中,如图3所示,步骤S101包括子步骤S1011和S1012。
S1011、获取所述第一图像采集装置采集所述图像时的曝光参数,并确定所述图像的目标亮度值。
其中,第一图像采集装置采集图像时,通过自动曝光程序获取曝光参数,并根据获取到的曝光参数采集图像,且记录采集到的图像的曝光参数。曝光参数包括曝光时长、光圈值、感光度值、曝光增益中的至少一种。其中,曝光时长是指曝光时间的长短。
在一实施例中,获取所述图像中的各像素的亮度值,并根据所述第一图像采集装置对应的预设反响应曲线,对各像素点的所述亮度值进行处理;统计所述图像的像素个数,并根据所述像素个数和经过处理后的各像素点的所述亮度值,确定所述图像的目标亮度值。其中,所述预设反响应曲线根据对所述第一图像采集装置进行反响应曲线标定确定。
其中，第一图像采集装置的反响应曲线的标定方式具体为：使用积分球或者均匀灯板对第一图像采集装置的反响应曲线进行标定，此时，图像的中心区域的亮度是均匀的，设像素的亮度值的测量值为$BL_{\mathrm{measure}}$，积分球或者均匀灯板的亮度测量值为$L_{\mathrm{measure}}$，则$BL_{\mathrm{measure}}$与$L_{\mathrm{measure}}$的关系如下公式所示：

$$f(BL_{\mathrm{measure}}) = k_1' \cdot G \cdot T \cdot L_{\mathrm{measure}}$$

其中，$k_1'$为第一图像采集装置的等效灵敏度，$G$为曝光增益，$T$为曝光时间，$f(\cdot)$为反响应曲线。令$k_1'=1$，在曝光增益$G$、曝光时间$T$、$L_{\mathrm{measure}}$和$BL_{\mathrm{measure}}$已知的情况下，可以通过多项式函数拟合得到第一图像采集装置的反响应曲线$f(\cdot)$。
可以理解的是,按照同样的方式,可以标定得到第二图像采集装置对应的反响应曲线,其具体标定过程参照第一图像采集装置的反响应曲线的标定过程,此处不做赘述。
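The calibration described above fixes k1' = 1 and fits the reverse response curve as a polynomial over known pairs of measured pixel brightness and G·T·L_measure. A dependency-free sketch with an assumed degree-1 fit and made-up sample data (the application does not specify the polynomial degree):

```python
# Hedged sketch of fitting a degree-1 reverse response curve by closed-form
# least squares: y = a*x + b, where x is the measured pixel brightness and
# y = G * T * L_measure (with k1' fixed to 1).

def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    a = num / den
    return a, mean_y - a * mean_x

G, T = 2.0, 0.01                            # exposure gain and time
bl_measure = [50, 100, 150, 200]            # measured pixel brightness
l_measure = [1250, 2500, 3750, 5000]        # lamp brightness (made up)
ys = [G * T * l for l in l_measure]
a, b = fit_linear(bl_measure, ys)
print(a, b)
```

A real calibration would likely use a higher-degree polynomial fitted over many more samples.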
示例性的，设像素个数为$N$，像素的亮度值为$BL_p$，第一图像采集装置的反响应曲线为$f(\cdot)$，则图像的目标亮度值可以表示为：

$$\overline{BL} = \frac{1}{N}\sum_{p=1}^{N} f(BL_p)$$
S1012、根据所述曝光参数、所述目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
其中，第一预设等效灵敏度根据对第一图像采集装置进行等效灵敏度标定确定，等效灵敏度是通过均匀灯板标定的，第一图像采集装置的等效灵敏度的标定方式具体为：在高低色温下，使用照度计标定均匀灯板，得到低色温和高色温下的亮度测量值$BL_{\mathrm{measure},l}$和$BL_{\mathrm{measure},h}$，等效灵敏度与$BL_{\mathrm{measure},l}$、$BL_{\mathrm{measure},h}$之间的关系如下公式所示：

$$f(BL_{\mathrm{measure},l}) = k_l' \cdot G \cdot T \cdot L_l,\qquad f(BL_{\mathrm{measure},h}) = k_h' \cdot G \cdot T \cdot L_h$$

其中，$k_1'$为等效灵敏度，$k_l'$为低色温的等效灵敏度，$k_h'$为高色温的等效灵敏度，$G$为曝光增益，$T$为曝光时间，$f(\cdot)$为反响应曲线，$L_l$为低色温的环境光亮度，$L_h$为高色温的环境光亮度；上位机通过SDK读取$k_1'$的值，如果$k_1'$的值为1，则设置均匀灯板为高照度和低照度，得到测光结果，并通过SDK读取测光结果，然后按照上述公式计算出$k_1'$的值，并将计算得到的$k_1'$的值写入标定文件，从而得到第一图像采集装置的等效灵敏度。照度计用于测量物体被照明的程度，也即物体表面所得到的光通量与被照面积之比。照度计是由硒光电池或硅光电池配合滤光片和微安表组成。
在确定曝光参数、目标亮度值和第一图像采集装置的等效灵敏度之后,通过曝光方程以及曝光参数、目标亮度值和第一图像采集装置的等效灵敏度,可以计算得到当前环境光亮度。
其中，环境光亮度的计算公式为：

$$L = \frac{\overline{BL}}{k_1' \cdot G \cdot T}$$

其中，$\overline{BL}$为图像的目标亮度值，$G$为曝光增益，$T$为曝光时间，$k_1'$为第一图像采集装置的等效灵敏度。
S102、获取所述第二图像采集装置对应的图像目标亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数。
在得到当前环境光亮度之后,获取第二图像采集装置对应的图像目标亮度,其中,该图像目标亮度为提前设定的图像亮度。第二图像采集装置可以基于当前环境光亮度和所述图像目标亮度生成合适的目标曝光参数,以生成合适的图像。使得可移动平台能够基于该图像执行定位、避障、悬停等算法中的至少一种。可以理解的是,该图像目标亮度可基于实际情况进行设置,本申请对此不作具体限定。
在一实施例中,根据第二预设等效灵敏度和当前环境光亮度,确定等效环境光亮度;根据图像目标亮度和等效环境光亮度,确定第二图像采集装置对应的目标曝光参数。其中,所述第二预设等效灵敏度根据对所述第二图像采集装 置进行等效灵敏度标定确定,第二图像采集装置的等效灵敏度的标定过程可以参照前述第二图像采集装置的等效灵敏度的标定过程,此处不做赘述。
示例性的，第二图像采集装置的等效灵敏度为$k_2'$，当前环境光亮度为$L$，则等效环境光亮度为

$$L_{eq} = k_2' \cdot L$$

设图像目标亮度为$L'$，则第二图像采集装置对应的目标曝光参数为：

$$G_2 \cdot T_2 = \frac{L'}{L_{eq}} = \frac{k_1' \cdot G \cdot T \cdot L'}{k_2' \cdot \overline{BL}}$$

其中，$G$和$T$为第一图像采集装置获取图像时的曝光参数。
通过对第一图像采集装置的等效灵敏度$k_1'$和第二图像采集装置的等效灵敏度$k_2'$进行标定，可以提高第一图像采集装置和第二图像采集装置对环境光的测量准确性。通过对可移动平台上的六个图像采集装置标定等效灵敏度前后进行测光实验，可以得到以下实验数据，六个图像采集装置分别为下视传感器、前左视觉传感器、前右视觉传感器、后左视觉传感器、后右视觉传感器、左视觉传感器和右视觉传感器。表1为未标定等效灵敏度的六个图像采集装置测量积分球亮度的实验数据表，表2为六个图像采集装置的标定的等效灵敏度的计算值的表，表3为标定等效灵敏度的六个图像采集装置测量积分球亮度的实验数据表。
表1
Figure PCTCN2019120416-appb-000012
表2
Figure PCTCN2019120416-appb-000013
表3
Figure PCTCN2019120416-appb-000014
通过表1中积分球亮度、后右视觉传感器的亮度测量值和右视觉传感器的亮度测量值，可以得到如图4所示的曲线图，通过表3中的积分球亮度，可以得到如图5所示的积分球亮度的曲线图，通过表3中的后右视觉传感器的亮度测量值，可以得到如图6所示的曲线图，通过表3中的右视觉传感器的亮度测量值，可以得到如图7所示的曲线图。通过图4、图5、图6和图7可以知晓，通过标定等效灵敏度后的图像采集装置测量得到的光亮度与实际的积分球亮度之间的差距较小，而未标定等效灵敏度的图像采集装置测量得到的光亮度与实际的积分球亮度之间的差距较大。
S103、控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
在确定第二图像采集装置对应的目标曝光参数之后,控制第二图像采集装 置基于目标曝光参数,生成目标图像,即将该目标曝光参数发送至第二图像采集装置,由第二图像采集装置基于该目标曝光参数生成目标图像。其中,该目标图像的亮度为所述图像目标亮度。
在一实施例中,将该目标图像发送至控制器,以供该控制器基于该目标图像执行相应的算法。其中,执行的算法包括定位算法、避障算法、悬停算法、曝光控制算法和航拍算法中的至少一种。基于曝光正常的目标图像,执行相应的算法,可以提高算法的计算效果。
上述实施例提供的图像处理方法,通过采光较好的图像采集装置采集到的图像确定环境亮度,并由采光较差的图像采集装置基于图像目标亮度和该环境亮度确定目标曝光参数,且基于该目标曝光参数,生成对应的图像,使得生成的图像的曝光正常,有效的提高图像质量,可以保证以该图像为基础的计算机视觉的计算效果。
请参阅图8,图8是本申请一实施例提供的另一种图像处理方法的步骤示意流程图。
具体地,如图8所示,该图像处理方法包括步骤S201至S203。
S201、获取所述第一图像采集装置获取到的图像;
其中,由于第一图像采集装置与第二图像采集装置的采光方向不同,因此,在第一图像采集装置与第二图像采集装置的采光方向的光照强度不同时,第一图像采集装置获取到的环境光亮度以及第二图像采集装置获取到的环境光亮度也不相同。
S202、提取所述图像中的所述第一图像区域,并根据所述第一图像区域确定当前环境光亮度。
其中，所述图像包括第一图像区域与第二图像区域。所述第一图像区域的亮度明显小于第二图像区域的亮度，所述第一图像区域为光线不好的区域，所述第二图像区域为光线好的区域。例如，所述第一图像区域包括非天空区域，所述第二图像区域包括天空区域。请参照图9，图9是本申请实施例中第一图像区域与第二图像区域的一示意图，如图9所示，区域A是第一图像区域，区域B是第二图像区域。
提取所述图像中的第一图像区域,即提取所述图像中的非天空区域,并根据第一图像区域确定当前环境光亮度。通过提取图像中的非天空区域,且基于非天空区域确定当前环境光亮度,而忽略天空区域,由于天空本身无纹理,且被视作无穷远,在忽略天空区域之后,可以准确且快速的确定环境光亮度。
在一实施例中,根据训练好的区域分割模型处理所述图像,得到所述第一图像区域。本申请实施例中所涉及到的所述区域分割模型可以包括但不限于:卷积神经网络模型(ConvolutionalNeural Networks,CNN)。
具体地,区域分割模型的输入数据为图像,输出的结果可以为输入的图像中亮度较弱的第一图像区域,因此只需要将步骤S201获取得到的图像输入该区域分割模型,并获取区域分割模型输出的图像,即可提取得到图像中的第一图像区域。
或者,还可以利用神经网络模型处理所述图像,并获取该神经网络模型输出的图像中亮度较弱的第一图像区域的掩板(mask),因此只需要将步骤S201获取得到的图像输入该区域分割模型,并获取该区域分割模型输出的图像掩板,然后利用该图像掩板与输入的图像进行比对提取,即可提取得到图像中的第一图像区域。
本申请实施例中,所提到的任一神经网络模型可以根据在执行本方案之前,利用样本数据对基础模型进行训练。其中,基础模型具备与该神经网络模型相同的输入、输出与内部逻辑设计,但基础模型的卷积核参数为预设的,通过样本数据的训练,对基础模型的卷积核参数进行不断调整与训练,使得基础模型输出的预测结果趋向真实结果,当二者之间的差距小于预设的阈值时,将此时训练得到的基础模型作为神经网络模型,完成模型训练过程,即可得到区域分割模型。可以理解的是,前述模型训练的过程可以预先完成,在具体执行本方案时,无需再重复执行模型训练过程,以节省处理步骤和时间,提高处理效率。
在一实施例中，获取所述图像中各像素点的亮度值，并根据所述图像中各像素点的亮度值，提取所述图像中的所述第一图像区域。由于天空区域与非天空区域的亮度不同，因此通过各像素点的亮度，也可以提取得到图像中的第一图像区域。其中，针对图像中的任意一个像素点，其亮度为该像素点的三色像素值之间的加权和。具体的，任意像素点的亮度值Y可以表示为：Y=(0.299*R)+(0.587*G)+(0.114*B)，其中，R、G、B分别表示红绿蓝三原色分量。
除此之外,还可以通过神经网络模型来处理所述图像,得到图像中各像素点的亮度值。此时,该神经网络模型的输入数据为图像,输出数据为图像中各像素点的亮度值。在此基础上,还需要进一步根据各像素点的亮度值,来确定第一图像区域。
具体地,可以将各像素点的亮度值与预设的亮度阈值进行比较,从而,将亮度高于预设的亮度阈值的像素点,作为第二图像区域的像素点,将亮度低于 预设的亮度阈值的像素点,作为第一图像区域的像素点,从而提取图像中的第一图像区域。或者,还可以设计多个亮度阈值,以构成亮度区间,并根据亮度区间确定第一图像区域与第二图像区域。或者,在将各像素点的亮度值与预设的亮度阈值进行比对之后,得到亮度较高的像素点与亮度较低的像素点,而在进一步确定第一图像区域与第二图像区域时,可以确定相邻像素点之间亮度值之差大于预设差值阈值的像素点,并以这些像素点构成第一图像区域与第二图像区域的分割线,构成第一图像区域与第二图像区域。
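The luma computation and threshold split described above can be sketched as follows; the threshold of 180 and the sample pixels are illustrative assumptions, not values from the application:

```python
# Hedged sketch of the brightness-based split: per-pixel luma
# Y = 0.299*R + 0.587*G + 0.114*B, then a threshold assigns bright pixels
# to the sky (second) area and the rest to the non-sky (first) area.

def luma(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def split_regions(pixels, threshold=180.0):
    non_sky, sky = [], []
    for p in pixels:
        (sky if luma(p) >= threshold else non_sky).append(p)
    return non_sky, sky

pixels = [(30, 40, 50), (220, 230, 240), (90, 85, 80), (250, 250, 250)]
non_sky, sky = split_regions(pixels)
print(len(non_sky), len(sky))
```

Ambient-light estimation would then proceed over `non_sky` only, as the surrounding text describes.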
在一实施例中,根据所述第一图像采集装置对应的预设反响应曲线,对所述第一图像区域中的各像素点的亮度值进行处理;统计所述第一图像区域的像素个数,并根据所述像素个数和经过处理后的每个所述亮度值,确定所述图像的目标亮度值;根据所述曝光参数、目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。需要说明的是,基于预设反响应曲线,对像素点的亮度值进行处理的具体过程可以参照上述实施例,此处不做赘述。
S203、获取所述第二图像采集装置对应的图像目标亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数。
在得到当前环境光亮度之后,获取第二图像采集装置对应的图像目标亮度,其中,该图像目标亮度为提前设定的图像亮度,可以理解的是,该图像目标亮度可基于实际情况进行设置,本申请对此不作具体限定。
S204、控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
在确定第二图像采集装置对应的目标曝光参数之后,控制第二图像采集装置基于目标曝光参数,生成目标图像,即将该目标曝光参数发送至第二图像采集装置,由第二图像采集装置基于该目标曝光参数生成目标图像。其中,该目标图像的亮度为所述图像目标亮度。
上述实施例提供的图像处理方法,通过提取图像中的非天空区域,且基于非天空区域确定当前环境光亮度,而忽略天空区域,由于天空本身无纹理,且被视作无穷远,在忽略天空区域之后,可以准确且快速的确定环境光亮度,再基于环境光亮度和图像目标亮度确定目标曝光参数,且基于该目标曝光参数,生成对应的图像,使得生成的图像的曝光正常,有效的提高图像质量,可以保证以该图像为基础的计算机视觉的计算效果。
请参阅图10,图10是本申请一实施例提供的图像采集装置的示意性框图。
如图10所示,该图像采集装置300包括第一图像采集装置301、第二图像采集装置302、处理器303和存储器304,第一图像采集装置301、第二图像采集装置302、处理器303和存储器304通过总线305连接,该总线305比如为I2C(Inter-integrated Circuit)总线。所述第一图像采集装置301与所述第二图像采集装置302的采光方向不同。
具体地,处理器303可以是微控制单元(Micro-controllerUnit,MCU)、中央处理单元(Central Processing Unit,CPU)或数字信号处理器(Digital Signal Processor,DSP)等。
具体地,存储器304可以是Flash芯片、只读存储器(ROM,Read-Only Memory)磁盘、光盘、U盘或移动硬盘等。
其中,所述处理器303用于运行存储在存储器304中的计算机程序,并在执行所述计算机程序时实现如下步骤:
获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度;
获取所述第二图像采集装置对应的图像目标亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数;
控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
进一步地,所述处理器实现根据所述图像确定当前环境光亮度时,用于实现:
获取所述第一图像采集装置采集所述图像时的曝光参数,并确定所述图像的目标亮度值;
根据所述曝光参数、所述目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
进一步地,所述处理器实现确定所述图像的目标亮度值时,用于实现:
获取所述图像中的各像素的亮度值,并根据所述第一图像采集装置对应的预设反响应曲线,对各像素点的所述亮度值进行处理;
统计所述图像的像素个数,并根据所述像素个数和经过处理后的各像素点的所述亮度值,确定所述图像的目标亮度值。
进一步地,所述处理器实现根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数时,用于实现:
根据第二预设等效灵敏度和所述当前环境光亮度,确定等效环境光亮度;
根据所述图像目标亮度和所述等效环境光亮度,确定所述第二图像采集装置对应的目标曝光参数。
进一步地,所述第一预设等效灵敏度根据对所述第一图像采集装置进行等效灵敏度标定确定,所述第二预设等效灵敏度根据对所述第二图像采集装置进行等效灵敏度标定确定,所述预设反响应曲线根据对所述第一图像采集装置进行反响应曲线标定确定。
进一步地,所述图像包括第一图像区域与第二图像区域,所述第一图像区域包括非天空区域,所述第二图像区域包括天空区域;所述处理器实现根据所述图像确定当前环境光亮度时,用于实现:
提取所述图像中的所述第一图像区域,并根据所述第一图像区域确定当前环境光亮度。
进一步地,所述处理器实现提取所述图像中的所述第一图像区域时,用于实现:
根据训练好的区域分割模型处理所述图像,得到所述第一图像区域。
进一步地,所述区域分割模型包括:卷积神经网络模型。
进一步地,所述第一图像区域的亮度小于所述第二图像区域的亮度;所述处理器实现提取所述图像中的所述第一图像区域时,用于实现:
获取所述图像中各像素点的亮度值,并根据所述图像中各像素点的亮度值,提取所述图像中的所述第一图像区域。
进一步地,所述处理器实现根据所述第一图像区域确定当前环境光亮度时,用于实现:
根据所述第一图像采集装置对应的预设反响应曲线,对所述第一图像区域中的各像素点的亮度值进行处理;
统计所述第一图像区域的像素个数,并根据所述像素个数和经过处理后的每个所述亮度值,确定所述图像的目标亮度值;
根据所述曝光参数、目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
进一步地,所述处理器实现获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度之前,还用于实现:
获取所述第二图像采集装置获取到的环境光亮度,并确定所述环境光亮度是否小于或等于预设阈值;
若所述环境光亮度小于或等于预设阈值,则获取所述第一图像采集装置获 取到的图像,并根据所述图像确定当前环境光亮度。
可选地,所述处理器实现控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像之后,还用于实现:
基于所述目标图像,执行相应的算法。
进一步地,所述处理器实现基于所述目标图像,执行相应的算法时,用于实现:
基于所述目标图像,执行如下至少一种算法:定位算法、避障算法、悬停算法、曝光控制算法和航拍算法。
需要说明的是,所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的图像采集装置的具体工作过程,可以参考前述图像处理方法实施例中的对应过程,在此不再赘述。
请参阅图11,图11是本申请一实施例提供的可移动平台的一示意性框图。
如图11所示,所述可移动平台400包括平台主体401和图像采集装置402,所述图像采集装置402包括第一图像采集装置4021和第二图像采集装置4022,所述第一图像采集装置4021与所述第二图像采集装置4022的采光方向不同;
所述第一图像采集装置4021用于获取图像,并根据所述图像确定当前环境光亮度;
所述第二图像采集装置4022用于获取对应的图像目标亮度以及所述当前环境光亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数;
所述第二图像采集装置4022还用于基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
进一步地,所述第一图像采集装置4021还用于:
获取所述第一图像采集装置采集所述图像时的曝光参数,并确定所述图像的目标亮度值;
根据所述曝光参数、所述目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
进一步地,所述第一图像采集装置4021还用于:
获取所述图像中的各像素的亮度值,并根据所述第一图像采集装置对应的预设反响应曲线,对各像素点的所述亮度值进行处理;
统计所述图像的像素个数,并根据所述像素个数和经过处理后的各像素点的所述亮度值,确定所述图像的目标亮度值。
进一步地,所述第二图像采集装置4022还用于:
根据第二预设等效灵敏度和所述当前环境光亮度,确定等效环境光亮度;
根据所述图像目标亮度和所述等效环境光亮度,确定所述第二图像采集装置对应的目标曝光参数。
进一步地,所述第一预设等效灵敏度根据对所述第一图像采集装置4021进行等效灵敏度标定确定,所述第二预设等效灵敏度根据对所述第二图像采集装置4022进行等效灵敏度标定确定,所述预设反响应曲线根据对所述第一图像采集装置4021进行反响应曲线标定确定。
进一步地,所述图像包括第一图像区域与第二图像区域,所述第一图像区域包括非天空区域,所述第二图像区域包括天空区域;所述第一图像采集装置4021还用于:
提取所述图像中的所述第一图像区域,并根据所述第一图像区域确定当前环境光亮度。
进一步地,所述第一图像采集装置4021还用于:
根据训练好的区域分割模型处理所述图像,得到所述第一图像区域。
进一步地,所述区域分割模型包括:卷积神经网络模型。
进一步地,所述第一图像区域的亮度小于所述第二图像区域的亮度;所述第一图像采集装置4021还用于:
获取所述图像中各像素点的亮度值,并根据所述图像中各像素点的亮度值,提取所述图像中的所述第一图像区域。
进一步地,所述第一图像采集装置4021还用于:
根据所述第一图像采集装置对应的预设反响应曲线,对所述第一图像区域中的各像素点的亮度值进行处理;
统计所述第一图像区域的像素个数,并根据所述像素个数和经过处理后的每个所述亮度值,确定所述图像的目标亮度值;
根据所述曝光参数、目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
进一步地,所述第一图像采集装置4021还用于:
获取所述第二图像采集装置获取到的环境光亮度,并确定所述环境光亮度是否小于或等于预设阈值;
若所述环境光亮度小于或等于预设阈值,则获取图像,并根据所述图像确定当前环境光亮度。
进一步地,所述可移动平台还包括处理器,所述处理器用于实现:
获取所述第二图像采集装置生成的目标图像,并根据所述目标图像,执行相应的算法。
进一步地,所述处理器实现根据所述目标图像,执行相应的算法时,用于实现:
根据所述目标图像,执行如下至少一种算法:定位算法、避障算法、悬停算法、曝光控制算法和航拍算法。
需要说明的是,所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的可移动平台的具体工作过程,可以参考前述图像处理方法实施例中的对应过程,在此不再赘述。
请参阅图12,图12是本申请一实施例提供的可移动平台的另一示意性框图。
如图12所示,所述可移动平台500包括处理器501和图像采集装置502,所述图像采集装置502用于与所述处理器501通信连接,所述图像采集装置502包括第一图像采集装置5021和第二图像采集装置5022,所述第一图像采集装置5021与所述第二图像采集装置5022的采光方向不同。所述处理器501用于执行如上所述的图像处理方法。
需要说明的是,所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的可移动平台的具体工作过程,可以参考前述图像处理方法实施例中的对应过程,在此不再赘述。
本申请的实施例中还提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序中包括程序指令,所述处理器执行所述程序指令,实现上述实施例提供的图像处理方法的步骤。
其中,所述计算机可读存储介质可以是前述任一实施例所述的可移动平台的内部存储单元,例如所述可移动平台的硬盘或内存。所述计算机可读存储介质也可以是所述可移动平台的外部存储设备,例如所述可移动平台上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。
应当理解,在此本申请说明书中所使用的术语仅仅是出于描述特定实施例的目的而并不意在限制本申请。如在本申请说明书和所附权利要求书中所使用的那样,除非上下文清楚地指明其它情况,否则单数形式的“一”、“一个”及“该”意在包括复数形式。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (42)

  1. 一种图像处理方法,其特征在于,应用于图像采集装置,所述图像采集装置包括第一图像采集装置和第二图像采集装置,所述第一图像采集装置与所述第二图像采集装置的采光方向不同,所述方法包括:
    获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度;
    获取所述第二图像采集装置对应的图像目标亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数;
    控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
  2. 根据权利要求1所述的图像处理方法,其特征在于,所述根据所述图像确定当前环境光亮度,包括:
    获取所述第一图像采集装置采集所述图像时的曝光参数,并确定所述图像的目标亮度值;
    根据所述曝光参数、所述目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
  3. 根据权利要求2所述的图像处理方法,其特征在于,所述确定所述图像的目标亮度值,包括:
    获取所述图像中的各像素的亮度值,并根据所述第一图像采集装置对应的预设反响应曲线,对各像素点的所述亮度值进行处理;
    统计所述图像的像素个数,并根据所述像素个数和经过处理后的各像素点的所述亮度值,确定所述图像的目标亮度值。
  4. 根据权利要求1所述的图像处理方法,其特征在于,所述根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数,包括:
    根据第二预设等效灵敏度和所述当前环境光亮度,确定等效环境光亮度;
    根据所述图像目标亮度和所述等效环境光亮度,确定所述第二图像采集装置对应的目标曝光参数。
  5. 根据权利要求2或4所述的图像处理方法,其特征在于,所述第一预设等效灵敏度根据对所述第一图像采集装置进行等效灵敏度标定确定,所述第二 预设等效灵敏度根据对所述第二图像采集装置进行等效灵敏度标定确定,所述预设反响应曲线根据对所述第一图像采集装置进行反响应曲线标定确定。
  6. 根据权利要求1至4中任一项所述的图像处理方法,其特征在于,所述图像包括第一图像区域与第二图像区域,所述第一图像区域包括非天空区域,所述第二图像区域包括天空区域;所述根据所述图像确定当前环境光亮度,包括:
    提取所述图像中的所述第一图像区域,并根据所述第一图像区域确定当前环境光亮度。
  7. 根据权利要求6所述的图像处理方法,其特征在于,所述提取所述图像中的所述第一图像区域,包括:
    根据训练好的区域分割模型处理所述图像,得到所述第一图像区域。
  8. 根据权利要求7所述的图像处理方法,其特征在于,所述区域分割模型包括:卷积神经网络模型。
  9. 根据权利要求6所述的图像处理方法,其特征在于,所述第一图像区域的亮度小于所述第二图像区域的亮度;所述提取所述图像中的所述第一图像区域,包括:
    获取所述图像中各像素点的亮度值,并根据所述图像中各像素点的亮度值,提取所述图像中的所述第一图像区域。
  10. 根据权利要求6所述的图像处理方法,其特征在于,所述根据所述第一图像区域确定当前环境光亮度,包括:
    根据所述第一图像采集装置对应的预设反响应曲线,对所述第一图像区域中的各像素点的亮度值进行处理;
    统计所述第一图像区域的像素个数,并根据所述像素个数和经过处理后的每个所述亮度值,确定所述图像的目标亮度值;
    根据所述曝光参数、目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
  11. 根据权利要求1至4中任一项所述的图像处理方法,其特征在于,所述获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度之前,还包括:
    获取所述第二图像采集装置获取到的环境光亮度,并确定所述环境光亮度是否小于或等于预设阈值;
    若所述环境光亮度小于或等于预设阈值,则获取所述第一图像采集装置获 取到的图像,并根据所述图像确定当前环境光亮度。
  12. 根据权利要求1至4中任一项所述的图像处理方法,其特征在于,所述控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像之后,还包括:
    将所述目标图像发送至控制器,以供所述控制器基于所述目标图像,执行相应的算法。
  13. 根据权利要求12所述的图像处理方法,其特征在于,所述将所述目标图像发送至控制器,以供所述控制器基于所述目标图像,执行相应的算法,包括:
    将所述目标图像发送至控制器,以供所述控制器基于所述目标图像,执行如下至少一种算法:定位算法、避障算法、悬停算法、曝光控制算法和航拍算法。
  14. 一种图像采集装置,其特征在于,所述图像采集装置包括第一图像采集装置和第二图像采集装置,所述第一图像采集装置与所述第二图像采集装置的采光方向不同,所述图像采集装置还包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如下步骤:
    获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度;
    获取所述第二图像采集装置对应的图像目标亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数;
    控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
  15. 根据权利要求14所述的图像采集装置,其特征在于,所述处理器实现根据所述图像确定当前环境光亮度时,用于实现:
    获取所述第一图像采集装置采集所述图像时的曝光参数,并确定所述图像的目标亮度值;
    根据所述曝光参数、所述目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
  16. 根据权利要求15所述的图像采集装置,其特征在于,所述处理器实现确定所述图像的目标亮度值时,用于实现:
    获取所述图像中的各像素的亮度值,并根据所述第一图像采集装置对应的预设反响应曲线,对各像素点的所述亮度值进行处理;
    统计所述图像的像素个数,并根据所述像素个数和经过处理后的各像素点的所述亮度值,确定所述图像的目标亮度值。
  17. 根据权利要求14所述的图像采集装置,其特征在于,所述处理器实现根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数时,用于实现:
    根据第二预设等效灵敏度和所述当前环境光亮度,确定等效环境光亮度;
    根据所述图像目标亮度和所述等效环境光亮度,确定所述第二图像采集装置对应的目标曝光参数。
  18. 根据权利要求14或17所述的图像采集装置,其特征在于,所述第一预设等效灵敏度根据对所述第一图像采集装置进行等效灵敏度标定确定,所述第二预设等效灵敏度根据对所述第二图像采集装置进行等效灵敏度标定确定,所述预设反响应曲线根据对所述第一图像采集装置进行反响应曲线标定确定。
  19. 根据权利要求14至17中任一项所述的图像采集装置,其特征在于,所述图像包括第一图像区域与第二图像区域,所述第一图像区域包括非天空区域,所述第二图像区域包括天空区域;所述处理器实现根据所述图像确定当前环境光亮度时,用于实现:
    提取所述图像中的所述第一图像区域,并根据所述第一图像区域确定当前环境光亮度。
  20. 根据权利要求19所述的图像采集装置,其特征在于,所述处理器实现提取所述图像中的所述第一图像区域时,用于实现:
    根据训练好的区域分割模型处理所述图像,得到所述第一图像区域。
  21. 根据权利要求20所述的图像采集装置,其特征在于,所述区域分割模型包括:卷积神经网络模型。
  22. 根据权利要求19所述的图像采集装置,其特征在于,所述第一图像区域的亮度小于所述第二图像区域的亮度;所述处理器实现提取所述图像中的所述第一图像区域时,用于实现:
    获取所述图像中各像素点的亮度值,并根据所述图像中各像素点的亮度值,提取所述图像中的所述第一图像区域。
  23. 根据权利要求19所述的图像采集装置,其特征在于,所述处理器实现根据所述第一图像区域确定当前环境光亮度时,用于实现:
    根据所述第一图像采集装置对应的预设反响应曲线,对所述第一图像区域中的各像素点的亮度值进行处理;
    统计所述第一图像区域的像素个数,并根据所述像素个数和经过处理后的每个所述亮度值,确定所述图像的目标亮度值;
    根据所述曝光参数、目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
  24. 根据权利要求14至17中任一项所述的图像采集装置,其特征在于,所述处理器实现获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度之前,还用于实现:
    获取所述第二图像采集装置获取到的环境光亮度,并确定所述环境光亮度是否小于或等于预设阈值;
    若所述环境光亮度小于或等于预设阈值,则获取所述第一图像采集装置获取到的图像,并根据所述图像确定当前环境光亮度。
  25. 根据权利要求14至17中任一项所述的图像采集装置,其特征在于,所述处理器实现控制所述第二图像采集装置基于所述目标曝光参数,生成所述图像目标亮度的目标图像之后,还用于实现:
    基于所述目标图像,执行相应的算法。
  26. 根据权利要求25所述的图像采集装置,其特征在于,所述处理器实现基于所述目标图像,执行相应的算法时,用于实现:
    基于所述目标图像,执行如下至少一种算法:定位算法、避障算法、悬停算法、曝光控制算法和航拍算法。
  27. 一种可移动平台,其特征在于,所述可移动平台包括图像采集装置,所述图像采集装置包括第一图像采集装置和第二图像采集装置,所述第一图像采集装置与所述第二图像采集装置的采光方向不同;
    所述第一图像采集装置用于获取图像,并根据所述图像确定当前环境光亮度;
    所述第二图像采集装置用于获取对应的图像目标亮度以及所述当前环境光亮度,并根据所述当前环境光亮度和所述图像目标亮度,确定所述第二图像采集装置对应的目标曝光参数;
    所述第二图像采集装置还用于基于所述目标曝光参数,生成所述图像目标亮度的目标图像。
  28. 根据权利要求27所述的可移动平台,其特征在于,所述第一图像采集 装置还用于:
    获取所述第一图像采集装置采集所述图像时的曝光参数,并确定所述图像的目标亮度值;
    根据所述曝光参数、所述目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
  29. 根据权利要求28所述的可移动平台,其特征在于,所述第一图像采集装置还用于:
    获取所述图像中的各像素的亮度值,并根据所述第一图像采集装置对应的预设反响应曲线,对各像素点的所述亮度值进行处理;
    统计所述图像的像素个数,并根据所述像素个数和经过处理后的各像素点的所述亮度值,确定所述图像的目标亮度值。
  30. 根据权利要求27所述的可移动平台,其特征在于,所述第二图像采集装置还用于:
    根据第二预设等效灵敏度和所述当前环境光亮度,确定等效环境光亮度;
    根据所述图像目标亮度和所述等效环境光亮度,确定所述第二图像采集装置对应的目标曝光参数。
  31. 根据权利要求27或30所述的可移动平台,其特征在于,所述第一预设等效灵敏度根据对所述第一图像采集装置进行等效灵敏度标定确定,所述第二预设等效灵敏度根据对所述第二图像采集装置进行等效灵敏度标定确定,所述预设反响应曲线根据对所述第一图像采集装置进行反响应曲线标定确定。
  32. 根据权利要求27至30中任一项所述的可移动平台,其特征在于,所述图像包括第一图像区域与第二图像区域,所述第一图像区域包括非天空区域,所述第二图像区域包括天空区域;所述第一图像采集装置还用于:
    提取所述图像中的所述第一图像区域,并根据所述第一图像区域确定当前环境光亮度。
  33. 根据权利要求32所述的可移动平台,其特征在于,所述第一图像采集装置还用于:
    根据训练好的区域分割模型处理所述图像,得到所述第一图像区域。
  34. 根据权利要求33所述的可移动平台,其特征在于,所述区域分割模型包括:卷积神经网络模型。
  35. 根据权利要求32所述的可移动平台,其特征在于,所述第一图像区域的亮度小于所述第二图像区域的亮度;所述第一图像采集装置还用于:
    获取所述图像中各像素点的亮度值,并根据所述图像中各像素点的亮度值,提取所述图像中的所述第一图像区域。
  36. 根据权利要求32所述的可移动平台,其特征在于,所述第一图像采集装置还用于:
    根据所述第一图像采集装置对应的预设反响应曲线,对所述第一图像区域中的各像素点的亮度值进行处理;
    统计所述第一图像区域的像素个数,并根据所述像素个数和经过处理后的每个所述亮度值,确定所述图像的目标亮度值;
    根据所述曝光参数、目标亮度值和第一预设等效灵敏度,确定当前环境光亮度。
  37. 根据权利要求27至30中任一项所述的可移动平台,其特征在于,所述第一图像采集装置还用于:
    获取所述第二图像采集装置获取到的环境光亮度,并确定所述环境光亮度是否小于或等于预设阈值;
    若所述环境光亮度小于或等于预设阈值,则获取图像,并根据所述图像确定当前环境光亮度。
  38. 根据权利要求27至30中任一项所述的可移动平台,其特征在于,所述可移动平台还包括处理器,所述处理器用于实现:
    获取所述第二图像采集装置生成的目标图像,并根据所述目标图像,执行相应的算法。
  39. 根据权利要求38所述的可移动平台,其特征在于,所述处理器实现根据所述目标图像,执行相应的算法时,用于实现:
    根据所述目标图像,执行如下至少一种算法:定位算法、避障算法、悬停算法、曝光控制算法和航拍算法。
  40. 根据权利要求27所述的可移动平台,其特征在于,所述可移动平台包括无人飞行器、无人驾驶车辆和机器人。
  41. 一种可移动平台,其特征在于,所述可移动平台包括图像采集装置和处理器,所述图像采集装置包括第一图像采集装置和第二图像采集装置;其中,所述图像采集装置用于与所述处理器通信连接,所述第一图像采集装置与所述第二图像采集装置的采光方向不同;
    所述处理器用于执行权利要求1至13任一项所述的图像处理方法。
  42. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存 储有计算机程序,所述计算机程序被处理器执行时使所述处理器实现如权利要求1至13中任一项所述的图像处理方法。
PCT/CN2019/120416 2019-11-22 2019-11-22 图像处理方法、图像采集装置、可移动平台及存储介质 WO2021097848A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980040207.2A CN112335228A (zh) 2019-11-22 2019-11-22 图像处理方法、图像采集装置、可移动平台及存储介质
PCT/CN2019/120416 WO2021097848A1 (zh) 2019-11-22 2019-11-22 图像处理方法、图像采集装置、可移动平台及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/120416 WO2021097848A1 (zh) 2019-11-22 2019-11-22 图像处理方法、图像采集装置、可移动平台及存储介质

Publications (1)

Publication Number Publication Date
WO2021097848A1 true WO2021097848A1 (zh) 2021-05-27

Family

ID=74319972

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120416 WO2021097848A1 (zh) 2019-11-22 2019-11-22 图像处理方法、图像采集装置、可移动平台及存储介质

Country Status (2)

Country Link
CN (1) CN112335228A (zh)
WO (1) WO2021097848A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115361505A (zh) * 2022-08-16 2022-11-18 豪威集成电路(成都)有限公司 A scene-adaptive AEC target brightness control method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810677A (zh) * 2021-09-09 2021-12-17 Oppo广东移动通信有限公司 Screen brightness adjustment method, terminal, and readable storage medium
CN115484384B (zh) * 2021-09-13 2023-12-01 华为技术有限公司 Method, apparatus, and electronic device for controlling exposure

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150022687A1 (en) * 2013-07-19 2015-01-22 Qualcomm Technologies, Inc. System and method for automatic exposure and dynamic range compression
CN106357987A (zh) * 2016-10-19 2017-01-25 浙江大华技术股份有限公司 Exposure method and device
CN106534709A (zh) * 2015-09-10 2017-03-22 鹦鹉无人机股份有限公司 Drone with forward-looking camera using sky-image segmentation for automatic exposure control
CN109714524A (zh) * 2017-10-26 2019-05-03 佳能株式会社 Imaging apparatus, system, control method of imaging apparatus, and storage medium
CN110213494A (zh) * 2019-07-03 2019-09-06 Oppo广东移动通信有限公司 Photographing method and apparatus, electronic device, and computer-readable storage medium
CN110430372A (zh) * 2019-09-02 2019-11-08 深圳市道通智能航空技术有限公司 Image exposure method and apparatus, photographing device, and unmanned aerial vehicle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898152A (zh) * 2016-04-12 2016-08-24 乐视控股(北京)有限公司 Method and system for restoring ambient light brightness in an image
CN107181917B (zh) * 2017-04-25 2020-08-25 深圳市景阳科技股份有限公司 Picture display method and apparatus
CN108965731A (zh) * 2018-08-22 2018-12-07 Oppo广东移动通信有限公司 Low-light image processing method and apparatus, terminal, and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115361505A (zh) * 2022-08-16 2022-11-18 豪威集成电路(成都)有限公司 A scene-adaptive AEC target brightness control method
CN115361505B (zh) * 2022-08-16 2024-04-30 豪威集成电路(成都)有限公司 A scene-adaptive AEC target brightness control method

Also Published As

Publication number Publication date
CN112335228A (zh) 2021-02-05

Similar Documents

Publication Publication Date Title
WO2021097848A1 (zh) Image processing method, image acquisition device, movable platform and storage medium
US20160094770A1 (en) Image Processing Method and Apparatus, and Terminal
WO2020228781A1 (zh) Image brightness adjustment method and apparatus, and unmanned aerial vehicle
CN110753217B (zh) Color balance method and device, vehicle-mounted equipment, and storage medium
WO2019105261A1 (zh) Background blurring processing method, apparatus, and device
WO2014042104A1 (en) Imaging controller and imaging control method and program
CN107172353A (zh) Automatic exposure method, device, and computer equipment
CN103780840A (zh) Dual-camera imaging device for high-quality imaging and method thereof
WO2019105297A1 (zh) Image blurring processing method and apparatus, mobile device, and storage medium
WO2019029573A1 (zh) Image blurring method, computer-readable storage medium, and computer device
WO2019105298A1 (zh) Image blurring processing method and apparatus, mobile device, and storage medium
CN108521863B (zh) Exposure method and device, computer system, and movable device
CN110121064B (zh) Image color adjustment method and apparatus, and unmanned aerial vehicle
CN107465903A (zh) Image white balance method and device, and computer-readable storage medium
US9319653B2 (en) White balance compensation method and electronic apparatus using the same
CN114430462B (zh) Method, apparatus, device, and storage medium for autonomously adjusting photographing parameters of an unmanned aerial vehicle
US10225482B2 (en) Method of controlling photographing device including flash, and the photographing device
CN116661477A (zh) Substation drone inspection method, apparatus, device, and storage medium
CN104754238B (zh) Imaging control method and device, and imaging control system
TWI545964B (zh) Image white balance method and image capturing device
WO2021145913A1 (en) Estimating depth based on iris size
CN103559687A (zh) Processing method for restoring color information in a black-and-white camera system
CN112330726B (zh) Image processing method and device
CN115294002A (zh) Image fusion method, electronic device, unmanned aerial vehicle, and storage medium
WO2022016340A1 (zh) Method and system for determining exposure parameters of a main camera, movable platform, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
  Ref document number: 19953509
  Country of ref document: EP
  Kind code of ref document: A1
NENP Non-entry into the national phase
  Ref country code: DE
122 Ep: pct application non-entry in european phase
  Ref document number: 19953509
  Country of ref document: EP
  Kind code of ref document: A1