WO2020177723A1 - Image processing method, night photography method, image processing chip, and aerial camera - Google Patents

Image processing method, night photography method, image processing chip, and aerial camera

Info

Publication number
WO2020177723A1
WO2020177723A1 (PCT/CN2020/077821)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
target
frame
target image
Prior art date
Application number
PCT/CN2020/077821
Other languages
English (en)
French (fr)
Inventor
李昭早
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司 filed Critical 深圳市道通智能航空技术有限公司
Publication of WO2020177723A1 publication Critical patent/WO2020177723A1/zh
Priority to US17/467,366 priority Critical patent/US20210400172A1/en

Classifications

    • H04N23/81: Camera processing pipelines; components thereof for suppressing or minimising disturbance in the image signal generation
    • G06T5/70: Denoising; smoothing
    • G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/70: Image analysis; determining position or orientation of objects or cameras
    • G06V10/14: Image acquisition; optical characteristics of the device performing the acquisition or of the illumination arrangements
    • G06V10/40: Extraction of image or video features
    • G06V20/17: Terrestrial scenes taken from planes or by drones
    • H04N23/634: Control of cameras by using electronic viewfinders; warning indications
    • H04N23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/71: Circuitry for evaluating the brightness variation
    • H04N23/741: Increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N5/14: Picture signal circuitry for video frequency region
    • G06T2207/10032: Satellite or aerial image; remote sensing

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method, a night shooting method, an image processing chip and an aerial camera.
  • embodiments of the present invention provide an image processing method, a night shooting method, an image processing chip, and an aerial camera that can improve the shooting effect of a digital camera.
  • an image processing method includes:
  • an enhanced night scene effect image of the target image is formed.
  • the performing the first noise reduction process on each target image of the multi-frame target image to obtain the intra-frame processing result includes:
  • image filtering processing corresponding to the frequency characteristic is used to obtain the intra-frame processing result.
  • the frequency characteristic of the target pixel is determined.
  • the frequency characteristics include low frequency and high frequency
  • the first image filtering process is average filtering
  • the second image filtering process is weighted average filtering
  • the first filter matrix used in the first image filtering process is larger than the second filter matrix used in the second image filtering process.
  • the center value of the second filter matrix used in the second image filtering process is greater than the peripheral value.
  • the pixel points adjacent to the target pixel point include: a left pixel point located on the left side of the target pixel point, in the same row as the target pixel point, and a right pixel point located on the right side of the target pixel point, in the same row as the target pixel point; and
  • the performing a second noise reduction process on any two adjacent frames of target images in the multi-frame target image to obtain an inter-frame processing result includes:
  • the gray value at the position of each pixel is calculated.
  • the signal change characteristics of each frame of the target image at the selected pixel position are determined.
  • the signal change characteristics include static pixels and moving pixels; then,
  • the pixel point of the selected target image at the selected pixel position is a static pixel.
  • the calculating the output result at each pixel point position according to the signal change characteristic specifically includes:
  • the average value is the gray value of the selected pixel position.
  • the forming the night scene shooting effect enhanced image of the target image frame according to the intra-frame processing result and the inter-frame processing result specifically includes:
  • the intra-frame processing result and the inter-frame processing result are weighted by the following formula: Y = a·Yt + (1 − a)·Ys, where
  • Y is the image processing result
  • Ys is the intra-frame processing result
  • Yt is the inter-frame processing result
  • a is the weight of the inter-frame processing result.
  • the night shooting method includes:
  • the image processing method described above is executed on multiple frames of the target image to obtain an image with enhanced night scene effect.
  • In order to solve the above technical problems, embodiments of the present invention also provide the following technical solution: an image processing chip.
  • the image processing chip includes: a processor and a memory communicatively connected with the processor; the memory stores computer program instructions which, when called by the processor, enable the processor to perform the image processing method described above.
  • an aerial camera includes:
  • an image sensor, used to collect multiple frames of images with set shooting parameters; a controller connected to the image sensor, used to trigger the image sensor to continuously expose at a set speed to collect at least two frames of images; an image processor, used to receive the at least two frames of images collected by the image sensor through continuous exposure and perform the above-mentioned image processing method on them, obtaining a night scene effect enhanced image; and a storage device connected to the image processor, used to store the night scene effect enhanced image.
  • the aerial camera further includes a brightness sensor; the brightness sensor is used to sense the current environmental brightness and provide it to the controller; when the environmental brightness is lower than a set value, the controller triggers the image sensor to continuously expose at a set speed to collect at least two image frames.
  • the image processing method of the embodiment of the present invention is based on multiple consecutive image frames and processes noise in the spatial domain and the time domain respectively, which can achieve a better noise reduction effect. In particular, it greatly improves the shooting effect in low-brightness or dim environments and presents users with sharp, low-noise images.
  • Figure 1 is a schematic diagram of an application scenario of an embodiment of the present invention.
  • FIG. 2 is a structural block diagram of an aerial camera provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of the structure of an image processing chip provided by an embodiment of the present invention.
  • FIG. 4 is a method flowchart of an image processing method provided by an embodiment of the present invention.
  • FIG. 5 is a method flowchart of a frequency characteristic detection method provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of image frame processing provided by an embodiment of the present invention.
  • FIG. 7 is a flowchart of a method for detecting signal change characteristics provided by an embodiment of the present invention.
  • FIG. 8 is a flowchart of a second noise reduction processing method provided by an embodiment of the present invention.
  • When shooting, the amount of light reaching the photosensitive element through the camera's optical components is a very important indicator. Therefore, when shooting under insufficient external light (such as night scene shooting), extending the exposure time and increasing the sensitivity of the photosensitive element are the usual ways to deal with the lack of light.
  • Applying the shooting method provided by the embodiments of the present invention in a camera can effectively alleviate the problems of smearing and excessive noise caused by high ISO in the captured image, so as to enhance the image capturing effect in a dim environment.
  • Fig. 1 is an application scenario of an aerial camera provided by an embodiment of the present invention.
  • the application scenario includes a drone 10 equipped with an aerial camera, a smart terminal 20, and a wireless network 30.
  • the UAV 10 may be an unmanned aerial vehicle driven by any type of power, including but not limited to a quadcopter, a fixed-wing aircraft, or a helicopter model. It can have a volume and power appropriate to actual needs, so as to provide load capacity, flight speed, and flight range that meet the requirements of use.
  • the aerial camera can be any type of image acquisition device, including a sports camera, a high-definition camera, or a wide-angle camera.
  • its aerial camera can be installed and fixed on the drone through a fixed bracket such as a pan/tilt, and controlled by the drone 10 to perform image collection tasks.
  • one or more functional modules can be added to the drone to enable corresponding functions, such as a built-in main control chip serving as the control core for flight and data transmission, or an image transmission device that uploads the collected image information to a device connected with the drone.
  • the smart terminal 20 may be any type of smart device used to establish a communication connection with the drone, such as a mobile phone, a tablet computer, or a smart remote control.
  • the smart terminal 20 may be equipped with one or more different user interaction devices to collect user instructions or display and feedback information to the user.
  • buttons, a display screen, a touch screen, speakers, and remote-control joysticks may be included in the smart terminal 20.
  • the smart terminal 20 may be equipped with a touch screen, through which the user's remote control instructions for the drone are received and the image information obtained by the aerial camera is displayed to the user. The user can also switch the image information currently displayed through the touch screen.
  • the UAV 10 and the smart terminal 20 can also integrate existing image visual processing technologies to further provide more intelligent services.
  • the drone 10 may collect images through an aerial camera, and then the smart terminal 20 analyzes the operation gestures in the image, and finally realizes the user's gesture control of the drone 10.
  • the wireless network 30 may be a wireless communication network based on any type of data transmission principle for establishing a data transmission channel between two nodes, such as a Bluetooth network, a WiFi network, a wireless cellular network, or a combination thereof located in a specific signal frequency band.
  • FIG. 2 is a structural block diagram of an aerial camera 11 provided by an embodiment of the present invention.
  • the aerial camera 11 may include: an image sensor 111, a controller 112, an image processor 113, and a storage device 114.
  • the image sensor 111 is a functional module for collecting image frames with set shooting parameters.
  • the optical signal corresponding to the visual image is projected onto the photosensitive element through the lens and related optical components, and the optical signal is converted into the corresponding electrical signal by the photosensitive element.
  • the shooting parameter is a parameter variable that can be adjusted related to the lens and related optical component structure (such as the shutter) during the image acquisition process of the image sensor 111, such as aperture, focal length, or exposure time.
  • the target image refers to image data obtained by the image sensor 111 through one exposure collection.
  • the controller 112 is the control core of the image sensor 111. It is connected to the image sensor, and can correspondingly control the shooting behavior of the image sensor 111 according to the received instruction, for example, set one or more shooting parameters of the image sensor 111.
  • the controller 112 may trigger the image sensor to continuously expose at a set speed to collect at least two frames of target images.
  • the set speed is an artificially set constant value, which can be a default value preset by a technician, or a value set by the user according to the actual situation during use.
  • the image processor 113 is a functional module for image effect enhancement. It can receive at least two frames of target images collected by the image sensor through continuous exposure and perform corresponding image processing methods on them to obtain images with enhanced night scene effects.
  • the night scene effect enhanced image is based on the multi-frame target image acquired by the image processor 113, and the output result is obtained after image processing and integration, which may have better sharpness and lower noise than the originally acquired target image.
  • the storage device 114 is a device for storing data information generated during the use of the aerial camera 11, such as the night scene effect enhanced images.
  • any type of non-volatile memory with suitable capacity, such as an SD card, flash memory, or a solid-state drive, can be used.
  • the storage device 114 may also be a detachable structure or a distributed arrangement structure.
  • the aerial camera may only be provided with a data interface, and data such as night scene effect enhanced images can be transferred to the corresponding device for storage through the data interface.
  • one or more functional modules (such as the controller, the image processor, and the storage device) of the aerial camera 11 shown in FIG. 2 can also be integrated into the drone 10 as part of it.
  • FIG. 2 only exemplarily describes the functional modules of the aerial camera 11 based on the image acquisition process, and is not used to limit the functional modules of the aerial camera 11.
  • Fig. 3 is a structural block diagram of an image processing chip provided by an embodiment of the present invention.
  • the image processing chip can be used to implement all or part of the functions of the image processor and/or the controller.
  • the image processing chip 100 may include a processor 110 and a memory 120.
  • the processor 110 and the memory 120 establish a communication connection with each other through a bus.
  • the processor 110 may be of any type and has one or more processing cores. It can perform single-threaded or multi-threaded operations, and is used to parse instructions to perform operations such as obtaining data, performing logical operation functions, and issuing operation processing results.
  • the memory 120 is used as a non-volatile computer-readable storage medium, such as at least one magnetic disk storage device, a flash memory device, a distributed storage device remotely provided with respect to the processor 110, or other non-volatile solid-state storage devices.
  • the memory 120 may have a program storage area for storing non-volatile software programs, non-volatile computer executable programs, and modules for the processor 110 to call to enable the processor 110 to execute one or more method steps.
  • the memory 120 may also have a data storage area for storing the operation processing result issued and output by the processor 110.
  • the aerial camera carried by the drone 10 can adopt different shooting modes according to different shooting environments and specific usage conditions.
  • the aerial camera carried by the drone 10 can start shooting actions according to the shooting instructions sent by the user on the smart terminal 20, and feed back the target image obtained by shooting.
  • when the drone 10 performs shooting tasks in a dark environment, such as outdoors at night, it is limited by the available ambient light. After receiving the shooting instruction, the aerial camera usually needs to shoot with a longer exposure time and a high sensitivity.
  • the image obtained by the original shooting is therefore prone to smear due to the long exposure time, while the high sensitivity introduces a lot of background noise.
  • the aerial camera can adopt the image processing method provided by the embodiment of the present invention to avoid the problems of smear and noise.
  • the controller can trigger the image sensor to continuously expose, collect multiple frames of target images and provide them to the image processor for comprehensive processing. Finally, after the image processing method executed by the image processor, the night scene effect enhanced image with clean image background and sharp object outline is obtained.
  • These night scene effect enhanced images can be provided to the smart terminal 20 for storage or displayed to the user, and can also be stored by the storage device of the aerial camera 11 itself.
  • the aerial camera also includes a brightness sensor 115.
  • the brightness sensor 115 can be arranged outside the aerial camera for sensing the current environmental brightness.
  • the controller 112 may determine whether the image processing method needs to be executed according to the current environmental brightness. For example, when the environmental brightness is lower than the set value, the image processing method is executed.
  • alternatively, the controller 112 may prompt the user: the smart terminal displays a prompt message asking the user whether to execute the image processing method.
  • the application scenario shown in Figure 1 takes the image processing method applied to the aerial camera carried by the drone as an example.
  • the image processing method can also be used with other types of image acquisition devices to improve the shooting effect of the image acquisition device in a dim environment.
  • the image processing method disclosed in the embodiment of the present invention is not limited to the application on the drone shown in FIG. 1.
  • FIG. 4 is a method flowchart of an image processing method provided by an embodiment of the present invention. As shown in Figure 4, the image processing method includes the following steps:
  • the acquired frames of images have a time linear relationship and can be arranged in time into an image sequence.
  • it can be a series of images obtained by an image sensor in a continuous exposure manner.
  • the number of consecutive exposures and the interval time are artificially set values, which can be set by technicians or users according to actual needs.
  • the first noise reduction processing refers to a set of one or more calculation steps performed on each pixel in the image space domain.
  • the effect of its processing is to reduce the spatial noise of the target image (that is, the background noise in the application scene shown in FIG. 1).
  • the second noise reduction processing refers to a set of one or more calculation steps performed on each pixel according to signal changes of adjacent target images.
  • the purpose of its processing is to reduce the smear phenomenon caused by long exposure time as much as possible.
  • the smear is caused by excessive changes in the image signal at different times. Therefore, smear can also be considered as temporal noise.
  • the "inter-frame processing result" is used here to indicate the output of the target image frame after the second noise reduction processing.
  • the intra-frame processing results and the inter-frame processing results obtained based on two different noise reduction purposes finally need to be combined in a preset manner to form the final night scene effect to enhance the image output.
  • “Combination” is the process of considering the results of intra-frame processing and inter-frame processing, and calculating and determining one output based on two input data. Specifically, any type of model or function calculation can be adopted according to the needs of actual applications or the characteristics of the shooting environment.
  • a weighted calculation method may be used: first, determine the weights of the intra-frame processing result and the inter-frame processing result; then, weight the two results to obtain the night scene effect enhanced image.
  • the image processing result can be obtained by calculating the following formula (1): Y = a·Yt + (1 − a)·Ys
  • Y is the output; Ys is the intra-frame processing result, Yt is the inter-frame processing result, and a is the weight of the inter-frame processing result.
  • the sum of the weights of the intra-frame processing result and the inter-frame processing result is 1.
  • technicians can adjust the value of a to change the relative noise reduction strength applied to time-domain and spatial-domain noise, adapting the image processing result to different shooting environments and outputting a corresponding night scene effect enhanced image.
  • the noise reduction strength for time-domain noise can be increased, making the image processing result sharper with clearer edges.
  • alternatively, the noise reduction strength for spatial-domain noise can be increased, so that background noise in the image processing result is smoothly eliminated.
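  • As an illustrative sketch of the weighted combination in formula (1) (assuming NumPy arrays of equal shape; the function name `blend_results` and all example values are illustrative, not from the patent):

```python
import numpy as np

def blend_results(intra: np.ndarray, inter: np.ndarray, a: float) -> np.ndarray:
    """Blend the intra-frame (spatial) result Ys and the inter-frame
    (temporal) result Yt; `a` is the weight of the inter-frame result,
    and the two weights sum to 1 as formula (1) requires."""
    if not 0.0 <= a <= 1.0:
        raise ValueError("weight a must lie in [0, 1]")
    return a * inter + (1.0 - a) * intra

# Example: equal weighting of two small grayscale results
ys = np.array([[10.0, 20.0], [30.0, 40.0]])  # intra-frame result Ys
yt = np.array([[20.0, 40.0], [10.0, 60.0]])  # inter-frame result Yt
y = blend_results(ys, yt, a=0.5)
print(y)  # element-wise midpoint of Ys and Yt
```

Raising `a` shifts the output toward the temporal denoising result; lowering it emphasizes the spatial denoising result, matching the trade-off described above.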
  • the image processing method provided by the embodiment of the present invention uses the first noise reduction processing and the second noise reduction processing respectively, considering the elimination of both spatial and temporal noise, and can therefore achieve a better noise reduction effect on image frames taken in a dark environment.
  • in order to improve the effect of the first noise reduction processing, the specific noise reduction method used can also be adjusted adaptively according to the characteristics of each pixel to obtain better intra-frame processing results.
  • the frequency characteristics of each pixel of the target image frame in the spatial domain are sequentially detected. Then, an image filtering process corresponding to the frequency characteristic is used for each pixel.
  • the frequency characteristic in the spatial domain refers to the magnitude of the signal change frequency of the pixel with respect to the neighboring pixels. This frequency characteristic is related to the image frame and is a relatively important parameter in image processing. For example, the pixels at the edge of the image will have a higher signal change frequency, while the background part of the image will have a lower signal change frequency.
  • typical noise reduction processing is an image filtering process performed per pixel. Different image filtering processes have different characteristics depending on matrix size and weight distribution.
  • selecting the corresponding image filtering process provides adaptive capability, so that the first noise reduction processing can be adjusted for different parts of the target image frame, thereby effectively improving the effect of the noise reduction processing.
  • the detection method shown in FIG. 5 can also be used to simply determine the frequency characteristics of the pixels to reduce the amount of calculation.
  • the detection method may include the following steps:
  • any method or any rule can be followed to select the target pixel from the target image frame, and it is only necessary to ensure that the pixel with the determined frequency characteristic will not be repeatedly selected.
  • Spatial distance refers to the signal difference between two pixels in the same image frame. Specifically, in an image frame preprocessed as a luminance image, the spatial distance may refer to the grayscale difference between the selected target pixel and the pixel adjacent to the target pixel.
  • the sampling template M may have a cross shape; the adjacent pixels sampled each time include: a left pixel point L and a right pixel point R in the same row as the target pixel, located on its left and right respectively, and an upper pixel point T and a lower pixel point B in the same column as the target pixel, located above and below it respectively.
  • Step 530: determine whether the spatial distances between the target pixel and the multiple adjacent pixels are all less than the frequency threshold. If yes, go to step 540; if not, go to step 550.
  • the frequency threshold is a standard used to delimit the high-frequency and low-frequency parts. It can be set by technicians according to actual needs, or it can be a value that is adjusted adaptively as conditions change.
  • Low frequency is a relative concept, which indicates that the currently detected pixel is a relatively low frequency part in the overall image frame.
  • High frequency is a relative concept, which indicates that the currently detected pixel is a relatively high frequency part in the overall image frame.
  • Step 560: determine whether the frequency characteristics of all pixels of the target image frame have been determined. If not, return to step 510; if yes, end the detection.
  • the whole detection process is carried out in units of pixels, until all the pixels are traversed once and then it ends.
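  • The detection steps above can be sketched as follows (a non-authoritative illustration: the replicate padding at image borders and the example threshold are assumptions, since the source does not specify border handling):

```python
import numpy as np

def classify_frequency(gray: np.ndarray, threshold: float) -> np.ndarray:
    """Label each pixel low (False) or high (True) frequency using the
    cross-shaped template: compare the gray-level distance to the left,
    right, upper, and lower neighbours against the frequency threshold.
    A pixel is low frequency only if ALL four distances are below it."""
    padded = np.pad(gray.astype(float), 1, mode="edge")  # replicate borders
    center = padded[1:-1, 1:-1]
    neighbours = [padded[1:-1, :-2],   # left  pixel L
                  padded[1:-1, 2:],    # right pixel R
                  padded[:-2, 1:-1],   # upper pixel T
                  padded[2:, 1:-1]]    # lower pixel B
    low = np.ones(gray.shape, dtype=bool)
    for n in neighbours:
        low &= np.abs(center - n) < threshold
    return ~low  # True marks a high-frequency pixel

img = np.array([[10, 10, 10],
                [10, 200, 10],
                [10, 10, 10]], dtype=float)
high = classify_frequency(img, threshold=50)
print(high[1, 1], high[0, 0])  # the bright spike is high frequency
```

The vectorized comparison visits every pixel exactly once, mirroring the traversal ending at step 560.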
  • the dichotomy (that is, division into two different frequency characteristics according to one frequency threshold) is taken here as an example for description.
  • other types of judgment methods can also be used to determine the number of frequency classes and the frequency characteristic of each pixel; for example, pixels can be divided into three different frequency classes according to two frequency thresholds.
  • the first image filtering process may be used at the low frequency
  • the second image filtering process whose smoothing intensity is smaller than the first image filtering process may be used at the high frequency.
  • the first image filtering process is average filtering
  • the second image filtering process is weighted average filtering
  • mean filtering is a windowing process based on the filter matrix. As shown in FIG. 6, it roughly includes the following steps: the sampling window first selects a pixel area equal in size to the filter matrix; then the signal of each pixel is multiplied by the element at the corresponding position in the filter matrix; finally, the results for all pixels are summed as the signal of the target pixel.
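  • The windowing steps just described can be sketched as follows (an illustrative implementation only; the 3×3 matrix size and the replicate border padding are assumptions):

```python
import numpy as np

def window_filter(gray: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the filter matrix over the image: at each target pixel,
    select a window the size of the kernel, multiply element-wise with
    the kernel, and sum the products as the new pixel value."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(gray.astype(float), ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty(gray.shape, dtype=float)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            window = padded[i:i + kh, j:j + kw]   # sampling window
            out[i, j] = np.sum(window * kernel)   # multiply and accumulate
    return out

mean_3x3 = np.full((3, 3), 1.0 / 9.0)  # plain average filter
img = np.full((4, 4), 9.0)
print(window_filter(img, mean_3x3))    # a flat image stays flat
```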
  • the elements in the filter matrix can take different values, so that different pixels are given different weights during smoothing to highlight a certain part of the pixels. When such a filter matrix is used, the process can also be called "weighted average filtering".
  • a pixel classified as low frequency is, with high probability, part of the background of the target image frame, and the background is the area where noise most easily appears when shooting at high sensitivity. Therefore, for the low-frequency part, a strong average filter can be chosen for noise reduction to eliminate noise as much as possible.
  • the first filter matrix used in the first image filtering process is larger than the second filter matrix used in the second image filtering process, to achieve a stronger smoothing effect and eliminate noise as much as possible.
  • a pixel classified as high frequency belongs to the foreground part of the target image frame, which contains the shooting target the user wants to observe. Therefore, it is desirable to retain as much texture information of these high-frequency parts as possible to ensure the clarity of the image.
  • the center value of the second filter matrix used in the second image filtering process is greater than the peripheral values. That is, the element at the center of the filter matrix takes a larger value while the elements at the edges take smaller values, so that the signal characteristics of the pixel at the center of the filter matrix are better emphasized.
  • such a first noise reduction method applies different filtering to the high- and low-frequency parts of the target image, each with its own emphasis, and can therefore achieve a better noise reduction effect.
  • the second noise reduction processing may use the pixel position as the processing unit, sequentially detecting the signal change characteristic of the multi-frame target images at each pixel position, and then calculating the output result at each pixel position according to that characteristic.
  • the signal change characteristic is an index measuring the magnitude of the signal change between a target image and its adjacent target image. Given the way the camera collects the target images, the skilled person will understand that this characteristic indicates the smear at a pixel position and can serve as the basis for removing time-domain noise.
  • Fig. 7 is a flowchart of a method for detecting a signal change characteristic provided by an embodiment of the present invention. As shown in Figure 7, the method includes the following steps:
  • any method or rule may be followed to select pixel positions from the target image in sequence; it is only necessary to ensure that every pixel position in the target image is traversed.
  • the time domain distance refers to the grayscale difference of a selected position in any two adjacent frames of target images in the multiple frames of target images.
  • a preset sampling window is used for calculation.
  • the preset sampling window is an area with a set size that contains multiple pixel positions, and is used to collect pixel signals within the area.
  • the preset sampling window is formed by expanding outward with the selected pixel position as the center.
  • the time domain distance indicates the signal change over time between adjacent target images during multiple exposures.
  • step 730: determine whether the time-domain distances at all pixel positions within the preset sampling window are greater than a preset threshold. If yes, go to step 740; if not, go to step 750.
  • the preset "threshold" is a criterion for distinguishing motion from stillness; it can be set by technicians according to actual needs, or it can be a value that is adaptively adjusted as conditions change.
  • the pixel of the target image at the selected pixel position is determined to be a moving pixel.
  • a moving pixel is a relative concept: it indicates that the signal change at that pixel position is large, and smear is likely to occur there.
  • a stationary pixel is likewise a relative concept: it indicates that the signal change at that pixel position is small, and smear is essentially impossible there.
  • step 760: determine whether the signal change characteristics at all pixel positions have been determined. If not, return to step 710; if yes, end the detection of the signal change characteristics.
  • the signal change characteristic of each pixel position is simply divided into two kinds of moving pixels and static pixels according to the magnitude of the change amplitude.
  • those skilled in the art can understand that other different types of judgment methods can also be used to determine specific signal change characteristics.
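The moving/stationary decision of steps 730–750 can be sketched as follows; this is a minimal illustration assuming 8-bit gray frames and the 3-by-3 sampling window described above, with function and variable names of our own choosing.

```python
import numpy as np

def is_moving_pixel(frame_a, frame_b, row, col, threshold):
    """A pixel is 'moving' only when the time-domain distance
    |frame_a - frame_b| exceeds the threshold at EVERY position of the
    3x3 sampling window centred on (row, col); otherwise it is treated
    as a stationary pixel."""
    win_a = frame_a[row - 1:row + 2, col - 1:col + 2].astype(int)
    win_b = frame_b[row - 1:row + 2, col - 1:col + 2].astype(int)
    return bool(np.all(np.abs(win_a - win_b) > threshold))

a = np.zeros((5, 5), dtype=np.uint8)
b = a + 50                    # large change everywhere -> moving
c = b.copy()
c[1, 1] = 0                   # one unchanged window position -> stationary
print(is_moving_pixel(a, b, 2, 2, 20))  # True
print(is_moving_pixel(a, c, 2, 2, 20))  # False
```

Requiring the threshold to be exceeded at every window position makes the detector robust to isolated noisy pixels: a single noisy sample cannot mark a position as moving.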
  • the second noise reduction processing method shown in FIG. 8 may be used to filter out temporal noise and increase the sharpness of the image.
  • the second noise reduction processing method may include the following steps:
  • the average value refers to the average of the pixel signal; specifically, the gray value can be used.
  • steps 810 and 820 may be repeated until the gray values of all pixel positions are determined to obtain the final inter-frame processing result.
  • the interference of moving pixels is filtered out, which can better avoid the smear phenomenon that occurs during shooting.
  • the process of performing mean filtering on stationary pixels can also eliminate time-domain noise well.
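Steps 810–820 — averaging the reference pixel with all pixels classified as stationary while discarding moving ones — can be sketched as below; the function and argument names are our own, not the patent's.

```python
def temporal_output(reference_value, other_values, stationary_flags):
    """Average the reference frame's gray value with the gray values of
    the other frames that were classified stationary; moving pixels are
    excluded so that they cannot contribute smear."""
    kept = [reference_value]
    kept += [v for v, stationary in zip(other_values, stationary_flags)
             if stationary]
    return sum(kept) / len(kept)

# Y2 and Y3 are stationary, Y4 is moving and is ignored
print(temporal_output(100, [102, 98, 200], [True, True, False]))
# if every other frame is moving, the reference value passes through
print(temporal_output(100, [1, 2, 3], [False, False, False]))
```

Averaging only the stationary samples suppresses temporal noise (which is zero-mean across exposures) without blending in the displaced values that cause smear.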
  • each target image consists of M rows by N columns of pixels, each pixel being a gray value.
  • the selected target pixel is denoted Y1_{m,n}, where m and n indicate that the target pixel is in the m-th row and n-th column of the target image Y1.
  • with the frequency threshold S as the criterion, the differences K1, K2, K3, and K4 (the gray-value differences between the target pixel and its four cross-shaped neighbours) determine whether the frequency characteristic of the target pixel is low or high
  • only when K1, K2, K3, and K4 are all less than the frequency threshold S is the target pixel determined to be low frequency; otherwise, the target pixel is determined to be high frequency.
  • the value of S is 20.
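The low/high classification above — comparing the target pixel with its four cross neighbours against the threshold S = 20 — can be sketched as follows (function name and return labels are our own):

```python
def frequency_characteristic(image, m, n, S=20):
    """Compute K1..K4, the absolute gray differences between pixel
    (m, n) and its left, right, upper and lower neighbours; the pixel
    is low frequency only if all four differences are below S."""
    center = int(image[m][n])
    cross = [image[m][n - 1], image[m][n + 1],   # left, right
             image[m - 1][n], image[m + 1][n]]   # up, down
    return 'low' if all(abs(int(v) - center) < S for v in cross) else 'high'

flat = [[10] * 3 for _ in range(3)]                  # uniform background
edge = [[10, 10, 10], [10, 10, 100], [10, 10, 10]]  # strong edge nearby
print(frequency_characteristic(flat, 1, 1))  # low
print(frequency_characteristic(edge, 1, 1))  # high
```

This cross-shaped comparison is the cheap substitute for a Fourier analysis mentioned earlier: four subtractions per pixel instead of a full transform.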
  • Ys_{m,n} is the intra-frame processing result.
  • when a larger mean filter matrix is used for the low-frequency part, spatial-domain noise can be greatly reduced.
  • weighted mean filtering is performed on the high-frequency pixels using a 3-by-3 filter matrix. That is, the intra-frame processing result of the pixel is calculated by the following formula (3):
  • Ys_{m,n} is the intra-frame processing result.
  • the filter matrix used in weighted mean filtering has a large center value and small peripheral values, and its size is much smaller than that of the mean filter matrix used for the low-frequency part, so it better retains the texture information of the high-frequency part.
  • the selected pixel position is denoted Y_{m,n}, where the subscripts m and n denote the pixel in the m-th row and n-th column, and the preset sampling window L is a 3-by-3 rectangle.
  • the time-domain distances of the target image frame Y1 and the target image frame Y2 at all pixel positions of the preset sampling window are sequentially calculated.
  • the following formula (4) is used to determine whether the pixel at the selected pixel position is a stationary pixel or a moving pixel; the pixel is determined to be stationary only when all the conditions shown in formula (4) are met, otherwise it is a moving pixel.
  • T is the variation threshold, and Y2_{m,n} is the pixel of the image frame Y2 in the m-th row and n-th column.
  • the output result at the selected pixel position can be calculated by the following formula (5).
  • Yt_{m,n} is the output result at the selected pixel position.
  • when the pixels of Y2, Y3, and Y4 at the selected pixel position are all moving pixels, the gray value of the target image itself can be used directly as the output result.
  • the inter-frame processing results and intra-frame processing results of the target image can be output.
  • after the inter-frame processing result and the intra-frame processing result of the target image frame have been obtained, the two can be integrated according to formula (1) to obtain the final night-scene-effect-enhanced image.
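The final integration per formula (1), Y = (1 − a)·Ys + a·Yt, can be sketched as below; the weight a is a tuning parameter chosen by the technician, not a value fixed by the patent.

```python
def enhance(ys, yt, a):
    """Blend the intra-frame result Ys with the inter-frame result Yt
    per formula (1): Y = (1 - a) * Ys + a * Yt. A larger `a` favours
    temporal denoising (sharper edges, less smear); a smaller `a`
    favours spatial denoising (smoother background)."""
    return [(1.0 - a) * s + a * t for s, t in zip(ys, yt)]

# two sample pixel values, blended with a = 0.25
print(enhance([100, 200], [50, 100], a=0.25))  # [87.5, 175.0]
```

Because the two weights sum to 1, the blend never changes the overall brightness; it only trades spatial smoothing against temporal sharpness.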
  • the computer software may be stored in a computer-readable storage medium; when the program is executed, it may include the processes of the above method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention relate to an image processing method, a night shooting method, an image processing chip, and an aerial camera. The method includes: acquiring multiple frames of target images; performing first noise reduction processing on each of the multiple target image frames to obtain an intra-frame processing result; performing second noise reduction processing on any two adjacent frames of the multiple target image frames to obtain an inter-frame processing result; and forming a night-scene-effect-enhanced image of the target image according to the intra-frame and inter-frame processing results. Based on multiple consecutive target images, the method processes spatial-domain and temporal-domain noise separately, greatly improving shooting in low-brightness or dim environments and presenting the user with a sharp, low-noise night-scene-enhanced image.

Description

图像处理方法、夜间拍摄方法、图像处理芯片及航拍相机
本申请要求于2019年3月6日提交中国专利局、申请号为201910167837.8、申请名称为“图像处理方法、夜间拍摄方法、图像处理芯片及航拍相机”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,尤其涉及一种图像处理方法、夜间拍摄方法、图像处理芯片以及航拍相机。
背景技术
自从CMOS和CCD等将光信号采集转换为对应模拟电信号的图像传感器问世以来,数码相机由于具有数据存储便捷、体积小巧、拍摄参数容易调节控制等优势,受到了人们的欢迎,并广泛的应用于许多不同的领域,在人们生活中发挥着重要的作用。
但是,受到图像传感器自身物理结构和制作工艺的限制,在一些光线不足,亮度较低的环境下进行拍摄(例如夜间的户外、环境昏暗的礼堂)时,数码相机很难提供优秀的拍摄效果,容易在照片中出现拖影或者噪点过多等的问题。
提升光学镜头的性能或者增大图像传感器的尺寸是改善拍摄效果的可行方式。但是这些硬件上的提升需要以成本上升或者相机体积增大为代价,能够应用的场合会受到很大的限制。
发明内容
为了解决上述技术问题,本发明实施例提供一种能够改善数码相机拍摄效果的图像处理方法、夜间拍摄方法、图像处理芯片以及航拍相机。
为解决上述技术问题,本发明实施例提供以下技术方案:一种图像处理方法。该图像处理方法包括:
获取多帧目标图像;
对所述多帧目标图像中的每一帧目标图像均执行第一降噪处理,以获得所述多帧目标图像的帧内处理结果,其中,所述第一降噪处理用于滤除所述多帧目标图像中每一帧目标图像的空间域噪声;
对所述多帧目标图像中任意相邻两帧目标图像执行第二降噪处理,以获得 帧间处理结果,其中,所述第二降噪处理用于滤除所述多帧目标图像中任意相邻两帧目标图像之间的时间域噪声;
根据所述帧内处理结果和帧间处理结果,形成所述目标图像的夜景效果增强图像。
可选地,
所述对所述多帧目标图像中的每一帧目标图像均执行第一降噪处理，以获得所述帧内处理结果，包括：
依次检测所述多帧目标图像中每一帧目标图像中每个像素点在空间域上的频率特性;
根据所述每个像素点在空间域上的频率特性,使用与所述频率特性对应的图像滤波处理,以获得所述帧内处理结果。
可选地,
所述依次检测所述多帧目标图像中每一帧目标图像中每个像素点在空间域上的频率特性,包括:
分别计算选定的目标像素点和与所述目标像素点相邻的多个像素点之间的空域距离,其中,所述空域距离是指所述选定的目标像素点和与所述目标像素点相邻的像素点的灰度差;
根据所述空域距离和频率阈值,确定所述目标像素点的频率特性。
可选地,
所述频率特性包括低频和高频;
所述根据所述空域距离和频率阈值,确定所述目标像素点的频率特性,具体包括:
判断所述目标像素点和与所述目标像素点相邻的多个像素点之间的空域距离是否均小于所述频率阈值;
若是,则确定所述目标像素点的频率特性为低频;
若否,则确定所述目标像素点的频率特性为高频。
可选地,
所述根据所述每个像素点在空间域上的频率特性,使用与所述频率特性对应的图像滤波处理,包括:
在所述频率特性为低频的目标像素点上使用第一图像滤波处理,并且在所述频率特性为高频的目标像素点上使用平滑强度小于所述第一图像滤波处理的第二图像滤波处理;
其中,所述第一图像滤波处理为均值滤波,所述第二图像滤波处理为加权均值滤波。
可选地，所述第一图像滤波处理使用的第一滤波矩阵大于所述第二图像滤波处理使用的第二滤波矩阵。
可选地,所述第二图像滤波处理使用的第二滤波矩阵的中心值大于周边值。
可选地,所述与所述目标像素点相邻的像素点包括:与所述目标像素点在同一行,位于所述目标像素点左侧的左像素点和位于所述目标像素点右侧的右像素点;以及
与所述目标像素点在同一列,位于所述目标像素点上方的上像素点和位于所述目标像素点下方的下像素点。
可选地,
所述对所述多帧目标图像中任意相邻两帧目标图像执行第二降噪处理,以获得帧间处理结果,包括:
依次检测所述多帧目标图像在每个像素位置上的信号变化特性;
根据所述信号变化特性,计算每个像素点位置上的灰度值。
可选地,
所述依次检测所述多帧目标图像在每个像素位置上的信号变化特性,包括:
在选定的像素位置,依次计算所述多帧目标图像中,任意两帧目标图像之间的时域距离,其中,所述时域距离是指所述多帧目标图像中任意相邻两帧目标图像中位置对应的像素的灰度差;
根据所述时域距离,确定每一帧目标图像在所述选定的像素位置的信号变化特性。
可选地,所述信号变化特性包括静止像素和运动像素;则,
所述根据所述时域距离,确定每一帧目标图像在所述选定的像素位置的像 素点的信号变化特性,具体包括:
选定一帧目标图像;
依次计算所述选定的目标图像与另一帧目标图像在预设采样窗的每一个像素位置上的时域距离;所述预设采样窗的中心为所述选定的像素位置;
判断在所述预设采样窗内,所有像素位置上的时域距离是否均大于预设的阈值;
若是,则确定所述选定的目标图像在所述选定的像素位置的像素点为运动像素;
若否,则确定所述选定的目标图像在所述选定的像素位置的像素点为静止像素。
可选地,所述根据所述信号变化特性,计算每个像素点位置上的输出结果,具体包括:
计算在选定的像素位置上,所述多帧目标图像的像素点和所有静止像素的平均值;
确定所述平均值为所述选定的像素位置的灰度值。
可选地,所述根据所述帧内处理结果和帧间处理结果,形成所述目标图像帧的夜景拍摄效果增强图像,具体包括:
确定所述帧内处理结果和所述帧间处理结果的权重;
加权计算所述帧内处理结果和所述帧间处理结果,获得所述夜景效果增强图像。
可选地,通过如下算式加权计算所述帧内处理结果和所述帧间处理结果:
Y=(1-a)*Ys+a*Yt
其中,Y为图像处理结果;Ys为帧内处理结果,Yt为帧间处理结果,a为帧间处理结果的权重。
为解决上述技术问题,本发明实施例还提供以下技术方案:一种夜间拍摄方法。所述夜间拍摄方法包括:
接收拍摄触发指令;
以预设的速度,连续采集两帧或以上的目标图像;
对多帧所述目标图像执行如上所述的图像处理方法,获得夜景效果增强图 像。
为解决上述技术问题,本发明实施例还提供以下技术方案:一种图像处理芯片。
所述图像处理芯片包括:处理器以及与所述处理器通信连接的存储器;所述存储器中存储有计算机程序指令,所述计算机程序指令在被所述处理器调用时,以使所述处理器执行如上所述的图像处理方法。
为解决上述技术问题,本发明实施例还提供以下技术方案:一种航拍相机。所述航拍相机包括:
图像传感器，所述图像传感器用于以设定的拍摄参数，采集多帧图像；控制器；所述控制器与所述图像传感器连接，用于触发所述图像传感器以设定的速度连续曝光以采集至少两帧图像；图像处理器，所述图像处理器用于接收所述图像传感器通过连续曝光采集的至少两帧图像，并对接收到的所述至少两帧图像执行如上所述的图像处理方法，获得夜景效果增强图像；存储设备，所述存储设备与所述图像处理器连接，用于存储所述夜景效果增强图像。
可选地,所述航拍相机还包括亮度传感器;所述亮度传感器用于感知当前的环境亮度并提供所述环境亮度至所述控制器;所述控制器用于在所述环境亮度低于设定值时,触发所述图像传感器以设定的速度连续曝光以采集至少两帧图像帧。
与现有技术相比较,本发明实施例的图像处理方法以多个连续的图像帧为基础,分别对空间域和时间域的噪声进行处理,可以达到较好的降噪效果,尤其是能够极大的提升在低亮度或者昏暗环境下的拍摄效果,为用户呈现锐利并且低噪声的图像。
附图说明
一个或多个实施例通过与之对应的附图中的图片进行示例性说明,这些示例性说明并不构成对实施例的限定,附图中具有相同参考数字标号的元件表示为类似的元件,除非有特别申明,附图中的图不构成比例限制。
图1为本发明实施例的应用场景的示意图;
图2为本发明实施例提供的航拍相机的结构框图;
图3为本发明实施例提供的图像处理芯片的结构示意图;
图4为本发明实施例提供的图像处理方法的方法流程图;
图5为本发明实施例提供的频率特性检测方法的方法流程图;
图6为本发明实施例提供的图像帧处理的示意图;
图7为本发明实施例提供的信号变化特性的方法流程图;
图8为本发明实施例提供的第二降噪处理的方法流程图。
具体实施方式
为了便于理解本发明,下面结合附图和具体实施例,对本发明进行更详细的说明。需要说明的是,当元件被表述“固定于”另一个元件,它可以直接在另一个元件上、或者其间可以存在一个或多个居中的元件。当一个元件被表述“连接”另一个元件,它可以是直接连接到另一个元件、或者其间可以存在一个或多个居中的元件。本说明书所使用的术语“上”、“下”、“内”、“外”、“底部”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本发明和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本发明的限制。此外,术语“第一”、“第二”“第三”等仅用于描述目的,而不能理解为指示或暗示相对重要性。
除非另有定义,本说明书所使用的所有的技术和科学术语与属于本发明的技术领域的技术人员通常理解的含义相同。本说明书中在本发明的说明书中所使用的术语只是为了描述具体的实施例的目的,不是用于限制本发明。本说明书所使用的术语“和/或”包括一个或多个相关的所列项目的任意的和所有的组合。
此外,下面所描述的本发明不同实施例中所涉及的技术特征只要彼此之间未构成冲突就可以相互结合。
在相机拍摄照片时,通过相机的光学组件进入到感光元件的通光量是一项非常重要的指标。因此,在外部环境光线不足(如夜景拍摄)的情况下进行拍摄时,通常会采用延长曝光时间和调高感光元件的灵敏度的方式来应对外部环境光线不足的情况。
在相机中应用本发明实施例提供的拍摄方法可以有效的减缓拍摄图像出现拖影和高ISO引发的噪点过多的问题,以增强昏暗环境下的图像拍摄效果。
图1为本发明实施例提供的航拍相机的应用场景。如图1所示,在该应用 场景中,包括了搭载了航拍相机的无人机10、智能终端20以及无线网络30。
无人机10可以是以任何类型的动力驱动的无人飞行载具,包括但不限于四轴无人机、固定翼飞行器以及直升机模型等。其可以根据实际情况的需要,具备相应的体积或者动力,从而提供能够满足使用需要的载重能力、飞行速度以及飞行续航里程等。
航拍相机可以是任何类型的图像采集设备,包括运动相机、高清相机或者广角相机。其航拍相机作为无人机上搭载的其中一种功能模块,可以通过云台等安装固定支架,安装固定在无人机上,并受控于无人机10,执行图像采集的任务。
当然,无人机上还可以添加有一种或者多种功能模块,令无人机能够实现相应的功能,例如内置的主控芯片,作为无人机飞行和数据传输等的控制核心或者是图传装置,将采集获得的图像信息上传至与无人机建立连接的设备中。
智能终端20可以是任何类型,用以与无人机建立通信连接的智能设备,例如手机、平板电脑或者智能遥控器等。该智能终端20可以装配有一种或者多种不同的用户交互装置,用以采集用户指令或者向用户展示和反馈信息。
这些交互装置包括但不限于:按键、显示屏、触摸屏、扬声器以及遥控操作杆。例如,智能终端20可以装配有触控显示屏,通过该触控显示屏接收用户对无人机的遥控指令并通过触控显示屏向用户展示由航拍相机获得的图像信息,用户还可以通过遥控触摸屏切换显示屏当前显示的图像信息。
在一些实施例中,无人机10与智能终端20之间还可以融合现有的图像视觉处理技术,进一步的提供更智能化的服务。例如无人机10可以通过航拍相机采集图像,然后由智能终端20对图像中的操作手势进行解析,最终实现用户对于无人机10的手势控制。
无线网络30可以是基于任何类型的数据传输原理,用于建立两个节点之间的数据传输信道的无线通信网络,例如位于特定信号频段的蓝牙网络、WiFi网络、无线蜂窝网络或者其结合。
图2为本发明实施例提供的航拍相机11的结构框图。如图2所示,该航拍相机11可以包括:图像传感器111,控制器112,图像处理器113以及存储设备114。
其中,图像传感器111是用于以设定的拍摄参数,采集图像帧的功能模组。其通过镜头和相关光学组件将视觉画面对应的光信号投射到感光元件上,并由感光元件将光信号转换为相应的电信号。
该拍摄参数是图像传感器111在图像采集过程中,与镜头和相关光学组件结构(如快门)相关,可以调整的参数变量,例如光圈、焦距或者曝光时间等。而目标图像是指图像传感器111通过一次曝光采集获得图像数据。
控制器112是图像传感器111的控制核心。其与所述图像传感器连接,可以根据接收到的指令,相应的控制图像传感器111的拍摄行为,例如设定图像传感器111的一个或者多个拍摄参数。
在合适的触发条件下,控制器112可以触发所述图像传感器以设定的速度连续曝光以采集至少两帧目标图像。该设定的速度是一个人为设定的常数值,其可以是技术人员预先设定的默认值,也可以是用户在使用过程中,根据实际情况的需要设定的数值。
图像处理器113是用于图像效果增强的功能模组。其可以接收所述图像传感器通过连续曝光采集的至少两帧目标图像并对其执行相应的图像处理方法,获得夜景效果增强图像。
该夜景效果增强图像是图像处理器113基于采集获得的多帧目标图像,进行图像处理后整合得到输出结果,可以具有比原始采集的目标图像更好的锐度,更低的噪声等。
存储设备114是用于存储航拍相机11在使用过程中产生的数据信息,如夜景效果增强图像的设备。其具体可以采用任何类型的,具有合适容量的非易失性存储器,如SD卡、闪存或者固态硬盘等。
在一些实施例中,存储设备114还可以是可拆卸结构或者是分布式布置的结构。航拍相机可以仅设置有数据接口,将夜景效果增强图像等的数据通过该数据接口传递到相应的设备中进行存储。
应当说明的是,图2所示的航拍相机11的一个或者多个功能模组(如控制器、图像处理器和存储设备)也可以整合到无人机10中,作为无人机10的其中一部分。图2中仅基于图像采集的过程对所述航拍相机11的功能模块进行示例性描述,而不用于限制航拍相机11所具有的功能模组。
图3为本发明实施例提供的图像处理芯片的结构框图。该图像处理芯片可以用于实现图像处理器和/或控制器中的全部或者部分的功能。如图3所示,该图像处理芯片100可以包括:处理器110以及存储器120。
所述处理器110以及存储器120之间通过总线的方式,建立任意两者之间的通信连接。
处理器110可以为任何类型,具备一个或者多个处理核心的处理器。其可以执行单线程或者多线程的操作,用于解析指令以执行获取数据、执行逻辑运算功能以及下发运算处理结果等操作。
存储器120作为一种非易失性计算机可读存储介质,例如至少一个磁盘存储器件、闪存器件、相对于处理器110远程设置的分布式存储设备或者其他非易失性固态存储器件。
存储器120可以具有程序存储区,用于存储非易失性软件程序、非易失性计算机可执行程序以及模块,供处理器110调用以使处理器110执行一个或者多个方法步骤。存储器120还可以具有数据存储区,用以存储处理器110下发输出的运算处理结果。
请继续参阅图1,在实际使用过程中,无人机10搭载的航拍相机根据不同的拍摄环境,具体的使用情况可以采用不同的拍摄模式。
一方面,当环境亮度正常,通光量充足的情况下,无人机10搭载的航拍相机可以根据用户在智能终端20发出的拍摄指令,开始进行拍摄动作,并反馈拍摄获得的目标图像。
另一方面,当无人机10在夜间户外等昏暗环境下进行拍摄任务时,受限于环境光线的问题,在接收到拍摄指令后,航拍相机通常需要使用较长的曝光时间和高的感光灵敏度来进行拍摄。
这样的,原始拍摄获得的图像容易因长曝光时间而出现拖影以及使用高感光度而引发大量背景噪声。
由此,在昏暗环境下拍摄时,航拍相机可以采用本发明实施例提供的图像处理方法以避免拖影和噪点的问题。
控制器可以触发图像传感器连续曝光,采集多帧目标图像并提供给图像处理器进行综合处理。最终经过图像处理器执行的图像处理方法以后,获得图像 背景干净、物体轮廓锐利清晰的夜景效果增强图像。
这些夜景效果增强图像可以提供给智能终端20进行存储或者向用户展示,同时也可以被航拍相机11自身的存储设备存储。
在较佳的实施例中,请继续参阅图2,为了适时的执行该图像处理方法。该航拍相机还包括亮度传感器115。
该亮度传感器115可以设置在航拍相机的外部,用于感知当前的环境亮度。控制器112可以根据当前的环境亮度,确定是否需要执行图像处理方法。例如,在所述环境亮度低于设定值时,执行图像处理方法。
在另一些实施例中,控制器112也可以采用提示信息的方式,在所述环境亮度低于设定值时,在智能终端显示提示信息,提示用户需要执行图像处理方法。
虽然,图1所示的应用场景中以图像处理方法应用在无人机搭载的航拍相机为例。但是,本领域技术人员可以理解的是,该图像处理方法还可以其它类型的图像采集装置使用,以提高图像采集装置在昏暗环境下的拍摄效果。本发明实施例公开图像处理方法并不限于在图1所示的无人机上应用。
图4为本发明实施例提供的图像处理方法的方法流程图。如图4所示,该图像处理方法包括如下步骤:
410、获取多帧目标图像。
获取的各帧图像之间具有时间线性关系,可以按时间排列为一个图像序列。
具体的,其可以由图像传感器以连续曝光的方式获得的一系列图像。连续曝光的次数和间隔时间是人为设定的数值,可以由技术人员或者用户根据实际情况的需要所设定。
420、对所述多帧目标图像中的每一帧目标图像均执行第一降噪处理,获得帧内处理结果。
该第一降噪处理是指在图像空间域上,对各个像素点进行的一个或者多个运算步骤的集合。其处理的效果在于降低目标图像的空间域噪声(亦即图1所示的应用场景中的背景噪点)。
由于在这一降噪过程中,仅参考使用了目标图像帧自身,没有考虑其它的 目标图像帧。因此,在此使用“帧内处理结果”来表示目标图像帧经过第一降噪处理后的输出。
430、对所述多帧目标图像中任意相邻两帧目标图像执行第二降噪处理,以获得帧间处理结果。
该第二降噪处理是指根据相邻目标图像的信号变化,对各个像素点进行的一个或者多个运算步骤的集合。其处理的目的在于尽可能的降低长曝光时间所导致的拖影现象。而拖影是由于图像信号在不同时刻的变化过大所导致的。因此,拖影也可以被认为是时间域噪声。
由于在第二降噪过程中,滤除的是时间域噪声,需要参考使用目标图像帧与相邻图像帧之间的关系。因此,在此使用“帧间处理结果”来表示目标图像帧经过第二降噪处理后的输出。
440、根据所述帧内处理结果和帧间处理结果,形成所述目标图像的夜景效果增强图像。
基于两种不同的降噪目的获得的帧内处理结果和帧间处理结果最终还需要按照设定好的方式结合到一起,形成最终的夜景效果增强图像输出。
“结合”是综合考虑帧内处理结果和帧间处理结果,基于两个输入数据计算确定一个输出的处理过程。其具体可以根据实际应用的需要或者拍摄环境的特点,采用任何类型的模型或者函数计算。
在一些实施例中，可以采用加权计算的方式。首先，确定所述帧内处理结果和所述帧间处理结果的权重。然后，加权计算所述帧内处理结果和所述帧间处理结果，获得所述夜景效果增强图像。
具体的,可以通过如下所示的算式(1)计算获得图像处理结果:
Y=(1-a)*Ys+a*Yt          (1)
其中,Y为输出;Ys为帧内处理结果,Yt为帧间处理结果,a为帧间处理结果的权重。
在本实施例中,所述帧内处理结果和所述帧间处理结果的权重之和为1。技术人员可以通过调节数值a的取值,相应的改变图像处理结果对于时间域噪声和空间域噪声的降噪强度以适应不同的拍摄环境,输出相应的夜景效果增强图像。
例如,当数值a提高时,可以提高对于时间域噪声的降噪强度,令图像处理结果更锐利,边缘更清晰。而当数值a降低时,可以提高对于空间域噪声的降噪强度,令图像处理结果背景的噪点可以被平滑消除。
本发明实施例提供的图像处理方法分别使用了第一降噪处理和第二降噪处理,同时考虑到了空间域噪声和时间域噪声的消除,可以对昏暗环境拍摄的图像帧取得较好的降噪效果。
在一些实施例中,为了提高第一降噪处理的效果,还可以根据每个像素点的特点,适应性的调整具体的采用的降噪方法以获得更好的帧内处理结果。
首先,依次检测所述目标图像帧的每个像素点在空间域上的频率特性。然后,对每个所述像素点使用与所述频率特性对应的图像滤波处理。
在空间域上的频率特性是指该像素点相对于周边相邻的像素点之间的信号变化频率的大小。该频率特性与图像画面相关,是图像处理中比较重要的参数。例如,在图像边缘部分的像素点会具有较高的信号变化频率,而图像的背景部分则具有较低的信号变化频率。
典型的降噪处理都是以像素点为单位的图像滤波过程。不同的图像滤波处理基于矩阵尺寸或者权重分配等的区别,都具有对应的特点。
以像素点的频率特性为基准,选择为其分配对应的图像滤波处理可以提供自适应的能力,令第一降噪处理能够根据目标图像帧在不同部分的画面而相应作出调整,从而有效的提升了降噪处理的效果。
惯常可以通过傅里叶变换的方式,获得目标图像帧中每个像素点具体的信号变化频率。但是傅里叶变换是复杂的计算过程,需要耗费大量的计算资源。由此,在较佳的实施例中,还可以使用图5所示的检测方法来简单的确定像素点的频率特性,以降低计算量。
如图5所示,该检测方法可以包括如下步骤:
510、选定目标图像帧中的目标像素点。
具体可以采用任何方式或者遵循任何规则,从目标图像帧中选定目标像素点,只需要保证已经确定频率特性的像素点不会被重复选定即可。
520、分别计算选定的目标像素点和与所述目标像素点相邻的多个像素点之间的空域距离。
空域距离是指在同一个图像帧中的两个像素点之间的信号差别。具体的,在预处理为亮度图像后的图像帧中,空域距离可以是指所述选定的目标像素点和与所述目标像素点相邻的像素点的灰度差。
根据检测时使用的取样模板的不同,相邻像素点的数量也会发生相应的改变。具体的,如图6所示,该取样模板M可以呈十字形,每次采样的相邻像素点包括:与所述目标像素点在同一行,分别位于所述目标像素点左侧和右侧的左像素点L、右像素点R以及与所述目标像素点在同一列,分别位于所述目标像素点上方和下方的上像素点T、下像素点B。
530、判断所述目标像素点与相邻的多个像素点之间的空域距离是否均小于所述频率阈值。若是,执行步骤540,若否,执行步骤550。
“频率阈值”是一个用于划定高频与低频部分的标准,既可以由技术人员根据实际情况的需要而设置,也可以是跟随情况变化而自适应调整的数值。
540、确定所述目标像素点的频率特性为低频。低频是一个相对性概念,其表明当前检测的像素点在整体的图像帧中属于相对低频的部分。
550、确定所述目标像素点的频率特性为高频。高频是一个相对性概念,其表明当前检测的像素点在整体的图像帧中属于相对高频的部分。
560、判断目标图像帧所有像素点的频率特性是否均被确定。若否,返回步骤510,若是,结束检测。
整个检测过程以像素点为单位逐次进行,直至所有的像素点均被遍历一次以后结束。
虽然在图5所示的频率特性检测方法中以二分法(即根据一个频率阈值划分为两种不同的频率特性)为例进行描述。但是本领域技术人员可以理解的是,还可以根据已知的频率阈值和空域距离,使用其它不同类型的判断方式来确定频率特性的种类数量和像素点所属的频率特性,例如根据两个频率阈值,划分为三种不同的频率特性。
基于高频和低频的划分结果,在一些实施例中,可以在低频上使用第一图像滤波处理,并且在高频上使用平滑强度小于所述第一图像滤波处理的第二图像滤波处理。
其中,所述第一图像滤波处理为均值滤波,所述第二图像滤波处理为加权 均值滤波。
均值滤波是以滤波矩阵为基础进行的窗处理过程。如图6所示,其大致可以包括如下的几个步骤:取样窗首先选取与滤波矩阵大小相当的像素区域,然后令每个像素点的信号与滤波矩阵中对应位置的元素相乘。最后,叠加所有像素点的信号计算结果作为目标像素点的信号。
可以理解的,滤波矩阵越大,包含的元素越多,其对应的平滑强度就越大,令像素点的信号趋向于平均。另外,滤波矩阵中的元素可以取不同的值,从而在平滑时为不同的像素点赋予不同的权重,以突出某部分的像素点,使用这样的滤波矩阵时,也可以被称为“加权均值滤波”。
如上实施例所记载的,低频大概率可以被认为是目标图像帧中的背景部分,而背景部分由是在高感光度拍摄时最容易出现噪点的区域。因此,对于低频部分可以选择使用平滑强度非常大的均值滤波进行降噪处理,以尽可能的消除噪点。
具体的,为了保证足够的平滑强度,所述第一图像滤波处理使用的第一滤波矩阵大于所述第二图像滤波处理使用的第二滤波矩阵以取得更强的平滑效果,尽可能的消除噪点。
而高频则应当被归为目标图像帧中的前景部分,包含了用户想要观察的拍摄目标。因此,总是希望尽可能多的保留这些高频部分的纹理信息以确保图像的清晰度。
具体的,为了保留高频的纹理信息,所述第二图像滤波处理使用的第二滤波矩阵的中心值大于周边值。亦即,滤波矩阵中位于中心的元素取值较大,而位于边缘的元素取值较少,从而更好的凸出位于滤波矩阵中心的像素点的信号特点。
这样的第一降噪处理方法将目标图像中高频和低频分别进行不同的滤波处理,各有侧重,可以获得更优秀的降噪效果。
在另一些实施例中,第二降噪处理可以以像素点位置为处理单元,依次检测所述多帧目标图像在每个像素位置上的信号变化特性。然后,根据所述信号变化特性,计算每个像素点位置上的输出结果。
信号变化特性是衡量目标图像与相邻的目标图像之间的信号变化幅度大小的指标。基于相机采集目标图像的方式,技术人员可以理解,该信号变化特 性表示了该像素位置上的拖影现象,可以作为计算的基础来去除时间域噪声。
图7为本发明实施例提供的检测信号变化特性的方法流程图。如图7所示,该方法包括如下步骤:
710、在目标图像帧中选定一个像素位置。
具体可以采用任何方式或者遵循任何规则,从目标图像中依次序的选定一个像素位置,只需要保证目标图像中的每一个像素位置均能够被遍历即可。
720、在选定的像素位置,依次计算所述多帧目标图像中,任意两帧目标图像之间的时域距离。
所述时域距离是指所述多帧目标图像中任意相邻两帧目标图像中,选定位置的灰度差。
具体的,计算所述时域距离时,采用预设采样窗的方式来进行计算。预设采样窗是一个设定尺寸,包含了多个像素位置的区域,用以采集该区域范围内的像素点信号。该预设采样窗以所述选定的像素位置为中心,向外扩展而形成。
时域距离表明了在多次曝光过程中,相邻目标图像之间随时间的信号变化。
730、判断在所述预设采样窗内,所有像素位置上的时域距离是否均大于预设的阈值。若是,执行步骤740,若否,执行步骤750。
该预设的“阈值”是一个用于划定运动和静止的标准,既可以由技术人员根据实际情况的需要而设置,也可以是跟随情况变化而自适应调整的数值。
740、确定所述目标图像在所述选定的像素位置的像素点为运动像素。运动像素是一个相对性概念,其表明在该像素位置的信号变化过大,很有可能出现拖影的现象。
750、确定所述目标图像在所述选定的像素位置的像素点为静止像素。
静止像素也是一个相对性概念,其表明在该像素位置的信号变化比较小,这个位置基本不可能出现拖影。
760、判断在所有像素位置的信号变化特性是否均被确定，若否，返回步骤710；若是，结束信号变化特性的检测。
虽然,在图7所示的实施例中,每个像素位置的信号变化特性根据变化幅度的大小,被简单的划分为运动像素和静止像素两种。但本领域技术人员可以 理解的是,还可以使用其它不同类型的判断方式来确定具体的信号变化特性。
基于运动像素和静止像素的划分结果,在一些实施例中,可以采用图8所示的第二降噪处理方法来实现对时域噪声的滤除,增加图像的锐度。如图8所示,该第二降噪处理方法可以包括如下步骤:
810、计算在选定的像素位置上,所述目标图像的像素点和所有静止像素的平均值。该平均值是指像素点的信号平均值,具体的,可以选择使用灰度值。
820、确定所述平均值为所述选定的像素位置的灰度值。
在一个像素位置的灰度值被确定以后,还可以重复执行步骤810和820直至所有像素位置的灰度值均被确定以获得最终的帧间处理结果。
在本实施例进行第二降噪处理过程中,筛除运动像素的干扰,可以较好的规避拍摄时出现的拖影现象。另外,对静止像素进行均值滤波的过程也可以很好的消除时域噪声。
以下结合具体实例,详细描述本发明实施例揭露的图像处理方法的执行过程。假设在接收到拍摄指令以后,控制器控制图像传感器连续曝光,依次采集获得如图6所示的4帧目标图像Y1,Y2,Y3以及Y4,每个目标图像由M行乘N列的像素点组成,像素点为灰度值。
1)第一降噪处理的过程：选定的目标像素点通过Y1_{m,n}表示，其中m和n表示目标像素点处于目标图像Y1的第m行和第n列。
首先通过十字形的检测窗D,获取4个相邻像素点并计算各个相邻像素点与位于中心的目标像素点之间的灰度值的差K1,K2,K3和K4。
以频率阈值S为判断标准,根据K1,K2,K3和K4确定目标像素点的频率特性为低频还是高频,只有当目标像素点同时满足K1小于频率阈值S,K2小于频率阈值S,K3小于频率阈值S以及K4小于频率阈值S时,确定该目标像素点为低频。否则,确定目标像素点为高频。具体的,S的取值为20。
然后,对属于低频的像素点使用5乘5的滤波矩阵
$$\frac{1}{25}\begin{bmatrix}1&1&1&1&1\\1&1&1&1&1\\1&1&1&1&1\\1&1&1&1&1\\1&1&1&1&1\end{bmatrix}$$
进行均值滤波。亦即,通过如下算式(2)计算获得该像素点的帧内处理结果:
$$Ys_{m,n}=\frac{1}{25}\sum_{i=-2}^{2}\sum_{j=-2}^{2}Y1_{m+i,\,n+j}\qquad(2)$$
其中，Ys_{m,n}为帧内处理结果。对于低频部分采用较大尺寸的均值滤波矩阵时，可以实现大幅度的降低空间域噪声的效果。
最后,对属于高频的像素点3乘3的滤波矩阵
Figure PCTCN2020077821-appb-000003
进行加权均值滤波。亦即,通过如下算式(3)计算获得该像素点的帧内处理结果:
Figure PCTCN2020077821-appb-000004
其中，Ys_{m,n}为帧内处理结果。加权均值滤波使用的滤波矩阵具有中心值大，周边值小的特点，而且矩阵尺寸远小于低频部分使用的均值滤波矩阵，可以更好的保留高频部分的纹理信息。
2)第二降噪处理的过程：在本实施例中，选定的像素位置由Y_{m,n}表示，下标m和n分别表示第m行和第n列的像素，预设采样窗L为3乘3的矩形。
首先,根据选定的像素位置,依次计算目标图像帧Y1和目标图像帧Y2在预设采样窗的所有像素位置上的时域距离。
然后,通过如下算式(4)判断在选定像素位置的像素点为静止像素还是运动像素。只有在算式(4)显示的所有条件均符合时,确定该像素点为静止像素,否则为运动像素。
$$\left|Y1_{m+i,\,n+j}-Y2_{m+i,\,n+j}\right|\le T,\quad -1\le i\le 1,\; -1\le j\le 1\qquad(4)$$
其中，T为变动阈值，Y2_{m,n}为图像帧Y2在第m行，第n列的像素点。
最后,筛除所有的运动像素,计算所述目标图像帧在选定像素位置的像素点和所有静止像素的灰度平均值作为选定像素位置的帧间处理结果。
例如,当Y2,Y3以及Y4在选定像素位置的像素点都属于静止像素时,可以通过如下算式(5)计算在选定像素位置的输出结果。
$$Yt_{m,n}=\frac{Y1_{m,n}+Y2_{m,n}+Y3_{m,n}+Y4_{m,n}}{4}\qquad(5)$$
其中，Yt_{m,n}为选定像素位置的输出结果。
而在Y2,Y3以及Y4在选定像素位置的像素点都属于运动像素时,可以直接将目标图像的灰度值作为输出结果。
使用上述第一降噪处理和第二降噪处理的方法遍历目标图像所有的像素点或其像素位置以后,便可以输出获得目标图像的帧间处理结果和帧内处理结果。
3)帧间处理结果和帧内处理结果的结合:
在获得目标图像帧的帧间处理结果和帧内处理结果后,可以按照算式(1)的方式,对两者进行整合以得到最终的夜景效果增强图像。
本领域技术人员应该还可以进一步意识到,结合本文中所公开的实施例描述的示例性的双光图像整合方法的各个步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。
本领域技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。所述的计算机软件可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体或随机存储记忆体等。
最后应说明的是:以上实施例仅用以说明本发明的技术方案,而非对其限制;在本发明的思路下,以上实施例或者不同实施例中的技术特征之间也可以进行组合,步骤可以以任意顺序实现,并存在如上所述的本发明的不同方面的许多其它变化,为了简明,它们没有在细节中提供;尽管参照前述实施例对本 发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (18)

  1. 一种图像处理方法,用于航拍相机,其特征在于,包括:
    获取多帧目标图像;
    对所述多帧目标图像中的每一帧目标图像均执行第一降噪处理,以获得所述多帧目标图像的帧内处理结果,其中,所述第一降噪处理用于滤除所述多帧目标图像中每一帧目标图像的空间域噪声;
    对所述多帧目标图像中任意相邻两帧目标图像执行第二降噪处理,以获得帧间处理结果,其中,所述第二降噪处理用于滤除所述多帧目标图像中任意相邻两帧目标图像之间的时间域噪声;
    根据所述帧内处理结果和帧间处理结果,形成所述目标图像的夜景效果增强图像。
  2. 根据权利要求1所述的方法，其特征在于，所述对所述多帧目标图像中的每一帧目标图像均执行第一降噪处理，以获得所述帧内处理结果，包括：
    依次检测所述多帧目标图像中每一帧目标图像中每个像素点在空间域上的频率特性;
    根据所述每个像素点在空间域上的频率特性,使用与所述频率特性对应的图像滤波处理,以获得所述帧内处理结果。
  3. 根据权利要求2所述的方法,其特征在于,所述依次检测所述多帧目标图像中每一帧目标图像中每个像素点在空间域上的频率特性,包括:
    分别计算选定的目标像素点和与所述目标像素点相邻的多个像素点之间的空域距离,其中,所述空域距离是指所述选定的目标像素点和与所述目标像素点相邻的像素点的灰度差;
    根据所述空域距离和频率阈值,确定所述目标像素点的频率特性。
  4. 根据权利要求3所述的方法,其特征在于,所述频率特性包括低频和高频;则,
    所述根据所述空域距离和频率阈值,确定所述目标像素点的频率特性,具体包括:
    判断所述目标像素点和与所述目标像素点相邻的多个像素点之间的空域 距离是否均小于所述频率阈值;
    若是,则确定所述目标像素点的频率特性为低频;
    若否,则确定所述目标像素点的频率特性为高频。
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述每个像素点在空间域上的频率特性,使用与所述频率特性对应的图像滤波处理,包括:
    在所述频率特性为低频的目标像素点上使用第一图像滤波处理,并且在所述频率特性为高频的目标像素点上使用平滑强度小于所述第一图像滤波处理的第二图像滤波处理;
    其中,所述第一图像滤波处理为均值滤波,所述第二图像滤波处理为加权均值滤波。
  6. 根据权利要求5所述的方法，其特征在于，所述第一图像滤波处理使用的第一滤波矩阵大于所述第二图像滤波处理使用的第二滤波矩阵。
  7. 根据权利要求5或6所述的方法,其特征在于,所述第二图像滤波处理使用的第二滤波矩阵的中心值大于周边值。
  8. 根据权利要求3-7中任一项所述的方法,其特征在于,所述与所述目标像素点相邻的像素点包括:与所述目标像素点在同一行,位于所述目标像素点左侧的左像素点和位于所述目标像素点右侧的右像素点;以及
    与所述目标像素点在同一列,位于所述目标像素点上方的上像素点和位于所述目标像素点下方的下像素点。
  9. 根据权利要求1-8中任一项所述的方法,其特征在于,所述对所述多帧目标图像中任意相邻两帧目标图像执行第二降噪处理,以获得帧间处理结果,包括:
    依次检测所述多帧目标图像在每个像素位置上的信号变化特性;
    根据所述信号变化特性,计算每个像素点位置上的灰度值。
  10. 根据权利要求9所述的方法,其特征在于,所述依次检测所述多帧目标图像在每个像素位置上的信号变化特性,包括:
    在选定的像素位置,依次计算所述多帧目标图像中,任意两帧目标图像之间的时域距离,其中,所述时域距离是指所述多帧目标图像中任意相邻两帧目标图像中位置对应的像素的灰度差;
    根据所述时域距离,确定每一帧目标图像在所述选定的像素位置的信号变化特性。
  11. 根据权利要求10所述的方法,其特征在于,所述信号变化特性包括静止像素和运动像素;则,
    所述根据所述时域距离,确定每一帧目标图像在所述选定的像素位置的像素点的信号变化特性,具体包括:
    选定一帧目标图像;
    依次计算所述选定的目标图像与另一帧目标图像在预设采样窗的每一个像素位置上的时域距离;所述预设采样窗的中心为所述选定的像素位置;
    判断在所述预设采样窗内,所有像素位置上的时域距离是否均大于预设的阈值;
    若是,则确定所述选定的目标图像在所述选定的像素位置的像素点为运动像素;
    若否,则确定所述选定的目标图像在所述选定的像素位置的像素点为静止像素。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述信号变化特性,计算每个像素点位置上的输出结果,具体包括:
    计算在选定的像素位置上,所述多帧目标图像的像素点和所有静止像素的平均值;
    确定所述平均值为所述选定的像素位置的灰度值。
  13. 根据权利要求1-12任一项所述的方法,其特征在于,所述根据所述帧内处理结果和帧间处理结果,形成所述目标图像帧的夜景拍摄效果增强图像,具体包括:
    确定所述帧内处理结果和所述帧间处理结果的权重;
    加权计算所述帧内处理结果和所述帧间处理结果,获得所述夜景效果增强图像。
  14. 根据权利要求13所述的方法,其特征在于,通过如下算式加权计算所述帧内处理结果和所述帧间处理结果:
    Y=(1-a)*Ys+a*Yt
    其中,Y为图像处理结果;Ys为帧内处理结果,Yt为帧间处理结果,a为帧间处理结果的权重。
  15. 一种夜间拍摄方法,其特征在于,包括:
    接收拍摄触发指令;
    以预设的速度,连续采集两帧或以上的目标图像;
    对多帧所述目标图像执行如权利要求1-14任一项所述的图像处理方法,获得夜景效果增强图像。
  16. 一种图像处理芯片,其特征在于,包括:处理器以及与所述处理器通信连接的存储器;
    所述存储器中存储有计算机程序指令,所述计算机程序指令在被所述处理器调用时,以使所述处理器执行如权利要求1-14任一项所述的图像处理方法。
  17. 一种航拍相机,其特征在于,包括:
    图像传感器,所述图像传感器用于以设定的拍摄参数,采集多帧图像;
    控制器;所述控制器与所述图像传感器连接,用于触发所述图像传感器以设定的速度连续曝光以采集至少两帧图像;
    图像处理器,所述图像处理器用于接收所述图像传感器通过连续曝光采集的至少两帧图像,并对接收到的所述至少两帧图像执行如权利要求1-14任一项所述的图像处理方法,获得夜景效果增强图像;
    存储设备,所述存储设备与所述图像处理器连接,用于存储所述夜景效果增强图像帧。
  18. 根据权利要求17所述的航拍相机,其特征在于,所述航拍相机还包括亮度传感器;所述亮度传感器用于感知当前的环境亮度并提供所述环境亮度至所述控制器。
PCT/CN2020/077821 2019-03-06 2020-03-04 图像处理方法、夜间拍摄方法、图像处理芯片及航拍相机 WO2020177723A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/467,366 US20210400172A1 (en) 2019-03-06 2021-09-06 Imaging processing method and apparatus for a camera module in a night scene, an electronic device, and a storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910167837.8A CN109873953A (zh) 2019-03-06 2019-03-06 图像处理方法、夜间拍摄方法、图像处理芯片及航拍相机
CN201910167837.8 2019-03-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/467,366 Continuation US20210400172A1 (en) 2019-03-06 2021-09-06 Imaging processing method and apparatus for a camera module in a night scene, an electronic device, and a storage medium

Publications (1)

Publication Number Publication Date
WO2020177723A1 true WO2020177723A1 (zh) 2020-09-10

Family

ID=66919908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/077821 WO2020177723A1 (zh) 2019-03-06 2020-03-04 图像处理方法、夜间拍摄方法、图像处理芯片及航拍相机

Country Status (3)

Country Link
US (1) US20210400172A1 (zh)
CN (1) CN109873953A (zh)
WO (1) WO2020177723A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115188091A (zh) * 2022-07-13 2022-10-14 国网江苏省电力有限公司泰州供电分公司 一种融合电力输变配设备的无人机网格化巡检系统及方法
CN115314627A (zh) * 2021-05-08 2022-11-08 杭州海康威视数字技术股份有限公司 一种图像处理方法、系统及摄像机

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109873953A (zh) * 2019-03-06 2019-06-11 深圳市道通智能航空技术有限公司 图像处理方法、夜间拍摄方法、图像处理芯片及航拍相机
CN112929558B (zh) * 2019-12-06 2023-03-28 荣耀终端有限公司 图像处理方法和电子设备
CN111143589A (zh) * 2019-12-06 2020-05-12 Oppo广东移动通信有限公司 一种图像处理方法及装置、存储介质
CN113011433B (zh) * 2019-12-20 2023-10-13 杭州海康威视数字技术股份有限公司 一种滤波参数调整方法及装置
CN111369465B (zh) * 2020-03-04 2024-03-08 东软医疗系统股份有限公司 Ct动态图像增强方法及装置
CN113630545B (zh) * 2020-05-07 2023-03-24 华为技术有限公司 一种拍摄方法及设备
US20220060628A1 (en) * 2020-08-19 2022-02-24 Honeywell International Inc. Active gimbal stabilized aerial visual-inertial navigation system
CN115393406B (zh) * 2022-08-17 2024-05-10 中船智控科技(武汉)有限公司 一种基于孪生卷积网络的图像配准方法

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102238316A (zh) * 2010-04-29 2011-11-09 北京科迪讯通科技有限公司 一种3d数字视频图像的自适应实时降噪方案
CN104767913A (zh) * 2015-04-16 2015-07-08 中国科学院自动化研究所 一种对比度自适应的视频去噪系统
US20150373235A1 (en) * 2014-06-24 2015-12-24 Realtek Semiconductor Corp. De-noising method and image system
CN105208376A (zh) * 2015-08-28 2015-12-30 青岛中星微电子有限公司 一种数字降噪方法和装置
CN105654428A (zh) * 2014-11-14 2016-06-08 联芯科技有限公司 图像降噪方法及系统
CN105809630A (zh) * 2014-12-30 2016-07-27 展讯通信(天津)有限公司 一种图像噪声过滤方法及系统
CN109348089A (zh) * 2018-11-22 2019-02-15 Oppo广东移动通信有限公司 夜景图像处理方法、装置、电子设备及存储介质
CN109873953A (zh) * 2019-03-06 2019-06-11 深圳市道通智能航空技术有限公司 图像处理方法、夜间拍摄方法、图像处理芯片及航拍相机

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1761286A (zh) * 2005-11-03 2006-04-19 上海交通大学 用边缘检测、运动检测和中值滤波去除蚊式噪声的方法
CN100420270C (zh) * 2006-06-13 2008-09-17 北京中星微电子有限公司 图像序列帧间闪烁噪声消除方法
CN101106685B (zh) * 2007-08-31 2010-06-02 湖北科创高新网络视频股份有限公司 一种基于运动检测的去隔行方法和装置
US8149336B2 (en) * 2008-05-07 2012-04-03 Honeywell International Inc. Method for digital noise reduction in low light video
CN101299269A (zh) * 2008-06-13 2008-11-05 北京中星微电子有限公司 静止场景的标定方法及装置
CN101640761A (zh) * 2009-08-31 2010-02-03 杭州新源电子研究所 车载数字电视信号处理方法
CN102306278B (zh) * 2011-07-08 2017-05-10 中兴智能交通(无锡)有限公司 一种基于视频的烟火检测方法和装置
CN103179325B (zh) * 2013-03-26 2015-11-04 北京理工大学 一种固定场景下低信噪比视频的自适应3d降噪方法
EP3466051A1 (en) * 2016-05-25 2019-04-10 GoPro, Inc. Three-dimensional noise reduction
US10448014B2 (en) * 2017-05-23 2019-10-15 Intel Corporation Content adaptive motion compensated temporal filtering for denoising of noisy video for efficient coding
CN111010495B (zh) * 2019-12-09 2023-03-14 腾讯科技(深圳)有限公司 一种视频降噪处理方法及装置
US11902661B2 (en) * 2022-01-06 2024-02-13 Lenovo (Singapore) Pte. Ltd. Device motion aware temporal denoising

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102238316A (zh) * 2010-04-29 2011-11-09 北京科迪讯通科技有限公司 一种3d数字视频图像的自适应实时降噪方案
US20150373235A1 (en) * 2014-06-24 2015-12-24 Realtek Semiconductor Corp. De-noising method and image system
CN105654428A (zh) * 2014-11-14 2016-06-08 联芯科技有限公司 图像降噪方法及系统
CN105809630A (zh) * 2014-12-30 2016-07-27 展讯通信(天津)有限公司 一种图像噪声过滤方法及系统
CN104767913A (zh) * 2015-04-16 2015-07-08 中国科学院自动化研究所 一种对比度自适应的视频去噪系统
CN105208376A (zh) * 2015-08-28 2015-12-30 青岛中星微电子有限公司 一种数字降噪方法和装置
CN109348089A (zh) * 2018-11-22 2019-02-15 Oppo广东移动通信有限公司 夜景图像处理方法、装置、电子设备及存储介质
CN109873953A (zh) * 2019-03-06 2019-06-11 深圳市道通智能航空技术有限公司 图像处理方法、夜间拍摄方法、图像处理芯片及航拍相机

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115314627A (zh) * 2021-05-08 2022-11-08 杭州海康威视数字技术股份有限公司 一种图像处理方法、系统及摄像机
CN115314627B (zh) * 2021-05-08 2024-03-01 杭州海康威视数字技术股份有限公司 一种图像处理方法、系统及摄像机
CN115188091A (zh) * 2022-07-13 2022-10-14 国网江苏省电力有限公司泰州供电分公司 一种融合电力输变配设备的无人机网格化巡检系统及方法
CN115188091B (zh) * 2022-07-13 2023-10-13 国网江苏省电力有限公司泰州供电分公司 一种融合电力输变配设备的无人机网格化巡检系统及方法

Also Published As

Publication number Publication date
CN109873953A (zh) 2019-06-11
US20210400172A1 (en) 2021-12-23

Similar Documents

Publication Publication Date Title
WO2020177723A1 (zh) 图像处理方法、夜间拍摄方法、图像处理芯片及航拍相机
CN111641778B (zh) 一种拍摄方法、装置与设备
CN111418201B (zh) 一种拍摄方法及设备
WO2021208706A1 (zh) 高动态范围图像合成方法、装置、图像处理芯片及航拍相机
WO2017215501A1 (zh) 图像降噪处理方法、装置及计算机存储介质
CN110691193B (zh) 摄像头切换方法、装置、存储介质及电子设备
US20140285698A1 (en) Viewfinder Display Based on Metering Images
WO2014093042A1 (en) Determining an image capture payload burst structure based on metering image capture sweep
EP3053332A1 (en) Using a second camera to adjust settings of first camera
WO2014099284A1 (en) Determining exposure times using split paxels
JP5414910B2 (ja) 表示ユニット、撮像ユニット及び表示システム装置
US9906732B2 (en) Image processing device, image capture device, image processing method, and program
CN108200352B (zh) 一种调解图片亮度的方法、终端及存储介质
WO2014093048A1 (en) Determining an image capture payload burst structure
CN110868547A (zh) 拍照控制方法、拍照控制装置、电子设备及存储介质
WO2023137956A1 (zh) 图像处理方法、装置、电子设备及存储介质
US9648261B2 (en) Account for clipped pixels in auto-focus statistics collection
CN110740266B (zh) 图像选帧方法、装置、存储介质及电子设备
WO2017143654A1 (zh) 选择待输出照片的方法、拍照方法、装置及存储介质
CN108833801B (zh) 基于图像序列的自适应运动检测方法
EP3442218A1 (en) Imaging apparatus and control method for outputting images with different input/output characteristics in different regions and region information, client apparatus and control method for receiving images with different input/output characteristics in different regions and region information and displaying the regions in a distinguishable manner
US12041358B2 (en) High dynamic range image synthesis method and apparatus, image processing chip and aerial camera
CN116389885B (zh) 拍摄方法、电子设备及存储介质
JP2019033470A (ja) 画像処理システム、撮像装置、画像処理装置、制御方法およびプログラム
WO2023245391A1 (zh) 一种相机预览的方法及其装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20766764

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20766764

Country of ref document: EP

Kind code of ref document: A1