WO2022078128A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2022078128A1
WO2022078128A1 PCT/CN2021/117547 CN2021117547W
Authority
WO
WIPO (PCT)
Prior art keywords
current frame
frame image
brightness value
image
exposure time
Prior art date
Application number
PCT/CN2021/117547
Other languages
English (en)
French (fr)
Inventor
杨攀 (Yang Pan)
钟磊 (Zhong Lei)
王超 (Wang Chao)
李垠 (Li Yin)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21879174.7A (EP4207734A1)
Publication of WO2022078128A1
Priority to US18/295,647 (US20230234509A1)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/745Detection of flicker frequency or suppression of flicker wherein the flicker is caused by illumination, e.g. due to fluorescent tube illumination or pulsed LED illumination
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/54Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/53Control of the integration time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/67Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
    • H04N25/671Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
    • H04N25/677Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction for reducing the column or line fixed pattern noise
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/703SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N25/706Pixels for exposure or ambient light measuring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/76Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/766Addressed sensors, e.g. MOS or CMOS sensors comprising control or output lines used for a plurality of functions, e.g. for pixel output, driving, reset or power

Definitions

  • the present application relates to the field of image processing, and more particularly to an image processing method and apparatus.
  • CMOS image sensors are widely used due to their flexible image capture, high sensitivity and low power consumption.
  • the exposure method used by the CMOS image sensor is row-by-row exposure. If the energy received by different rows differs, the captured image shows periodic light and dark stripes, that is, flicker stripes. Flickering stripes degrade the quality of the image.
  • the prior art usually uses flicker correction to correct the image frame by frame to remove the flicker stripes in the image.
  • although frame-by-frame flicker correction can improve image quality to a certain extent, the result always deviates to some degree from interference-free image quality.
  • the present application provides an image processing method and apparatus, which can improve image quality.
  • an image processing method, comprising: acquiring a current frame image, where the current frame image includes flickering stripes; determining, according to the current frame image, the frequency of the interference source causing the flickering stripes; and adjusting the exposure time of the next frame according to the frequency of the interference source to obtain a next frame image that does not include the flickering stripes.
  • in this way, the frequency of the interference source causing the flickering stripes can be determined from the current frame image, and the exposure time of the next frame can then be adjusted according to that frequency so that the next frame image does not include flickering stripes; the influence of the interference source on the image is thus avoided at the root, and image quality is improved.
  • adjusting the exposure time of the next frame according to the frequency of the interference source to obtain the next frame image that does not include the flickering stripes includes: adjusting the exposure time of the next frame so that it satisfies the formula T_AE = n/(2f), where T_AE is the exposure time of the next frame, f is the frequency of the interference source, and n is a positive integer.
  • the light energy period is the reciprocal of the light energy frequency, and the light energy frequency is twice the frequency of the interference source; satisfying the above formula means the exposure time of the next frame is an integer multiple of the light energy period, so flickering stripes can be avoided in the next frame image, thereby improving image quality.
  • the method further includes: acquiring the exposure time of the current frame; and adjusting the exposure time of the next frame according to the frequency of the interference source so that the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is less than or equal to a first threshold.
  • acquiring the current frame image includes: acquiring the brightness value of each pixel in the current frame image; calculating, according to the brightness values of the pixels, the row average brightness value of each row of pixels in the current frame image; and determining, according to the row average brightness values, that the current frame image includes the flickering stripes.
  • in this way, the row average brightness value of each row of pixels is calculated from the brightness value of each pixel in the current frame image, and it is then determined from the row average brightness values that the current frame image includes flickering stripes, which improves the accuracy of flickering-stripe determination.
  • determining, according to the row average brightness values, that the current frame image includes the flickering stripes includes: fitting the row average brightness values with an objective function to obtain a fitting function, where the objective function is a function formed by the absolute value of a sine function; if the fitting degree of the fitting function is greater than or equal to a second threshold, it is determined that the current frame image includes the flickering stripes.
  • when determining from the row average brightness values whether the current frame image includes flickering stripes, the row average brightness values can be fitted with a function formed by the absolute value of a sine function to obtain a fitting function; if the fitting degree of the fitting function reaches a certain threshold, it is determined that the current frame image includes the flickering stripes, which improves the accuracy of flickering-stripe determination.
  • the determining, according to the current frame image, the frequency of the interference source that causes the flickering fringes includes: determining the frequency of the interference source according to the fitting function.
  • a fitting function is obtained by fitting the row average brightness values of the current frame image, and the frequency of the interference source causing the flickering stripes is then determined from the fitting function; interference sources of any frequency can thus be detected, so that the image processing method adapts to more scenes and the image processing capability and correction range are improved.
  • the method further includes: performing flicker correction on the current frame image according to the luminance value of each pixel point and the fitting function.
  • flicker correction can be performed on the current frame image according to the brightness value of each pixel point and the fitting function, which can improve the quality of the current frame image.
  • performing flicker correction on the current frame image includes: calculating the initial corrected brightness value of each pixel according to the brightness value of each pixel and the fitting function; calculating the global average brightness value of the current frame image according to the row average brightness values; calculating the final corrected brightness value of each pixel according to the initial corrected brightness value of each pixel, the global average brightness value, and the row minimum brightness value; and performing brightness correction on each pixel according to its final corrected brightness value.
  • in this way, the brightness value of each pixel is nonlinearly restored according to the brightness value of each pixel and the fitting function to obtain the initial corrected brightness value of each pixel, which improves the efficiency and accuracy of flicker correction for the current frame image; brightness compensation is then performed on each pixel according to its initial corrected brightness value, the global average brightness value, and the row minimum brightness value, which avoids flickering jumps between different frames and keeps the overall brightness smooth; finally, brightness correction is performed on each pixel according to its final corrected brightness value, which improves the quality of the current frame image and optimizes its overall visual effect.
  • an image processing apparatus, including: an acquisition module, configured to acquire a current frame image, where the current frame image includes flickering stripes; and a processing module, configured to determine, according to the current frame image, the frequency of the interference source causing the flickering stripes, and to adjust the exposure time of the next frame according to the frequency of the interference source to obtain a next frame image that does not include flickering stripes.
  • the processing module is further configured to adjust, according to the frequency of the interference source, the exposure time of the next frame so that it satisfies the formula T_AE = n/(2f), where T_AE is the exposure time of the next frame, f is the frequency of the interference source, and n is a positive integer.
  • the acquisition module is further configured to: acquire the exposure time of the current frame; the processing module is further configured to: adjust the exposure time of the next frame according to the frequency of the interference source , so that the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is less than or equal to the first threshold.
  • the acquisition module is further configured to acquire the brightness value of each pixel in the current frame image; the processing module is further configured to calculate, according to the brightness values of the pixels, the row average brightness value of each row of pixels in the current frame image, and to determine, according to the row average brightness values, that the current frame image includes the flickering stripes.
  • the processing module is further configured to fit the row average brightness values with an objective function to obtain a fitting function, where the objective function is a function formed by the absolute value of a sine function; if the fitting degree of the fitting function is greater than or equal to the second threshold, it is determined that the current frame image includes the flickering stripes.
  • the processing module is further configured to: determine the interference source frequency according to the fitting function.
  • the processing module is further configured to: perform flicker correction on the current frame image according to the luminance value of each pixel point and the fitting function.
  • the processing module is further configured to: calculate the initial corrected brightness value of each pixel point according to the brightness value of each pixel point and the fitting function; Calculate the global average brightness value of the current frame image according to the row average brightness value; calculate the final correction of each pixel point according to the initial corrected brightness value of each pixel point, the global average brightness value and the row minimum brightness value Brightness value; perform brightness correction for each pixel point according to the final corrected brightness value of each pixel point.
  • a vehicle including the device in the second aspect or any possible implementation of the second aspect.
  • a computer program product containing instructions, which, when run on a computer, causes the computer to execute the method in the first aspect or any implementation manner of the first aspect.
  • a computer-readable storage medium storing program code for execution by a device, the program code including instructions for executing the method in the first aspect or any possible implementation manner of the first aspect.
  • in a sixth aspect, a chip is provided, including a processor and a data interface, where the processor reads, through the data interface, instructions stored in a memory, and executes the method in the first aspect or any possible implementation manner of the first aspect.
  • optionally, the chip may further include a memory in which instructions are stored, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor executes the method in the first aspect or any possible implementation manner of the first aspect.
  • FIG. 1 is an example diagram of an image including flickering stripes provided by an embodiment of the present application.
  • FIG. 2 is an example diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 3 is an example diagram of an image processing method provided by an embodiment of the present application.
  • FIG. 4 is an example diagram of an interference source signal waveform and its corresponding light energy waveform provided by an embodiment of the present application.
  • FIG. 5 is an example diagram of a method for performing flicker correction on a current frame image provided by an embodiment of the present application.
  • FIG. 6 is an example diagram of an overall image processing flow provided by an embodiment of the present application.
  • FIG. 7 is an exemplary diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 8 is an exemplary block diagram of a hardware structure of an image processing apparatus provided by an embodiment of the present application.
  • CMOS image sensors are widely used due to their excellent characteristics such as flexible image capture, high sensitivity and low power consumption.
  • the exposure method adopted by the CMOS image sensor is to expose line by line.
  • within the same row, the exposure of every pixel is the same, that is, each pixel in the same row has the same exposure start time and exposure duration, so the energy received by all pixels in the same row is the same.
  • although the exposure duration is the same for different rows, their exposure start times differ, so the energy received by different rows is not necessarily the same; if the energy received by different rows differs, the captured image shows periodic light and dark stripes, that is, flicker stripes.
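This rolling-shutter flicker mechanism can be illustrated with a small simulation (all parameter values below are hypothetical; the sensor is idealized as integrating a rectified-sine light source over each row's exposure window):

```python
import math

def row_energy(row, f_src=100.0, t_exp=0.004, t_line=1e-4, steps=1000):
    """Light energy integrated by one sensor row. Rolling shutter: row r
    starts exposing at r * t_line. An AC-driven source of frequency f_src
    emits power proportional to |sin(2*pi*f_src*t)| (twice-per-cycle flicker)."""
    t0 = row * t_line
    dt = t_exp / steps
    return sum(abs(math.sin(2 * math.pi * f_src * (t0 + k * dt))) * dt
               for k in range(steps))

# Exposure NOT a multiple of the light-energy period (1/(2*100 Hz) = 5 ms):
striped = [row_energy(r, t_exp=0.004) for r in range(50)]
# Exposure equal to one light-energy period: every row integrates the same energy.
uniform = [row_energy(r, t_exp=0.005) for r in range(50)]

print(max(striped) / min(striped))  # clearly > 1: rows differ, visible stripes
print(max(uniform) / min(uniform))  # ~1: stripes avoided
```

The second case is exactly the avoidance strategy the application describes: once the exposure time is an integer multiple of the light energy period, every row integrates the same energy regardless of its start time.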
  • each frame of image information is very important for artificial intelligence (AI) recognition. If the picture includes flickering stripes, as shown in Figure 1, it may cause the AI recognition algorithm to fail to identify the vehicle or other road condition information in the image, resulting in serious traffic accidents.
  • a flicker correction method is usually used to correct an image frame by frame to remove flicker stripes in the image, thereby improving image quality. While flicker correction on a frame-by-frame basis can improve image quality to some extent, there is always a certain deviation from interference-free image quality.
  • to this end, the present application provides an image processing method, which determines the frequency of the interference source from the current frame image containing flickering stripes and then adjusts the exposure time of the next frame according to that frequency, so as to avoid the influence of the interference source on the image at the root and improve image quality.
  • the embodiments of the present application may be applied to the field of traffic roads, the field of automatic driving, and the field of aerospace and navigation, which are not limited in this application.
  • FIG. 2 is an example diagram of a system architecture provided by an embodiment of the present application.
  • the system architecture 200 can be applied to autonomous vehicles to collect information such as the driving environment or road conditions in the form of images and to process the collected images; it can also be applied to other devices or equipment that need to obtain high-quality images, which is not limited in this application.
  • the system architecture 200 includes a camera 11 , a mobile data center (move data center, MDC) 12 , and a display device 18 .
  • the MDC 12 specifically includes an image signal processor (ISP) 13, a central processing unit (CPU) 14, an external interface 15, a camera perception chip 16, a sensor fusion unit 17, and other components.
  • the MDC 12 is respectively connected to the camera 11 and the display device 18 through the external interface 15 .
  • the camera 11 is a camera without an ISP, and is used for acquiring images.
  • the MDC 12 is used to process the images acquired by the camera 11 .
  • the ISP 13 is integrated inside the MDC 12 and performs related image processing.
  • the ISP 13 can identify whether the current frame image includes flicker stripes; when it does, the ISP 13 can perform flicker correction on the current frame image and can also perform flicker-avoidance processing on the next frame image.
  • the CPU 14 is used to perform serial and parallel processing of various tasks.
  • the external interface 15 is used for the MDC 12 to connect and communicate with related peripherals (eg, the camera 11 and the display device 18 ).
  • the camera perception chip 16 performs segmentation, recognition, and other related processing on the content of the image (information such as people and scenes).
  • the sensor fusion unit 17 performs fusion calculation on the sensor units peripheral to the MDC 12 and outputs the final control strategy. For example, if the system architecture 200 is used in the field of automatic driving, the sensor fusion unit 17 can fuse data from the image sensor, speed sensor, temperature sensor, torque sensor, and other peripherals of the MDC 12, and output the final vehicle control strategy.
  • the display device 18 is used for displaying the processed images in the form of pictures or videos.
  • FIG. 3 is an example diagram of an image processing method provided by an embodiment of the present application. It should be understood that the method 300 can be applied in the system architecture 200 . The method 300 includes steps S310-330, which will be described in detail below.
  • S310 Acquire a current frame image, where the current frame image includes flickering stripes.
  • the current frame image may be acquired from an image sensor or from other image acquisition devices, which is not limited in this application. It should be understood that, in actual operation, images are acquired frame by frame: the currently acquired image is recorded as the current frame image, and the image acquired at the next moment is recorded as the next frame image.
  • the brightness distribution of the current frame image may be analyzed first to determine that the current frame image includes flickering stripes.
  • the above process can also be understood as follows: when the current frame image is acquired, it is first determined whether it includes flickering stripes. Specifically, if the distribution of the row average brightness values meets the requirement, it is determined that the current frame image includes flicker stripes, and otherwise that it does not. If the current frame includes flickering stripes, steps S320 to S330 are performed; otherwise, the next frame image continues to be collected.
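A minimal sketch of the per-row brightness statistics underlying this check (NumPy-based; the image values are synthetic and the function name is illustrative):

```python
import numpy as np

def row_average_brightness(luma):
    """Mean brightness of each row of a 2-D luminance image."""
    luma = np.asarray(luma, dtype=float)
    if luma.ndim != 2:
        raise ValueError("expected a 2-D luminance (Y-channel) image")
    return luma.mean(axis=1)

# Synthetic 4x3 luminance image whose rows get progressively brighter.
img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90],
       [100, 110, 120]]
row_avg = row_average_brightness(img)
print(row_avg.tolist())  # [20.0, 50.0, 80.0, 110.0]
```

The distribution of these row averages is what gets tested against the absolute-sine model described below.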
  • FIG. 4 is an example diagram of an interference source signal waveform and its corresponding optical energy waveform provided by an embodiment of the present application.
  • the interference source signal is usually a sinusoidal waveform of a certain frequency, while light energy has no sign (it depends only on the magnitude of the signal), so the corresponding light energy waveform is the rectified sinusoid shown in FIG. 4.
  • the distribution of light energy can also be regarded as the brightness distribution of different rows in an image that includes flickering stripes, and it is easy to see that the brightness distribution over the rows satisfies a function formed by the absolute value of a sine function.
  • the following describes how it is determined that the current frame image includes flickering stripes.
  • the row average brightness values of different rows in the acquired current frame image can be fitted with an objective function to obtain a fitting function; if the fitting degree of the fitting function is greater than or equal to the second threshold, it is determined that the current frame image includes flickering stripes.
  • the objective function is a function formed by the absolute value of a sine function, i.e., formula (1): L = A·|sin(w·t + φ)| + B
  • L is the brightness value
  • A is the peak of the sine wave
  • B is the brightness offset
  • t is the time variable, with t ∈ [0, t'_AE]
  • t'_AE is the exposure time of the current frame
  • φ is the initial time offset of the function
  • w is the angular frequency of the flicker.
  • the fitting function is compared with the original row average brightness values to determine the fitting degree; if the fitting degree meets a certain threshold, the current frame image can be considered to include flickering stripes.
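A sketch of this fitting-and-goodness-of-fit step in NumPy. The grid-search strategy, the synthetic row-average data, and the use of R² as the "fitting degree" are illustrative assumptions, not the specific procedure prescribed by the application:

```python
import numpy as np

def fit_abs_sine(rows, lum, w_grid, phi_grid):
    """Fit lum ~ A*|sin(w*rows + phi)| + B: grid-search (w, phi), linear
    least squares for (A, B) at each grid point, keep the lowest SSE."""
    best = (np.inf, 0.0, 0.0, 0.0, 0.0)
    for w in w_grid:
        for phi in phi_grid:
            basis = np.abs(np.sin(w * rows + phi))
            X = np.column_stack([basis, np.ones_like(rows)])
            coef = np.linalg.lstsq(X, lum, rcond=None)[0]
            sse = float(np.sum((X @ coef - lum) ** 2))
            if sse < best[0]:
                best = (sse, w, phi, coef[0], coef[1])
    return best  # (sse, w, phi, A, B)

# Synthetic row-average brightness of a 480-row striped frame.
rows = np.arange(480, dtype=float)
lum = 40.0 * np.abs(np.sin(0.05 * rows + 0.3)) + 60.0

sse, w_best, phi_best, A_fit, B_fit = fit_abs_sine(
    rows, lum,
    w_grid=np.linspace(0.01, 0.1, 91),
    phi_grid=np.linspace(0.0, np.pi, 64))

# "Fitting degree" taken here as R^2; flicker is declared when it clears a threshold.
r2 = 1.0 - sse / float(np.sum((lum - lum.mean()) ** 2))
print(w_best, r2 > 0.99)
```

Once the fit is accepted, the recovered angular frequency directly yields the interference-source frequency used in step S320.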
  • when determining from the row average brightness values whether the current frame image includes flickering stripes, the row average brightness values can be fitted with a function formed by the absolute value of a sine function to obtain a fitting function; if the fitting degree of the fitting function reaches a certain threshold, it is determined that the current frame image includes the flickering stripes, which improves the accuracy of flickering-stripe determination.
  • S320 Determine the frequency of the interference source that causes the flickering stripes.
  • the frequency of the interference source may be determined according to the above fitting function.
  • that is, the frequency determined from the fitting function is the frequency of the interference source that causes the flickering stripes.
  • the frequencies of various interference sources such as traffic lights, signal lights, and tail lights are often not fixed, and a variety of brightness levels and light-energy frequencies commonly occur.
  • therefore, a vehicle camera should be able to detect the full frequency range (1 Hz to tens of kHz) and perform corresponding avoidance processing.
  • a fitting function is obtained by fitting the row average brightness values of the current frame image, and the frequency of the interference source causing the flickering stripes is then determined from the fitting function; interference sources of any frequency can thus be detected, so that the image processing method adapts to more scenes and the image processing capability is improved.
  • in this way, the frequency of the interference source causing the flickering stripes can be determined from the current frame image, and the exposure time of the next frame can then be adjusted according to that frequency so that the next frame image does not include flickering stripes; the influence of the interference source on the image is thus avoided at the root, and image quality is improved.
  • if the exposure time is adjusted to be an integer multiple of the light energy period, flicker can be avoided; the light energy period is the reciprocal of the light energy frequency, and the light energy frequency is twice the frequency of the interference source.
  • the exposure time of the next frame can be adjusted to satisfy formula (2): T_AE = n/(2f), where T_AE is the exposure time of the next frame, f is the frequency of the interference source, and n is a positive integer. That is, by adjusting the exposure time of the next frame to an integer multiple of the light energy period, flickering stripes are avoided in the next frame image, thereby improving image quality.
  • the method 300 may further include: acquiring the exposure time of the current frame; adjusting the exposure time of the next frame according to the frequency of the interference source, so that the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is less than or equal to the first threshold.
  • preferably, the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is minimized, so that the transition between the current frame image and the next frame image is smoother and flicker between frames is avoided.
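A natural way to satisfy both constraints is to pick the positive integer n closest to 2·f·t'_AE; a minimal sketch (the function name and parameters are hypothetical):

```python
def next_frame_exposure(f_interference: float, t_current: float) -> float:
    """Return an exposure time T = n/(2f), n a positive integer, chosen so
    |T - t_current| is minimal (keeps the brightness transition smooth)."""
    if f_interference <= 0 or t_current <= 0:
        raise ValueError("frequency and exposure time must be positive")
    period = 1.0 / (2.0 * f_interference)   # light-energy period = 1/(2f)
    n = max(1, round(t_current / period))   # nearest positive integer multiple
    return n * period

# 50 Hz mains interference -> light energy flickers at 100 Hz (10 ms period).
print(round(next_frame_exposure(50.0, 0.033), 6))  # 0.03: closest multiple of 10 ms
```

The `max(1, ...)` guard reflects that n must stay a positive integer even when the current exposure is shorter than half a light-energy period.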
  • the method 300 may further include: performing flicker correction on the current frame image according to the brightness value of each pixel point and the fitting function.
  • FIG. 5 is an example diagram of a method for flicker correction for a current frame image provided by an embodiment of the present application.
  • the method 500 includes steps S510-540, which are described in detail below.
  • S510 Calculate the initial corrected brightness value of each pixel point according to the brightness value of each pixel point and the fitting function.
  • a fitting function that takes time as the variable can be converted into a fitting function that takes the row index as the variable.
  • t'_AE is the exposure time of the current frame; under an interference source of frequency f, there are 2·t'_AE·f flicker cycles, so each cycle corresponds to y/(2·t'_AE·f) rows of pixels, where y is the number of pixel rows.
  • the initial corrected brightness value of each pixel can be calculated according to the following formula (3):
  • L_{i,j} is the original brightness value of the pixel in row i and column j
  • L1_{i,j} is the initial corrected brightness value of the pixel in row i and column j
  • i ∈ [0, y] is a positive integer, and θ ∈ [0, π).
  • S520 Calculate the global average brightness value of the current frame image according to the row average brightness value.
  • the global average brightness value of the current frame image can be calculated according to the following formula (4), i.e., as the mean of the row average brightness values over all y rows: L_global = (1/y)·Σ_{i=1}^{y} L̄_i.
  • S530 Calculate the final corrected brightness value of each pixel point according to the initial corrected brightness value, the global average brightness value and the row minimum brightness value of each pixel point.
  • the final corrected brightness value of each pixel can be calculated according to formula (5):
  • L_2 is the final corrected brightness value of the pixel in row i, column j
  • B is the row brightness offset, which can also be regarded as the row minimum brightness value.
  • brightness compensation can also be performed first, with the final corrected brightness value then obtained from the compensated brightness values and the fitting function; this application does not limit the order.
  • when performing flicker correction on the current frame image, the brightness value of each pixel is first non-linearly restored using the fitting function to obtain its initial corrected brightness value, which improves the efficiency and accuracy of the correction; brightness compensation is then applied to each pixel based on its initial corrected brightness value, the global average brightness value and the row minimum brightness value, which avoids flashing jumps between different images and keeps the overall brightness smooth; finally, each pixel is corrected to its final corrected brightness value, which improves the quality of the current frame image and gives the best overall visual effect.
  • FIG. 6 is an example diagram of an overall image processing flow provided by an embodiment of the present application.
  • the process 600 includes steps S610-650, which will be described in detail below.
  • images are acquired frame by frame.
  • the currently acquired image can be recorded as the current frame image.
  • the next image to be acquired can be recorded as the next frame image.
  • the brightness distribution of the pixels in the current frame image is analyzed to determine whether the current frame image includes flickering stripes.
  • the specific determination method has been described in detail above, and will not be repeated here.
  • flicker correction is performed on the current frame image.
  • the specific flicker correction method has been described in detail above, and will not be repeated here.
  • the frequency of the interference source is determined according to the current frame image.
  • the manner of determining the frequency of the interference source has been described in detail above, and will not be repeated here.
  • FIG. 7 is an example diagram of an image processing apparatus provided by an embodiment of the present application.
  • the apparatus 700 includes an acquisition module 710 and a processing module 720 .
  • the obtaining module 710 is configured to obtain a current frame image, where the current frame image includes flickering stripes.
  • the processing module 720 is configured to determine the frequency of the interference source causing the flickering fringes according to the current frame image; and adjust the exposure time of the next frame according to the frequency of the interference source to obtain the next frame of image that does not include the flickering fringes.
  • the processing module 720 can also be used to: according to the frequency of the interference source, adjust the exposure time of the next frame to satisfy the following formula:
  • T AE is the exposure time of the next frame
  • f is the frequency of the interference source
  • n is a positive integer.
  • the acquisition module 710 can also be used to acquire the exposure time of the current frame; the processing module 720 can also be used to adjust the exposure time of the next frame according to the frequency of the interference source, so that the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is less than or equal to the first threshold.
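The two requirements above — T_AE = n/(2f) with n a positive integer, and an exposure change bounded by the first threshold — can be met by choosing the integer n whose candidate exposure lies closest to the current exposure time. A minimal sketch (the function name and the None-on-failure behavior are our own assumptions, not the patent's):

```python
# Pick the next-frame exposure time T_AE = n / (2*f), n a positive
# integer, closest to the current exposure time t_cur (seconds).
# Returns None if even the closest candidate violates the threshold.
def next_exposure(t_cur: float, f: float, threshold: float):
    n = max(1, round(2 * f * t_cur))   # nearest integer count of half-periods
    t_ae = n / (2 * f)
    if abs(t_ae - t_cur) <= threshold:
        return t_ae
    return None

# Example: current exposure 9.8 ms under a 100 Hz interference source
# gives n = round(2*100*0.0098) = 2, hence T_AE = 2/200 s = 10 ms.
```
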
  • the obtaining module 710 can also be used to obtain the brightness value of each pixel in the current frame image; the processing module 720 can also be used to calculate, from the pixel brightness values, the row average brightness value of each row of pixels in the current frame image, and to determine from the row average brightness values that the current frame image includes flickering stripes.
  • the processing module 720 can also be used to fit the row average brightness values with an objective function to obtain a fitting function, the objective function being a function formed by the absolute value of a sine function; if the degree of fit of the fitting function is greater than or equal to the second threshold, it is determined that the current frame image includes flickering stripes.
  • the processing module 720 may be further configured to: determine the frequency of the interference source according to the fitting function.
  • the processing module 720 may be further configured to: perform flicker correction on the current frame image according to the brightness value of each pixel point and the fitting function.
  • the processing module 720 can also be used to: calculate the initial corrected brightness value of each pixel point according to the brightness value of each pixel point and the fitting function; calculate the global average brightness of the current frame image according to the row average brightness value value; calculate the final corrected brightness value of each pixel point according to the initial corrected brightness value, global average brightness value and row minimum brightness value of each pixel point; according to the final corrected brightness value of each pixel point, for each pixel point Perform brightness correction.
  • FIG. 8 is an exemplary block diagram of a hardware structure of an image processing apparatus provided by an embodiment of the present application.
  • the apparatus 800 (the apparatus 800 may specifically be a computer device) includes a memory 810 , a processor 820 , a communication interface 830 and a bus 840 .
  • the memory 810 , the processor 820 , and the communication interface 830 are connected to each other through the bus 840 for communication.
  • the memory 810 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • the memory 810 may store a program, and when the program stored in the memory 810 is executed by the processor 820, the processor 820 is configured to execute each step of the image processing method in the embodiments of the present application.
  • the processor 820 may adopt a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, and is used to execute the relevant programs to implement the image processing method of the method embodiments of the present application.
  • the processor 820 may also be an integrated circuit chip with signal processing capability.
  • each step of the image processing method of the present application may be completed by an integrated logic circuit of hardware in the processor 820 or by instructions in the form of software.
  • the above-mentioned processor 820 may also be a general-purpose processor, a digital signal processor (digital signal processing, DSP), an application specific integrated circuit (ASIC), an off-the-shelf programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, Discrete gate or transistor logic devices, discrete hardware components.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 810, and the processor 820 reads the information in the memory 810 and, in combination with its hardware, completes the functions required by the modules included in the image processing apparatus of the embodiments of the present application, or executes the image processing method of the method embodiments of the present application.
  • the communication interface 830 uses a transceiving device such as, but not limited to, a transceiver to implement communication between the device 800 and other devices or a communication network.
  • Bus 840 may include pathways for communicating information between various components of device 800 (eg, memory 810, processor 820, communication interface 830).
  • An embodiment of the present application further provides a vehicle, including the above-mentioned device 700 .
  • the apparatus 700 may perform the method 300 or 600.
  • the vehicle may be a smart car, a new energy car, a connected car, an intelligent driving car, etc., which is not specifically limited in this application.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application can be embodied in the form of a software product in essence, or the part that contributes to the prior art or the part of the technical solution, and the computer software product is stored in a storage medium, including Several instructions are used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program codes .

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Studio Devices (AREA)
  • Picture Signal Circuits (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • Liquid Crystal Display Device Control (AREA)

Abstract

This application provides an image processing method and apparatus, which can be applied to vehicles such as smart cars, new energy vehicles, connected vehicles, and intelligent driving cars. The image processing method includes: acquiring a current frame image that contains flicker stripes; determining, from the current frame image, the frequency of the interference source causing the flicker stripes; and adjusting the exposure time of the next frame according to the interference-source frequency to obtain a next frame image that does not contain flicker stripes. The image processing method of the embodiments of this application avoids the influence of the interference source on the image at its root, thereby improving image quality.

Description

Image processing method and apparatus
This application claims priority to Chinese Patent Application No. 202011084864.8, filed with the China National Intellectual Property Administration on October 12, 2020 and entitled "Image processing method and apparatus", which is incorporated herein by reference in its entirety.
Technical field
This application relates to the field of image processing, and more specifically to an image processing method and apparatus.
Background
Among common image sensors, complementary metal oxide semiconductor (CMOS) image sensors are widely used because of their flexible image capture, high sensitivity, and low power consumption. However, a CMOS image sensor exposes row by row. If different rows receive different amounts of light energy, the captured image exhibits periodic light and dark bands, i.e., flicker stripes, which degrade image quality.
The prior art typically removes flicker stripes by performing flicker correction on images frame by frame. Although frame-by-frame flicker correction improves image quality to some extent, the result always deviates somewhat from an interference-free image.
Therefore, how to improve image quality is an urgent technical problem.
Summary
This application provides an image processing method and apparatus capable of improving image quality.
According to a first aspect, an image processing method is provided, including: acquiring a current frame image, the current frame image including flicker stripes; determining, according to the current frame image, the frequency of the interference source causing the flicker stripes; and adjusting the exposure time of the next frame according to the interference-source frequency to obtain a next frame image that does not include flicker stripes.
In the embodiments of this application, if the current frame image contains flicker stripes, the frequency of the interference source causing them can be determined from the current frame image, and the exposure time of the next frame is then adjusted according to that frequency so that the next frame image contains no flicker stripes. This avoids the influence of the interference source on the image at its root and thus improves image quality.
With reference to the first aspect, in some implementations of the first aspect, adjusting the exposure time of the next frame according to the interference-source frequency to obtain a next frame image that does not include flicker stripes includes: adjusting the exposure time of the next frame according to the interference-source frequency to satisfy the following formula:
T_AE = n/(2f)
where T_AE is the exposure time of the next frame, f is the interference-source frequency, and n is a positive integer.
It should be understood that flicker can be avoided by setting the exposure time to an integer multiple of the light-energy period, where the light-energy period is the reciprocal of the light-energy frequency, and the light-energy frequency is twice the interference-source frequency.
Therefore, in the embodiments of this application, adjusting the exposure time of the next frame to satisfy T_AE = n/(2f), i.e., setting it to an integer multiple of the light-energy period, prevents flicker stripes in the next frame image and thus improves image quality.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: acquiring the exposure time of the current frame; and adjusting the exposure time of the next frame according to the interference-source frequency so that the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is less than or equal to a first threshold.
It should be understood that the smaller this absolute difference, the smoother the exposure transition; conversely, a larger difference causes a large jump in exposure and visible flashing on and off. Therefore, in the embodiments of this application, keeping the absolute difference at or below the first threshold makes the transition between the current frame image and the next frame image smoother and avoids inter-frame flicker.
With reference to the first aspect, in some implementations of the first aspect, acquiring the current frame image includes: acquiring the brightness value of each pixel in the current frame image; calculating, from the pixel brightness values, the row average brightness value of each row of pixels in the current frame image; and determining from the row average brightness values that the current frame image includes the flicker stripes.
In the embodiments of this application, calculating the row average brightness values from the per-pixel brightness values and then determining from them that the current frame image includes flicker stripes improves the accuracy of flicker-stripe determination.
With reference to the first aspect, in some implementations of the first aspect, determining from the row average brightness values that the current frame image includes the flicker stripes includes: fitting the row average brightness values with an objective function to obtain a fitting function, the objective function being a function formed by the absolute value of a sine function; and, if the degree of fit of the fitting function is greater than or equal to a second threshold, determining that the current frame image includes the flicker stripes.
In the embodiments of this application, when determining from the row average brightness values that the current frame image includes flicker stripes, the row average brightness values are fitted with a function formed by the absolute value of a sine function to obtain a fitting function, and the current frame image is determined to include flicker stripes when the degree of fit reaches a certain threshold, which improves the accuracy of flicker-stripe determination.
With reference to the first aspect, in some implementations of the first aspect, determining according to the current frame image the frequency of the interference source causing the flicker stripes includes: determining the interference-source frequency according to the fitting function.
In the embodiments of this application, a fitting function is obtained by fitting the row average brightness values of the current frame image, and the interference-source frequency causing the flicker stripes is then determined from the fitting function, so that interference sources of any frequency can be detected. This makes the image processing method applicable to more scenarios and improves the image processing capability and correction range.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: performing flicker correction on the current frame image according to the brightness value of each pixel and the fitting function.
In the embodiments of this application, performing flicker correction on the current frame image according to the per-pixel brightness values and the fitting function improves the quality of the current frame image.
With reference to the first aspect, in some implementations of the first aspect, performing flicker correction on the current frame image includes: calculating the initial corrected brightness value of each pixel according to the brightness value of each pixel and the fitting function; calculating the global average brightness value of the current frame image from the row average brightness values; calculating the final corrected brightness value of each pixel from its initial corrected brightness value, the global average brightness value and the row minimum brightness value; and performing brightness correction on each pixel according to its final corrected brightness value.
In the embodiments of this application, when performing flicker correction on the current frame image, the brightness value of each pixel is first non-linearly restored using the fitting function to obtain its initial corrected brightness value, which improves the efficiency and accuracy of the correction; brightness compensation is then applied to each pixel based on its initial corrected brightness value, the global average brightness value and the row minimum brightness value, which avoids flashing jumps between different images and keeps the overall brightness smooth; finally, each pixel is corrected to its final corrected brightness value, which improves the quality of the current frame image and gives the best overall visual effect.
According to a second aspect, an image processing apparatus is provided, including: an acquisition module configured to acquire a current frame image, the current frame image including flicker stripes; and a processing module configured to determine, according to the current frame image, the frequency of the interference source causing the flicker stripes, and to adjust the exposure time of the next frame according to the interference-source frequency to obtain a next frame image that does not include flicker stripes.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to adjust the exposure time of the next frame according to the interference-source frequency to satisfy the following formula:
T_AE = n/(2f)
where T_AE is the exposure time of the next frame, f is the interference-source frequency, and n is a positive integer.
With reference to the second aspect, in some implementations of the second aspect, the acquisition module is further configured to acquire the exposure time of the current frame, and the processing module is further configured to adjust the exposure time of the next frame according to the interference-source frequency so that the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is less than or equal to a first threshold.
With reference to the second aspect, in some implementations of the second aspect, the acquisition module is further configured to acquire the brightness value of each pixel in the current frame image, and the processing module is further configured to calculate, from the pixel brightness values, the row average brightness value of each row of pixels in the current frame image, and to determine from the row average brightness values that the current frame image includes the flicker stripes.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to fit the row average brightness values with an objective function to obtain a fitting function, the objective function being a function formed by the absolute value of a sine function, and, if the degree of fit of the fitting function is greater than or equal to a second threshold, to determine that the current frame image includes the flicker stripes.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to determine the interference-source frequency according to the fitting function.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to perform flicker correction on the current frame image according to the brightness value of each pixel and the fitting function.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to: calculate the initial corrected brightness value of each pixel according to the brightness value of each pixel and the fitting function; calculate the global average brightness value of the current frame image from the row average brightness values; calculate the final corrected brightness value of each pixel from its initial corrected brightness value, the global average brightness value and the row minimum brightness value; and perform brightness correction on each pixel according to its final corrected brightness value.
According to a third aspect, a vehicle is provided, including the apparatus of the second aspect or any possible implementation of the second aspect.
According to a fourth aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, the computer is caused to execute the method of the first aspect or any implementation of the first aspect.
According to a fifth aspect, a computer-readable storage medium is provided, storing program code for execution by a device, the program code including instructions for executing the method of the first aspect or any possible implementation of the first aspect.
According to a sixth aspect, a chip is provided, including a processor and a data interface; the processor reads, through the data interface, instructions stored in a memory and executes the method of the first aspect or any possible implementation of the first aspect.
Optionally, as an implementation, the chip may further include a memory in which instructions are stored; the processor is configured to execute the instructions stored in the memory, and when the instructions are executed, the processor executes the method of the first aspect or any possible implementation of the first aspect.
Brief description of drawings
FIG. 1 is an example diagram of an image containing flicker stripes provided by an embodiment of this application;
FIG. 2 is an example diagram of a system architecture provided by an embodiment of this application;
FIG. 3 is an example diagram of an image processing method provided by an embodiment of this application;
FIG. 4 is an example diagram of an interference-source signal waveform and its corresponding light-energy waveform provided by an embodiment of this application;
FIG. 5 is an example diagram of a method for performing flicker correction on a current frame image provided by an embodiment of this application;
FIG. 6 is an example diagram of an overall image processing flow provided by an embodiment of this application;
FIG. 7 is an example diagram of an image processing apparatus provided by an embodiment of this application;
FIG. 8 is an exemplary block diagram of the hardware structure of an image processing apparatus provided by an embodiment of this application.
Description of embodiments
For ease of understanding, the background art involved in the embodiments of this application is first introduced in detail.
At present, among common image sensors, complementary metal oxide semiconductor (CMOS) image sensors are widely used because of their flexible image capture, high sensitivity and low power consumption. However, a CMOS image sensor exposes row by row. Every pixel has the same exposure duration, and within one row every pixel also has the same exposure start time, so all pixels of the same row receive the same amount of energy. Between different rows, although the exposure duration is the same, the exposure start times differ, so different rows do not necessarily receive the same energy. If different rows receive different amounts of energy, the captured image exhibits periodic light and dark bands, i.e., flicker stripes.
If an image contains flicker stripes, its quality is reduced, which may further lower the image content recognition rate. In some fields a low recognition rate can create safety risks. For example, in autonomous driving, the information in every frame is extremely important for artificial intelligence (AI) recognition. If a picture contains flicker stripes, as shown in FIG. 1, the AI recognition algorithm may fail to recognize vehicles or other road condition information in the image, which can lead to serious traffic accidents.
The prior art typically removes flicker stripes by performing flicker correction on images frame by frame to improve image quality. Although frame-by-frame flicker correction improves image quality to some extent, the result always deviates somewhat from an interference-free image.
To address the above problems, this application provides an image processing method that determines the interference-source frequency mainly from a current frame image containing flicker stripes, and then adjusts the exposure time of the next frame according to that frequency, so as to avoid the influence of the interference source on the image at its root and improve image quality.
The embodiments of this application can be applied to the fields of road traffic, autonomous driving, and aerospace and navigation, which is not limited in this application.
For a better understanding of the solutions of the embodiments of this application, before describing the method, the system architecture of the embodiments of this application is briefly described with reference to FIG. 2.
FIG. 2 is an example diagram of a system architecture provided by an embodiment of this application. The system architecture 200 can be applied in an autonomous vehicle to collect information such as the driving environment or road conditions in the form of images and to process the collected images; it can also be applied in other apparatuses or devices that need to obtain high-quality images, which is not limited in this application.
As shown in FIG. 2, the system architecture 200 includes a camera 11, a move data center (MDC) 12 and a display apparatus 18. The MDC 12 specifically includes an image signal processor (ISP) 13, a central processing unit (CPU) 14, an external interface 15, a camera perception chip 16, a sensor fusion unit 17 and other module components. The MDC 12 is connected to the camera 11 and the display apparatus 18 through the external interface 15.
Specifically, the camera 11 is a camera without an ISP and is used to acquire images.
The MDC 12 is used to process the images acquired by the camera 11.
The ISP 13 is integrated inside the MDC 12 and performs the related image processing. For example, in the embodiments of this application the ISP 13 can be used to identify whether the current frame image contains flicker stripes; when it does, the ISP 13 can perform flicker correction on the current frame image and flicker-avoidance processing for the next frame image.
The CPU 14 performs serial and parallel processing of various tasks.
The external interface 15 is used for connection and communication between the MDC 12 and related peripherals (for example, the camera 11 and the display apparatus 18).
The camera perception chip 16 performs segmentation, recognition and other processing of the content in the image (portraits, scenes and other information).
The sensor fusion unit 17 performs fusion calculation on all sensor units peripheral to the MDC 12 and outputs the final control strategy. For example, if the system architecture 200 is used in the autonomous driving field, the sensor fusion unit 17 can fuse data from the image sensors, speed sensors, temperature sensors, torque sensors and the like peripheral to the MDC 12 and output the final vehicle control strategy.
The display apparatus 18 displays the processed images in the form of pictures or video.
FIG. 3 is an example diagram of an image processing method provided by an embodiment of this application. It should be understood that the method 300 can be applied in the system architecture 200. The method 300 includes steps S310-330, which are described in detail below.
S310: Acquire a current frame image, the current frame image including flicker stripes.
Optionally, the current frame image may be obtained from an image sensor or from another image acquisition apparatus, which is not limited in this application. It should be understood that in practice images are acquired frame by frame: the image acquired at the present moment can be recorded as the current frame image, and the image acquired at the next moment as the next frame image.
It should be understood that when the current frame image is acquired, the brightness distribution of the current frame image can first be analyzed to determine that it contains flicker stripes. Optionally, this can be determined as follows: acquire the brightness value of each pixel in the current frame image; calculate, from the pixel brightness values, the row average brightness value of each row of pixels in the current frame image; and determine from the row average brightness values that the current frame image contains flicker stripes.
The above process can also be understood as follows: when the current frame image is acquired, it can first be judged whether the current frame image contains flicker stripes. Specifically, when determining this from the row average brightness values, if the distribution of the row average brightness values meets the requirement, the current frame image is determined to contain flicker stripes; if not, it is determined not to contain them. If the current frame contains flicker stripes, steps S320-330 are executed; otherwise, the next frame image is acquired.
It should be understood that an image acquired under the influence of an interference source usually contains flicker stripes. The root cause is that the light energy falling on different rows differs, and rows that receive different light energy have different brightness. FIG. 4 is an example diagram of an interference-source signal waveform and its corresponding light-energy waveform provided by an embodiment of this application. The interference-source signal is usually a sine wave of a certain frequency, while light energy has no direction, so the corresponding light-energy waveform is as shown in FIG. 4. The light-energy distribution can also be regarded as the brightness distribution of the different rows of an image containing flicker stripes, and it is easy to see that this row brightness distribution follows a function formed by the absolute value of a sine function.
Therefore, in practice, if the brightness distribution across the rows of the acquired current frame image conforms to a function formed by the absolute value of a sine function, the current frame image can be considered to contain flicker stripes.
Specifically, the row average brightness values of the different rows of the acquired current frame image can be fitted with an objective function to obtain a fitting function; if the degree of fit of the fitting function is greater than or equal to a second threshold, it is determined that the current frame image contains flicker stripes. The objective function is a function formed by the absolute value of a sine function.
It should be understood that, since the fit uses the functional form of the absolute value of a sine function, the fitting function obtained by fitting the row average brightness values satisfies formula (1):
L = A*|sin(wt+θ)| + B    (1)
where L is the brightness value, A is the peak of the sine wave, B is called the brightness offset, t is the time variable with t ∈ [0, t'_AE], t'_AE is the exposure time of the current frame, θ is the initial time offset of the function, and w is the angular frequency of the flicker.
After the above fitting function is obtained, it is compared with the original row average brightness values to determine the degree of fit. If the degree of fit meets a certain threshold, the current frame image can be considered to contain flicker stripes.
In the embodiments of this application, when determining from the row average brightness values that the current frame image contains flicker stripes, the row average brightness values are fitted with a function formed by the absolute value of a sine function to obtain a fitting function, and the current frame image is determined to contain flicker stripes when the degree of fit reaches a certain threshold, which improves the accuracy of flicker-stripe determination.
It should be understood that the above way of determining that the current frame contains flicker stripes is merely an example; in practice, other existing methods may also be used, which is not limited in this application.
S320: Determine, according to the current frame image, the frequency of the interference source causing the flicker stripes.
Optionally, in the embodiments of this application, the interference-source frequency can be determined from the above fitting function.
Specifically, for the fitting function L = A*|sin(wt+θ)| + B, the period is π/w; hence the period of the interference source is 2π/w, and the frequency of the interference source is f = w/(2π).
It should be understood that in some fields, such as autonomous driving, interference sources of all types — traffic lights, signal lamps, tail lights and so on — often have no fixed frequency and usually exhibit diverse brightness and light-energy frequencies. To keep images free of, or less affected by, interfering light sources, a vehicle camera should be able to detect the full frequency band (1 Hz to tens of kHz) and take corresponding avoidance measures. In the embodiments of this application, a fitting function is obtained by fitting the row average brightness values of the current frame image, and the interference-source frequency causing the flicker stripes is then determined from the fitting function, so that interference sources of any frequency can be detected. This makes the image processing method applicable to more scenarios and improves the image processing capability.
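The fitting and frequency-determination steps above can be sketched with standard numerical tools. This is only an illustration under our own assumptions: the function names, the explicit initial guess `p0` and the R²-style measure of fit degree are ours — the patent does not prescribe a particular fitting algorithm or goodness-of-fit metric.

```python
import numpy as np
from scipy.optimize import curve_fit

def target(t, A, w, theta, B):
    # Objective function from the text: absolute value of a sine, plus offset.
    return A * np.abs(np.sin(w * t + theta)) + B

def detect_flicker(t, row_means, p0, r2_threshold=0.9):
    """Fit the row-average brightness curve and return
    (has_flicker, interference_frequency_hz, fitted_params).
    p0 = (A, w, theta, B) is an initial guess; a real implementation
    would search candidate frequencies rather than require one."""
    params, _ = curve_fit(target, t, row_means, p0=p0)
    residuals = row_means - target(t, *params)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((row_means - row_means.mean())**2)
    w = params[1]
    # |sin| has period pi/w = light-energy period; the interference source
    # flickers at half the light-energy frequency, so f = w / (2*pi).
    f_interference = w / (2 * np.pi)
    return r2 >= r2_threshold, f_interference, params
```
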
S330: Adjust the exposure time of the next frame according to the interference-source frequency to obtain a next frame image that does not contain flicker stripes.
In the embodiments of this application, if the current frame image contains flicker stripes, the interference-source frequency causing them can be determined from the current frame image, and the exposure time of the next frame is then adjusted according to that frequency so that the next frame image contains no flicker stripes. This avoids the influence of the interference source on the image at its root and improves image quality. Moreover, there is no need to correct every frame that contains flicker stripes, which reduces the computational burden.
It should be understood that flicker can be avoided by setting the exposure time to an integer multiple of the light-energy period, where the light-energy period is the reciprocal of the light-energy frequency, and the light-energy frequency is twice the interference-source frequency.
Therefore, the exposure time of the next frame can be adjusted according to the interference-source frequency to satisfy formula (2):
T_AE = n/(2f)    (2)
where T_AE is the exposure time of the next frame, f is the interference-source frequency, and n is a positive integer. That is, setting the exposure time of the next frame to an integer multiple of the light-energy period prevents flicker stripes in the next frame image, so that image quality is improved.
It should also be understood that the smaller the absolute value of the difference between the exposure time of the current frame and that of the next frame, the smoother the exposure transition; conversely, a larger difference causes a large jump in exposure and visible flashing on and off.
Therefore, optionally, the method 300 may further include: acquiring the exposure time of the current frame; and adjusting the exposure time of the next frame according to the interference-source frequency so that the absolute value of the difference between the exposure time of the current frame and that of the next frame is less than or equal to a first threshold, or is minimized. This makes the transition between the current frame image and the next frame image smoother and avoids inter-frame flicker.
Optionally, the method 300 may further include: performing flicker correction on the current frame image according to the brightness value of each pixel and the fitting function.
Specifically, FIG. 5 is an example diagram of a method for performing flicker correction on a current frame image provided by an embodiment of this application. The method 500 includes steps S510-540, which are described in detail below.
S510: Calculate the initial corrected brightness value of each pixel according to the brightness value of each pixel and the fitting function.
Specifically, the fitting function with time as its variable can first be converted into a fitting function with the row index as its variable.
It should be understood that for an image of x horizontal pixels by y vertical pixels, with t'_AE the exposure time of the current frame, there are 2*t'_AE*f flicker cycles at interference-source frequency f, so each cycle corresponds to x*y/(2*t'_AE*f) pixels, that is, to y/(2*t'_AE*f) rows of pixels, and then
Figure PCTCN2021117547-appb-000010
Further, the initial corrected brightness value of each pixel can be calculated according to formula (3):
Figure PCTCN2021117547-appb-000011
where L_i,j is the original brightness value of the pixel in row i, column j, L_1 is the initial corrected brightness value of the pixel in row i, column j, i ∈ [0, y], i is a positive integer, and θ ∈ [0, π).
S520: Calculate the global average brightness value of the current frame image from the row average brightness values.
Specifically, the global average brightness value of the current frame image can be calculated according to formula (4), i.e., as the mean of the row average brightness values over all rows:
Figure PCTCN2021117547-appb-000012
where
Figure PCTCN2021117547-appb-000013
is the global average brightness value and
Figure PCTCN2021117547-appb-000014
is the row average brightness value of the pixels in row i.
S530: Calculate the final corrected brightness value of each pixel according to its initial corrected brightness value, the global average brightness value and the row minimum brightness value.
In the embodiments of this application, in order to keep the overall brightness of the whole image unchanged and the image effect optimal, additional compensation must be applied to the initial corrected brightness values of this frame so that the global average brightness of the frame remains unchanged.
Specifically, the final corrected brightness value of each pixel can be calculated according to formula (5):
Figure PCTCN2021117547-appb-000015
where L_2 is the final corrected brightness value of the pixel in row i, column j, and B is the row brightness offset, which can also be regarded as the row minimum brightness value.
S540: Perform brightness correction on each pixel according to its final corrected brightness value.
Optionally, in practice, brightness compensation may also be performed first, with the final corrected brightness value then obtained from the compensated brightness values and the fitting function; this application does not limit the order.
In the embodiments of this application, when performing flicker correction on the current frame image, the brightness value of each pixel is first non-linearly restored using the fitting function to obtain its initial corrected brightness value, which improves the efficiency and accuracy of the correction; brightness compensation is then applied to each pixel based on its initial corrected brightness value, the global average brightness value and the row minimum brightness value, which avoids flashing jumps between different images and keeps the overall brightness smooth; finally, each pixel is corrected to its final corrected brightness value, which improves the quality of the current frame image and gives the best overall visual effect.
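Formulas (3) and (5) are rendered as images in the source, so the exact correction expressions are not recoverable here. The following is therefore only a plausible sketch of the structure the text describes — divide out the fitted stripe profile, then rescale to preserve the global average brightness — under our own assumptions, not the patent's definitive implementation:

```python
import numpy as np

def flicker_correct(img, A, w_row, theta, B, eps=1e-6):
    """img: 2-D float array of brightness values (rows x cols).
    A, w_row, theta, B: row-based fit L_i = A*|sin(w_row*i + theta)| + B.
    Step 1 divides out the fitted stripe profile (our stand-in for the
    patent's formula (3)); step 2 rescales so the global average
    brightness is preserved (our stand-in for formula (5))."""
    rows = np.arange(img.shape[0], dtype=float)
    profile = A * np.abs(np.sin(w_row * rows + theta)) + B    # fitted row brightness
    initial = img * ((A + B) / np.maximum(profile, eps))[:, None]  # step 1
    final = initial * (img.mean() / initial.mean())                # step 2
    return final
```

On a synthetic frame whose row means exactly follow the fitted profile, this sketch flattens the stripes while leaving the global mean unchanged, which is the stated goal of the compensation step.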
Preferably, to improve image quality and achieve the best overall visual effect in practice, if the current frame image contains flicker stripes, flicker correction of the current frame image, flicker avoidance for the next frame image and inter-frame flicker avoidance should all be achieved. A preferred solution of this application is described in detail below with reference to FIG. 6.
FIG. 6 is an example diagram of an overall image processing flow provided by an embodiment of this application. The flow 600 includes steps S610-650, which are described in detail below.
S610: Acquire an image.
In practice, images are acquired frame by frame. The image acquired at the present moment can be recorded as the current frame image, and the next image to be acquired as the next frame image.
S620: Judge whether the current frame image contains flicker stripes.
Specifically, after the current frame image is acquired, the brightness distribution of its pixels is analyzed to judge whether it contains flicker stripes. The specific judgment method has been described in detail above and is not repeated here.
It should be understood that if the current frame image contains flicker stripes, steps S630-650 are executed; if it does not, the flow returns to S610 to acquire the next frame image.
S630: Correct the current frame image.
In this embodiment, after it is confirmed that the current frame image contains flicker stripes, flicker correction is performed on it. The specific flicker correction method has been described in detail above and is not repeated here.
S640: Determine the interference-source frequency.
In this embodiment, after it is determined that the current frame image contains flicker stripes, the interference-source frequency is determined from the current frame image. The way of determining it has been described in detail above and is not repeated here.
S650: Adjust the exposure time of the next frame.
In this embodiment, after the interference-source frequency is determined, the exposure time of the next frame must also be adjusted according to it so that the next-frame exposure time satisfies T_AE = n/(2f), while at the same time the absolute value of the difference between the exposure time of the current frame and that of the next frame is less than or equal to the first threshold, so that the next frame image contains no flicker stripes and there is no inter-frame flicker between the current and next frame images.
In this embodiment, performing flicker correction on the current frame image containing flicker stripes together with flicker avoidance and inter-frame avoidance for the next frame image improves image quality and gives the best overall visual effect.
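The flow S610-S650 above can be summarized as a simple control loop. In this sketch all five stage callables are hypothetical stand-ins for the components described in the text, not a real API:

```python
def process_stream(acquire_frame, detect, correct,
                   estimate_frequency, adjust_exposure):
    """Control-loop skeleton for flow 600.  acquire_frame returns None
    when the stream ends; detect/correct/estimate_frequency/
    adjust_exposure stand in for S620/S630/S640/S650 respectively."""
    while True:
        frame = acquire_frame()                 # S610
        if frame is None:
            break
        if detect(frame):                       # S620: stripes present?
            frame_out = correct(frame)          # S630: fix current frame
            f = estimate_frequency(frame)       # S640: interference frequency
            adjust_exposure(f)                  # S650: flicker-free next frame
        else:
            frame_out = frame
        yield frame_out
```
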
FIG. 7 is an example diagram of an image processing apparatus provided by an embodiment of this application. The apparatus 700 includes an acquisition module 710 and a processing module 720. The acquisition module 710 is configured to acquire a current frame image, the current frame image including flicker stripes. The processing module 720 is configured to determine, according to the current frame image, the frequency of the interference source causing the flicker stripes, and to adjust the exposure time of the next frame according to the interference-source frequency to obtain a next frame image that does not include flicker stripes.
Optionally, the processing module 720 may be further configured to adjust the exposure time of the next frame according to the interference-source frequency to satisfy the following formula:
T_AE = n/(2f)
where T_AE is the exposure time of the next frame, f is the interference-source frequency, and n is a positive integer.
Optionally, the acquisition module 710 may be further configured to acquire the exposure time of the current frame, and the processing module 720 may be further configured to adjust the exposure time of the next frame according to the interference-source frequency so that the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is less than or equal to the first threshold.
Optionally, the acquisition module 710 may be further configured to acquire the brightness value of each pixel in the current frame image, and the processing module 720 may be further configured to calculate, from the pixel brightness values, the row average brightness value of each row of pixels in the current frame image, and to determine from the row average brightness values that the current frame image includes flicker stripes.
Optionally, the processing module 720 may be further configured to fit the row average brightness values with an objective function to obtain a fitting function, the objective function being a function formed by the absolute value of a sine function, and, if the degree of fit of the fitting function is greater than or equal to the second threshold, to determine that the current frame image includes flicker stripes.
Optionally, the processing module 720 may be further configured to determine the interference-source frequency according to the fitting function.
Optionally, the processing module 720 may be further configured to perform flicker correction on the current frame image according to the brightness value of each pixel and the fitting function.
Optionally, the processing module 720 may be further configured to: calculate the initial corrected brightness value of each pixel according to the brightness value of each pixel and the fitting function; calculate the global average brightness value of the current frame image from the row average brightness values; calculate the final corrected brightness value of each pixel from its initial corrected brightness value, the global average brightness value and the row minimum brightness value; and perform brightness correction on each pixel according to its final corrected brightness value.
FIG. 8 is an exemplary block diagram of the hardware structure of an image processing apparatus provided by an embodiment of this application. The apparatus 800 (which may specifically be a computer device) includes a memory 810, a processor 820, a communication interface 830 and a bus 840; the memory 810, the processor 820 and the communication interface 830 are communicatively connected to one another through the bus 840.
The memory 810 may be a read-only memory (ROM), a static storage device, a dynamic storage device or a random access memory (RAM). The memory 810 may store a program; when the program stored in the memory 810 is executed by the processor 820, the processor 820 is configured to execute the steps of the image processing method of the embodiments of this application.
The processor 820 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU) or one or more integrated circuits, and executes the relevant programs to implement the image processing method of the method embodiments of this application.
The processor 820 may also be an integrated circuit chip with signal processing capability. During implementation, each step of the image processing method of this application may be completed by an integrated logic circuit of hardware in the processor 820 or by instructions in the form of software.
The processor 820 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component, and can implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in the embodiments of this application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 810; the processor 820 reads the information in the memory 810 and, in combination with its hardware, completes the functions required of the modules included in the image processing apparatus of the embodiments of this application, or executes the image processing method of the method embodiments of this application.
The communication interface 830 uses a transceiver apparatus such as, but not limited to, a transceiver to implement communication between the apparatus 800 and other devices or a communication network.
The bus 840 may include a pathway for transferring information between the various components of the apparatus 800 (for example, the memory 810, the processor 820 and the communication interface 830).
An embodiment of this application further provides a vehicle including the above apparatus 700, where the apparatus 700 can perform the method 300 or 600. It should be understood that the vehicle may be a smart car, a new energy vehicle, a connected vehicle, an intelligent driving car, etc., which is not specifically limited in this application.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application — in essence, or the part contributing to the prior art, or part of the technical solution — may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are merely specific embodiments of this application, but the protection scope of this application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (18)

  1. An image processing method, comprising:
    acquiring a current frame image, the current frame image including flicker stripes;
    determining, according to the current frame image, the frequency of the interference source causing the flicker stripes;
    adjusting the exposure time of the next frame according to the interference-source frequency to obtain a next frame image that does not include flicker stripes.
  2. The method according to claim 1, wherein adjusting the exposure time of the next frame according to the interference-source frequency to obtain a next frame image that does not include flicker stripes comprises:
    adjusting the exposure time of the next frame according to the interference-source frequency to satisfy the following formula:
    T_AE = n/(2f)
    where T_AE is the exposure time of the next frame, f is the interference-source frequency, and n is a positive integer.
  3. The method according to claim 1 or 2, further comprising:
    acquiring the exposure time of the current frame;
    adjusting the exposure time of the next frame according to the interference-source frequency so that the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is less than or equal to a first threshold.
  4. The method according to any one of claims 1 to 3, wherein acquiring the current frame image comprises:
    acquiring the brightness value of each pixel in the current frame image;
    calculating, from the pixel brightness values, the row average brightness value of each row of pixels in the current frame image;
    determining, from the row average brightness values, that the current frame image includes the flicker stripes.
  5. The method according to claim 4, wherein determining from the row average brightness values that the current frame image includes the flicker stripes comprises:
    fitting the row average brightness values with an objective function to obtain a fitting function, the objective function being a function formed by the absolute value of a sine function;
    if the degree of fit of the fitting function is greater than or equal to a second threshold, determining that the current frame image includes the flicker stripes.
  6. The method according to claim 5, wherein determining, according to the current frame image, the frequency of the interference source causing the flicker stripes comprises:
    determining the interference-source frequency according to the fitting function.
  7. The method according to claim 5 or 6, further comprising:
    performing flicker correction on the current frame image according to the brightness value of each pixel and the fitting function.
  8. The method according to claim 7, wherein performing flicker correction on the current frame image comprises:
    calculating the initial corrected brightness value of each pixel according to the brightness value of each pixel and the fitting function;
    calculating the global average brightness value of the current frame image from the row average brightness values;
    calculating the final corrected brightness value of each pixel from its initial corrected brightness value, the global average brightness value and the row minimum brightness value;
    performing brightness correction on each pixel according to its final corrected brightness value.
  9. An image processing apparatus, comprising:
    an acquisition module configured to acquire a current frame image, the current frame image including flicker stripes;
    a processing module configured to determine, according to the current frame image, the frequency of the interference source causing the flicker stripes, and to adjust the exposure time of the next frame according to the interference-source frequency to obtain a next frame image that does not include flicker stripes.
  10. The apparatus according to claim 9, wherein the processing module is further configured to:
    adjust the exposure time of the next frame according to the interference-source frequency to satisfy the following formula:
    T_AE = n/(2f)
    where T_AE is the exposure time of the next frame, f is the interference-source frequency, and n is a positive integer.
  11. The apparatus according to claim 9 or 10, wherein
    the acquisition module is further configured to acquire the exposure time of the current frame;
    the processing module is further configured to adjust the exposure time of the next frame according to the interference-source frequency so that the absolute value of the difference between the exposure time of the current frame and the exposure time of the next frame is less than or equal to a first threshold.
  12. The apparatus according to any one of claims 9 to 11, wherein
    the acquisition module is further configured to acquire the brightness value of each pixel in the current frame image;
    the processing module is further configured to calculate, from the pixel brightness values, the row average brightness value of each row of pixels in the current frame image, and to determine from the row average brightness values that the current frame image includes the flicker stripes.
  13. The apparatus according to claim 12, wherein the processing module is further configured to:
    fit the row average brightness values with an objective function to obtain a fitting function, the objective function being a function formed by the absolute value of a sine function; and, if the degree of fit of the fitting function is greater than or equal to a second threshold, determine that the current frame image includes the flicker stripes.
  14. The apparatus according to claim 13, wherein the processing module is further configured to:
    determine the interference-source frequency according to the fitting function.
  15. The apparatus according to claim 13 or 14, wherein the processing module is further configured to:
    perform flicker correction on the current frame image according to the brightness value of each pixel and the fitting function.
  16. The apparatus according to claim 15, wherein the processing module is further configured to:
    calculate the initial corrected brightness value of each pixel according to the brightness value of each pixel and the fitting function; calculate the global average brightness value of the current frame image from the row average brightness values; calculate the final corrected brightness value of each pixel from its initial corrected brightness value, the global average brightness value and the row minimum brightness value; and perform brightness correction on each pixel according to its final corrected brightness value.
  17. An image processing apparatus, comprising a processor coupled to a memory;
    the memory being configured to store instructions;
    the processor being configured to execute the instructions stored in the memory so that the apparatus performs the method according to any one of claims 1 to 8.
  18. A computer-readable medium comprising instructions which, when run on a processor, cause the processor to perform the method according to any one of claims 1 to 8.
PCT/CN2021/117547 2020-10-12 2021-09-10 Image processing method and apparatus WO2022078128A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21879174.7A EP4207734A1 (en) 2020-10-12 2021-09-10 Image processing method and apparatus
US18/295,647 US20230234509A1 (en) 2020-10-12 2023-04-04 Image processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011084864.8A CN114422656A (zh) 2020-10-12 2020-10-12 图像处理方法和装置
CN202011084864.8 2020-10-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/295,647 Continuation US20230234509A1 (en) 2020-10-12 2023-04-04 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2022078128A1 true WO2022078128A1 (zh) 2022-04-21

Family

ID=81207696

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/117547 WO2022078128A1 (zh) 2020-10-12 2021-09-10 图像处理方法和装置

Country Status (4)

Country Link
US (1) US20230234509A1 (zh)
EP (1) EP4207734A1 (zh)
CN (1) CN114422656A (zh)
WO (1) WO2022078128A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116184364A (zh) * 2023-04-27 2023-05-30 上海杰茗科技有限公司 一种iToF相机的多机干扰检测与去除方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070146500A1 (en) * 2005-12-22 2007-06-28 Magnachip Semiconductor Ltd. Flicker detecting circuit and method in image sensor
CN106572345A (zh) * 2015-10-13 2017-04-19 富士通株式会社 闪烁检测装置及方法
CN106973239A (zh) * 2016-01-13 2017-07-21 三星电子株式会社 图像捕获设备和操作该图像捕获设备的方法
CN110213497A (zh) * 2019-05-15 2019-09-06 成都微光集电科技有限公司 一种检测图像闪烁条纹方法及调整图像曝光时间的方法
CN111355864A (zh) * 2020-04-16 2020-06-30 浙江大华技术股份有限公司 一种图像闪烁消除方法及装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3823314B2 (ja) * 2001-12-18 2006-09-20 ソニー株式会社 撮像信号処理装置及びフリッカ検出方法
JP2004260574A (ja) * 2003-02-26 2004-09-16 Matsushita Electric Ind Co Ltd フリッカ検出方法およびフリッカ検出装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070146500A1 (en) * 2005-12-22 2007-06-28 Magnachip Semiconductor Ltd. Flicker detecting circuit and method in image sensor
CN106572345A (zh) * 2015-10-13 2017-04-19 富士通株式会社 闪烁检测装置及方法
CN106973239A (zh) * 2016-01-13 2017-07-21 三星电子株式会社 图像捕获设备和操作该图像捕获设备的方法
CN110213497A (zh) * 2019-05-15 2019-09-06 成都微光集电科技有限公司 一种检测图像闪烁条纹方法及调整图像曝光时间的方法
CN111355864A (zh) * 2020-04-16 2020-06-30 浙江大华技术股份有限公司 一种图像闪烁消除方法及装置

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116184364A (zh) * 2023-04-27 2023-05-30 上海杰茗科技有限公司 一种iToF相机的多机干扰检测与去除方法及装置

Also Published As

Publication number Publication date
EP4207734A1 (en) 2023-07-05
CN114422656A (zh) 2022-04-29
US20230234509A1 (en) 2023-07-27

Similar Documents

Publication Publication Date Title
CN111368706B (zh) 基于毫米波雷达和机器视觉的数据融合动态车辆检测方法
US11244464B2 (en) Method and apparatus for performing depth estimation of object
US20200193832A1 (en) Image generating apparatus, image generating method, and recording medium
CN103400150B (zh) 一种基于移动平台进行道路边缘识别的方法及装置
US9432590B2 (en) DCT based flicker detection
US20200250454A1 (en) Video data processing
US20210014402A1 (en) Flicker mitigation via image signal processing
CN102932582A (zh) 实现运动检测的方法及装置
CN113029128B (zh) 视觉导航方法及相关装置、移动终端、存储介质
US20130093923A1 (en) Image generation device and image generation system, method and program
CN111027415B (zh) 一种基于偏振图像的车辆检测方法
CN115226406A (zh) 图像生成装置、图像生成方法、记录介质生成方法、学习模型生成装置、学习模型生成方法、学习模型、数据处理装置、数据处理方法、推断方法、电子装置、生成方法、程序和非暂时性计算机可读介质
US20220270266A1 (en) Foreground image acquisition method, foreground image acquisition apparatus, and electronic device
WO2022078128A1 (zh) 图像处理方法和装置
WO2021102893A1 (zh) 视频防抖优化处理方法和装置、电子设备
CN109327626A (zh) 图像采集方法、装置、电子设备和计算机可读存储介质
CN113344820B (zh) 图像处理方法及装置、计算机可读介质、电子设备
US11544918B2 (en) Vehicle to infrastructure system and method with long wave infrared capability
CN103604945A (zh) 一种三通道cmos同步偏振成像系统
JP2010226652A (ja) 画像処理装置、画像処理方法、および、コンピュータプログラム
CN110930340B (zh) 一种图像处理方法及装置
US11706529B2 (en) Blur correction device, imaging apparatus, monitoring system, and non-transitory computer-readable storage medium
CN104376316B (zh) 车牌图像采集方法及装置
CN115278189A (zh) 图像色调映射方法及装置、计算机可读介质和电子设备
US11270412B2 (en) Image signal processor, method, and system for environmental mapping

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879174

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023519805

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021879174

Country of ref document: EP

Effective date: 20230329

NENP Non-entry into the national phase

Ref country code: DE