WO2021168755A1 - Image processing method, apparatus and device - Google Patents

Image processing method, apparatus and device

Info

Publication number
WO2021168755A1
Authority
WO
WIPO (PCT)
Prior art keywords
image frame
area
current image
region
previous
Prior art date
Application number
PCT/CN2020/077036
Other languages
English (en)
French (fr)
Inventor
马元蛟
罗俊
权威
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to CN202080094354.0A priority Critical patent/CN115004227A/zh
Priority to PCT/CN2020/077036 priority patent/WO2021168755A1/zh
Priority to EP20921228.1A priority patent/EP4105886A4/en
Publication of WO2021168755A1 publication Critical patent/WO2021168755A1/zh
Priority to US17/896,903 priority patent/US20220414896A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20182Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the embodiments of the present application relate to electronic technology, and relate to, but are not limited to, image processing methods, devices, and equipment.
  • A difficulty of multi-frame noise reduction algorithms arises when global motion of the image acquisition module (such as a camera) and local motion of moving objects occur at the same time.
  • The large changes in the amount of motion in the time domain make it difficult to detect the correlation between adjacent image frames, and thus difficult to accurately calculate the motion information of objects between adjacent image frames.
  • The accuracy of motion vector detection decreases, which causes poor inter-frame alignment, so that ghosting, increased noise, or blurring occurs after the two frames are fused. It can be seen that in the related art, when global motion of the image capture device and local motion of a moving object occur at the same time during video capture, the multi-frame noise reduction effect of the video is poor.
  • the embodiments of the present application provide image processing methods, devices, and equipment.
  • In a first aspect, an embodiment of the present application provides an image processing method: obtaining feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and the previous image frame;
  • obtaining feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition; and
  • fusing the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame, where the processed current image frame is used as the previous image frame of the next image frame when the next image frame is processed.
  • In a second aspect, an embodiment of the present application provides an image processing device, including: a first obtaining module configured to obtain feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and the previous image frame;
  • a second obtaining module configured to obtain feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among the plurality of first pixels of the current image frame, whose association with pixels among the plurality of second pixels of the previous image frame meets a condition; and
  • a processing module configured to perform fusion processing on the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame, where the processed current image frame is used as the previous image frame of the next image frame when the next image frame is processed.
  • In a third aspect, an embodiment of the present application provides an electronic device including a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor implements the steps of the image processing method of the first aspect when executing the program.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the image processing method in the first aspect are implemented.
  • The embodiments of the present application provide an image processing method, device, and equipment: obtaining feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and the previous image frame; and obtaining feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among the plurality of first pixels of the current image frame, whose association with pixels among the plurality of second pixels of the previous image frame meets a condition. Here, the feature information of the first region and the feature information of the second region are used to characterize the correlation between adjacent image frames. Further, the previous image frame and the current image frame are fused based on this feature information to obtain the processed current image frame. In this way, during the fusion process, not only can the motion information of objects between adjacent image frames be accurately calculated, but the inter-frame alignment accuracy is also improved, avoiding ghosting in the fusion.
  • FIG. 1 is a schematic diagram of an implementation flow of an image processing method provided by an embodiment of the application
  • FIG. 2 is a schematic diagram of the implementation flow of another image processing method provided by an embodiment of the application.
  • FIG. 3 is a schematic flowchart of a fusion process for a previous image frame and a current image frame according to an embodiment of the application;
  • FIG. 4 is a schematic diagram of an implementation flow of another image processing method provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of determining characteristic information of an exemplary first area provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of determining characteristic information of an exemplary second area provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of an implementation flow of an image processing method provided by another embodiment of this application.
  • FIG. 8 is a schematic diagram of an implementation flow of another image processing method provided by another embodiment of this application.
  • FIG. 9 is a schematic diagram of an exemplary area corresponding to a first parameter and an area corresponding to a second parameter according to an embodiment of the application;
  • FIG. 10 is a first schematic diagram of an exemplary image before and after fusion provided by an embodiment of the application.
  • FIG. 11 is a second schematic diagram of an exemplary image before and after fusion provided by an embodiment of the application.
  • FIG. 12 is a schematic diagram obtained by fusing adjacent frames based on an image processing method of the related art;
  • FIG. 13 is a schematic diagram obtained by fusing adjacent frames based on the image processing method implemented in this application;
  • FIG. 14 is a schematic diagram of the composition structure of an image processing device provided by an embodiment of the application.
  • FIG. 15 is a schematic diagram of a hardware entity of an electronic device provided by an embodiment of this application.
  • The terms "first/second/third" in the embodiments of this application merely distinguish similar objects and do not represent a specific ordering of the objects. Understandably, where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
  • In the related art, during video multi-frame noise reduction, a motion vector is first detected between two consecutive frames, where the previous frame is the noise-reduced image frame and the current frame is the noisy image frame.
  • The two frames are then aligned, for example by image warping, to minimize the motion-vector difference between them, and the two frames are then fused to achieve noise reduction of the current frame.
  • However, when global motion of the image acquisition module and local motion of a moving object occur at the same time, for example a scene in which a moving object such as a car passes while a handheld image acquisition module is moving, the two consecutive frames exhibit large changes in the amount of motion in the time domain, large illumination changes, or motion occlusion. This makes it difficult to detect the correlation between adjacent frames and to accurately calculate the motion information of objects between them; the decreased motion vector detection accuracy causes poor inter-frame alignment, so that ghosting, increased noise, or blurring appears after fusion, degrading the final noise reduction effect.
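  • For orientation, the following is a minimal sketch of the related-art pipeline just described, under the simplifying assumption of a pure-translation motion model; all function and parameter names here are illustrative, not taken from the patent:

```python
import numpy as np

def global_motion_vector(prev, cur, search=8):
    # Crude whole-frame block matching: test integer shifts and keep the
    # one minimizing the mean absolute difference (pure-translation model).
    best_err, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(prev.astype(np.int32), (dy, dx), axis=(0, 1))
            err = np.abs(shifted - cur).mean()
            if best_err is None or err < best_err:
                best_err, best_mv = err, (dy, dx)
    return best_mv

def naive_mfnr(prev_denoised, cur_noisy):
    # Align the denoised previous frame to the noisy current frame
    # ("warping", here a simple wrap-around shift), then fuse by averaging.
    dy, dx = global_motion_vector(prev_denoised, cur_noisy)
    aligned = np.roll(prev_denoised, (dy, dx), axis=(0, 1))
    return ((aligned.astype(np.uint16) + cur_noisy) // 2).astype(np.uint8)
```

  • As the passage notes, such a single global alignment breaks down when camera motion and object motion occur together, which is the failure mode the method below addresses.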
  • the embodiment of the present application provides an image processing method, which is applied to an electronic device. As shown in FIG. 1, the method includes the following steps:
  • Step 101 Obtain feature information of the first region in the current image frame.
  • the first area includes the area in the current image frame determined by performing motion estimation based on the optical flow method on the current image frame and the previous image frame.
  • electronic devices may include mobile terminal devices such as mobile phones, tablet computers, notebook computers, personal digital assistants (PDAs), cameras, wearable devices, and fixed terminal devices such as desktop computers.
  • the electronic device may include an image acquisition module and acquire a video image through the image acquisition module.
  • the video image includes a plurality of image frames, and then the plurality of image frames are processed.
  • the electronic device may also establish a communication connection with the image acquisition module to acquire the video image collected by the image acquisition module, and then process multiple image frames contained in the video image.
  • the feature information of the first region is used to characterize the correlation between two adjacent image frames, the current image frame and the previous image frame.
  • Step 102 Obtain feature information of the second region in the current image frame.
  • the second area includes an area corresponding to a pixel point in which the association relationship between a plurality of first pixel points of the current image frame and a plurality of second pixel points of the previous image frame meets the conditions.
  • the feature information of the second region is used to characterize the correlation between two adjacent image frames, the current image frame and the previous image frame. Wherein, there may be no overlap between the first area and the second area, or there may be a partial overlap area between the first area and the second area.
  • Step 103 Perform fusion processing on the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain the processed current image frame.
  • the processed current image frame is used as the previous image frame of the next image frame, and the next image frame is processed.
  • In the embodiment of the present application, after the electronic device obtains the feature information of the first region and the feature information of the second region, it can perform fusion processing on the previous image frame and the current image frame based on that feature information, that is, on the correlation between adjacent image frames, to obtain the processed current image frame.
  • The image processing method provided by the embodiment of the present application obtains feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and the previous image frame; and obtains feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among the plurality of first pixels of the current image frame, whose association with pixels among the plurality of second pixels of the previous image frame meets a condition. Here, the feature information of the first region and of the second region is used to characterize the correlation between adjacent image frames. Further, the previous image frame and the current image frame are fused based on this feature information to obtain the processed current image frame. In this way, during fusion, not only can the motion information of objects between adjacent image frames be accurately calculated, but the inter-frame alignment accuracy is also improved, avoiding ghosting in the fusion. Meanwhile, the processed current image frame is used as the previous image frame of the next image frame when the next image frame is processed. This solves the problem in the related art that the multi-frame noise reduction effect of video is poor when global motion of the image capture device and local motion of a moving object occur at the same time, improving the multi-frame noise reduction effect and the video quality.
  • the embodiment of the present application provides an image processing method, which is applied to an electronic device. As shown in FIG. 2, the method includes the following steps:
  • Step 201 Obtain feature information of the first region in the current image frame.
  • the first area includes the area in the current image frame determined by performing motion estimation based on the optical flow method on the current image frame and the previous image frame.
  • Step 202 Obtain feature information of the second region in the current image frame.
  • the second area includes an area corresponding to a pixel point in which the association relationship between a plurality of first pixel points of the current image frame and a plurality of second pixel points of the previous image frame meets the conditions.
  • the characteristic information of the first region is used to identify the first region
  • the characteristic information of the second region is used to identify the second region. That is, after the electronic device obtains the characteristic information for identifying the first area and the characteristic information for identifying the second area, it can perform noise reduction processing on multiple frames based on the characteristic information to obtain a high-quality image.
  • Step 203 Determine the third area and the fourth area in the current image frame based on the feature information of the first area and the feature information of the second area.
  • the third area represents the area where the current image frame has local motion relative to the previous image frame
  • the fourth area represents the area where the current image frame has global motion relative to the previous image frame.
  • Step 204 Obtain a first parameter corresponding to the third area.
  • The first parameter is used to reduce the proportion of the previous image frame when the two frames are fused; it can also be understood as increasing the proportion of the current image frame in the fusion.
  • Step 205 Obtain a second parameter corresponding to the fifth area in the previous image frame associated with the fourth area.
  • The second parameter is used to increase the proportion of the previous image frame when the two frames are fused; it can also be understood as reducing the proportion of the current image frame in the fusion.
  • Step 206 Perform fusion processing on the previous image frame and the current image frame based on the first parameter and the second parameter to obtain the processed current image frame.
  • the processed current image frame is used as the previous image frame of the next image frame, and the next image frame is processed.
  • The pixel values of the third area of the current target image frame are greater than the pixel values of the third area of the current image frame, and/or the pixel values of the fifth area of the previous target image frame are greater than the pixel values of the fifth area of the previous image frame.
  • step 206 performs fusion processing on the previous image frame and the current image frame based on the first parameter and the second parameter to obtain the processed current image frame, including:
  • Step 206a Adjust the pixel value of the third area of the current image frame based on the first parameter to obtain the current target image frame.
  • the electronic device adjusts the pixel value of the third region of the current image frame based on the first parameter, which may be to increase the pixel value of the third region of the current image frame based on the first parameter to obtain the current target image frame.
  • Step 206b Adjust the pixel value of the fifth area of the previous image frame based on the second parameter to obtain the previous target image frame.
  • the electronic device adjusts the pixel value of the fifth area of the previous image frame based on the second parameter, which may be to increase the pixel value of the fifth area of the previous image frame based on the second parameter to obtain the previous target image frame .
  • Step 206c Perform fusion processing on the current target image frame and the previous target image frame to obtain the processed current image frame.
  • Here, the electronic device's fusion of the current target image frame and the previous target image frame may be averaging the two pixel values of each pair of corresponding pixels in the two frames to obtain the processed current image frame.
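  • One hedged reading of steps 206a to 206c follows, treating the first and second parameters as simple gain factors applied to their respective regions before per-pixel averaging; the parameter values and mask names are illustrative assumptions, not given by the patent:

```python
import numpy as np

def fuse_adjusted(cur, prev, third_mask, fifth_mask, p1=1.2, p2=1.2):
    # Raise pixel values in the local-motion (third) area of the current
    # frame and in the associated fifth area of the previous frame, then
    # average corresponding pixels of the two adjusted frames.
    cur_t, prev_t = cur.astype(np.float64), prev.astype(np.float64)
    cur_t[third_mask] *= p1    # assumed multiplicative "first parameter"
    prev_t[fifth_mask] *= p2   # assumed multiplicative "second parameter"
    return np.clip((cur_t + prev_t) / 2.0, 0, 255).astype(np.uint8)
```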
  • the electronic device calculates the optical flow result from the previous image frame to the current image frame based on the previous image frame and the current image frame, and the optical flow result represents an area with optical flow.
  • The electronic device calculates the difference result between the previous image frame and the current image frame based on the two frames; in the difference result, the area in which the absolute value of the difference is greater than a target threshold is treated as an area with optical flow.
  • the electronic device determines, based on the optical flow result and the difference result, that the moving object area in the current image frame includes the third area, that is, the local motion area, and the fourth area, that is, the global motion area.
  • the electronic device performs fusion processing on the previous image frame and the current image frame based on the attribute characteristics of the third area and the fourth area to obtain the processed current image frame. Finally, the electronic device uses the processed current image frame as the previous image frame of the next frame to perform multi-frame noise reduction processing.
  • the result of the optical flow from the previous image frame to the current image frame calculated by the electronic device is the black block area in FIG. 5.
  • the difference result between the previous image frame and the current image frame calculated by the electronic device is the black block area in FIG. 6.
  • The image processing method provided by the embodiment of the present application uses the optical flow result and the difference result between the two consecutive frames to accurately distinguish the local motion area from the global motion area. According to this segmentation result, the attribute characteristics of the local motion area and the global motion area are determined, and different noise reduction parameters are used for the two to adjust the fusion ratio of each area, eliminating ghosting of moving objects while ensuring the noise reduction effect in other areas of the background.
  • the embodiment of the present application provides an image processing method applied to an electronic device. As shown in FIG. 7, the method includes the following steps:
  • Step 301 Down-sampling the current image frame to obtain the first target image frame.
  • In the embodiment of the present application, the electronic device down-samples the current image frame, for example by reducing its size, to obtain the first target image frame, so as to speed up motion area detection.
  • Step 302 Down-sampling the previous image frame to obtain a second target image frame.
  • In the embodiment of the present application, the electronic device down-samples the previous image frame, for example by reducing its size, to obtain the second target image frame, so as to speed up motion area detection.
  • Step 303 Obtain a first optical flow prediction area from the second target image frame to the first target image frame based on the optical flow method.
  • Step 304 Determine that an area in the current image frame that has a mapping relationship with the first optical flow prediction area is the first area.
  • the electronic device determines that the area in the current image frame that has a mapping relationship with the first optical flow prediction area is the first area, that is, the calculated optical flow result is mapped to the current image frame.
  • Step 305 Obtain feature information of the first region in the current image frame.
  • the first area includes the area in the current image frame determined by performing motion estimation based on the optical flow method on the current image frame and the previous image frame.
  • the electronic device maps the calculated optical flow result to the current image frame, it marks the block with the optical flow to obtain the characteristic information of the first region in the current image frame.
  • Step 306 Obtain the second absolute value of the difference between the gray values of the plurality of third pixels of the first target image frame and the gray values of the plurality of fourth pixels of the second target image frame.
  • Step 307 Determine that the area corresponding to the pixel point with the second absolute value greater than the second threshold is the second optical flow prediction area.
  • Step 308 Determine that an area in the current image frame that has a mapping relationship with the second optical flow prediction area is the second area.
  • the electronic device determines that the area in the current image frame that has a mapping relationship with the second optical flow prediction area is the second area, that is, the calculated difference result is mapped to the current image frame.
  • Step 309 Obtain feature information of the second region in the current image frame.
  • the second area includes an area corresponding to a pixel point in which the association relationship between a plurality of first pixel points of the current image frame and a plurality of second pixel points of the previous image frame meets the conditions.
  • the electronic device maps the calculated difference result to the current image frame, it marks the block with optical flow to obtain the characteristic information of the second region in the current image frame.
  • Step 310 Perform fusion processing on the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain the processed current image frame.
  • the processed current image frame is used as the previous image frame of the next image frame, and the next image frame is processed.
  • the embodiment of the present application provides an image processing method. As shown in FIG. 8, the method includes the following steps:
  • Step 1 The electronic device calculates the optical flow result from the previous image frame (reference image) to the current image frame (base image).
  • Here, to save processing time, the sizes of the reference image and the base image are first reduced, for example both by 1/2.
  • The optical flow is then detected on the reduced-size images based on a block matching algorithm, where the block size can be 16x16.
  • the electronic device maps the calculated optical flow result to the base image, and marks the block with the optical flow as 0, as shown in the black block area in FIG. 5.
  • Here, the electronic device may use a block matching optical flow calculation method to detect the optical flow of moving objects, marking such areas as areas with larger motion vectors.
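  • A minimal sketch of Step 1 as described, assuming block matching on downsampled frames and marking 16x16 blocks whose best motion vector is large; the thresholds and the striding downsample are illustrative assumptions:

```python
import numpy as np

def downsample(img, factor=2):
    # Stand-in for a proper resize: keep every `factor`-th pixel.
    return img[::factor, ::factor]

def optical_flow_mask(ref, base, block=16, search=4, mv_thresh=1):
    # For each block of the downsampled base image, search a small window
    # in the reference image for the best match; blocks whose best motion
    # vector exceeds the threshold are marked 0 (optical flow present).
    ref, base = downsample(ref), downsample(base)
    h, w = base.shape
    mask = np.ones((h // block, w // block), dtype=np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            blk = base[y:y + block, x:x + block].astype(np.int32)
            best_err, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = ref[yy:yy + block, xx:xx + block].astype(np.int32)
                    err = np.abs(cand - blk).mean()
                    if best_err is None or err < best_err:
                        best_err, best_mv = err, (dy, dx)
            if max(abs(best_mv[0]), abs(best_mv[1])) > mv_thresh:
                mask[by, bx] = 0  # block with optical flow, per Step 1
    return mask
```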
  • Step2 The electronic device calculates the difference between the reference image and the base image.
  • Similarly, to save processing time, the sizes of the reference image and the base image are first reduced. The absolute value of the difference is then calculated for each pixel of the two frames on the reduced-size images. If the absolute value is less than a threshold, the two frames are considered highly correlated at that pixel; if it is greater than the threshold, the pixel differs significantly between the two frames, possibly indicating a moving object or an area with a large illumination change, and it is marked as 0. Further, the electronic device maps the calculated difference result to the base image and marks the blocks with optical flow as 0, as shown in the black block area in FIG. 6.
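  • A corresponding sketch of Step 2; the pixel-difference threshold is chosen arbitrarily for illustration, and blocks containing pixels whose inter-frame difference exceeds it are marked 0:

```python
import numpy as np

def difference_mask(ref, base, block=16, thresh=12, factor=2):
    # Absolute per-pixel difference on the downsampled frames; pixels whose
    # difference exceeds the threshold indicate low inter-frame correlation
    # (possible motion or illumination change), and their block is marked 0.
    r = ref[::factor, ::factor].astype(np.int32)
    b = base[::factor, ::factor].astype(np.int32)
    big_diff = np.abs(r - b) > thresh
    h, w = big_diff.shape
    mask = np.ones((h // block, w // block), dtype=np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            if big_diff[by * block:(by + 1) * block,
                        bx * block:(bx + 1) * block].any():
                mask[by, bx] = 0
    return mask
```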
  • Step3 The electronic device detects the moving object area and the global moving area.
  • the electronic device accurately detects the moving object area based on the optical flow results and the difference results of the two frames before and after calculated in the above two steps.
  • The inventors found through experiments that the optical flow result may falsely detect optical flow in the global motion area of the background, which the difference result avoids.
  • Meanwhile, the difference result may falsely detect the parts occluded by the moving object between the two frames, which the optical flow result covers. Therefore, through many experiments, the inventors determined that only the parts where both results are 0 constitute the final local moving object area.
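  • Combining the two detectors as the passage describes: a block belongs to the local moving object area only when both results are 0. Treating every remaining block as the globally moving background is an assumption of this sketch:

```python
import numpy as np

def segment_motion(flow_mask, diff_mask):
    # Local moving-object blocks: flagged (0) by BOTH detectors, which
    # suppresses the false positives each detector produces on its own.
    local_moving = (flow_mask == 0) & (diff_mask == 0)
    # Remaining blocks are treated here as the global-motion background.
    global_area = ~local_moving
    return local_moving, global_area
```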
  • Step4 The electronic device performs the fusion processing of the front and rear frames.
  • After the electronic device detects the local moving object area and the global motion area, it provides different noise reduction parameters for the different areas to adaptively adjust the fusion ratio and perform the fusion processing of the two frames.
  • The local moving object area is prone to ghosting; therefore, the proportion of the reference image in the two-frame fusion is reduced, and the pixel values of the base image are used as much as possible to avoid ghosting during fusion. Further, in the embodiment of the present application, a smaller sigma value is provided for this area to minimize the proportion of the reference image.
  • Conversely, for global motion areas such as the background, to ensure the noise reduction effect, the pixel values of the noise-reduced previous-frame reference image are used to the maximum extent, so a larger sigma is provided to increase the fusion proportion of the reference image.
  • The electronic device performs noise reduction on the fused image based on the above parameters using the following formula; the noise reduction filter may be a Gaussian noise reduction filter, calculated as follows:
  • Base_Y = base_Y*(1-weight) + ref_Y*weight
  • where Y represents a gray value; base_Y represents the gray value of the current image frame; ref_Y represents the gray value of the previous image frame; diffVal represents the absolute value of the difference between the gray value of the current image frame and that of the previous image frame; mu represents the Gaussian noise reduction function value; sigma represents the coefficient of the Gaussian function; lambda represents the weight of the function value; weight represents the weight of the area; and Base_Y represents the gray value of the current image frame after fusion.
  • Further, based on the above parameters, the fused image is denoised to obtain the processed current image frame.
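  • Putting the formula together as a sketch: a per-pixel sigma map is built from the segmentation (small sigma over local moving objects, large sigma over the background), followed by the weighted fusion quoted above. The per-pixel sigma map is an assumption of this illustration (the patent states only that different sigmas are used per region), and the Gaussian form of mu, exp(-diffVal^2/(2*sigma^2)), is inferred from the parameter table given later in the description:

```python
import numpy as np

def fuse_frames(base_y, ref_y, local_mask, sigma_local=8.0,
                sigma_global=16.0, lam=0.7):
    # Small sigma on local motion -> small weight -> trust the current
    # frame (avoids ghosting); large sigma on the background -> larger
    # weight -> lean on the already-denoised previous frame.
    base, ref = base_y.astype(np.float64), ref_y.astype(np.float64)
    sigma = np.where(local_mask, sigma_local, sigma_global)
    diff_val = np.abs(base - ref)
    mu = np.exp(-diff_val**2 / (2.0 * sigma**2))  # Gaussian function value
    weight = lam * mu                             # weight of the reference frame
    fused = base * (1.0 - weight) + ref * weight  # Base_Y formula above
    return np.clip(fused, 0, 255).astype(np.uint8)
```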
  • the embodiment of the present application adopts different sigmas for noise reduction based on local and global motion regions, which effectively eliminates ghosting around moving objects and ensures the noise reduction effect of the background region.
  • the multi-frame noise reduction method in the related technology is used to fuse the previous image frame and the current image frame, and the obtained current image frame after denoising still has noise and ghosts.
  • In order to obtain the detection result of the local motion area more accurately, the present application considers not only the optical flow result but also the grayscale difference result of the preceding and following frames.
  • In the two-frame fusion stage, the embodiment of the application applies different sigma values to the separated moving object area and the global motion area for noise reduction.
  • With the image processing method provided by this application, the previous image frame and the current image frame are fused while adjusting the fusion ratio of the moving object area and the global motion area, avoiding ghosting on moving objects and ensuring the noise reduction effect of the global motion area of the background.
  • The embodiments of the present application obtain the following beneficial effects: the optical flow detection result and the gray value difference result between the two consecutive frames are used to accurately distinguish the local motion area from the global motion area; then, according to the segmentation result, different noise reduction parameters are used for the local motion area and the global motion area to adjust the fusion ratio of each area, eliminating ghosting of moving objects while ensuring the noise reduction effect in other areas of the background.
  • the embodiment of the present application provides an image processing device.
  • The modules included in the device can be implemented by a processor in an electronic device, or by specific logic circuits; in implementation, the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
  • FIG. 14 is a schematic diagram of the composition structure of an image processing device according to an embodiment of the application.
  • the image processing device 400 includes a first obtaining module 401, a second obtaining module 402, and a processing module 403, in which:
  • the first obtaining module 401 is configured to: obtain characteristic information of the first region in the current image frame; wherein the first region includes the current image determined by performing motion estimation based on the optical flow method on the current image frame and the previous image frame The area in the frame;
  • The second obtaining module 402 is configured to obtain feature information of the second region in the current image frame, where the second region includes the area corresponding to pixels, among the plurality of first pixels of the current image frame, whose association with pixels among the plurality of second pixels of the previous image frame meets the condition;
  • the processing module 403 is configured to: based on the feature information of the first region and the feature information of the second region, perform fusion processing on the previous image frame and the current image frame to obtain the processed current image frame; wherein, the processed current image The frame is used as the previous image frame of the next image frame to process the next image frame.
  • In other embodiments, the processing module 403 is configured to: determine a third area and a fourth area in the current image frame based on the feature information of the first area and the feature information of the second area, where the third area represents the area of the current image frame with local motion relative to the previous image frame and the fourth area represents the area of the current image frame with global motion relative to the previous image frame; obtain a first parameter corresponding to the third area; obtain a second parameter corresponding to a fifth area, in the previous image frame, associated with the fourth area; and perform fusion processing on the previous image frame and the current image frame based on the first parameter and the second parameter to obtain the processed current image frame.
  • The processing module 403 is further configured to: adjust the pixel values of the third area of the current image frame based on the first parameter to obtain a current target image frame; adjust the pixel values of the fifth area of the previous image frame based on the second parameter to obtain a previous target image frame; and fuse the current target image frame and the previous target image frame to obtain the processed current image frame. The pixel values of the third area of the current target image frame are greater than those of the third area of the current image frame, and/or the pixel values of the fifth area of the previous target image frame are greater than those of the fifth area of the previous image frame.
  • The processing module 403 is also configured to: down-sample the current image frame to obtain a first target image frame; down-sample the previous image frame to obtain a second target image frame; obtain a first optical flow prediction area from the second target image frame to the first target image frame based on the optical flow method; and determine the area in the current image frame that has a mapping relationship with the first optical flow prediction area as the first area.
  • that the association relationship meets the condition indicates that the first absolute value of the difference between the gray value of the first pixel and the gray value of the second pixel is greater than the first threshold.
  • The processing module 403 is also configured to: down-sample the current image frame to obtain the first target image frame; down-sample the previous image frame to obtain the second target image frame; obtain second absolute values of the differences between the gray values of the plurality of third pixels of the first target image frame and the gray values of the plurality of fourth pixels of the second target image frame; determine the area corresponding to pixels whose second absolute value is greater than the second threshold as the second optical flow prediction area; and determine the area in the current image frame that has a mapping relationship with the second optical flow prediction area as the second area.
  • the characteristic information of the first area is used to identify the first area
  • the characteristic information of the second area is used to identify the second area
  • The image processing device obtains feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and the previous image frame; and obtains feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among the plurality of first pixels of the current image frame, whose association with pixels among the plurality of second pixels of the previous image frame meets a condition. Here, the feature information of the first region and of the second region is used to characterize the correlation between adjacent image frames. Further, the previous image frame and the current image frame are fused based on this feature information to obtain the processed current image frame. In this way, during fusion, not only can the motion information of objects between adjacent image frames be accurately calculated, but the inter-frame alignment accuracy is also improved, avoiding ghosting in the fusion. Meanwhile, the processed current image frame is used as the previous image frame of the next image frame when the next image frame is processed. This solves the problem in the related art that the multi-frame noise reduction effect of video is poor when global motion of the image capture device and local motion of a moving object occur at the same time, improving the multi-frame noise reduction effect and the video quality.
  • The technical solutions of the embodiments of the present application, in essence or the part contributing to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which can be a mobile phone, a tablet computer, a desktop computer, a server, a TV, an audio player, etc.) to execute all or part of the methods in the various embodiments of the present application.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
  • FIG. 15 is a schematic diagram of the hardware entity of an electronic device according to an embodiment of the application.
  • the electronic device 500 includes a memory 501 and a processor 502.
  • The memory 501 stores a computer program that can run on the processor 502, and the processor 502 implements the image processing method provided in the foregoing embodiments when executing the program.
  • The memory 501 is configured to store instructions and applications executable by the processor 502, and can also cache data to be processed or already processed by the processor 502 and each module in the electronic device 500 (for example, image data, audio data, voice communication data, and video communication data), which can be implemented through flash memory (FLASH) or random access memory (RAM).
  • the embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the image processing method provided in the above-mentioned embodiments is implemented.
  • the disclosed device and method may be implemented in other ways.
  • The device embodiments described above are merely illustrative; for example, the division of units is only a logical function division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the embodiments of the present application can be all integrated into one processing unit, or each unit can be individually used as a unit, or two or more units can be integrated into one unit;
  • the unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The foregoing program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments.
  • The aforementioned storage media include various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • If the aforementioned integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • The computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a desktop computer, a server, a TV, an audio player, etc.) to execute all or part of the methods of the various embodiments of the present application.
  • The aforementioned storage media include media that can store program code, such as removable storage devices, ROMs, magnetic disks, or optical disks.
  • The embodiments of the present application provide an image processing method, device, and equipment: obtaining feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and the previous image frame; and obtaining feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among the plurality of first pixels of the current image frame, whose association with pixels among the plurality of second pixels of the previous image frame meets a condition. Here, the feature information of the first region and of the second region is used to characterize the correlation between adjacent image frames. Further, the previous image frame and the current image frame are fused based on this feature information to obtain the processed current image frame. In this way, during the fusion process, not only can the motion information of objects between adjacent image frames be accurately calculated, but the inter-frame alignment accuracy is also improved, avoiding ghosting in the fusion.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present application disclose an image processing method, apparatus and device. The method includes: obtaining feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame; obtaining feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition; and fusing the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame, where the processed current image frame is used as the previous image frame of a next image frame when the next image frame is processed.

Description

Image processing method, apparatus and device

Technical Field

The embodiments of the present application relate to electronic technology, and relate to, but are not limited to, an image processing method, apparatus and device.

Background

At present, the difficulty of multi-frame noise reduction algorithms is that when global motion of the image acquisition module (such as a camera) and local motion of a moving object occur at the same time, for example a scene in which a moving object such as a car passes while a handheld camera is moving, the amount of motion between two consecutive captured frames changes greatly in the time domain. This makes it difficult to detect the correlation between adjacent image frames, and therefore difficult to accurately calculate the motion information of objects between adjacent image frames. Further, in the process of multi-frame noise reduction, the accuracy of motion vector detection decreases, causing poor inter-frame alignment, so that ghosting, increased noise, or blurring appears after the two frames are fused. It can be seen that in the related art, when global motion of the image capture device and local motion of a moving object occur at the same time during video capture, the multi-frame noise reduction effect of the video is poor.
Summary

The embodiments of the present application provide an image processing method, apparatus and device.

In a first aspect, an embodiment of the present application provides an image processing method: obtaining feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame;

obtaining feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition; and

fusing the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame, where the processed current image frame is used as the previous image frame of a next image frame when the next image frame is processed.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: a first obtaining module configured to obtain feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame;

a second obtaining module configured to obtain feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition; and

a processing module configured to fuse the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame, where the processed current image frame is used as the previous image frame of a next image frame when the next image frame is processed.

In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor implements the steps of the image processing method of the first aspect when executing the program.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the image processing method of the first aspect are implemented.

The embodiments of the present application provide an image processing method, apparatus and device: obtaining feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame; and obtaining feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition. Here, the feature information of the first region and the feature information of the second region are used to characterize the correlation between adjacent image frames. Further, the previous image frame and the current image frame are fused based on this feature information to obtain a processed current image frame. In this way, during the fusion process, not only can the motion information of objects between adjacent image frames be accurately calculated, but the inter-frame alignment accuracy is also improved, avoiding ghosting in the fusion. Meanwhile, the processed current image frame is used as the previous image frame of the next image frame when the next image frame is processed. This solves the problem in the related art that the multi-frame noise reduction effect of video is poor when global motion of the image capture device and local motion of a moving object occur at the same time during video capture, improving the multi-frame noise reduction effect and the video quality.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the application;

FIG. 2 is a schematic flowchart of another image processing method provided by an embodiment of the application;

FIG. 3 is a schematic flowchart of fusing a previous image frame and a current image frame provided by an embodiment of the application;

FIG. 4 is a schematic flowchart of yet another image processing method provided by an embodiment of the application;

FIG. 5 is a schematic diagram of determining feature information of an exemplary first region provided by an embodiment of the application;

FIG. 6 is a schematic diagram of determining feature information of an exemplary second region provided by an embodiment of the application;

FIG. 7 is a schematic flowchart of an image processing method provided by another embodiment of the application;

FIG. 8 is a schematic flowchart of another image processing method provided by another embodiment of the application;

FIG. 9 is a schematic diagram of an exemplary area corresponding to a first parameter and an area corresponding to a second parameter provided by an embodiment of the application;

FIG. 10 is a first schematic diagram of exemplary images before and after fusion provided by an embodiment of the application;

FIG. 11 is a second schematic diagram of exemplary images before and after fusion provided by an embodiment of the application;

FIG. 12 is a schematic diagram obtained by fusing adjacent frames based on an image processing method of the related art;

FIG. 13 is a schematic diagram obtained by fusing adjacent frames based on the image processing method implemented in this application;

FIG. 14 is a schematic diagram of the composition structure of an image processing apparatus provided by an embodiment of the application;

FIG. 15 is a schematic diagram of a hardware entity of an electronic device provided by an embodiment of the application.
Detailed Description

To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the specific technical solutions of the present application are described in further detail below with reference to the drawings in the embodiments of the present application. The following embodiments are used to illustrate the present application, but not to limit its scope.

Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.

In the following description, reference to "some embodiments" describes a subset of all possible embodiments; it can be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and can be combined with each other without conflict.

It should be noted that the terms "first/second/third" in the embodiments of the present application merely distinguish similar objects and do not represent a specific ordering of the objects. Understandably, where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.

In the related art, during video multi-frame noise reduction, a motion vector is first detected between two consecutive frames, where the previous frame is a noise-reduced image frame and the current frame is a noisy image frame; the two frames are then aligned, for example by image warping, to minimize the motion-vector difference between them; and the two frames are then fused to denoise the current frame. However, when global motion of the image acquisition module and local motion of a moving object occur at the same time, for example a scene in which a moving object such as a car passes while a handheld image acquisition module is moving, the two consecutive frames exhibit large changes in the amount of motion in the time domain, large illumination changes, or motion occlusion, making it difficult to detect the correlation between adjacent frames and thus difficult to accurately calculate the motion information of objects between adjacent frames. The decreased accuracy of motion vector detection causes poor inter-frame alignment, so that ghosting, increased noise, or blurring appears after the two frames are fused, degrading the final noise reduction effect.
An embodiment of the present application provides an image processing method applied to an electronic device. Referring to FIG. 1, the method includes the following steps:

Step 101: Obtain feature information of a first region in a current image frame.

The first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame.

In the embodiments of the present application, the electronic device may include mobile terminal devices such as mobile phones, tablet computers, notebook computers, personal digital assistants (PDAs), cameras, and wearable devices, as well as fixed terminal devices such as desktop computers.

In some embodiments, the electronic device may include an image acquisition module and acquire a video image through it; the video image includes a plurality of image frames, which are then processed. Alternatively, the electronic device may establish a communication connection with an image acquisition module to obtain the video image it collects and then process the image frames contained in the video. The feature information of the first region is used to characterize the correlation between the two adjacent image frames, i.e., the current image frame and the previous image frame.

Step 102: Obtain feature information of a second region in the current image frame.

The second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition.

The feature information of the second region is used to characterize the correlation between the two adjacent image frames, i.e., the current image frame and the previous image frame. There may be no overlap between the first region and the second region, or they may partially overlap.

Step 103: Fuse the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame.

The processed current image frame is used as the previous image frame of a next image frame when the next image frame is processed.

In the embodiment of the present application, after obtaining the feature information of the first region and the feature information of the second region, the electronic device can fuse the previous image frame and the current image frame based on this feature information, that is, on the correlation between adjacent image frames, to obtain the processed current image frame.

The image processing method provided by the embodiment of the present application obtains feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame; and obtains feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition. Here, the feature information of the first region and of the second region is used to characterize the correlation between adjacent image frames. Further, the previous image frame and the current image frame are fused based on this feature information to obtain the processed current image frame. In this way, during fusion, the motion information of objects between adjacent image frames can be accurately calculated and the inter-frame alignment accuracy is improved, avoiding ghosting in the fusion. Meanwhile, the processed current image frame is used as the previous image frame of the next image frame when the next image frame is processed. This solves the problem in the related art that the multi-frame noise reduction effect of video is poor when global motion of the image capture device and local motion of a moving object occur at the same time, improving the multi-frame noise reduction effect and the video quality.
An embodiment of the present application provides an image processing method applied to an electronic device. Referring to FIG. 2, the method includes the following steps:

Step 201: Obtain feature information of a first region in a current image frame.

The first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame.

Step 202: Obtain feature information of a second region in the current image frame.

The second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition.

In some embodiments of the present application, the feature information of the first region is used to identify the first region, and the feature information of the second region is used to identify the second region. That is, after the electronic device obtains the feature information identifying the first region and the feature information identifying the second region, it can perform multi-frame noise reduction based on this feature information to obtain a high-quality image.

Step 203: Determine a third region and a fourth region in the current image frame based on the feature information of the first region and the feature information of the second region.

The third region represents a region of the current image frame with local motion relative to the previous image frame, and the fourth region represents a region of the current image frame with global motion relative to the previous image frame.

Step 204: Obtain a first parameter corresponding to the third region.

Here, the first parameter is used to reduce the proportion of the previous image frame when the two frames are fused; it can also be understood as increasing the proportion of the current image frame in the fusion.

Step 205: Obtain a second parameter corresponding to a fifth region, in the previous image frame, associated with the fourth region.

Here, the second parameter is used to increase the proportion of the previous image frame when the two frames are fused; it can also be understood as reducing the proportion of the current image frame in the fusion.

Step 206: Fuse the previous image frame and the current image frame based on the first parameter and the second parameter to obtain the processed current image frame.

The processed current image frame is used as the previous image frame of a next image frame when the next image frame is processed.

In some embodiments of the present application, the pixel values of the third region of the current target image frame are greater than the pixel values of the third region of the current image frame, and/or the pixel values of the fifth region of the previous target image frame are greater than the pixel values of the fifth region of the previous image frame.

In the embodiment of the present application, referring to FIG. 3, fusing the previous image frame and the current image frame based on the first parameter and the second parameter in step 206 to obtain the processed current image frame includes:

Step 206a: Adjust the pixel values of the third region of the current image frame based on the first parameter to obtain a current target image frame.

Here, adjusting the pixel values of the third region of the current image frame based on the first parameter may be raising them based on the first parameter to obtain the current target image frame.

Step 206b: Adjust the pixel values of the fifth region of the previous image frame based on the second parameter to obtain a previous target image frame.

Here, adjusting the pixel values of the fifth region of the previous image frame based on the second parameter may be raising them based on the second parameter to obtain the previous target image frame.

Step 206c: Fuse the current target image frame and the previous target image frame to obtain the processed current image frame.

Here, fusing the current target image frame and the previous target image frame may be averaging the two pixel values of each pair of corresponding pixels in the two frames to obtain the processed current image frame.

Exemplarily, referring to FIG. 4, the electronic device first calculates the optical flow result from the previous image frame to the current image frame based on the two frames, the optical flow result representing regions with optical flow. Next, the electronic device calculates the difference result between the previous image frame and the current image frame; in the difference result, regions in which the absolute value of the difference is greater than a target threshold are treated as regions with optical flow. Then, based on the optical flow result and the difference result, the electronic device determines that the moving object regions in the current image frame include the third region, i.e., the local motion region, and the fourth region, i.e., the global motion region. Further, based on the attribute characteristics of the third region and the fourth region, the electronic device fuses the previous image frame and the current image frame to obtain the processed current image frame. Finally, the electronic device uses the processed current image frame as the previous image frame of the next frame for multi-frame noise reduction.

Exemplarily, referring to FIG. 5, the optical flow result from the previous image frame to the current image frame calculated by the electronic device is shown as the black block area in FIG. 5.

Exemplarily, referring to FIG. 6, the difference result between the previous image frame and the current image frame calculated by the electronic device is shown as the black block area in FIG. 6.

As can be seen from the above, the image processing method provided by the embodiment of the present application uses the optical flow result and the difference result between the two consecutive frames to accurately distinguish the local motion region from the global motion region. Based on this segmentation result, the attribute characteristics of the local motion region and the global motion region are determined, and different noise reduction parameters are used for the two to adjust the fusion ratio of each region, eliminating ghosting of moving objects while ensuring the noise reduction effect in other areas of the background.

It should be noted that for descriptions of the same steps and content in this embodiment as in other embodiments, reference may be made to the descriptions in the other embodiments, which are not repeated here.
Based on the foregoing, an embodiment of the present application provides an image processing method applied to an electronic device. Referring to FIG. 7, the method includes the following steps:

Step 301: Down-sample the current image frame to obtain a first target image frame.

In the embodiment of the present application, the electronic device down-samples the current image frame, for example by reducing its size, to obtain the first target image frame, so as to speed up motion area detection.

Step 302: Down-sample the previous image frame to obtain a second target image frame.

In the embodiment of the present application, the electronic device down-samples the previous image frame, for example by reducing its size, to obtain the second target image frame, so as to speed up motion area detection.

Step 303: Obtain a first optical flow prediction region from the second target image frame to the first target image frame based on the optical flow method.

Step 304: Determine the region in the current image frame that has a mapping relationship with the first optical flow prediction region as the first region.

Here, the electronic device determines the region in the current image frame that has a mapping relationship with the first optical flow prediction region as the first region; that is, it maps the calculated optical flow result onto the current image frame.

Step 305: Obtain feature information of the first region in the current image frame.

The first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and the previous image frame. Here, after mapping the calculated optical flow result onto the current image frame, the electronic device marks the blocks with optical flow to obtain the feature information of the first region in the current image frame.

Step 306: Obtain second absolute values of the differences between the gray values of a plurality of third pixels of the first target image frame and the gray values of a plurality of fourth pixels of the second target image frame.

Step 307: Determine the region corresponding to pixels whose second absolute value is greater than a second threshold as a second optical flow prediction region.

Step 308: Determine the region in the current image frame that has a mapping relationship with the second optical flow prediction region as the second region.

Here, the electronic device determines the region in the current image frame that has a mapping relationship with the second optical flow prediction region as the second region; that is, it maps the calculated difference result onto the current image frame.

Step 309: Obtain feature information of the second region in the current image frame.

The second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition. Here, after mapping the calculated difference result onto the current image frame, the electronic device marks the blocks with optical flow to obtain the feature information of the second region in the current image frame.

Step 310: Fuse the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain the processed current image frame.

The processed current image frame is used as the previous image frame of a next image frame when the next image frame is processed.

It should be noted that for descriptions of the same steps and content in this embodiment as in other embodiments, reference may be made to the descriptions in the other embodiments, which are not repeated here.
An embodiment of the present application provides an image processing method. Referring to FIG. 8, the method includes the following steps:

Step 1: The electronic device calculates the optical flow result from the previous image frame (reference image) to the current image frame (base image). Here, to save processing time, the sizes of the reference image and the base image are first reduced, for example both by 1/2. The optical flow is then detected on the reduced-size images based on a block matching algorithm, where the block size can be 16x16.

Further, the electronic device maps the calculated optical flow result onto the base image and marks the blocks with optical flow as 0, as shown in the black block area in FIG. 5.

Here, the electronic device may use a block matching optical flow calculation method to detect the optical flow of moving objects, marking such regions as regions with larger motion vectors.

Step 2: The electronic device calculates the difference result between the reference image and the base image.

Here, likewise to save processing time, the sizes of the reference image and the base image are first reduced. The absolute value of the difference is then calculated for each pixel of the two frames on the reduced-size images. If the absolute value is less than a threshold, the two frames are considered highly correlated at that pixel; if it is greater than the threshold, the pixel differs significantly between the two frames, possibly indicating a moving object or a region with a large illumination change, and it is marked as 0. Further, the electronic device maps the calculated difference result onto the base image and marks the blocks with optical flow as 0, as shown in the black block area in FIG. 6.

Step 3: The electronic device detects the moving object region and the global motion region.

Here, the electronic device accurately detects the moving object region based on the optical flow result and the difference result of the two consecutive frames calculated in the above two steps. It should be noted that the inventors found through experiments that the optical flow result may falsely detect optical flow in the global motion region of the background, which the difference result avoids; meanwhile, the difference result may falsely detect the parts occluded by the moving object between the two frames, which the optical flow result covers. Therefore, through many experiments, the inventors determined that only the parts where both results are 0 constitute the final local moving object region.

Step 4: The electronic device fuses the preceding and following frames.

Here, after detecting the local moving object region and the global motion region, the electronic device provides different noise reduction parameters for the different regions to adaptively adjust the fusion ratio and fuse the two frames.

In the embodiment of the present application, the local moving object region is prone to ghosting; therefore, the proportion of the reference image in the two-frame fusion is reduced, and the pixel values of the base image are used as much as possible to avoid ghosting during fusion. Further, a smaller sigma value is provided for this region to minimize the proportion of the reference image.

Conversely, for global motion regions such as the background, to ensure the noise reduction effect, the pixel values of the noise-reduced previous-frame reference image are used to the maximum extent, so a larger sigma is provided to increase the fusion proportion of the reference image.
In the embodiment of the present application, exemplary parameters involved in fusing adjacent frames are shown in the following table:

              Global area    Local area
  lambda      0.7            0.7
  sigma       16             8
  diffVal     5              10
  mu          0.9523448      0.457833362
  weight      0.6666414      0.320483353
Exemplarily, based on the above parameters, the electronic device performs noise reduction on the fused image through the following formulas; the noise reduction filter may be a Gaussian noise reduction filter, calculated as follows:

diffVal = |base_Y - ref_Y|

mu = exp(-diffVal^2 / (2 * sigma^2))

weight = mu * lambda, lambda = 0.7

Base_Y = base_Y * (1 - weight) + ref_Y * weight

Here, the parameters in the above table and formulas are explained as follows: Y represents a gray value; base_Y represents the gray value of the current image frame; ref_Y represents the gray value of the previous image frame; diffVal represents the absolute value of the difference between the gray value of the current image frame and that of the previous image frame; mu represents the Gaussian noise reduction function value; sigma represents the coefficient of the Gaussian function; lambda represents the weight of the function value; weight represents the weight of the region; and Base_Y represents the gray value of the current image frame after fusion.
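As a quick check on the formulas above, the parameter table can be reproduced exactly from mu = exp(-diffVal^2 / (2 * sigma^2)) and weight = 0.7 * mu:

```python
import math

# Reproduce the table: (sigma, diffVal) -> mu, weight = lambda * mu.
for label, sigma, diff_val in [("global", 16, 5), ("local", 8, 10)]:
    mu = math.exp(-diff_val**2 / (2 * sigma**2))
    print(label, round(mu, 7), round(0.7 * mu, 7))
# global 0.9523448 0.6666414
# local  0.4578334 0.3204834
```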
Exemplarily, referring to FIG. 9, for the fused image, the sigma of the local motion region is 8 and the sigma of the global motion region is 14; then, based on the above formulas, the fused image is denoised to obtain the processed current image frame.

Exemplarily, referring to FIG. 10, the embodiment of the present application uses different sigmas for noise reduction in the local and global motion regions, which effectively eliminates ghosting around moving objects and ensures the noise reduction effect of the background region.

Exemplarily, referring to FIG. 11, in some embodiments, if the same sigma is used for fusion and noise reduction, the noise reduction effect in the background and other regions is ensured, but severe ghosting is introduced. It can be seen that using different sigmas for noise reduction yields a processed current image frame of better image quality.

It should be noted that during video shooting, motion of the image acquisition module and motion of moving objects often occur simultaneously, so the global motion region of the background is also included in the optical flow detection result. In addition, in dark shooting scenes, due to lighting conditions and other influences, excessive grayscale changes between frames can also make the optical flow detection result inaccurate. Referring to FIG. 12, when the multi-frame noise reduction method of the related art is used to fuse the previous image frame and the current image frame, the resulting denoised current image frame still contains noise and ghosting.

In the image processing method provided by the embodiments of the present application, to obtain the local motion region detection result more accurately, the present application considers not only the optical flow result but also the grayscale difference result of the preceding and following frames. Then, in the two-frame fusion stage, the embodiment of the present application applies different sigma values to the separated moving object region and the global motion region for noise reduction. Referring to FIG. 13, with the image processing method provided by this application, the previous image frame and the current image frame are fused while adjusting the fusion ratio of the moving object region and the global motion region, avoiding ghosting on moving objects and ensuring the noise reduction effect of the global motion region of the background.

The embodiments of the present application can obtain the following beneficial effects: the optical flow detection result and the gray value difference result between the two consecutive frames are used to accurately distinguish the local motion region from the global motion region; then, according to this segmentation result, different noise reduction parameters are used for the local motion region and the global motion region to adjust the fusion ratio of each region, eliminating ghosting of moving objects while ensuring the noise reduction effect in other areas of the background.
An embodiment of the present application provides an image processing apparatus. The modules included in the apparatus can be implemented by a processor in an electronic device, or by specific logic circuits; in implementation, the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.

FIG. 14 is a schematic diagram of the composition structure of an image processing apparatus according to an embodiment of the present application. As shown in FIG. 14, the image processing apparatus 400 includes a first obtaining module 401, a second obtaining module 402, and a processing module 403, where:

the first obtaining module 401 is configured to obtain feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame;

the second obtaining module 402 is configured to obtain feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition; and

the processing module 403 is configured to fuse the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame, where the processed current image frame is used as the previous image frame of a next image frame when the next image frame is processed.

In other embodiments of the present application, the processing module 403 is configured to: determine a third region and a fourth region in the current image frame based on the feature information of the first region and the feature information of the second region, where the third region represents a region of the current image frame with local motion relative to the previous image frame and the fourth region represents a region of the current image frame with global motion relative to the previous image frame;

obtain a first parameter corresponding to the third region;

obtain a second parameter corresponding to a fifth region, in the previous image frame, associated with the fourth region; and

fuse the previous image frame and the current image frame based on the first parameter and the second parameter to obtain the processed current image frame.

In other embodiments of the present application, the processing module 403 is configured to: adjust the pixel values of the third region of the current image frame based on the first parameter to obtain a current target image frame;

adjust the pixel values of the fifth region of the previous image frame based on the second parameter to obtain a previous target image frame; and

fuse the current target image frame and the previous target image frame to obtain the processed current image frame.

In other embodiments of the present application, the pixel values of the third region of the current target image frame are greater than the pixel values of the third region of the current image frame, and/or the pixel values of the fifth region of the previous target image frame are greater than the pixel values of the fifth region of the previous image frame.

In other embodiments of the present application, the processing module 403 is configured to: down-sample the current image frame to obtain a first target image frame;

down-sample the previous image frame to obtain a second target image frame;

obtain a first optical flow prediction region from the second target image frame to the first target image frame based on the optical flow method; and

determine the region in the current image frame that has a mapping relationship with the first optical flow prediction region as the first region.

In other embodiments of the present application, the association meeting the condition indicates that a first absolute value of the difference between the gray value of a first pixel and the gray value of a second pixel is greater than a first threshold.

In other embodiments of the present application, the processing module 403 is configured to: down-sample the current image frame to obtain the first target image frame;

down-sample the previous image frame to obtain the second target image frame;

obtain second absolute values of the differences between the gray values of a plurality of third pixels of the first target image frame and the gray values of a plurality of fourth pixels of the second target image frame;

determine the region corresponding to pixels whose second absolute value is greater than a second threshold as a second optical flow prediction region; and

determine the region in the current image frame that has a mapping relationship with the second optical flow prediction region as the second region.

In other embodiments of the present application, the feature information of the first region is used to identify the first region, and the feature information of the second region is used to identify the second region.

The image processing apparatus provided by the embodiment of the present application obtains feature information of a first region in a current image frame, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame; and obtains feature information of a second region in the current image frame, where the second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame meets a condition. Here, the feature information of the first region and of the second region is used to characterize the correlation between adjacent image frames. Further, the previous image frame and the current image frame are fused based on this feature information to obtain the processed current image frame. In this way, during fusion, the motion information of objects between adjacent image frames can be accurately calculated and the inter-frame alignment accuracy is improved, avoiding ghosting in the fusion. Meanwhile, the processed current image frame is used as the previous image frame of the next image frame when the next image frame is processed. This solves the problem in the related art that the multi-frame noise reduction effect of video is poor when global motion of the image capture device and local motion of a moving object occur at the same time, improving the multi-frame noise reduction effect and the video quality.
The above description of the apparatus embodiments is similar to the description of the method embodiments above and has beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be noted that, in the embodiments of the present application, if the above image processing method is implemented in the form of software functional modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a desktop computer, a server, a television, an audio player, or the like) to execute all or part of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
The embodiments of the present application provide an electronic device. Fig. 15 is a schematic diagram of the hardware entity of an electronic device according to an embodiment of the present application. As shown in Fig. 15, the electronic device 500 includes a memory 501 and a processor 502; the memory 501 stores a computer program executable on the processor 502, and the processor 502 implements the image processing method provided in the above embodiments when executing the program.
It should be noted that the memory 501 is configured to store instructions and applications executable by the processor 502, and may also cache data to be processed or already processed by the processor 502 and by the modules of the electronic device 500 (e.g., image data, audio data, voice communication data, and video communication data); it may be implemented by flash memory (FLASH) or random access memory (RAM).
The embodiments of the present application provide a computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, it implements the image processing method provided in the above embodiments.
It should be pointed out here that the above description of the storage-medium and device embodiments is similar to the description of the method embodiments above and has beneficial effects similar to those of the method embodiments. For technical details not disclosed in the storage-medium and device embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be understood that reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, the appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. Furthermore, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The above serial numbers of the embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element.
In the several embodiments provided by the present application, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of units is merely a logical functional division, and other divisions are possible in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium, and when executed, performs the steps including those of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a desktop computer, a server, a television, an audio player, or the like) to execute all or part of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc. The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments. The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments. The features disclosed in the several method or device embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method or device embodiments. The above are merely implementations of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, which shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Industrial applicability
The embodiments of the present application provide an image processing method, apparatus, and device. Feature information of a first region in a current image frame is obtained, where the first region includes a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame; feature information of a second region in the current image frame is obtained, where the second region includes a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame satisfies a condition. Here, the feature information of the first region and the feature information of the second region characterizes the correlation between adjacent image frames. Further, the previous image frame and the current image frame are fused based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame. In this way, during the fusion process, the feature information of the first region and the feature information of the second region not only allow the motion information of objects between adjacent image frames to be calculated accurately, but also improve the accuracy of inter-frame alignment, thereby avoiding ghosting in the fusion. Meanwhile, the processed current image frame is used as the previous image frame of the next image frame to process the next image frame. This solves the problem in the related art that the multi-frame denoising effect of a video is poor when global motion of the image acquisition device and local motion of moving objects occur simultaneously during video capture, thereby improving the multi-frame denoising effect and the video quality.

Claims (10)

  1. An image processing method, comprising:
    obtaining feature information of a first region in a current image frame, wherein the first region comprises a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame;
    obtaining feature information of a second region in the current image frame, wherein the second region comprises a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame satisfies a condition;
    fusing the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame, wherein the processed current image frame is used as a previous image frame of a next image frame to process the next image frame.
  2. The method according to claim 1, wherein fusing the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain the processed current image frame comprises:
    determining a third region and a fourth region in the current image frame based on the feature information of the first region and the feature information of the second region, wherein the third region represents a region of the current image frame exhibiting local motion relative to the previous image frame, and the fourth region represents a region of the current image frame exhibiting global motion relative to the previous image frame;
    obtaining a first parameter corresponding to the third region;
    obtaining a second parameter corresponding to a fifth region, in the previous image frame, associated with the fourth region;
    fusing the previous image frame and the current image frame based on the first parameter and the second parameter to obtain the processed current image frame.
  3. The method according to claim 2, wherein fusing the previous image frame and the current image frame based on the first parameter and the second parameter to obtain the processed current image frame comprises:
    adjusting pixel values of the third region of the current image frame based on the first parameter to obtain a current target image frame;
    adjusting pixel values of the fifth region of the previous image frame based on the second parameter to obtain a previous target image frame;
    fusing the current target image frame and the previous target image frame to obtain the processed current image frame.
  4. The method according to claim 3, wherein the pixel values of the third region of the current target image frame are greater than the pixel values of the third region of the current image frame, and/or the pixel values of the fifth region of the previous target image frame are greater than the pixel values of the fifth region of the previous image frame.
  5. The method according to any one of claims 1 to 4, further comprising:
    downsampling the current image frame to obtain a first target image frame;
    downsampling the previous image frame to obtain a second target image frame;
    obtaining, based on an optical-flow method, a first optical-flow prediction region from the second target image frame to the first target image frame;
    determining a region in the current image frame that has a mapping relationship with the first optical-flow prediction region as the first region.
  6. The method according to any one of claims 1 to 4, wherein the association satisfying the condition indicates that a first absolute value of a difference between a grayscale value of the first pixel and a grayscale value of the second pixel is greater than a first threshold.
  7. The method according to any one of claims 1 to 4, further comprising:
    downsampling the current image frame to obtain a first target image frame;
    downsampling the previous image frame to obtain a second target image frame;
    obtaining second absolute values of differences between grayscale values of a plurality of third pixels of the first target image frame and grayscale values of a plurality of fourth pixels of the second target image frame;
    determining a region corresponding to pixels whose second absolute value is greater than a second threshold as a second optical-flow prediction region;
    determining a region in the current image frame that has a mapping relationship with the second optical-flow prediction region as the second region.
  8. The method according to any one of claims 1 to 4, wherein the feature information of the first region is used to identify the first region, and the feature information of the second region is used to identify the second region.
  9. An image processing apparatus, comprising:
    a first obtaining module configured to obtain feature information of a first region in a current image frame, wherein the first region comprises a region in the current image frame determined by performing optical-flow-based motion estimation on the current image frame and a previous image frame;
    a second obtaining module configured to obtain feature information of a second region in the current image frame, wherein the second region comprises a region corresponding to pixels, among a plurality of first pixels of the current image frame, whose association with pixels among a plurality of second pixels of the previous image frame satisfies a condition;
    a processing module configured to fuse the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region to obtain a processed current image frame, wherein the processed current image frame is used as a previous image frame of a next image frame to process the next image frame.
  10. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor implements the image processing method according to any one of claims 1 to 8 when executing the program.
PCT/CN2020/077036 2020-02-27 2020-02-27 图像处理方法、装置及设备 WO2021168755A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202080094354.0A CN115004227A (zh) 2020-02-27 2020-02-27 图像处理方法、装置及设备
PCT/CN2020/077036 WO2021168755A1 (zh) 2020-02-27 2020-02-27 图像处理方法、装置及设备
EP20921228.1A EP4105886A4 (en) 2020-02-27 2020-02-27 IMAGE PROCESSING METHOD AND DEVICE AND DEVICE
US17/896,903 US20220414896A1 (en) 2020-02-27 2022-08-26 Image processing method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/077036 WO2021168755A1 (zh) 2020-02-27 2020-02-27 图像处理方法、装置及设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/896,903 Continuation US20220414896A1 (en) 2020-02-27 2022-08-26 Image processing method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2021168755A1 true WO2021168755A1 (zh) 2021-09-02

Family

ID=77490584

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/077036 WO2021168755A1 (zh) 2020-02-27 2020-02-27 图像处理方法、装置及设备

Country Status (4)

Country Link
US (1) US20220414896A1 (zh)
EP (1) EP4105886A4 (zh)
CN (1) CN115004227A (zh)
WO (1) WO2021168755A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230059035A1 (en) * 2021-08-23 2023-02-23 Netflix, Inc. Efficient encoding of film grain noise

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4725001B2 (ja) * 2000-08-07 2011-07-13 ソニー株式会社 画像処理装置及び方法、並びに記録媒体
JP6543214B2 (ja) * 2016-04-28 2019-07-10 砂防エンジニアリング株式会社 動き監視装置

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070030522A1 (en) * 2005-08-08 2007-02-08 Casio Computer Co., Ltd. Image processing apparatus and image processing method
US20160107595A1 (en) * 2011-04-27 2016-04-21 Mobileye Vision Technologies Ltd. Pedestrian collision warning system
CN103065326A (zh) * 2012-12-26 2013-04-24 西安理工大学 基于时-空多尺度运动注意力分析的目标检测方法
CN103632352A (zh) * 2013-11-01 2014-03-12 华为技术有限公司 一种噪声图像的时域降噪方法和相关装置
CN104052990A (zh) * 2014-06-30 2014-09-17 山东大学 一种基于融合深度线索的全自动二维转三维方法和装置
CN106845552A (zh) * 2017-01-31 2017-06-13 东南大学 在光强分布不均匀环境下的融合光流和sift特征点匹配的低动态载体速度计算方法
CN107507225A (zh) * 2017-09-05 2017-12-22 明见(厦门)技术有限公司 运动目标检测方法、装置、介质及计算设备
CN108230245A (zh) * 2017-12-26 2018-06-29 中国科学院深圳先进技术研究院 图像拼接方法、图像拼接装置及电子设备
CN108391060A (zh) * 2018-03-26 2018-08-10 华为技术有限公司 一种图像处理方法、图像处理装置和终端
CN110619652A (zh) * 2019-08-19 2019-12-27 浙江大学 一种基于光流映射重复区域检测的图像配准鬼影消除方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4105886A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082711A (zh) * 2022-08-22 2022-09-20 广东新禾道信息科技有限公司 土壤普查数据处理方法、系统及云平台
CN116193279A (zh) * 2022-12-29 2023-05-30 影石创新科技股份有限公司 视频处理方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
EP4105886A4 (en) 2023-04-19
CN115004227A (zh) 2022-09-02
EP4105886A1 (en) 2022-12-21
US20220414896A1 (en) 2022-12-29

Similar Documents

Publication Publication Date Title
WO2021168755A1 (zh) 图像处理方法、装置及设备
Tao et al. Low-light image enhancement using CNN and bright channel prior
JP6469678B2 (ja) 画像アーティファクトを補正するシステム及び方法
US10165248B2 (en) Optimization method of image depth information and image processing apparatus
US9202263B2 (en) System and method for spatio video image enhancement
WO2018136373A1 (en) Image fusion and hdr imaging
WO2022141178A1 (zh) 图像处理方法及装置
JP6615917B2 (ja) 実時間ビデオエンハンスメント方法、端末及び非一時的コンピュータ可読記憶媒体
WO2021114868A1 (zh) 降噪方法、终端及存储介质
US20120155764A1 (en) Image processing device, image processing method and program
KR20230084486A (ko) 이미지 효과를 위한 세그먼트화
CN106210448B (zh) 一种视频图像抖动消除处理方法
KR102038789B1 (ko) 포커스 검출
WO2017100971A1 (zh) 一种失焦模糊图像的去模糊方法和装置
CN110796615A (zh) 一种图像去噪方法、装置以及存储介质
US8693783B2 (en) Processing method for image interpolation
CN112584196A (zh) 视频插帧方法、装置及服务器
KR20150069585A (ko) 히스토그램 구간 교정을 이용한 스테레오 영상의 휘도 보정 방법 및 이에 이용되는 기록매체
Kim et al. Multi-frame de-raining algorithm using a motion-compensated non-local mean filter for rainy video sequences
Hua et al. Low-light image enhancement based on joint generative adversarial network and image quality assessment
Xu et al. Features based spatial and temporal blotch detection for archive video restoration
CN113223083B (zh) 一种位置确定方法、装置、电子设备及存储介质
CN111754411B (zh) 图像降噪方法、图像降噪装置及终端设备
CN112911262B (zh) 一种视频序列的处理方法及电子设备
TWI752496B (zh) 影像處理方法及影像處理系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20921228

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020921228

Country of ref document: EP

Effective date: 20220916

NENP Non-entry into the national phase

Ref country code: DE