CN111369482A - Image processing method and device, electronic equipment and storage medium - Google Patents


Publication number
CN111369482A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010140777.3A
Other languages
Chinese (zh)
Other versions
CN111369482B (en)
Inventor
林松楠
张佳维
任思捷
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010140777.3A (patent CN111369482B)
Publication of CN111369482A
Application granted
Publication of CN111369482B
Legal status: Active (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/73: Deblurring; Sharpening
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06T 5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium, the method including: determining brightness increment information of the ith image to be processed according to event information of the ith image to be processed and event information of an (i-1) th image to be processed, wherein the event information is acquired through event acquisition equipment, and i is an integer greater than 1; and determining the repaired image of the ith image to be processed according to the ith image to be processed, the repaired image of the (i-1) th image to be processed and the brightness increment information of the ith image to be processed, wherein the definition of the repaired image is greater than that of the image to be processed. The embodiment of the disclosure can improve the deblurring effect of the image.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
A conventional image capture device can capture images, such as RGB images, in a form suited to human viewing habits. An event capture device (e.g., an event camera), by contrast, can capture asynchronous brightness changes (i.e., events) at a high temporal frequency. In the related art, a blurred image may be deblurred using the events corresponding to the blurred image, but the processing methods of the related art yield a poor image processing effect.
Disclosure of Invention
The present disclosure proposes an image processing technical solution.
According to an aspect of the present disclosure, there is provided an image processing method including: determining brightness increment information of the ith image to be processed according to event information of the ith image to be processed and event information of an (i-1) th image to be processed, wherein the event information is acquired through event acquisition equipment, and i is an integer greater than 1; and determining the repaired image of the ith image to be processed according to the ith image to be processed, the repaired image of the (i-1) th image to be processed and the brightness increment information of the ith image to be processed, wherein the definition of the repaired image is greater than that of the image to be processed.
In a possible implementation manner, the determining brightness increment information of the ith image to be processed according to the event information of the ith image to be processed and the event information of the (i-1) th image to be processed includes: performing feature extraction on the event information of the ith image to be processed and the event information of the (i-1) th image to be processed to obtain a first event feature of the ith image to be processed; convolving the first event feature according to the convolution kernel tensor of the first event feature to obtain a second event feature of the ith image to be processed, wherein the convolution kernel tensor comprises convolution kernels of all channels of the first event feature; and predicting brightness increment of the second event characteristic to obtain brightness increment information of the ith image to be processed.
In a possible implementation manner, the determining brightness increment information of the ith image to be processed according to the event information of the ith image to be processed and the event information of the (i-1) th image to be processed further includes: performing convolution kernel prediction on each channel of the first event characteristic according to reference information corresponding to the ith image to be processed to obtain a convolution kernel tensor of the first event characteristic, wherein the number of channels of the convolution kernel tensor is the same as that of the channels of the first event characteristic, and the reference information includes the ith image to be processed and/or event information of the ith image to be processed.
In a possible implementation manner, the reference information further includes at least one of the i-1 th to-be-processed image, event information of the i-1 th to-be-processed image, and a restored image of the i-1 th to-be-processed image.
In one possible implementation, the brightness increment information includes first brightness increment information of the i-th image to be processed relative to the (i-1)-th image to be processed and second brightness increment information of the i-th image to be processed relative to the repaired image of the (i-1)-th image to be processed, and the determining of the repaired image of the i-th image to be processed includes the following steps: multiplying the i-th image to be processed by the first brightness increment information to obtain a first repaired image of the i-th image to be processed; multiplying the repaired image of the (i-1)-th image to be processed by the second brightness increment information to obtain a second repaired image of the i-th image to be processed; performing weight prediction on the first repaired image and the second repaired image according to the i-th image to be processed and the event information of the i-th image to be processed, to obtain a first weight of the first repaired image and a second weight of the second repaired image; and performing weighted addition on the first repaired image and the second repaired image according to the first weight and the second weight, to obtain the repaired image of the i-th image to be processed.
In a possible implementation manner, the repaired image of each image to be processed includes 2n+1 repaired images, where n is a positive integer; the brightness increment information further includes third brightness increment information of the 2n+1 repaired images of the i-th image to be processed relative to the 2n+1 repaired images of the (i-1)-th image to be processed; and the determining of the repaired image of the i-th image to be processed according to the i-th image to be processed, the repaired image of the (i-1)-th image to be processed, and the brightness increment information of the i-th image to be processed further includes: multiplying the first repaired image and the second repaired image by the third brightness increment information, respectively, to obtain 2n+1 groups of third repaired images; performing weight prediction on each third repaired image in each of the 2n+1 groups according to the i-th image to be processed and the event information of the i-th image to be processed, to obtain third weights of the 2n+1 groups of third repaired images; and performing weighted addition on the third repaired images in each of the 2n+1 groups according to the third weights, to obtain the 2n+1 repaired images of the i-th image to be processed.
In a possible implementation manner, the second brightness increment information includes 2n+1 pieces of second brightness increment information, and the multiplying of the repaired image of the (i-1)-th image to be processed by the second brightness increment information to obtain the second repaired image of the i-th image to be processed includes: multiplying the 2n+1 repaired images of the (i-1)-th image to be processed by the corresponding pieces among the 2n+1 pieces of second brightness increment information, respectively, to obtain 2n+1 second repaired images of the i-th image to be processed.
In one possible implementation manner, the number of the images to be processed is N, where N is an integer and 1< i ≦ N, and the method further includes: and determining a repair video corresponding to the N images to be processed according to the repair images of the N images to be processed.
According to an aspect of the present disclosure, there is provided an image processing apparatus including:
the incremental information determining module is used for determining the brightness incremental information of the ith image to be processed according to the event information of the ith image to be processed and the event information of the (i-1) th image to be processed, wherein the event information is acquired by the event acquisition equipment, and i is an integer greater than 1; and the image restoration module is used for determining the restored image of the ith image to be processed according to the ith image to be processed, the restored image of the (i-1) th image to be processed and the brightness increment information of the ith image to be processed, wherein the definition of the restored image is greater than that of the image to be processed.
In a possible implementation manner, the incremental information determining module includes: a feature extraction sub-module, configured to perform feature extraction on the event information of the i-th image to be processed and the event information of the (i-1)-th image to be processed, to obtain a first event feature of the i-th image to be processed; a convolution sub-module, configured to convolve the first event feature according to the convolution kernel tensor of the first event feature to obtain a second event feature of the i-th image to be processed, the convolution kernel tensor comprising the convolution kernels of all channels of the first event feature; and an increment prediction sub-module, configured to perform brightness increment prediction on the second event feature to obtain the brightness increment information of the i-th image to be processed.
In a possible implementation manner, the incremental information determining module further includes: and the convolution kernel prediction sub-module is configured to perform convolution kernel prediction on each channel of the first event characteristic according to reference information corresponding to the ith image to be processed to obtain a convolution kernel tensor of the first event characteristic, where a channel number of the convolution kernel tensor is the same as a channel number of the first event characteristic, and the reference information includes event information of the ith image to be processed and/or the ith image to be processed.
In a possible implementation manner, the reference information further includes at least one of the i-1 th to-be-processed image, event information of the i-1 th to-be-processed image, and a restored image of the i-1 th to-be-processed image.
In one possible implementation, the brightness increment information includes first brightness increment information of the i-th image to be processed relative to the (i-1)-th image to be processed and second brightness increment information of the i-th image to be processed relative to the repaired image of the (i-1)-th image to be processed, and the image restoration module includes:
the first restoration submodule is used for multiplying the ith image to be processed by the first brightness increment information to obtain a first restoration image of the ith image to be processed; the second restoration submodule is used for multiplying the restoration image of the ith-1 th image to be processed with the second brightness increment information to obtain a second restoration image of the ith image to be processed; a first weight prediction sub-module, configured to perform weight prediction on the first repaired image and the second repaired image according to the ith to-be-processed image and event information of the ith to-be-processed image, so as to obtain a first weight of the first repaired image and a second weight of the second repaired image; and the first weighted addition submodule is used for carrying out weighted addition on the first repaired image and the second repaired image according to the first weight and the second weight to obtain a repaired image of the ith image to be processed.
In a possible implementation manner, the repair images of the to-be-processed images include 2n +1 repair images, n is a positive integer, the brightness increment information further includes third brightness increment information of the 2n +1 repair images of the ith to-be-processed image relative to the 2n +1 repair images of the i-1 to-be-processed image, and the image repair module further includes:
the third repairing sub-module is used for multiplying the first repairing image and the second repairing image by the third brightness increment information respectively to obtain 2n +1 groups of third repairing images; a second weight prediction sub-module, configured to perform weight prediction on each third repair image of each of the 2n +1 groups of third repair images according to the ith to-be-processed image and event information of the ith to-be-processed image, respectively, to obtain a third weight of the 2n +1 groups of third repair images; and the second weighted addition submodule is used for carrying out weighted addition on each third repaired image in each group of the 2n +1 groups of third repaired images according to the third weight to obtain 2n +1 repaired images of the ith image to be processed.
In a possible implementation manner, the second brightness increment information includes 2n+1 pieces of second brightness increment information, and the second repair sub-module is configured to: multiply the 2n+1 repaired images of the (i-1)-th image to be processed by the corresponding pieces among the 2n+1 pieces of second brightness increment information, respectively, to obtain 2n+1 second repaired images of the i-th image to be processed.
In one possible implementation manner, the number of the images to be processed is N, where N is an integer and 1< i ≦ N, and the apparatus further includes: and the video restoration module is used for determining restoration videos corresponding to the N images to be processed according to the restoration images of the N images to be processed.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, the brightness increment of the current image can be determined through the event information of the current image and the previous image; and determining the repaired image of the current image according to the current image, the repaired image of the previous image and the brightness increment information, so that the image is repaired through the event information and the repaired result of the previous image, the details in the image are reserved, and the image deblurring effect is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2a, 2b, and 2c show schematic diagrams of neural networks according to embodiments of the present disclosure.
Fig. 3 shows a schematic diagram of a process of an image processing method according to an embodiment of the present disclosure.
Fig. 4 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure, as shown in fig. 1, the method comprising:
in step S11, determining brightness increment information of an ith to-be-processed image according to event information of the ith to-be-processed image and event information of an i-1 th to-be-processed image, where the event information is acquired by an event acquisition device, and i is an integer greater than 1;
in step S12, a restored image of the ith to-be-processed image is determined according to the ith to-be-processed image, the restored image of the (i-1) th to-be-processed image, and the brightness increment information of the ith to-be-processed image, where the definition of the restored image is greater than that of the to-be-processed image.
In one possible implementation, the image processing method may be performed by an electronic device such as a terminal device or a server, the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling a computer readable instruction stored in a memory. Alternatively, the method may be performed by a server.
In a possible implementation manner, the image to be processed may be, for example, a video frame acquired by an image acquisition device (e.g., a camera); such an image may suffer from low definition, blur, a small dynamic range, and the like. In this case, the image to be processed is restored, i.e., deblurred, using the event information corresponding to it, acquired by an event acquisition device (e.g., an event camera). For the event information to correspond to the image to be processed, the time at which the image is captured may fall within the preset time period over which the event information is collected. The event information represents the brightness change of each pixel in the image within that preset time period: its value may be a positive number for a brightness increase, a negative number for a brightness decrease (dimming), or zero for no change. The present disclosure is not limited in this respect.
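As an illustration of this event representation, the following sketch accumulates a hypothetical list of events into a per-pixel event frame over a preset time window; positive values indicate brightening, negative values dimming, and zero no change. The event list, image size, and time window are invented for illustration only.

```python
import numpy as np

H, W = 4, 4
# Hypothetical events: (x, y, polarity, timestamp);
# polarity +1 = brightness increase, -1 = brightness decrease.
events = [(0, 0, +1, 0.01), (0, 0, +1, 0.02), (2, 3, -1, 0.015), (1, 1, +1, 0.03)]

t_start, t_end = 0.0, 0.05  # preset time period of the image to be processed
event_frame = np.zeros((H, W))
for x, y, p, t in events:
    if t_start <= t < t_end:
        event_frame[y, x] += p  # >0: brightening, <0: dimming, 0: no change

assert event_frame[0, 0] == 2 and event_frame[3, 2] == -1
```

In practice an event camera emits such tuples asynchronously; this sketch only shows how their polarities combine per pixel within the window.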
In one possible implementation, N images to be processed are provided, where N is an integer greater than 1. For the current i-th image to be processed (1< i ≦ N), the brightness increment information of the i-th image to be processed may be determined in step S11 according to the event information of the i-th image to be processed and the event information of the i-1-th image to be processed. The brightness increment information is used for representing the brightness difference between the event information of the ith image to be processed and the event information of the (i-1) th image to be processed.
The event information can be processed through a convolutional neural network, for example, the event information of the ith image to be processed and the event information of the (i-1) th image to be processed are superposed, the event feature of the superposed event information is extracted, and then the brightness increment information of the ith image to be processed is determined according to the event feature. The network structure of the convolutional neural network is not limited by this disclosure.
In a possible implementation manner, after the brightness increment information of the i-th image to be processed is obtained, in step S12 the brightness increment information may be multiplied by the i-th image to be processed and by the repaired image of the (i-1)-th image to be processed, respectively, to obtain a plurality of preliminarily repaired images; the preliminarily repaired images are then fused to obtain the repaired image of the i-th image to be processed.
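The multiply-then-fuse step just described can be sketched as follows. The increment maps and fusion weights here are hypothetical stand-ins for network outputs; only the arithmetic of step S12 is illustrated.

```python
import numpy as np

rng = np.random.default_rng(42)
blurred = rng.uniform(0.2, 0.8, (4, 4))        # i-th image to be processed
prev_restored = rng.uniform(0.2, 0.8, (4, 4))  # repaired (i-1)-th image

# Hypothetical brightness-increment maps (network outputs in the method).
C_i = np.full((4, 4), 1.1)  # increment applied to the blurred i-th image
P_i = np.full((4, 4), 0.9)  # increment applied to the previous repaired image

first_repair = blurred * C_i         # preliminary repair from the blurred frame
second_repair = prev_restored * P_i  # preliminary repair from the previous result

# Hypothetical per-pixel fusion weights (weight prediction in the method); they sum to 1.
w1 = np.full((4, 4), 0.6)
w2 = 1.0 - w1
restored = w1 * first_repair + w2 * second_repair
```

In the method, C_i, P_i, w1, and w2 would all be per-pixel maps predicted from the image and its event information rather than constants.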
Event information may represent the difference between a blurred image and a sharp image, and information in a restored image of a previous image may preserve details in the image. Therefore, the image to be processed is restored through the brightness increment information and the information in the restored image of the previous image, so that the definition of the restored image is greater than that of the image to be processed, and the loss of image details is reduced. That is, after the processing of steps S11-S12, the deblurring of the image is realized.
In a possible implementation manner, for the 1 st to-be-processed image in the N to-be-processed images, the event information of the initial 0 th to-be-processed image (e.g., a preset grayscale image, a random grayscale image, or the 1 st to-be-processed image itself) and the restored image of the 0 th to-be-processed image (e.g., deblurred by the related art) may be set, and the restored image of the 1 st to-be-processed image is obtained through the processing in steps S11-S12. The present disclosure does not limit the image content and the specific repairing manner of the 0 th image to be processed and the repaired image thereof.
In one possible implementation manner, the N images to be processed are sequentially processed through steps S11-S12, so that a restored image of the N images to be processed can be obtained. The restored images of the N images to be processed can be respectively output, and the restored images of the N images to be processed can also be combined into a restored video, so that the deblurring process of the whole video is completed.
According to the embodiment of the disclosure, the brightness increment of the current image can be determined through the event information of the current image and the previous image; and determining the repaired image of the current image according to the current image, the repaired image of the previous image and the brightness increment information, so that the image is repaired through the event information and the repaired result of the previous image, the details in the image are reserved, and the image deblurring effect is improved.
In one possible implementation, step S11 may include:
performing feature extraction on the event information of the ith image to be processed and the event information of the (i-1) th image to be processed to obtain a first event feature of the ith image to be processed;
convolving the first event feature according to the convolution kernel tensor of the first event feature to obtain a second event feature of the ith image to be processed, wherein the convolution kernel tensor comprises convolution kernels of all channels of the first event feature;
and predicting brightness increment of the second event characteristic to obtain brightness increment information of the ith image to be processed.
For example, the event information of the ith image to be processed and the event information of the (i-1) th image to be processed may be superimposed, the superimposed event information may be input into a preset feature extraction network for processing, and the first event feature may be output. The feature extraction network may include a plurality of convolutional layers, a plurality of residual blocks, etc., which are not limited by this disclosure.
In one possible implementation, a three-dimensional convolution kernel tensor of the first event feature may be set, where the convolution kernel tensor includes convolution kernels of the channels of the first event feature, the size of the convolution kernels is a preset size K × K, and K is a positive integer.
In one possible implementation, the first event feature may be convolved by the convolution kernel tensor to obtain the second event feature. Therefore, the enhancement of the event characteristics can be realized, and the precision of subsequent image processing is improved.
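A minimal sketch of this per-channel convolution, assuming the kernel tensor holds one K×K kernel per channel of the feature; the loop-based form is for clarity, not speed.

```python
import numpy as np

def depthwise_conv(feature, kernels):
    """Convolve each channel of feature (C, H, W) with its own K x K kernel
    from kernels (C, K, K), as in the per-channel kernel tensor above."""
    C, H, W = feature.shape
    K = kernels.shape[1]
    pad = K // 2
    padded = np.pad(feature, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(feature)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(padded[c, i:i + K, j:j + K] * kernels[c])
    return out

feature = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)  # toy first event feature
identity = np.zeros((2, 3, 3))
identity[:, 1, 1] = 1.0  # identity kernel per channel leaves the feature unchanged
assert np.allclose(depthwise_conv(feature, identity), feature)
```

In deep-learning frameworks this corresponds to a depthwise (grouped) convolution with one group per channel; here the kernels would be supplied by the convolution kernel prediction rather than learned as fixed weights.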
In a possible implementation manner, the second event feature may be input into a preset increment prediction network, which predicts the brightness increment of the second event feature to obtain the brightness increment information. There may be several pieces of brightness increment information, for example, the brightness increment information of the i-th image to be processed relative to the (i-1)-th image to be processed (obtained by the Event-based Double Integral (EDI) principle), the brightness increment information of the i-th image to be processed relative to the repaired image of the (i-1)-th image to be processed (obtained by an event-based single integral principle), and so on. The increment prediction network may include multiple deconvolution layers, multiple residual blocks, etc., which are not limited by this disclosure. In this way, the brightness increment information of the i-th image to be processed can be obtained for subsequent processing.
During the collection of event information, an event is triggered whenever the brightness change at a pixel exceeds a preset threshold, so the sum of the events captured within a time interval represents the proportion of the brightness change. In the physical model of event-based video reconstruction, a blurred image frame (i.e., an image to be processed) may be approximated as the average of a number of latent image frames discretized over the exposure interval. In this case, the brightness increment information of the image to be processed may be obtained from the preset threshold of the brightness change and the average of the brightness-change proportions over the interval. That is, the first brightness increment information of the i-th image to be processed relative to the (i-1)-th image to be processed is obtained by the event-based double integral principle.
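The double-integral relation can be checked numerically. Assuming the model above (each latent frame equals a reference frame scaled by exp(c * E), and the blurred frame is their average), dividing the blurred frame by the average exponential event integral recovers the reference frame. The threshold c, event sums, and frame contents below are toy values.

```python
import numpy as np

c = 0.2  # hypothetical contrast threshold (log-intensity step per event)
# Cumulative event sums from the reference frame to each of 5 latent frames.
E = np.array([0.0, 1.0, 2.0, 1.0, 0.0])

L_ref = np.full((2, 2), 0.5)                  # hypothetical sharp reference frame
latents = [L_ref * np.exp(c * e) for e in E]  # latent frames under the model
B = np.mean(latents, axis=0)                  # blurred frame = average of latents

ratio = np.mean(np.exp(c * E))                # discrete "double integral" term
recovered = B / ratio                         # EDI-style inversion
assert np.allclose(recovered, L_ref)
```

In the full method the event sum E is a per-pixel map rather than one scalar per latent frame; the scalar form is used only to keep the sketch small.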
In one possible implementation, the brightness increment information of the blurred image frame relative to each latent image frame may be obtained from the proportion of the brightness change between the blurred image frame and that latent image frame. That is, the second brightness increment information of the ith image to be processed relative to the restored image of the (i-1)th image to be processed is obtained by the event-based single integral principle.
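The two integral principles can be sketched numerically as follows. This is a minimal numpy illustration of the classical event-integral relations, not the learned increment prediction of this disclosure; the contrast threshold value `c` and the array layout are assumptions.

```python
import numpy as np

def double_integral_increment(events, c=0.2):
    """First (double-integral) brightness increment: the per-pixel factor that
    maps a blurred frame to a latent sharp frame.  events[k] is the signed
    per-pixel event sum accumulated from the latent timestamp to discrete
    time k within the exposure.  Per the EDI model, each latent frame is
    L(t_k) = L(f) * exp(c * E(f -> t_k)) and the blurred frame is the mean
    over k, so L(f) = B * C with C = 1 / mean_k exp(c * E(f -> t_k))."""
    return 1.0 / np.mean(np.exp(c * events), axis=0)

def single_integral_increment(events_between, c=0.2):
    """Second (single-integral) brightness increment: maps the previous
    restored frame to the current latent frame via the events accumulated
    between the two timestamps: L(t2) = L(t1) * exp(c * E(t1 -> t2))."""
    return np.exp(c * events_between)
```

With no events the scene brightness is unchanged, so both increments reduce to 1 everywhere.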
Fig. 2a, 2b, and 2c show schematic diagrams of neural networks according to embodiments of the present disclosure. As shown in fig. 2a, a neural network according to an embodiment of the present disclosure includes a feature extraction network 21 and an increment prediction network 22. The event information E_i of the ith image to be processed and the event information E_{i-1} of the (i-1)th image to be processed are superimposed; the superimposed information is input into the feature extraction network 21 and processed by a plurality of network blocks (each comprising a convolution layer and a residual block) with channel numbers 64, 96, and 128 in sequence, and the last network block outputs the first event feature Q_i (with 128 channels). The first event feature Q_i is convolved with the convolution kernel tensor H_i to obtain the second event feature G_i. The second event feature G_i is input into the increment prediction network 22 and processed by a plurality of network blocks (each comprising a deconvolution layer and a residual block) with channel numbers 96, 64, and 32; the last three network blocks output three kinds of brightness increment information, including the first brightness increment information C_i, the second brightness increment information P_i, and the third brightness increment information I_i.
In one possible implementation, step S11 may further include:
performing convolution kernel prediction on each channel of the first event characteristic according to the reference information corresponding to the ith image to be processed to obtain a convolution kernel tensor of the first event characteristic, wherein the number of the channels of the convolution kernel tensor is the same as that of the channels of the first event characteristic,
wherein the reference information includes the ith image to be processed and/or event information of the ith image to be processed.
For example, convolution kernel prediction may be performed on each channel of the first event feature according to information (which may be referred to as reference information) corresponding to the image to be processed, so as to generate a dynamic convolution kernel according to the information of the image to be processed, thereby achieving enhancement of the event feature. The reference information may include at least the ith to-be-processed image itself and/or event information of the ith to-be-processed image, thereby improving the accuracy of the predicted convolution kernel.
In one possible implementation, the reference information may further include: at least one of the i-1 th image to be processed, the event information of the i-1 th image to be processed and the repaired image of the i-1 th image to be processed. That is, the convolution kernels of the respective channels of the first event feature may be predicted using more information associated with the image to be processed as reference information.
In a possible implementation manner, the ith image to be processed, the (i-1)th image to be processed, the event information of the ith image to be processed, the event information of the (i-1)th image to be processed, and the restored image of the (i-1)th image to be processed may be superimposed, and the superimposed information may be input into a convolution kernel prediction network for processing to obtain a convolution kernel matrix (of size 128 × K²). The convolution kernel matrix is reshaped (Reshape) to obtain the convolution kernel tensor of the first event feature (of scale K × K × 128), where the number of channels of the convolution kernel tensor is the same as that of the first event feature.
As shown in fig. 2b, the neural network according to an embodiment of the present disclosure further includes a convolution kernel prediction network 23. The ith image to be processed B_i, the (i-1)th image to be processed B_{i-1}, the event information E_i of the ith image to be processed, the event information E_{i-1} of the (i-1)th image to be processed, and the restored image S_{i-1} of the (i-1)th image to be processed are superimposed; the superimposed information is input into the convolution kernel prediction network 23 and processed by a plurality of network blocks (each comprising a convolution layer and a residual block) with channel numbers 64, 96, and 128 in sequence, and the last network block outputs a convolution kernel matrix (of size 128 × K²). The convolution kernel matrix is reshaped (Reshape) to obtain the convolution kernel tensor H_i of the first event feature (of scale K × K × 128).
In this way, the current image, the events of the current image, and the information of the previous image all participate in the prediction of the convolution kernel, further improving the accuracy of the dynamic convolution kernel and thus the enhancement of the event features.
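The per-channel dynamic convolution that the predicted kernel tensor performs on the first event feature can be sketched as follows. The kernel size K, zero "same" padding, and channels-last layout are assumptions, and the explicit loop is for clarity, not speed.

```python
import numpy as np

def apply_dynamic_kernels(feature, kernel_vec, K=5):
    """Depthwise filtering of a C-channel feature map (H, W, C) with per-channel
    dynamic kernels, sketching the enhancement of the first event feature Q_i by
    the predicted kernel tensor H_i.  kernel_vec has length C*K*K (the flat
    convolution kernel matrix) and is reshaped to (K, K, C); each channel is
    then cross-correlated with its own kernel (deep-learning-style
    'convolution', i.e. without kernel flip)."""
    H, W, C = feature.shape
    kernels = kernel_vec.reshape(K, K, C)        # Reshape: C*K^2 -> K x K x C
    pad = K // 2
    padded = np.pad(feature, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros_like(feature)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + K, x:x + K, :]  # K x K x C window
            out[y, x, :] = np.sum(patch * kernels, axis=(0, 1))
    return out
```

With a delta kernel (1 at the center of each channel, 0 elsewhere) the feature map passes through unchanged, which is a convenient sanity check.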
In a possible implementation manner, the brightness increment information obtained in step S11 may include first brightness increment information (e.g., double-integration brightness increment information obtained based on a double-integration principle) of the ith image to be processed relative to the (i-1) th image to be processed, and second brightness increment information (e.g., single-integration brightness increment information obtained based on a single-integration principle) of the ith image to be processed relative to the repaired image of the (i-1) th image to be processed.
After the brightness increment information is obtained, the i-th image to be processed may be subjected to the deblurring process in step S12. Wherein, the step S12 may include:
multiplying the ith image to be processed by the first brightness increment information to obtain a first repair image of the ith image to be processed;
multiplying the repaired image of the ith-1 th image to be processed by the second brightness increment information to obtain a second repaired image of the ith image to be processed;
according to the ith image to be processed and the event information of the ith image to be processed, performing weight prediction on the first repair image and the second repair image to obtain a first weight of the first repair image and a second weight of the second repair image;
and according to the first weight and the second weight, carrying out weighted addition on the first repaired image and the second repaired image to obtain a repaired image of the ith image to be processed.
For example, a preliminary repair may be performed on the i-th image to be processed at step S12. Multiplying the ith image to be processed by the first brightness increment information to obtain a first repaired image of the ith image to be processed, wherein the first repaired image is a preliminarily repaired image; and meanwhile, multiplying the repaired image of the (i-1) th image to be processed by the second brightness increment information to obtain a second repaired image of the (i) th image to be processed, wherein the second repaired image is also an initially repaired image.
In a possible implementation manner, the ith image to be processed, the event information of the ith image to be processed, the first repair image and the second repair image may be superimposed; and inputting the superposed information into a weight prediction network to perform weight prediction to obtain a first weight of the first repaired image and a second weight of the second repaired image. The weight prediction network may be a convolutional neural network, which includes a plurality of convolutional layers, activation layers, etc., and the disclosure is not limited thereto.
As shown in fig. 2c, the neural network according to the embodiment of the present disclosure further includes a weight prediction network 24. The ith image to be processed B_i, the event information E_i of the ith image to be processed, the first repaired image F_i, and the second repaired image are superimposed after reshaping; the superimposed information is input into the weight prediction network 24 and processed by a plurality of 3D convolution layers (with 64 channels), and Sigmoid activation is applied to the obtained features to obtain a weight map M_i of the first and second repaired images, from which the first weight and the second weight can be determined. Processing with 3D convolution layers improves the precision of the weight prediction.
In a possible implementation manner, the first repaired image and the second repaired image may be added in a weighted manner according to the first weight and the second weight, so as to obtain a repaired image of the ith image to be processed. By the method, the restored image with higher quality can be obtained, and the image deblurring effect is improved.
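The fusion of the two preliminary repairs can be sketched as follows. Treating the Sigmoid weight map M as the first weight and 1−M as the second weight is an assumption consistent with the single weight map of Fig. 2c, not a statement of the exact parameterization used.

```python
import numpy as np

def fuse_repairs(blurred, prev_restored, inc1, inc2, weight):
    """Sketch of the deblurring step S12: the first repaired image is the
    blurred frame scaled by the double-integral (first) increment, the second
    is the previous restored frame scaled by the single-integral (second)
    increment, and the two are blended with a predicted per-pixel weight map
    in [0, 1]."""
    first = blurred * inc1           # preliminary repair from B_i
    second = prev_restored * inc2    # preliminary repair propagated from S_{i-1}
    return weight * first + (1.0 - weight) * second
```

A weight map of all ones keeps only the first repaired image; all zeros keeps only the propagated one.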
In a possible implementation manner, after the brightness increment information of the ith image to be processed is obtained in step S11, the ith image to be processed may be deblurred and frame-interpolated in step S12. For example, n images (n being a positive integer) may be interpolated before and after each image to be processed, so as to increase the frame rate. In this case, the repaired image of each image to be processed comprises 2n+1 repaired images. For example, when n = 3, the repaired image of the image to be processed includes 7 repaired images.
In a possible implementation manner, the i-1 st to-be-processed image has 2n +1 repaired images, and therefore, the second brightness increment information of the i-th to-be-processed image relative to the repaired image of the i-1 st to-be-processed image is also 2n + 1. In this case, the step of multiplying the restored image of the i-1 th to-be-processed image by the second brightness increment information to obtain a second restored image of the i-th to-be-processed image may include:
and multiplying the 2n +1 repaired images of the i-1 th image to be processed by corresponding second brightness increment information in the 2n +1 second brightness increment information respectively to obtain 2n +1 second repaired images of the i-1 th image to be processed.
That is, 2n+1 pieces of second brightness increment information can be obtained in step S11, and further, in step S12, the 2n+1 repaired images of the (i-1)th image to be processed can be multiplied by the corresponding second brightness increment information among those 2n+1 pieces, so as to obtain 2n+1 second repaired images, which are preliminary repairs of the ith image to be processed propagated from the 2n+1 repaired images of the (i-1)th image to be processed. In this way, the restoration effect of the image can be improved.
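The per-frame multiplication can be sketched as below; the helper name is illustrative.

```python
import numpy as np

def second_repairs(prev_restored_frames, second_increments):
    """With frame interpolation, the (i-1)th image has 2n+1 restored frames and
    there are 2n+1 pieces of second brightness increment information; each
    restored frame is multiplied elementwise by its corresponding increment
    to give 2n+1 second repaired images of the ith image."""
    assert len(prev_restored_frames) == len(second_increments)
    return [s * p for s, p in zip(prev_restored_frames, second_increments)]
```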
In a possible implementation manner, the brightness increment information further includes third brightness increment information (for example, single-integration brightness increment information obtained based on a single-integration principle) of 2n +1 repaired images of the i-th image to be processed relative to 2n +1 repaired images of the i-1 th image to be processed. That is, each of the repaired images has a corresponding third luminance increment information, and 2n +1 third luminance increment information is obtained.
In one possible implementation, step S12 may further include:
multiplying the first repaired image and the second repaired image by the third brightness increment information respectively to obtain 2n +1 groups of third repaired images;
respectively performing weight prediction on each third repair image in each group of the 2n +1 groups of third repair images according to the ith image to be processed and the event information of the ith image to be processed to obtain a third weight of the 2n +1 groups of third repair images;
and according to the third weight, performing weighted addition on each third repaired image in each group of the 2n +1 groups of third repaired images to obtain 2n +1 repaired images of the ith image to be processed.
For example, the first repaired image (1 image) and the second repaired images (2n+1 images) may be multiplied by the 2n+1 pieces of third brightness increment information respectively, resulting in 2n+1 groups of third repaired images. Each piece of third brightness increment information yields one group of third repaired images, each group containing 2n+2 images. For example, when n = 3, 7 groups of third repaired images are obtained, 8 images per group, 56 images in total.
In a possible implementation manner, the weight value prediction may be performed on each third repair image of each group. For any one of the 2n +1 groups of third repair images, the ith to-be-processed image, the event information of the ith to-be-processed image, and the 2n +2 third repair images of the group may be superimposed, and the superimposed information is input to a weight prediction network to perform weight prediction, so as to obtain a third weight of the 2n +2 third repair images of the group. In this way, the 2n +1 groups of third repair images are grouped and processed, so that the third weight values of the 2n +1 groups of third repair images can be obtained. The weight prediction network may be a convolutional neural network, which includes a plurality of convolutional layers, activation layers, etc., and the disclosure is not limited thereto.
In a possible implementation manner, for any one of the 2n +1 groups of third repaired images, the 2n +2 third repaired images of the group may be subjected to weighted addition according to the third weights of the 2n +2 third repaired images of the group, so as to obtain 1 repaired image of the ith to-be-processed image. Thus, the 2n +1 groups of the third repair images are subjected to weighted addition, and 2n +1 repair images of the ith image to be processed can be obtained.
By the method, the deblurring and frame interpolation processes of the image to be processed can be realized, the deblurring effect of the image is improved, and the frame rate of the image is improved.
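The grouped weighted addition can be sketched as follows. Normalizing the third weights within each group is an assumption for the sketch; the weight prediction network of this disclosure may already output suitably scaled maps.

```python
import numpy as np

def fuse_groups(groups, weights):
    """Sketch of the grouped fusion: `groups` holds 2n+1 groups of 2n+2 third
    repaired images each, and `weights` holds the matching third weight maps.
    Each group is reduced by weighted addition to one restored frame, giving
    the 2n+1 restored frames of the ith image to be processed."""
    restored = []
    for imgs, ws in zip(groups, weights):      # one group -> one restored frame
        ws = np.asarray(ws, dtype=float)
        ws = ws / ws.sum(axis=0)               # normalize weights over the group
        restored.append(np.sum(ws * np.asarray(imgs), axis=0))
    return restored
```

With uniform weights each group simply averages its 2n+2 images.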
FIG. 3 is a schematic diagram illustrating a processing procedure of an image processing method according to an embodiment of the disclosure. As shown in FIG. 3, in step 31, the ith image to be processed B_i, the (i-1)th image to be processed B_{i-1}, the event information E_i of the ith image to be processed, the event information E_{i-1} of the (i-1)th image to be processed, and the restored image S_{i-1} of the (i-1)th image to be processed may be input into the integrating network 311 (which may be called integrating net, and includes the above-mentioned feature extraction network, increment prediction network, and convolution kernel prediction network) to obtain the first brightness increment information C_i (1 piece), the second brightness increment information P_i (2n+1 pieces), and the third brightness increment information I_i (2n+1 pieces) of the ith image to be processed.
In step 32, the ith image to be processed B_i may be multiplied by the first brightness increment information C_i to obtain the first repaired image (1 image); the restored image S_{i-1} of the (i-1)th image to be processed is multiplied by the corresponding second brightness increment information P_i to obtain the second repaired images (2n+1 images), resulting in 2n+2 preliminarily repaired images 321.
In step 33, the 2n+2 preliminarily repaired images 321 may be multiplied by the 2n+1 pieces of third brightness increment information I_i respectively, resulting in 2n+1 groups of third repaired images 331, each group comprising 2n+2 images.
In step 34, the ith image to be processed B_i, the event information E_i of the ith image to be processed, and a group of the third repaired images 331 may be superimposed and input into the weight prediction network 341 (which may be called GateNet) to obtain the weight map M_i of that group of third repaired images; according to the weight of each third repaired image of the group, the 2n+2 third repaired images of the group are weighted and added to obtain 1 repaired image of the ith image to be processed. Processing the 2n+1 groups of third repaired images in this way yields the 2n+1 repaired images S_i of the ith image to be processed, completing the deblurring and frame interpolation of the ith image to be processed.
In one possible implementation, the method may further include: and determining a repair video corresponding to the N images to be processed according to the repair images of the N images to be processed.
That is to say, the N images to be processed may be the N image frames of a blurred, low-frame-rate video to be processed. The N images may be processed in sequence to obtain their restored images, where each restored result is a single image (without frame interpolation) or multiple images (with frame interpolation). A restored video corresponding to the N images to be processed is generated from these restored images. The definition of the restored video is higher than that of the video to be processed, realizing video deblurring; with frame interpolation, the frame rate of the video is also increased, yielding a clear, high-frame-rate video.
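Assembling the restored video from the per-frame results can be sketched as below; simple concatenation in temporal order, with no handling of overlapping timestamps between adjacent inputs, is an assumption of this sketch.

```python
def assemble_video(per_frame_repairs):
    """Build the restored high-frame-rate video: each of the N input frames
    yields 2n+1 restored frames already in temporal order, and the per-frame
    lists are concatenated into one frame sequence."""
    video = []
    for repairs in per_frame_repairs:   # repairs: list of 2n+1 restored frames
        video.extend(repairs)
    return video
```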
In a possible implementation manner, before deploying the neural network, the neural network may be trained, and the image processing method according to an embodiment of the present disclosure further includes:
and training the neural network according to a preset training set, wherein the training set includes a plurality of blurred sample images and a plurality of sharp images corresponding to each sample image. For example, frames may be extracted from a sharp, high-frame-rate video and blurred to obtain the blurred sample images, with the original sharp video serving as the sharp images corresponding to the sample images.
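The synthesis of one blurred training sample can be sketched as follows, using the averaging blur model from the physical model above; `make_blurred_sample` is an illustrative helper, not an API of this disclosure.

```python
import numpy as np

def make_blurred_sample(sharp_frames):
    """Average a window of consecutive sharp high-frame-rate frames to
    synthesize one blurred sample image, consistent with the model in which
    a blurred frame approximates the average of its latent frames; the sharp
    frames themselves are kept as the training targets."""
    stack = np.stack(sharp_frames).astype(float)
    return stack.mean(axis=0)
```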
In a possible implementation manner, the sample images in the training set may be input into the neural network for processing to obtain restored images of the sample images; the loss of the neural network is determined according to the difference between the restored images of the sample images and the corresponding sharp images; the network parameters of the neural network are adjusted by backpropagating the loss; and after multiple iterations, when a training condition (such as network convergence) is met, the trained neural network is obtained. In this way, the training of the neural network is achieved.
According to the image processing method of the embodiments of the present disclosure, based on the triggering principle of the event camera, the events corresponding to the image frames are processed in the feature domain, the brightness differences between image frames are determined, and the image frames are deblurred and frame-interpolated using the double integral and single integral principles. The method exploits the temporal information of the video by propagating the deblurring result of the previous image frame to the current image frame, thereby retaining more image detail and improving the deblurring effect. By fusing deep learning with event-camera deblurring theory in an end-to-end convolutional neural network, clear video images can be restored with higher accuracy and better visual effect.
The image processing method according to the embodiment of the disclosure can be applied to application scenes such as movie shooting, monitoring security protection, video processing and the like, is deployed in electronic equipment such as a mobile terminal and a robot, realizes high-frame-rate video recording, synthesis and the like, and effectively improves the frame rate of a video and the dynamic range of image frames of the video.
It is understood that the above method embodiments of the present disclosure may be combined with one another to form combined embodiments without departing from principle and logic; limited by space, details are not repeated here. Those skilled in the art will appreciate that in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the present disclosure also provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any one of the image processing methods provided by the present disclosure, and the descriptions and corresponding descriptions of the corresponding technical solutions and the corresponding descriptions in the methods section are omitted for brevity.
Fig. 4 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure, which includes, as shown in fig. 4:
an increment information determining module 41, configured to determine brightness increment information of an ith image to be processed according to event information of the ith image to be processed and event information of an (i-1) th image to be processed, where the event information is obtained by an event acquisition device, and i is an integer greater than 1; and the image restoration module 42 is configured to determine a restored image of the ith image to be processed according to the ith image to be processed, the restored image of the (i-1) th image to be processed, and the brightness increment information of the ith image to be processed, where the definition of the restored image is greater than that of the image to be processed.
In a possible implementation manner, the incremental information determining module includes: the characteristic extraction submodule is used for carrying out characteristic extraction on the event information of the ith image to be processed and the event information of the (i-1) th image to be processed to obtain a first event characteristic of the ith image to be processed; the convolution submodule is used for performing convolution on the first event characteristic according to the convolution kernel tensor of the first event characteristic to obtain a second event characteristic of the ith image to be processed, and the convolution kernel tensor comprises convolution kernels of all channels of the first event characteristic; and the increment prediction submodule is used for performing brightness increment prediction on the second event characteristic to obtain brightness increment information of the ith image to be processed.
In a possible implementation manner, the incremental information determining module further includes: and the convolution kernel prediction sub-module is configured to perform convolution kernel prediction on each channel of the first event characteristic according to reference information corresponding to the ith image to be processed to obtain a convolution kernel tensor of the first event characteristic, where a channel number of the convolution kernel tensor is the same as a channel number of the first event characteristic, and the reference information includes event information of the ith image to be processed and/or the ith image to be processed.
In a possible implementation manner, the reference information further includes at least one of the i-1 th to-be-processed image, event information of the i-1 th to-be-processed image, and a restored image of the i-1 th to-be-processed image.
In one possible implementation, the brightness increment information includes first brightness increment information of the ith image to be processed relative to the (i-1)th image to be processed and second brightness increment information of the ith image to be processed relative to the repaired image of the (i-1)th image to be processed, and the image restoration module includes:
the first restoration submodule is used for multiplying the ith image to be processed by the first brightness increment information to obtain a first restoration image of the ith image to be processed; the second restoration submodule is used for multiplying the restoration image of the ith-1 th image to be processed with the second brightness increment information to obtain a second restoration image of the ith image to be processed; a first weight prediction sub-module, configured to perform weight prediction on the first repaired image and the second repaired image according to the ith to-be-processed image and event information of the ith to-be-processed image, so as to obtain a first weight of the first repaired image and a second weight of the second repaired image; and the first weighted addition submodule is used for carrying out weighted addition on the first repaired image and the second repaired image according to the first weight and the second weight to obtain a repaired image of the ith image to be processed.
In a possible implementation manner, the repair images of the to-be-processed images include 2n +1 repair images, n is a positive integer, the brightness increment information further includes third brightness increment information of the 2n +1 repair images of the ith to-be-processed image relative to the 2n +1 repair images of the i-1 to-be-processed image, and the image repair module further includes:
the third repairing sub-module is used for multiplying the first repairing image and the second repairing image by the third brightness increment information respectively to obtain 2n +1 groups of third repairing images; a second weight prediction sub-module, configured to perform weight prediction on each third repair image of each of the 2n +1 groups of third repair images according to the ith to-be-processed image and event information of the ith to-be-processed image, respectively, to obtain a third weight of the 2n +1 groups of third repair images; and the second weighted addition submodule is used for carrying out weighted addition on each third repaired image in each group of the 2n +1 groups of third repaired images according to the third weight to obtain 2n +1 repaired images of the ith image to be processed.
In a possible implementation manner, the second brightness increment information includes 2n +1 second brightness increment information, and the second repair sub-module is configured to: and multiplying the 2n +1 repaired images of the i-1 th image to be processed by corresponding second brightness increment information in the 2n +1 second brightness increment information respectively to obtain 2n +1 second repaired images of the i-1 th image to be processed.
In one possible implementation manner, the number of the images to be processed is N, where N is an integer and 1< i ≦ N, and the apparatus further includes: and the video restoration module is used for determining restoration videos corresponding to the N images to be processed according to the restoration images of the N images to be processed.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product, which includes computer readable code, and when the computer readable code runs on a device, a processor in the device executes instructions for implementing the image processing method provided in any one of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the image processing method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 5, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 6 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 6, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., light pulses passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In one alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. An image processing method, comprising:
determining brightness increment information of an ith image to be processed according to event information of the ith image to be processed and event information of an (i-1)th image to be processed, wherein the event information is acquired by an event acquisition device, and i is an integer greater than 1;
and determining a repaired image of the ith image to be processed according to the ith image to be processed, a repaired image of the (i-1)th image to be processed, and the brightness increment information of the ith image to be processed, wherein the sharpness of the repaired image is greater than that of the image to be processed.
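Purely as an illustrative sketch (not part of the claims, and not the patented network), the two steps of claim 1 can be pictured with placeholder functions: a hypothetical mapping from event information to a per-pixel brightness gain, and a simple fusion of the current frame with the previous repaired frame. All function names and the toy rules inside them are assumptions for illustration:

```python
import numpy as np

def brightness_increment(event_i, event_prev):
    # Hypothetical stand-in for the learned mapping from event-camera
    # information to a per-pixel brightness gain (toy rule, NOT the
    # claimed network).
    return 1.0 + 0.1 * np.tanh(event_i - event_prev)

def repair_image(image_i, repaired_prev, gain):
    # Fuse the current (blurry) frame with the previous repaired frame,
    # both modulated by the predicted brightness gain; an equal-weight
    # average stands in for the learned fusion of the claims.
    return 0.5 * (image_i * gain + repaired_prev * gain)

event_prev = np.zeros((2, 2))
event_i = np.ones((2, 2))
image_i = np.full((2, 2), 0.5)
repaired_prev = np.full((2, 2), 0.6)

gain = brightness_increment(event_i, event_prev)
repaired_i = repair_image(image_i, repaired_prev, gain)
```

In the claimed method the recursion then continues: `repaired_i` becomes the previous repaired frame when processing frame i+1.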
2. The method according to claim 1, wherein the determining the brightness increment information of the ith image to be processed according to the event information of the ith image to be processed and the event information of the (i-1) th image to be processed comprises:
performing feature extraction on the event information of the ith image to be processed and the event information of the (i-1) th image to be processed to obtain a first event feature of the ith image to be processed;
convolving the first event feature according to the convolution kernel tensor of the first event feature to obtain a second event feature of the ith image to be processed, wherein the convolution kernel tensor comprises convolution kernels of all channels of the first event feature;
and performing brightness increment prediction on the second event feature to obtain the brightness increment information of the ith image to be processed.
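The per-channel convolution of claim 2 — one kernel per channel of the first event feature, taken from the convolution kernel tensor — is essentially a depthwise convolution. A minimal numpy sketch, loop-based for clarity (shapes and names are illustrative assumptions; a real implementation would use a vectorized depthwise convolution):

```python
import numpy as np

def per_channel_conv(feature, kernels):
    """Convolve each channel of `feature` (C, H, W) with its own kernel
    from `kernels` (C, k, k) -- the per-channel 'convolution kernel
    tensor' of claim 2 -- using 'same' zero padding and stride 1."""
    C, H, W = feature.shape
    k = kernels.shape[-1]
    p = k // 2
    padded = np.pad(feature, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(feature)
    for c in range(C):                      # one kernel per channel
        for y in range(H):
            for x in range(W):
                patch = padded[c, y:y + k, x:x + k]
                out[c, y, x] = np.sum(patch * kernels[c])
    return out

# Sanity check: an identity kernel (1 at the center) leaves each channel unchanged.
feature = np.arange(18, dtype=float).reshape(2, 3, 3)
identity = np.zeros((2, 3, 3))
identity[:, 1, 1] = 1.0
out = per_channel_conv(feature, identity)
```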
3. The method according to claim 2, wherein the determining the brightness increment information of the ith image to be processed according to the event information of the ith image to be processed and the event information of the (i-1) th image to be processed further comprises:
performing convolution kernel prediction on each channel of the first event feature according to reference information corresponding to the ith image to be processed to obtain the convolution kernel tensor of the first event feature, wherein the number of channels of the convolution kernel tensor is the same as the number of channels of the first event feature,
wherein the reference information includes the ith image to be processed and/or the event information of the ith image to be processed.
4. The method according to claim 3, wherein the reference information further includes at least one of: the (i-1)th image to be processed, event information of the (i-1)th image to be processed, and a repaired image of the (i-1)th image to be processed.
5. The method according to any one of claims 1 to 4, wherein the brightness increment information comprises: first brightness increment information of the ith image to be processed relative to the (i-1)th image to be processed, and second brightness increment information of the ith image to be processed relative to the repaired image of the (i-1)th image to be processed,
determining the repaired image of the ith image to be processed according to the ith image to be processed, the repaired image of the (i-1) th image to be processed and the brightness increment information of the ith image to be processed, including:
multiplying the ith image to be processed by the first brightness increment information to obtain a first repair image of the ith image to be processed;
multiplying the repaired image of the (i-1)th image to be processed by the second brightness increment information to obtain a second repaired image of the ith image to be processed;
according to the ith image to be processed and the event information of the ith image to be processed, performing weight prediction on the first repair image and the second repair image to obtain a first weight of the first repair image and a second weight of the second repair image;
and according to the first weight and the second weight, carrying out weighted addition on the first repaired image and the second repaired image to obtain a repaired image of the ith image to be processed.
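For illustration only, the fusion steps of claim 5 can be sketched as follows; `inc1`/`inc2` stand in for the predicted first and second brightness increment information, and `w1`/`w2` for the predicted weights (assumed normalized so that w1 + w2 = 1). All names are assumptions, not from the patent:

```python
import numpy as np

def fuse(image_i, repaired_prev, inc1, inc2, w1, w2):
    # First repaired image: the current blurry frame scaled by the
    # first brightness increment.
    first = image_i * inc1
    # Second repaired image: the previous repaired frame scaled by the
    # second brightness increment.
    second = repaired_prev * inc2
    # Weighted addition with the (per-pixel, in the patent) weights.
    return w1 * first + w2 * second

image_i = np.array([2.0])
repaired_prev = np.array([4.0])
repaired_i = fuse(image_i, repaired_prev, inc1=1.5, inc2=0.5, w1=0.25, w2=0.75)
```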
6. The method according to claim 5, wherein the repaired image of each image to be processed includes 2n+1 repaired images, n being a positive integer, and the brightness increment information further includes third brightness increment information of the 2n+1 repaired images of the ith image to be processed relative to the 2n+1 repaired images of the (i-1)th image to be processed,
determining the repaired image of the ith image to be processed according to the ith image to be processed, the repaired image of the (i-1) th image to be processed and the brightness increment information of the ith image to be processed, and further comprising:
multiplying the first repaired image and the second repaired image by the third brightness increment information respectively to obtain 2n+1 groups of third repaired images;
performing weight prediction on each third repaired image in each of the 2n+1 groups of third repaired images according to the ith image to be processed and the event information of the ith image to be processed, to obtain third weights of the 2n+1 groups of third repaired images;
and according to the third weights, performing weighted addition on each third repaired image in each of the 2n+1 groups of third repaired images to obtain 2n+1 repaired images of the ith image to be processed.
7. The method according to claim 5 or 6, wherein the second brightness increment information includes 2n+1 pieces of second brightness increment information,
the multiplying the repaired image of the (i-1)th image to be processed by the second brightness increment information to obtain a second repaired image of the ith image to be processed includes:
multiplying the 2n+1 repaired images of the (i-1)th image to be processed by the corresponding second brightness increment information among the 2n+1 pieces of second brightness increment information, respectively, to obtain 2n+1 second repaired images of the ith image to be processed.
8. The method according to any one of claims 1 to 7, wherein the number of images to be processed is N, N being an integer and 1 < i ≤ N,
the method further comprises the following steps: and determining a repair video corresponding to the N images to be processed according to the repair images of the N images to be processed.
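Claim 8 assembles the per-frame repaired images into a repaired video. A trivial sketch of that assembly step (a real system would instead encode the frames with a video codec; the function name is an assumption):

```python
import numpy as np

def repaired_video(repaired_frames):
    # Stack the N repaired images (each H x W) into a single (N, H, W)
    # array representing the repaired video's frame sequence.
    return np.stack(repaired_frames, axis=0)

frames = [np.zeros((2, 2)), np.ones((2, 2)), np.full((2, 2), 0.5)]
video = repaired_video(frames)
```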
9. An image processing apparatus characterized by comprising:
the increment information determining module is configured to determine brightness increment information of an ith image to be processed according to event information of the ith image to be processed and event information of an (i-1)th image to be processed, wherein the event information is acquired by an event acquisition device, and i is an integer greater than 1;
and the image repairing module is configured to determine a repaired image of the ith image to be processed according to the ith image to be processed, a repaired image of the (i-1)th image to be processed, and the brightness increment information of the ith image to be processed, wherein the sharpness of the repaired image is greater than that of the image to be processed.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 8.
11. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 8.
CN202010140777.3A 2020-03-03 2020-03-03 Image processing method and device, electronic equipment and storage medium Active CN111369482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010140777.3A CN111369482B (en) 2020-03-03 2020-03-03 Image processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111369482A true CN111369482A (en) 2020-07-03
CN111369482B CN111369482B (en) 2023-06-23

Family

ID=71211189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010140777.3A Active CN111369482B (en) 2020-03-03 2020-03-03 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111369482B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070160128A1 (en) * 2005-10-17 2007-07-12 Qualcomm Incorporated Method and apparatus for shot detection in video streaming
US20170213324A1 (en) * 2016-01-21 2017-07-27 Samsung Electronics Co., Ltd. Image deblurring method and apparatus
CN106991650A (en) * 2016-01-21 2017-07-28 北京三星通信技术研究有限公司 A kind of method and apparatus of image deblurring
CN107330867A (en) * 2017-06-16 2017-11-07 广东欧珀移动通信有限公司 Image combining method, device, computer-readable recording medium and computer equipment
US20180063506A1 (en) * 2015-03-16 2018-03-01 Universite Pierre Et Marie Curie (Paris 6) Method for the 3d reconstruction of a scene
WO2019105305A1 (en) * 2017-11-28 2019-06-06 Oppo广东移动通信有限公司 Image brightness processing method, computer readable storage medium and electronic device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JINSHAN PAN 等: "Deblurring Images via Dark Channel Prior", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
LIYUAN PAN 等: "Bringing a blurry frame alive at high frame-rate with an event camera", 《2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
MEIGUANG JIN 等: "Learning to extract a video sequence from a single motion-blurred image", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
JIANG Meng et al.: "Event camera denoising algorithm under low-dimensional manifold constraints", Journal of Signal Processing *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738442A (en) * 2020-12-24 2021-04-30 中标慧安信息技术股份有限公司 Intelligent monitoring video storage method and system
CN112738442B (en) * 2020-12-24 2021-10-08 中标慧安信息技术股份有限公司 Intelligent monitoring video storage method and system
WO2022141418A1 (en) * 2020-12-31 2022-07-07 华为技术有限公司 Image processing method and device
WO2022209253A1 (en) * 2021-04-02 2022-10-06 ソニーセミコンダクタソリューションズ株式会社 Sensor device, and semiconductor device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant