CN114494085B - Video stream restoration method, system, electronic device and storage medium - Google Patents


Info

Publication number
CN114494085B
Authority
CN
China
Prior art keywords
video stream
information
motion
target object
event
Prior art date
Legal status
Active
Application number
CN202210390552.2A
Other languages
Chinese (zh)
Other versions
CN114494085A (en)
Inventor
蔡维嘉
古家威
李腾
张晟东
王济宇
张立华
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202210390552.2A
Publication of CN114494085A
Application granted
Publication of CN114494085B

Classifications

    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20201 Motion blur correction

Abstract

The application relates to the technical field of computer vision, and in particular to a video stream restoration method, system, electronic device and storage medium. The method comprises the following steps: acquiring mutually calibrated event information and a video stream with motion blur, wherein the event information is acquired and generated by an event camera and the video stream with motion blur is acquired by a visual camera; acquiring target object information with motion blur according to the video stream; acquiring motion trajectory information of a target object according to the event information and the target object information; and removing the motion blur corresponding to the target object information in the video stream according to the motion trajectory information to generate a restored video stream. The method restores the video stream with motion blur through the event information. Because the event information clearly reflects the motion trajectory information of the target object in the video stream, the motion region and the non-motion region can be accurately distinguished according to the event information, thereby improving the restoration effect on the motion blur of the video stream.

Description

Video stream restoration method, system, electronic device and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method, a system, an electronic device, and a storage medium for restoring a video stream.
Background
A video stream is composed of multiple frames of images, so motion blur in a video stream is related to motion blur in the individual images. Image motion blur arises from the exposure mode of the area-array image sensor of a traditional RGB camera: during exposure, relative displacement occurs between the camera and the shot object, so the sensor records the shot object at different time stamps within the same image. As a result, the RGB camera ultimately displays a blurred smear, moving objects in the RGB images lose edge and detail information, the image quality of the RGB images is reduced, and subsequent applications of the RGB images are negatively affected.
In the prior art, an image with motion blur is generally analyzed by deep learning to predict a sharp image, thereby realizing motion blur restoration of the image and, in turn, of the video stream. However, this scheme cannot accurately segment the motion region from the non-motion region in the image, which limits the restoration effect on the motion blur of the video stream.
In view of the above problems, no effective technical solution exists at present.
Disclosure of Invention
The present application aims to provide a method, a system, an electronic device and a storage medium for restoring a video stream, which can effectively and accurately distinguish a motion region from a non-motion region, thereby effectively improving the restoration effect of motion blur of the video stream.
In a first aspect, the present application provides a method for restoring a video stream with motion blur, comprising the steps of:
acquiring event information and the video stream with the motion blur, which are calibrated with each other, wherein the event information is acquired and generated by an event camera, and the video stream with the motion blur is acquired by a visual camera;
acquiring target object information with motion blur according to the video stream;
acquiring motion trail information of a target object according to the event information and the target object information;
and removing the motion blur corresponding to the target object information in the video stream according to the motion trail information to generate a restored video stream.
According to the video stream restoration method, the video stream with motion blur is restored through the event information. Because the event information clearly reflects the motion trajectory information of the target object in the video stream, the motion region and the non-motion region in the video stream can be accurately distinguished according to the event information, which effectively improves the restoration effect on the motion blur of the video stream. Moreover, the motion blur in the video stream can be directionally removed according to the motion trajectory information, so data of the target object in other directions need not be analyzed, which effectively reduces the amount of data processed during motion blur removal.
Optionally, the step of obtaining the target object information with motion blur according to the video stream includes the following sub-steps:
and analyzing the video stream with the motion blur by using a preset target detection model to acquire target object information with the motion blur.
Optionally, the step of removing the motion blur corresponding to the target object information in the video stream according to the motion trajectory information to generate a restored video stream includes the following substeps:
acquiring the motion direction information of each frame of image of the target object information in the video stream according to the motion track information;
removing the motion blur of each frame image in the video stream according to the motion direction information to generate a restored image frame;
and generating the restored video stream according to all the restored image frames.
Optionally, the step of obtaining the motion direction information of each frame of image of the target object information in the video stream according to the motion trajectory information includes the following sub-steps:
and acquiring the motion direction information of the target frame according to the motion track information corresponding to the target object information between the target frame and the previous frame in the video stream so as to acquire the motion direction information of each frame of image of the target object information in the video stream.
Optionally, the step of obtaining the motion direction information of each frame of image of the target object information in the video stream according to the motion trail information includes the following steps:
interpolating the video stream according to the motion trail information to generate a compensated video stream;
and obtaining the motion direction information of the target frame according to the motion track information corresponding to the target object information between the target frame and the previous frame in the compensation video stream, so as to obtain the motion direction information of each frame of image of the target object information in the compensation video stream.
According to the video stream restoration method, frames are interpolated into the video stream according to the motion track information to generate the compensated video stream, and the compensated video stream is then used to obtain the motion direction information of each frame of image of the target object information. Because the motion track of the target object is clearer after the compensation frames are inserted, the accuracy of obtaining the motion direction information of each frame of image is effectively improved.
Optionally, the step of removing the motion blur of each frame of image in the video stream according to the motion direction information to generate a restored image frame includes the following sub-steps:
and performing directional gray value filtering on each frame of image in the video stream according to the target object information and the corresponding motion direction information to generate a restored image frame.
Optionally, the step of obtaining the motion trajectory information of the target object according to the event information and the target object information includes the following substeps:
acquiring target event information according to the event information and the target object information;
and calculating the motion trail information according to the target object mass center of the target event information with different time stamps.
In a second aspect, the present application further provides a video stream restoration system for restoring a video stream with motion blur, comprising:
the event camera is used for acquiring and generating event information;
a vision camera for capturing a video stream with motion blur;
a controller electrically connected to the event camera and the vision camera;
the controller is configured to obtain the event information and the video stream with motion blur, which are calibrated with each other, obtain target object information with motion blur according to the video stream, obtain motion trajectory information of a target object according to the event information and the target object information, and remove the motion blur corresponding to the target object information in the video stream according to the motion trajectory information to generate a restored video stream.
According to the video stream restoration system, the video stream with motion blur is restored through the event information. Because the event information clearly reflects the motion trajectory information of the target object in the video stream, the motion region and the non-motion region in the video stream can be accurately distinguished according to the event information, which effectively improves the restoration effect on the motion blur of the video stream. Moreover, the motion blur in the video stream can be directionally removed according to the motion trajectory information, so data of the target object in other directions need not be analyzed, which effectively reduces the amount of data processed during motion blur removal.
In a third aspect, the present application further provides an electronic device, comprising a processor and a memory, where the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the steps in the method provided in the first aspect are executed.
In a fourth aspect, the present application also provides a storage medium having a computer program stored thereon, where the computer program executes the steps of the method provided in the first aspect when run by a processor.
As can be seen from the above, according to the video stream restoration method, system, electronic device and storage medium provided by the present application, a video stream with motion blur is restored through event information, and since the event information can clearly reflect the motion trajectory information of a target object in the video stream, a motion region and a non-motion region in the video stream can be accurately distinguished according to the event information, so as to effectively improve the restoration effect of the motion blur of the video stream.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
Fig. 1 is a flowchart of a video stream restoration method according to an embodiment of the present disclosure.
Fig. 2 is a comparison diagram of effects before and after restoration of a video stream restoration method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a video stream restoration system according to an embodiment of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals are as follows: 1. an event camera; 2. an RGB camera; 3. a controller; 401. a processor; 402. a memory; 403. a communication bus.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not construed as indicating or implying relative importance.
In a first aspect, as shown in fig. 3, the present application provides a video stream restoration system for restoring a video stream with motion blur, including:
the event camera 1 is used for collecting and generating event information;
a visual camera for capturing a video stream with motion blur;
the controller 3 is electrically connected with the event camera 1 and the vision camera;
the controller 3 is configured to obtain event information and a video stream with motion blur, further obtain target object information with motion blur according to the video stream, further obtain motion trajectory information of a target object according to the event information and the target object information, and further remove the motion blur corresponding to the target object information in the video stream according to the motion trajectory information to generate a restored video stream.
The video stream with motion blur may be acquired by a visual camera of the prior art; the visual camera may be an RGB camera 2, an area-array camera, a line-scan camera, or the like, and is preferably the RGB camera 2. The working principle of the video stream restoration system provided by the present application is the same as that of the video stream restoration method provided in the second aspect, and is not discussed in detail here. According to the video stream restoration system, the video stream with motion blur is restored through the event information. Because the event information clearly reflects the motion trajectory information of the target object in the video stream, the motion region and the non-motion region in the video stream can be accurately distinguished according to the event information, the motion blur in the video stream is directionally removed according to the motion trajectory information, and the restoration effect on the motion blur of the video stream is effectively improved.
In a second aspect, as shown in fig. 1, an embodiment of the present application provides a video stream restoration method for restoring a video stream with motion blur, especially for restoring a video stream with motion blur captured by an RGB camera 2, which includes the following steps:
s1, acquiring event information and a video stream with motion blur, wherein the event information is acquired and generated by an event camera 1, and the video stream with motion blur is acquired by a visual camera;
s2, acquiring target object information with motion blur according to the video stream;
s3, acquiring the motion trail information of the target object according to the event information and the target object information;
and S4, removing the motion blur corresponding to the target object information in the video stream according to the motion trail information to generate a restored video stream.
The event camera 1 is a bionic sensor, and its frame rate is much higher than that of the visual camera. The event information of step S1 includes an event stream, which can represent the motion trajectory of the target object as seen by the event camera 1. The video stream with motion blur may be collected by a visual camera of the prior art, which may be an RGB camera 2, an area-array camera, a line-scan camera, or the like; the visual camera is preferably an RGB camera 2. The causes of motion blur fall mainly into two categories: in the first, the position of the visual camera is fixed while the position of the shot object changes during exposure, so the shot object in the image is motion-blurred; in the second, the position of the visual camera itself changes during exposure, so the whole image is motion-blurred.
Since the event information obtained in step S1 and the video stream with motion blur are in a mutual calibration relationship, the coordinates of the target object in the video stream in the event information or the coordinates of the target object in the event information in the video stream may be obtained by using the preset transformation matrix M corresponding to the mutual calibration relationship.
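The patent does not give code for this coordinate conversion; as a minimal sketch (the function name is illustrative, assuming M is a 3x3 homogeneous transformation obtained from calibration), the mapping between the two image coordinate systems might look like:

```python
import numpy as np

def map_to_event_coords(M, pts):
    """Map pixel coordinates from the video-stream image into the event
    image using a calibrated 3x3 transformation matrix M (homogeneous form)."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ M.T   # rows (u, v, 1) times M^T
    return homog[:, :2] / homog[:, 2:3]    # normalize by the scale term

# With an identity calibration the coordinates are unchanged:
M = np.eye(3)
print(map_to_event_coords(M, [[10.0, 20.0]]))
```

The inverse matrix would map event-image coordinates back into the video stream, giving the two-way conversion the mutual calibration relationship allows.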
In step S2, the target object information with motion blur in the video stream with motion blur is automatically obtained based on methods such as deep learning, machine learning, OpenCV, and the like, where the target object information can represent a displacement condition of a target object in the video stream. It should be understood that the video stream with motion blur includes at least one piece of target object information with motion blur, and when the video stream includes a plurality of target objects with motion blur, step S2 obtains a plurality of pieces of target object information with motion blur at the same time.
Because the target object information represents the displacement of the target object in the video stream, the coordinates of the target object in the event information can be obtained by conversion with the preset transformation matrix M. Each event stream in the event information accurately represents the displacement of a different object along the time line; that is, each event stream represents the motion trajectory of a different object. Therefore, the motion trajectory information in step S3 is the motion trajectory of the target object formed by the event stream corresponding to the target object information. The operation principle is as follows: first, the coordinates of the target object in the event information are converted according to the target object information; then, the event stream of the target object is determined in the event information according to those coordinates. Since this event stream represents the motion trajectory of the target object, the motion trajectory information of the target object is thereby obtained from the target object information and the event information. It should be understood that the operating principle of the event camera 1 is as follows: when an object moves, the event camera 1 encodes the moving object and generates and outputs event points, while objects that do not move are not encoded. Therefore, the area where the moving target object is located can be divided into a motion region according to the event points, and the area outside it into a non-motion region.
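The motion/non-motion segmentation described here can be illustrated with a small sketch (a hypothetical helper, not the patent's implementation): pixels that received event points are marked as the motion region, and everything else forms the non-motion region.

```python
import numpy as np

def motion_mask(events, width, height, dilate=1):
    """Mark pixels around event points (x, y, t, p) as the motion region.
    The event camera only encodes moving objects, so pixels with no
    nearby events belong to the non-motion region."""
    mask = np.zeros((height, width), dtype=bool)
    for x, y, t, p in events:
        x0, x1 = max(0, x - dilate), min(width, x + dilate + 1)
        y0, y1 = max(0, y - dilate), min(height, y + dilate + 1)
        mask[y0:y1, x0:x1] = True   # small dilation closes gaps between events
    return mask
```

The `dilate` margin is an assumption added here so that sparse event points still cover the moving object's area.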
Since the event stream is composed of a plurality of event points, it can accurately reflect the motion trajectory of the moving object. The motion trajectory information of step S3 therefore clearly reflects the motion trajectory of the target object and can indirectly reflect the direction in which the motion blur occurs. Accordingly, step S4 can remove the motion blur in the video stream along the direction in which it was generated, thereby removing the motion blur corresponding to the target object information according to the motion trajectory information and generating the restored video stream.
According to the video stream restoration method provided by the embodiment of the application, the video stream with the motion blur is restored through the event information, and the event information can clearly reflect the motion trail information of the target object in the video stream, so that the motion area and the non-motion area in the video stream can be accurately distinguished according to the event information, and the restoration effect of the motion blur of the video stream is effectively improved.
The calibration relationship between the event information and the video stream in step S1 may be obtained in advance or during the acquisition of the event information and the video stream. The former corresponds to the case where the positional relationship between the visual camera and the event camera 1 is calibrated in advance, and the latter to the case where it is not. For the case where the positional relationship is not calibrated in advance, in some embodiments step S1 includes the following sub-steps:
s11, adjusting the visual angle of the visual camera and the visual angle of the event camera 1 to enable the visual camera and the event camera 1 to be aligned to the same acquisition area, and fixing the positions of the visual camera and the event camera 1 after the adjustment is completed;
s12, collecting event information by using the event camera 1, and collecting a video stream with motion blur by using a visual camera;
s13, acquiring a first gray level image according to the event information, and carrying out gray level processing on each frame image in the video stream with motion blur acquired by the visual camera to generate a second gray level image;
s14, performing corner detection and sub-pixelation on the first gray level image to generate a first corner detection result, and performing corner detection and sub-pixelation on the second gray level image to generate a second corner detection result;
and S15, analyzing the first corner detection result and the second corner detection result by using a preset camera projection model to obtain the calibration relation between the visual camera and the event camera 1.
In step S11, the viewing angle of the vision camera and the viewing angle of the event camera 1 are both 70 °, and if the overlapping viewing angles of the vision camera and the event camera 1 for the same acquisition area are greater than or equal to 60 °, it is determined that the vision camera and the event camera 1 are aligned with the same acquisition area.
The event information comprises a plurality of events, and the data format of each event is (x, y, t, p), where x and y are the coordinates at which the event is triggered, t is the time at which the event is triggered, and p is the polarity of the event. Since noise reduction and other related processing cannot be performed directly on event information in this data format, step S13 first performs noise reduction and similar processing using a prior-art time-window method, and then converts the processed event stream into a grayscale stream to obtain the first grayscale image.
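A rough sketch of turning an event stream into a grayscale image follows; this is one simple accumulation scheme over a time window, offered as an illustration rather than the specific method the description refers to:

```python
import numpy as np

def events_to_gray(events, width, height, t_start, t_end):
    """Accumulate (x, y, t, p) events inside a time window into a
    grayscale image: pixels that fired more events become brighter.
    Polarity p is ignored in this simplified sketch."""
    img = np.zeros((height, width), dtype=np.float64)
    for x, y, t, p in events:
        if t_start <= t < t_end:
            img[y, x] += 1.0
    if img.max() > 0:
        img = img / img.max() * 255.0   # normalize counts to gray values
    return img.astype(np.uint8)
```

Events outside the window are dropped, which is also a crude form of the time-window noise handling mentioned above.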
In step S14, corner detection is performed based on the feature point extraction techniques such as FAST, GFTT, SIFT, and the like. The preset camera projection model in step S15 can project the first corner detection result into the second grayscale image, and can also project the second corner detection result into the first grayscale image, so as to obtain the calibration relationship between the visual camera and the event camera 1. The transformation matrix M can be calculated from the calibration relationship in step S15.
In some embodiments, step S2 includes the following sub-steps:
and S21, analyzing the video stream with the motion blur by using a preset target detection model to acquire target object information with the motion blur.
It should be understood that the preset target detection model is obtained by pre-training a learning model, and can segment the corresponding target object information from the video stream.
In some embodiments, the target detection model can automatically segment the target objects with motion blur from the video stream with motion blur and generate target object information.
In other embodiments, the target detection model can detect and classify objects in the video stream with motion blur, for example dividing them into objects whose position changes and objects whose position does not change. After classification, the target detection model generates the target object information. When the target detection model acquires information on a plurality of target objects, the target object to be subjected to motion blur restoration may be specified through human-computer interaction, for example by clicking the screen, by voice interaction, or by keyboard selection. In some preferred embodiments, after a target object requiring motion blur restoration is specified, confirmation of the specified target object is requested through human-computer interaction.
In some embodiments, step S3 includes the following sub-steps:
s31, acquiring target event information according to the event information and the target object information;
and S32, calculating the motion trail information according to the target object mass centers of the target event information of different time stamps.
The calculation method of step S31 includes: multiplying the position P of the target object in the video stream image coordinate system by the transformation matrix M between the RGB camera 2 and the event camera 1 obtained by calibration, to obtain the coordinates (x'_l, x'_r, y'_u, y'_d) of the target object in the event image. Here x_l and x_r respectively represent the left and right boundary coordinates of the target object in the x direction in the video stream image coordinate system, and y_u and y_d respectively represent its upper and lower boundary coordinates in the y direction in the video stream image coordinate system; x'_l and x'_r represent the left and right boundary coordinates of the target object in the x direction in the event image coordinate system, and y'_u and y'_d represent its upper and lower boundary coordinates in the y direction in the event image coordinate system. Let the current time be t_c. The set of target event information may then be represented as E = {e_i = (x_i, y_i, t_i, p_i), i = 1, ..., N}, where N is the total number of event points, selected according to the following formula:

E = { (x_i, y_i, t_i, p_i) | x'_l - Δx ≤ x_i ≤ x'_r + Δx, y'_u - Δy ≤ y_i ≤ y'_d + Δy, t_c - Δt ≤ t_i ≤ t_c }

where the index i indicates the i-th event point; Δx and Δy define the extent of the target object in the x and y directions, respectively, beyond the primary boundary, are related to the size of the target object and the resolution of the event camera 1, and typically take values of 5 to 50 pixels; and Δt is the limited range of the target object on the time stamp, is related to the exposure time of the RGB camera 2, the moving speed of the object, and the like, and is usually 0.01 s to 1 s.
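The event-point selection just described (a bounding box expanded by margins Δx and Δy, plus a time window Δt ending at the current time) translates almost directly into code; the function name here is illustrative:

```python
def select_target_events(events, box, dx, dy, dt, t_now):
    """Keep the event points (x, y, t, p) that fall inside the target's
    margin-expanded bounding box in the event image and inside the time
    window [t_now - dt, t_now]."""
    x_l, x_r, y_u, y_d = box   # converted target boundaries in the event image
    return [(x, y, t, p) for (x, y, t, p) in events
            if x_l - dx <= x <= x_r + dx
            and y_u - dy <= y <= y_d + dy
            and t_now - dt <= t <= t_now]
```

Per the description, dx and dy would typically be 5 to 50 pixels and dt around 0.01 s to 1 s.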
The method of step S32 includes: calculating the centroid of the target object in the event stream using a centroid detection algorithm, and calculating the motion trajectory of the target object from the centroids at different timestamps. Preferably, when the motion trajectory of the target object is simple, for example a straight line or a parabola, the trajectory is fitted with a straight line or curve; any centroid whose distance to the fitted line or curve is greater than a preset threshold is regarded as an error point and removed, so as to avoid negative effects on the motion trajectory caused by illumination changes, camera shake, and other factors.
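A loose sketch of this centroid-and-fit step, covering only the straight-line case (helper names and the least-squares fit are illustrative choices, not the patent's stated algorithm):

```python
import numpy as np

def centroid(points):
    """Centroid of the event points attributed to the target object."""
    return np.asarray(points, dtype=float).mean(axis=0)

def drop_trajectory_outliers(centroids, threshold):
    """Fit a straight line y = a*x + b to the per-timestamp centroids and
    discard any centroid farther than `threshold` from the fitted line,
    treating it as an error point (e.g. from lighting changes or shake)."""
    pts = np.asarray(centroids, dtype=float)
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)          # least-squares line
    dist = np.abs(a * pts[:, 0] - pts[:, 1] + b) / np.hypot(a, 1.0)
    return pts[dist <= threshold]
```

A more robust variant would refit after removing outliers; the single pass above is only meant to show the distance-threshold idea.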
In some embodiments, step S4 includes the following sub-steps:
s41, acquiring the motion direction information of each frame of image of the target object information in the video stream according to the motion track information;
s42, removing the motion blur of each frame image in the video stream according to the motion direction information to generate a restored image frame;
s43, a restored video stream is generated from all the restored image frames.
Since the motion direction of the motion blur is the same as that of the target object, step S41 can calculate the motion direction information of the target object in each frame of the video stream from the motion trajectory information. Because this motion direction information represents the direction of the motion blur in each frame, step S42 removes the motion blur of each frame in the video stream according to it, generating restored image frames (refer to fig. 2). Step S43 then combines the restored image frames in order to generate the restored video stream.
In some embodiments, step S41 includes the following sub-steps:
s411, obtaining the motion direction information of the target frame according to the corresponding motion track information between the target frame and the previous frame of the target object information in the video stream, so as to obtain the motion direction information of each frame of image of the target object information in the video stream.
Step S411 extracts, from the complete motion trajectory information, the segment corresponding to the target frame and its previous frame according to the time nodes of those two frames, and obtains the motion direction information of the target frame from that segment.
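The per-frame direction computation can be sketched as follows, assuming the trajectory is stored as (t, x, y) centroid samples; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def direction_between_frames(track, t_prev, t_cur):
    """Derive the motion direction of the target frame from the segment
    of the trajectory between the previous frame and the target frame.

    track : (M, 3) array of (t, x, y) centroid samples along the full
            motion trajectory; at least two samples are assumed to fall
            inside [t_prev, t_cur]
    Returns a unit vector (dx, dy) for the motion direction.
    """
    seg = track[(track[:, 0] >= t_prev) & (track[:, 0] <= t_cur)]
    delta = seg[-1, 1:] - seg[0, 1:]  # displacement over the segment
    norm = np.linalg.norm(delta)
    return delta / norm if norm > 0 else delta
```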
In some embodiments, step S41 includes the steps of:
s411', inserting frames into the video stream according to the motion trail information to generate a compensation video stream;
s412', motion direction information of the target frame is obtained according to the motion track information corresponding to the target frame and the previous frame of the target object information in the compensation video stream, so as to obtain the motion direction information of each frame of image of the target object information in the compensation video stream.
In step S411', compensation frames are inserted between adjacent image frames of the video stream according to the motion trajectory information. A compensation frame may be obtained by interpolating between the current frame and a reference frame; the strategy for obtaining the reference frame can be chosen according to the application scenario, including block motion compensation, variable-block motion compensation, overlapped-block motion compensation and the like, and the interpolation method may be linear interpolation, bicubic interpolation and so on. It should be understood that the insertion interval of the compensation frames can be adjusted according to the moving speed of the target object or practical requirements; for example, at a video-stream frame rate of 30 fps, one compensation frame may be inserted every 2 to 4 frames. Step S412' differs from step S411 only in that it obtains the motion direction information from the compensation video stream. In this embodiment, frames are first inserted into the video stream according to the motion trajectory information to generate the compensation video stream, and the motion direction information of each frame of the target object information is then obtained from the compensation video stream. Because the motion trajectory of the target object is clearer after the compensation frames are inserted, the accuracy of the obtained per-frame motion direction information is effectively improved.
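A sketch of the frame-insertion step, using simple midpoint blending in place of the block-based motion-compensation strategies mentioned above; the function name, parameters, and blending choice are assumptions for illustration:

```python
import numpy as np

def insert_compensation_frames(frames, every=3):
    """Insert one linearly interpolated compensation frame after every
    `every` original frames (e.g. every 2-4 frames of a 30 fps stream,
    as suggested above).

    frames : list of (H, W) or (H, W, C) integer image arrays
    """
    out = []
    for i, frame in enumerate(frames):
        out.append(frame)
        last = i == len(frames) - 1
        if not last and (i + 1) % every == 0:
            # midpoint blend of the current frame and the next frame
            comp = (frames[i].astype(np.float32)
                    + frames[i + 1].astype(np.float32)) / 2
            out.append(comp.astype(frames[i].dtype))
    return out
```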
In some embodiments, step S42 includes the following sub-steps:
and S421, performing directional gray value filtering on each frame of image in the video stream according to the target object information and the corresponding motion direction information thereof to generate a restored image frame.
Taking the removal of motion blur at a single pixel as an example, the working process of step S421 is as follows. Obtain the coordinate p of the pixel in the spatial domain and its coordinate h(p) in the time domain. Let c_j denote the filter coefficient, where the coefficient at j = 0 is the filter coefficient at the current time; the coefficients may be the operator of a Gaussian, Laplacian, median or similar filter. Taking the current time as the center and k as the radius on the time axis, the set of filter coefficients from time −k to time +k is C = {c_−k, …, c_0, …, c_+k}. The result of performing directional gray-value filtering on the pixel is given by:

g(p) = Σ_{j=−k}^{+k} c_j · S(p + j·d),

wherein g(p) represents the gray value after directional gray-value filtering, S(·) represents the sampling function of the time-space domain, d represents the motion direction information of the pixel, and p + j·d represents the coordinate of the pixel in the space-time domain. The value of k usually ranges from 2 to 5, a time-axis radius that the applicant found, through a large number of experiments, to give a good filtering effect. The principle is as follows: gray-value filtering is performed on the pixel at the current time according to all of its motion direction information at the preceding k times and the following k times, thereby removing the motion blur of the pixel.
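A sketch of the directional gray-value filtering described above, under the assumption that the space-time sampling follows the pixel along its motion direction by j steps per frame; the function name, array layout, and boundary handling are illustrative:

```python
import numpy as np

def directional_filter_pixel(video, x, y, t, direction, coeffs):
    """Directional gray-value filtering of one pixel: the pixel at time t
    is re-estimated from samples taken along its motion direction at the
    k frames before and after it.

    video     : (T, H, W) gray-level video volume
    direction : (dx, dy) per-frame motion of the pixel
    coeffs    : length 2k+1 filter coefficients c_{-k}..c_{+k}
                (e.g. a normalized Gaussian; k is typically 2-5)
    """
    k = len(coeffs) // 2
    T, H, W = video.shape
    acc, wsum = 0.0, 0.0
    for j in range(-k, k + 1):
        tj = t + j
        xj = int(round(x + j * direction[0]))
        yj = int(round(y + j * direction[1]))
        # skip samples that fall outside the video volume
        if 0 <= tj < T and 0 <= xj < W and 0 <= yj < H:
            acc += coeffs[j + k] * float(video[tj, yj, xj])
            wsum += coeffs[j + k]
    return acc / wsum if wsum > 0 else float(video[t, y, x])
```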
Therefore, the video stream restoration method provided by the application restores the video stream with the motion blur through the event information, and the event information can clearly reflect the motion trail information of the target object in the video stream, so that the motion area and the non-motion area in the video stream can be accurately distinguished according to the event information, thereby effectively improving the restoration effect of the motion blur of the video stream.
In a third aspect, referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the present application provides an electronic device 4, including: the processor 401 and the memory 402, the processor 401 and the memory 402 are interconnected and communicate with each other through a communication bus 403 and/or other forms of connection mechanism (not shown), the memory 402 stores a computer program executable by the processor 401, and when the computing device runs, the processor 401 executes the computer program to perform the method in any alternative implementation of the embodiment to realize the following functions: acquiring event information and a video stream with motion blur, wherein the event information is acquired and generated by an event camera 1, and the video stream with motion blur is acquired by a visual camera; acquiring target object information with motion blur according to the video stream; acquiring motion trail information of a target object according to the event information and the target object information; and removing the motion blur corresponding to the target object information in the video stream according to the motion trail information to generate a restored video stream.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program executes the method in any optional implementation manner of the embodiments to implement the following functions: acquiring event information and a video stream with motion blur, wherein the event information is acquired and generated by an event camera 1, and the video stream with motion blur is acquired by a visual camera; acquiring target object information with motion blur according to the video stream; acquiring motion trail information of the target object according to the event information and the target object information; and removing the motion blur corresponding to the target object information in the video stream according to the motion trail information to generate a restored video stream. The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
As can be seen from the above, according to the video stream restoration method, system, electronic device and storage medium provided by the present application, a video stream with motion blur is restored through event information, and since the event information can clearly reflect the motion trajectory information of a target object in the video stream, a motion region and a non-motion region in the video stream can be accurately distinguished according to the event information, so that the restoration effect of the motion blur of the video stream is effectively improved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the above-described units is only one type of logical functional division, and other divisions may be realized in practice, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A video stream restoration method for restoring a video stream with motion blur, the video stream restoration method comprising the steps of:
acquiring event information and the video stream with the motion blur, wherein the event information is acquired and generated by an event camera, and the video stream with the motion blur is acquired by a visual camera;
acquiring target object information with motion blur according to the video stream;
acquiring motion trail information of a target object according to the event information and the target object information;
removing the motion blur corresponding to the target object information in the video stream according to the motion trail information to generate a restored video stream;
the step of removing the motion blur corresponding to the target object information in the video stream according to the motion trail information to generate a restored video stream includes:
acquiring the motion direction information of each frame of image of the target object information in the video stream according to the motion track information;
performing gray value filtering on the pixel point at the current moment according to all the motion direction information of the pixel point at the first k moments and all the motion direction information of the pixel point at the last k moments so as to remove the motion blur of each frame image in the video stream and generate a restored image frame;
and generating the restored video stream according to all the restored image frames.
2. The video stream restoration method according to claim 1, wherein the step of obtaining target object information with motion blur from the video stream comprises the sub-steps of:
and analyzing the video stream with the motion blur by using a preset target detection model to acquire target object information with the motion blur.
3. The method for restoring a video stream according to claim 1, wherein the step of obtaining the motion direction information of each frame of image of the target object information in the video stream according to the motion trail information comprises the sub-steps of:
and acquiring the motion direction information of the target frame according to the motion track information corresponding to the target object information between the target frame and the previous frame in the video stream so as to acquire the motion direction information of each frame of image of the target object information in the video stream.
4. The method for restoring a video stream according to claim 1, wherein the step of obtaining the motion direction information of each frame of image of the target object information in the video stream according to the motion trail information comprises the steps of:
interpolating the video stream according to the motion trail information to generate a compensated video stream;
and obtaining the motion direction information of the target frame according to the motion track information corresponding to the target object information between the target frame and the previous frame in the compensation video stream so as to obtain the motion direction information of each frame of image of the target object information in the compensation video stream.
5. The method for restoring a video stream according to claim 1, wherein the step of obtaining motion trajectory information of a target object based on the event information and the target object information comprises the sub-steps of:
acquiring target event information according to the event information and the target object information;
and calculating the motion trail information according to the target object mass center of the target event information with different time stamps.
6. A video stream restoration system for restoring a video stream with motion blur, the video stream restoration system comprising:
the event camera is used for acquiring and generating event information;
a vision camera for capturing a video stream with motion blur;
a controller electrically connected to the event camera and the vision camera;
the controller is used for acquiring the event information and the video stream with the motion blur which are calibrated with each other, acquiring target object information with the motion blur according to the video stream, acquiring motion trail information of a target object according to the event information and the target object information, and removing the motion blur corresponding to the target object information in the video stream according to the motion trail information to generate a restored video stream;
the process of removing the motion blur corresponding to the target object information in the video stream according to the motion trail information to generate a restored video stream includes:
acquiring the motion direction information of each frame of image of the target object information in the video stream according to the motion track information;
performing gray value filtering on the pixel point at the current moment according to all the motion direction information of the pixel point at the first k moments and all the motion direction information of the pixel point at the last k moments so as to remove the motion blur of each frame image in the video stream and generate a restored image frame;
and generating the restored video stream according to all the restored image frames.
7. An electronic device comprising a processor and a memory, the memory storing computer readable instructions which, when executed by the processor, perform the steps of the method of any one of claims 1 to 5.
8. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1-5.
CN202210390552.2A 2022-04-14 2022-04-14 Video stream restoration method, system, electronic device and storage medium Active CN114494085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210390552.2A CN114494085B (en) 2022-04-14 2022-04-14 Video stream restoration method, system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210390552.2A CN114494085B (en) 2022-04-14 2022-04-14 Video stream restoration method, system, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN114494085A CN114494085A (en) 2022-05-13
CN114494085B true CN114494085B (en) 2022-07-15

Family

ID=81488608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210390552.2A Active CN114494085B (en) 2022-04-14 2022-04-14 Video stream restoration method, system, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114494085B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708478B (en) * 2022-06-06 2022-09-02 季华实验室 Data fusion method, device, equipment and medium for event camera and standard camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991650A (en) * 2016-01-21 2017-07-28 北京三星通信技术研究有限公司 A kind of method and apparatus of image deblurring
CN112131991A (en) * 2020-09-15 2020-12-25 厦门大学 Data association method based on event camera
CN112511859A (en) * 2020-11-12 2021-03-16 Oppo广东移动通信有限公司 Video processing method, device and storage medium
CN113497889A (en) * 2020-04-08 2021-10-12 杭州萤石软件有限公司 Object tracking method and device under motion shooting condition and storage medium
CN113688741A (en) * 2021-08-26 2021-11-23 成都大学 Motion training evaluation system and method based on cooperation of event camera and visual camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11288818B2 (en) * 2019-02-19 2022-03-29 The Trustees Of The University Of Pennsylvania Methods, systems, and computer readable media for estimation of optical flow, depth, and egomotion using neural network trained using event-based learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991650A (en) * 2016-01-21 2017-07-28 北京三星通信技术研究有限公司 A kind of method and apparatus of image deblurring
CN113497889A (en) * 2020-04-08 2021-10-12 杭州萤石软件有限公司 Object tracking method and device under motion shooting condition and storage medium
CN112131991A (en) * 2020-09-15 2020-12-25 厦门大学 Data association method based on event camera
CN112511859A (en) * 2020-11-12 2021-03-16 Oppo广东移动通信有限公司 Video processing method, device and storage medium
CN113688741A (en) * 2021-08-26 2021-11-23 成都大学 Motion training evaluation system and method based on cooperation of event camera and visual camera

Also Published As

Publication number Publication date
CN114494085A (en) 2022-05-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant