CN115835035A - Image frame interpolation method, device and equipment and computer readable storage medium - Google Patents


Publication number
CN115835035A
CN115835035A (application CN202211441273.0A)
Authority
CN
China
Prior art keywords
frame
image
frame image
pixel point
moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211441273.0A
Other languages
Chinese (zh)
Inventor
邱绪东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN202211441273.0A priority Critical patent/CN115835035A/en
Publication of CN115835035A publication Critical patent/CN115835035A/en
Pending legal-status Critical Current

Abstract

The invention discloses an image frame interpolation method, device and equipment and a computer readable storage medium, and relates to the technical field of image frame interpolation. The image frame interpolation method comprises the following steps: acquiring a current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image; generating at least one frame of insertion frame image according to the current frame image and the inter-frame pixel point movement information; and sequentially outputting the current frame image and the insertion frame image to a preset display screen. The invention improves the frame rate of the output image and ensures the MR experience of the user.

Description

Image frame interpolation method, device and equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image frame interpolation technologies, and in particular, to an image frame interpolation method, an image frame interpolation device, an image frame interpolation apparatus, and a computer-readable storage medium.
Background
With the outbreak of the VR (Virtual Reality) and AR (Augmented Reality) industries in recent years, their value in fields such as industry, medical treatment, entertainment, office and social interaction has gradually emerged, and at present various manufacturers are also exploring and making layouts in the MR (Mixed Reality) field.
The main technical route for implementing MR is based on VST (Video See-Through) technology. In VST technology, a real-time view of the real world is captured by a camera, combined with a virtual view of the digital world, and presented on a display screen, so that the combined image is delivered to the eyes of the user. VST technology allows visual integration to be fully controlled, permits full occlusion between virtual and real objects, and even allows higher-level modification of real objects.
However, cameras that capture real-world scenes in VST technology typically use a CIS (CMOS Image Sensor) for imaging, and the CIS is based on charge, which must be integrated before an output is available. The sampling frequency of the CIS is therefore limited, which results in a low frame rate of the output image (usually tens to hundreds of frames per second) and affects the MR experience of the user.
Disclosure of Invention
The invention mainly aims to provide an image frame interpolation method, an image frame interpolation device, image frame interpolation equipment and a computer readable storage medium, and aims to solve the technical problem that the frame rate of an image output by a camera adopting a CIS (CMOS Image Sensor) for imaging is low.
In order to achieve the above object, in a first aspect, the present invention provides an image frame interpolation method, including:
acquiring a current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image;
generating at least one frame of insertion frame image according to the current frame image and the inter-frame pixel point movement information;
and sequentially outputting the current frame image and the insertion frame image to a preset display screen.
Based on the above technical solution, a current frame image and inter-frame pixel point movement information are obtained, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image; at least one frame of insertion frame image is generated according to the current frame image and the inter-frame pixel point movement information; and the current frame image and the insertion frame image are sequentially output to a preset display screen. Therefore, in this embodiment, the current frame image is subjected to image compensation through the inter-frame pixel point movement information to generate an insertion frame image following the current frame image, and after the current frame image and the insertion frame image are sequentially output to the preset display screen, the frame rate of the output image can be improved and the MR experience of the user can be ensured.
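The three steps above can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation: frames are represented as sparse `{(x, y): value}` dictionaries, and the moving pixels are assumed here to travel in a straight line between their positions in the current and next frames (the method itself uses the measured moving tracks, described below).

```python
def make_insert_frames(current_frame, moved, num_inserted):
    """Generate num_inserted insertion frame images from the current frame.

    current_frame: sparse image, {(x, y): pixel_value}
    moved: {(x_src, y_src): (x_dst, y_dst)} -- where each moving pixel of the
           current frame ends up in the next frame (a hypothetical encoding of
           the inter-frame pixel point movement information)
    Static pixels are copied unchanged; moving pixels are shifted linearly.
    """
    frames = []
    for k in range(1, num_inserted + 1):
        a = k / (num_inserted + 1)  # fraction of the inter-frame interval elapsed
        frame = dict(current_frame)
        for (sx, sy), (dx, dy) in moved.items():
            value = frame.pop((sx, sy))
            frame[(round(sx + (dx - sx) * a), round(sy + (dy - sy) * a))] = value
        frames.append(frame)
    return frames
```

Outputting `current_frame` followed by each frame in `frames` triples (for `num_inserted=3`, quadruples) the effective display rate without any extra sensor readout.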
According to the first aspect, the step of generating at least one frame of insertion frame image according to the current frame image and the inter-frame pixel point movement information includes: determining moving pixel points in the current frame image and the moving tracks of the moving pixel points according to the inter-frame pixel point movement information;
and moving the moving pixel points in the current frame image according to the moving track to generate at least one frame of insertion frame image.
Based on the above technical solution, the moving pixel points in the current frame image and the moving tracks of the moving pixel points are determined according to the inter-frame pixel point movement information, and the moving pixel points in the current frame image are moved according to the moving tracks to generate at least one frame of insertion frame image. Compared with generating the insertion frame image through a motion vector and a neural network model, this approach generates the insertion frame image more accurately.
According to the first aspect or any one implementation manner of the first aspect, before the step of determining a moving pixel point in the current frame image and a moving track of the moving pixel point according to the inter-frame pixel point movement information, the method includes:
acquiring the inter-frame time interval between the first shooting time of the current frame image and the second shooting time of the next frame image, and a target insertion frame number;
and determining the corresponding frame insertion times in the inter-frame time interval according to the inter-frame time interval and the target insertion frame number.
Based on the above technical solution, the inter-frame time interval between the first shooting time of the current frame image and the second shooting time of the next frame image and the target insertion frame number are acquired, and the corresponding frame insertion times in the inter-frame time interval are determined according to them, so that the frame insertion times at which frames need to be inserted within the inter-frame period between the current frame image and the next frame image can be determined.
According to the first aspect, or any implementation manner of the first aspect, time differences between adjacent frame insertion time instants are equal.
Based on the technical scheme, the time difference between the adjacent frame interpolation moments is equal, so that the time intervals of the output frame interpolation images are consistent, and the fluency of the output images can be ensured.
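With equal time differences, the frame insertion times simply divide the inter-frame interval evenly. A minimal sketch (hypothetical helper name), assuming the two shooting times and the target insertion frame number are known:

```python
def insertion_times(t_first, t_second, num_inserted):
    """Equally spaced frame-insertion times strictly between the first
    shooting time and the second shooting time; adjacent times differ by
    the same step, so the output frame intervals stay consistent."""
    step = (t_second - t_first) / (num_inserted + 1)
    return [t_first + step * (k + 1) for k in range(num_inserted)]
```

For example, inserting 4 frames into a 10 ms interval yields times at 2, 4, 6 and 8 ms after the first shot.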
According to the first aspect, or any implementation manner of the first aspect, the moving track includes a sub-moving track from the first shooting time to each of the frame insertion times, and the step of moving the moving pixel points in the current frame image according to the moving track to generate at least one frame of insertion frame image includes:
determining the target position of the moving pixel point at the frame inserting moment corresponding to the sub-moving track according to the sub-moving track;
and moving the moving pixel point to the target position to generate an insertion frame image corresponding to the frame insertion time.
Based on the above technical solution, the corresponding insertion frame image is generated through the sub-moving track from the first shooting time of the standard camera to each frame insertion time. Compared with generating the insertion frame image through a motion vector and a neural network model, the accuracy of the generated insertion frame image is higher.
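One way to read the target position off a sub-moving track: if the event camera reports the track as time-stamped coordinates, the position at a frame insertion time is the most recent reported coordinate not later than that time. This is a sketch under that assumed representation:

```python
def position_at(track, t_insert):
    """track: time-ordered list of (timestamp, x, y) samples for one moving
    pixel point, as reported by the event camera. Returns the target position
    at t_insert: the most recent sample whose timestamp is not after it."""
    x, y = track[0][1], track[0][2]
    for ts, tx, ty in track:
        if ts > t_insert:
            break
        x, y = tx, ty
    return (x, y)
```

Because the event camera samples far faster than the standard camera, the nearest event timestamp is typically very close to the frame insertion time, so no interpolation between samples is shown here.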
According to the first aspect, or any one of the foregoing implementation manners of the first aspect, before the step of acquiring the current frame image and the inter-frame pixel point movement information, the method includes:
shooting a target scene through a standard camera to obtain a current frame image;
and detecting pixel points in an imaging picture of the standard camera by using an event-driven camera to obtain inter-frame pixel point movement information.
Based on the technical scheme, after a target scene is shot through a standard camera to obtain a current frame image, an event driving camera is used for collecting inter-frame pixel point movement information in an inter-frame time interval between a first shooting time of the current frame image and a second shooting time of a next frame image, so that the movement condition of pixel points in the inter-frame time interval is determined, and a corresponding insertion frame image can be generated according to the inter-frame pixel point movement information subsequently.
According to the first aspect, or any implementation manner of the first aspect, the step of sequentially outputting the current frame image and the insertion frame image to a preset display screen includes:
outputting the current frame image to a preset display screen, and acquiring the frame insertion time of the inserted frame image;
and sequentially outputting the inserted frame images to the preset display screen according to the sequence of the frame inserting moments.
Based on the above technical solution, after the current frame image is obtained, the current frame image may be output to a preset display screen in this embodiment. And then acquiring the frame insertion time of the frame insertion images, and sequentially outputting the frame insertion images to a preset display screen according to the time sequence of the frame insertion time so as to ensure the accuracy of the image output sequence.
In a second aspect, the present invention provides an image frame interpolation apparatus, comprising:
an acquisition module, used for acquiring a current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image;
the generating module is used for generating at least one frame of insertion frame image according to the current frame image and the inter-frame pixel point movement information;
and the output module is used for sequentially outputting the current frame image and the insertion frame image to a preset display screen.
According to a second aspect, the generating module is further configured to:
determining a moving pixel point in the current frame image and a moving track of the moving pixel point according to the inter-frame pixel point moving information;
and moving the moving pixel points in the current frame image according to the moving track to generate at least one frame of insertion frame image.
According to a second aspect or any implementation manner of the second aspect above, the generating module is further configured to:
acquiring the inter-frame time interval between the first shooting time of the current frame image and the second shooting time of the next frame image, and a target insertion frame number;
and determining the corresponding frame insertion times in the inter-frame time interval according to the inter-frame time interval and the target insertion frame number.
According to a second aspect, or any implementation manner of the second aspect above, the time differences between adjacent frame insertion time instants are equal.
According to a second aspect or any implementation manner of the second aspect above, the generating module is further configured to:
determining the target position of the moving pixel point at the frame inserting moment corresponding to the sub-moving track according to the sub-moving track;
and moving the moving pixel point to the target position to generate an insertion frame image corresponding to the frame insertion time.
According to a second aspect, or any implementation manner of the second aspect above, the obtaining module is further configured to:
shooting a target scene through a standard camera to obtain a current frame image;
and detecting pixel points in an imaging picture of the standard camera by using an event-driven camera to obtain inter-frame pixel point movement information.
According to the second aspect, or any implementation manner of the second aspect above, the output module is further configured to:
outputting the current frame image to a preset display screen, and acquiring the frame insertion time of the inserted frame image;
and sequentially outputting the inserted frame images to the preset display screen according to the sequence of the frame inserting moments.
In a third aspect, the present invention provides an image frame interpolation device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program is configured to implement the steps of the image frame interpolation method as described above.
Any one implementation manner of the third aspect and the third aspect corresponds to any one implementation manner of the first aspect and the first aspect, respectively. For technical effects corresponding to any one implementation manner of the third aspect and the third aspect, reference may be made to the technical effects corresponding to any one implementation manner of the first aspect and the first aspect, and details are not repeated here.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the image interpolation method according to the first aspect or any of the possible implementations of the first aspect.
Any one implementation manner of the fourth aspect and the fourth aspect corresponds to any one implementation manner of the first aspect and the first aspect, respectively. For technical effects corresponding to any one implementation manner of the fourth aspect and the fourth aspect, reference may be made to the technical effects corresponding to any one implementation manner of the first aspect and the first aspect, and details are not repeated here.
In a fifth aspect, an embodiment of the present invention provides a computer program, where the computer program includes instructions for executing the image frame interpolation method in the first aspect and any possible implementation manner of the first aspect.
Any one implementation manner of the fifth aspect and the fifth aspect corresponds to any one implementation manner of the first aspect and the first aspect, respectively. For technical effects corresponding to any one of the implementation manners of the fifth aspect and the fifth aspect, reference may be made to the technical effects corresponding to any one of the implementation manners of the first aspect and the first aspect, and details are not repeated here.
Drawings
FIG. 1 is a schematic diagram of a video perspective technique according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an image frame interpolation device in a hardware operating environment according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a first embodiment of an image frame interpolation method according to the present invention;
FIG. 4 is a flowchart illustrating an image frame interpolation method according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image frame interpolation apparatus according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
The terms "first" and "second," and the like, in the description and in the claims of embodiments of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, a first target object and a second target object are used to distinguish different target objects, rather than to describe a particular order of the target objects.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used as examples, illustrations or descriptions. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, the meaning of "a plurality" means two or more unless otherwise specified. For example, a plurality of processing units refers to two or more processing units; the plurality of systems refers to two or more systems.
For clarity and conciseness of the following description of the embodiments, a brief introduction to an implementation of the image frame interpolation method is first given:
With the outbreak of the VR (Virtual Reality) and AR (Augmented Reality) industries in recent years, their value in fields such as industry, medical treatment, entertainment, office and social interaction has gradually emerged, and at present various manufacturers are also exploring and making layouts in the MR (Mixed Reality) field.
At present, the main technical route for implementing MR is based on VST (Video See-Through) technology. Referring to fig. 1, fig. 1 is a schematic diagram of the video see-through technology according to an embodiment of the present invention. In VST technology, a real-time view of the real world is captured by the camera 10, combined with a virtual view of the digital world, and presented on the display screen 20, so that the combined image is delivered to the eyes of the user. VST technology allows visual integration to be fully controlled, permits full occlusion between virtual and real objects, and even allows higher-level modification of real objects.
However, a camera that captures a real-world scene in VST technology generally performs imaging using a CIS (CMOS Image Sensor), and the CIS is based on charge, which must be integrated before an output is available. The sampling frequency of the CIS is also limited, resulting in a low frame rate of the output image (usually tens to hundreds of frames per second), which affects the MR experience of the user.
The invention designs an image frame interpolation method for performing image compensation on the current frame image through the pixel point movement information, thereby improving the frame rate of the output image and ensuring the MR experience of a user.
In some embodiments, a current frame image and inter-frame pixel point movement information are acquired, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image; at least one frame of insertion frame image is generated according to the current frame image and the inter-frame pixel point movement information; and the current frame image and the insertion frame image are sequentially output to a preset display screen. Therefore, in this embodiment, the current frame image is subjected to image compensation through the inter-frame pixel point movement information to generate an insertion frame image following the current frame image, and after the current frame image and the insertion frame image are sequentially output to the preset display screen, the frame rate of the output image can be improved and the MR experience of the user can be ensured.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an image frame interpolation device in a hardware operating environment according to an embodiment of the present invention.
Specifically, the image frame interpolation device may be an MR device, a PC (Personal Computer), a tablet computer, a portable computer, a server, or the like.
As shown in fig. 2, the image frame interpolation device may include: a processor 1001 (for example, a Central Processing Unit (CPU)), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 2 does not constitute a limitation of the image frame interpolation device, which may include more or fewer components than those shown, combine some components, or have a different arrangement of components.
As shown in fig. 2, the memory 1005, which is a storage medium, may include therein an operating system, a data storage module, a network communication module, a user interface module, and a computer program.
In the image frame interpolation device shown in fig. 2, the network interface 1004 is mainly used for data communication with other devices; the user interface 1003 is mainly used for data interaction with a user; and the processor 1001 calls the computer program stored in the memory 1005 to execute the image frame interpolation method provided by the embodiments of the present invention.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
The following describes the image frame interpolation method in detail with reference to the flowchart of the first embodiment of the image frame interpolation method shown in fig. 3.
Referring to fig. 3, the image frame interpolation method provided in a first embodiment of the present invention includes:
step S100, acquiring the motion information of pixel points between a current frame image and an inter frame, wherein the motion information of the pixel points between the current frame image and a next frame image is the motion information of the pixel points between the current frame image and the next frame image;
in this embodiment, it can be understood that the current frame Image is an Image obtained by shooting a target scene with a standard camera, where the standard camera is a camera provided with a conventional Image Sensor such as a CI S (CMOS Image Sensor). The pixel point movement information at least includes movement information such as a coordinate position of a pixel point in an imaging picture of the standard camera and a time point corresponding to the coordinate position in an inter-frame time period from when the current frame image is obtained by shooting to when the next frame image is obtained by shooting, which is acquired by an Event-based vision Sensor. It is understood that the event-driven camera with EVS may be used to collect the pixel point movement information.
For example, a standard camera using the CIS and an event-driven camera using the EVS may be provided in the MR device, so that when the standard camera photographs a target scene, the inter-frame pixel point movement information between adjacent frames may be acquired by the event-driven camera.
Before the step S100 of acquiring the current frame image and the inter-frame pixel point movement information, the method includes:
step S110, shooting a target scene through a standard camera to obtain a current frame image;
and step S120, detecting pixel points in an imaging picture of the standard camera through the event-driven camera to obtain inter-frame pixel point movement information.
Specifically, the target scene is a scene that a user desires to shoot. In this embodiment, a standard camera is used to shoot the target scene, so that a current frame image corresponding to the target scene can be obtained. The EVS is based on the PD (Photo-Diode) current and monitors whether the current signal of each PD changes. When the change exceeds a given threshold, a 2-bit signal is output: if the current signal becomes stronger, 01 is output; if the current signal becomes weaker, 10 is output; and if the change does not exceed the threshold, 00 is output. Meanwhile, all pixels perform this analog-to-digital conversion at the same time, which is a parallel process. Because the conversion is simple, the 2-bit conversion is very fast, so that the conversion and output of the whole pixel array are performed at very high speed, and the sampling frequency of the event-driven camera is far higher than that of a standard camera. Therefore, after the current frame image is obtained by shooting with the standard camera, the pixel points in the imaging picture of the standard camera can be detected by the event-driven camera, and the inter-frame pixel point movement information in the inter-frame period from when the standard camera shoots the current frame image to when it shoots the next frame image can be acquired.
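The 2-bit output rule described above can be written out directly. The threshold value and the software formulation are illustrative only; a real EVS performs this comparison in analog circuitry, per pixel and in parallel:

```python
def evs_output(prev_current, new_current, threshold):
    """2-bit event code for one photodiode (PD):
    01 -> current signal became stronger by more than the threshold
    10 -> current signal became weaker by more than the threshold
    00 -> change did not exceed the threshold."""
    delta = new_current - prev_current
    if delta > threshold:
        return 0b01
    if delta < -threshold:
        return 0b10
    return 0b00

def evs_array(prev, new, threshold):
    # All pixels are converted simultaneously in hardware; mapping over
    # the array here stands in for that parallel process.
    return [evs_output(p, n, threshold) for p, n in zip(prev, new)]
```

Because each pixel emits at most 2 bits and needs no charge integration, this readout can run orders of magnitude faster than a CIS frame readout, which is why the event stream can time-resolve motion within one inter-frame period.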
In the embodiment, after a target scene is shot by a standard camera to obtain a current frame image, an event-driven camera is used for acquiring inter-frame pixel point movement information in an inter-frame time interval between a first shooting time of the current frame image and a second shooting time of a next frame image, so that the movement condition of pixel points in the inter-frame time interval is determined, and a corresponding insertion frame image can be generated subsequently according to the inter-frame pixel point movement information.
Step S200, generating at least one frame of insertion frame image according to the current frame image and the inter-frame pixel point movement information;
after obtaining the current frame image and the inter-frame pixel point movement information, it may be determined, according to the inter-frame pixel point movement information, that a moving pixel point is generated after the first shooting time of the current frame image and a moving track of the moving pixel point in an inter-frame time period between the first shooting time of the current frame image and the second shooting time of the next frame image. And then, the corresponding moving pixel points in the current frame image can be controlled to move according to the moving track, and at least one frame of insertion frame image is generated and used for being inserted between the current frame image and the next frame image so as to improve the frame rate of the output image.
Step S200 of generating at least one frame of insertion frame image according to the current frame image and the inter-frame pixel point movement information includes:
step S210, determining a moving pixel point in the current frame image and a moving track of the moving pixel point according to the inter-frame pixel point moving information;
step S220, moving the moving pixel points in the current frame image according to the moving track, and generating at least one frame of insertion frame image.
It can be understood that the EVS detects the change of each pixel in the imaging picture in an asynchronous manner, so that the moving pixel points generated in the current frame image within the inter-frame period between the first shooting time of the current frame image and the second shooting time of the next frame image can be determined according to the inter-frame pixel point movement information, and the coordinate positions of the moving pixel points and the corresponding time points within the inter-frame period can be determined, thereby generating the moving tracks of the moving pixel points. At least one frame of insertion frame image is then generated by moving the moving pixel points in the current frame image according to the moving tracks. For example, if one frame of insertion frame image needs to be inserted between the current frame image and the next frame image, the coordinate position corresponding to a certain time point of a moving pixel point within the inter-frame period may be selected according to the moving track, and the moving pixel point in the current frame image is moved to this coordinate position, so that one frame of insertion frame image can be generated.
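The single-frame example in the paragraph above, selecting a coordinate position from the moving track and moving the pixel there, can be sketched as follows, with frames again represented as sparse `{(x, y): value}` dictionaries (an assumed representation, not the patent's):

```python
def insert_frame_at(current_frame, moves):
    """moves: list of ((x_src, y_src), (x_dst, y_dst)) pairs, one per moving
    pixel point, where the destination is the coordinate position read from
    the pixel's moving track at the chosen frame insertion time.
    Pixels not listed in moves are treated as static and copied unchanged."""
    frame = dict(current_frame)
    for src, dst in moves:
        value = frame.pop(src, None)
        if value is not None:
            frame[dst] = value
    return frame
```

Note this sketch ignores occlusion and hole filling: when a pixel moves away, its old position is simply left empty rather than reconstructed.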
In this embodiment, the moving pixel points in the current frame image and the moving tracks of the moving pixel points are determined according to the inter-frame pixel point movement information, and the moving pixel points in the current frame image are moved according to the moving tracks to generate at least one frame of insertion frame image. Compared with generating the insertion frame image through a motion vector and a neural network model, this approach generates the insertion frame image more accurately.
And step S300, sequentially outputting the current frame image and the insertion frame image to a preset display screen.
After the current frame image and the insertion frame images are obtained, they are output to the preset display screen in the order of their corresponding moments, thereby increasing the image frame rate output to the preset display screen.
Further, the step S300 of sequentially outputting the current frame image and the insertion frame image to a preset display screen includes:
step S310, outputting the current frame image to a preset display screen, and acquiring the frame insertion time of the inserted frame image;
and step S320, sequentially outputting the frame insertion images to the preset display screen according to the sequence of the frame insertion time.
After the current frame image is obtained, it may be output to the preset display screen. The frame insertion times of the insertion frame images are then acquired, and the insertion frame images are output to the preset display screen in chronological order of those frame insertion times, which ensures that the images are output in the correct order.
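The ordering step can be sketched in a few lines. The function name and the (timestamp, frame) pair representation are assumptions for illustration; in a real pipeline each frame would be pushed to the display driver rather than collected into a list.

```python
def output_sequence(current_frame, inserted_frames):
    """Sketch: emit the current frame first, then the inserted frames in
    ascending order of their frame insertion times.

    inserted_frames: list of (t_insert, frame) pairs, in any order.
    """
    ordered_inserts = sorted(inserted_frames, key=lambda pair: pair[0])
    return [current_frame] + [frame for _, frame in ordered_inserts]
```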
In the first embodiment of the invention, the current frame image and the inter-frame pixel point movement information are acquired, where the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image; at least one frame of insertion frame image is generated from the current frame image and the inter-frame pixel point movement information; and the current frame image and the insertion frame image are sequentially output to a preset display screen. In other words, the current frame image is compensated using the inter-frame pixel point movement information to generate the insertion frame images that follow it. Sequentially outputting the current frame image and the insertion frame images to the preset display screen increases the frame rate of the output image and preserves the user's MR experience.
The following describes the image frame interpolation method in detail with reference to a flowchart of a second embodiment of the image frame interpolation method shown in fig. 4.
Referring to fig. 4, in another embodiment of the image frame interpolation method of the present invention, before the step of determining the moving pixel points in the current frame image and their movement tracks according to the inter-frame pixel point movement information, the method includes:
step A10, acquiring an inter-frame time interval and a target insertion frame number between a first shooting time of the current frame image and a second shooting time of the next frame image;
and A20, determining the corresponding frame inserting time in the interframe period according to the interframe period and the target inserting frame number.
Specifically, the target insertion frame number is the number of frames to be inserted between two adjacent images. It may be an insertion frame number entered by the user, or it may be calculated from a preset output frame rate. For example, assume the image output frame rate of the standard camera is 60 frames/second (i.e., 60 images are output per second) and the preset output frame rate is 240 frames/second; then (240-60)/60 = 3, that is, 3 target insertion frames are to be inserted between each pair of adjacent frames. The inter-frame period is the time difference between the first shooting time of the current frame image and the second shooting time of the next frame image, and can be obtained from the image output frame rate of the standard camera. For example, with an image output frame rate of 60 frames/second, the inter-frame period is 1/60 s ≈ 16.6 ms. The frame insertion times within the inter-frame period can therefore be determined from the inter-frame period and the target insertion frame number. The time differences between adjacent frame insertion times may be equal or unequal, and the frame insertion times may of course also be set by the user as required.
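The worked example above (60 fps camera, 240 fps preset output, evenly spaced insertion times) can be written out directly. The function name is hypothetical; only the equal-spacing case is shown.

```python
def insertion_times(camera_fps, target_fps):
    """Sketch: number of frames to insert between two adjacent captures,
    the inter-frame period in ms, and evenly spaced insertion times."""
    n_insert = (target_fps - camera_fps) // camera_fps   # e.g. (240-60)/60 = 3
    period_ms = 1000.0 / camera_fps                      # e.g. 1/60 s ~= 16.6 ms
    step = period_ms / (n_insert + 1)                    # equal spacing within the period
    times = [round(k * step, 3) for k in range(1, n_insert + 1)]
    return n_insert, period_ms, times
```

For 60 and 240 fps this yields 3 inserted frames at roughly 4.17 ms, 8.33 ms, and 12.5 ms into the 16.6 ms inter-frame period.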
In this embodiment, the inter-frame period between the first shooting time of the current frame image and the second shooting time of the next frame image is acquired together with the target insertion frame number, and the frame insertion times within the inter-frame period are determined from them. The moments at which frames need to be inserted between the current frame image and the next frame image can thus be determined.
Wherein the time difference between adjacent frame interpolation time instants is equal.
In this embodiment, the time difference between the adjacent frame interpolation moments is made equal, so that the time intervals of the output interpolated frame images are consistent, and the fluency of the output image can be ensured.
The step S220 of moving the moving pixel point in the current frame image according to the moving trajectory to generate at least one frame of insertion frame image includes:
step B10, determining the target position of the moving pixel point at the frame inserting moment corresponding to the sub-moving track according to the sub-moving track;
and step B20, moving the moving pixel point to the target position, and generating an insertion frame image corresponding to the frame insertion time.
Specifically, the movement track includes a sub movement track from the first shooting time to each frame insertion time. In this embodiment, the target position of a moving pixel point at the frame insertion time corresponding to a sub movement track is determined from that sub movement track, where the target position is the position of the moving pixel point on the sub movement track at that frame insertion time. The insertion frame image corresponding to that frame insertion time is then generated by moving the moving pixel point to the target position.
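One way to read a target position off a sub movement track is to interpolate between the two event samples that bracket the insertion time. The linear interpolation and the clamping behavior past the last sample are assumptions introduced here; the patent only requires that the track yield a position at the insertion time.

```python
def target_position(sub_trajectory, t_insert):
    """Sketch: position of a moving pixel at the frame insertion time,
    linearly interpolated between the two bracketing track samples.

    sub_trajectory: [(t, (x, y)), ...] sorted by t. Hypothetical format.
    """
    for (t0, (x0, y0)), (t1, (x1, y1)) in zip(sub_trajectory, sub_trajectory[1:]):
        if t0 <= t_insert <= t1:
            a = (t_insert - t0) / (t1 - t0)          # fraction of the segment elapsed
            return (round(x0 + a * (x1 - x0)), round(y0 + a * (y1 - y0)))
    return sub_trajectory[-1][1]  # past the last sample: clamp to it
```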
In this embodiment, a corresponding insertion frame image is generated through a sub-movement trajectory from the first shooting time of the standard camera to each of the frame insertion times, and the accuracy of generating the insertion frame image is higher in this embodiment compared with a mode of generating the insertion frame image through a motion vector and a neural network model.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image frame interpolation apparatus according to an embodiment of the present invention.
Referring to fig. 5, the present invention provides an image interpolation apparatus, including:
an obtaining module 10, configured to acquire a current frame image and inter-frame pixel point movement information, where the inter-frame pixel point movement information is pixel point movement information between the current frame image and the next frame image;
a generating module 20, configured to generate an insertion frame image of at least one frame according to the current frame image and the inter-frame pixel point movement information;
and the output module 30 is configured to sequentially output the current frame image and the inserted frame image to a preset display screen.
Optionally, the generating module 20 is further configured to:
determining a moving pixel point in the current frame image and a moving track of the moving pixel point according to the inter-frame pixel point moving information;
and moving the moving pixel points in the current frame image according to the moving track to generate at least one frame of insertion frame image.
Optionally, the generating module 20 is further configured to:
acquiring an inter-frame time interval and a target insertion frame number between the first shooting time of the current frame image and the second shooting time of the next frame image;
and determining the corresponding frame inserting time in the interframe time interval according to the interframe time interval and the target inserting frame number.
Optionally, the time differences between adjacent frame insertion times are equal.
Optionally, the generating module 20 is further configured to:
determining the target position of the moving pixel point at the frame inserting moment corresponding to the sub-moving track according to the sub-moving track;
and moving the moving pixel point to the target position to generate an insertion frame image corresponding to the frame insertion time.
Optionally, the obtaining module 10 is further configured to:
shooting a target scene through a standard camera to obtain a current frame image;
and detecting pixel points in an imaging picture of the standard camera by using an event-driven camera to obtain inter-frame pixel point movement information.
Optionally, the output module 30 is further configured to:
outputting the current frame image to a preset display screen, and acquiring the frame insertion time of the inserted frame image;
and sequentially outputting the inserted frame images to the preset display screen according to the sequence of the frame inserting moments.
It can be understood that, the image frame interpolation apparatus implements the operations in the image frame interpolation method provided in the foregoing embodiment, and the specific implementation steps may refer to the description of the foregoing embodiment, which is not repeated herein.
In addition, an embodiment of the present invention further provides a computer storage medium, where a computer program is stored on the computer storage medium, and when the computer program is executed by a processor, the computer program implements the operations in the image frame interpolation method provided in the foregoing embodiment, and specific steps are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An image interpolation method, comprising:
acquiring current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is pixel point movement information between the current frame image and the next frame image;
generating at least one frame of insertion frame image according to the current frame image and the inter-frame pixel point movement information;
and sequentially outputting the current frame image and the insertion frame image to a preset display screen.
2. The image interpolation method of claim 1, wherein the step of generating the interpolated frame image of at least one frame based on the current frame image and the inter-frame pixel movement information comprises:
determining a moving pixel point in the current frame image and a moving track of the moving pixel point according to the inter-frame pixel point moving information;
and moving the moving pixel points in the current frame image according to the moving track to generate at least one frame of insertion frame image.
3. The image interpolation method according to claim 2, wherein before the step of determining the moving pixels in the current frame image and the moving tracks of the moving pixels according to the inter-frame pixel movement information, the method comprises:
acquiring an inter-frame time interval and a target insertion frame number between the first shooting time of the current frame image and the second shooting time of the next frame image;
and determining the corresponding frame inserting time in the interframe time interval according to the interframe time interval and the target inserting frame number.
4. The image interpolation method of claim 3, wherein time differences between adjacent ones of the interpolation time instants are equal.
5. The image interpolation method according to claim 3, wherein the motion trajectory includes a sub-motion trajectory from the first capturing time to each of the interpolation times, and the step of generating at least one frame of interpolated frame image by moving the moving pixel in the current frame image according to the motion trajectory includes:
determining the target position of the moving pixel point at the frame inserting moment corresponding to the sub-moving track according to the sub-moving track;
and moving the moving pixel point to the target position to generate an insertion frame image corresponding to the frame insertion time.
6. The image interpolation method of claim 1, wherein before the step of acquiring the current frame image and the inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image, the method comprises:
shooting a target scene through a standard camera to obtain a current frame image;
and detecting pixel points in an imaging picture of the standard camera by using an event-driven camera to obtain inter-frame pixel point movement information.
7. The image interpolation method according to any one of claims 1 to 6, wherein the step of sequentially outputting the current frame image and the interpolated frame image to a preset display screen comprises:
outputting the current frame image to a preset display screen, and acquiring the frame insertion time of the inserted frame image;
and sequentially outputting the inserted frame images to the preset display screen according to the sequence of the frame inserting moments.
8. An image interpolation apparatus, characterized in that the image interpolation apparatus comprises:
the acquisition module is configured to acquire a current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image;
the generating module is used for generating at least one frame of insertion frame image according to the current frame image and the inter-frame pixel point movement information;
and the output module is used for sequentially outputting the current frame image and the insertion frame image to a preset display screen.
9. An image interpolation apparatus characterized by comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the image interpolation method according to any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the image interpolation method according to one of claims 1 to 7.
CN202211441273.0A 2022-11-17 2022-11-17 Image frame interpolation method, device and equipment and computer readable storage medium Pending CN115835035A (en)

Publications (1)

Publication Number Publication Date
CN115835035A true CN115835035A (en) 2023-03-21



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination