WO2024104439A1 - Image frame interpolation method and apparatus, device, and computer readable storage medium - Google Patents


Info

Publication number
WO2024104439A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame image
image
frame
inter
movement information
Prior art date
Application number
PCT/CN2023/132099
Other languages
French (fr)
Chinese (zh)
Inventor
邱绪东 (Qiu Xudong)
Original Assignee
Goertek Technology Co., Ltd. (歌尔科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Technology Co., Ltd. (歌尔科技有限公司)
Publication of WO2024104439A1 publication Critical patent/WO2024104439A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • the present application relates to the field of image interpolation technology, and in particular to an image interpolation method, device, equipment and computer-readable storage medium.
  • in VST (Video See-Through) technology, a real-time view of the real world is captured by a camera, then combined with a virtual view of the digital world and presented on a display screen to send the combined image to the user's eyes.
  • with VST technology, visual integration can be fully controlled, allowing complete occlusion between virtual and real objects, and even higher-level modifications to real objects.
  • the main purpose of the present application is to provide an image interpolation method, device, equipment and computer-readable storage medium, aiming to solve the technical problem of low image frame rate output by a camera using CIS for imaging.
  • the present application provides an image interpolation method, the image interpolation method comprising:
  • inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image
  • the current frame image and the inserted frame image are outputted sequentially to a preset display screen.
  • this embodiment performs image compensation on the current frame image through the inter-frame pixel point movement information, generates an inserted frame image after the current frame image, and after outputting the current frame image and the inserted frame image to the preset display screen in sequence, the frame rate of the output image can be improved, thereby ensuring the user's MR experience.
  • the moving pixels in the current frame image and the moving trajectory of the moving pixels are determined according to the inter-frame pixel movement information
  • the moving pixels in the current frame image are moved according to the moving trajectory to generate at least one inserted frame image.
  • based on the above technical solution, this embodiment determines the moving pixels in the current frame image and the moving trajectory of the moving pixels according to the inter-frame pixel movement information, and moves the moving pixels in the current frame image according to the moving trajectory to generate at least one inserted frame image.
  • in this way, this embodiment generates an inserted frame image with higher accuracy.
  • in some embodiments, before the step of determining the moving pixels in the current frame image and the moving trajectory of the moving pixels according to the inter-frame pixel movement information, the method includes:
  • a corresponding frame insertion moment in the inter-frame period is determined.
  • the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image, and the target number of inserted frames, are obtained; according to the inter-frame period and the target number of inserted frames, the corresponding frame insertion moments in the inter-frame period are determined.
  • in this way, the frame insertion moments at which frames need to be inserted in the inter-frame period between the current frame image and the next frame image can be determined.
  • the time differences between adjacent frame insertion moments are equal.
  • the movement trajectory includes a sub-movement trajectory from the first shooting moment to each of the interpolation frame moments
  • the step of moving the moving pixel points in the current frame image according to the movement trajectory to generate at least one interpolation frame image includes:
  • the moving pixel point is moved to the target position to generate an insertion frame image corresponding to the insertion frame moment.
  • the corresponding inserted frame image is generated through the sub-movement trajectory from the first shooting moment of the standard camera to each of the inserted frame moments.
  • the accuracy of generating the inserted frame image in this embodiment is higher.
  • in some embodiments, before the step of obtaining the current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image, the method includes:
  • the target scene is photographed by a standard camera to obtain the current frame image
  • the event-driven camera is used to detect pixel points in the imaging picture of the standard camera to obtain the pixel point movement information between frames.
  • the inter-frame pixel movement information in the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image is collected by an event-driven camera, thereby determining the movement of the pixels in the inter-frame period, so that the corresponding inserted frame image can be generated subsequently according to the inter-frame pixel movement information.
  • the step of outputting the current frame image and the inserted frame image sequentially to a preset display screen includes:
  • the inserted frame images are output to the preset display screen in sequence.
  • this embodiment can output the current frame image to a preset display screen. Then, the insertion frame time of the inserted frame image is obtained, and according to the time sequence of the insertion frame time, the inserted frame images are sequentially output to the preset display screen to ensure the accuracy of the image output sequence.
  • the present application provides an image interpolation device, the image interpolation device comprising:
  • An acquisition module used to acquire the current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image;
  • a generating module used for generating at least one inserted frame image according to the current frame image and the inter-frame pixel point movement information
  • the output module is used to output the current frame image and the inserted frame image to a preset display screen in sequence.
  • the generating module is further used for:
  • the moving pixels in the current frame image are moved according to the moving trajectory to generate at least one inserted frame image.
  • the generating module is further used to:
  • a corresponding inter-frame time point in the inter-frame period is determined.
  • the time differences between adjacent interpolation moments are equal.
  • the generating module is further used to:
  • the moving pixel point is moved to the target position to generate an insertion frame image corresponding to the insertion frame moment.
  • the acquisition module is further used to:
  • the target scene is photographed by a standard camera to obtain a current frame image
  • the event-driven camera is used to detect pixel points in the imaging picture of the standard camera to obtain the pixel point movement information between frames.
  • the output module is further used to:
  • the inserted frame images are output to the preset display screen in sequence.
  • the present application provides an image interpolation device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program is configured to implement the steps of the image interpolation method as described above.
  • the third aspect and any implementation of the third aspect correspond to the first aspect and any implementation of the first aspect, respectively.
  • the technical effects corresponding to the third aspect and any implementation of the third aspect can refer to the technical effects corresponding to the first aspect and any implementation of the first aspect, which will not be repeated here.
  • the present application provides a computer-readable storage medium, in which a computer program is stored.
  • the processor executes an image interpolation method as described in any one of the above-mentioned first aspect or possible implementations of the first aspect.
  • the fourth aspect and any implementation of the fourth aspect correspond to the first aspect and any implementation of the first aspect, respectively.
  • the technical effects corresponding to the fourth aspect and any implementation of the fourth aspect can refer to the technical effects corresponding to the above-mentioned first aspect and any implementation of the first aspect, which will not be repeated here.
  • an embodiment of the present application provides a computer program, which includes instructions for executing the image interpolation method in the first aspect and any possible implementation of the first aspect.
  • the fifth aspect and any implementation of the fifth aspect correspond to the first aspect and any implementation of the first aspect, respectively.
  • the technical effects corresponding to the fifth aspect and any implementation of the fifth aspect can refer to the technical effects corresponding to the first aspect and any implementation of the first aspect, which will not be repeated here.
  • FIG. 1 is a schematic diagram of the video see-through technology involved in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the structure of an image interpolation device in a hardware operating environment according to an embodiment of the present application.
  • FIG. 3 is a schematic flow chart of a first embodiment of the image interpolation method of the present application.
  • FIG. 4 is a schematic flow chart of a second embodiment of the image interpolation method of the present application.
  • FIG. 5 is a schematic diagram of the structure of an image interpolation device according to an embodiment of the present application.
  • a and/or B in this article is merely a description of the association relationship of associated objects, indicating that three relationships may exist.
  • a and/or B can mean: A exists alone, A and B exist at the same time, and B exists alone.
  • first and second in the description and claims of the embodiments of the present application are used to distinguish different objects rather than to describe a specific order of objects.
  • a first target object and a second target object are used to distinguish different target objects rather than to describe a specific order of target objects.
  • words such as “exemplary” or “for example” are used to indicate examples, illustrations or descriptions. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as “exemplary” or “for example” is intended to present related concepts in a specific way.
  • multiple refers to two or more than two.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • FIG. 1 is a schematic diagram of the video see-through technology involved in the embodiment of the present application.
  • in VST technology, a real-time view of the real world is captured by a camera 10, then combined with a virtual view of the digital world and presented on a display screen 20 to send the combined image to the user's eyes.
  • with VST technology, visual integration can be fully controlled, allowing complete occlusion between virtual and real objects, and even higher-level modifications to real objects.
  • however, the camera that captures the real-world scene typically uses a CIS (CMOS Image Sensor) for imaging; because a CIS is charge-based and must integrate charge before producing an output, its sampling frequency is limited and the output image frame rate is low.
  • for this reason, the present application designs an image interpolation method for performing image compensation on the current frame image through pixel movement information, thereby improving the frame rate of the output image and ensuring the user's MR experience.
  • the image interpolation method includes: obtaining the current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image; generating at least one inserted frame image according to the current frame image and the inter-frame pixel point movement information; and outputting the current frame image and the inserted frame image to a preset display screen in sequence.
  • image compensation is performed on the current frame image through the inter-frame pixel point movement information, and an inserted frame image after the current frame image is generated. After the current frame image and the inserted frame image are output to the preset display screen in sequence, the frame rate of the output image can be increased, thereby ensuring the user's MR experience.
  • FIG. 2 is a schematic diagram of the structure of an image interpolation device in the hardware operating environment involved in the embodiment of the present application.
  • the image interpolation device can be an MR device, a PC (Personal Computer), a tablet computer, a portable computer or a server.
  • the image interpolation device may include: a processor 1001, such as a central processing unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to realize the connection and communication between these components.
  • the user interface 1003 may include a display screen (Display), an input unit such as a keyboard (Keyboard), and the user interface 1003 may optionally include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a wireless fidelity (WIreless-FIdelity, WI-FI) interface).
  • the memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) memory, or a stable non-volatile memory (Non-Volatile Memory, NVM), such as a disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • the structure shown in FIG. 2 does not constitute a limitation on the image interpolation device, which may include more or fewer components than shown in the figure, or a combination of certain components, or a different arrangement of components.
  • the memory 1005 as a storage medium may include an operating system, a data storage module, a network communication module, a user interface module, and a computer program.
  • the network interface 1004 is mainly used for data communication with other devices;
  • the user interface 1003 is mainly used for data interaction with the user;
  • the processor 1001 and the memory 1005 in the image interpolation device of the present application can be set in the image interpolation device, and the image interpolation device calls the computer program stored in the memory 1005 through the processor 1001, and executes the image interpolation method provided in the embodiment of the present application.
  • the image interpolation method includes:
  • Step S100 obtaining the current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image;
  • the current frame image is an image obtained by photographing the target scene with a standard camera, wherein the standard camera is a camera equipped with a conventional image sensor such as CIS (CMOS Image Sensor).
  • the pixel point movement information at least includes the coordinate position of the pixel point in the imaging picture of the standard camera and the corresponding time point of the coordinate position, etc., collected by EVS (Event-based Vision Sensor) during the inter-frame period from the current frame image to the next frame image.
  • a standard camera using CIS and an event-driven camera using EVS may be provided in the MR device, so that when the standard camera shoots a target scene, the event-driven camera may collect inter-frame pixel movement information between adjacent frames.
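As a minimal sketch of the data involved (structure and names are illustrative assumptions, not specified by the application), the inter-frame pixel movement information collected by the EVS can be modeled as a list of timestamped pixel events:

```python
from dataclasses import dataclass

@dataclass
class PixelEvent:
    """One EVS event: a pixel whose signal changed (hypothetical structure)."""
    x: int         # coordinate position in the standard camera's imaging picture
    y: int
    t: float       # time point of the event, in seconds
    polarity: int  # +1 if the current signal became stronger, -1 if weaker

def events_in_period(events, t_start, t_end):
    """Keep only the events collected between two shooting moments."""
    return [e for e in events if t_start <= e.t < t_end]

# Example: three events; only those inside one 60 fps inter-frame period are kept.
events = [PixelEvent(10, 5, 0.004, +1),
          PixelEvent(10, 6, 0.012, +1),
          PixelEvent(3, 3, 0.020, -1)]
print(len(events_in_period(events, 0.0, 1 / 60)))  # 2
```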
  • Step S110 photographing the target scene with a standard camera to obtain a current frame image
  • Step S120 Detecting pixels in the imaging picture of the standard camera by using an event-driven camera to obtain inter-frame pixel movement information.
  • the target scene is the scene that the user expects to shoot.
  • the target scene is photographed by a standard camera, and the current frame image corresponding to the target scene can be obtained.
  • EVS imaging is based on the PD (Photo-Diode, photodiode) current.
  • the current signal of each pixel's PD is monitored to see whether it has changed.
  • if the change exceeds a given threshold (taking a 2-bit signal output as an example), 01 can be output when the current signal becomes stronger, and 10 can be output when the current signal becomes weaker; if the current signal does not change beyond the threshold, the output is 00.
  • this comparison is performed for all pixels simultaneously, as a parallel process.
  • because this 2-bit conversion is very fast, the entire pixel array can be converted and output at very high speed, so the sampling frequency of the event-driven camera is much higher than that of the standard camera. Therefore, after the current frame image captured by the standard camera is obtained, the pixel points in the imaging picture of the standard camera can be detected by the event-driven camera, and the inter-frame pixel point movement information in the inter-frame period from the capture of the current frame image to the capture of the next frame image can be collected.
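The 2-bit output rule described above can be sketched per pixel as follows (the threshold value and function name are illustrative assumptions; in the sensor this comparison runs for all pixels in parallel):

```python
def evs_output(prev_current: float, new_current: float, threshold: float = 0.1) -> str:
    """Per-pixel 2-bit event code: 01 = signal became stronger,
    10 = signal became weaker, 00 = change did not exceed the threshold."""
    delta = new_current - prev_current
    if delta > threshold:
        return "01"   # current signal became stronger
    if delta < -threshold:
        return "10"   # current signal became weaker
    return "00"       # no significant change

print(evs_output(1.0, 1.5))   # 01
print(evs_output(1.0, 0.4))   # 10
print(evs_output(1.0, 1.05))  # 00
```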
  • the event-driven camera collects inter-frame pixel movement information in the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image, thereby determining the movement of the pixels in the inter-frame period, so that the corresponding inserted frame image can be generated subsequently according to the inter-frame pixel movement information.
  • Step S200 generating at least one inserted frame image according to the current frame image and the inter-frame pixel movement information
  • the moving pixel points that have moved after the first shooting moment of the current frame image and the movement trajectory of the moving pixel points in the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image can be determined according to the inter-frame pixel point movement information. Then, the corresponding moving pixel points in the current frame image can be controlled to move according to the movement trajectory to generate at least one frame of inserted frame image for insertion between the current frame image and the next frame image to improve the frame rate of the output image.
  • step S200 generates at least one inserted frame image according to the current frame image and the inter-frame pixel movement information, including:
  • Step S210 determining the moving pixels in the current frame image and the moving trajectory of the moving pixels according to the inter-frame pixel movement information
  • Step S220 moving pixels in the current frame image according to the moving trajectory to generate at least one inserted frame image.
  • EVS detects the change of each pixel in the imaging picture asynchronously. Therefore, according to the inter-frame pixel movement information, the moving pixel points that move in the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image can be determined, together with the coordinate positions and corresponding time points of the moving pixel points in the inter-frame period, thereby generating the movement trajectory of the moving pixel points. Then, at least one inserted frame image can be generated by moving the moving pixel points in the current frame image according to the movement trajectory.
  • the coordinate position corresponding to a certain time point of the moving pixel point in the inter-frame period can be selected according to the moving trajectory, and the moving pixel point in the current frame image can be moved to the coordinate position, so that an inserted frame image can be generated.
  • the moving pixels in the current frame image and the moving trajectory of the moving pixels are determined; and according to the moving trajectory, the moving pixels in the current frame image are moved to generate at least one inserted frame image.
  • the accuracy of generating an inserted frame image in this embodiment is higher.
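Steps S210/S220 can be sketched as follows. The trajectory representation and the policy of zero-filling the vacated position are assumptions for illustration; the application only requires that moving pixels be relocated according to their movement trajectory:

```python
import numpy as np

def make_inserted_frame(current_frame, trajectory, t_insert):
    """Copy the current frame and move each moving pixel to the position its
    movement trajectory gives at the chosen frame insertion moment."""
    frame = current_frame.copy()
    for (x0, y0), positions in trajectory.items():
        # positions: {time point: (x, y)} sampled from the EVS movement information
        x1, y1 = positions[t_insert]
        frame[y1, x1] = current_frame[y0, x0]  # move the pixel to its new position
        frame[y0, x0] = 0                      # vacated position (fill policy is a design choice)
    return frame

# One moving pixel at (1, 1) that the trajectory places at (2, 1) at t = 8 ms.
current = np.zeros((4, 4), dtype=int)
current[1, 1] = 9
traj = {(1, 1): {0.008: (2, 1)}}
inserted = make_inserted_frame(current, traj, 0.008)
print(inserted[1, 2], inserted[1, 1])  # 9 0
```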
  • Step S300 output the current frame image and the inserted frame image to a preset display screen in sequence.
  • the current frame image and the inserted frame image are output to a preset display screen in sequence according to the corresponding moments of the current frame image and the inserted frame image, so as to increase the image frame rate output to the preset display screen.
  • step S300 of outputting the current frame image and the inserted frame image to a preset display screen in sequence includes:
  • Step S310 outputting the current frame image to a preset display screen, and obtaining the insertion frame time of the inserted frame image;
  • Step S320 output the inserted frame images in sequence to the preset display screen according to the sequence of the inserted frame moments.
  • this embodiment can output the current frame image to a preset display screen, and then obtain the insertion frame time of the inserted frame image, and output the inserted frame images to the preset display screen in sequence according to the time sequence of the insertion frame time, so as to ensure the accuracy of the image output sequence.
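Steps S310/S320 amount to emitting the current frame first and then the inserted frames in the time order of their frame insertion moments. A sketch, with the mapping structure as an assumption:

```python
def output_sequence(current_frame_time, inserted):
    """Return output order: current frame first, then inserted frames sorted by
    their frame insertion moments. `inserted` maps moment -> frame label."""
    order = [("current", current_frame_time)]
    for t in sorted(inserted):             # time sequence of the insertion moments
        order.append((inserted[t], t))
    return [name for name, _ in order]

# Two inserted frames given out of order; output follows their moments.
print(output_sequence(0.0, {0.011: "insert2", 0.0055: "insert1"}))
# ['current', 'insert1', 'insert2']
```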
  • this embodiment performs image compensation on the current frame image through the inter-frame pixel point movement information, generates an inserted frame image after the current frame image, and after outputting the current frame image and the inserted frame image to the preset display screen in sequence, the frame rate of the output image can be increased, thereby ensuring the user's MR experience.
  • in some embodiments, before step S210 of determining the moving pixels in the current frame image and the moving trajectory of the moving pixels according to the inter-frame pixel movement information, the method further includes:
  • Step A10 obtaining an inter-frame period and a target number of inserted frames between a first shooting moment of the current frame image and a second shooting moment of the next frame image;
  • Step A20 determining the corresponding frame insertion time in the inter-frame period according to the inter-frame period and the target number of inserted frames.
  • the target insertion frame number is the number of frames that need to be inserted between two adjacent frames of images.
  • the inter-frame period is the time difference between the first shooting moment of the current frame image and the second shooting moment of the next frame image, and the inter-frame period can be obtained according to the image output frame rate of the standard camera.
  • the image output frame rate of the standard camera is 60 frames/second
  • the time differences between adjacent interpolation moments may be equal, or they may be unequal.
  • the interpolation moment can also be set by the user according to needs.
  • the interframe period and the target number of inserted frames between the first shooting moment of the current frame image and the second shooting moment of the next frame image are obtained; and the corresponding interframe insertion time in the interframe period is determined according to the interframe period and the target number of inserted frames.
  • the interframe insertion time that needs to be inserted in the interframe period between the current frame image and the next frame image can be determined.
  • the time intervals of the output interpolation frame images are consistent, thereby ensuring the smoothness of the output image.
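Steps A10/A20 can be sketched as follows for the equal-spacing case (the application also allows unequal or user-set moments; function name is an assumption). With a 60 frames/second standard camera, the inter-frame period is 1/60 s, and inserting three frames splits it into four equal gaps:

```python
def insertion_moments(t_first: float, frame_rate: float, n_insert: int):
    """Split the inter-frame period into equal intervals and return the
    frame insertion moments after the first shooting moment."""
    period = 1.0 / frame_rate          # inter-frame period from the output frame rate
    step = period / (n_insert + 1)     # n_insert moments => n_insert + 1 equal gaps
    return [t_first + step * (k + 1) for k in range(n_insert)]

# 60 frames/second camera, target of 3 inserted frames per inter-frame period
moments = insertion_moments(0.0, 60.0, 3)
print([round(t * 1000, 3) for t in moments])  # [4.167, 8.333, 12.5] (milliseconds)
```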
  • the movement trajectory includes sub-movement trajectories from the first shooting moment to each of the interpolation frame moments, and step S220 moves the moving pixel points in the current frame image according to the movement trajectory to generate at least one interpolation frame image, including:
  • Step B10 determining the target position of the moving pixel point at the interpolation time corresponding to the sub-movement trajectory according to the sub-movement trajectory;
  • Step B20 moving the moving pixel point to the target position, and generating an insertion frame image corresponding to the insertion frame moment.
  • the movement trajectory includes a sub-movement trajectory from the first shooting moment to each of the interpolation moments.
  • the target position of the moving pixel point at the interpolation moment corresponding to the sub-movement trajectory is determined according to the sub-movement trajectory, wherein the target position is the position of the moving pixel point on the sub-movement trajectory at the interpolation moment.
  • the interpolation frame image corresponding to the interpolation moment is generated by moving the moving pixel point to the target position.
  • the corresponding inserted frame image is generated through the sub-movement trajectory from the first shooting moment of the standard camera to each of the inserted frame moments.
  • the accuracy of generating the inserted frame image in this embodiment is higher.
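Step B10 reads the target position of a moving pixel from its sub-movement trajectory at a frame insertion moment. In this sketch the trajectory is a list of (time, x, y) EVS samples, and linear interpolation between the two nearest samples is an assumption for illustration, not mandated by the application:

```python
def position_at(sub_trajectory, t_insert):
    """Target position of a moving pixel at a frame insertion moment,
    read from its sub-movement trajectory of (time, x, y) samples."""
    pts = sorted(sub_trajectory)  # sort samples by time
    for (t0, x0, y0), (t1, x1, y1) in zip(pts, pts[1:]):
        if t0 <= t_insert <= t1:
            a = (t_insert - t0) / (t1 - t0)  # fraction of the way between samples
            return (round(x0 + a * (x1 - x0)), round(y0 + a * (y1 - y0)))
    return pts[-1][1:]  # after the last sample, hold the final position

# A pixel that moves 4 columns to the right over a 10 ms sub-trajectory.
track = [(0.0, 10, 10), (0.010, 14, 10)]
print(position_at(track, 0.005))  # (12, 10)
```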
  • FIG. 5 is a schematic diagram of the structure of an image interpolation device involved in an embodiment of the present application.
  • the present application provides an image interpolation device, the image interpolation device comprising:
  • An acquisition module 10 is used to acquire the current frame image and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is the pixel point movement information between the current frame image and the next frame image;
  • a generating module 20 configured to generate at least one inserted frame image according to the current frame image and the inter-frame pixel point movement information
  • the output module 30 is used to output the current frame image and the inserted frame image to a preset display screen in sequence.
  • the generating module 20 is further configured to:
  • the moving pixels in the current frame image are moved according to the moving trajectory to generate at least one inserted frame image.
  • the generating module 20 is further configured to:
  • a corresponding inter-frame time point in the inter-frame period is determined.
  • the time differences between adjacent interpolation moments are equal.
  • the generating module 20 is further configured to:
  • the moving pixel point is moved to the target position to generate an insertion frame image corresponding to the insertion frame moment.
  • the acquisition module 10 is further used for:
  • the target scene is photographed by a standard camera to obtain a current frame image
  • the event-driven camera is used to detect pixel points in the imaging picture of the standard camera to obtain the pixel point movement information between frames.
  • the output module 30 is further used for:
  • the inserted frame images are output to the preset display screen in sequence.
  • the image interpolation device implements the operations in the image interpolation method provided in the above embodiment.
  • the specific implementation steps can refer to the contents recorded in the above embodiment, and will not be elaborated here.
  • an embodiment of the present application further proposes a computer storage medium, on which a computer program is stored.
  • the computer program is executed by a processor, the operations in the image interpolation method provided in the above embodiment are implemented, and the specific steps are not repeated here.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) as described above, and includes a number of instructions to cause a terminal device (which can be a mobile phone, computer, server, network device, etc.) to execute the methods described in the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Television Systems (AREA)

Abstract

The present application relates to the technical field of image frame interpolation and discloses an image frame interpolation method and apparatus, a device, and a computer-readable storage medium. The image frame interpolation method comprises: obtaining a current image frame and inter-frame pixel point movement information, wherein the inter-frame pixel point movement information is pixel point movement information between the current image frame and a next image frame; generating at least one interpolated image frame according to the current image frame and the inter-frame pixel point movement information; and sequentially outputting the current image frame and the interpolated image frame to a preset display screen. The present application increases the output image frame rate, ensuring the user's MR experience.

Description

Image frame interpolation method, apparatus, device, and computer-readable storage medium

This application claims priority to the Chinese patent application filed with the China Patent Office on November 17, 2022, with application number 202211441273.0 and entitled "Image frame interpolation method, apparatus, device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.

Technical Field

The present application relates to the technical field of image frame interpolation, and in particular to an image frame interpolation method, apparatus, device, and computer-readable storage medium.
Background

With the rapid growth of the VR (Virtual Reality) and AR (Augmented Reality) industries over the past two years, their value in industry, medical care, entertainment, office work, and social interaction has gradually become apparent, and major manufacturers are now exploring and investing in the MR (Mixed Reality) field.

The main technical route for implementing MR at present is based on VST (Video See-Through) technology. In VST technology, a real-time view of the real world is captured by a camera, and the real-time view is then combined with a virtual view of the digital world and presented on a display screen, so that the combined image is delivered to the user's eyes. VST technology allows full control over visual integration, permitting complete occlusion between virtual and real objects and even higher-level modification of real objects.

However, the camera used by VST technology to capture real-world scenes typically performs imaging with a CIS (CMOS Image Sensor). A CIS is charge-based and can only produce output after the charge has been integrated. This limits the sampling frequency of the CIS and results in a low output image frame rate (typically tens to about a hundred frames per second), affecting the user's MR experience.
Summary of the Invention

The main purpose of the present application is to provide an image frame interpolation method, apparatus, device, and computer-readable storage medium, aiming to solve the technical problem that the frame rate of images output by a camera using a CIS for imaging is low.

To achieve the above objective, in a first aspect, the present application provides an image frame interpolation method, the method comprising:

acquiring a current frame image and inter-frame pixel movement information, wherein the inter-frame pixel movement information is the pixel movement information between the current frame image and the next frame image;

generating at least one inserted frame image according to the current frame image and the inter-frame pixel movement information; and

sequentially outputting the current frame image and the inserted frame image to a preset display screen.

Based on the above technical solution, a current frame image and inter-frame pixel movement information are obtained, wherein the inter-frame pixel movement information is the pixel movement information between the current frame image and the next frame image; at least one inserted frame image is generated according to the current frame image and the inter-frame pixel movement information; and the current frame image and the inserted frame image are sequentially output to a preset display screen. This embodiment thus performs image compensation on the current frame image by means of the inter-frame pixel movement information to generate an inserted frame image following the current frame image. After the current frame image and the inserted frame image are sequentially output to the preset display screen, the frame rate of the output image is increased, ensuring the user's MR experience.
According to the first aspect, the moving pixels in the current frame image and the movement trajectories of the moving pixels are determined according to the inter-frame pixel movement information; and

the moving pixels in the current frame image are moved according to the movement trajectories to generate at least one inserted frame image.

Based on the above technical solution, the moving pixels in the current frame image and the movement trajectories of the moving pixels are determined according to the inter-frame pixel movement information, and the moving pixels in the current frame image are moved according to the movement trajectories to generate at least one inserted frame image. Compared with generating inserted frame images by means of motion vectors and a neural network model, this embodiment generates inserted frame images with higher accuracy.
According to the first aspect, or any implementation of the first aspect above, before the step of determining the moving pixels in the current frame image and the movement trajectories of the moving pixels according to the inter-frame pixel movement information, the method comprises:

acquiring the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image, and a target number of inserted frames; and

determining the corresponding insertion moments within the inter-frame period according to the inter-frame period and the target number of inserted frames.

Based on the above technical solution, the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image, and a target number of inserted frames, are acquired; and the corresponding insertion moments within the inter-frame period are determined according to the inter-frame period and the target number of inserted frames. The insertion moments at which frames need to be inserted within the inter-frame period between the current frame image and the next frame image can thus be determined.
According to the first aspect, or any implementation of the first aspect above, the time differences between adjacent insertion moments are equal.

Based on the above technical solution, by making the time differences between adjacent insertion moments equal, the output inserted frame images have uniform time intervals, which ensures the smoothness of the output images.
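As a minimal sketch of this step (the function name and arguments are illustrative, not part of the claimed method), evenly spaced insertion moments within the inter-frame period can be computed as:

```python
def interpolation_times(t1, t2, num_inserts):
    """Return evenly spaced insertion moments strictly between the first
    shooting moment t1 and the second shooting moment t2, so that the
    time differences between adjacent insertion moments are equal."""
    step = (t2 - t1) / (num_inserts + 1)
    return [t1 + k * step for k in range(1, num_inserts + 1)]
```

For example, with t1 = 0 ms and t2 = 40 ms (a 25 fps standard camera) and 3 inserted frames, the insertion moments are 10, 20, and 30 ms, raising the effective output rate to 100 fps.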
According to the first aspect, or any implementation of the first aspect above, the movement trajectory comprises sub-movement trajectories from the first shooting moment to each insertion moment, and the step of moving the moving pixels in the current frame image according to the movement trajectory to generate at least one inserted frame image comprises:

determining, according to a sub-movement trajectory, the target position of a moving pixel at the insertion moment corresponding to the sub-movement trajectory; and

moving the moving pixel to the target position to generate the inserted frame image corresponding to the insertion moment.

Based on the above technical solution, the corresponding inserted frame images are generated by means of the sub-movement trajectories from the first shooting moment of the standard camera to each insertion moment. Compared with generating inserted frame images by means of motion vectors and a neural network model, this embodiment generates inserted frame images with higher accuracy.
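A hedged sketch of this lookup, assuming for illustration that each sub-movement trajectory is recorded as timestamped (timestamp, y, x) samples from the event sensor (this representation is an assumption, not stated in the application):

```python
import bisect

def position_at(trajectory, t_insert):
    """trajectory: list of (timestamp, y, x) tuples sorted by timestamp,
    describing one pixel's sub-movement trajectory from the first
    shooting moment onward. Returns the pixel's target position at the
    insertion moment t_insert, taken as the latest recorded position
    not later than t_insert."""
    timestamps = [t for t, _, _ in trajectory]
    i = bisect.bisect_right(timestamps, t_insert) - 1
    _, y, x = trajectory[max(i, 0)]
    return (y, x)
```

Because event sensors report changes asynchronously, taking the latest sample at or before the insertion moment is one simple choice; an implementation could also interpolate between the two surrounding samples.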
According to the first aspect, or any implementation of the first aspect above, before the step of acquiring a current frame image and inter-frame pixel movement information, wherein the inter-frame pixel movement information is the pixel movement information between the current frame image and the next frame image, the method comprises:

photographing a target scene with a standard camera to obtain the current frame image; and

detecting, with an event-driven camera, the pixels in the imaging picture of the standard camera to obtain the inter-frame pixel movement information.

Based on the above technical solution, after the target scene is photographed with a standard camera to obtain the current frame image, the event-driven camera collects the inter-frame pixel movement information within the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image. The movement of the pixels within the inter-frame period is thereby determined, so that the corresponding inserted frame images can subsequently be generated according to the inter-frame pixel movement information.
According to the first aspect, or any implementation of the first aspect above, the step of sequentially outputting the current frame image and the inserted frame image to a preset display screen comprises:

outputting the current frame image to the preset display screen, and acquiring the insertion moments of the inserted frame images; and

outputting the inserted frame images to the preset display screen in sequence according to the order of the insertion moments.

Based on the above technical solution, after the current frame image is obtained, this embodiment outputs the current frame image to the preset display screen, then acquires the insertion moments of the inserted frame images, and outputs the inserted frame images to the preset display screen in the chronological order of their insertion moments, so as to ensure the correctness of the image output order.
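A minimal sketch of this output step (the `display` callback and the data layout are assumptions for illustration only):

```python
def output_frames(current_frame, inserted_frames, display):
    """inserted_frames: list of (insertion_moment, image) pairs.
    The current frame image is output first, and the inserted frame
    images follow in chronological order of their insertion moments."""
    display(current_frame)
    for _, image in sorted(inserted_frames, key=lambda pair: pair[0]):
        display(image)
```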
In a second aspect, the present application provides an image frame interpolation apparatus, the apparatus comprising:

an acquisition module, configured to acquire a current frame image and inter-frame pixel movement information, wherein the inter-frame pixel movement information is the pixel movement information between the current frame image and the next frame image;

a generation module, configured to generate at least one inserted frame image according to the current frame image and the inter-frame pixel movement information; and

an output module, configured to sequentially output the current frame image and the inserted frame image to a preset display screen.
According to the second aspect, the generation module is further configured to:

determine the moving pixels in the current frame image and the movement trajectories of the moving pixels according to the inter-frame pixel movement information; and

move the moving pixels in the current frame image according to the movement trajectories to generate at least one inserted frame image.

According to the second aspect, or any implementation of the second aspect above, the generation module is further configured to:

acquire the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image, and a target number of inserted frames; and

determine the corresponding insertion moments within the inter-frame period according to the inter-frame period and the target number of inserted frames.
According to the second aspect, or any implementation of the second aspect above, the time differences between adjacent insertion moments are equal.

According to the second aspect, or any implementation of the second aspect above, the generation module is further configured to:

determine, according to a sub-movement trajectory, the target position of a moving pixel at the insertion moment corresponding to the sub-movement trajectory; and

move the moving pixel to the target position to generate the inserted frame image corresponding to the insertion moment.

According to the second aspect, or any implementation of the second aspect above, the acquisition module is further configured to:

photograph a target scene with a standard camera to obtain the current frame image; and

detect, with an event-driven camera, the pixels in the imaging picture of the standard camera to obtain the inter-frame pixel movement information.

According to the second aspect, or any implementation of the second aspect above, the output module is further configured to:

output the current frame image to the preset display screen, and acquire the insertion moments of the inserted frame images; and

output the inserted frame images to the preset display screen in sequence according to the order of the insertion moments.
In a third aspect, the present application provides an image frame interpolation device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program is configured to implement the steps of the image frame interpolation method described above.

The third aspect and any implementation of the third aspect correspond to the first aspect and any implementation of the first aspect, respectively. For the technical effects corresponding to the third aspect and any implementation of the third aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation of the first aspect, which will not be repeated here.

In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to execute the image frame interpolation method described in the first aspect or any possible implementation of the first aspect.

The fourth aspect and any implementation of the fourth aspect correspond to the first aspect and any implementation of the first aspect, respectively. For the technical effects corresponding to the fourth aspect and any implementation of the fourth aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation of the first aspect, which will not be repeated here.

In a fifth aspect, an embodiment of the present application provides a computer program, which includes instructions for executing the image frame interpolation method in the first aspect and any possible implementation of the first aspect.

The fifth aspect and any implementation of the fifth aspect correspond to the first aspect and any implementation of the first aspect, respectively. For the technical effects corresponding to the fifth aspect and any implementation of the fifth aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation of the first aspect, which will not be repeated here.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of the video see-through technology involved in an embodiment of the present application;

FIG. 2 is a schematic structural diagram of an image frame interpolation device in the hardware operating environment involved in an embodiment of the present application;

FIG. 3 is a schematic flowchart of a first embodiment of the image frame interpolation method of the present application;

FIG. 4 is a schematic flowchart of a second embodiment of the image frame interpolation method of the present application;

FIG. 5 is a schematic structural diagram of an image frame interpolation apparatus involved in an embodiment of the present application.

The realization of the objectives, functional features, and advantages of the present application will be further explained with reference to the embodiments and the accompanying drawings.

Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.

The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent three cases: A exists alone, both A and B exist, and B exists alone.

The terms "first" and "second" in the specification and claims of the embodiments of the present application are used to distinguish different objects rather than to describe a specific order of objects. For example, a first target object and a second target object are used to distinguish different target objects rather than to describe a specific order of the target objects.

In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be construed as being more preferred or advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner.

In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more. For example, multiple processing units means two or more processing units, and multiple systems means two or more systems.
To keep the description of the following embodiments clear and concise, a brief introduction to an implementation of an image frame interpolation method is first given.

With the rapid growth of the VR (Virtual Reality) and AR (Augmented Reality) industries over the past two years, their value in industry, medical care, entertainment, office work, and social interaction has gradually become apparent, and major manufacturers are now exploring and investing in the MR (Mixed Reality) field.

The main technical route for implementing MR at present is based on VST (Video See-Through) technology. Referring to FIG. 1, FIG. 1 is a schematic diagram of the video see-through technology involved in an embodiment of the present application. In VST technology, a real-time view of the real world is captured by a camera 10, and the real-time view is then combined with a virtual view of the digital world and presented on a display screen 20, so that the combined image is delivered to the user's eyes. VST technology allows full control over visual integration, permitting complete occlusion between virtual and real objects and even higher-level modification of real objects.

However, the camera used by VST technology to capture real-world scenes typically performs imaging with a CIS (CMOS Image Sensor). A CIS is charge-based and can only produce output after the charge has been integrated. This limits the sampling frequency of the CIS and results in a low output image frame rate (typically tens to about a hundred frames per second), affecting the user's MR experience.

The present application designs an image frame interpolation method that performs image compensation on the current frame image by means of pixel movement information, thereby increasing the frame rate of the output image and ensuring the user's MR experience.

In some embodiments, a current frame image and inter-frame pixel movement information are obtained, wherein the inter-frame pixel movement information is the pixel movement information between the current frame image and the next frame image; at least one inserted frame image is generated according to the current frame image and the inter-frame pixel movement information; and the current frame image and the inserted frame image are sequentially output to a preset display screen. This embodiment thus performs image compensation on the current frame image by means of the inter-frame pixel movement information to generate an inserted frame image following the current frame image. After the current frame image and the inserted frame image are sequentially output to the preset display screen, the frame rate of the output image is increased, ensuring the user's MR experience.
Referring to FIG. 2, FIG. 2 is a schematic structural diagram of an image frame interpolation device in the hardware operating environment involved in an embodiment of the present application.

Specifically, the image frame interpolation device may be an MR device, a PC (Personal Computer), a tablet computer, a portable computer, a server, or the like.

As shown in FIG. 2, the image frame interpolation device may include: a processor 1001, such as a central processing unit (CPU); a communication bus 1002; a user interface 1003; a network interface 1004; and a memory 1005. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (RAM), or a stable non-volatile memory (NVM) such as a disk memory. The memory 1005 may optionally also be a storage device independent of the aforementioned processor 1001.

Those skilled in the art will appreciate that the structure shown in FIG. 2 does not constitute a limitation on the image frame interpolation device, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.

As shown in FIG. 2, the memory 1005, as a storage medium, may include an operating system, a data storage module, a network communication module, a user interface module, and a computer program.

In the image frame interpolation device shown in FIG. 2, the network interface 1004 is mainly used for data communication with other devices, and the user interface 1003 is mainly used for data interaction with the user. The processor 1001 and the memory 1005 of the image frame interpolation device of the present application may be provided in the image frame interpolation device, and the image frame interpolation device calls, through the processor 1001, the computer program stored in the memory 1005 and executes the image frame interpolation method provided in the embodiments of the present application.

It should be understood that the above description is merely an example given for a better understanding of the technical solution of this embodiment and does not constitute the sole limitation on this embodiment.
The image frame interpolation method will be described in detail below with reference to the schematic flowchart of the first embodiment of the image frame interpolation method shown in FIG. 3.

Referring to FIG. 3, an embodiment of the image frame interpolation method of the present application provides an image frame interpolation method, comprising:

Step S100: acquiring a current frame image and inter-frame pixel movement information, wherein the inter-frame pixel movement information is the pixel movement information between the current frame image and the next frame image.

In this embodiment, it can be understood that the current frame image is an image obtained by photographing a target scene with a standard camera, where the standard camera is a camera equipped with a conventional image sensor such as a CIS (CMOS Image Sensor). The pixel movement information includes at least movement information collected by an EVS (Event-based Vision Sensor), such as the coordinate positions of pixels in the imaging picture of the standard camera and the time points corresponding to those coordinate positions, within the inter-frame period from the capture of the current frame image to the capture of the next frame image. It can be understood that an event-driven camera equipped with an EVS may be used to collect the pixel movement information.

Exemplarily, a standard camera using a CIS and an event-driven camera using an EVS may be provided in an MR device, so that when the standard camera photographs the target scene, the event-driven camera can collect the inter-frame pixel movement information between adjacent frames.
Before the step S100 of acquiring the current frame image and the inter-frame pixel movement information, the method comprises:

Step S110: photographing the target scene with a standard camera to obtain the current frame image; and

Step S120: detecting, with an event-driven camera, the pixels in the imaging picture of the standard camera to obtain the inter-frame pixel movement information.

Specifically, the target scene is the scene that the user expects to photograph. In this embodiment, the target scene is photographed with a standard camera to obtain the current frame image corresponding to the target scene. An EVS is based on the PD (Photo-Diode) current and monitors whether the current signal of the PD has changed by more than a given threshold. Taking a 2-bit signal output as an example, if the current signal becomes stronger, 01 may be output; if the current signal becomes weaker, 10 may be output; and if the change in the current signal does not exceed the threshold, 00 is output. Moreover, all pixels perform this conversion simultaneously, in parallel, and because the conversion is simple, the 2-bit conversion is very fast. This enables very high-speed conversion and output of the entire pixel array, so the sampling frequency of the event-driven camera is far higher than that of a standard camera. Therefore, after the current frame image captured by the standard camera is obtained, the pixels in the imaging picture of the standard camera can be detected with the event-driven camera, so as to collect the inter-frame pixel movement information within the inter-frame period from the capture of the current frame image to the capture of the next frame image by the standard camera.
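The 2-bit thresholding described above can be sketched as follows. This is an illustrative software model only; a real EVS performs the comparison in per-pixel circuitry, and the function name and threshold value are assumptions:

```python
def evs_code(i_prev, i_now, threshold):
    """Return the 2-bit event code for one photodiode: '01' when the
    current signal becomes stronger by more than the threshold, '10'
    when it becomes weaker by more than the threshold, and '00' when
    the change does not exceed the threshold."""
    delta = i_now - i_prev
    if delta > threshold:
        return "01"
    if delta < -threshold:
        return "10"
    return "00"
```

Because each pixel only emits a code when its signal actually changes, the sensor reports sparse, asynchronous events rather than full frames, which is why its effective sampling rate can far exceed that of a charge-integrating CIS.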
In this embodiment, after the target scene is photographed with the standard camera to obtain the current frame image, the event-driven camera collects the inter-frame pixel movement information within the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image. The movement of the pixels within the inter-frame period is thereby determined, so that the corresponding inserted frame image can subsequently be generated according to the inter-frame pixel movement information.

Step S200: generate at least one inserted frame image according to the current frame image and the inter-frame pixel movement information.

After the current frame image and the inter-frame pixel movement information are obtained, the moving pixels that moved after the first shooting moment of the current frame image, and the movement trajectories of those moving pixels within the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image, can be determined from the inter-frame pixel movement information. The corresponding moving pixels in the current frame image can then be moved according to the movement trajectories to generate at least one inserted frame image, which is inserted between the current frame image and the next frame image to increase the frame rate of the output images.

Step S200 of generating at least one inserted frame image according to the current frame image and the inter-frame pixel movement information includes:

Step S210: determine, according to the inter-frame pixel movement information, the moving pixels in the current frame image and the movement trajectories of the moving pixels;

Step S220: move the moving pixels in the current frame image according to the movement trajectories to generate at least one inserted frame image.
It can be understood that the EVS detects the changes of each pixel in the imaging picture asynchronously. Therefore, based on the inter-frame pixel movement information, the moving pixels of the current frame image that move within the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image can be determined, together with the coordinate positions of those moving pixels within the inter-frame period and the corresponding time points, from which the movement trajectories of the moving pixels are generated. The moving pixels in the current frame image can then be moved according to the movement trajectories to generate at least one inserted frame image. For example, if one inserted frame image needs to be inserted between the current frame image and the next frame image, a coordinate position corresponding to a certain time point of the moving pixels within the inter-frame period can be selected according to the movement trajectory, and the moving pixels in the current frame image can be moved to that coordinate position to generate one inserted frame image.
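A minimal sketch of moving pixels to their selected positions, with hypothetical names and without the occlusion or hole-filling handling a real system would need (the vacated positions are simply zeroed here):

```python
import numpy as np

def make_inserted_frame(current_frame, moves):
    """Generate one inserted frame by moving the listed pixels of the
    current frame to their positions at the chosen insertion moment.
    `moves` is a list of ((row0, col0), (row1, col1)) position pairs."""
    frame = current_frame.copy()
    for (y0, x0), (y1, x1) in moves:
        frame[y1, x1] = current_frame[y0, x0]
        frame[y0, x0] = 0  # vacated position; left as a hole in this sketch
    return frame
```

A real implementation would inpaint the vacated positions from neighbouring pixels or the next frame rather than leaving them empty.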
In this embodiment, the moving pixels in the current frame image and the movement trajectories of the moving pixels are determined according to the inter-frame pixel movement information, and the moving pixels in the current frame image are moved according to the movement trajectories to generate at least one inserted frame image. Compared with generating an inserted frame image from motion vectors or a neural network model, this embodiment generates the inserted frame image more accurately.

Step S300: output the current frame image and the inserted frame image in sequence to a preset display screen.

After the current frame image and the inserted frame image are obtained, the current frame image and the inserted frame image are output in sequence to the preset display screen according to their corresponding moments, so as to increase the frame rate of the images output to the preset display screen.

Further, step S300 of outputting the current frame image and the inserted frame image in sequence to the preset display screen includes:

Step S310: output the current frame image to the preset display screen, and obtain the insertion moments of the inserted frame images;

Step S320: output the inserted frame images in sequence to the preset display screen according to the order of the insertion moments.

In this embodiment, after the current frame image is obtained, it can be output to the preset display screen. The insertion moments of the inserted frame images are then obtained, and the inserted frame images are output in sequence to the preset display screen in chronological order of their insertion moments, which guarantees that the images are output in the correct order.
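Steps S310 and S320 amount to a simple chronological ordering; a sketch with hypothetical names:

```python
def output_sequence(current_frame, inserted):
    """Return the display order: the current frame first, then the
    inserted frames sorted by their insertion moment (steps S310/S320).
    `inserted` is a list of (moment_ms, frame) pairs."""
    ordered = sorted(inserted, key=lambda pair: pair[0])
    return [current_frame] + [frame for _, frame in ordered]
```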
In the first embodiment of the present application, the current frame image and the inter-frame pixel movement information are obtained, where the inter-frame pixel movement information is the pixel movement information between the current frame image and the next frame image; at least one inserted frame image is generated according to the current frame image and the inter-frame pixel movement information; and the current frame image and the inserted frame image are output in sequence to a preset display screen. This embodiment thus performs image compensation on the current frame image using the inter-frame pixel movement information and generates the inserted frame images that follow the current frame image. After the current frame image and the inserted frame images are output in sequence to the preset display screen, the frame rate of the output images is increased, which guarantees the user's MR experience.

The image frame interpolation method is described in detail below with reference to the flowchart of the second embodiment of the method shown in FIG. 4.

Referring to FIG. 4, in another embodiment of the image frame interpolation method of the present application, before the step of determining, according to the inter-frame pixel movement information, the moving pixels in the current frame image and the movement trajectories of the moving pixels, step S200 includes:

Step A10: obtain the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image, and the target number of inserted frames;

Step A20: determine the corresponding insertion moments within the inter-frame period according to the inter-frame period and the target number of inserted frames.
Specifically, the target number of inserted frames is the number of frames to be inserted between two adjacent frame images; it may be a number input by the user, or a number calculated from a preset output frame rate. For example, assume the image output frame rate of the standard camera is 60 frames per second (i.e., 60 images are output per second) and the preset output frame rate is 240 frames per second. Then (240 − 60)/60 = 3, i.e., the target number of frames to be inserted between two adjacent frames is 3. The inter-frame period is the time difference between the first shooting moment of the current frame image and the second shooting moment of the next frame image, and it can be obtained from the image output frame rate of the standard camera. For example, if the image output frame rate of the standard camera is 60 frames per second, the inter-frame period is 1/60 s ≈ 16.6 ms. The corresponding insertion moments within the inter-frame period can then be determined according to the inter-frame period and the target number of inserted frames. The time differences between adjacent insertion moments may be equal, or they may be unequal. It can of course be understood that the insertion moments may also be set by the user as needed.
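The worked example above can be reproduced directly; the function name and the choice of even spacing are assumptions:

```python
def insertion_moments(camera_fps=60, target_fps=240):
    """Compute evenly spaced insertion moments (in ms) within one
    inter-frame period, following the example above:
    (240 - 60) / 60 = 3 inserted frames per 1/60 s (~16.6 ms) gap."""
    frames_per_gap = (target_fps - camera_fps) // camera_fps
    gap_ms = 1000.0 / camera_fps
    step = gap_ms / (frames_per_gap + 1)
    return [round(step * (k + 1), 2) for k in range(frames_per_gap)]
```

With the default values this yields three moments at roughly 4.17 ms, 8.33 ms and 12.5 ms after the first shooting moment.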
In this embodiment, the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image, and the target number of inserted frames, are obtained, and the corresponding insertion moments within the inter-frame period are determined from them. The insertion moments at which frames need to be inserted in the inter-frame period between the current frame image and the next frame image can thereby be determined.

The time differences between adjacent insertion moments are equal.

In this embodiment, the time differences between adjacent insertion moments are made equal, so the inserted frame images are output at a uniform time interval, which guarantees the smoothness of the output images.

The movement trajectory includes sub-trajectories from the first shooting moment to each of the insertion moments, and step S220 of moving the moving pixels in the current frame image according to the movement trajectory to generate at least one inserted frame image includes:

Step B10: determine, according to the sub-trajectory, the target position of the moving pixel at the insertion moment corresponding to the sub-trajectory;

Step B20: move the moving pixel to the target position to generate the inserted frame image corresponding to the insertion moment.

Specifically, the movement trajectory includes sub-trajectories from the first shooting moment to each of the insertion moments. In this embodiment, the target position of the moving pixel at the insertion moment corresponding to the sub-trajectory is determined according to the sub-trajectory, the target position being the position of the moving pixel on the sub-trajectory at that insertion moment. The moving pixel is then moved to the target position, generating the inserted frame image corresponding to the insertion moment.
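Reading the target position off a time-stamped sub-trajectory can be sketched as follows; the linear interpolation between recorded samples is an assumption added for illustration, since the text only requires the position recorded at the insertion moment:

```python
def position_at(trajectory, t):
    """Return a pixel's (x, y) position at time t, given a time-stamped
    trajectory [(t0, x0, y0), (t1, x1, y1), ...] sorted by time.
    Positions between recorded samples are linearly interpolated."""
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    raise ValueError("t lies outside the recorded trajectory")
```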
In this embodiment, the corresponding inserted frame images are generated from the sub-trajectories running from the first shooting moment of the standard camera to each of the insertion moments. Compared with generating an inserted frame image from motion vectors or a neural network model, this embodiment generates the inserted frame images more accurately.

Refer to FIG. 5, which is a schematic structural diagram of the image frame interpolation apparatus involved in an embodiment of the present application.

Referring to FIG. 5, the present application provides an image frame interpolation apparatus, including:

an acquisition module 10, configured to obtain a current frame image and inter-frame pixel movement information, where the inter-frame pixel movement information is the pixel movement information between the current frame image and the next frame image;

a generation module 20, configured to generate at least one inserted frame image according to the current frame image and the inter-frame pixel movement information; and

an output module 30, configured to output the current frame image and the inserted frame image in sequence to a preset display screen.
Optionally, the generation module 20 is further configured to:

determine, according to the inter-frame pixel movement information, the moving pixels in the current frame image and the movement trajectories of the moving pixels; and

move the moving pixels in the current frame image according to the movement trajectories to generate at least one inserted frame image.

Optionally, the generation module 20 is further configured to:

obtain the inter-frame period between the first shooting moment of the current frame image and the second shooting moment of the next frame image, and the target number of inserted frames; and

determine the corresponding insertion moments within the inter-frame period according to the inter-frame period and the target number of inserted frames.

Optionally, or in any implementation of the second aspect above, the time differences between adjacent insertion moments are equal.

Optionally, the generation module 20 is further configured to:

determine, according to the sub-trajectory, the target position of the moving pixel at the insertion moment corresponding to the sub-trajectory; and

move the moving pixel to the target position to generate the inserted frame image corresponding to the insertion moment.

Optionally, the acquisition module 10 is further configured to:

photograph a target scene with a standard camera to obtain the current frame image; and

detect, by an event-driven camera, the pixels in the imaging picture of the standard camera to obtain the inter-frame pixel movement information.

Optionally, the output module 30 is further configured to:

output the current frame image to the preset display screen and obtain the insertion moments of the inserted frame images; and

output the inserted frame images in sequence to the preset display screen according to the order of the insertion moments.
It can be understood that the image frame interpolation apparatus implements the operations of the image frame interpolation method provided in the above embodiments; for the specific implementation steps, reference may be made to the above embodiments, and details are not repeated here.

In addition, an embodiment of the present application further provides a computer storage medium storing a computer program which, when executed by a processor, implements the operations of the image frame interpolation method provided in the above embodiments; the specific steps are not repeated here.

It should be noted that, herein, the terms "include", "comprise" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or system that includes the element.

The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.

From the description of the above implementations, a person skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.

The above are only preferred embodiments of the present application and do not limit the patent scope of the present application. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included in the patent protection scope of the present application.

Claims (10)

  1. An image frame interpolation method, characterized in that the image frame interpolation method comprises:
    obtaining a current frame image and inter-frame pixel movement information, wherein the inter-frame pixel movement information is pixel movement information between the current frame image and a next frame image;
    generating at least one inserted frame image according to the current frame image and the inter-frame pixel movement information; and
    outputting the current frame image and the inserted frame image in sequence to a preset display screen.
  2. The image frame interpolation method according to claim 1, characterized in that the step of generating at least one inserted frame image according to the current frame image and the inter-frame pixel movement information comprises:
    determining, according to the inter-frame pixel movement information, moving pixels in the current frame image and movement trajectories of the moving pixels; and
    moving the moving pixels in the current frame image according to the movement trajectories to generate at least one inserted frame image.
  3. The image frame interpolation method according to claim 2, characterized in that before the step of determining, according to the inter-frame pixel movement information, the moving pixels in the current frame image and the movement trajectories of the moving pixels, the method comprises:
    obtaining an inter-frame period between a first shooting moment of the current frame image and a second shooting moment of the next frame image, and a target number of inserted frames; and
    determining corresponding insertion moments within the inter-frame period according to the inter-frame period and the target number of inserted frames.
  4. The image frame interpolation method according to claim 3, characterized in that time differences between adjacent insertion moments are equal.
  5. The image frame interpolation method according to claim 3, characterized in that the movement trajectory comprises sub-trajectories from the first shooting moment to each of the insertion moments, and the step of moving the moving pixels in the current frame image according to the movement trajectory to generate at least one inserted frame image comprises:
    determining, according to the sub-trajectory, a target position of the moving pixel at the insertion moment corresponding to the sub-trajectory; and
    moving the moving pixel to the target position to generate the inserted frame image corresponding to the insertion moment.
  6. The image frame interpolation method according to claim 1, characterized in that before the step of obtaining a current frame image and inter-frame pixel movement information, wherein the inter-frame pixel movement information is pixel movement information between the current frame image and a next frame image, the method comprises:
    photographing a target scene with a standard camera to obtain the current frame image; and
    detecting, by an event-driven camera, pixels in an imaging picture of the standard camera to obtain the inter-frame pixel movement information.
  7. The image frame interpolation method according to any one of claims 1 to 6, characterized in that the step of outputting the current frame image and the inserted frame image in sequence to a preset display screen comprises:
    outputting the current frame image to the preset display screen, and obtaining insertion moments of the inserted frame images; and
    outputting the inserted frame images in sequence to the preset display screen according to an order of the insertion moments.
  8. An image frame interpolation apparatus, characterized in that the image frame interpolation apparatus comprises:
    an acquisition module, configured to obtain a current frame image and inter-frame pixel movement information, wherein the inter-frame pixel movement information is pixel movement information between the current frame image and a next frame image;
    a generation module, configured to generate at least one inserted frame image according to the current frame image and the inter-frame pixel movement information; and
    an output module, configured to output the current frame image and the inserted frame image in sequence to a preset display screen.
  9. An image frame interpolation device, characterized in that the image frame interpolation device comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the computer program being configured to implement the steps of the image frame interpolation method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image frame interpolation method according to any one of claims 1 to 7 are implemented.
PCT/CN2023/132099 2022-11-17 2023-11-16 Image frame interpolation method and apparatus, device, and computer-readable storage medium WO2024104439A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211441273.0A CN115835035A (en) 2022-11-17 2022-11-17 Image frame interpolation method, device and equipment and computer readable storage medium
CN202211441273.0 2022-11-17

Publications (1)

Publication Number Publication Date
WO2024104439A1 true WO2024104439A1 (en) 2024-05-23

Family

ID=85528848

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/132099 WO2024104439A1 (en) 2022-11-17 2023-11-16 Image frame interpolation method and apparatus, device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN115835035A (en)
WO (1) WO2024104439A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115835035A (en) * 2022-11-17 2023-03-21 歌尔科技有限公司 Image frame interpolation method, device and equipment and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180098082A1 (en) * 2016-09-30 2018-04-05 Intel Corporation Motion estimation using hybrid video imaging system
CN112532880A (en) * 2020-11-26 2021-03-19 展讯通信(上海)有限公司 Video processing method and device, terminal equipment and storage medium
CN112584234A (en) * 2020-12-09 2021-03-30 广州虎牙科技有限公司 Video image frame complementing method and related device
CN112596843A (en) * 2020-12-29 2021-04-02 北京元心科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114071223A (en) * 2020-07-30 2022-02-18 武汉Tcl集团工业研究院有限公司 Optical flow-based video interpolation frame generation method, storage medium and terminal equipment
CN115835035A (en) * 2022-11-17 2023-03-21 歌尔科技有限公司 Image frame interpolation method, device and equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN115835035A (en) 2023-03-21

Similar Documents

Publication Publication Date Title
US9710923B2 (en) Information processing system, information processing device, imaging device, and information processing method
US9870602B2 (en) Method and apparatus for fusing a first image and a second image
WO2021233032A1 (en) Video processing method, video processing apparatus, and electronic device
US9473725B2 (en) Image-processing and encoded aperture pattern setting device and method, and image pickup device comprising same
CN113973190A (en) Video virtual background image processing method and device and computer equipment
EP2981061A1 (en) Method and apparatus for displaying self-taken images
CN110958401B (en) Super night scene image color correction method and device and electronic equipment
CN107948505B (en) Panoramic shooting method and mobile terminal
WO2024104439A1 (en) Image frame interpolation method and apparatus, device, and computer readable storage medium
CN107018316B (en) Image processing apparatus, image processing method, and storage medium
CN103078924A (en) Visual field sharing method and equipment
CN110084765B (en) Image processing method, image processing device and terminal equipment
CN114025105B (en) Video processing method, device, electronic equipment and storage medium
WO2024104436A1 (en) Image processing method and apparatus, device, and computer-readable storage medium
CN113542600A (en) Image generation method, device, chip, terminal and storage medium
CN112995491B (en) Video generation method and device, electronic equipment and computer storage medium
CN112422798A (en) Photographing method and device, electronic equipment and storage medium
CN115546043B (en) Video processing method and related equipment thereof
US11222235B2 (en) Method and apparatus for training image processing model, and storage medium
CN105472263B (en) Image acquisition method and the image capture equipment for using the method
CN114520906A (en) Monocular camera-based three-dimensional portrait complementing method and system
CN113038010A (en) Video processing method, video processing device, storage medium and electronic equipment
KR20210115185A (en) Image processing method and appratus
CN111726531B (en) Image shooting method, processing method, device, electronic equipment and storage medium
CN107534736B (en) Image registration method and device of terminal and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23890868

Country of ref document: EP

Kind code of ref document: A1