CN115348437B - Video processing method, device, equipment and storage medium


Info

Publication number: CN115348437B (application CN202210908234.0A; earlier publication CN115348437A)
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active (granted)
Inventors: 曾光 (Zeng Guang), 岳小龙 (Yue Xiaolong), 张波 (Zhang Bo)
Original and current assignee: Zejing Xi'an Automotive Electronics Co., Ltd.
Prior art keywords: video, frame image, resolution, display screen, display

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H04N 13/398 Synchronisation thereof; Control thereof

Abstract

The application discloses a video processing method, apparatus, device, and storage medium, belonging to the field of video processing. The method comprises the following steps: first, the first center distance between the left and right display pictures of the display screen of a near-eye display device is adjusted according to the acquired user interpupillary distance to obtain a second center distance related to the user interpupillary distance; then, the first video to be displayed is processed according to the first center distance and the second center distance to obtain a second video, and the second video is displayed through the display screen. In this way, the second center distance of the left and right display pictures, related to the user interpupillary distance, is obtained first, and the video to be displayed is then processed according to the adjusted second center distance, so that the overlapping area of each frame image in the processed video matches both the adjusted center distance of the left and right display pictures and the user interpupillary distance; when the processed video is displayed through the display screen, the video the user views through an eyepiece adapted to his or her interpupillary distance has a good effect.

Description

Video processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of video processing, and in particular, to a video processing method, apparatus, device, and storage medium.
Background
With the development of science and technology, near-eye display devices based on VR (Virtual Reality), AR (Augmented Reality), MR (Mixed Reality), and similar technologies are widely used in many fields because they can give users an immersive three-dimensional stereoscopic experience. A near-eye display device comprises an eyepiece and a display screen. The display screen generally shows a left display picture and a right display picture: the left image included in each frame of the video is shown in the left display picture, and the right image included in each frame is shown in the right display picture. Through the eyepiece, the user views the overlapping area of the left and right images of each frame shown on the display screen; this overlapping area is the three-dimensional stereoscopic image the user perceives.
At present, when a near-eye display device plays video, the discomfort of a user wearing the device to watch three-dimensional stereoscopic video is reduced by adjusting the center distance of the eyepieces so that it matches the user's interpupillary distance; the influence of the center distance between the left and right display pictures on the viewing experience, however, is rarely considered. Since this center distance affects the size of the overlapping area of the left and right images of each displayed frame, a center distance that does not match the user's interpupillary distance leaves the overlapping area unadapted to that interpupillary distance, and the three-dimensional video viewed through an eyepiece adapted to the user's interpupillary distance may look poor.
Disclosure of Invention
The application provides a video processing method, apparatus, device, and storage medium that process the video to be displayed so that the overlapping area of each frame image in the processed video matches both the adjusted center distance of the left and right display pictures and the user's interpupillary distance, giving a better viewing effect through an eyepiece adapted to that interpupillary distance. The technical scheme is as follows:
in a first aspect, a video processing method is provided, the method comprising:
acquiring a user pupil distance;
according to the user interpupillary distance, adjusting a first center distance of left and right display pictures of a display screen of the near-eye display device to obtain a second center distance, wherein the second center distance is related to the user interpupillary distance;
according to the first center distance and the second center distance, processing a first video to be displayed to obtain a second video, wherein the aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as that of the display screen;
and displaying the second video through the display screen.
As an example, the processing the first video to be displayed according to the first center distance and the second center distance to obtain a second video includes:
Adjusting the display resolution of the display screen according to the first center distance and the second center distance;
processing the first video according to the adjusted display resolution of the display screen to obtain the second video;
the displaying the second video through the display screen includes:
and displaying the second video through the display screen according to the adjusted display resolution of the display screen.
As an example, the adjusting the display resolution of the display screen according to the first center distance and the second center distance includes:
determining a target resolution corresponding to the second center distance according to the first center distance, the second center distance and the display resolution of the display screen;
and adjusting the display resolution of the display screen according to the target resolution, wherein the adjusted display resolution of the display screen is the target resolution.
As one example, the determining the target resolution corresponding to the second center distance according to the first center distance, the second center distance, and the display resolution of the display screen includes:
Determining the target resolution according to the first center distance, the second center distance and the display resolution of the display screen by the following formula:
wherein w2 is the lateral resolution included in the target resolution, w1 is the display lateral resolution included in the display resolution of the display screen, PD1 is the second center distance, PD2 is the first center distance, h2 is the longitudinal resolution included in the target resolution, and h1 is the display longitudinal resolution included in the display resolution of the display screen.
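The formula itself appears only as an image in the original publication. Given the variable definitions above, a natural reading is that the target resolution scales the display resolution by the ratio of the second (adjusted) center distance to the first (original) one; the sketch below assumes exactly that proportional relationship and should not be taken as the published formula.

```python
def target_resolution(w1, h1, pd2, pd1):
    """Assumed reconstruction of the target-resolution formula.

    (w1, h1) is the display resolution, pd2 the first (original) center
    distance PD2, and pd1 the second (adjusted) center distance PD1.
    The proportional scaling w2 = w1 * PD1 / PD2, h2 = h1 * PD1 / PD2
    is an assumption, not the published formula.
    """
    scale = pd1 / pd2
    return round(w1 * scale), round(h1 * scale)
```

For example, for a 1920x1080 display whose picture centers move from 64 mm to 62 mm apart, this sketch yields a 1860x1046 target resolution.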
As an example, before the first video to be displayed is processed to obtain the second video, the method further includes:
acquiring the aspect ratio of a source video;
if the aspect ratio of the source video is different from that of the display screen, performing aspect ratio processing on the source video to obtain a third video, and if the aspect ratio of the source video is the same as that of the display screen, the source video is the third video;
and if the resolution of the third video is different from the display resolution of the display screen, performing resolution processing on the third video to obtain the first video, and if the resolution of the third video is the same as the display resolution of the display screen, the third video is the first video.
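The two paragraphs above describe a gating sequence: aspect-ratio processing only when the ratios differ, then resolution processing only when the resolutions differ. A minimal sketch of that dispatch, where comparing resolutions by total pixel count and all names are illustrative assumptions:

```python
def needed_steps(src_aspect, disp_aspect, src_wh, disp_wh):
    """Return which normalisation steps the source video needs before it
    can serve as the 'first video': aspect-ratio processing when the
    ratios differ, then resolution reduction or super-resolution when
    the resolutions differ. Comparing resolutions by pixel count is an
    illustrative assumption."""
    steps = []
    if src_aspect != disp_aspect:
        steps.append("aspect_ratio")
    if src_wh != disp_wh:
        src_px = src_wh[0] * src_wh[1]
        disp_px = disp_wh[0] * disp_wh[1]
        steps.append("reduce_resolution" if src_px > disp_px
                     else "super_resolution")
    return steps
```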
As an example, if the resolution of the third video is different from the display resolution of the display screen, performing resolution processing on the third video to obtain the first video, including:
if the resolution of the third video is larger than the display resolution of the display screen, performing resolution reduction processing on the third video to obtain the first video;
and if the resolution of the third video is smaller than the display resolution of the display screen, performing super-resolution processing on the third video to obtain the first video.
As an example, the performing the resolution reduction processing on the third video to obtain the first video includes:
determining a resolution ratio of a resolution of the third video to a display resolution of the display screen;
determining at least one pixel extraction position in each frame of image included in the third video according to the resolution ratio;
and extracting the pixels at the at least one pixel extraction position in each frame image included in the third video, wherein the third video after pixel extraction is the first video.
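A minimal sketch of the extraction step above. The text only says extraction positions are derived from the resolution ratio; uniformly spaced positions are an assumption:

```python
def decimate_frame(frame, dst_w, dst_h):
    """Reduce one frame (a list of pixel rows) to dst_w x dst_h by
    keeping pixels at uniformly spaced extraction positions and
    discarding the rest. Uniform spacing is an assumption; the patent
    only ties the positions to the resolution ratio."""
    src_h, src_w = len(frame), len(frame[0])
    xs = [x * src_w // dst_w for x in range(dst_w)]  # columns to keep
    ys = [y * src_h // dst_h for y in range(dst_h)]  # rows to keep
    return [[frame[y][x] for x in xs] for y in ys]
```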
As an example, the performing super-resolution processing on the third video to obtain the first video includes:
For a first target frame image in the third video, determining adjacent frame images of the first target frame image, wherein the first target frame image is any frame in the third video;
determining at least one pixel interpolation position in the first target frame image according to the resolution of the third video and the display resolution of the display screen;
determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image;
and carrying out pixel interpolation at each pixel interpolation position in the first target frame image according to the interpolation frame image corresponding to the first target frame image, wherein the first target frame image after pixel interpolation is the frame image corresponding to the first target frame image in the first video.
As an example, the interpolated image corresponding to the first target frame image includes a first interpolated image and/or a second interpolated image;
the determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image comprises the following steps:
determining a motion state of the first target frame image relative to the adjacent frame image, and determining the first interpolation frame image corresponding to the first target frame image according to the motion state;
And/or the number of the groups of groups,
and carrying out feature extraction and matching on the first target frame image and the adjacent frame image, and fusing the matched features to obtain the second interpolation frame image corresponding to the first target frame image.
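The text leaves the fusion method open (motion-state based and/or feature matching). As a deliberately crude stand-in, the sketch below builds an interpolation frame as the per-pixel average of the target frame and its adjacent frame; a real implementation would use the motion estimation or feature extraction and matching described above:

```python
def naive_interpolation_frame(target, neighbor):
    """Illustrative stand-in for the interpolation-frame construction:
    fuse the target frame and its adjacent frame by per-pixel averaging.
    Plain averaging replaces the motion-state / feature-matching fusion
    the text describes and is only a simplification."""
    return [[(a + b) // 2 for a, b in zip(row_t, row_n)]
            for row_t, row_n in zip(target, neighbor)]
```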
As an example, if the aspect ratio of the source video is different from the aspect ratio of the display screen, performing aspect ratio processing on the source video to obtain a third video, and if the aspect ratio of the source video is the same as the aspect ratio of the display screen, before the source video is the third video, the method further includes:
acquiring a frame rate of the source video and a field angle of an eyepiece of the near-eye display device;
if the frame rate of the source video is different from the frame rate of the display screen, performing frame rate processing on the source video to obtain a fourth video, and if the frame rate of the source video is the same as the frame rate of the display screen, the source video is the fourth video;
if the field angle of the source video is different from the field angle of the eyepiece, performing field angle processing on the fourth video to obtain a fifth video, and if the field angle of the source video is the same as the field angle of the eyepiece, the fourth video is the fifth video;
And if the aspect ratio of the source video is different from the aspect ratio of the display screen, performing aspect ratio processing on the source video to obtain a third video, and if the aspect ratio of the source video is the same as the aspect ratio of the display screen, the source video is the third video, including:
and if the aspect ratio of the source video is different from the aspect ratio of the display screen, performing aspect ratio processing on the fifth video to obtain a third video, and if the aspect ratio of the source video is the same as the aspect ratio of the display screen, the fifth video is the third video.
As an example, if the frame rate of the source video is different from the frame rate of the display screen, performing frame rate processing on the source video includes:
if the frame rate of the source video is larger than the frame rate of the display screen, deleting at least one frame of image in the source video;
and if the frame rate of the source video is smaller than the frame rate of the display screen, determining at least one frame inserting position in the source video, determining a frame inserting image corresponding to the first frame inserting position according to a previous frame image and a next frame image adjacent to the first frame inserting position for the first frame inserting position in the at least one frame inserting position, and inserting the determined frame inserting image into the first frame inserting position, wherein the first frame inserting position is any one of the at least one frame inserting position.
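The frame-rate rule above (delete frames when the source rate is higher, interpolate and insert frames when it is lower) can be sketched as follows. The uniform drop pattern is an assumption, and the insertion branch covers only the rate-doubling case, with each inserted frame built as the element-wise mean of its previous and next neighbours:

```python
def match_frame_rate(frames, src_fps, dst_fps):
    """Frame-rate processing sketch over frames given as flat lists of
    pixel values. Dropping uses uniform selection; insertion handles
    only doubling, inserting the mean of the adjacent previous and next
    frames. Both policies are illustrative assumptions."""
    if src_fps == dst_fps:
        return list(frames)
    if src_fps > dst_fps:  # source too fast: delete frames
        n_out = len(frames) * dst_fps // src_fps
        return [frames[i * src_fps // dst_fps] for i in range(n_out)]
    assert dst_fps == 2 * src_fps, "sketch handles rate doubling only"
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        out.append([(a + b) // 2 for a, b in zip(prev, nxt)])
    out.append(frames[-1])
    return out
```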
As an example, if the angle of view of the source video is different from the angle of view of the eyepiece, performing angle of view processing on the fourth video to obtain a fifth video, including:
for a second target frame image in the fourth video, determining an energy value of each pixel point in the second target frame image, wherein the second target frame image is any frame in the fourth video;
determining a path with the minimum energy value in the second target frame image according to the energy value of each pixel point in the second target frame image, wherein the path with the minimum energy value comprises at least one pixel;
if the field angle of the source video is smaller than the field angle of the eyepiece, performing pixel interpolation on the second target frame image according to the at least one pixel, the second target frame image after pixel interpolation being the frame image corresponding to the second target frame image in the fifth video, and determining a first cycle value; if the first cycle value does not meet a first preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image and jumping back to the step of determining the energy value of each pixel point in the second target frame image, until the first cycle value meets the first preset condition, so as to obtain the frame image corresponding to the second target frame image in the fifth video; the first cycle value indicates the number of times pixel interpolation has been performed on the second target frame image according to the at least one pixel included in the path with the minimum energy value;
and if the field angle of the source video is larger than the field angle of the eyepiece, removing the at least one pixel from the second target frame image, the second target frame image after pixel removal being the frame image corresponding to the second target frame image in the fifth video, and determining a second cycle value; if the second cycle value does not meet a second preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image and jumping back to the step of determining the energy value of each pixel point in the second target frame image, until the second cycle value meets the second preset condition, so as to obtain the frame image corresponding to the second target frame image in the fifth video; the second cycle value indicates the number of times pixel removal has been performed on the second target frame image according to the at least one pixel included in the path with the minimum energy value.
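The per-pixel energy map and "path with the minimum energy value" described above correspond to a seam-carving style dynamic program: one pixel per row, each step moving at most one column sideways. A sketch of that path search (the energy function itself is left unspecified by the text, so the map is taken as input):

```python
def min_energy_seam(energy):
    """Find the vertical path of minimum total energy through an energy
    map (list of rows): one pixel per row, each step moving to the same
    or an adjacent column. Returns seam[y] = chosen column in row y,
    i.e. the pixels to remove (field angle too large) or duplicate via
    interpolation (field angle too small)."""
    h, w = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for y in range(1, h):  # forward pass: cheapest path ending at (y, x)
        row = []
        for x in range(w):
            best = min(cost[y - 1][max(0, x - 1):min(w, x + 2)])
            row.append(energy[y][x] + best)
        cost.append(row)
    # Backtrack from the cheapest bottom cell
    x = min(range(w), key=lambda i: cost[-1][i])
    seam = [x]
    for y in range(h - 1, 0, -1):
        lo, hi = max(0, x - 1), min(w, x + 2)
        x = min(range(lo, hi), key=lambda i: cost[y - 1][i])
        seam.append(x)
    seam.reverse()
    return seam
```

Repeating this search and the corresponding insert/remove step, as the cycle-value loop above describes, widens or narrows the frame one seam at a time.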
As an example, before the adjusting the first center distance between the left and right display frames of the display screen of the near-eye display device according to the user interpupillary distance, the method further includes:
and if the difference between the center distance of the eyepiece of the near-eye display device and the user's interpupillary distance is larger than a first threshold, adjusting the center distance of the eyepiece, wherein the adjusted center distance of the eyepiece is related to the user's interpupillary distance.
In a second aspect, a video processing apparatus is provided, the apparatus including a first acquisition module, a first adjustment module, a first processing module, and a display module;
the first acquisition module is used for acquiring the pupil distance of the user;
the first adjusting module is used for adjusting a first center distance of left and right display pictures of a display screen of the near-eye display device according to the user interpupillary distance to obtain a second center distance, and the second center distance is related to the user interpupillary distance;
the first processing module is used for processing a first video to be displayed according to the first center distance and the second center distance to obtain a second video, the aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as that of the display screen;
and the display module is used for displaying the second video through the display screen.
As an example, the first processing module is configured to adjust a display resolution of the display screen according to the first center distance and the second center distance, and process the first video according to the adjusted display resolution of the display screen to obtain the second video;
And the display module is used for displaying the second video through the display screen according to the adjusted display resolution of the display screen.
As one example, the first processing module is configured to determine a target resolution corresponding to the second center distance according to the first center distance, the second center distance, and a display resolution of the display screen;
and adjusting the display resolution of the display screen according to the target resolution, wherein the adjusted display resolution of the display screen is the target resolution.
As an example, the first processing module is configured to determine the target resolution according to the first center distance, the second center distance, and the display resolution of the display screen by the following formula:
wherein w2 is the lateral resolution included in the target resolution, w1 is the display lateral resolution included in the display resolution of the display screen, PD1 is the second center distance, PD2 is the first center distance, h2 is the longitudinal resolution included in the target resolution, and h1 is the display longitudinal resolution included in the display resolution of the display screen.
As an example, the apparatus further includes a second acquisition module, a second processing module, and a third processing module;
the second acquisition module is used for acquiring the aspect ratio of the source video;
the second processing module is configured to perform an aspect ratio processing on the source video to obtain a third video if the aspect ratio of the source video is different from the aspect ratio of the display screen, and if the aspect ratio of the source video is the same as the aspect ratio of the display screen, the source video is the third video;
and the third processing module is used for carrying out resolution processing on the third video to obtain the first video if the resolution of the third video is different from the display resolution of the display screen, and the third video is the first video if the resolution of the third video is the same as the display resolution of the display screen.
As an example, the third processing module is configured to perform resolution reduction processing on the third video if the resolution of the third video is greater than the display resolution of the display screen, so as to obtain the first video;
and if the resolution of the third video is smaller than the display resolution of the display screen, performing super-resolution processing on the third video to obtain the first video.
As one example, the third processing module is configured to determine a resolution ratio of a resolution of the third video to a display resolution of the display screen;
determining at least one pixel extraction position in each frame of image included in the third video according to the resolution ratio;
and extracting the pixels at the at least one pixel extraction position in each frame image included in the third video, wherein the third video after pixel extraction is the first video.
As one example, the third processing module is configured to determine, for a first target frame image in the third video, a neighboring frame image of the first target frame image, where the first target frame image is any frame in the third video;
determining at least one pixel interpolation position in the first target frame image according to the resolution of the third video and the display resolution of the display screen;
determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image;
and carrying out pixel interpolation at each pixel interpolation position in the first target frame image according to the interpolation frame image corresponding to the first target frame image, wherein the first target frame image after pixel interpolation is the frame image corresponding to the first target frame image in the first video.
As one example, the third processing module, configured to determine an interpolated frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image, includes:
determining a motion state of the first target frame image relative to the adjacent frame image, and determining the first interpolation frame image corresponding to the first target frame image according to the motion state;
and/or the number of the groups of groups,
and carrying out feature extraction and matching on the first target frame image and the adjacent frame image, and fusing the matched features to obtain the second interpolation frame image corresponding to the first target frame image.
As an example, the apparatus further includes a third acquisition module, a fourth processing module, a fifth processing module;
the third acquisition module is used for acquiring the frame rate of the source video and the field angle of an eyepiece of the near-eye display device;
the fourth processing module is configured to perform frame rate processing on the source video to obtain a fourth video if the frame rate of the source video is different from the frame rate of the display screen, and if the frame rate of the source video is the same as the frame rate of the display screen, the source video is the fourth video;
the fifth processing module is configured to perform field angle processing on the fourth video to obtain a fifth video if the field angle of the source video is different from the field angle of the eyepiece, and if the field angle of the source video is the same as the field angle of the eyepiece, the fourth video is the fifth video;
and the second processing module is used for performing the aspect ratio processing on the fifth video to obtain a third video if the aspect ratio of the source video is different from the aspect ratio of the display screen, and the fifth video is the third video if the aspect ratio of the source video is the same as the aspect ratio of the display screen.
As an example, the fourth processing module is configured to delete at least one frame image in the source video if the frame rate of the source video is greater than the frame rate of the display screen;
and if the frame rate of the source video is smaller than the frame rate of the display screen, determining at least one frame inserting position in the source video, determining a frame inserting image corresponding to the first frame inserting position according to a previous frame image and a next frame image adjacent to the first frame inserting position for the first frame inserting position in the at least one frame inserting position, and inserting the determined frame inserting image into the first frame inserting position, wherein the first frame inserting position is any one of the at least one frame inserting position.
As an example, the fifth processing module is configured to determine, for a second target frame image in the fourth video, an energy value of each pixel point in the second target frame image, where the second target frame image is any frame in the fourth video;
determining a path with the minimum energy value in the second target frame image according to the energy value of each pixel point in the second target frame image, wherein the path with the minimum energy value comprises at least one pixel;
if the field angle of the source video is smaller than the field angle of the eyepiece, performing pixel interpolation on the second target frame image according to the at least one pixel, the second target frame image after pixel interpolation being the frame image corresponding to the second target frame image in the fifth video, and determining a first cycle value; if the first cycle value does not meet a first preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image and jumping back to the step of determining the energy value of each pixel point in the second target frame image, until the first cycle value meets the first preset condition, so as to obtain the frame image corresponding to the second target frame image in the fifth video; the first cycle value indicates the number of times pixel interpolation has been performed on the second target frame image according to the at least one pixel included in the path with the minimum energy value;
and if the field angle of the source video is larger than the field angle of the eyepiece, removing the at least one pixel from the second target frame image, the second target frame image after pixel removal being the frame image corresponding to the second target frame image in the fifth video, and determining a second cycle value; if the second cycle value does not meet a second preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image and jumping back to the step of determining the energy value of each pixel point in the second target frame image, until the second cycle value meets the second preset condition, so as to obtain the frame image corresponding to the second target frame image in the fifth video; the second cycle value indicates the number of times pixel removal has been performed on the second target frame image according to the at least one pixel included in the path with the minimum energy value.
As one example, the apparatus further comprises a second adjustment module;
and the second adjustment module is used for adjusting the center distance of the eyepiece if the difference between the center distance of the eyepiece of the near-eye display device and the user's interpupillary distance is larger than a first threshold, the adjusted center distance of the eyepiece being related to the user's interpupillary distance.
In a third aspect, a computer device is provided, the computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the computer program implementing the video processing method described above when executed by the processor.
In a fourth aspect, a computer readable storage medium is provided, the computer readable storage medium storing a computer program, which when executed by a processor, implements the video processing method described above.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
In the embodiment of the application, the user's interpupillary distance is first acquired; the first center distance between the left and right display pictures of the display screen of the near-eye display device is adjusted according to it to obtain a second center distance related to the user's interpupillary distance; the first video to be displayed is then processed according to the first center distance and the second center distance to obtain a second video; and the second video is displayed through the display screen. The aspect ratio of the first video is the same as that of the display screen, and its resolution is the same as the display resolution of the display screen. In this way, the first center distance of the left and right display pictures can be adjusted according to the user's interpupillary distance to obtain an adjusted second center distance related to it, and the video to be displayed can then be processed according to both the first center distance before adjustment and the second center distance after adjustment, so that the overlapping area of the left and right images of each frame image in the processed video matches the adjusted center distance of the left and right display pictures as well as the user's interpupillary distance; when the processed video is displayed through the display screen, the three-dimensional video the user views through an eyepiece adapted to his or her interpupillary distance has a good effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of another video processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
It should be understood that references to "a plurality" in this disclosure refer to two or more. In the description of the present application, "/" means or, unless otherwise indicated; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, in order to clearly describe the technical solution of the present application, the words "first", "second", etc. are used to distinguish between identical or similar items having substantially the same function and effect. It will be appreciated by those of skill in the art that the words "first", "second", and the like do not limit quantity or order of execution, nor do they necessarily indicate that the items referred to are different.
Before explaining the embodiment of the present application in detail, an application scenario of the embodiment of the present application is described.
The video processing method provided by the embodiment of the application can be applied to a scene in which a near-eye display device gives the user an immersive three-dimensional stereoscopic experience. For example, the video processing method provided by the embodiment of the application can adjust the center distance of the left and right display pictures according to the user pupil distance, then process the video to be displayed according to the center distances before and after adjustment, and adapt the overlapping area of each frame image in the processed video to the adjusted center distance of the left and right display pictures and the user pupil distance, so that the effect of the video watched by the user through an eyepiece adapted to the user pupil distance is better.
Referring to fig. 1, fig. 1 is a flowchart of a video processing method according to an embodiment of the application. The video processing method can be applied to a computer device. The computer device may be a near-eye display device, or another device that processes the video to be displayed of the near-eye display device, such as a terminal, a server, or an embedded device, where the terminal may be a desktop computer or a tablet computer. As shown in fig. 1, the method comprises the following steps:
In step 101, a computer device obtains a user pupil distance.
The user pupil distance refers to the distance between the pupils of the two eyes of the user. The size of the user pupil distance affects the field of view that the user can watch, for example, the size of the overlapping area viewable by the user, where the overlapping area forms the three-dimensional stereoscopic image watched by the user.
For example, each frame image of a source video to be displayed includes a left image and a right image. The near-eye display device includes an eyepiece and a display screen that can display left and right display pictures; the near-eye display device displays the left image of each frame image through the left display picture and the right image of each frame image through the right display picture, and the user can view the overlapping area of the left image displayed by the left display picture and the right image displayed by the right display picture through the eyepiece. The center distance of the left and right display pictures influences the size of the overlapping area displayed by the left and right display pictures. If the size of the overlapping area viewable by the user does not match the size of the overlapping area displayed by the left and right display pictures, that is, if the user pupil distance is not adapted to the center distance of the left and right display pictures, the effect of the three-dimensional video viewed by the user through an eyepiece adapted to the user pupil distance may be poor. Therefore, before the near-eye display device displays the source video to be displayed, the user pupil distance should first be acquired, so that the center distance of the left and right display pictures can be adjusted according to the user pupil distance.
In step 102, the computer device adjusts the first center distance of the left and right display pictures of the display screen of the near-eye display device according to the user pupil distance to obtain a second center distance.
Wherein the second center distance is related to the user pupil distance. The second center distance being related to the user pupil distance means that the second center distance is adapted to the user pupil distance, e.g. the second center distance is the same as or close to the user pupil distance. For example, the difference between the second center distance and the user pupil distance is less than or equal to a second threshold, where the second threshold is a preset smaller error distance.
As an example, before adjusting the first center distance, the computer device may first obtain the first center distance and then determine whether the difference between the first center distance and the user pupil distance is greater than a second threshold. If the difference is greater than the second threshold, the computer device adjusts the first center distance according to the user pupil distance to obtain the second center distance. If the difference is less than or equal to the second threshold, the first center distance is not adjusted; in this case the first center distance is already related to the user pupil distance, and steps 103-104 described below are not performed.
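This threshold-based decision can be sketched as follows; the function name, the threshold value, and the return convention are illustrative assumptions, not part of the claimed method:

```python
def adjust_center_distance(first_center_mm: float, pupil_mm: float,
                           threshold_mm: float = 2.0) -> float:
    """Return the center distance to use for the left and right display
    pictures: keep the first center distance when it is already within the
    error threshold of the user pupil distance, otherwise adjust it to the
    pupil distance to obtain the second center distance."""
    if abs(first_center_mm - pupil_mm) <= threshold_mm:
        return first_center_mm   # already related to the user pupil distance
    return pupil_mm              # second center distance, adapted to the IPD

# a 75 mm initial spacing with a 64 mm pupil distance is adjusted down
second_center = adjust_center_distance(75.0, 64.0)
```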
As one example, the difference between the first center distance and the user pupil distance being greater than the second threshold may cover both the case where the first center distance is smaller than the user pupil distance and the case where it is larger. If the first center distance is initially smaller than the user pupil distance, the first center distance is not adapted to the user pupil distance; however, the overlapping area viewable by the user then includes the overlapping area displayed by the left and right display pictures at the first center distance, so the influence on the stereoscopic video viewed by the user through the eyepiece is small, and in this case the first center distance need not be adjusted.
As an example, the interpupillary distance of adult males is typically 59 mm-72 mm, and that of adult females is typically 56 mm-66 mm. Initially, the first center distance of the left and right display pictures of the near-eye display device may therefore be set to any value greater than or equal to 72 mm, so that the obtained user pupil distance is less than or equal to the first center distance. The second center distance obtained after adjustment is then also less than or equal to the first center distance, which is why the first center distance needs to be adjusted.
As an example, the computer device may also obtain the aspect ratio of the display screen, the display resolution and frame rate, the field angle of the eyepiece, etc. of the near-eye display device, which is not limited by the embodiment of the application.
As one example, the computer device may also adjust the eyepiece center distance of the eyepiece of the near-eye display device before adjusting the first center distance. For example, the computer device determines whether the difference between the eyepiece center distance of the near-eye display device and the user pupil distance is greater than a first threshold, and adjusts the eyepiece center distance if so; the adjusted eyepiece center distance is related to the user pupil distance. The first threshold is a preset small error distance.
In step 103, the computer device processes the first video to be displayed according to the first center distance and the second center distance to obtain a second video.
The aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as the display resolution of the display screen. This ensures that the first video is adapted to the aspect ratio and display resolution of the display screen of the near-eye display device, that is, that the display effect of the first video on the display screen is already good before the first video is adjusted according to the second center distance of the adjusted left and right display pictures.
For example, the computer device may adjust the display resolution of the display screen according to the first center distance and the second center distance, and then process the first video to be displayed according to the adjusted display resolution of the display screen to obtain the second video.
As an example, the center distance of the left and right display pictures of the display screen has a correspondence with the display resolution of the display screen: generally, the larger the center distance of the left and right display pictures, the larger the display resolution. Therefore, after the first center distance is adjusted, the display resolution of the display screen should be adjusted correspondingly. Typically, a resolution includes a lateral resolution and a longitudinal resolution; for example, the display resolution includes a display lateral resolution and a display longitudinal resolution.
For example, the computer device may determine, according to the first center distance, the second center distance, and the display resolution of the display screen, a target resolution corresponding to the second center distance, and then adjust the display resolution of the display screen to that target resolution. The adjusted display resolution of the display screen is thus adapted to the second center distance and the user pupil distance, so that the second video obtained by processing the first video according to the adjusted display resolution is also adapted to the second center distance and the user pupil distance.
As an example, in the case where the first center distance is initially smaller than the user pupil distance, that is, where the first center distance is smaller than the second center distance, the first center distance has only a small influence on the stereoscopic video viewed by the user through the eyepiece even though it is not adapted to the user pupil distance, and thus the first center distance may be left unadjusted in this case.
As one example, if the first center distance is greater than the second center distance, the computer device may determine the target resolution according to the first center distance, the second center distance, and the display resolution of the display screen by the following formula (1) and formula (2):
w2 = w1 × PD1 / PD2 (1)

h2 = h1 × PD1 / PD2 (2)

wherein w2 is the lateral resolution included in the target resolution, w1 is the display lateral resolution included in the display resolution of the display screen, PD1 is the second center distance, PD2 is the first center distance, h2 is the longitudinal resolution included in the target resolution, and h1 is the display longitudinal resolution included in the display resolution of the display screen.
As can be seen from the above formulas (1) and (2), if the first center distance is greater than the second center distance, the target resolution is smaller than the display resolution of the display screen, i.e., if the first center distance between the left and right display frames is greater than the user pupil distance initially, the display resolution of the display screen after adjustment is smaller than the display resolution of the display screen before adjustment.
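As a hedged sketch, formulas (1) and (2) scale both axes of the display resolution by the ratio of the second center distance to the first; the rounding policy below is an assumption, since the patent does not specify one:

```python
def target_resolution(w1: int, h1: int, pd1: float, pd2: float) -> tuple:
    """Compute the target resolution (w2, h2) from the display resolution
    (w1, h1), the second center distance pd1, and the first center distance
    pd2, per formulas (1) and (2): w2 = w1*pd1/pd2, h2 = h1*pd1/pd2."""
    return round(w1 * pd1 / pd2), round(h1 * pd1 / pd2)

# 1920x1080 screen, first center distance 72 mm, second center distance 63 mm
w2, h2 = target_resolution(1920, 1080, 63.0, 72.0)  # → (1680, 945)
```

Consistent with the remark above, whenever pd2 > pd1 the target resolution comes out smaller than the display resolution.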
For example, the computer device may perform resolution processing on each frame of image in the first video according to the adjusted display resolution of the display screen, that is, according to the target resolution, to obtain a second video, where the resolution of the second video is the same as the adjusted display resolution of the display screen, that is, the resolution of the second video is the target resolution.
As an example, in the case where the first center distance is greater than the user pupil distance, the display resolution of the display screen before adjustment is greater than that after adjustment. Since the resolution of the first video is the same as the display resolution of the display screen before adjustment, the resolution of the first video is greater than the display resolution of the display screen after adjustment. Each frame image in the first video can therefore be subjected to resolution reduction processing according to the adjusted display resolution of the display screen to obtain the second video.
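One hypothetical way to perform such a resolution reduction on a frame is nearest-neighbor resampling; a production implementation would more likely use area averaging or a library resampler, so this is only a sketch:

```python
def downscale(frame, new_w: int, new_h: int):
    """Nearest-neighbor downscale of a frame given as a list of pixel rows."""
    old_h, old_w = len(frame), len(frame[0])
    return [[frame[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

# a 4x4 frame of pixel labels reduced to 2x2
frame = [[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11],
         [12, 13, 14, 15]]
small = downscale(frame, 2, 2)  # → [[0, 2], [8, 10]]
```

In the method described here, the same operation would be applied to both the left image and the right image of every frame.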
As an example, a specific implementation process of the resolution reduction process for the first video will be described in detail in step 216 of the embodiment of fig. 2, which is not described herein.
It should be noted that, in the embodiment of the present application, each frame image of the video includes a left image and a right image, and processing each frame image in the video refers to processing both the left image and the right image of each frame image in the video.
As an example, before the first video to be displayed is processed to obtain the second video, the computer device may process the source video according to the aspect ratio and the display resolution of the display screen to obtain the first video, where the aspect ratio of the obtained first video is the same as the aspect ratio of the display screen, and the resolution of the first video is the same as the display resolution of the display screen. For example, the computer device may process the source video according to the aspect ratio and the display resolution of the display screen, and obtain the first video by:
and step 1, the computer equipment acquires the aspect ratio of the source video.
As an example, the computer device may also obtain a source resolution, a frame rate, or a field angle of the source video, which is not limited by the embodiment of the present application.
In step 2, the computer device determines whether the aspect ratio of the source video is the same as that of the display screen. If the two aspect ratios are different, aspect ratio processing is performed on the source video to obtain a third video; if they are the same, the source video serves as the third video.
If the aspect ratio of the source video is different from that of the display screen, the aspect ratio processing of the source video may include scaling processing and stretching processing.
As an example, before step 2, the computer device may further process the frame rate and the field angle of the source video according to the frame rate of the display screen and the field angle of the eyepiece. This ensures that the frame rate, aspect ratio, and resolution of the first video are adapted to the display screen, and its field angle to the eyepiece, before the first video is adjusted according to the adjusted second center distance of the left and right display pictures, so that the display screen displays the first video well and the user views it well through the eyepiece.
For example, the computer device may first obtain the frame rate of the source video and the field angle of the eyepiece of the near-eye display device, and then determine whether the frame rate of the source video is the same as the frame rate of the display screen. If the two frame rates differ, frame rate processing is performed on the source video to obtain a fourth video; if they are the same, the source video serves as the fourth video. The computer device then determines whether the field angle of the source video is the same as the field angle of the eyepiece. If the two field angles differ, field angle processing is performed on the fourth video to obtain a fifth video; if they are the same, the fourth video serves as the fifth video.
For example, if the frame rate of the source video is different from the frame rate of the display screen, the frame rate processing performed on the source video may include frame deletion processing and frame insertion processing. If the field angle of the source video is not the same as the field angle of the eyepiece, the field angle processing performed on the fourth video may include field-angle-increasing processing and field-angle-decreasing processing.
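The two branching decisions above can be sketched as simple dispatchers (the operation names are illustrative):

```python
def frame_rate_op(src_fps: float, display_fps: float) -> str:
    """Choose the frame rate processing needed to match the display screen."""
    if src_fps > display_fps:
        return "delete_frames"   # drop frames to lower the rate
    if src_fps < display_fps:
        return "insert_frames"   # interpolate frames to raise the rate
    return "none"

def fov_op(src_fov_deg: float, eyepiece_fov_deg: float) -> str:
    """Choose the field angle processing needed to match the eyepiece."""
    if src_fov_deg < eyepiece_fov_deg:
        return "increase_fov"    # field-angle-increasing processing
    if src_fov_deg > eyepiece_fov_deg:
        return "decrease_fov"    # field-angle-decreasing processing
    return "none"
```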
As an example, the specific implementation process of the frame deleting process or the frame inserting process for the source video will be described in detail in step 207 of the embodiment of fig. 2, which is not described herein.
As an example, the specific implementation process of the up-view angle processing and the down-view angle processing for the fourth video will be described in detail in step 210 of the embodiment of fig. 2, which is not described herein.
As one example, performing frame deletion processing or frame insertion processing on the source video does not change the source resolution of the source video, so the resolution of the fourth video is the same as the source resolution of the source video. Performing field-angle-increasing or field-angle-decreasing processing on the fourth video does change its resolution, so the resolution of the fifth video differs from that of the fourth video and from the source resolution of the source video. Performing scaling or stretching processing on the fifth video does not change its resolution, so the resolution of the third video is the same as that of the fifth video and differs from the source resolution of the source video.
As an example, after processing the frame rate and the view angle of the source video, the above step 2 may include: and if the aspect ratio of the source video is determined to be different from the aspect ratio of the display screen, performing aspect ratio processing on the fifth video to obtain a third video, and if the aspect ratio of the source video is determined to be the same as the aspect ratio of the display screen, the fifth video is the third video.
In step 3, the computer device determines whether the resolution of the third video is the same as the display resolution of the display screen. If the resolution of the third video is different from the display resolution of the display screen, resolution processing is performed on the third video to obtain the first video; if they are the same, the third video serves as the first video.
As one example, after processing the frame rate and the field angle of the source video, the resolution of the third video is different from the source resolution of the source video.
For example, if the resolution of the third video is different from the display resolution of the display screen, the resolution processing of the third video may include a resolution reduction processing and a super resolution processing.
As an example, the specific implementation process of the resolution reduction process and the super resolution process for the third video will be described in detail in step 216 of the embodiment of fig. 2, which is not described herein.
Thus, through the steps 1 to 3, the first video can be obtained, the aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as that of the display screen.
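Steps 1 to 3 can be summarized as a small pipeline; the `Video` record and the use of `replace` to model the aspect ratio and resolution processing are illustrative stand-ins for the actual scaling, stretching, and resolution operations:

```python
from dataclasses import dataclass, replace

@dataclass
class Video:
    aspect: float        # width / height
    resolution: tuple    # (width, height) in pixels

def to_first_video(video: Video, screen_aspect: float,
                   screen_resolution: tuple) -> Video:
    """Steps 1-3: match the screen's aspect ratio, then its resolution."""
    # step 2: aspect ratio processing (scaling/stretching, modeled abstractly)
    third = video if video.aspect == screen_aspect else replace(
        video, aspect=screen_aspect)
    # step 3: resolution processing (reduction or super resolution)
    return third if third.resolution == screen_resolution else replace(
        third, resolution=screen_resolution)

first = to_first_video(Video(4 / 3, (1440, 1080)), 16 / 9, (1920, 1080))
```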
Step 104, the computer device displays the second video through the display screen.
For example, the computer device may display the second video via the display screen according to the adjusted display resolution of the display screen.
The adjusted display resolution of the display screen is adapted to the user pupil distance, and the second video is adapted to the user pupil distance. Therefore, when the display screen displays the second video according to the adjusted display resolution, the display effect for the user is good; that is, the effect of the second video watched by the user through an eyepiece adapted to the user pupil distance is good.
In the embodiment of the application, the user pupil distance is first obtained, and the first center distance of the left and right display pictures of the display screen of the near-eye display device is adjusted according to the obtained user pupil distance to obtain a second center distance related to the user pupil distance. The first video to be displayed is then processed according to the first center distance and the second center distance to obtain a second video, and the second video is displayed through the display screen. The aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as the display resolution of the display screen. In this way, the first center distance of the left and right display pictures can be adjusted according to the user pupil distance to obtain the adjusted second center distance related to the user pupil distance, and the video to be displayed can then be processed according to the center distances of the left and right display pictures before and after adjustment, so that the overlapping area of the left image and the right image of each frame image in the processed video is adapted to both the adjusted center distance of the left and right display pictures and the user pupil distance. When the processed video is displayed through the display screen, the effect of the three-dimensional video viewed by the user through an eyepiece adapted to the user pupil distance is therefore good.
Referring to fig. 2, fig. 2 is a flowchart of another video processing method according to an embodiment of the application. The computer device may be a near-eye display device, or may be other devices that process video to be displayed of the near-eye display device, such as a terminal, a server, or an embedded device, where the terminal may be a desktop computer or a tablet computer. As shown in fig. 2, the method comprises the steps of:
in step 201, the computer device obtains a user pupil distance, video parameters of a source video, display parameters of a display screen of a near-eye display device, and a field angle of an eyepiece and an eyepiece center distance.
The user interpupillary distance refers to the distance between pupils of two eyes of a user.
Each frame image of the source video includes a left image and a right image. The near-eye display device includes an eyepiece and a display screen capable of displaying left and right display pictures; the near-eye display device displays the left image of each frame image through the left display picture and the right image of each frame image through the right display picture, and the user can watch the overlapping area of the left image displayed by the left display picture and the right image displayed by the right display picture through the eyepiece.
Wherein, the video parameters are used for indicating the basic characteristics of the source video, and the video parameters can comprise an aspect ratio, a source resolution, a frame rate, a field angle, and the like.
Wherein, the display parameter is used for indicating the capability of the near-eye display device to display video, and the display parameter can include the aspect ratio, the display resolution, the frame rate, the first center distance between left and right display pictures, and the like of the display screen. The near-eye display device may display the video according to the display parameters.
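The video parameters and display parameters enumerated above can be modeled as plain records; the field names below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class VideoParams:
    aspect_ratio: float       # width / height of the source video
    source_resolution: tuple  # (width, height) in pixels
    frame_rate: float         # frames per second
    field_of_view: float      # field angle in degrees

@dataclass
class DisplayParams:
    aspect_ratio: float           # width / height of the display screen
    display_resolution: tuple     # (width, height) in pixels
    frame_rate: float             # frames per second
    first_center_distance: float  # mm between left/right display pictures
```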
As one example, the computer device may detect a user pupil distance input instruction and obtain the user pupil distance according to the instruction. For example, the computer device may include a display screen through which it detects the user pupil distance input instruction. The user pupil distance input instruction may be triggered by a user pupil distance input operation performed by the user on the display screen. The operation type of the user pupil distance input operation may be a click operation, a press operation, a voice operation, or a gesture operation, which is not limited in the embodiment of the present application.
Alternatively, the computer device may include a measurement module by which the computer device may automatically measure the user pupil distance. For example, the measuring module is a measuring instrument capable of measuring the pupil distance of the user, and the embodiment of the application is not limited thereto.
In step 202, the computer device determines whether the eyepiece center distance is the same as the user interpupillary distance.
It should be noted that, in the embodiment of the present application, whether the center-to-center distance of the eyepiece is the same as the pupil distance of the user is described as an example, and in other embodiments, the computer device may also determine whether the difference between the center-to-center distance of the eyepiece and the pupil distance of the user is greater than a first threshold, where the first threshold is a preset smaller error distance.
In step 203, if the computer device determines that the center-to-center distance of the eyepiece is different from the user pupil distance, the computer device adjusts the center-to-center distance of the eyepiece according to the user pupil distance.
Wherein the adjusted eyepiece center distance is related to the user pupil distance. The adjusted center-to-center distance of the eyepiece is related to the user's pupil distance, which means that the adjusted center-to-center distance of the eyepiece is matched to the user's pupil distance, e.g., the adjusted center-to-center distance of the eyepiece is the same as or similar to the user's pupil distance. For example, the difference between the adjusted eyepiece center distance and the user pupil distance is less than or equal to a first threshold.
As one example, if the eyepiece center distance is the same as the user pupil distance, then no adjustment is made to the eyepiece center distance.
As one example, the computer device adjusts the eyepiece center distance according to the user pupil distance if it is determined that the difference between the eyepiece center distance and the user pupil distance is greater than a first threshold. If it is determined that the difference between the eyepiece center distance and the user interpupillary distance is less than or equal to the first threshold, the eyepiece center distance is not adjusted, in which case the unadjusted eyepiece center distance is related to the user interpupillary distance.
In step 204, if the computer device determines that the center-to-center distance of the eyepiece is the same as the user interpupillary distance, it determines whether the first center-to-center distance of the left and right display images of the display screen is the same as the user interpupillary distance.
It should be noted that, in the embodiment of the present application, it is described by taking an example of determining whether the first center distance between the left and right display frames of the display screen is the same as the user pupil distance, and in other embodiments, the computer device may also determine whether the difference between the first center distance and the user pupil distance is greater than a second threshold, where the second threshold is a preset smaller error distance.
In step 205, if the computer device determines that the first center distance is different from the user pupil distance, the computer device adjusts the first center distance according to the user pupil distance to obtain the second center distance.
Wherein the second center distance is related to the user pupil distance. The second center distance being related to the user pupil distance means that the second center distance is adapted to the user pupil distance, e.g. the second center distance is the same as or close to the user pupil distance. For example, the difference between the second center distance and the user pupil distance is less than or equal to a second threshold, where the second threshold is a preset smaller error distance.
As one example, if it is determined that the first center-to-center distance is the same as the user pupil distance, no adjustment is made to the first center-to-center distance.
As an example, the computer device adjusts the first center distance according to the user pupil distance if it determines that the difference between the first center distance and the user pupil distance is greater than the second threshold. If the difference is less than or equal to the second threshold, the first center distance is not adjusted; in this case the first center distance is already related to the user pupil distance, and steps 206-219 described below need not be performed.
The first center distance being different from the user pupil distance may include two cases: the first center distance is smaller than the user pupil distance, or the first center distance is larger than the user pupil distance.
As an example, in the case where the first center distance is initially smaller than the user pupil distance, that is, where the first center distance is smaller than the second center distance, the first center distance has only a small influence on the stereoscopic video viewed by the user through the eyepiece even though it is not adapted to the user pupil distance, and thus the first center distance may be left unadjusted in this case.
For example, if the first center-to-center distance is the same as the user pupil distance, the first center-to-center distance is not adjusted, in which case the first center-to-center distance is related to the user pupil distance.
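The threshold logic in the examples above can be sketched as follows. The 1.0 mm threshold and the choice of setting the second center distance exactly equal to the user pupil distance are illustrative assumptions, not values from the embodiment:

```python
def adjust_center_distance(first_cd_mm, pupil_distance_mm, threshold_mm=1.0):
    """Return the second center distance given the first center distance
    and the measured user pupil distance (all in millimetres)."""
    # Within the preset error distance: the first center distance is
    # already related to the user pupil distance, so leave it unchanged.
    if abs(first_cd_mm - pupil_distance_mm) <= threshold_mm:
        return first_cd_mm
    # Otherwise adapt the center distance to the user pupil distance
    # (here: set it equal; "close to" would also satisfy the criterion).
    return pupil_distance_mm

print(adjust_center_distance(63.5, 64.0))  # within the threshold, unchanged
print(adjust_center_distance(60.0, 64.0))  # adjusted toward the pupil distance
```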
In step 206, the computer device determines whether the frame rate of the source video is the same as the frame rate of the display screen.
In step 207, if the frame rate of the source video is different from the frame rate of the display screen, the computer device performs frame rate processing on the source video to obtain a fourth video.
It should be noted that, in the embodiment of the present application, each frame image of the video includes a left image and a right image, and processing each frame image in the video refers to processing both the left image and the right image of each frame image in the video.
If the frame rate of the source video is different from the frame rate of the display screen, the frame rate processing of the source video may include frame deletion processing and frame insertion processing. For example, if the frame rate of the source video is greater than the frame rate of the display screen, frame deleting processing is performed on the source video to obtain a fourth video, and if the frame rate of the source video is less than the frame rate of the display screen, frame inserting processing is performed on the source video to obtain the fourth video.
As one example, frame deletion processing of a source video refers to deleting at least one frame image in the source video. For example, if the frame rate of the source video is greater than the frame rate of the display screen, determining a frame rate difference between the frame rate of the source video and the frame rate of the display screen, determining at least one frame deletion image in the source video according to the frame rate difference, and deleting the at least one frame deletion image from the source video to obtain a fourth video. Wherein the frame rate difference indicates the number of frame images to be deleted from the source video per unit of time (e.g., per second).
For example, each of the at least one frame deletion image may be any frame in the source video; the frame deletion images are deleted from the source video at a fixed deletion interval, which may be determined from the frame rate difference and the frame rate of the source video. For example, if the frame rate of the source video is 30fps and the frame rate of the display screen is 20fps, the frame rate difference indicates that 10 frame images are to be deleted from the source video per second, and the computer device may delete 1 frame image from every 3 frame images of the source video to obtain a fourth video with a frame rate of 20fps.
As one example, the process of inserting frames into the source video refers to inserting frames, i.e., inserting frame images, between adjacent frame images in the source video. For example, at least one frame inserting position in the source video is determined, for a first frame inserting position in the at least one frame inserting position, a frame inserting image corresponding to the first frame inserting position is determined according to a previous frame image and a next frame image adjacent to the first frame inserting position, the determined frame inserting image is inserted into the first frame inserting position, and the first frame inserting position is any one of the at least one frame inserting position.
For example, the at least one interpolation position may be one or more positions between adjacent frame images, i.e. one or more interpolation frame images may be inserted between adjacent frame images corresponding to the interpolation position.
For example, an interpolated image may be inserted between adjacent frame images, the frame rate of the source video is 20fps, and the frame rate of the display screen is 30fps, and at least one interpolated position may indicate a position at which an interpolated image is inserted every 2 frames. For example, the source video may include a 0 th frame image-a 29 th frame image, and the computer device may insert the interpolated frame image between a 1 st frame and a 2 nd frame, between a 3 rd frame and a 4 th frame, between a 5 th frame and a 6 th frame, and so on, between a 17 th frame and a 18 th frame, and after a 19 th frame of the source video, resulting in a fourth video having a frame rate of 30 fps.
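The 20 fps to 30 fps example (one inserted frame after every 2 original frames) can be sketched as follows. Averaging the two neighbouring frames is a deliberately crude stand-in for the motion-compensated or feature-fusion interpolation the embodiment describes, and frames are scalars here for brevity:

```python
def interpolate_frames(frames, interval=2):
    """After every `interval`-th frame, insert the average of that frame
    and the next one; after the last frame, repeat the last frame."""
    out = []
    for i, frame in enumerate(frames):
        out.append(frame)
        if (i + 1) % interval == 0:
            nxt = frames[i + 1] if i + 1 < len(frames) else frame
            out.append((frame + nxt) / 2)   # crude interpolated frame
    return out

src = list(range(20))                 # one second of 20 fps frames
print(len(interpolate_frames(src)))   # 30 frames, i.e. 30 fps
```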
As one example, the interpolated image corresponding to the first interpolated position may include one or more. For example, the interpolated image corresponding to the first interpolated position may include a third interpolated image and/or a fourth interpolated image. The computer device may determine a motion state of a previous frame image and a subsequent frame image adjacent to the first frame interpolation position, and determine a third frame interpolation image corresponding to the first frame interpolation position according to the motion state. And/or extracting and matching the features of the previous frame image and the next frame image adjacent to the first frame inserting position, and fusing the matched features to obtain a fourth frame inserting image corresponding to the first frame inserting position. Of course, the interpolated image may be obtained in other manners, which is not limited in the embodiment of the present application.
For example, the optical flow between the previous frame image and the next frame image may be determined according to the change of the pixels of the previous frame image and the next frame image in the time domain and the correlation between the previous frame image and the next frame image, the motion state is determined according to the optical flow between the previous frame image and the next frame image, the motion state includes a rotation matrix and an offset, and then affine transformation is performed on the previous frame image or the next frame image according to the rotation matrix and the offset, so as to obtain a third interpolation frame image corresponding to the first interpolation frame position.
For example, feature extraction and matching are performed on a previous frame image and a next frame image adjacent to the first frame interpolation position, a color, texture, shape or spatial relationship of a matched pixel in the previous frame image and the next frame image is obtained, and fusion is performed on the color, texture, shape or spatial relationship of the matched pixel, so as to obtain a fourth frame interpolation image corresponding to the first frame interpolation position.
Frame rate processing, i.e. frame deletion processing or frame insertion processing, does not change the source resolution of the source video, so the resolution of the fourth video is the same as the source resolution of the source video.
In step 208, if the computer device determines that the frame rate of the source video is the same as the frame rate of the display screen, the source video is the fourth video.
In step 209, the computer device determines whether the field angle of the source video is the same as the field angle of the eyepiece.
In step 210, if the computer device determines that the angle of view of the source video is different from the angle of view of the eyepiece, the computer device performs angle of view processing on the fourth video to obtain a fifth video.
If the angle of view of the source video is different from the angle of view of the eyepiece, the angle of view processing of the fourth video may include up angle of view processing and down angle of view processing.
For example, if the field angle of the source video is different from the field angle of the eyepiece, the computer device may, for a second target frame image in the fourth video, first determine the energy value of each pixel in the second target frame image and, from these energy values, determine the path with the smallest energy value in the second target frame image. Then, if the field angle of the source video is smaller than the field angle of the eyepiece, the computer device performs field-angle raising processing on the fourth video to obtain a fifth video; if the field angle of the source video is larger than the field angle of the eyepiece, it performs field-angle lowering processing on the fourth video to obtain the fifth video. The second target frame image is any frame in the fourth video, and the path with the minimum energy value comprises at least one pixel.
As an example, the computer device may determine the gray value of each pixel in the second target frame image, and then determine the energy value of each pixel according to the gray value of each pixel. Wherein the energy value of each pixel point indicates the importance level of the pixel in the image, and the energy value of each pixel point is equal to the sum of the gradient of the gray value of each pixel point in the transverse direction of the image and the gradient of the gray value of each pixel point in the longitudinal direction of the image.
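The energy definition above can be sketched as follows, using absolute finite differences of the gray values in the transverse and longitudinal directions; the exact gradient operator and the edge handling are assumptions:

```python
def energy_map(gray):
    """Energy of each pixel = |horizontal gray gradient| +
    |vertical gray gradient|, with clamped neighbours at the borders."""
    rows, cols = len(gray), len(gray[0])
    energy = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            dx = gray[r][min(c + 1, cols - 1)] - gray[r][max(c - 1, 0)]
            dy = gray[min(r + 1, rows - 1)][c] - gray[max(r - 1, 0)][c]
            energy[r][c] = abs(dx) + abs(dy)
    return energy

print(energy_map([[0, 10], [0, 10]]))  # a vertical edge: uniform energy 10
```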
As an example, the path of the smallest energy value in the second target frame image is a connection path of pixels in the second target frame image from top to bottom or from left to right. For example, the path with the smallest energy value from top to bottom for the pixel may include one pixel of each line in the second target frame image, that is, the number of at least one pixel included in the path with the smallest energy value is the number of lines of pixels in the second target frame image. If the abscissa of the pixel of a certain row on the path having the smallest energy value is x, the abscissa of the pixel of the path having the smallest energy value of the previous row is x-1, x, or x+1.
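The top-to-bottom minimum-energy path, where each row's pixel lies in column x-1, x, or x+1 relative to the row below it, is the classic seam-carving dynamic program. A minimal sketch over a plain 2-D energy map:

```python
def min_energy_seam(energy):
    """Return one column index per row describing the top-to-bottom
    path of minimum total energy; steps are limited to -1/0/+1 columns."""
    rows, cols = len(energy), len(energy[0])
    cost = [row[:] for row in energy]          # cost of cheapest path ending here
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols - 1, c + 1)
            cost[r][c] += min(cost[r - 1][lo:hi + 1])
    # Backtrack from the cheapest bottom cell.
    c = min(range(cols), key=lambda j: cost[rows - 1][j])
    seam = [c]
    for r in range(rows - 1, 0, -1):
        lo, hi = max(0, c - 1), min(cols - 1, c + 1)
        c = min(range(lo, hi + 1), key=lambda j: cost[r - 1][j])
        seam.append(c)
    return seam[::-1]

energy = [[1, 9, 9],
          [9, 1, 9],
          [9, 9, 1]]
print(min_energy_seam(energy))   # follows the low-energy diagonal: [0, 1, 2]
```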
As one example, performing field-angle raising processing on the fourth video refers to performing pixel interpolation on each frame image in the fourth video. For example, for a second target frame image in the fourth video, if the field angle of the source video is smaller than the field angle of the eyepiece, pixel interpolation is performed on the second target frame image according to the at least one pixel, and the interpolated second target frame image is the frame image corresponding to the second target frame image in the fifth video. A first cyclic value is then determined; if the first cyclic value does not meet a first preset condition, the frame image corresponding to the second target frame image in the fifth video is taken as the new second target frame image and processing jumps back to the step of determining the energy value of each pixel in the second target frame image, until the first cyclic value meets the first preset condition, yielding the final frame image corresponding to the second target frame image in the fifth video. The first cyclic value indicates the number of times pixel interpolation has been performed on the second target frame image according to the at least one pixel included in the path with the minimum energy value.
For example, pixel interpolation of the second target frame image according to at least one pixel means that at least one pixel included in the path with the smallest energy value is copied and inserted into a preset position. For example, the resolution of the fourth video is the same as the source resolution of the source video, the source resolution is 200×200, and after each pixel of the at least one pixel is inserted into a preset position laterally adjacent to each pixel, that is, after the second target frame image is subjected to pixel interpolation according to the at least one pixel, the resolution of a frame image corresponding to the second target frame image in the fifth video is 200×201.
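Seam insertion as described above, copying each path pixel into the laterally adjacent position, can be sketched as follows (the path is given as one column index per row, as the minimum-energy path provides):

```python
def insert_seam(image, seam):
    """Widen the image by one column: duplicate the seam pixel next to
    itself in each row (the preset laterally adjacent position)."""
    return [row[:c + 1] + [row[c]] + row[c + 1:]
            for row, c in zip(image, seam)]

img = [[1, 2],
       [3, 4]]
print(insert_seam(img, [0, 1]))   # each row gains one duplicated pixel
```

Each application adds one pixel per row, which is why a single pass changes a 200x200 frame to the 200x201 resolution mentioned in the example.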
For example, both the first cyclic value and the first preset condition are integers, and the first cyclic value is incremented by 1 after each pixel interpolation. For example, the first cyclic value is 0 at the beginning and is updated to 1 after pixel interpolation is performed on the second target frame image once. If the first cyclic value does not meet the first preset condition, the frame image corresponding to the second target frame image in the fifth video obtained after the pixel interpolation is taken as the new second target frame image and processing jumps back to the step of determining the energy value of each pixel in the second target frame image; once the first cyclic value meets the first preset condition, the current frame image corresponding to the second target frame image in the fifth video is taken as the final result.
For example, the computer device may determine the first preset condition according to a display resolution of the display screen, a field angle of the source video, and a field angle of the eyepiece. For example, the computer device determines the first preset condition according to the display resolution of the display screen, the field angle of the source video, and the field angle of the eyepiece by the following formula (3):
wherein k_1 is the first preset condition, X_res is the display lateral resolution included in the display resolution, X_fov is the field angle of the eyepiece, and X_video is the field angle of the source video.
As an example, X_res in the above formula (3) may instead be the display portrait resolution included in the display resolution, which is not limited in the embodiments of the present application.
As one example, performing field-angle lowering processing on the fourth video refers to performing pixel removal on each frame image in the fourth video. For example, for a second target frame image in the fourth video, if the field angle of the source video is larger than the field angle of the eyepiece, the at least one pixel is removed from the second target frame image, and the second target frame image after pixel removal is the frame image corresponding to the second target frame image in the fifth video. A second cyclic value is then determined; if the second cyclic value does not meet a second preset condition, the frame image corresponding to the second target frame image in the fifth video is taken as the new second target frame image and processing jumps back to the step of determining the energy value of each pixel in the second target frame image, until the second cyclic value meets the second preset condition, yielding the final frame image corresponding to the second target frame image in the fifth video. The second cyclic value indicates the number of times pixel removal has been performed on the second target frame image according to the at least one pixel included in the path with the minimum energy value.
For example, pixel removal of the second target frame image from at least one pixel refers to removing at least one pixel included in the path that minimizes the energy value. For example, if the source resolution is 200×200, the resolution of the frame image corresponding to the second target frame image in the fifth video obtained by removing the pixel of the second target frame image according to at least one pixel is 200×199.
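The complementary seam-removal step, deleting the path pixel from each row, can be sketched the same way:

```python
def remove_seam(image, seam):
    """Narrow the image by one column: delete the seam pixel in each row."""
    return [row[:c] + row[c + 1:] for row, c in zip(image, seam)]

img = [[1, 2, 3],
       [4, 5, 6]]
print(remove_seam(img, [1, 2]))   # each row loses its seam pixel
```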
For example, both the second cyclic value and the second preset condition are integers, and the second cyclic value is incremented by 1 after each pixel removal. The computer device may determine the second preset condition based on the display resolution of the display screen, the field angle of the source video, and the field angle of the eyepiece. For example, the computer device determines the second preset condition according to the display resolution of the display screen, the field angle of the source video, and the field angle of the eyepiece by the following formula (4):
wherein k_2 is the second preset condition, X_res is the display lateral resolution included in the display resolution, X_fov is the field angle of the eyepiece, and X_video is the field angle of the source video.
Field-angle processing, i.e. the field-angle raising and field-angle lowering processing, changes the resolution of the fourth video, so the resolution of the fifth video is different from the resolution of the fourth video, and is also different from the source resolution of the source video.
In step 211, if the computer device determines that the field angle of the source video is the same as the field angle of the eyepiece, the fourth video is the fifth video.
In step 212, the computer device determines whether the aspect ratio of the source video is the same as the aspect ratio of the display screen.
And step 213, if the computer equipment determines that the aspect ratio of the source video is different from the aspect ratio of the display screen, performing aspect ratio processing on the fifth video to obtain a third video.
If the aspect ratio of the source video is different from the aspect ratio of the display screen, the aspect ratio processing of the fifth video may include a scaling processing and a stretching processing.
For example, if the aspect ratio of the source video is greater than the aspect ratio of the display screen, scaling the fifth video to obtain a third video, and if the aspect ratio of the source video is less than the aspect ratio of the display screen, stretching the fifth video to obtain the third video.
Zooming or stretching the fifth video refers to zooming or stretching the sizes of the left image and the right image of each frame in the fifth video, without changing the resolution of the fifth video. That is, the aspect ratio processing, i.e. the scaling processing and the stretching processing, does not change the resolution of the fifth video, so the resolution of the third video is the same as the resolution of the fifth video, which is different from the source resolution of the source video.
In step 214, if the computer device determines that the aspect ratio of the source video is the same as the aspect ratio of the display screen, the fifth video is the third video.
In step 215, the computer device determines whether the resolution of the third video is the same as the display resolution of the display screen.
In step 216, if the resolution of the third video is determined to be different from the display resolution of the display screen, the computer device performs resolution processing on the third video to obtain the first video.
The aspect ratio of the first video is the same as the aspect ratio of the display screen, the resolution of the first video is the same as the display resolution of the display screen, and the frame rate of the first video is the same as the frame rate of the display screen.
If the resolution of the third video is different from the display resolution of the display screen, the resolution processing of the third video may include a resolution reduction process and a super resolution process. For example, if the resolution of the third video is greater than the display resolution of the display screen, performing resolution reduction processing on the third video to obtain the first video, and if the resolution of the third video is less than the display resolution of the display screen, performing super-resolution processing on the third video to obtain the first video.
As one example, a specific implementation of the resolution reduction process on the third video may include: the computer device may determine a resolution ratio of a resolution of the third video to a display resolution of the display screen, determine at least one pixel extraction position in each frame of image included in the third video according to the resolution ratio, and perform pixel extraction on a pixel at the at least one pixel extraction position in each frame of image included in the third video, where the third video after pixel extraction is the first video.
For example, the resolution ratio may be the rounded ratio of the lateral resolution of the third video to the display lateral resolution, or the rounded ratio of the longitudinal resolution of the third video to the display longitudinal resolution, and the resolution ratio is a common divisor of the lateral resolution and the longitudinal resolution of the third video.
For example, if the resolution ratio is n, the at least one pixel extraction position determined according to the resolution ratio may be the position of one pixel out of every n pixels in each frame image. Each frame image included in the third video is then downsampled, that is, one pixel is extracted every n pixels in the transverse direction and then one pixel every n pixels in the longitudinal direction; the downsampled third video is the first video.
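The extraction-based downsampling can be sketched as keeping one pixel out of every n, first transversely and then longitudinally, for an n-fold reduction per axis. The sampling phase (which pixel of each group of n is kept) is an assumption, as the embodiment does not specify it:

```python
def downsample(image, n):
    """Reduce resolution by a factor of n per axis: keep every n-th
    pixel across each row, then keep every n-th row."""
    return [row[::n] for row in image[::n]]

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 test frame
print(downsample(img, 2))   # a 2x2 frame of the sampled pixels
```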
As an example, a specific implementation of super-resolution processing of the third video may include: for a first target frame image in the third video, the computer device may determine an adjacent frame image of the first target frame image, determine at least one pixel interpolation position in the first target frame image according to the resolution of the third video and the display resolution of the display screen, determine an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image, and then perform pixel interpolation on each pixel interpolation position in the first target frame image according to the interpolation frame image corresponding to the first target frame image, where the first target frame image after pixel interpolation is the frame image corresponding to the first target frame image in the first video. The first target frame image is any frame in the third video.
For example, the at least one pixel interpolation position includes a horizontal pixel interpolation position and/or a vertical pixel interpolation position, the horizontal pixel interpolation position may be determined according to a difference between a horizontal resolution of the third video and a display horizontal resolution, the vertical pixel interpolation position may be determined according to a difference between a vertical resolution of the third video and a display vertical resolution, and then the pixel interpolation may be performed on the horizontal pixel interpolation position and the vertical pixel interpolation position, respectively, according to an interpolation frame image corresponding to the first target frame image.
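One plausible way to derive the lateral pixel-interpolation positions from the difference between the lateral resolution of the third video and the display lateral resolution is to spread the insertion columns evenly across each row; the even-spacing scheme is an assumption, not the embodiment's exact rule:

```python
def lateral_interp_positions(src_width, dst_width):
    """Return column indices at which to insert (dst_width - src_width)
    interpolated pixels, spread evenly across a row of src_width pixels."""
    extra = dst_width - src_width
    assert extra >= 0
    step = src_width / (extra + 1)
    return [round(step * (i + 1)) for i in range(extra)]

print(lateral_interp_positions(4, 6))     # two insertion columns
print(lateral_interp_positions(200, 201)) # a single mid-row insertion
```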
For example, the interpolated image corresponding to the first target frame image may be one or more. For example, the interpolated image corresponding to the first target frame image may include a first interpolated image and/or a second interpolated image, and the computer device may determine a motion state of the first target frame image relative to the adjacent frame image, and determine the first interpolated image corresponding to the first target frame image according to the motion state; and/or extracting and matching the characteristics of the first target frame image and the adjacent frame images, and fusing the matched characteristics to obtain a second interpolation frame image corresponding to the first target frame image. Of course, the interpolated image may be obtained in other manners, which is not limited in the embodiment of the present application.
In step 217, if the computer device determines that the resolution of the third video is the same as the display resolution of the display screen, the third video is the first video.
In this way, after the source video is processed through the steps 206-217, the frame rate, aspect ratio and resolution of the obtained first video are adapted to the display screen, and the field angle of the first video is adapted to that of the eyepiece, so that the effect of displaying the first video on the display screen is better, and the effect of the first video viewed by the user through the eyepiece is better.
In addition, the computer device also considers the influence of the second center-to-center distance related to the user's interpupillary distance on the size of the overlapping area of the first video displayed on the display screen, on the basis of ensuring that the frame rate, the aspect ratio and the display resolution of the first video and the field angle of the eyepiece are all adapted. For example, the computer device may process the first video according to the second center-to-center distance, so that the effect of the video viewed by the user through the eyepiece adapted to the user's interpupillary distance is better.
In step 218, the computer device processes the first video according to the first center distance and the second center distance to obtain a second video.
For example, the display resolution of the display screen is adjusted according to the first center distance and the second center distance, and then the first video to be displayed is processed according to the adjusted display resolution of the display screen to obtain a second video, wherein the resolution of the second video is the adjusted display resolution of the display screen.
For example, the computer device may determine a target resolution corresponding to the second center distance according to the first center distance, the second center distance, and the display resolution of the display screen, and then adjust the display resolution of the display screen to that target resolution. The adjusted display resolution of the display screen is adapted to the second center distance and the user pupil distance, and the resolution of the second video is the target resolution corresponding to the second center distance.
For example, the computer device may perform resolution processing on the left and right images of each frame in the first video to obtain the second video.
The display resolution of the adjusted display screen is matched with the second center distance and the user pupil distance, so that the second video obtained by processing the first video according to the display resolution of the adjusted display screen is also matched with the second center distance and the user pupil distance.
At step 219, the computer device displays the second video via the display screen.
For example, the computer device displays the second video through the display screen according to the adjusted display resolution of the display screen.
Because the adjusted display resolution of the display screen is adapted to the user pupil distance and the second video is adapted to the user pupil distance, the second video displayed by the display screen at the adjusted display resolution looks good to the user; that is, the effect of the second video viewed by the user through an eyepiece adapted to the user pupil distance is good.
Therefore, the center distance of the eyepieces is adjusted according to the user pupil distance, and frame rate processing, field angle processing, aspect ratio processing and resolution processing are performed on the source video according to the display parameters of the display screen and the field angle of the eyepiece. That is, the source video is optimized to obtain the first video, and the first video is adapted to the display screen and the eyepiece, so that the effect of the display screen displaying the first video is good, and the effect of the first video viewed by the user through the eyepiece is good.
In addition, the first center distance of the left and right display frames can be adjusted according to the user pupil distance to obtain the adjusted second center distance, which is related to the user pupil distance. The video to be displayed is then processed according to the first center distance before adjustment and the second center distance after adjustment, so that the overlapping area of the left and right images of each frame of the processed video is adapted to the adjusted center distance and the user pupil distance. When the processed video is displayed through the display screen, the effect of the three-dimensional stereoscopic video viewed by the user through an eyepiece adapted to the user pupil distance is therefore good.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the application. The video processing apparatus may be implemented by software, hardware or a combination of both as part or all of a computer device, which may be the computer device shown in fig. 4 below. Referring to fig. 3, the apparatus includes: a first acquisition module 301, a first adjustment module 302, a first processing module 303 and a display module 304.
The first obtaining module 301 is configured to obtain a pupil distance of a user.
The first adjustment module 302 is configured to adjust a first center distance of left and right display frames of a display screen of the near-eye display device according to a user pupil distance, to obtain a second center distance, where the second center distance is related to the user pupil distance.
The first processing module 303 is configured to process a first video to be displayed according to the first center distance and the second center distance to obtain a second video, where the aspect ratio of the first video is the same as the aspect ratio of the display screen, and the resolution of the first video is the same as the display resolution of the display screen.
And the display module 304 is used for displaying the second video through a display screen.
As an example, the first processing module 303 is configured to adjust a display resolution of the display screen according to the first center distance and the second center distance, and process the first video according to the adjusted display resolution of the display screen to obtain the second video;
And the display module 304 is configured to display the second video through the display screen according to the adjusted display resolution of the display screen.
As an example, the first processing module 303 is configured to determine, according to the first center distance, the second center distance, and the display resolution of the display screen, a target resolution corresponding to the second center distance;
and adjusting the display resolution of the display screen according to the target resolution, wherein the adjusted display resolution of the display screen is the target resolution.
As an example, the first processing module 303 is configured to determine the target resolution according to the first center distance, the second center distance, and the display resolution of the display screen by the following formula:
wherein w_2 is the lateral resolution included in the target resolution, w_1 is the display lateral resolution included in the display resolution of the display screen, PD_1 is the second center distance, PD_2 is the first center distance, h_2 is the longitudinal resolution included in the target resolution, and h_1 is the display portrait resolution included in the display resolution of the display screen.
As an example, the apparatus further comprises a second acquisition module 305, a second processing module 306, a third processing module 307;
a second obtaining module 305, configured to obtain an aspect ratio of the source video;
The second processing module 306 is configured to perform aspect ratio processing on the source video if the aspect ratio of the source video is different from the aspect ratio of the display screen, so as to obtain a third video, and if the aspect ratio of the source video is the same as the aspect ratio of the display screen, the source video is the third video;
and the third processing module 307 is configured to perform resolution processing on the third video if the resolution of the third video is different from the display resolution of the display screen, so as to obtain the first video, and if the resolution of the third video is the same as the display resolution of the display screen, then the third video is the first video.
As an example, the third processing module 307 is configured to perform the resolution reduction processing on the third video to obtain the first video if the resolution of the third video is greater than the display resolution of the display screen;
and if the resolution of the third video is smaller than the display resolution of the display screen, performing super-resolution processing on the third video to obtain the first video.
As an example, the third processing module 307 is configured to determine a resolution ratio of a resolution of the third video to a display resolution of the display screen;
determining at least one pixel extraction position in each frame of image included in the third video according to the resolution ratio;
And carrying out pixel extraction on the pixels at the at least one pixel extraction position in each frame of image included in the third video, wherein the third video after the pixel extraction is the first video.
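As an illustration of the resolution-reduction step above (the patent does not fix the exact extraction pattern, so a regular grid is assumed here), pixel extraction for an integer resolution ratio might be sketched as:

```python
def decimate_frame(frame, ratio):
    # frame: 2-D list of pixel values; ratio: source resolution / display
    # resolution, assumed integer here for simplicity. One pixel per `ratio`
    # positions is kept in each dimension; pixels at the extraction
    # positions in between are discarded.
    step = int(ratio)
    return [row[::step] for row in frame[::step]]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
```

With a resolution ratio of 2, the 4x4 frame above decimates to the 2x2 frame [[1, 3], [9, 11]].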
As an example, the third processing module 307 is configured to determine, for a first target frame image in the third video, an adjacent frame image of the first target frame image, where the first target frame image is any frame in the third video;
determining at least one pixel interpolation position in the first target frame image according to the resolution of the third video and the display resolution of the display screen;
determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image;
and carrying out pixel interpolation on each pixel interpolation position in the first target frame image according to the interpolation frame image corresponding to the first target frame image, wherein the first target frame image after the pixel interpolation is the frame image corresponding to the first target frame image in the first video.
As an example, the third processing module 307 is configured to determine a motion state of the first target frame image relative to the adjacent frame image, and determine a first interpolated frame image corresponding to the first target frame image according to the motion state;
and/or the number of the groups of groups,
And carrying out feature extraction and matching on the first target frame image and the adjacent frame image, and fusing the matched features to obtain a second interpolation frame image corresponding to the first target frame image.
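As a deliberately simple stand-in for the motion-state and feature-matching variants described above (both of which the patent leaves at the level of description), an interpolation frame image can be sketched as the per-pixel blend of the first target frame image and its adjacent frame image:

```python
def interpolation_frame(target, neighbor):
    # Hypothetical simplification: build the interpolation frame image as
    # the per-pixel average of the first target frame image and the
    # adjacent frame image. The patent's motion-compensated and
    # feature-fusion variants would replace this averaging step.
    return [[(a + b) / 2 for a, b in zip(row_t, row_n)]
            for row_t, row_n in zip(target, neighbor)]
```

Pixels from such an interpolation frame image would then be copied into the pixel interpolation positions of the first target frame image.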
As an example, the apparatus further comprises a third acquisition module 308, a fourth processing module 309, a fifth processing module 310;
a third obtaining module 308, configured to obtain a frame rate of a source video and a field angle of an eyepiece of the near-eye display device;
a fourth processing module 309, configured to perform frame rate processing on the source video to obtain a fourth video if the frame rate of the source video is different from the frame rate of the display screen, and if the frame rate of the source video is the same as the frame rate of the display screen, the source video is the fourth video;
a fifth processing module 310, configured to perform angle-of-view processing on the fourth video to obtain a fifth video if the angle of view of the source video is different from the angle of view of the eyepiece, and the fourth video is the fifth video if the angle of view of the source video is the same as the angle of view of the eyepiece;
and the second processing module 306 is configured to perform the aspect ratio processing on the fifth video if the aspect ratio of the source video is different from the aspect ratio of the display screen, so as to obtain a third video, and if the aspect ratio of the source video is the same as the aspect ratio of the display screen, then the fifth video is the third video.
As an example, the fourth processing module 309 is configured to delete at least one frame image in the source video if the frame rate of the source video is greater than the frame rate of the display screen;
if the frame rate of the source video is smaller than the frame rate of the display screen, determining at least one frame inserting position in the source video, for a first frame inserting position in the at least one frame inserting position, determining a frame inserting image corresponding to the first frame inserting position according to a previous frame image and a next frame image adjacent to the first frame inserting position, inserting the determined frame inserting image into the first frame inserting position, wherein the first frame inserting position is any one of the at least one frame inserting position.
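The frame-rate processing above (deleting frames when the source rate is higher, inserting interpolated frames when it is lower) can be sketched as follows; the per-pixel averaging used for the interpolated frame is an assumption, since the patent leaves the interpolation method open:

```python
def convert_frame_rate(frames, src_fps, dst_fps):
    # frames: list of frames, each a flat list of pixel values.
    # src > dst: keep every (src/dst)-th frame, deleting the rest.
    # src < dst: insert, between each pair of adjacent frames, an
    # interpolated frame built here as their per-pixel average (assumed).
    if src_fps == dst_fps:
        return list(frames)
    if src_fps > dst_fps:
        step = src_fps / dst_fps
        count = int(len(frames) * dst_fps / src_fps)
        return [frames[int(i * step)] for i in range(count)]
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        out.append([(a + b) / 2 for a, b in zip(prev, nxt)])
    out.append(frames[-1])
    return out
```

Doubling the rate of a three-frame sequence in this sketch inserts one averaged frame at each of the two insertion positions; halving a four-frame sequence keeps every second frame.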
As an example, the fifth processing module 310 is configured to determine, for a second target frame image in the fourth video, an energy value of each pixel point in the second target frame image, where the second target frame image is any frame in the fourth video;
determining a path with the minimum energy value in the second target frame image according to the energy value of each pixel point in the second target frame image, wherein the path with the minimum energy value comprises at least one pixel;
if the field angle of the source video is smaller than the field angle of the eyepiece, carrying out pixel interpolation on the second target frame image according to at least one pixel, wherein the second target frame image after the pixel interpolation is a frame image corresponding to the second target frame image in the fifth video, determining a first cyclic value, if the first cyclic value does not meet a first preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image, and jumping to the step of determining the energy value of each pixel point in the second target frame image until the first cyclic value meets the first preset condition, so as to obtain a frame image corresponding to the second target frame image in the fifth video, wherein the first cyclic value is used for indicating the number of times of carrying out pixel interpolation on the second target frame image according to at least one pixel included in a path with the minimum energy value;
And if the field angle of the source video is larger than the field angle of the eyepiece, performing pixel removal on at least one pixel in the second target frame image, wherein the second target frame image after the pixel removal is a frame image corresponding to the second target frame image in the fifth video, determining a second cyclic value, and if the second cyclic value does not meet a second preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image, and jumping to the step of determining the energy value of each pixel point in the second target frame image until the second cyclic value meets the second preset condition, so as to obtain a frame image corresponding to the second target frame image in the fifth video, wherein the second cyclic value is used for indicating the number of times of performing pixel removal on the second target frame image according to at least one pixel included in a path with the minimum energy value.
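The minimum-energy-path search above is the core of seam-carving-style field-of-view adjustment. A sketch of the dynamic-programming step, which finds a vertical path of one pixel per row with 8-connected moves between rows, might look like this (the energy function itself, e.g. a gradient magnitude, is taken as a given input here):

```python
def min_energy_seam(energy):
    # energy: 2-D list of per-pixel energy values.
    # Returns the column index chosen in each row for the vertical path
    # with the minimum total energy; adjacent rows may differ by at most
    # one column (left, straight, or right).
    rows, cols = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for r in range(1, rows):
        prev = cost[-1]
        cost.append([
            energy[r][c] + min(prev[max(c - 1, 0):min(c + 2, cols)])
            for c in range(cols)
        ])
    # Backtrack from the cheapest bottom cell up to the top row.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo = max(c - 1, 0)
        seam.append(min(range(lo, min(c + 2, cols)),
                        key=lambda cc: cost[r][cc]))
    return seam[::-1]
```

Removing the returned seam narrows each frame by one pixel column (reducing the field angle); duplicating its pixels alongside themselves widens it (raising the field angle). Repeating this per cyclic value, as described above, adjusts the frame by several pixels.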
As an example, the apparatus further comprises a second adjustment module 311;
the second adjustment module 311 is configured to adjust the eyepiece center distance if the difference between the eyepiece center distance and the user interpupillary distance of the near-eye display device is greater than a first threshold, where the adjusted eyepiece center distance is related to the user interpupillary distance.
It should be noted that: the video processing device provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the functions described above.
The functional units and modules in the above embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the embodiments of the present application.
The video processing device and the video processing method provided in the foregoing embodiments belong to the same concept, and specific working processes and technical effects brought by units and modules in the foregoing embodiments may be referred to a method embodiment part, which is not described herein again.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the application. As shown in fig. 4, the computer device includes: a processor 401, a memory 402 and a computer program 403 stored in the memory 402 and executable on the processor 401, the processor 401 implementing the steps in the video processing method in the above-described embodiment when executing the computer program 403.
The computer device may be the computer device in embodiment 1 or embodiment 2 described above. The computer device may be a near-eye display device, or a desktop, a portable computer, a network server, a palmtop, a mobile phone, a tablet, a wireless terminal device, a communication device, or an embedded device, and embodiments of the present application are not limited to the type of computer device. It will be appreciated by those skilled in the art that fig. 4 is merely an example of a computer device and is not intended to be limiting, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 401 may be a central processing unit (Central Processing Unit, CPU), and the processor 401 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. The general purpose processor may be a microprocessor or may be any conventional processor.
The memory 402 may be an on-chip memory or an off-chip memory of the computer device in some embodiments, such as a cache memory of the computer device, an SRAM (Static Random-Access Memory), a DRAM (Dynamic Random-Access Memory), a floppy disk, or the like. The memory 402 may also be a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, or the like provided on the computer device in other embodiments. Further, the memory 402 may also include both internal storage units (on-chip or off-chip memory) and external storage devices of the computer device. The memory 402 is used to store an operating system, application programs, a boot loader (Boot Loader), data, and other programs. The memory 402 may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also provides a computer device, which comprises: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, which when executed by the processor performs the steps of any of the various method embodiments described above.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of the respective method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a computer, causes the computer to perform the steps of the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the above-described method embodiments by instructing related hardware through a computer program, where the computer program may be stored in a computer readable storage medium, and where the computer program, when executed by a processor, may implement the steps of the above-described method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and so forth. The computer readable storage medium mentioned in the present application may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps to implement the above-described embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The computer instructions may be stored in the computer-readable storage medium described above.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other manners. For example, the apparatus/computer device embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (14)

1. A method of video processing, the method comprising:
acquiring a user pupil distance;
according to the user interpupillary distance, adjusting a first center distance of left and right display pictures of a display screen of the near-eye display device to obtain a second center distance, wherein the second center distance is related to the user interpupillary distance;
Processing the source video to obtain a first video;
processing the first video to be displayed according to the first center distance and the second center distance to obtain a second video, wherein the aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as that of the display screen;
displaying the second video through the display screen;
the processing the source video to obtain the first video includes:
if the frame rate of the source video is different from the frame rate of the display screen, performing frame rate processing on the source video to obtain a fourth video;
if the field angle of the source video is different from the field angle of the eyepiece, determining the energy value of each pixel point in a second target frame image in the fourth video, wherein the second target frame image is any frame in the fourth video;
determining a path with the minimum energy value in the second target frame image according to the energy value of each pixel point in the second target frame image, wherein the path with the minimum energy value comprises at least one pixel;
If the field angle of the source video is smaller than the field angle of the eyepiece, carrying out field angle raising processing on the fourth video to obtain a fifth video, wherein the field angle raising processing on the fourth video comprises pixel interpolation on the second target frame image according to at least one pixel, and the pixel interpolation on the second target frame image according to at least one pixel comprises copying and inserting at least one pixel included in a path with the minimum energy value into a preset position;
if the field angle of the source video is larger than the field angle of the eyepiece, performing field angle reduction processing on the fourth video to obtain the fifth video, wherein the field angle reduction processing on the fourth video comprises pixel removal of the second target frame image according to at least one pixel, and the pixel removal of the second target frame image according to at least one pixel comprises removal of at least one pixel included in a path with the minimum energy value;
if the aspect ratio of the source video is different from that of the display screen, performing aspect ratio processing on the fifth video to obtain a third video;
and if the resolution of the third video is different from the display resolution of the display screen, performing resolution processing on the third video to obtain the first video.
2. The method of claim 1, wherein the processing the first video to be displayed according to the first center distance and the second center distance to obtain the second video comprises:
adjusting the display resolution of the display screen according to the first center distance and the second center distance;
processing the first video according to the adjusted display resolution of the display screen to obtain the second video;
the displaying the second video through the display screen includes:
and displaying the second video through the display screen according to the adjusted display resolution of the display screen.
3. The method of claim 2, wherein adjusting the display resolution of the display screen based on the first center distance and the second center distance comprises:
determining a target resolution corresponding to the second center distance according to the first center distance, the second center distance and the display resolution of the display screen;
and adjusting the display resolution of the display screen according to the target resolution, wherein the adjusted display resolution of the display screen is the target resolution.
4. The method of claim 3, wherein the determining a target resolution corresponding to the second center distance based on the first center distance, the second center distance, and a display resolution of the display screen comprises:
determining the target resolution according to the first center distance, the second center distance and the display resolution of the display screen by the following formula:
wherein w2 is the lateral resolution included in the target resolution, w1 is the display lateral resolution included in the display resolution of the display screen, PD1 is the second center distance, PD2 is the first center distance, h2 is the longitudinal resolution included in the target resolution, and h1 is the display longitudinal resolution included in the display resolution of the display screen.
5. The method of claim 1, wherein if the resolution of the third video is different from the display resolution of the display screen, performing resolution processing on the third video to obtain the first video, including:
if the resolution of the third video is larger than the display resolution of the display screen, performing resolution reduction processing on the third video to obtain the first video;
And if the resolution of the third video is smaller than the display resolution of the display screen, performing super-resolution processing on the third video to obtain the first video.
6. The method of claim 5, wherein the performing the resolution reduction process on the third video to obtain the first video comprises:
determining a resolution ratio of a resolution of the third video to a display resolution of the display screen;
determining at least one pixel extraction position in each frame of image included in the third video according to the resolution ratio;
and carrying out pixel extraction on pixels of at least one pixel extraction position in each frame of image included in the third video, wherein the third video after the pixel extraction is the first video.
7. The method of claim 5, wherein super-resolution processing the third video to obtain the first video comprises:
for a first target frame image in the third video, determining adjacent frame images of the first target frame image, wherein the first target frame image is any frame in the third video;
determining at least one pixel interpolation position in the first target frame image according to the resolution of the third video and the display resolution of the display screen;
Determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image;
and carrying out pixel interpolation on each pixel interpolation position in the first target frame image according to the frame interpolation image corresponding to the first target frame image, wherein the first target frame image after the pixel interpolation is the frame image corresponding to the first target frame image in the first video.
8. The method of claim 7, wherein the interpolated image corresponding to the first target frame image comprises a first interpolated image and/or a second interpolated image;
the determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image comprises the following steps:
determining a motion state of the first target frame image relative to the adjacent frame image, and determining the first interpolation frame image corresponding to the first target frame image according to the motion state;
and/or the number of the groups of groups,
and carrying out feature extraction and matching on the first target frame image and the adjacent frame image, and fusing the matched features to obtain the second interpolation frame image corresponding to the first target frame image.
9. The method of claim 1, wherein the performing frame rate processing on the source video if the frame rate of the source video is different from the frame rate of the display screen comprises:
if the frame rate of the source video is larger than the frame rate of the display screen, deleting at least one frame of image in the source video;
and if the frame rate of the source video is smaller than the frame rate of the display screen, determining at least one frame inserting position in the source video, determining a frame inserting image corresponding to the first frame inserting position according to a previous frame image and a next frame image adjacent to the first frame inserting position for the first frame inserting position in the at least one frame inserting position, and inserting the determined frame inserting image into the first frame inserting position, wherein the first frame inserting position is any one of the at least one frame inserting position.
10. The method of claim 1, wherein if the angle of view of the source video is less than the angle of view of the eyepiece, performing an up-angle of view process on the fourth video to obtain a fifth video, and if the angle of view of the source video is greater than the angle of view of the eyepiece, performing an down-angle of view process on the fourth video to obtain the fifth video, comprising:
If the field angle of the source video is smaller than the field angle of the eyepiece, performing pixel interpolation on the second target frame image according to the at least one pixel, wherein the second target frame image after pixel interpolation is a frame image corresponding to the second target frame image in the fifth video, determining a first cyclic value, if the first cyclic value does not meet a first preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image, and jumping to the step of determining the energy value of each pixel point in the second target frame image until the first cyclic value meets the first preset condition, so as to obtain a frame image corresponding to the second target frame image in the fifth video, wherein the first cyclic value is used for indicating the number of times of pixel interpolation on the second target frame image according to the at least one pixel included in a path with the minimum energy value;
and if the field angle of the source video is larger than the field angle of the eyepiece, performing pixel removal on the at least one pixel in the second target frame image, wherein the second target frame image after the pixel removal is a frame image corresponding to the second target frame image in the fifth video, determining a second cyclic value, and if the second cyclic value does not meet a second preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image, and jumping to the step of determining the energy value of each pixel point in the second target frame image until the second cyclic value meets the second preset condition, so as to obtain the frame image corresponding to the second target frame image in the fifth video, wherein the second cyclic value is used for indicating the number of times of performing pixel removal on the second target frame image according to the at least one pixel included in the path with the minimum energy value.
11. The method of any one of claims 1-10, wherein before adjusting the first center distance of the left and right display frames of the display screen of the near-eye display device according to the user pupil distance, the method further comprises:
and if the difference value between the eyepiece center distance of the near-eye display device and the user pupil distance is larger than a first threshold value, adjusting the eyepiece center distance, wherein the adjusted eyepiece center distance is related to the user pupil distance.
12. A video processing device, which is characterized by comprising a first acquisition module, a first adjustment module, a first processing module and a display module;
the first acquisition module is used for acquiring the pupil distance of the user;
the first adjusting module is used for adjusting a first center distance of left and right display pictures of a display screen of the near-eye display device according to the user interpupillary distance to obtain a second center distance, and the second center distance is related to the user interpupillary distance;
the first processing module is configured to process a source video to obtain a first video, wherein the processing of the source video to obtain the first video includes:
if the frame rate of the source video is different from the frame rate of the display screen, performing frame rate processing on the source video to obtain a fourth video;
if the field angle of the source video is different from the field angle of the eyepiece, determining the energy value of each pixel point in a second target frame image in the fourth video, wherein the second target frame image is any frame in the fourth video; and determining, according to the energy value of each pixel point in the second target frame image, a path with the minimum energy value in the second target frame image, wherein the path with the minimum energy value comprises at least one pixel;
if the field angle of the source video is smaller than the field angle of the eyepiece, performing field angle raising processing on the fourth video to obtain a fifth video, wherein the field angle raising processing comprises pixel interpolation on the second target frame image according to the at least one pixel, and the pixel interpolation comprises copying the at least one pixel included in the path with the minimum energy value and inserting it at a preset position;
if the field angle of the source video is larger than the field angle of the eyepiece, performing field angle reduction processing on the fourth video to obtain the fifth video, wherein the field angle reduction processing comprises pixel removal on the second target frame image according to the at least one pixel, and the pixel removal comprises removing the at least one pixel included in the path with the minimum energy value;
if the aspect ratio of the source video is different from that of the display screen, performing aspect ratio processing on the fifth video to obtain a third video; and if the resolution of the third video is different from the display resolution of the display screen, performing resolution processing on the third video to obtain the first video;
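The minimum-energy path described in the claim above reads as seam carving: remove the seam to narrow the field of view, duplicate it to widen. A minimal sketch under that reading, using a simple gradient-magnitude energy (the patent does not fix an energy formula, and all helper names are illustrative):

```python
import numpy as np

def energy_map(gray: np.ndarray) -> np.ndarray:
    # Gradient-magnitude energy per pixel; one common choice, not the patent's.
    gx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    gy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    return gx + gy

def min_energy_seam(e: np.ndarray) -> np.ndarray:
    # Dynamic programming: cost[r, c] = cheapest vertical path ending at (r, c).
    h, w = e.shape
    cost = e.astype(float).copy()
    for r in range(1, h):
        left = np.r_[np.inf, cost[r - 1, :-1]]
        right = np.r_[cost[r - 1, 1:], np.inf]
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    # Backtrack from the cheapest pixel in the bottom row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(0, c - 1), min(w, c + 2)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam

def remove_seam(img: np.ndarray, seam: np.ndarray) -> np.ndarray:
    # Field angle reduction: drop one pixel per row along the seam.
    h, w = img.shape
    mask = np.ones((h, w), bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1)

def insert_seam(img: np.ndarray, seam: np.ndarray) -> np.ndarray:
    # Field angle raising: copy the seam pixel and insert it beside itself.
    h, w = img.shape
    out = np.empty((h, w + 1), img.dtype)
    for r in range(h):
        c = seam[r]
        out[r, :c + 1] = img[r, :c + 1]
        out[r, c + 1] = img[r, c]
        out[r, c + 2:] = img[r, c + 1:]
    return out
```

Because the seam follows the lowest-energy pixels, inserting or removing it changes the image width (and hence the rendered field angle) while disturbing high-detail regions as little as possible.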
the first processing module is further configured to process the first video to be displayed according to the first center distance and the second center distance to obtain a second video, wherein the aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as the display resolution of the display screen;
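One way to read "processing the first video according to the first center distance and the second center distance" is to shift the left and right half-pictures of each frame apart or together by the pixel equivalent of the change. The sketch below assumes a side-by-side frame layout and a known pixels-per-millimetre factor for the display screen; both are assumptions, not stated in the claim:

```python
import numpy as np

def shift_half(img: np.ndarray, dx: int) -> np.ndarray:
    # Shift an image horizontally by dx pixels (positive = right), pad with black.
    out = np.zeros_like(img)
    if dx >= 0:
        out[:, dx:] = img[:, :img.shape[1] - dx]
    else:
        out[:, :dx] = img[:, -dx:]
    return out

def recenter_frame(frame: np.ndarray, first_mm: float, second_mm: float,
                   px_per_mm: float) -> np.ndarray:
    """Move the left and right half-pictures of a side-by-side frame so their
    center distance changes from first_mm to second_mm (assumed layout)."""
    h, w = frame.shape[:2]
    half = w // 2
    delta_px = int(round((second_mm - first_mm) * px_per_mm / 2))
    left = shift_half(frame[:, :half], -delta_px)   # left picture moves outward
    right = shift_half(frame[:, half:], delta_px)   # right picture moves outward
    return np.concatenate([left, right], axis=1)
```

Each half moves by half the total change, so the midpoint between the two picture centers stays fixed while their separation matches the second center distance.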
and the display module is used for displaying the second video through the display screen.
13. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method according to any one of claims 1 to 11.
14. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 11.
CN202210908234.0A 2022-07-29 2022-07-29 Video processing method, device, equipment and storage medium Active CN115348437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210908234.0A CN115348437B (en) 2022-07-29 2022-07-29 Video processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115348437A CN115348437A (en) 2022-11-15
CN115348437B true CN115348437B (en) 2023-10-31

Family

ID=83950989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210908234.0A Active CN115348437B (en) 2022-07-29 2022-07-29 Video processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115348437B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800302A (en) * 2011-05-25 2012-11-28 联想移动通信科技有限公司 Method for adjusting resolution of display screen by terminal equipment, and terminal equipment
CN104216841A (en) * 2014-09-15 2014-12-17 联想(北京)有限公司 Information processing method and electronic equipment
CN105700140A (en) * 2016-01-15 2016-06-22 北京星辰万有科技有限公司 Immersive video system with adjustable pupil distance
CN105847578A (en) * 2016-04-28 2016-08-10 努比亚技术有限公司 Information display type parameter adjusting method and head mounted device
CN108989671A (en) * 2018-07-25 2018-12-11 Oppo广东移动通信有限公司 Image processing method, device and electronic equipment
CN110006634A (en) * 2019-04-15 2019-07-12 北京京东方光电科技有限公司 Visual field angle measuring method, visual field angle measuring device, display methods and display equipment
CN112804561A (en) * 2020-12-29 2021-05-14 广州华多网络科技有限公司 Video frame insertion method and device, computer equipment and storage medium
CN113592720A (en) * 2021-09-26 2021-11-02 腾讯科技(深圳)有限公司 Image scaling processing method, device, equipment, storage medium and program product

Also Published As

Publication number Publication date
CN115348437A (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN109064390B (en) Image processing method, image processing device and mobile terminal
US20160301868A1 (en) Automated generation of panning shots
WO2019237299A1 (en) 3d facial capture and modification using image and temporal tracking neural networks
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
US8866881B2 (en) Stereoscopic image playback device, stereoscopic image playback system, and stereoscopic image playback method
CN102572492B (en) Image processing device and method
CN112565589A (en) Photographing preview method and device, storage medium and electronic equipment
CN110866486B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN112584034B (en) Image processing method, image processing device and electronic equipment applying same
CN107864335B (en) Image preview method and device, computer readable storage medium and electronic equipment
US20160180514A1 (en) Image processing method and electronic device thereof
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN111182196A (en) Photographing preview method, intelligent terminal and device with storage function
US8508603B2 (en) Object detection device, object detection system, integrated circuit for object detection, and object detection method
Jung A modified model of the just noticeable depth difference and its application to depth sensation enhancement
JP2001128195A (en) Stereoscopic image correcting device, stereoscopic image display device, and recording medium with stereoscopic image correcting program recorded thereon
Avraham et al. Ultrawide foveated video extrapolation
US10223766B2 (en) Image processing including geometric distortion adjustment
EP3993383A1 (en) Method and device for adjusting image quality, and readable storage medium
US20120007819A1 (en) Automatic Convergence Based on Touchscreen Input for Stereoscopic Imaging
CN115348437B (en) Video processing method, device, equipment and storage medium
CN114071010A (en) Shooting method and equipment
KR20180000017A (en) Augmented reality providing mehtod using smart glass
Chamaret et al. Video retargeting for stereoscopic content under 3D viewing constraints
CN114610150A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant