CN108833877B - Image processing method and device, computer device and readable storage medium

Publication number: CN108833877B (granted from application CN201810602433.2A; first published as CN108833877A)
Inventor: 李锐
Assignee: Chongqing Virtual Reality Technology Co Ltd
Legal status: Active

Abstract

The invention provides an image processing method and device, a computer device and a readable storage medium. The image processing method comprises the following steps: acquiring the pixel coordinates of the screen display area where a pixel point is located; determining, according to division data, the division range to which the screen abscissa value taken from the pixel coordinates belongs; acquiring the map coordinates corresponding to a frame picture of a video source; calculating, with a preset function, the coordinate values taken from the division data and the map coordinates, so as to output a single-channel abscissa value and a single-channel ordinate value; and taking corresponding pixel points from the video source according to the single-channel abscissa value and the single-channel ordinate value, so as to display them in the screen display area corresponding to the division range to which the screen abscissa value belongs. The method and device can improve the sense of immersion of a user watching a panoramic video in a virtual reality environment.

Description

Image processing method and device, computer device and readable storage medium
Technical Field
The present invention relates to the field of virtual reality technologies, and in particular, to an image processing method and device, a computer device, and a readable storage medium.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims and the detailed description. The description herein is not admitted to be prior art by inclusion in this section.
At present, panoramic video is mostly played using a monocular playing scheme. In a virtual reality environment, the scene observed by the two eyes then has no parallax, so the sense of immersion is weak and the user's visual experience suffers.
Disclosure of Invention
In view of the foregoing, the present invention provides an image processing method and apparatus, a computer apparatus, and a readable storage medium, which can improve the immersion of a user in watching a panoramic video in a virtual reality environment.
An embodiment of the present invention provides an image processing method, which is applied to a virtual panoramic video, and the method includes:
acquiring pixel coordinates of a screen display area where a pixel point is located, wherein the screen display area comprises a left-eye display area and a right-eye display area;
determining, according to division data, a division range to which a screen abscissa value taken from the pixel coordinates belongs, wherein the division range comprises a left-eye viewing-angle range corresponding to the left-eye display area and a right-eye viewing-angle range corresponding to the right-eye display area;
acquiring map coordinates corresponding to a frame picture of a video source;
calculating, with a preset function, coordinate values taken from the division data and the map coordinates, so as to output a single-channel abscissa value and a single-channel ordinate value;
and taking corresponding pixel points from the video source according to the single-channel abscissa value and the single-channel ordinate value, so as to display them in the screen display area corresponding to the division range to which the screen abscissa value belongs.
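The steps above can be sketched as a minimal illustrative model. The function names, the top-bottom source layout, and the u = 0.5 screen dividing line are all assumptions for this sketch, not the patented shader code:

```python
# Hypothetical sketch of the claimed steps; not the patented implementation.

def process_pixel(screen_uv, divisions, map_uv):
    """Map one screen pixel to a pixel position in the video-source frame.

    screen_uv: pixel coordinates in the screen display area (0..1 range).
    divisions: division data per eye, each a (u_min, u_max, v_min, v_max)
               sub-rectangle of the frame (as stored in RGBA channels).
    map_uv:    map (texture) coordinates of the video-source frame picture.
    """
    # Step 102: the dividing line splits the screen abscissa into the
    # left-eye and right-eye viewing-angle ranges (assumed at u = 0.5).
    eye = "left" if screen_uv[0] < 0.5 else "right"
    u_min, u_max, v_min, v_max = divisions[eye]
    # Step 104: interpolate the map coordinate into the eye's sub-rectangle
    # to obtain the single-channel abscissa and ordinate values.
    single_u = u_min + (u_max - u_min) * map_uv[0]
    single_v = v_min + (v_max - v_min) * map_uv[1]
    # Step 105: the caller samples the video source at (single_u, single_v).
    return eye, (single_u, single_v)

# Top-bottom source: left eye in the upper half, right eye in the lower half.
divisions = {"left": (0.0, 1.0, 0.0, 0.5), "right": (0.0, 1.0, 0.5, 1.0)}
print(process_pixel((0.3, 0.4), divisions, (0.5, 0.5)))  # ('left', (0.5, 0.25))
```

A pixel in the left half of the screen is thus routed to the upper (left-eye) half of the frame, and one in the right half to the lower (right-eye) half.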
Preferably, the division data comprises data stored using RGBA channels, wherein the R channel is configured to store a first abscissa minimum value, the G channel is configured to store a first abscissa maximum value, the B channel is configured to store a first ordinate minimum value, and the A channel is configured to store a first ordinate maximum value.
Preferably, the division data further comprises a dividing line that delimits the division range, the dividing line being located midway between the first abscissa minimum value and the first abscissa maximum value.
Preferably, the calculating, with a preset function, of the coordinate values taken from the division data and the map coordinates so as to output a single-channel abscissa value and a single-channel ordinate value comprises:
calculating, with a lerp function, the abscissa values taken from the division data and the abscissa value of the map coordinates, so as to output the single-channel abscissa value;
and calculating, with a lerp function, the ordinate values taken from the division data and the ordinate value of the map coordinates, so as to output the single-channel ordinate value.
Preferably, the step of taking corresponding pixel points from the video source according to the single-channel abscissa value and the single-channel ordinate value, so as to display them in the screen display area corresponding to the division range to which the screen abscissa value belongs, comprises:
taking corresponding pixel points from a frame picture of the video source according to the single-channel abscissa value and the single-channel ordinate value, so as to display them in the screen display area corresponding to the division range to which the screen abscissa value belongs.
Preferably, before the taking of corresponding pixel points from the video source according to the single-channel abscissa value and the single-channel ordinate value for display in the screen display area corresponding to the division range to which the screen abscissa value belongs, the method further comprises:
merging the single-channel abscissa value and the single-channel ordinate value into two-channel UV data;
and the step of taking corresponding pixel points from the video source according to the single-channel abscissa value and the single-channel ordinate value, so as to display them in the screen display area corresponding to the division range to which the screen abscissa value belongs, comprises:
taking corresponding pixel points from the frame picture of the video source according to the two-channel UV data, so as to display them in the screen display area corresponding to the division range to which the screen abscissa value belongs.
The embodiment of the invention also provides an image processing device, which is applied to a virtual panoramic video and comprises:
an acquisition module, configured to acquire pixel coordinates of a screen display area where a pixel point is located, wherein the screen display area comprises a left-eye display area and a right-eye display area;
a determining module, configured to determine, according to division data, a division range to which a screen abscissa value taken from the pixel coordinates belongs, wherein the division range comprises a left-eye viewing-angle range corresponding to the left-eye display area and a right-eye viewing-angle range corresponding to the right-eye display area;
the acquisition module being further configured to acquire map coordinates corresponding to a frame picture of a video source;
a calculation module, configured to calculate, with a preset function, coordinate values taken from the division data and the map coordinates, so as to output a single-channel abscissa value and a single-channel ordinate value;
and a display module, configured to take corresponding pixel points from the video source according to the single-channel abscissa value and the single-channel ordinate value, so as to display them in the screen display area corresponding to the division range to which the screen abscissa value belongs.
Preferably, the division data comprises data stored using RGBA channels, wherein the R channel stores a first abscissa minimum value, the G channel stores a first abscissa maximum value, the B channel stores a first ordinate minimum value, and the A channel stores a first ordinate maximum value.
A further aspect of embodiments of the present invention provides a computer apparatus comprising a processor for implementing the steps of the image processing method as described above when executing a computer program stored in a memory.
Yet another aspect of the embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the image processing method as described above.
The image processing method and device, the computer device and the computer-readable storage medium provided by the invention make use of the correspondence between the coordinate system of the image display area and the coordinate system of the video-source frame picture, while using the division data to determine the division range to which the screen abscissa value belongs. Then, by obtaining the map coordinates of the video-source frame picture and performing the corresponding calculation with a lerp function to obtain a single-channel abscissa value and a single-channel ordinate value, the corresponding pixel points of the video-source frame picture can be determined from those values, and the content of these pixel points is displayed in the left-eye or right-eye display area corresponding to the division range to which the screen abscissa value belongs. Binocular parallax is thereby formed, which can improve the sense of immersion of a user watching a panoramic video in a virtual reality environment.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a flow chart of an image processing method according to an embodiment of the invention;
FIG. 2 is a block diagram of a video source according to an embodiment of the present invention;
FIG. 3 is an exemplary functional block diagram of an image processing apparatus provided in one embodiment of the present invention;
FIG. 4 is a schematic diagram of an exemplary structure of a computer device according to an embodiment of the present invention.
Description of the main elements
Computer device 1
Processor 10
Memory 20
Image processing apparatus 100
Acquisition module 11
Determination module 12
Computing module 13
Display module 14
Merge processing module 15
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings. In addition, the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. The described embodiments are merely some of the embodiments of the present invention, rather than all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
In a virtual reality scene, when a panoramic video is watched with a Head-Mounted Display (HMD), the left-eye display area presents an image to the user's left eye, and the right-eye display area presents an image to the user's right eye. The present scheme can process the image so as to form left-right eye parallax and enhance the sense of visual immersion.
The scheme mainly builds a three-dimensional sky-sphere model of the VR environment and places VR virtual-perspective cameras at the centre point of the model (two virtual-perspective cameras and two sky-sphere models are arranged, one pair for the left eye and one for the right eye), so that the content seen by each virtual-perspective camera inside the model is consistent with the perspective content captured by a real camera. The VR virtual-perspective camera is used to render the content to be viewed by the human eye. A frame picture rendered by the virtual-perspective camera (the three-dimensional scene rendered into a two-dimensional image) is then acquired and calculated. Images for the left-eye and right-eye display areas are formed according to the screen position of each pixel point of the frame picture: by reading the screen range of the output picture, the picture loaded by the virtual-perspective camera is correspondingly replaced with the left-eye video image, and likewise for the right eye, so that left-right eye parallax is formed and visual immersion is improved.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, and it should be noted that the image processing method according to the embodiment of the present invention is not limited to the steps and the sequence in the flowchart shown in fig. 1. Steps in the illustrated flowcharts may be added, removed, or changed in order according to various needs.
As shown in fig. 1, the image processing method according to the present embodiment, applied to a virtual panoramic video, may include the following steps:
step 101: the method comprises the steps of obtaining pixel coordinates of a screen display area where pixel points are located, wherein the screen display area comprises a left eye display area and a right eye display area.
In this embodiment, the screen display area may correspond to the head-mounted display, and a left eye display area of the head-mounted display is used for providing image display for a left eye of a user, and a right eye display area of the head-mounted display is used for providing image display for a right eye of the user.
It can be understood that every pixel point in the screen display area has a relative coordinate position. Therefore, once the pixel coordinates of the screen display area where a pixel point is located have been obtained, the display of the pixel point corresponding to a coordinate position can, conversely, be controlled through that coordinate position.
Step 102: determining, according to division data, the division range to which the screen abscissa value taken from the pixel coordinates belongs, wherein the division range comprises a left-eye viewing-angle range corresponding to the left-eye display area and a right-eye viewing-angle range corresponding to the right-eye display area.
In this embodiment, the division data comprises data stored using RGBA channels, where the R channel is used to store a first abscissa minimum value, the G channel a first abscissa maximum value, the B channel a first ordinate minimum value, and the A channel a first ordinate maximum value. It can be understood that the data stored in the R channel and the G channel represent the abscissa range of the video-source frame picture, and the data stored in the B channel and the A channel represent its ordinate range.
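As an illustrative sketch (not the patented implementation), the division data can be packed into and read back from an RGBA 4-tuple; the function names are assumptions:

```python
# Hypothetical encoding of division data in RGBA channels.

def pack_division(u_min, u_max, v_min, v_max):
    # R = abscissa minimum, G = abscissa maximum,
    # B = ordinate minimum, A = ordinate maximum.
    return (u_min, u_max, v_min, v_max)

def unpack_division(rgba):
    r, g, b, a = rgba
    return {"u_range": (r, g), "v_range": (b, a)}

# Left-eye region of a top-bottom frame: full width, upper half.
left_eye = pack_division(0.0, 1.0, 0.0, 0.5)
print(unpack_division(left_eye))  # {'u_range': (0.0, 1.0), 'v_range': (0.0, 0.5)}
```

Reusing a colour channel layout this way costs nothing extra at the shader level, which is the channel-utilization benefit the embodiment notes.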
It can be appreciated that by utilizing existing color channels to store coordinate data, it is beneficial to improve the utilization of existing data channels.
In this embodiment, the division data further comprises a dividing line that delimits the division range, the dividing line being located midway between the first abscissa minimum value and the first abscissa maximum value.
Step 103: acquiring the map coordinates corresponding to a frame picture of the video source.
In this embodiment, the map coordinates are the coordinates used to paste an image onto the three-dimensional spherical model, and they have a corresponding relationship with the coordinates of the screen display area.
Step 104: calculating, with a preset function, the coordinate values taken from the division data and the map coordinates, so as to output a single-channel abscissa value and a single-channel ordinate value.
In the present embodiment, the calculation is performed with a lerp function: the abscissa values taken from the division data and the abscissa value of the map coordinates are calculated with the lerp function to output the single-channel abscissa value.
Similarly, the ordinate values taken from the division data and the ordinate value of the map coordinates may be calculated with the lerp function to output the single-channel ordinate value.
Specifically, when an abscissa-related value is taken from the division data, the abscissa value of the map coordinates is correspondingly taken, and the value output by the lerp calculation is the single-channel abscissa value; when an ordinate-related value is taken from the division data, the ordinate value of the map coordinates is correspondingly taken, and the value output by the lerp calculation is the single-channel ordinate value.
It will be appreciated that the coordinate values stored in the RGBA channels correspond as described above: the R and G channels store abscissa-related values, and the B and A channels store ordinate-related values. Depending on the case, the value stored in either the R channel or the G channel may be selected.
Step 105: taking corresponding pixel points from the video source according to the single-channel abscissa value and the single-channel ordinate value, so as to display them in the screen display area corresponding to the division range to which the screen abscissa value belongs.
More specifically, in the present embodiment, corresponding pixel points are taken from the frame picture of the video source according to the single-channel abscissa value and the single-channel ordinate value, and displayed in the screen display area corresponding to the division range to which the screen abscissa value belongs.
In this embodiment, the division range to which the screen abscissa value belongs is determined using the division data, while using the correspondence between the coordinate system of the image display area and the coordinate system of the video-source frame picture. Then, by obtaining the map coordinates of the video-source frame picture and performing the corresponding calculation with a lerp function to obtain a single-channel abscissa value and a single-channel ordinate value, the corresponding pixel points of the video-source frame picture can be determined from those values, and their content is displayed in the left-eye or right-eye display area corresponding to the division range to which the screen abscissa value belongs. Binocular parallax is thereby formed, the user's sense of immersion is increased, and the visual experience is improved.
Fig. 2 is a schematic diagram of a frame picture of a video source according to an embodiment of the invention. In this embodiment, a frame picture of the video source is an image of 3840 × 3840 pixels; the upper half of the image is the left-eye panoramic picture and the lower half is the right-eye panoramic picture. Since the image is square, coordinate values can be taken with the upper-left vertex of the frame picture as coordinate (0, 0) and the lower-right vertex as coordinate (1, 1), as follows:
When the coordinate data stored in the RGBA channels is (0, 1, 0, 0.5), the left-eye panoramic picture is represented: the R channel and the G channel store the abscissa value range of the left-eye panoramic picture, and the B channel and the A channel store its ordinate value range.
When the coordinate data stored in the RGBA channels is (0, 1, 0.5, 1), the right-eye panoramic picture is represented; similarly, the R channel and the G channel store the abscissa value range of the right-eye panoramic picture, and the B channel and the A channel store its ordinate value range.
Using the coordinate values stored in the RGBA channels, the corresponding panoramic picture can be taken from the frame picture of the video source, so that corresponding pixel points can subsequently be picked from that panoramic picture (left-eye or right-eye) for display in the specific screen display area.
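As a hedged illustration of the 3840 × 3840 top-bottom layout above, the normalized RGBA ranges can be converted into pixel columns and rows; the helper name and the truncation convention are assumptions for this sketch:

```python
# Hypothetical conversion from normalized RGBA ranges to pixel extents.

FRAME_W = FRAME_H = 3840  # frame picture size from the embodiment

def rgba_range_to_pixels(rgba):
    u_min, u_max, v_min, v_max = rgba
    cols = (int(u_min * FRAME_W), int(u_max * FRAME_W))
    rows = (int(v_min * FRAME_H), int(v_max * FRAME_H))
    return cols, rows

print(rgba_range_to_pixels((0.0, 1.0, 0.0, 0.5)))  # left eye: upper half
print(rgba_range_to_pixels((0.0, 1.0, 0.5, 1.0)))  # right eye: lower half
```

The left-eye tuple selects rows 0-1920 and the right-eye tuple rows 1920-3840, i.e. the two stacked panoramic pictures.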
Meanwhile, at the head-mounted display end, the screen display area also performs a matching judgment: the abscissa value (U) of the UV coordinates of the screen display area is taken and matched against the frame-picture coordinates to judge whether the content displayed in that screen display area is image content from the left-eye panoramic picture or from the right-eye panoramic picture.
After the correspondence is determined, the frame picture needs to be pasted onto the three-dimensional spherical model; the map coordinate system of the three-dimensional spherical model (the UV coordinates representing where the model takes the corresponding image) has a corresponding relationship with the coordinate system of the frame picture.
Using this correspondence, the coordinate value stored in each of the RGBA channels is obtained with a mask function and combined with the abscissa and ordinate of the map coordinates, and the single-channel abscissa value and single-channel ordinate value are calculated and output with the lerp function. The image content of the corresponding pixel points can then be obtained from the corresponding left-eye or right-eye panoramic picture using these single-channel values, and displayed in the left-eye or right-eye display area.
In this embodiment, the coordinate system of the image display area where the pixel points are located corresponds to the coordinate system of the frame image, so that after the left-eye panoramic image and the right-eye panoramic image are obtained by using the RGBA channel, whether the coordinates of the pixel points belong to the left-eye panoramic image or the right-eye panoramic image is determined according to the coordinates of the pixel points in the image display area.
Then a lerp calculation is performed: when the abscissa x of the map coordinates is 0, the value a stored in the R channel of the RGBA channels is output; when x is 1, the value b stored in the G channel is output; and when x is between 0 and 1, the output value satisfies output = a × (1 − x) + b × x.
For example, if a = 1 and b = 0, then when x = 0.25 the output value is 0.75;
and when x = 0.25, if a = 0 and b = 1, the output value is 0.25.
It is understood that the above is for the calculation of the abscissa, and the ordinate is the same, so that the single-channel abscissa value and the single-channel ordinate value can be calculated by using the lerp function.
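The lerp behaviour walked through above can be checked with a small runnable sketch; this is the standard linear-interpolation formula, not code taken from the patent:

```python
# Standard linear interpolation: output = a*(1 - x) + b*x.

def lerp(a, b, x):
    # Returns a when x == 0, b when x == 1, and blends in between.
    return a * (1.0 - x) + b * x

print(lerp(1.0, 0.0, 0.25))  # 0.75 (a = 1, b = 0, x = 0.25)
print(lerp(0.0, 1.0, 0.25))  # 0.25 (a = 0, b = 1, x = 0.25)
```

The two printed values match the worked example above, and the same formula applied to the B/A channel pair yields the single-channel ordinate value.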
In this embodiment, the single-channel abscissa value and single-channel ordinate value described above can be used to take corresponding pixel points from the video source, and the acquired pixel points can be displayed in the left-eye display area or the right-eye display area for the user to watch with both eyes. Because the corresponding pixel points are taken from the video source (that is, from the aforementioned left-eye or right-eye panoramic picture) only after the left-right eye ranges have been distinguished, and are then displayed in the corresponding display area, the acquired pixel points all carry left-right eye difference information. Binocular parallax is therefore formed, which helps enhance the user's sense of immersion.
Furthermore, to facilitate data transmission, the single-channel abscissa value and the single-channel ordinate value can be merged into a two-channel UV value; corresponding pixel points can then be taken from the frame picture of the video source according to the two-channel UV data, so as to be displayed in the screen display area corresponding to the division range to which the screen abscissa value belongs.
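A minimal sketch of this channel-merging step follows; the row-major frame buffer and nearest-neighbour fetch are assumptions standing in for a real shader texture sample:

```python
# Hypothetical merge of two single-channel values into two-channel UV data,
# plus a toy sampler that indexes a flat frame buffer with it.

def merge_uv(single_u, single_v):
    return (single_u, single_v)  # two-channel UV data

def sample(frame, uv, width, height):
    # Nearest-neighbour fetch from a row-major buffer (an assumption here;
    # a shader would perform a hardware texture sample instead).
    x = min(int(uv[0] * width), width - 1)
    y = min(int(uv[1] * height), height - 1)
    return frame[y * width + x]

frame = list(range(16))         # toy 4 × 4 "video frame"
uv = merge_uv(0.5, 0.25)
print(sample(frame, uv, 4, 4))  # pixel at column 2, row 1 → 6
```

Passing one packed UV pair rather than two scalars mirrors how a shader pipeline normally transmits texture coordinates between stages.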
It is to be understood that the above embodiment describes panoramic-picture extraction for a video source in top-bottom binocular format. When the video source is in left-right binocular format, the left-eye panoramic picture is represented when the coordinate data stored in the RGBA channels is (0, 0.5, 0, 1), and the right-eye panoramic picture when it is (0.5, 1, 0, 1). After the left-eye and right-eye panoramic pictures are taken out, the binocular parallax effect can likewise be obtained using the above coordinate relations and related functions, improving the user's sense of immersion.
Fig. 3 is an exemplary functional block diagram of an image processing apparatus according to an embodiment of the present invention. As shown in Fig. 3, the image processing apparatus 100 may determine the division range to which the screen abscissa value belongs using the division data, while using the correspondence between the coordinate system of the image display area and the coordinate system of the video-source frame picture. It then obtains the map coordinates of the video-source frame picture, performs the corresponding lerp calculation to obtain a single-channel abscissa value and a single-channel ordinate value, determines the corresponding pixel points of the video-source frame picture from those values, and displays their content in the left-eye or right-eye display area corresponding to the division range to which the screen abscissa value belongs, so that binocular parallax is formed, the user's sense of immersion is increased, and the visual experience is improved.
The image processing apparatus 100 of the present invention may include one or more modules, which may be stored in a memory of the terminal and configured to be executed by one or more processors (one processor in this embodiment) to carry out the present invention. For example, as shown in Fig. 3, the image processing apparatus 100 may include an acquisition module 11, a determining module 12, a calculation module 13, a display module 14 and a merge processing module 15. The modules referred to in this application are program segments capable of performing particular functions, and are better suited than a program for describing the execution process of software in the processor.
It should be noted that, corresponding to the above embodiments of the image processing method, the image processing apparatus 100 may include some or all of the functional modules shown in Fig. 3, and the functions of the modules will be described in detail below. The terms used and explained in the above embodiments of the image processing method apply equally to the following functional description of each module. For brevity and to avoid repetition, they are not described again.
Fig. 4 is an exemplary structural diagram of a computer device according to an embodiment of the present invention. The present embodiment provides a computer apparatus 1 including: a processor 10, a memory 20 and a computer program, such as an image processing program, stored in said memory 20 and executable on said processor 10. The processor 10, when executing the computer program, implements the steps of the above-described embodiments of the image processing method, such as the steps 101 to 105 shown in fig. 1. Alternatively, the processor 10 implements the functions of the modules in the above device embodiments when executing the computer program, for example, the obtaining module 11 in fig. 3 implements the function of obtaining the pixel coordinates of the screen display area where the pixel points are located.
The computer device 1 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device, and may include, but is not limited to, the processor 10 and the memory 20. Those skilled in the art will understand that the schematic diagram is merely an example of the computer device 1 for implementing the image processing method of the present invention and does not constitute a limitation on it; the computer device 1 may include more or fewer components than those shown, combine certain components, or use different components. For example, the computer device 1 may further include input and output devices, a network access device, a bus, and the like.
The processor 10 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 10 is the control center of the computer device 1 and connects the various parts of the whole computer device 1 through various interfaces and lines.
The memory 20 may be used to store the computer programs and/or modules, and the processor 10 implements the various functions of the computer device 1 by running or executing the computer programs and/or modules stored in the memory 20 and calling the data stored therein. The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like. In addition, the memory 20 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The computer device 1 may further include the image processing apparatus 100 shown in fig. 3, and the image processing apparatus 100 may be stored in the memory 20.
Illustratively, the computer program may be partitioned into one or more modules that are stored in the memory 20 and executed by the processor 10 to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program in the computer device 1. For example, the computer program may be divided into the image processing apparatus 100 shown in fig. 3, which includes the obtaining module 11, the determining module 12, the calculation module 13, the display module 14, and the merging processing module 15, whose specific functions are as follows:
the obtaining module 11 may be configured to obtain the pixel coordinates of the screen display area where the pixel points are located, where the screen display area includes a left-eye display area and a right-eye display area. It may also be configured to obtain the map coordinates corresponding to a frame picture of the video source.
The determining module 12 is configured to determine, according to the division data, the division range to which the screen abscissa value obtained from the pixel coordinates belongs, where the division range includes a left-eye viewing angle range corresponding to the left-eye display area and a right-eye viewing angle range corresponding to the right-eye display area.
The calculation module 13 is configured to calculate, using a preset function, the coordinate values obtained from the division data and the map coordinates to output a single-channel abscissa value and a single-channel ordinate value. Specifically, the abscissa values taken from the division data and the map coordinates may be calculated using a lerp function to output the single-channel abscissa value, and the ordinate values taken from the division data and the map coordinates may be calculated using a lerp function to output the single-channel ordinate value.
The display module 14 is configured to determine, according to the single-channel abscissa value and the single-channel ordinate value, the corresponding pixel points to take from the video source for display in the screen display area corresponding to the division range to which the screen abscissa value belongs.
The merging processing module 15 is configured to merge the single-channel abscissa value and the single-channel ordinate value into dual-channel UV data.
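The computation attributed to the calculation module 13 and the merging processing module 15 can be illustrated with a minimal Python sketch. This is only an interpretation under stated assumptions, not the patent's implementation: the function names `lerp` and `compute_uv`, and the example channel values, are illustrative; the division data's RGBA channels are assumed to hold the per-eye UV sub-rectangle (R = minimum abscissa, G = maximum abscissa, B = minimum ordinate, A = maximum ordinate), as described in the claims.

```python
def lerp(a, b, t):
    """Linear interpolation as in shader languages: a + (b - a) * t."""
    return a + (b - a) * t

def compute_uv(division_rgba, map_coord):
    """Map a texture (map) coordinate into the sub-rectangle described
    by the division data's RGBA channels.

    division_rgba: (r, g, b, a) = (u_min, u_max, v_min, v_max)
    map_coord:     (u, v) map coordinate of the current pixel, in [0, 1]
    """
    u_min, u_max, v_min, v_max = division_rgba
    u, v = map_coord
    # Calculation module 13: lerp each axis into a single-channel value.
    single_u = lerp(u_min, u_max, u)
    single_v = lerp(v_min, v_max, v)
    # Merging processing module 15: combine the two single-channel
    # values into dual-channel UV data.
    return (single_u, single_v)

# Hypothetical example: the left-eye region occupies the left half of
# the source frame, so the u range is [0.0, 0.5].
uv = compute_uv((0.0, 0.5, 0.0, 1.0), (0.5, 0.5))  # uv == (0.25, 0.5)
```

The resulting dual-channel UV pair is then what the display module would use to sample the corresponding pixel from the video source frame.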
If the modules integrated in the computer device 1 of the present invention are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, legislation and patent practice provide that computer-readable media do not include electrical carrier signals and telecommunications signals.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments and that the present invention may be embodied in other specific forms without departing from its spirit or essential attributes. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units, modules, or devices recited in a system, device, or terminal device claim may also be implemented by a single unit, module, or device through software or hardware. The terms first, second, and the like are used to denote names and do not denote any particular order.
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention.

Claims (7)

1. An image processing method applied to a virtual panoramic video, the method comprising:
acquiring pixel coordinates of a screen display area where pixel points are located, wherein the screen display area comprises a left-eye display area and a right-eye display area;
determining, according to division data, a division range to which a screen abscissa value acquired from the pixel coordinates belongs, wherein the division range includes a left-eye viewing angle range corresponding to the left-eye display area and a right-eye viewing angle range corresponding to the right-eye display area, and wherein the division data includes data stored using RGBA channels and a division line dividing the division range, the R channel storing a first abscissa minimum value, the G channel storing a first abscissa maximum value, the B channel storing a first ordinate minimum value, and the A channel storing a first ordinate maximum value, the division line being located in the middle between the first abscissa minimum value and the first abscissa maximum value;
acquiring map coordinates corresponding to a frame picture of a video source;
calculating, by using a preset function, coordinate values obtained from the division data and the map coordinates to output a single-channel abscissa value and a single-channel ordinate value; and
determining, according to the single-channel abscissa value and the single-channel ordinate value, corresponding pixel points to take from the video source for display in the screen display area corresponding to the division range to which the screen abscissa value belongs.
2. The image processing method according to claim 1, wherein the calculating, by using a preset function, the coordinate values obtained from the division data and the map coordinates to output a single-channel abscissa value and a single-channel ordinate value comprises:
calculating, by using a lerp function, the abscissa values taken from the division data and the map coordinates to output the single-channel abscissa value; and
calculating, by using a lerp function, the ordinate values taken from the division data and the map coordinates to output the single-channel ordinate value.
3. The image processing method according to claim 1, wherein the determining, according to the single-channel abscissa value and the single-channel ordinate value, corresponding pixel points to take from the video source for display in the screen display area corresponding to the division range to which the screen abscissa value belongs comprises:
taking, according to the single-channel abscissa value and the single-channel ordinate value, corresponding pixel points from a frame picture of the video source for display in the screen display area corresponding to the division range to which the screen abscissa value belongs.
4. The image processing method according to claim 1, wherein before the determining, according to the single-channel abscissa value and the single-channel ordinate value, corresponding pixel points to take from the video source for display in the screen display area corresponding to the division range to which the screen abscissa value belongs, the method further comprises:
merging the single-channel abscissa value and the single-channel ordinate value into dual-channel UV data;
and wherein the determining, according to the single-channel abscissa value and the single-channel ordinate value, corresponding pixel points to take from the video source for display in the screen display area corresponding to the division range to which the screen abscissa value belongs comprises:
determining, according to the dual-channel UV data, corresponding pixel points to take from a frame picture of the video source for display in the screen display area corresponding to the division range to which the screen abscissa value belongs.
5. An image processing apparatus applied to a virtual panoramic video, the apparatus comprising:
an obtaining module configured to obtain pixel coordinates of a screen display area where pixel points are located, wherein the screen display area comprises a left-eye display area and a right-eye display area;
a determining module configured to determine, according to division data, a division range to which a screen abscissa value acquired from the pixel coordinates belongs, wherein the division range includes a left-eye viewing angle range corresponding to the left-eye display area and a right-eye viewing angle range corresponding to the right-eye display area, and wherein the division data includes data stored using RGBA channels and a division line dividing the division range, the R channel being configured to store a first abscissa minimum value, the G channel a first abscissa maximum value, the B channel a first ordinate minimum value, and the A channel a first ordinate maximum value, the division line being located in the middle between the first abscissa minimum value and the first abscissa maximum value;
wherein the obtaining module is further configured to obtain map coordinates corresponding to a frame picture of a video source;
a calculation module configured to calculate, by using a preset function, the coordinate values obtained from the division data and the map coordinates to output a single-channel abscissa value and a single-channel ordinate value; and
a display module configured to determine, according to the single-channel abscissa value and the single-channel ordinate value, corresponding pixel points to take from the video source for display in the screen display area corresponding to the division range to which the screen abscissa value belongs.
6. A computer device, characterized in that the computer device comprises a processor configured to implement the steps of the image processing method according to any one of claims 1 to 4 when executing a computer program stored in a memory.
7. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 4.
CN201810602433.2A 2018-06-12 2018-06-12 Image processing method and device, computer device and readable storage medium Active CN108833877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810602433.2A CN108833877B (en) 2018-06-12 2018-06-12 Image processing method and device, computer device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810602433.2A CN108833877B (en) 2018-06-12 2018-06-12 Image processing method and device, computer device and readable storage medium

Publications (2)

Publication Number Publication Date
CN108833877A CN108833877A (en) 2018-11-16
CN108833877B true CN108833877B (en) 2020-02-18

Family

ID=64143772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810602433.2A Active CN108833877B (en) 2018-06-12 2018-06-12 Image processing method and device, computer device and readable storage medium

Country Status (1)

Country Link
CN (1) CN108833877B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070515B (en) * 2019-01-31 2020-06-30 北京字节跳动网络技术有限公司 Image synthesis method, apparatus and computer-readable storage medium
CN111626938B (en) * 2020-06-04 2023-04-07 Oppo广东移动通信有限公司 Image interpolation method, image interpolation device, terminal device, and storage medium
CN111914739A (en) * 2020-07-30 2020-11-10 深圳创维-Rgb电子有限公司 Intelligent following method and device, terminal equipment and readable storage medium
CN111949173B (en) * 2020-07-31 2022-02-15 广州启量信息科技有限公司 Panoramic VR (virtual reality) picture switching method and device, terminal equipment and storage medium
CN112104861B (en) * 2020-11-16 2021-03-19 首望体验科技文化有限公司 720 panoramic stereo video production method and device and related products
CN112203075B (en) * 2020-12-08 2021-04-06 首望体验科技文化有限公司 Three-dimensional square film video processing method, device and product based on 720 capsule type screen

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102243432A (en) * 2011-06-28 2011-11-16 浙江工业大学 Panoramic three-dimensional photographing device
CN102256111B (en) * 2011-07-17 2013-06-12 西安电子科技大学 Multi-channel panoramic video real-time monitoring system and method
CN102291527B (en) * 2011-08-11 2014-02-12 杭州海康威视数字技术股份有限公司 Panoramic video roaming method and device based on single fisheye lens
US11019257B2 (en) * 2016-05-19 2021-05-25 Avago Technologies International Sales Pte. Limited 360 degree video capture and playback
CN106527857A (en) * 2016-10-10 2017-03-22 成都斯斐德科技有限公司 Virtual reality-based panoramic video interaction method

Also Published As

Publication number Publication date
CN108833877A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108833877B (en) Image processing method and device, computer device and readable storage medium
US11076142B2 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
JP4214976B2 (en) Pseudo-stereoscopic image creation apparatus, pseudo-stereoscopic image creation method, and pseudo-stereoscopic image display system
CA2927046A1 (en) Method and system for 360 degree head-mounted display monitoring between software program modules using video or image texture sharing
US10553014B2 (en) Image generating method, device and computer executable non-volatile storage medium
US20130027389A1 (en) Making a two-dimensional image into three dimensions
CN109510975B (en) Video image extraction method, device and system
CN108076208B (en) Display processing method and device and terminal
CN105611267B (en) Merging of real world and virtual world images based on depth and chrominance information
CN111612878B (en) Method and device for making static photo into three-dimensional effect video
CN112446939A (en) Three-dimensional model dynamic rendering method and device, electronic equipment and storage medium
CN107390379A (en) A kind of nearly eye hologram three-dimensional display system and display methods
TW201701051A (en) Panoramic stereoscopic image synthesis method, apparatus and mobile terminal
CN102026012B (en) Generation method and device of depth map through three-dimensional conversion to planar video
US20130210520A1 (en) Storage medium having stored therein game program, game apparatus, game system, and game image generation method
CN114742703A (en) Method, device and equipment for generating binocular stereoscopic panoramic image and storage medium
CN111327886B (en) 3D light field rendering method and device
CN112752085A (en) Naked eye 3D video playing system and method based on human eye tracking
CN108124148A (en) A kind of method and device of the multiple view images of single view video conversion
CN111231826B (en) Control method, device and system for vehicle model steering lamp in panoramic image and storage medium
JP4214528B2 (en) Pseudo stereoscopic image generation apparatus, pseudo stereoscopic image generation program, and pseudo stereoscopic image display system
CN113592990A (en) Three-dimensional effect generation method, device, equipment and medium for two-dimensional image
WO2018000610A1 (en) Automatic playing method based on determination of image type, and electronic device
CN115442580B (en) Naked eye 3D picture effect processing method for portable intelligent equipment
CN112004162B (en) Online 3D content playing system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Image processing method and device, computer device and readable storage medium

Effective date of registration: 20221102

Granted publication date: 20200218

Pledgee: Chongqing Longshang financing Company Limited by Guarantee

Pledgor: CHONGQING IVREAL TECHNOLOGY CO.,LTD.

Registration number: Y2022500000092

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20231018

Granted publication date: 20200218

Pledgee: Chongqing Longshang financing Company Limited by Guarantee

Pledgor: CHONGQING IVREAL TECHNOLOGY CO.,LTD.

Registration number: Y2022500000092