CN113592712A - Image processing method, device, equipment, storage medium and cloud VR system - Google Patents

Image processing method, device, equipment, storage medium and cloud VR system

Info

Publication number
CN113592712A
Authority
CN
China
Prior art keywords
image
pixel coordinate
mapping
pixel
mapping relation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110878321.1A
Other languages
Chinese (zh)
Inventor
孙志鹏
陆嘉鸣
吉昌
贺翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110878321.1A priority Critical patent/CN113592712A/en
Publication of CN113592712A publication Critical patent/CN113592712A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

Image processing method, device, equipment, storage medium and cloud VR system. The present disclosure provides an image processing method, relating to the field of computer technology, in particular to the fields of cloud computing and computer vision, and applicable to scenarios such as cloud VR systems and VR devices. A specific implementation scheme is as follows: acquiring device pose information uploaded by a terminal device; performing image rendering based on the acquired device pose information to obtain a first image, where the first image includes a first image area and a second image area; performing distortion processing on the first image so that the image resolution of the first image area remains unchanged and the image resolution of the second image area is reduced, thereby obtaining a second image; encoding the second image to obtain corresponding encoding information; and transmitting the encoding information to the terminal device so that the terminal device performs image display based on the encoding information.

Description

Image processing method, device, equipment, storage medium and cloud VR system
Technical Field
The present disclosure relates to the field of computer technology, and in particular to the fields of cloud computing and computer vision; it can be used in scenarios such as cloud VR systems and VR devices. More particularly, it relates to an image processing method, an image processing apparatus, a device, a storage medium, a cloud VR system, and a VR device.
Background
With the development of Virtual Reality (VR), 5G and other technologies, building a Cloud Virtual Reality (Cloud VR) system has become increasingly feasible and valuable. With such a system, a user wearing only a lightweight VR terminal device can remotely use various cloud VR applications. For a typical Cloud VR system, the quality of the user experience can be measured by several parameters, the most important of which include overall delay, field of view, and image resolution.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, device, storage medium, cloud VR system, VR device, and computer program product.
According to an aspect of the present disclosure, there is provided an image processing method including: acquiring equipment pose information uploaded by terminal equipment; performing image rendering based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area; carrying out distortion processing on the first image so as to enable the image resolution of the first image area to be kept unchanged and enable the image resolution of the second image area to be reduced, and thus obtaining a second image; coding the second image to obtain corresponding coding information; and transmitting the coding information to the terminal equipment so that the terminal equipment can display images based on the coding information.
According to another aspect of the present disclosure, there is provided an image processing method including: acquiring equipment pose information of terminal equipment; uploading the acquired device pose information to a server, so that the server executes the following operations: performing image rendering based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area; performing first distortion processing on the first image so as to enable the image resolution of the first image area to be kept unchanged and enable the image resolution of the second image area to be reduced, and thus obtaining a second image; coding the second image to obtain corresponding coding information and transmitting the coding information to the terminal equipment; and displaying images based on the coding information returned by the server.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: the acquisition module is used for acquiring the equipment pose information uploaded by the terminal equipment; the image rendering module is used for rendering an image based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area; the image distortion module is used for carrying out distortion processing on the first image so as to enable the image resolution of the first image area to be kept unchanged and the image resolution of the second image area to be reduced, and therefore a second image is obtained; the image coding module is used for coding the second image to obtain corresponding coding information; and the coding transmission module is used for transmitting the coding information to the terminal equipment so that the terminal equipment can display images based on the coding information.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: the acquisition module is used for acquiring the equipment pose information of the terminal equipment; the transmission module is used for uploading the acquired equipment pose information to the server side, so that the server side executes the following operations: performing image rendering based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area; performing first distortion processing on the first image so as to enable the image resolution of the first image area to be kept unchanged and enable the image resolution of the second image area to be reduced, and thus obtaining a second image; coding the second image to obtain corresponding coding information and transmitting the coding information to the terminal equipment; and the display module is used for displaying images based on the coding information returned by the server.
According to another aspect of the present disclosure, there is provided a cloud VR system, comprising: the device of the embodiment of the disclosure.
According to another aspect of the present disclosure, there is provided a VR device comprising: the device of the embodiment of the disclosure.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements a method according to embodiments of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 illustrates a system architecture suitable for embodiments of the present disclosure;
FIG. 2 illustrates a flow chart of an image processing method according to an embodiment of the present disclosure;
fig. 3A and 3B illustrate schematic diagrams of image distortion according to embodiments of the present disclosure;
FIGS. 4A and 4B illustrate schematic diagrams of pixel coordinate mapping relationships according to embodiments of the present disclosure;
FIG. 5A illustrates a schematic diagram of an original image according to an embodiment of the present disclosure;
FIG. 5B is a graph illustrating the effect of linear distortion based on FIG. 5A;
FIG. 5C is a graph illustrating the effect of linear distortion reduction based on FIG. 5B;
FIG. 5D is a graph illustrating the effect of quadratic distortion based on FIG. 5A;
FIG. 5E is a graph illustrating the effect of quadratic distortion reduction based on FIG. 5D;
FIG. 6 illustrates a flow diagram of an image processing method according to another embodiment of the present disclosure;
fig. 7 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 8 illustrates a block diagram of an image processing apparatus according to another embodiment of the present disclosure; and
FIG. 9 illustrates a block diagram of an electronic device used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be appreciated that in a Cloud VR project, images are rendered primarily in the cloud. A certain delay exists between the moment the cloud finishes rendering an image and the moment the VR terminal device at the user end displays it; if the VR terminal device worn by the user rotates during this interval, black edges may appear in the image.
In this regard, in some embodiments, black edges can be avoided by the following operation: when the image is rendered in the cloud, the field of view is expanded, so that when ATW (asynchronous time warping) is performed, more redundant space is available around the image to fill the parts that might otherwise show black edges, thereby alleviating the black-edge problem.
In addition, in other embodiments, besides expanding the field of view, the encoding and decoding delay can be reduced after rendering by lowering the image resolution, which reduces the overall delay and further reduces the possibility of black edges in the image.
However, the inventors have found in practical use that when the resolution of the image is reduced beyond a certain degree, the user can clearly perceive that the image becomes blurred; whereas if the resolution of the image is not reduced, the overall delay stays high and the problem of black edges in the image cannot be overcome.
In this regard, embodiments of the present disclosure provide a solution that substantially ensures image sharpness at lower image resolutions.
The main inventive concept of the disclosed embodiments is as follows. When a Cloud VR system is actually in use, the user mainly focuses on the middle of the picture and rarely on its periphery, especially the part beyond the original field of view. The scheme therefore adopts image distortion, that is, non-uniform scaling: the middle of the picture is scaled as little as possible, while the periphery is scaled according to a chosen rule. This reduces the image resolution used for encoding and decoding, avoids black edges in the image, and keeps the middle of the picture as sharp as possible.
The present disclosure will be described in detail below with reference to the drawings and specific embodiments.
A system architecture suitable for the image processing method and apparatus of the embodiments of the present disclosure is described below.
FIG. 1 illustrates a system architecture suitable for embodiments of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be used in other environments or scenarios.
As shown in fig. 1, the system architecture 100 may include: VR device 101 (a VR terminal device), VR device 102, VR device 103, and cloud VR 104 (a cloud VR system).
In this disclosure, each VR device may record its pose information in real time and upload it to the cloud VR 104. The cloud VR 104 may render an image according to the device pose information uploaded by each VR device and then distort the rendered image, for example keeping the image resolution of the middle portion of the picture unchanged while reducing the image resolution of the peripheral portion. After obtaining the distorted image, it performs image encoding to obtain the corresponding encoding information and transmits that information to the corresponding VR device for image display.
It should be appreciated that in the embodiment of the present disclosure, since the image resolution of the distorted image is reduced compared to the original image, the image encoding and decoding delay and the image transmission delay can be reduced, and thus the image at the VR device side can be prevented from generating a black edge.
It should also be understood that in the embodiments of the present disclosure, the distorted image only reduces the image resolution of the peripheral portion of the picture compared to the original image, and does not change the image resolution of the middle portion of the picture, so that the sharpness of the middle portion of the image can be ensured. The middle part of the image is usually the part which is focused by the user, and the peripheral part of the image is usually the part which is not focused by the user, so the scheme can improve the user experience.
In addition, in the embodiment of the present disclosure, after each VR device receives the encoding information returned by the cloud VR 104, it may decode to obtain a corresponding distorted image, and then perform corresponding distortion processing on the distorted image, so as to restore a target image having the same image size as the original image and display the target image.
In the embodiment of the present disclosure, the image distortion processing performed by the cloud VR 104 and the image distortion processing performed by the VR device are inverse operations.
It should be understood that the number of VR devices in fig. 1 is merely illustrative. There may be any number of VR devices, as desired for implementation.
Application scenarios of the image processing method and apparatus suitable for the embodiments of the present disclosure are introduced below.
The image definition optimization scheme provided by the embodiment of the disclosure can be used in application scenes such as any cloud VR system and VR equipment, and can perform image definition optimization on images and image frames in video streams.
According to an embodiment of the present disclosure, there is provided an image processing method.
Fig. 2 illustrates a flowchart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the image processing method 200 may include: operations S210 to S250.
In operation S210, device pose information uploaded by the terminal device is acquired.
In operation S220, image rendering is performed based on the acquired device pose information to obtain a first image, where the first image includes a first image area and a second image area.
In operation S230, the first image is subjected to a distortion process such that the image resolution of the first image region remains unchanged and the image resolution of the second image region is reduced, thereby obtaining a second image.
In operation S240, the second image is encoded, resulting in corresponding encoding information.
In operation S250, the encoding information is transmitted to the terminal device so that the terminal device performs image display based on the encoding information.
It should be understood that, in the embodiment of the present disclosure, the terminal device may be a VR terminal device. In addition, in the embodiment of the present disclosure, the image processing method 200 may be applied to a cloud, such as a cloud VR system side, to achieve the purpose of optimizing the image definition.
In the embodiment of the disclosure, the terminal device can record its pose information in real time and upload it to the cloud. The cloud renders an image according to the device pose information uploaded by each terminal device and distorts the rendered image, i.e., the image resolution of a specific portion of the picture is kept unchanged while the image resolution of the rest of the picture is reduced. After the distorted image is obtained, image encoding is performed, and the corresponding encoding information is obtained and transmitted to the terminal device, so that image display is completed on the terminal device.
It should be appreciated that in the embodiment of the present disclosure, since the image resolution of the distorted image (i.e., the image obtained by distorting the original image) is reduced compared to the original image (i.e., the image obtained by rendering the image), the image coding and decoding delay and the image transmission delay can be reduced, and thus the black edge of the image displayed on the terminal device can be avoided.
It should also be understood that in the embodiments of the present disclosure, compared with the original image, the distorted image merely reduces the resolution of the picture outside the specific image portion, without changing the image resolution of the specific image portion itself, and thus the sharpness of the specific image portion can be ensured. Since the specific image portion is usually the part the user focuses on, while the other portions usually receive little of the user's attention, the scheme can improve the user experience.
In addition, in the embodiment of the present disclosure, after receiving the coding information returned by the cloud, the terminal device may decode to obtain a corresponding distorted image, and then perform corresponding distortion processing on the distorted image, so as to restore a target image having the same image size as the original image and display the target image.
It should be noted that, in the embodiment of the present disclosure, the image distortion processing performed by the cloud and the image distortion processing performed on the terminal device are inverse operations to each other. During specific implementation, corresponding image distortion algorithms can be added to the cloud side and the terminal equipment side, so that the image is subjected to distortion processing.
It should be understood that the conversion from a representation of a three-dimensional solid scene to a two-dimensional rasterized and latticed representation is an image rendering. The unit of image resolution is PPI (pixels per inch). The image resolution may be expressed as the number of horizontal pixels × the number of vertical pixels. The image resolution is also referred to as image size, pixel size, recording resolution, and the like.
With the embodiment of the present disclosure, after the image rendering, the distortion processing operation is performed on the image first, and then the image encoding operation is performed, that is, the image resolution of the image part concerned by the user is kept unchanged, and the image resolution of the image part not concerned by the user is reduced. Therefore, the black edges of the image can be avoided, and the parts of the image, which are concerned by a user, can have higher definition, so that the user experience can be improved.
This scheme is based on the actual demand for user-experience optimization encountered in Cloud VR projects and, combined with the practical circumstances of such projects, provides a way to optimize image sharpness, so that black edges in the image can be avoided, the image definition is noticeably improved, and the user experience is enhanced.
In other words, the scheme can ensure that the experience of most users is unchanged under the condition of effectively reducing the image resolution. After the technical scheme is used, the image resolution is reduced, so that the overall delay of the Cloud VR system can be reduced. Measurements show that the overall delay can be reduced from 110ms to 50 ms. Meanwhile, after the technical scheme is used, the image resolution of the image part concerned by the user in the image is not changed, so that the definition of the image part can be ensured, and the user experience is obviously improved.
As an alternative embodiment, wherein: the first image area in the first image is positioned in the middle of the first image; and the corresponding second image area surrounds the first image area, i.e. the second image area is located at the peripheral position of the first image.
It should be understood that, in actual use, the frame portion located in the middle of the image is usually the portion of the image that the user focuses on, and the frame portion located in the periphery of the image is usually not the portion of the image that the user focuses on, or even the frame portion that the user does not focus on.
Therefore, in the embodiment of the present disclosure, when the image is distorted, the image resolution of the image in the middle of the image can be kept unchanged, and the image resolution of the image in the peripheral position of the image is reduced, so as to reduce the image encoding and decoding delay and the image transmission delay, thereby preventing the image displayed on the terminal device from generating black edges. In addition, the scheme can ensure that the picture part at the middle position of the image has higher definition, thereby improving the user experience.
Further, as an alternative embodiment, during the distortion processing of the first image, the image resolution of the first image area is kept unchanged by the following operation to obtain the first image area in the second image.
Obtaining each first pixel coordinate (u1, v1) within the first image region in the first image, where U11 ≤ u1 ≤ U12 and V11 ≤ v1 ≤ V12.
And acquiring a first linear mapping relation and a second linear mapping relation.
Mapping the u coordinate of each first pixel coordinate (u1, v1) according to the first linear mapping relation, and mapping the v coordinate of each first pixel coordinate (u1, v1) according to the second linear mapping relation, so as to obtain each corresponding first pixel coordinate (u2, v2), where U21 ≤ u2 ≤ U22 and V21 ≤ v2 ≤ V22.
The pixel points corresponding to each first pixel coordinate (u2, v2) form the first image region in the second image.
In the embodiment of the present disclosure, for a first image region in a first image, a corresponding linear mapping relationship may be adopted, and a u coordinate and a v coordinate in each pixel coordinate in the first image region are respectively mapped into a first image region in a second image.
It should be understood that in embodiments of the present disclosure, the linear mapping relationships employed for the u and v coordinates may be the same or different. That is, the first linear mapping relationship and the second linear mapping relationship may be the same or different.
Further, as an alternative embodiment, in the process of performing the distortion processing on the first image, the image resolution of the second image area in the first image is reduced to obtain the second image area in the second image by the following operation.
Obtaining each second pixel coordinate (u1, v1) within the second image region in the first image, where 0 ≤ u1 < U11 and 0 ≤ v1 < V11.
Obtaining each third pixel coordinate (u1, v1) within the second image region in the first image, where U12 < u1 ≤ 1 and V12 < v1 ≤ 1.
And acquiring a third linear mapping relation and a fourth linear mapping relation, as well as a fifth linear mapping relation and a sixth linear mapping relation.
Mapping the u coordinate of each second pixel coordinate (u1, v1) according to the third linear mapping relation, and mapping the v coordinate of each second pixel coordinate (u1, v1) according to the fourth linear mapping relation, so as to obtain each corresponding second pixel coordinate (u2, v2), where 0 ≤ u2 < U21 and 0 ≤ v2 < V21.
Mapping the u coordinate of each third pixel coordinate (u1, v1) according to the fifth linear mapping relation, and mapping the v coordinate of each third pixel coordinate (u1, v1) according to the sixth linear mapping relation, so as to obtain each corresponding third pixel coordinate (u2, v2), where U22 < u2 ≤ 1 and V22 < v2 ≤ 1.
The pixel points corresponding to each second pixel coordinate (u2, v2) and each third pixel coordinate (u2, v2) form the second image region in the second image.
Or, as a further alternative embodiment, in the process of performing the distortion processing on the first image, the image resolution of the second image area in the first image is reduced by the following operation to obtain the second image area in the second image.
Obtaining each second pixel coordinate (u1, v1) within the second image region in the first image, where 0 ≤ u1 < U11 and 0 ≤ v1 < V11.
Obtaining each third pixel coordinate (u1, v1) within the second image region in the first image, where U12 < u1 ≤ 1 and V12 < v1 ≤ 1.
And acquiring a first nonlinear mapping relation and a second nonlinear mapping relation, as well as a third nonlinear mapping relation and a fourth nonlinear mapping relation.
Mapping the u coordinate of each second pixel coordinate (u1, v1) according to the first nonlinear mapping relation, and mapping the v coordinate of each second pixel coordinate (u1, v1) according to the second nonlinear mapping relation, so as to obtain each corresponding second pixel coordinate (u2, v2), where 0 ≤ u2 < U21 and 0 ≤ v2 < V21.
Mapping the u coordinate of each third pixel coordinate (u1, v1) according to the third nonlinear mapping relation, and mapping the v coordinate of each third pixel coordinate (u1, v1) according to the fourth nonlinear mapping relation, so as to obtain each corresponding third pixel coordinate (u2, v2), where U22 < u2 ≤ 1 and V22 < v2 ≤ 1.
The pixel points corresponding to each second pixel coordinate (u2, v2) and each third pixel coordinate (u2, v2) form the second image region in the second image.
In the embodiment of the present disclosure, for the second image region in the first image, a corresponding linear mapping relationship or a non-linear mapping relationship (such as a unitary quadratic mapping relationship) may be adopted, and the u coordinate and the v coordinate in each pixel coordinate thereof are respectively mapped into the second image region in the second image.
It should be understood that in embodiments of the present disclosure, the linear mapping relationships employed for the u and v coordinates may be the same or different.
It should also be appreciated that in the disclosed embodiments, when a linear mapping relationship is used for the mapping, the final restored image may show an obvious quality boundary; when a nonlinear mapping relationship is used, the final restored image has no obvious quality boundary.
For example, as shown in fig. 3A and 3B, the coordinates of each pixel in the shaded image portion of fig. 3A may be mapped according to the method provided by the embodiment of the present disclosure to obtain the coordinates of each pixel in the shaded image portion of fig. 3B.
As shown in fig. 3A, the image illustrated therein may be denoted as an image P1; as shown in fig. 3B, the image illustrated therein may be denoted as an image P2.
Here, as shown, image P1 is the image rendered by the server; W1 and H1 denote the width and height of its image resolution, respectively. Image P2 is the image used for encoding/decoding (i.e., the image obtained by distorting image P1); W2 and H2 denote the width and height of its image resolution, respectively. X0 and Y0 denote the width and height of the image resolution of the middle portion of the picture (the shaded image portion), i.e., the portion whose image resolution is not scaled. X1 and Y1 denote the width and height of the image resolution of the scaled peripheral portion on image P1. X2 and Y2 denote the width and height of the image resolution of the scaled peripheral portion on image P2.
It should be understood that the variables described above satisfy the following logical relationships:
W1 > W2 > X0;
H1 > H2 > Y0;
X1 > X2;
Y1 > Y2.
Then: X1*2 + X0 = W1, and X2*2 + X0 = W2.
Let (u1, v1) be the uv coordinate of any pixel point on image P1; both u1 and v1 lie in the interval [0, 1].
Let (u2, v2) be the uv coordinate of any pixel point on image P2; both u2 and v2 lie in the interval [0, 1].
Regarding the boundaries of the uv coordinates between the middle picture portion and the peripheral picture portion in the two images P1 and P2, denote them by U11, U12, U21 and U22, which satisfy the following relationships:
U11 = X1/W1;
U12 = (X1+X0)/W1;
U21 = X2/W2;
U22 = (X2+X0)/W2.
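To make these relationships concrete, the following minimal Python sketch (an illustration only, not part of the patent; the pixel values are assumptions chosen for the example) computes the four boundaries:

    # Sketch: uv boundaries from assumed resolution parameters.
    W1, X0 = 2560, 1280      # assumed rendered width W1 and unscaled middle width X0
    X1 = (W1 - X0) // 2      # scaled side-strip width on image P1, from X1*2 + X0 = W1
    X2 = 320                 # assumed scaled side-strip width on image P2
    W2 = X2 * 2 + X0         # distorted width, from X2*2 + X0 = W2

    U11 = X1 / W1            # 0.25   left boundary of the middle region on P1
    U12 = (X1 + X0) / W1     # 0.75   right boundary of the middle region on P1
    U21 = X2 / W2            # ~0.167 left boundary of the middle region on P2
    U22 = (X2 + X0) / W2     # ~0.833 right boundary of the middle region on P2

The same construction applies along the vertical axis, with H1, H2, Y0, Y1 and Y2 giving boundaries V11, V12, V21 and V22.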
In the embodiment of the present disclosure, when compressing image P1 into image P2, consider how to convert u2 into u1 (i.e., how to express u1 in terms of u2), that is, how to solve the equation u1 = f(u2). Referring to fig. 4A and 4B, this needs to be considered over 3 intervals:
If U21 ≤ u2 ≤ U22, f is a linear equation (so that the image resolution of the middle portion of the picture stays constant) passing through the points (U21, U11) and (U22, U12).
Then the equation can be found as:
u1=(U11-U12)/(U21-U22)*u2+(U12*U21-U11*U22)/(U21-U22)
The slope of this equation is:
k0 = (U11-U12)/(U21-U22).
If 0 ≤ u2 < U21, the pixel points of the left edge portion in the image P1 may be coordinate mapped in some manner (e.g., a linear equation or a one-variable quadratic equation).
(1) If a linear equation is used, f passes through the points (0, 0) and (U21, U11), and the equation can be found as:
u1 = u2*(U11/U21);
however, in this way the piecewise function is not differentiable at the point (U21, U11) with respect to the segment on the interval [U21, U22], and the final restored image may show an obvious image quality boundary.
(2) If a one-variable quadratic equation is adopted, the pixel points can be concentrated as much as possible toward the middle, and an obvious image quality boundary in the final restored image can be avoided. Here f passes through the points (0, 0) and (U21, U11), and has slope k0 at the point (U21, U11) (which guarantees differentiability at (U21, U11) with respect to the segment on the interval [U21, U22]). The quadratic equation can then be solved as:
a = (U21*k0 - U11)/U21^2;
b = k0 - 2*U21*a;
c = 0;
u1 = a*u2^2 + b*u2 + c.
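The following short continuation of the Python sketch above (again illustrative only) verifies numerically that this quadratic segment meets the middle segment with matching value and slope:

    k0 = (U11 - U12) / (U21 - U22)            # slope of the middle linear segment

    # Left-edge quadratic through (0, 0) and (U21, U11), with slope k0 at U21.
    a = (U21 * k0 - U11) / U21 ** 2
    b = k0 - 2 * U21 * a
    f_left = lambda u2: a * u2 ** 2 + b * u2  # c = 0

    assert abs(f_left(U21) - U11) < 1e-12     # value matches the middle segment at U21
    assert abs(2 * a * U21 + b - k0) < 1e-12  # slope matches too, so there is no kink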
If U22 < u2 ≤ 1, the situation is symmetric with the interval 0 ≤ u2 < U21, and the equation is again derived in two ways (i.e., a linear equation and a one-variable quadratic equation):
(1) If a linear equation is used, f passes through the points (U22, U12) and (1, 1), and the equation can be found as:
u1 = u2*((U12-1)/(U22-1)) + (1 - (U12-1)/(U22-1)).
(2) If a one-variable quadratic equation is used, f passes through the points (U22, U12) and (1, 1), and has slope k0 at the point (U22, U12) (which guarantees differentiability at (U22, U12) with respect to the segment on the interval [U21, U22]). The quadratic equation can then be solved as:
a = ((U22-1)*k0 + 1 - U12)/(U22-1)^2;
b = k0 - 2*a*U22;
c = 1 - a - b;
u1 = a*u2^2 + b*u2 + c.
In conclusion, the equation u1 = f(u2) for converting u2 into u1 is obtained over all three intervals.
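Putting the three intervals together, a minimal sketch of the full mapping (continuing the assumed values above and using the quadratic variant for both edge intervals) could look like this:

    def f(u2):
        # Map a uv coordinate on image P2 to the corresponding coordinate on image P1.
        if U21 <= u2 <= U22:                    # middle region: linear, slope k0
            return k0 * u2 + (U12 * U21 - U11 * U22) / (U21 - U22)
        if u2 < U21:                            # left edge: quadratic through (0,0), (U21,U11)
            a = (U21 * k0 - U11) / U21 ** 2
            b = k0 - 2 * U21 * a
            return a * u2 ** 2 + b * u2
        a = ((U22 - 1) * k0 + 1 - U12) / (U22 - 1) ** 2   # right edge: through (U22,U12), (1,1)
        b = k0 - 2 * a * U22
        c = 1 - a - b
        return a * u2 ** 2 + b * u2 + c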
The equations for converting v2 into v1 are similar and are omitted here.
The linear equation is compared with a one-dimensional quadratic equation as follows.
As shown in fig. 4A and 4B, fig. 4A plots the linear equations: the piecewise map is not differentiable at the boundaries between regions, which may result in an obvious quality boundary in the restored image. Fig. 4B plots the quadratic equations: the map is differentiable at the boundaries between regions, so the restored image has no obvious quality boundary.
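The difference can be made explicit with the example boundary values assumed in the sketches above. For the linear variant, the slope of the left edge segment approaching U21 is U11/U21 = 0.25/0.167 ≈ 1.5, while the slope of the middle segment is k0 = (U11-U12)/(U21-U22) = 0.75; the two are generally unequal, so the piecewise map is continuous but not differentiable at U21, producing the kink of fig. 4A. The quadratic variant is constructed with f(U21) = U11 and slope k0 at U21, so the slope is continuous across the boundary, as in fig. 4B.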
Further, when image P2 is restored to an image having the same image size as image P1, consider how to convert u1 into u2, i.e., u2 = g(u1), where g is the inverse function of f. Then:
If U11 ≤ u1 ≤ U12, then:
u2 = (u1*(U21-U22) - (U12*U21 - U11*U22))/(U11-U12).
If 0 ≤ u1 < U11, then:
a = (U21*k0 - U11)/U21^2;
b = k0 - 2*U21*a;
c = -u1;
u2 = (-b + sqrt(b^2 - 4*a*c))/(2*a).
If U12 < u1 ≤ 1, then:
a = ((U22-1)*k0 + 1 - U12)/(U22-1)^2;
b = k0 - 2*a*U22;
c = 1 - a - b - u1;
u2 = (-b + sqrt(b^2 - 4*a*c))/(2*a).
by the above calculation, u can be obtained1And u2And converting the two into each other. Similarly, v can be obtained1And v2And converting the two into each other. The interconversion between the normal image and the distorted image can be realized through the formula.
As shown in fig. 5A to 5E, fig. 5A is an original image; fig. 5B is a low-resolution image obtained after reduction using the linear-equation distortion algorithm; fig. 5C is the image obtained by restoring fig. 5B; fig. 5D is a low-resolution image obtained after reduction using the one-variable quadratic distortion algorithm; and fig. 5E is the image obtained by restoring fig. 5D.
It can be seen that, compared with fig. 5A, fig. 5C keeps the overall content unchanged, with a clear middle and a blurred periphery; fig. 5E shows the same effect relative to fig. 5A. The difference is that fig. 5C contains an obvious image quality boundary, while fig. 5E does not.
According to an embodiment of the present disclosure, the present disclosure provides another image processing method.
Fig. 6 illustrates a flowchart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 6, the image processing method 600 may include: operations S610 to S630.
In operation S610, device pose information of the terminal device is acquired.
In operation S620, the obtained device pose information is uploaded to the server, so that the server performs the following operations: performing image rendering based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area; performing first distortion processing on the first image so as to keep the image resolution of the first image area unchanged and reduce the image resolution of the second image area, thereby obtaining a second image; and coding the second image to obtain corresponding coding information and transmitting the coding information to the terminal equipment.
In operation S630, an image is displayed based on the encoded information returned by the server.
It should be understood that, in the embodiment of the present disclosure, the terminal device may be a VR terminal device (abbreviated as VR device). In addition, in the embodiment of the present disclosure, the image processing method 600 may be applied to a terminal device side, such as a VR terminal device, to achieve the purpose of optimizing the image definition.
It should also be understood that, in the embodiment of the present disclosure, the above operations performed by the server (i.e., the cloud end) may refer to the detailed description in the foregoing related embodiments in the present disclosure, and the present disclosure is not repeated herein.
In addition, in the embodiment of the present disclosure, the above operations performed by the terminal device may also refer to the specific descriptions in the foregoing related embodiments in the present disclosure, and the present disclosure is not repeated herein.
With the embodiment of the present disclosure, after the image rendering, the distortion processing operation is performed on the image first, and then the image encoding operation is performed, that is, the image resolution of the image part concerned by the user is kept unchanged, and the image resolution of the image part not concerned by the user is reduced. Therefore, the black edges of the image can be avoided, and the parts of the image, which are concerned by a user, can have higher definition, so that the user experience can be improved.
As an alternative embodiment, displaying the image based on the encoded information returned by the server may include performing the following operations by the terminal device.
And decoding the coded information returned by the server to obtain a third image.
And carrying out second distortion processing on the third image to obtain a fourth image, wherein the second distortion processing and the first distortion processing are inverse operations.
And displaying the fourth image.
In the embodiment of the present disclosure, when displaying an image, the decoded image may be restored to an image having the same image size as the original image (i.e., the image obtained by the server through image rendering) by applying to the decoded image the inverse of the distortion processing used before encoding. The finally displayed image keeps the picture portion that the user focuses on relatively clear while the other portions are relatively blurred, for example clear in the middle and blurred at the periphery.
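As a rough terminal-side sketch of this restoration (illustrative only: it reuses the g function from the sketch above for both axes, i.e., it assumes the same boundary fractions horizontally and vertically, and uses nearest-neighbor sampling where a real client would typically resample on the GPU with filtering):

    import numpy as np

    def restore(p2, w1, h1):
        # Restore a decoded distorted image p2 (H2 x W2 [x C]) to w1 x h1 pixels.
        h2, w2 = p2.shape[:2]
        # For every output pixel centre, look up its source coordinate on P2.
        xs = np.clip([int(g((x + 0.5) / w1) * w2) for x in range(w1)], 0, w2 - 1)
        ys = np.clip([int(g((y + 0.5) / h1) * h2) for y in range(h1)], 0, h2 - 1)
        return p2[np.ix_(ys, xs)]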
It should be noted that, in other embodiments, the implementation of the video codec may also be modified to reduce the delay for a specific video stream or image, so as to prevent the image from having a black edge, and ensure the definition of a picture portion of the image that is focused on. The object of the invention disclosed herein can also be achieved thereby. This aspect is not elaborated on by the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides an image processing apparatus.
Fig. 7 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the image processing apparatus 700 may include: an acquisition module 710, an image rendering module 720, an image distortion module 730, an image encoding module 740, and an encoding transmission module 750.
The obtaining module 710 is configured to obtain device pose information uploaded by the terminal device.
And an image rendering module 720, configured to perform image rendering based on the acquired device pose information to obtain a first image, where the first image includes a first image area and a second image area.
The image distortion module 730 is configured to perform distortion processing on the first image, so as to maintain the image resolution of the first image area unchanged and reduce the image resolution of the second image area, thereby obtaining a second image.
The image encoding module 740 is configured to encode the second image to obtain corresponding encoding information.
And a code transmission module 750 for transmitting the code information to the terminal device so that the terminal device displays an image based on the code information.
As an alternative embodiment, wherein: the first image area is positioned in the middle of the first image; and the second image area surrounds the first image area and is positioned at the peripheral position of the first image.
As an alternative embodiment, the image distortion module includes a first image distortion module configured to, during the distortion processing of the first image, keep the image resolution of the first image area unchanged to obtain the first image area in the second image through the following units: a first pixel coordinate acquisition unit for acquiring each first pixel coordinate (u1, v1) within the first image region in the first image, where U11 ≤ u1 ≤ U12 and V11 ≤ v1 ≤ V12; a first mapping relation acquisition unit for acquiring a first linear mapping relation and a second linear mapping relation; a first pixel coordinate mapping unit for mapping the u coordinate of each first pixel coordinate (u1, v1) according to the first linear mapping relation and mapping the v coordinate of each first pixel coordinate (u1, v1) according to the second linear mapping relation, so as to obtain each corresponding first pixel coordinate (u2, v2), where U21 ≤ u2 ≤ U22 and V21 ≤ v2 ≤ V22; and a first image area composing unit for composing the first image area in the second image from the pixel points corresponding to each first pixel coordinate (u2, v2).
As an optional embodiment, the image distortion module further includes a second image distortion module configured to, during the distortion processing of the first image, reduce the image resolution of the second image area in the first image to obtain the second image area in the second image through the following units: a second pixel coordinate acquisition unit for acquiring each second pixel coordinate (u1, v1) within the second image region in the first image, where 0 ≤ u1 < U11 and 0 ≤ v1 < V11; a third pixel coordinate acquisition unit for acquiring each third pixel coordinate (u1, v1) within the second image region in the first image, where U12 < u1 ≤ 1 and V12 < v1 ≤ 1; a second mapping relation acquisition unit for acquiring a third linear mapping relation and a fourth linear mapping relation, as well as a fifth linear mapping relation and a sixth linear mapping relation; a second pixel coordinate mapping unit for mapping the u coordinate of each second pixel coordinate (u1, v1) according to the third linear mapping relation and mapping the v coordinate of each second pixel coordinate (u1, v1) according to the fourth linear mapping relation, so as to obtain each corresponding second pixel coordinate (u2, v2), where 0 ≤ u2 < U21 and 0 ≤ v2 < V21; a third pixel coordinate mapping unit for mapping the u coordinate of each third pixel coordinate (u1, v1) according to the fifth linear mapping relation and mapping the v coordinate of each third pixel coordinate (u1, v1) according to the sixth linear mapping relation, so as to obtain each corresponding third pixel coordinate (u2, v2), where U22 < u2 ≤ 1 and V22 < v2 ≤ 1; and a second image area composing unit for composing the second image area in the second image from the pixel points corresponding to each second pixel coordinate (u2, v2) and each third pixel coordinate (u2, v2).
As an optional embodiment, the image distortion module further includes a third image distortion module configured to, during the distortion processing of the first image, reduce the image resolution of the second image area in the first image to obtain the second image area in the second image through the following units: a fourth pixel coordinate acquisition unit for acquiring each second pixel coordinate (u1, v1) within the second image region in the first image, where 0 ≤ u1 < U11 and 0 ≤ v1 < V11; a fifth pixel coordinate acquisition unit for acquiring each third pixel coordinate (u1, v1) within the second image region in the first image, where U12 < u1 ≤ 1 and V12 < v1 ≤ 1; a third mapping relation acquisition unit for acquiring a first nonlinear mapping relation and a second nonlinear mapping relation, as well as a third nonlinear mapping relation and a fourth nonlinear mapping relation; a fourth pixel coordinate mapping unit for mapping the u coordinate of each second pixel coordinate (u1, v1) according to the first nonlinear mapping relation and mapping the v coordinate of each second pixel coordinate (u1, v1) according to the second nonlinear mapping relation, so as to obtain each corresponding second pixel coordinate (u2, v2), where 0 ≤ u2 < U21 and 0 ≤ v2 < V21; a fifth pixel coordinate mapping unit for mapping the u coordinate of each third pixel coordinate (u1, v1) according to the third nonlinear mapping relation and mapping the v coordinate of each third pixel coordinate (u1, v1) according to the fourth nonlinear mapping relation, so as to obtain each corresponding third pixel coordinate (u2, v2), where U22 < u2 ≤ 1 and V22 < v2 ≤ 1; and a third image area composing unit for composing the second image area in the second image from the pixel points corresponding to each second pixel coordinate (u2, v2) and each third pixel coordinate (u2, v2).
According to an embodiment of the present disclosure, the present disclosure also provides another image processing apparatus.
Fig. 8 illustrates a block diagram of an image processing apparatus according to another embodiment of the present disclosure.
As shown in fig. 8, the image processing apparatus 800 may include: an acquisition module 810, a transmission module 820, and a display module 830.
An obtaining module 810, configured to obtain device pose information of the terminal device.
A transmission module 820, configured to upload the acquired device pose information to a server, so that the server performs the following operations: performing image rendering based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area; performing first distortion processing on the first image so as to keep the image resolution of the first image area unchanged and reduce the image resolution of the second image area, thereby obtaining a second image; and coding the second image to obtain corresponding coding information and transmitting the coding information to the terminal equipment.
And a display module 830, configured to display an image based on the encoded information returned by the server.
As an alternative embodiment, the display module comprises: a decoding unit for decoding the encoded information to obtain a third image; the image distortion unit is used for carrying out second distortion processing on the third image to obtain a fourth image, wherein the second distortion processing and the first distortion processing are in inverse operation; and an image display unit for displaying the fourth image.
It should be understood that the embodiments of the apparatus portions of the present disclosure correspond to the embodiments of the method portions of the present disclosure, and the technical problems to be solved and the technical effects to be achieved also correspond to the same or similar embodiments, and the detailed description of the present disclosure is omitted.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 9 illustrates a schematic block diagram of an example electronic device 900 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic apparatus 900 includes a computing unit 901, which can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM)902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the electronic device 900 can also be stored. The calculation unit 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
A number of components in the electronic device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 901 performs the respective methods and processes described above, such as an image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 900 via ROM 902 and/or communications unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the image processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host; it is a host product in the cloud computing service system and addresses the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
In the technical solutions of the present disclosure, the recording, storage, and application of the user data involved all comply with the relevant laws and regulations and do not violate public order or good morals.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed in this regard.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. An image processing method, comprising:
acquiring device pose information uploaded by a terminal device;
performing image rendering based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area;
performing distortion processing on the first image so that the image resolution of the first image area remains unchanged and the image resolution of the second image area is reduced, thereby obtaining a second image;
encoding the second image to obtain corresponding coding information; and
transmitting the coding information to the terminal device so that the terminal device can display images based on the coding information.
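For orientation only, the following is a minimal Python sketch of the four steps recited in claim 1. It is an illustration under assumed interfaces, not the claimed implementation: process_frame, renderer, encoder, link and foveated_warp are all hypothetical names introduced here.

    def process_frame(pose, renderer, encoder, link, foveated_warp):
        # Step 1: render a first image from the uploaded device pose.
        first_image = renderer.render(pose)        # e.g. an H x W x 3 array
        # Step 2: distortion processing -- the central (first) area keeps its
        # resolution while the peripheral (second) area is downsampled.
        second_image = foveated_warp(first_image)
        # Step 3: encode the distorted second image into coding information.
        coding_info = encoder.encode(second_image)
        # Step 4: transmit the coding information back to the terminal device.
        link.send(coding_info)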
2. The method of claim 1, wherein:
the first image area is located in the middle of the first image; and
the second image area surrounds the first image area and is located at the periphery of the first image.
3. The method of claim 2, wherein, during the distortion processing of the first image, the image resolution of the first image area is kept unchanged to obtain the first image area in the second image by:
acquiring each first pixel coordinate (u_1, v_1) within the first image area in the first image, wherein u_11 ≤ u_1 ≤ u_12 and v_11 ≤ v_1 ≤ v_12;
acquiring a first linear mapping relation and a second linear mapping relation;
mapping the u coordinate of each first pixel coordinate (u_1, v_1) according to the first linear mapping relation, and mapping the v coordinate of each first pixel coordinate (u_1, v_1) according to the second linear mapping relation, to obtain each corresponding first pixel coordinate (u_2, v_2), wherein u_21 ≤ u_2 ≤ u_22 and v_21 ≤ v_2 ≤ v_22; and
forming the first image area in the second image from the pixel points corresponding to each first pixel coordinate (u_2, v_2).
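As a concrete reading of claim 3, each coordinate of the central area can be remapped with an interval-to-interval affine map. The sketch below is a minimal illustration assuming that form for the first and second linear mapping relations (the claim does not fix explicit formulas); all numeric bounds are purely illustrative.

    def linear_map(x, src_lo, src_hi, dst_lo, dst_hi):
        # Assumed form of the first/second linear mapping relations:
        # map [src_lo, src_hi] onto [dst_lo, dst_hi] affinely.
        return dst_lo + (x - src_lo) * (dst_hi - dst_lo) / (src_hi - src_lo)

    # Illustrative bounds: the central area [u11, u12] x [v11, v12] of the
    # first image maps onto [u21, u22] x [v21, v22] of the second image.
    u11, u12, v11, v12 = 0.25, 0.75, 0.25, 0.75
    u21, u22, v21, v22 = 0.15, 0.85, 0.15, 0.85

    u1, v1 = 0.50, 0.60
    u2 = linear_map(u1, u11, u12, u21, u22)   # first relation  -> 0.50
    v2 = linear_map(v1, v11, v12, v21, v22)   # second relation -> 0.64

Because the second image as a whole is smaller than the first, the central area can occupy a larger normalized span of it while keeping the same pixel count, which is how its resolution stays unchanged.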
4. The method of claim 3, wherein, in the course of the distortion processing of the first image, the image resolution of the second image area in the first image is reduced to obtain the second image area in the second image by:
acquiring each second pixel coordinate (u_1, v_1) within the second image area in the first image, wherein 0 ≤ u_1 < u_11 and 0 ≤ v_1 < v_11;
acquiring each third pixel coordinate (u_1, v_1) within the second image area in the first image, wherein u_12 < u_1 ≤ 1 and v_12 < v_1 ≤ 1;
acquiring a third linear mapping relation, a fourth linear mapping relation, a fifth linear mapping relation and a sixth linear mapping relation;
mapping the u coordinate of each second pixel coordinate (u_1, v_1) according to the third linear mapping relation, and mapping the v coordinate of each second pixel coordinate (u_1, v_1) according to the fourth linear mapping relation, to obtain each corresponding second pixel coordinate (u_2, v_2), wherein 0 ≤ u_2 < u_21 and 0 ≤ v_2 < v_21;
mapping the u coordinate of each third pixel coordinate (u_1, v_1) according to the fifth linear mapping relation, and mapping the v coordinate of each third pixel coordinate (u_1, v_1) according to the sixth linear mapping relation, to obtain each corresponding third pixel coordinate (u_2, v_2), wherein u_22 < u_2 ≤ 1 and v_22 < v_2 ≤ 1; and
forming the second image area in the second image from the pixel points corresponding to each second pixel coordinate (u_2, v_2) and each third pixel coordinate (u_2, v_2).
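Continuing the sketch after claim 3, claim 4 compresses the peripheral bands with four further linear relations. One possible arrangement is the piecewise map below, which reuses linear_map and the illustrative bounds above; treating u and v independently, so that the edge bands as well as the corner bands recited in the claim are covered, is an assumption about the intended full mapping.

    def warp_linear(u1, v1):
        # u coordinate: left band, right band, or central band.
        if u1 < u11:
            u2 = linear_map(u1, 0.0, u11, 0.0, u21)    # third relation
        elif u1 > u12:
            u2 = linear_map(u1, u12, 1.0, u22, 1.0)    # fifth relation
        else:
            u2 = linear_map(u1, u11, u12, u21, u22)    # central (claim 3)
        # v coordinate: top band, bottom band, or central band.
        if v1 < v11:
            v2 = linear_map(v1, 0.0, v11, 0.0, v21)    # fourth relation
        elif v1 > v12:
            v2 = linear_map(v1, v12, 1.0, v22, 1.0)    # sixth relation
        else:
            v2 = linear_map(v1, v11, v12, v21, v22)    # central (claim 3)
        return u2, v2

With the bounds above, each peripheral band of width 0.25 is squeezed into a span of 0.15, a uniform 0.6x compression, so pixel resolution in the second image area drops accordingly.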
5. The method of claim 3, wherein, in the course of the distortion processing of the first image, the image resolution of the second image area in the first image is reduced to obtain the second image area in the second image by:
acquiring each second pixel coordinate (u_1, v_1) within the second image area in the first image, wherein 0 ≤ u_1 < u_11 and 0 ≤ v_1 < v_11;
acquiring each third pixel coordinate (u_1, v_1) within the second image area in the first image, wherein u_12 < u_1 ≤ 1 and v_12 < v_1 ≤ 1;
acquiring a first nonlinear mapping relation, a second nonlinear mapping relation, a third nonlinear mapping relation and a fourth nonlinear mapping relation;
mapping the u coordinate of each second pixel coordinate (u_1, v_1) according to the first nonlinear mapping relation, and mapping the v coordinate of each second pixel coordinate (u_1, v_1) according to the second nonlinear mapping relation, to obtain each corresponding second pixel coordinate (u_2, v_2), wherein 0 ≤ u_2 < u_21 and 0 ≤ v_2 < v_21;
mapping the u coordinate of each third pixel coordinate (u_1, v_1) according to the third nonlinear mapping relation, and mapping the v coordinate of each third pixel coordinate (u_1, v_1) according to the fourth nonlinear mapping relation, to obtain each corresponding third pixel coordinate (u_2, v_2), wherein u_22 < u_2 ≤ 1 and v_22 < v_2 ≤ 1; and
forming the second image area in the second image from the pixel points corresponding to each second pixel coordinate (u_2, v_2) and each third pixel coordinate (u_2, v_2).
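Claim 5 swaps the peripheral linear relations for nonlinear ones without fixing their form. One plausible choice, given here purely as an assumption, is a power-law curve whose slope vanishes at the image border, so that compression grows progressively stronger toward the edge rather than being uniform across the band.

    def nonlinear_band_low(x, src_hi, dst_hi, gamma=2.0):
        # Band adjoining the border at 0: maps [0, src_hi) to [0, dst_hi).
        # With gamma > 1 the slope is smallest at x = 0, so compression is
        # strongest at the border.
        t = x / src_hi
        return (t ** gamma) * dst_hi

    def nonlinear_band_high(x, src_lo, dst_lo, gamma=2.0):
        # Band adjoining the border at 1: maps (src_lo, 1] to (dst_lo, 1],
        # mirrored so compression is strongest at x = 1.
        t = (x - src_lo) / (1.0 - src_lo)
        return dst_lo + (1.0 - (1.0 - t) ** gamma) * (1.0 - dst_lo)

    # With the illustrative bounds used earlier:
    u2 = nonlinear_band_low(0.10, 0.25, 0.15)    # first relation  -> 0.024
    v2 = nonlinear_band_high(0.90, 0.75, 0.85)   # fourth relation -> 0.976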
6. An image processing method, comprising:
acquiring device pose information of a terminal device;
uploading the acquired device pose information to a server, so that the server performs the following operations: performing image rendering based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area; performing first distortion processing on the first image so that the image resolution of the first image area remains unchanged and the image resolution of the second image area is reduced, thereby obtaining a second image; and encoding the second image to obtain corresponding coding information and transmitting the coding information to the terminal device; and
displaying images based on the coding information returned by the server.
7. The method of claim 6, wherein displaying images based on the coding information returned by the server comprises:
decoding the coding information to obtain a third image;
performing second distortion processing on the third image to obtain a fourth image, wherein the second distortion processing is the inverse operation of the first distortion processing; and
displaying the fourth image.
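A common way to realize the second distortion processing of claim 7 is inverse warping: for every pixel of the full-resolution fourth image, sample the decoded third image at the position the server's forward warp would have sent that pixel to, so no explicit inverse formula is needed. The sketch below is an illustration under stated assumptions, using nearest-neighbour sampling and normalized coordinates; a real client would typically run this on the GPU with bilinear filtering.

    import numpy as np

    def unwarp(third_image, forward_map):
        # forward_map(u1, v1) -> (u2, v2) is the server-side warp (e.g. the
        # hypothetical warp_linear sketched after claim 4).
        h, w = third_image.shape[:2]
        fourth_image = np.empty_like(third_image)
        for y in range(h):
            for x in range(w):
                u1 = (x + 0.5) / w                 # normalized pixel centre
                v1 = (y + 0.5) / h
                u2, v2 = forward_map(u1, v1)
                sx = min(w - 1, int(u2 * w))       # nearest-neighbour tap
                sy = min(h - 1, int(v2 * h))
                fourth_image[y, x] = third_image[sy, sx]
        return fourth_image

    # Usage sketch: fourth = unwarp(decoded_third_image, warp_linear)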
8. An image processing apparatus comprising:
an acquisition module configured to acquire device pose information uploaded by a terminal device;
an image rendering module configured to perform image rendering based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area;
an image distortion module configured to perform distortion processing on the first image so that the image resolution of the first image area remains unchanged and the image resolution of the second image area is reduced, thereby obtaining a second image;
an image encoding module configured to encode the second image to obtain corresponding coding information; and
an encoding transmission module configured to transmit the coding information to the terminal device so that the terminal device can display images based on the coding information.
9. The apparatus of claim 8, wherein:
the first image area is located in the middle of the first image; and
the second image area surrounds the first image area and is located at the periphery of the first image.
10. The apparatus of claim 9, wherein the image distortion module comprises a first image distortion module configured to keep, during the distortion processing of the first image, the image resolution of the first image area unchanged so as to obtain the first image area in the second image, the first image distortion module comprising:
a first pixel coordinate acquisition unit configured to acquire each first pixel coordinate (u_1, v_1) within the first image area in the first image, wherein u_11 ≤ u_1 ≤ u_12 and v_11 ≤ v_1 ≤ v_12;
a first mapping relation acquisition unit configured to acquire a first linear mapping relation and a second linear mapping relation;
a first pixel coordinate mapping unit configured to map the u coordinate of each first pixel coordinate (u_1, v_1) according to the first linear mapping relation and map the v coordinate of each first pixel coordinate (u_1, v_1) according to the second linear mapping relation, to obtain each corresponding first pixel coordinate (u_2, v_2), wherein u_21 ≤ u_2 ≤ u_22 and v_21 ≤ v_2 ≤ v_22; and
a first image area composing unit configured to form the first image area in the second image from the pixel points corresponding to each first pixel coordinate (u_2, v_2).
11. The apparatus of claim 10, wherein the image distortion module further comprises a second image distortion module configured to reduce, in the course of the distortion processing of the first image, the image resolution of the second image area in the first image so as to obtain the second image area in the second image, the second image distortion module comprising:
a second pixel coordinate acquisition unit configured to acquire each second pixel coordinate (u_1, v_1) within the second image area in the first image, wherein 0 ≤ u_1 < u_11 and 0 ≤ v_1 < v_11;
a third pixel coordinate acquisition unit configured to acquire each third pixel coordinate (u_1, v_1) within the second image area in the first image, wherein u_12 < u_1 ≤ 1 and v_12 < v_1 ≤ 1;
a second mapping relation acquisition unit configured to acquire a third linear mapping relation, a fourth linear mapping relation, a fifth linear mapping relation and a sixth linear mapping relation;
a second pixel coordinate mapping unit configured to map the u coordinate of each second pixel coordinate (u_1, v_1) according to the third linear mapping relation and map the v coordinate of each second pixel coordinate (u_1, v_1) according to the fourth linear mapping relation, to obtain each corresponding second pixel coordinate (u_2, v_2), wherein 0 ≤ u_2 < u_21 and 0 ≤ v_2 < v_21;
a third pixel coordinate mapping unit configured to map the u coordinate of each third pixel coordinate (u_1, v_1) according to the fifth linear mapping relation and map the v coordinate of each third pixel coordinate (u_1, v_1) according to the sixth linear mapping relation, to obtain each corresponding third pixel coordinate (u_2, v_2), wherein u_22 < u_2 ≤ 1 and v_22 < v_2 ≤ 1; and
a second image area composing unit configured to form the second image area in the second image from the pixel points corresponding to each second pixel coordinate (u_2, v_2) and each third pixel coordinate (u_2, v_2).
12. The apparatus of claim 10, wherein the image distortion module further comprises a third image distortion module configured to reduce, in the course of the distortion processing of the first image, the image resolution of the second image area in the first image so as to obtain the second image area in the second image, the third image distortion module comprising:
a fourth pixel coordinate acquisition unit configured to acquire each second pixel coordinate (u_1, v_1) within the second image area in the first image, wherein 0 ≤ u_1 < u_11 and 0 ≤ v_1 < v_11;
a fifth pixel coordinate acquisition unit configured to acquire each third pixel coordinate (u_1, v_1) within the second image area in the first image, wherein u_12 < u_1 ≤ 1 and v_12 < v_1 ≤ 1;
a third mapping relation acquisition unit configured to acquire a first nonlinear mapping relation, a second nonlinear mapping relation, a third nonlinear mapping relation and a fourth nonlinear mapping relation;
a fourth pixel coordinate mapping unit configured to map the u coordinate of each second pixel coordinate (u_1, v_1) according to the first nonlinear mapping relation and map the v coordinate of each second pixel coordinate (u_1, v_1) according to the second nonlinear mapping relation, to obtain each corresponding second pixel coordinate (u_2, v_2), wherein 0 ≤ u_2 < u_21 and 0 ≤ v_2 < v_21;
a fifth pixel coordinate mapping unit configured to map the u coordinate of each third pixel coordinate (u_1, v_1) according to the third nonlinear mapping relation and map the v coordinate of each third pixel coordinate (u_1, v_1) according to the fourth nonlinear mapping relation, to obtain each corresponding third pixel coordinate (u_2, v_2), wherein u_22 < u_2 ≤ 1 and v_22 < v_2 ≤ 1; and
a third image area composing unit configured to form the second image area in the second image from the pixel points corresponding to each second pixel coordinate (u_2, v_2) and each third pixel coordinate (u_2, v_2).
13. An image processing apparatus comprising:
an acquisition module configured to acquire device pose information of a terminal device;
a transmission module configured to upload the acquired device pose information to a server so that the server performs the following operations: performing image rendering based on the acquired device pose information to obtain a first image, wherein the first image comprises a first image area and a second image area; performing first distortion processing on the first image so that the image resolution of the first image area remains unchanged and the image resolution of the second image area is reduced, thereby obtaining a second image; and encoding the second image to obtain corresponding coding information and transmitting the coding information to the terminal device; and
a display module configured to display images based on the coding information returned by the server.
14. The apparatus of claim 13, wherein the display module comprises:
a decoding unit configured to decode the coding information to obtain a third image;
an image distortion unit configured to perform second distortion processing on the third image to obtain a fourth image, wherein the second distortion processing is the inverse operation of the first distortion processing; and
an image display unit configured to display the fourth image.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
18. A cloud Virtual Reality (VR) system, comprising: the device of any one of claims 8 to 12.
19. A Virtual Reality (VR) device, comprising: the device of claim 13 or 14.
CN202110878321.1A 2021-07-30 2021-07-30 Image processing method, device, equipment, storage medium and cloud VR system Pending CN113592712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110878321.1A CN113592712A (en) 2021-07-30 2021-07-30 Image processing method, device, equipment, storage medium and cloud VR system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110878321.1A CN113592712A (en) 2021-07-30 2021-07-30 Image processing method, device, equipment, storage medium and cloud VR system

Publications (1)

Publication Number Publication Date
CN113592712A (en) 2021-11-02

Family

ID=78253501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110878321.1A Pending CN113592712A (en) 2021-07-30 2021-07-30 Image processing method, device, equipment, storage medium and cloud VR system

Country Status (1)

Country Link
CN (1) CN113592712A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103761728A (en) * 2013-12-27 2014-04-30 华为技术有限公司 Method and device for lens shading correction
CN106372344A (en) * 2016-09-05 2017-02-01 中山大学 Three-dimensional clothes transformation method based on feature size constrain and system thereof
CN108287678A (en) * 2018-03-06 2018-07-17 京东方科技集团股份有限公司 A kind of image processing method, device, equipment and medium based on virtual reality
CN109194923A (en) * 2018-10-18 2019-01-11 眸芯科技(上海)有限公司 Video image processing apparatus, system and method based on non-uniformed resolution ratio
CN109461213A (en) * 2018-11-16 2019-03-12 京东方科技集团股份有限公司 Image processing method, device, equipment and storage medium based on virtual reality
CN110428379A (en) * 2019-07-29 2019-11-08 慧视江山科技(北京)有限公司 A kind of image grayscale Enhancement Method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
鄢玉飞 et al.: "MBA, MPA, MEM, MPAcc Management Joint Examination Comprehensive Ability High-Score Tutorial", China Machine Press, page 30 *

Similar Documents

Publication Publication Date Title
CN108022212B (en) High-resolution picture generation method, generation device and storage medium
US10402941B2 (en) Guided image upsampling using bitmap tracing
CN110827380B (en) Image rendering method and device, electronic equipment and computer readable medium
CN108600783B (en) Frame rate adjusting method and device and terminal equipment
CN112714357B (en) Video playing method, video playing device, electronic equipment and storage medium
CN110913230A (en) Video frame prediction method and device and terminal equipment
CN115861131A (en) Training method and device based on image generation video and model and electronic equipment
CN110830808A (en) Video frame reconstruction method and device and terminal equipment
CN110913219A (en) Video frame prediction method and device and terminal equipment
CN113688907A (en) Model training method, video processing method, device, equipment and storage medium
CN113989174A (en) Image fusion method and training method and device of image fusion model
CN113327193A (en) Image processing method, image processing apparatus, electronic device, and medium
CN113658073B (en) Image denoising processing method and device, storage medium and electronic equipment
CN114071190A (en) Cloud application video stream processing method, related device and computer program product
CN117336527A (en) Video editing method and device
CN111833262A (en) Image noise reduction method and device and electronic equipment
CN113592712A (en) Image processing method, device, equipment, storage medium and cloud VR system
CN115941966A (en) Video compression method and electronic equipment
CN115567712A (en) Screen content video coding perception code rate control method and device based on just noticeable distortion by human eyes
CN115278250A (en) Low-bandwidth video transmission method and conference system
CN114418882A (en) Processing method, training method, device, electronic equipment and medium
CN110830806A (en) Video frame prediction method and device and terminal equipment
CN113438485B (en) Image coding method, image coding device, electronic equipment and storage medium
US12014476B2 (en) Upscaling device, upscaling method, and upscaling program
CN116389670A (en) Video processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination