CN113645500A - Virtual reality video stream data processing system - Google Patents

Virtual reality video stream data processing system Download PDF

Info

Publication number
CN113645500A
CN113645500A CN202111200449.9A
Authority
CN
China
Prior art keywords
region
sub
client
code rate
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111200449.9A
Other languages
Chinese (zh)
Other versions
CN113645500B (en)
Inventor
许晓明
郭建君
李鑫
孙华庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Weiling Times Technology Co Ltd
Original Assignee
Beijing Weiling Times Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Weiling Times Technology Co Ltd filed Critical Beijing Weiling Times Technology Co Ltd
Priority to CN202111200449.9A priority Critical patent/CN113645500B/en
Publication of CN113645500A publication Critical patent/CN113645500A/en
Application granted granted Critical
Publication of CN113645500B publication Critical patent/CN113645500B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock

Abstract

The invention relates to a virtual reality video stream data processing system which comprises a cloud server, wherein the cloud server comprises a database, a memory and a processor, the memory stores a computer program, the database stores picture cutting parameters, region division parameters, a first coding compression code rate and a second coding compression code rate corresponding to each client, and the first coding compression code rate is greater than the second coding compression code rate. On the premise of ensuring the image quality, the invention reduces the size of the data after image coding, reduces the bandwidth requirement, ensures the definition and smoothness of virtual reality video data playing, and improves the visual experience of users.

Description

Virtual reality video stream data processing system
Technical Field
The invention relates to the technical field of data processing, in particular to a virtual reality video stream data processing system.
Background
Virtual Reality (VR) is a virtual three-dimensional world created by computer, electronic-information and simulation technology that gives users corresponding visual, auditory and tactile feedback, immersing them as if in the real world. With the development of virtual reality technology, the real-time virtual reality video interaction programs derived from it have become popular with more and more users. To create a more realistic effect, however, the equipment used for virtual reality must be very highly configured, which greatly raises the cost of experiencing real-time virtual reality video interaction. The real-time video interaction program can specifically be a cloud game program. Taking a virtual reality cloud game as an example: based on cloud gaming technology, the game picture is rendered in the cloud, and by connecting an ordinarily configured pair of virtual reality glasses at the client, a user can experience a virtual reality effect that could originally only run on highly configured virtual reality equipment.
Current cloud games for PC desktops or mobile terminals usually adopt a resolution no higher than 1080p and a frame rate (FPS) no higher than 60, requiring a network bandwidth of roughly 0-8 Mbps. In a virtual reality cloud game, however, a smooth interactive experience requires running at 90 FPS and a resolution of at least 2K: below that resolution the user sees a dot lattice, and below that frame rate the game stutters. On this standard, the required bandwidth is at least 2560 (resolution width) × 1440 (resolution height) × 90 (frame rate) × 8 (bit) / 102 (encoding compression ratio) ≈ 24.81 Mbps, several times the bandwidth of an ordinary cloud game. When the local user's network speed is insufficient or unstable, the user perceives obvious stuttering and cannot get a good experience. Virtual reality cloud games make it possible for users to experience virtual reality through ordinary equipment, yet because of the bandwidth limitation the picture delivered to the user is not clear enough and the frame rate is low, making it hard to reach even the minimum standard of 2K at 90 FPS and greatly degrading the user's visual experience. How to reduce the size of the encoded image data without sacrificing image quality, and thereby reduce the bandwidth requirement, has therefore become a technical problem in urgent need of a solution.
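The bandwidth arithmetic can be checked in a few lines; the compression ratio of 107 used here is an assumption (the corresponding figure in the source text is garbled) chosen because it reproduces the stated 24.81 Mbps:

```python
# Back-of-envelope check of the VR cloud-game bandwidth figure quoted above.
# The compression ratio of 107 is an assumption; the source text is garbled
# at that point, and 107 is the value that reproduces the stated 24.81 Mbps.
width, height, fps, bits_per_pixel = 2560, 1440, 90, 8
compression_ratio = 107

raw_bps = width * height * fps * bits_per_pixel       # uncompressed bit rate
required_mbps = raw_bps / compression_ratio / 1e6     # bit rate after encoding
print(round(required_mbps, 2))  # prints 24.81
```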
Disclosure of Invention
The invention aims to provide a virtual reality video stream data processing system, which reduces the size of data after image coding, reduces the bandwidth requirement, ensures the definition and smoothness of virtual reality video data playing and improves the visual experience of a user on the premise of ensuring the image quality.
According to an aspect of the present invention, a virtual reality video stream data processing system is provided, including a cloud server, where the cloud server includes a database, a memory storing a computer program, and a processor; the database stores a picture cutting parameter, a region division parameter, a first coding compression code rate, and a second coding compression code rate corresponding to each client, and the first coding compression code rate is greater than the second coding compression code rate. The processor, when executing the computer program, implements the following steps:
step S1, starting the real-time interactive program based on the real-time interactive program starting instruction sent by the client;
step S2, receiving a real-time interaction instruction and current eyeball focusing point position information sent by the client, wherein the current eyeball focusing position information is obtained based on virtual reality equipment connected with the client;
step S3, generating a current frame rendering picture based on the real-time interactive instruction;
step S4, cutting the current frame rendering picture according to the picture cutting parameters corresponding to the client, and dividing the cut current frame rendering picture into a focusing area and an edge area according to the area dividing parameters corresponding to the client and the current eyeball focusing position information;
step S5, compressing the image in the focus area according to a first coding compression rate corresponding to the client, compressing the image in the edge area according to a second coding compression rate of the client, and generating current frame compressed data based on the image in the focus area and the image coding compression result in the edge area;
and step S6, sending the current frame compressed data to a corresponding client, and displaying the current frame compressed data on virtual reality equipment connected with the client.
Compared with the prior art, the invention has obvious advantages and beneficial effects. By the technical scheme, the virtual reality video stream data processing system provided by the invention can achieve considerable technical progress and practicability, has wide industrial utilization value and at least has the following advantages:
according to the invention, on the premise of ensuring the image quality, the frame image is partitioned, and different code rates are adopted for respectively compressing and encoding, so that the size of data after image encoding is reduced, the bandwidth requirement is reduced, the definition and the fluency of virtual reality video data playing are ensured, and the visual experience of a user is improved.
The foregoing is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the content of the description, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic diagram of a virtual reality video stream data processing system according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description will be given to an embodiment of a virtual reality video stream data processing system and its effects according to the present invention with reference to the accompanying drawings and preferred embodiments.
An embodiment of the present invention provides a virtual reality video stream data processing system, as shown in fig. 1, which includes a cloud server, where the cloud server includes a database, a memory storing a computer program, and a processor. The database stores a picture cutting parameter, a region division parameter, a first coding compression code rate, and a second coding compression code rate corresponding to each client, and the first coding compression code rate is greater than the second coding compression code rate. Preferably, the first coding compression code rate is usually the client's default coding rate, that is, the coding compression code rate calculated directly with an existing code-rate algorithm; existing code-rate algorithms are not described again here. Denoting the first coding compression code rate as C1 and the second coding compression code rate as C2, then C2 = a × C1, where a preferably lies in the range [0.3, 0.7]. The processor, when executing the computer program, implements the following steps:
step S1, starting the real-time interactive program based on the real-time interactive program starting instruction sent by the client;
the real-time interaction program can be specifically a cloud game program, and correspondingly, the real-time interaction program starting instruction is a cloud game starting instruction.
Step S2, receiving a real-time interaction instruction and current eyeball focusing point position information sent by the client, wherein the current eyeball focusing position information is obtained based on virtual reality equipment connected with the client;
the real-time interaction instruction comprises an input equipment key input instruction, a virtual reality equipment rotation instruction and the like. The virtual reality equipment can be virtual reality glasses specifically, and the current virtual reality equipment can be tracked through glasses, and the eyeball focus point position information is obtained in the location, and the process of obtaining the body is no longer repeated here.
Step S3, generating a current frame rendering picture based on the real-time interactive instruction;
step S4, cutting the current frame rendering picture according to the picture cutting parameters corresponding to the client, and dividing the cut current frame rendering picture into a focusing area and an edge area according to the area dividing parameters corresponding to the client and the current eyeball focusing position information;
step S5, compressing the image in the focus area according to a first coding compression rate corresponding to the client, compressing the image in the edge area according to a second coding compression rate of the client, and generating current frame compressed data based on the image in the focus area and the image coding compression result in the edge area;
and step S6, sending the current frame compressed data to a corresponding client, and displaying the current frame compressed data on virtual reality equipment connected with the client.
With the system provided by this embodiment, on the premise of ensuring image quality, the frame picture is partitioned into a focus region corresponding to where the human eye is focused and an edge region outside it, and the two are compressed and encoded at different code rates: the region the eye focuses on is encoded at the high code rate and the unfocused edge region at the low code rate. This reduces the size of the encoded image data, reduces the bandwidth requirement, ensures the clarity and fluency of virtual reality video playback, and improves the user's visual experience.
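The server loop of steps S3–S6, together with the per-client parameters and the relation C2 = a × C1, can be sketched as follows; all names and the callable stand-ins are illustrative assumptions, not the patent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ClientConfig:
    """Per-client parameters held in the cloud database (names are illustrative)."""
    cut_n: int          # picture cutting parameter N
    region_r: float     # region division parameter R
    rate_focus: float   # first coding compression code rate C1
    a: float = 0.5      # C2 = a * C1, with a in [0.3, 0.7] per the text

    @property
    def rate_edge(self) -> float:
        return self.a * self.rate_focus   # second coding compression code rate C2

def process_frame(cfg, command, gaze, render, split, encode, send):
    """One pass of steps S3-S6 for a single client; render/split/encode/send
    are hypothetical stand-ins for the real rendering and encoding backends."""
    frame = render(command)                                    # S3: render frame
    focus, edge = split(frame, cfg.cut_n, cfg.region_r, gaze)  # S4: cut + divide
    packet = encode(focus, cfg.rate_focus) + encode(edge, cfg.rate_edge)  # S5
    send(packet)                                               # S6: ship to client
    return packet
```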
As an embodiment, the processor, when executing the computer program, further performs the steps of:
step S10, receiving a configuration parameter compression packet sent by each client, wherein the configuration parameter compression packet is generated after being compressed through picture cutting parameters, region division parameters, a first coding compression code rate and a second coding compression code rate which are acquired through a configuration interface of the client and input by a user;
it can be understood that each client can set the corresponding picture cutting parameter, the region division parameter, the first coding compression code rate and the second coding compression code rate through the configuration interface of the client according to the configuration requirement of the client.
Step S20, obtaining the picture cutting parameter, the region dividing parameter, the first coding compression code rate and the second coding compression code rate corresponding to the client from the configuration parameter compression, weight-saving analysis, and storing in the database.
Through step S20, the configuration is performed on the cloud server based on the configuration parameters input by the client.
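Steps S10 and S20 amount to a compress, transmit, decompress-and-store round trip; a minimal sketch, assuming a JSON-over-zlib packet format that the patent does not specify:

```python
import json
import zlib

# Sketch of steps S10/S20: the client compresses its four configuration
# parameters into a packet; the server decompresses, parses, and stores them.
# JSON + zlib and all field names are assumptions, not the patent's format.

def make_config_packet(cut_n, region_r, rate1, rate2) -> bytes:
    """Client side (S10): bundle and compress the configuration parameters."""
    cfg = {"cut": cut_n, "region": region_r, "rate1": rate1, "rate2": rate2}
    return zlib.compress(json.dumps(cfg).encode())

def store_config(packet: bytes, database: dict, client_id: str) -> None:
    """Server side (S20): decompress, parse, validate, and store per client."""
    cfg = json.loads(zlib.decompress(packet))
    assert cfg["rate1"] > cfg["rate2"], "first code rate must exceed the second"
    database[client_id] = cfg
```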
As an embodiment, the cloud server is preconfigured with input event information of a real-time interactive program, and the step S3 includes:
step S31, applying the real-time interactive instruction to corresponding input event information to generate current frame rendering picture information;
and step S32, rendering the current frame rendering picture information and generating the current frame rendering picture.
It should be noted that, through the steps S31 to S32, the corresponding current frame rendering picture can be obtained based on the real-time interactive instruction, and the specific rendering operation directly adopts the prior art, which is not described herein again.
As an embodiment, the picture cutting parameter is N, where N is a positive integer greater than or equal to 2; the region division parameter is R which is a positive integer greater than or equal to 1,
(Formula rendered as an image in the original publication: the region division parameter R is defined in terms of m, N, L and W.)
wherein m is a positive odd number, m is less than or equal to N-2, L is the resolution length of the frame rendering picture, and W is the resolution width of the frame rendering picture. L and W may or may not be equal. The step S4 includes:
step S41, dividing the resolution width of the current frame rendered picture into N parts, and dividing the resolution height into N parts, thereby dividing the current frame rendered picture into N × N sub-regions;
step S42, determining a target central point in the N × N subregions according to the current eyeball focusing position information;
wherein, the step S42 may further include:
step S421, determining four vertex position points of a sub-area where the current eyeball focusing position information is located as candidate center points;
step S422, determine a position point closest to the current eyeball focus position among the candidate center points as the target center point.
Step S43, based on the target center point and the region division parameter R corresponding to the client, determining a sub-region having a sub-region center point within a range from the target center point to R as a focus region, and determining all other sub-regions except the focus region in the N × N sub-regions as the edge region.
Through steps S41-S43, the current frame rendered picture is divided into a focus region corresponding to human-eye focus and an unfocused region, i.e. the edge region. Encoding the different regions at different code rates reduces the size of the compressed data, generally to 0.4-1 times that of normal compression, without adjusting the frame rate or resolution anywhere in the process. Since the image the human eye sees outside its focus area is blurry anyway, subsequently encoding the edge-region image at a lower coding compression code rate does not affect the visual experience, while it reduces the size of the compressed data and hence the bandwidth requirement.
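Steps S41–S43, together with the vertex selection of steps S421–S422, can be sketched as follows; the function and its grid conventions are illustrative assumptions, not the patent's exact implementation:

```python
import math

def divide_regions(width, height, n, r, gaze):
    """Sketch of steps S41-S43: split the frame into an n-by-n grid (S41),
    pick the grid vertex of the gazed-at cell nearest the gaze point as the
    target center point (S421/S422), then classify each sub-region by the
    distance of its center from that point (S43)."""
    cw, ch = width / n, height / n                 # sub-region size (S41)
    gx, gy = gaze
    # S421: the four vertices of the sub-region containing the gaze point
    col, row = int(gx // cw), int(gy // ch)
    vertices = [((col + dx) * cw, (row + dy) * ch)
                for dx in (0, 1) for dy in (0, 1)]
    # S422: the candidate vertex nearest the gaze point is the target center
    cx, cy = min(vertices, key=lambda v: math.hypot(v[0] - gx, v[1] - gy))
    focus, edge = [], []
    for i in range(n):
        for j in range(n):
            center = ((i + 0.5) * cw, (j + 0.5) * ch)
            dist = math.hypot(center[0] - cx, center[1] - cy)
            (focus if dist <= r else edge).append((i, j))  # S43 classification
    return (cx, cy), focus, edge
```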
To further reduce the size of the compressed data, the edge region may be further divided into a plurality of regions that are compressed at different coding compression code rates. As an embodiment, the second coding compression code rate includes a first sub-coding compression code rate, a second sub-coding compression code rate, … and an M-th sub-coding compression code rate, decreasing in sequence, where M is a positive integer greater than 1; the region division parameter is {R1, R2, … RM-1}, where Ri is the i-th edge-region division radius, i ranges from 1 to M-1, and R1, R2, … RM-1 increase in sequence. The step S43 further includes:
step S431, determining each sub-region whose center-point distance from the target center point is greater than Ri-1 and less than or equal to Ri as the i-th edge region, where R0 is R, and determining the sub-regions whose distance is greater than RM-1 as the M-th edge region, thereby dividing the edge area into a first edge region, a second edge region, … and an M-th edge region, where the coding compression code rate corresponding to the j-th edge region is the j-th sub-coding compression rate and j ranges from 1 to M;
correspondingly, in step S5, the encoding and compressing the image in the edge region according to the second encoding and compressing rate of the client includes:
and step S511, encoding and compressing each jth edge region according to the jth sub-encoding compression rate in the second encoding and compression code rate of the client.
Through steps S431 and S511, the edge region can be further segmented, and the corresponding edge-region images are encoded and compressed at gradually decreasing code rates in order of their distance from the focus region, from near to far, which further reduces the size of the compressed data.
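The ring assignment of step S431 can be sketched with a hypothetical helper, where `radii` stands for the increasing list {R1, …, RM-1}:

```python
def edge_ring_index(dist, r, radii):
    """Sketch of step S431: map a sub-region's center distance from the target
    center point to an edge-region index j in 1..M. `r` is the focus radius R
    (acting as R0); `radii` is the increasing list [R1, ..., R_{M-1}]. Ring j
    is then encoded at the j-th (sequentially decreasing) sub code rate."""
    bounds = [r] + list(radii)              # R0 = R, then R1 <= R2 <= ...
    for j in range(1, len(bounds)):
        if bounds[j - 1] < dist <= bounds[j]:
            return j                        # the j-th edge region
    return len(bounds)                      # beyond R_{M-1}: the M-th region
```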
As an embodiment, the system further includes a preset sub-region compressed data structure, where the sub-region compressed data structure includes a data header sub-structure and a compressed data sub-structure, the data header sub-structure includes a sub-region position data segment, a sub-region encoding compression rate data segment, and a sub-region encoding compressed data size data segment, the compressed data sub-structure is used to store compressed sub-region data, and in step S5, generating current frame compressed data based on the image of the focus region and the image encoding result of the edge region includes:
step S51, performing code compression on each focusing sub-region of the focusing region according to the first code compression code rate corresponding to the client, acquiring the size of the compressed focusing sub-region data and the position information of the focusing sub-region, and generating focusing sub-region compressed data according to the sub-region compressed data structure based on the size of the focusing sub-region data, the position information of the focusing sub-region, the first code compression code rate and the compressed focusing sub-region data;
step S52, performing code compression on each edge sub-region of the edge region according to the second code compression code rate corresponding to the client, obtaining the data size and the position information of the compressed edge sub-region, and generating compressed data of the edge sub-region according to the sub-region compressed data structure based on the data size of the edge sub-region, the position information of the edge sub-region, the second code compression code rate, and the compressed data of the edge sub-region;
it should be noted that, if the edge regions can be further segmented through steps S431 and S511, the same logic as that in step S52 is adopted to encode and compress the sub-region corresponding to each jth edge region type according to the jth sub-encoding compression rate, and generate the sub-region compressed data corresponding to the jth edge region type according to the sub-region compressed data structure, which is not described herein again, and all the sub-region compressed data corresponding to the jth edge region type are synthesized into the corresponding edge sub-region compressed data.
And step S53, combining all the focal sub-region compressed data and the edge sub-region compressed data into the current frame compressed data.
It should be noted that step S53 may directly adopt an existing method of combining multiple pieces of data into one whole, and the details are not repeated here. Combining the data of one frame into a whole before sending it to the client ensures that all data of the same frame image reach the corresponding client at the same time, avoiding partial data delay or packet loss caused by factors such as network instability.
As an embodiment, in step S6, the client parses the received current frame compressed data, divides the parsed current frame compressed data into N × N parts according to the size of the sub-region data in the data header of each sub-region, decodes each sub-region according to the coding compression rate in the corresponding data header, and sends the decoded data of all sub-regions to the virtual reality device connected to the client for display.
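The sub-region compressed data structure and the client-side parsing of step S6 can be sketched together as a pack/parse round trip; the fixed field widths below are assumptions, since the patent names the header fields but not their sizes:

```python
import struct

# Sketch of the per-sub-region data structure: a fixed header (sub-region
# position, coding compression code rate, compressed-data size) followed by
# the compressed bytes. The field widths are assumptions.
HEADER = struct.Struct(">HHfI")  # col, row, code rate, payload size

def pack_subregion(col, row, rate, payload: bytes) -> bytes:
    """Server side (S51/S52): prepend the header to one sub-region's data."""
    return HEADER.pack(col, row, rate, len(payload)) + payload

def parse_frame(data: bytes):
    """Client side (S6): walk the concatenated frame buffer and recover each
    sub-region's position, code rate, and compressed payload."""
    out, offset = [], 0
    while offset < len(data):
        col, row, rate, size = HEADER.unpack_from(data, offset)
        offset += HEADER.size
        out.append(((col, row), rate, data[offset:offset + size]))
        offset += size
    return out
```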
It should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. Further, the order of some steps may be rearranged; for example, step S51 and step S52 may be interchanged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A virtual reality video stream data processing system, comprising a cloud server, wherein
the cloud server comprises a database, a memory and a processor, the memory stores a computer program, and the database stores picture cutting parameters, region division parameters, a first coding compression code rate and a second coding compression code rate corresponding to each client, the first coding compression code rate being greater than the second coding compression code rate; the processor, when executing the computer program, implements the following steps:
step S1, starting the real-time interactive program based on the real-time interactive program starting instruction sent by the client;
step S2, receiving a real-time interaction instruction and current eyeball focusing point position information sent by the client, wherein the current eyeball focusing position information is obtained based on virtual reality equipment connected with the client;
step S3, generating a current frame rendering picture based on the real-time interactive instruction;
step S4, cutting the current frame rendering picture according to the picture cutting parameters corresponding to the client, and dividing the cut current frame rendering picture into a focusing area and an edge area according to the area dividing parameters corresponding to the client and the current eyeball focusing position information;
step S5, compressing the image in the focus area according to a first coding compression rate corresponding to the client, compressing the image in the edge area according to a second coding compression rate of the client, and generating current frame compressed data based on the image in the focus area and the image coding compression result in the edge area;
and step S6, sending the current frame compressed data to a corresponding client, and displaying the current frame compressed data on virtual reality equipment connected with the client.
2. The system of claim 1,
the processor, when executing the computer program, further implements the steps of:
step S10, receiving a configuration parameter compression packet sent by each client, wherein the configuration parameter compression packet is generated after being compressed through picture cutting parameters, region division parameters, a first coding compression code rate and a second coding compression code rate which are acquired through a configuration interface of the client and input by a user;
step S20, decompressing and parsing the configuration parameter compression packet to obtain the picture cutting parameter, the region division parameter, the first coding compression code rate and the second coding compression code rate corresponding to the client, and storing them in the database.
3. The system of claim 1,
the cloud server is preconfigured with input event information of a real-time interactive program, and step S3 includes:
step S31, applying the real-time interactive instruction to corresponding input event information to generate current frame rendering picture information;
and step S32, rendering the current frame rendering picture information and generating the current frame rendering picture.
4. The system of claim 1,
the picture cutting parameter is N, and N is a positive integer greater than or equal to 2; the region division parameter is R which is a positive integer greater than or equal to 1,
(Formula rendered as an image in the original publication: the region division parameter R is defined in terms of m, N, L and W.)
wherein m is a positive odd number, m is less than or equal to N-2, L is the resolution length of the frame rendering picture, and W is the resolution width of the frame rendering picture.
5. The system of claim 4,
the step S4 includes:
step S41, dividing the resolution width of the current frame rendered picture into N parts, and dividing the resolution height into N parts, thereby dividing the current frame rendered picture into N × N sub-regions;
step S42, determining a target central point in the N × N subregions according to the current eyeball focusing position information;
step S43, based on the target center point and the region division parameter R corresponding to the client, determining a sub-region having a sub-region center point within a range from the target center point to R as a focus region, and determining all other sub-regions except the focus region in the N × N sub-regions as the edge region.
6. The system of claim 5,
the step S42 includes the steps of,
step S421, determining four vertex position points of a sub-area where the current eyeball focusing position information is located as candidate center points;
step S422, determine a position point closest to the current eyeball focus position among the candidate center points as the target center point.
7. The system of claim 5,
the system further includes a preset sub-region compressed data structure, where the sub-region compressed data structure includes a data header sub-structure and a compressed data sub-structure, the data header sub-structure includes a sub-region position data segment, a sub-region coding compression code rate data segment, and a sub-region coding compressed data size data segment, the compressed data sub-structure is used to store compressed sub-region data, and in step S5, the generating current frame compressed data based on the image of the focus region and the image coding result of the edge region includes:
step S51, performing code compression on each focusing sub-region of the focusing region according to the first code compression code rate corresponding to the client, acquiring the size of the compressed focusing sub-region data and the position information of the focusing sub-region, and generating focusing sub-region compressed data according to the sub-region compressed data structure based on the size of the focusing sub-region data, the position information of the focusing sub-region, the first code compression code rate and the compressed focusing sub-region data;
step S52, performing code compression on each edge sub-region of the edge region according to the second code compression code rate corresponding to the client, obtaining the data size and the position information of the compressed edge sub-region, and generating compressed data of the edge sub-region according to the sub-region compressed data structure based on the data size of the edge sub-region, the position information of the edge sub-region, the second code compression code rate, and the compressed data of the edge sub-region;
and step S53, combining all the focal sub-region compressed data and the edge sub-region compressed data into the current frame compressed data.
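The record layout described in claim 7 and steps S51-S53 (a header carrying position, coding compression code rate, and compressed-data size, followed by the compressed bytes, concatenated per frame) might be sketched like this. Field widths, byte order, and the kbps unit are illustrative assumptions, not the patent's specification:

```python
import struct

# Assumed header: row, col (uint16 each), code rate in kbps (uint32), size (uint16)
HEADER = struct.Struct("<HHIH")

def pack_subregion(row, col, code_rate, payload):
    """Build one sub-region record: header sub-structure + compressed data."""
    return HEADER.pack(row, col, code_rate, len(payload)) + payload

def unpack_subregion(buf, offset=0):
    """Read one record starting at offset; return (fields, next_offset)."""
    row, col, rate, size = HEADER.unpack_from(buf, offset)
    start = offset + HEADER.size
    return (row, col, rate, buf[start:start + size]), start + size

# Step S53: concatenate focus and edge sub-region records into one frame
frame = (pack_subregion(1, 1, 8000, b"focus-bytes")
         + pack_subregion(0, 0, 3000, b"edge"))
```

Because each header states its payload's size, records of different code rates can be unpacked sequentially without any frame-level index.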
8. The system of claim 7,
in step S6, the client parses the received current frame compressed data, divides the parsed current frame compressed data into N × N parts according to the size of the sub-region data in the data header of each sub-region, decodes each sub-region according to the coding compression code rate in the corresponding data header, and sends the decoded data of all sub-regions to the virtual reality device connected to the client for display.
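The client-side walk of step S6 (read each sub-region header, slice out its payload, decode at the stated code rate) can be illustrated as follows; the header layout is an assumption for illustration, not the patent's format, and actual decoding is elided:

```python
import struct

HEADER = struct.Struct("<HHIH")  # assumed: row, col, code rate (kbps), size

def parse_frame(buf):
    """Yield (row, col, code_rate, payload) for every sub-region record."""
    offset = 0
    while offset < len(buf):
        row, col, rate, size = HEADER.unpack_from(buf, offset)
        offset += HEADER.size
        # Each payload would be decoded at `rate` and sent to the display
        yield row, col, rate, buf[offset:offset + size]
        offset += size

frame = (HEADER.pack(1, 1, 8000, 3) + b"abc"
         + HEADER.pack(0, 0, 3000, 2) + b"xy")
records = list(parse_frame(frame))
```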
9. The system of claim 5,
the second coding compression code rate comprises a first sub-coding compression code rate, a second sub-coding compression code rate, … and an Mth sub-coding compression code rate, which decrease sequentially, wherein M is a positive integer greater than 1; the region division parameters are {R1, R2, … RM-1}, wherein Ri is the division radius of the ith edge region, i ranges from 1 to M-1, and R1, R2, … RM-1 increase sequentially; the step S43 further includes:
step S431, determining a sub-region whose center point is at a distance greater than Ri-1 and less than or equal to Ri from the target center point as the ith edge region, wherein R0 is R, and determining a sub-region whose center point is at a distance greater than RM-1 from the target center point as the Mth edge region; thereby dividing the edge region into a first edge region, a second edge region, … and an Mth edge region, wherein the coding compression code rate of the jth edge region is the jth sub-coding compression code rate, and j ranges from 1 to M;
in step S5, the encoding and compressing the image of the edge region according to the second encoding and compressing rate of the client includes:
and step S511, encoding and compressing each jth edge region according to the jth sub-encoding compression rate in the second encoding and compression code rate of the client.
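Claim 9's graded edge regions reduce to a ring lookup: a sub-region whose center is at distance d from the target center point falls in ring i when R(i-1) < d ≤ Ri (with R0 = R), and in ring M beyond RM-1. A sketch with illustrative radii and rates (all values assumed):

```python
def edge_level(d, R, radii):
    """Return the 1-based edge-region index for a center distance d > R."""
    bounds = [R] + list(radii)          # R_0 = R, then R_1 ... R_{M-1}
    for i in range(1, len(bounds)):
        if bounds[i - 1] < d <= bounds[i]:
            return i
    return len(bounds)                  # beyond R_{M-1}: the Mth edge region

# M = 3: sub-coding compression code rates decrease ring by ring (kbps, assumed)
rates = [6000, 4000, 2000]
level = edge_level(650, 500, (700, 900))  # d = 650 lands in the first ring
rate = rates[level - 1]
```

Farther rings thus get progressively lower sub-coding compression code rates, spending bits where the eye is most sensitive.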
10. The system of claim 1,
the first encoding compression code rate is C1, and the second encoding compression code rate is C2, where C2 = a × C1, and a ∈ [0.3, 0.7].
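A quick numeric illustration of scaling the edge-region rate from the focus-region rate (the values of C1 and a below are assumed, not from the patent):

```python
C1 = 20_000   # first encoding compression code rate in kbps (assumed)
a = 0.5       # scaling factor, within the claimed range [0.3, 0.7]
C2 = a * C1   # second (edge-region) encoding compression code rate
```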
CN202111200449.9A 2021-10-15 2021-10-15 Virtual reality video stream data processing system Active CN113645500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111200449.9A CN113645500B (en) 2021-10-15 2021-10-15 Virtual reality video stream data processing system

Publications (2)

Publication Number Publication Date
CN113645500A true CN113645500A (en) 2021-11-12
CN113645500B CN113645500B (en) 2022-01-07

Family

ID=78426939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111200449.9A Active CN113645500B (en) 2021-10-15 2021-10-15 Virtual reality video stream data processing system

Country Status (1)

Country Link
CN (1) CN113645500B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006235719A (en) * 2005-02-22 2006-09-07 Matsushita Electric Ind Co Ltd Imaging method and imaging apparatus
CN101282479A (en) * 2008-05-06 2008-10-08 武汉大学 Method for encoding and decoding airspace with adjustable resolution based on interesting area
CN102036073A (en) * 2010-12-21 2011-04-27 西安交通大学 Method for encoding and decoding JPEG2000 image based on vision potential attention target area
CN106101847A (en) * 2016-07-12 2016-11-09 三星电子(中国)研发中心 The method and system of panoramic video alternating transmission
CN106131615A (en) * 2016-07-25 2016-11-16 北京小米移动软件有限公司 Video broadcasting method and device
CN106658011A (en) * 2016-12-09 2017-05-10 深圳市云宙多媒体技术有限公司 Panoramic video coding and decoding methods and devices
CN106791854A (en) * 2016-11-22 2017-05-31 北京疯景科技有限公司 Image Coding, coding/decoding method and device
CN107040794A (en) * 2017-04-26 2017-08-11 盯盯拍(深圳)技术股份有限公司 Video broadcasting method, server, virtual reality device and panoramic virtual reality play system
CN107770561A (en) * 2017-10-30 2018-03-06 河海大学 A kind of multiresolution virtual reality device screen content encryption algorithm using eye-tracking data
CN108322781A (en) * 2018-02-08 2018-07-24 武汉噢易云计算股份有限公司 Promote the method and system that HD video effect is played in virtual desktop
CN108919958A (en) * 2018-07-16 2018-11-30 北京七鑫易维信息技术有限公司 A kind of image transfer method, device, terminal device and storage medium
CN109862019A (en) * 2019-02-20 2019-06-07 联想(北京)有限公司 Data processing method, device and system
CN110545430A (en) * 2018-05-28 2019-12-06 北京松果电子有限公司 video transmission method and device
CN110856019A (en) * 2019-11-20 2020-02-28 广州酷狗计算机科技有限公司 Code rate allocation method, device, terminal and storage medium
CN111787398A (en) * 2020-06-24 2020-10-16 浙江大华技术股份有限公司 Video compression method, device, equipment and storage device
CN111882626A (en) * 2020-08-06 2020-11-03 腾讯科技(深圳)有限公司 Image processing method, apparatus, server and medium
CN113012174A (en) * 2021-04-26 2021-06-22 中国科学院苏州生物医学工程技术研究所 Image fusion method, system and equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CONGXIA DAI: "Geometry-Adaptive Block Partitioning for Intra Prediction in Image & Video Coding", 2007 IEEE International Conference on Image Processing *
WANG DAYONG: "A video coding method based on the human visual system", Application Research of Computers (计算机应用研究) *
TENG XIAOBIN: "Video coding scheme selection in an HD railway-crossing monitoring system", Information Technology (信息技术) *


Similar Documents

Publication Publication Date Title
CN109902767B (en) Model training method, image processing device, model training apparatus, image processing apparatus, and computer-readable medium
US5926575A (en) Model-based coding/decoding method and system
CN107979763B (en) Virtual reality equipment video generation and playing method, device and system
WO2022156640A1 (en) Gaze correction method and apparatus for image, electronic device, computer-readable storage medium, and computer program product
CN110166757B (en) Method, system and storage medium for compressing data by computer
CN101622876A (en) Systems and methods for providing personal video services
JP7390454B2 (en) Image generation method, device, electronic device and storage medium
CN112543342A (en) Virtual video live broadcast processing method and device, storage medium and electronic equipment
CN111402399A (en) Face driving and live broadcasting method and device, electronic equipment and storage medium
WO2019226429A1 (en) Data compression by local entropy encoding
CN115601484B (en) Virtual character face driving method and device, terminal equipment and readable storage medium
JP2023001926A (en) Method and apparatus of fusing image, method and apparatus of training image fusion model, electronic device, storage medium and computer program
CN113012270A (en) Stereoscopic display method and device, electronic equipment and storage medium
KR20230028253A (en) Face image processing method, face image processing model training method, device, device, storage medium and program product
CN113645500B (en) Virtual reality video stream data processing system
EP4162691A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
WO2023143349A1 (en) Facial video encoding method and apparatus, and facial video decoding method and apparatus
CN117274491A (en) Training method, device, equipment and medium for three-dimensional reconstruction model
RU2236751C2 (en) Methods and devices for compression and recovery of animation path using linear approximations
CN113411587B (en) Video compression method, device and computer readable storage medium
CN115908712A (en) Three-dimensional reconstruction and model training method and equipment based on image and storage medium
CN113810755A (en) Panoramic video preview method and device, electronic equipment and storage medium
CN114926658A (en) Picture feature extraction method and device, computer equipment and readable storage medium
CN113486787A (en) Face driving and live broadcasting method and device, computer equipment and storage medium
CN113469292A (en) Training method, synthesizing method, device, medium and equipment for video synthesizing model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant