CN113440838A - End-to-end image rendering fusion method and system - Google Patents


Info

Publication number
CN113440838A
Authority
CN
China
Prior art keywords
user
rendering
terminal
cloud
personalized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111010210.5A
Other languages
Chinese (zh)
Inventor
唐勇
付志鹏
张卫江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuancai Interactive Network Science And Technology Co ltd
Original Assignee
Xuancai Interactive Network Science And Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuancai Interactive Network Science And Technology Co ltd
Priority to CN202111010210.5A
Publication of CN113440838A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F 13/35 Details of game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F 2300/53 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A63F 2300/538 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for performing operations on behalf of the game client, e.g. rendering

Abstract

The invention discloses an end-to-end image rendering fusion method and system, relating to the technical field of image rendering fusion. It solves the technical problems of reduced cloud carrying capacity and wasted terminal resources caused by insufficient fusion of the cloud and the terminal in image processing. Compared with the fully cloud-based mode of graphics applications adopted on the market, in which the terminal only plays video and uploads control instructions, the scheme has the advantages of multi-user sharing of cloud resources, a short processing flow, low server pressure, and low latency.

Description

End-to-end image rendering fusion method and system
Technical Field
The present application relates to the field of image rendering fusion technologies, and in particular, to an end-to-end image rendering fusion method and system.
Background
Image applications, as a typical class of internet application, have a huge user base and a large market space. Traditional implementations of image applications use mechanisms such as the C/S and B/S architectures: although a management server handles remote position, state, interaction, and the like, control and rendering are performed mainly locally, and a client must be installed locally to play. This is inconvenient for users and also limits market revenue.
With the arrival of the 5G era, cloud-run image applications, such as cloud games, have emerged accordingly. They adopt a mechanism of logic computation and graphics rendering in the cloud, streaming transmission to the terminal, and local conversion and uploading of control instructions, giving users the advantages of installation-free use and multi-screen interaction. However, traditional B/S and C/S games must undergo adaptation and transformation for the cloud, and every local control action on the user side must be converted and uploaded, with the cloud responding, rendering, and streaming the result back. As a result, the cloud carrying capacity is low, the overall adaptation workload is high, and the end-to-end round trip introduces long latency, seriously degrading the user experience.
Image applications running in the cloud on the market, including typical commercial cloud games, all have the above problems. The fundamental reason is that application content designed and developed on the traditional C/S or B/S architecture is moved wholesale into a cloud mode to relieve terminal operating pressure, so cloud carrying capacity becomes the business bottleneck while terminal capability is wasted and network transmission is stressed.
How to effectively fuse the capabilities of the terminal, the network, and the cloud, drawing on the continual improvement of terminal chip capability while sharing cloud capability to support flexible multi-user networked interaction, and using fast network capability for shared synchronization, has become a problem that cloud-run image applications urgently need to solve.
Disclosure of Invention
The application provides an end-to-end image rendering fusion method and system. The technical purpose is to have the cloud process the public multiplexed scene and multi-user interaction information, have the terminal process the rendering of user-personalized elements such as characters, lighting effects, and equipment, and handle user control at the user's terminal, while effectively exploiting high-speed shared network transmission. In this way, the capabilities of the cloud and the terminal are effectively fused, terminal capability is fully utilized, network transmission pressure is reduced, and the service carrying capacity of the cloud is improved.
The technical purpose of the application is realized by the following technical scheme:
an end-to-end image rendering fusion method, comprising:
the terminal initiates an image application service request to the cloud, and the cloud performs resource scheduling and logic division according to the image application service request; the logic division logically splits the corresponding image application according to the principle of generality and user personalization;
the cloud delivers the user-personalized logic, user characters, lighting effects, and equipment to the terminal for processing;
the cloud performs the general logic processing, then renders the general image according to the result of the general logic computation to obtain general image rendering data, intercepts the general scene, and packs the general image rendering data and the general scene into a general scene data packet; meanwhile, the terminal performs the user-personalized logic processing, and then personally renders the local user characters, equipment, and the image data related to them according to the result of that logic, obtaining personalized rendering data;
the cloud transmits the general scene data packet to the terminal over the network; the terminal receives the general scene data packet from the cloud and fuses the personalized rendering data with the general image rendering data to complete the rendering fusion of the image;
the terminal performs consistency processing of light and shadow effects on the fused image;
and the terminal handles the user's control of the image application and uploads the corresponding control instructions to the cloud over the network for multi-user interaction processing and user state storage.
An end-to-end image rendering fusion system comprises a cloud and a terminal, wherein the cloud comprises a system management module and a cloud rendering module;
the system management module comprises:
the server management unit is used for scheduling resources according to the image application service request initiated by the terminal;
the rendering logic division unit is used for logically dividing the corresponding image application according to the principle of generality and user personalization;
the interactive synchronization management unit is used for carrying out interactive synchronization processing on user coordinates and multi-user interaction on the general logic and general image rendering data of the cloud and the user personalized logic and personalized rendering data of the terminal;
the cloud rendering management unit is used for managing logic division and network addressing transmission;
the cloud rendering module comprises:
the general logic processing unit is used for performing general logic operation and calculating before rendering the general image;
the universal image rendering unit is used for rendering the universal image to obtain universal image rendering data;
the adaptive range intercepting unit intercepts the general scene;
the data packaging unit is used for packaging the general image rendering data and the general scene to obtain a general scene data packet and transmitting the general scene data packet to the terminal through network transmission;
the terminal includes:
the personalized logic processing unit is used for carrying out user personalized logic processing, calling local role and equipment data according to the position of a user game role and corresponding interaction information issued by a cloud end, and finishing corresponding user attribute and interaction relation processing;
the personalized graphic rendering unit is used for performing personalized rendering on the local personalized user role, equipment and image data related to the local personalized user role and the equipment according to the personalized logic operation result of the user to obtain personalized rendering data;
the acquisition unit acquires a universal scene data packet of the cloud;
the fusion unit is used for fusing the personalized rendering data and the general image rendering data to complete the rendering fusion of the images;
the shadow synchronization unit is used for carrying out consistency processing on shadow effects on the rendered and fused images;
and the control processing unit is used for finishing the control of the user on the image application, and uploading a corresponding control processing instruction to the cloud end through network transmission to perform interactive information processing among multiple users and user state storage.
The end-to-end image rendering fusion method and system have the following beneficial effects:
(1) The scheme realizes cloud rendering and data-packet delivery of the multiplexed scene by the cloud graphics application, together with character and equipment graphics rendering, resource integration, light and shadow synchronization, and control processing on the terminal side. Cloud and terminal capabilities are fused: on the basis of guaranteed network transmission, cloud computing resources and general scene rendering data are shared, the terminal's processing capability is fully utilized, and the capabilities of both the cloud and the terminal are exploited.
Fusing end-to-end rendering capability: based on the principle of handling the multiplexed part in the cloud and the personalized part locally, the scheme differs from modes that rely solely on cloud or solely on terminal computation; it better exploits the capabilities of both sides while balancing user convenience and large-scale deployment cost.
The cloud carrying capacity is effectively improved: cloud graphics applications on the market, for example cloud game implementations, all construct an operating environment for each user and perform cloud rendering and video encoding per user, so the cloud's operating pressure is high. With the method of this application, the cloud constructs the operating environment per processed game scene and delivers the multiplexed scene data directly to multiple users as data packets according to each user's coordinates. This removes the rendering work and video encoding for user-personalized characters, equipment, and the like, multiplying the service carrying capacity of the cloud.
(2) Meanwhile, compared with the fully cloud-based mode of graphics applications adopted on the market, in which the terminal only plays video and uploads control instructions, the scheme has the advantages of multi-user sharing of cloud resources, a short processing flow, low server pressure, and low latency.
Drawings
FIG. 1 is a flow chart of a method described herein;
FIG. 2 is a schematic view of a system according to the present application;
FIG. 3 is a schematic diagram of a cloud-side general scene graph capture;
FIG. 4 is a schematic diagram of the resolution transition process of the nine-square grid.
Detailed Description
The technical solution of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a method according to the present application, as shown in fig. 1, the method comprising:
the terminal initiates an image application service request to the cloud, and the cloud performs resource scheduling and logic division according to the image application service request; the logic division logically splits the corresponding image application according to the principle of generality and user personalization;
the cloud delivers the user-personalized logic, user characters, lighting effects, and equipment to the terminal for processing;
the cloud performs the general logic processing, then renders the general image according to the result of the general logic computation to obtain general image rendering data, intercepts the general scene, and packs the general image rendering data and the general scene into a general scene data packet; meanwhile, the terminal performs the user-personalized logic processing, and then personally renders the local user characters, equipment, and the image data related to them according to the result of that logic, obtaining personalized rendering data;
the cloud transmits the general scene data packet to the terminal over the network; the terminal receives the general scene data packet from the cloud and fuses the personalized rendering data with the general image rendering data to complete the rendering fusion of the image;
the terminal performs consistency processing of light and shadow effects on the fused image;
and the terminal handles the user's control of the image application and uploads the corresponding control instructions to the cloud for multi-user interaction processing and user state storage.
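The division of labor described in the flow above can be summarized in a minimal sketch. This is an illustrative toy model, not the patent's implementation: all names (divide_logic, cloud_render, the shared flag, and so on) are invented for the example, and rendering is reduced to symbolic dictionaries.

```python
# Illustrative sketch of the cloud/terminal split: the cloud handles the
# multiplexed (shared) features, the terminal handles personalized ones,
# and the terminal fuses both into the final frame.

def divide_logic(app_features):
    """Logic division: split features into a general (multiplexed) part for
    the cloud and a personalized part for the terminal."""
    general = [f for f in app_features if f["shared"]]
    personalized = [f for f in app_features if not f["shared"]]
    return general, personalized

def cloud_render(general_features, user_pos):
    """Cloud: run general logic, render the shared scene, intercept the
    region around the user, and pack it into a 'general scene data packet'."""
    scene = {"features": [f["name"] for f in general_features], "center": user_pos}
    return {"scene": scene, "rendering_data": f"tiles@{user_pos}"}

def terminal_render(personalized_features):
    """Terminal: run personalized logic and render characters/equipment locally."""
    return {"layers": [f["name"] for f in personalized_features]}

def fuse(packet, personal):
    """Terminal: composite the local personalized layers over the cloud scene."""
    return {"base": packet["scene"], "overlay": personal["layers"]}

features = [
    {"name": "terrain", "shared": True},
    {"name": "npc_crowd", "shared": True},
    {"name": "player_character", "shared": False},
    {"name": "player_equipment", "shared": False},
]
general, personal = divide_logic(features)
frame = fuse(cloud_render(general, (4, 4)), terminal_render(personal))
```

Only the control instructions that change the shared scene would then travel back to the cloud, which is what keeps the round trip short.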
Fig. 2 is a schematic diagram of a system according to the present application. The system includes a cloud and a terminal; the cloud comprises a system management module and a cloud rendering module, and the terminal comprises a personalized logic processing unit, a personalized graphics rendering unit, an acquisition unit, a fusion unit, a light and shadow synchronization unit, and a control processing unit.
The system management module comprises a server management unit, a rendering logic division unit, an interaction synchronization management unit and a cloud rendering management unit; the cloud rendering module comprises a general logic processing unit, a general image rendering unit, a self-adaptive range intercepting unit and a data packaging unit.
The system management module mainly provides functions such as coordinating the division of labor between the cloud's general logic and the terminal's personalized logic processing and rendering, and invoking the corresponding computing resources.
The server management unit schedules resources according to the image application service request initiated by the terminal; specifically, it provides the division of labor between cloud general logic and terminal personalized logic processing and rendering, and the invocation of the corresponding computing resources.
The rendering logic division unit logically divides the corresponding image application according to the principle of generality and user personalization: the general logic and the corresponding general scene rendering are handed to the cloud for processing, while the personalized logic, user characters, lighting effects, equipment, and the like are handed to the terminal for processing.
And the interaction synchronization management unit is used for performing interaction synchronization processing of user coordinates and multi-user interaction on the general logic and the general image rendering data of the cloud and the user personalized logic and personalized rendering data of the terminal.
The cloud rendering management unit manages logic division and network addressing transmission; specifically, for the cloud's general rendering part, it performs adaptive division logic management of user-personalized scenes, network addressing and delivery management, and the like.
The cloud rendering module implements a multi-user-oriented general scene rendering service based on cloud rendering capability, mainly comprising the functions of general logic processing, general graphics rendering, adaptive range interception, and scene data packing.
The general logic processing unit performs the general logic computation before general image rendering; specifically, it completes in the cloud the pre-rendering calculations such as multi-user multiplexed graphics association and scene element invocation.
The general image rendering unit renders the general image to obtain general image rendering data; for example, based on the rendering resources of one or more servers, it completes unified rendering of the graphics application's general scene in the cloud.
The adaptive range intercepting unit intercepts the general scene. Specifically, the corresponding game picture is captured according to the user's current coordinate position. Either a uniform scene division or a per-user personalized division can be used; considering implementation complexity, a uniform division mechanism such as the nine-square (3x3) grid mode can be adopted.
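The uniform capture around the user's coordinate can be sketched as follows, assuming a tile-aligned scene map. The function name, tile size, and coordinate scheme are illustrative assumptions, not details from the patent.

```python
def nine_grid(map_w, map_h, tile, user_xy):
    """Return the 3x3 block of tile coordinates centered on the tile that
    contains the user, clamped to the map bounds (tiles off the map edge
    are simply dropped)."""
    cx, cy = user_xy[0] // tile, user_xy[1] // tile   # tile containing the user
    cols, rows = map_w // tile, map_h // tile
    return [(x, y)
            for y in range(cy - 1, cy + 2)
            for x in range(cx - 1, cx + 2)
            if 0 <= x < cols and 0 <= y < rows]
```

For a user in the interior of the map this yields nine tiles (the full nine-square grid); near a map corner only the tiles that actually exist are captured.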
And the data packaging unit is used for packaging the general image rendering data and the general scene to obtain a general scene data packet and transmitting the general scene data packet to the terminal.
The terminal side processes the parts related to user personalization, including the functions of personalized logic computation, personalized graphics rendering, cloud rendering data acquisition, layer integration, light and shadow effect synchronization, and control processing.
The personalized logic processing unit performs the user-personalized logic processing: according to the user's game character position and the corresponding interaction information issued by the cloud, it calls local character and equipment data and completes the corresponding user attribute and interaction relation processing.
The personalized graphics rendering unit performs personalized rendering of the local user characters, equipment, and the image data related to them according to the result of the user-personalized logic computation, obtaining personalized rendering data.
The acquisition unit acquires the general scene data packet from the cloud; specifically, for data delivered as an audio/video stream, it completes decoding and extraction.
The fusion unit fuses the personalized rendering data and the general image rendering data to complete the rendering fusion of the image. Specifically, it integrates the frame-buffer data output by local rendering with the general scene data issued by the cloud, including mapping the coordinates of the local personalized rendering onto the cloud-rendered shared scene and performing layer superposition processing.
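The layer superposition step can be illustrated with a standard alpha "over" composite of the terminal's personalized RGBA layer onto the cloud's RGB frame. The patent does not specify the blend rule, so treating it as source-over compositing is an assumption for illustration; frames are plain nested lists of pixel tuples to keep the sketch self-contained.

```python
def alpha_over(base, overlay):
    """Composite an RGBA overlay (the terminal's personalized rendering)
    over an RGB base frame (the cloud's general scene) using the standard
    'over' rule: out = overlay * a + base * (1 - a), per channel."""
    out = []
    for base_row, over_row in zip(base, overlay):
        row = []
        for (br, bg, bb), (r, g, b, a) in zip(base_row, over_row):
            row.append((round(r * a + br * (1 - a)),
                        round(g * a + bg * (1 - a)),
                        round(b * a + bb * (1 - a))))
        out.append(row)
    return out

# 1x2 cloud frame, all black; overlay: one opaque red pixel, one fully transparent
base = [[(0, 0, 0), (0, 0, 0)]]
overlay = [[(255, 0, 0, 1.0), (0, 0, 0, 0.0)]]
fused = alpha_over(base, overlay)   # red where the character is, scene elsewhere
```

In practice the fusion would operate on the decoded video frame and the local GPU frame buffer, but the per-pixel rule is the same.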
The light and shadow synchronization unit performs consistency processing of light and shadow effects on the fused image, according to factors such as the unified spatio-temporal position at the terminal.
The control processing unit handles the user's control of the image application and uploads the corresponding control instructions to the cloud over the network for multi-user interaction processing and user state storage. Placing control processing on the terminal side avoids uploading most operation instructions, effectively shortens the chain of game operations, and significantly reduces latency.
Network transmission realizes high-speed data delivery from the cloud to the terminal and control-instruction uploading from the terminal to the cloud. High-speed data delivery means issuing the general graphics scene processed by the cloud to the terminal, for example in audio/video form; instruction uploading means the terminal uploads to the cloud the operation instructions that change the general scene, such as position movement.
As a specific embodiment, when the data packing unit packs the general image rendering data and the general scene into a general scene data packet and issues it to the terminal, as shown in fig. 3, the application differs from current terminal- or cloud-run image applications, which open the whole map at once: here the whole game scene map is suitably divided. Specifically: the general scene data packet is divided according to a nine-square (3x3) grid into nine data packets, which are then issued to the terminal. The area covered by the nine-square grid is within twice the maximum resolution of the terminal; the center tile of the grid is the position of the terminal user, four edge tiles adjoin the four edges of the center tile, and four corner tiles adjoin its four corners.
To further reduce cloud and network transmission pressure while matching the user's viewing range, the resolution of the center tile is the highest resolution that can be provided, the resolution of each edge tile is 50% of the highest resolution, and the resolution of each corner tile is 30% of the highest resolution. With this processing, the operating pressure drops to only about 46.67% of that of outputting all nine tiles at the highest resolution, improving processing efficiency accordingly.
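The 46.67% figure follows directly from the tile weighting stated above, treating each tile's stated resolution percentage as its share of output, as the text does:

```python
# Resolution scale per tile class, as stated in the text.
CENTER, EDGE, CORNER = 1.0, 0.5, 0.3

def relative_load():
    """Output load of the tiered nine-square grid, relative to sending all
    nine tiles at the highest resolution: one center, four edges, four corners."""
    load = 1 * CENTER + 4 * EDGE + 4 * CORNER   # = 1.0 + 2.0 + 1.2 = 4.2 tile-equivalents
    return load / 9
```

So 4.2 of 9 full-resolution tile-equivalents are transmitted, i.e. about 46.67% of the uniform-resolution load.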
When the end user's position changes, as shown in fig. 4, the user moves from point A of tile (a) to point B of tile (b); tiles 3, 4, 5 in (a), associated with point A, are removed, and tiles 10, 11, 12 in (b) are added as part of the new grid.
Meanwhile, during the move from point A to point B, the resolution levels of the different areas of the nine-square grid change synchronously. Before the move, center tile A in (a) is served at the highest resolution, edge tiles 2, 4, 6, 8 are output at 50% of the highest resolution, and corner tiles 3, 5, 7, 9 at 30% of the highest resolution. After the move, center tile B in (b) is served at the highest resolution, edge tiles 1, 7, 9, 11 are output at 50% of the highest resolution, and corner tiles 6, 8, 10, 12 at 30% of the highest resolution.
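The tile add/drop and the synchronous reclassification during a move can be sketched as set operations on tile coordinates. The (x, y) tile-coordinate scheme below is illustrative; it does not reproduce the figure's tile numbering, only the mechanism.

```python
def grid_tiles(center):
    """All nine tile coordinates of the 3x3 grid around a center tile."""
    cx, cy = center
    return {(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

def classify(center):
    """Map each tile of the 3x3 grid to its resolution class."""
    cx, cy = center
    cls = {}
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                c = "center"            # highest resolution
            elif dx == 0 or dy == 0:
                c = "edge"              # 50% of the highest resolution
            else:
                c = "corner"            # 30% of the highest resolution
            cls[(cx + dx, cy + dy)] = c
    return cls

def move(old_center, new_center):
    """Tiles to drop and tiles to newly deliver when the user's tile changes."""
    old, new = grid_tiles(old_center), grid_tiles(new_center)
    return sorted(old - new), sorted(new - old)

dropped, added = move((1, 1), (2, 1))   # user moves one tile to the right
```

A one-tile move exchanges exactly one column (or row) of three tiles, and re-running classify on the new center reassigns the center/edge/corner resolutions.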
The nine-square-grid mode is also network-adaptive. When network quality is poor, the tiles around the core area of the grid, for example tiles 1, 2, 4, 6, 8 around point A, are transmitted at higher picture quality, the other tiles can use lower picture quality, and in extreme conditions their transmission can be skipped entirely. Tiles 1, 2, 4, 6, 8 can be further refined so that tile 1, which the user mainly focuses on, remains sufficiently sharp while the other tiles undergo quality reduction; this reduces network transmission pressure and increases cloud carrying capacity. Compared with the prior-art processing of the whole map, this mode reduces operating pressure, adapts flexibly to changes of the user's position, and, by adjusting picture quality per tile, effectively adapts to changes in network state.
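The network-adaptive degradation can be sketched as a per-tile quality policy. The bandwidth thresholds and the exact degradation factors below are illustrative assumptions, not values from the patent, which only states that outer tiles degrade first and may be skipped in extreme conditions.

```python
def tile_quality(tile_class, bandwidth_mbps):
    """Pick a per-tile quality scale given the tile's class ('center', 'edge',
    or 'corner') and a rough bandwidth estimate. Thresholds are assumptions."""
    base = {"center": 1.0, "edge": 0.5, "corner": 0.3}[tile_class]
    if bandwidth_mbps >= 20:
        return base                     # good network: nominal resolution tiers
    if bandwidth_mbps >= 5:
        # degraded network: keep the center sharp, halve the outer tiles
        return base if tile_class == "center" else base / 2
    # extreme conditions: transmit only the center tile, skip the rest
    return base if tile_class == "center" else 0.0
```

The key property is that the center tile, which the user is actually watching, never degrades; savings come entirely from the peripheral tiles.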
The foregoing is an exemplary embodiment of the present disclosure, and the scope of the present disclosure is defined by the claims and their equivalents.

Claims (6)

1. An end-to-end image rendering fusion method, characterized by comprising the following steps:
the terminal initiates an image application service request to the cloud, and the cloud performs resource scheduling and logic division according to the image application service request; the logic division logically splits the corresponding image application according to the principle of generality and user personalization;
the cloud delivers the user-personalized logic, user characters, lighting effects, and equipment to the terminal for processing;
the cloud performs the general logic processing, then renders the general image according to the result of the general logic computation to obtain general image rendering data, intercepts the general scene, and packs the general image rendering data and the general scene into a general scene data packet; meanwhile, the terminal performs the user-personalized logic processing, and then personally renders the local user characters, equipment, and the image data related to them according to the result of that logic, obtaining personalized rendering data;
the cloud transmits the general scene data packet to the terminal over the network; the terminal receives the general scene data packet from the cloud and fuses the personalized rendering data with the general image rendering data to complete the rendering fusion of the image;
the terminal performs consistency processing of light and shadow effects on the fused image;
and the terminal handles the user's control of the image application and uploads the corresponding control instructions to the cloud over the network for multi-user interaction processing and user state storage.
2. The method of claim 1, wherein when the cloud issues the generic scene data packet to the terminal, the method comprises:
dividing the general scene data packet according to a nine-square grid to obtain nine data packets, and then issuing the nine data packets to a terminal;
the area covered by the nine-square grid is within twice the maximum resolution of the terminal; the center tile of the nine-square grid is the position of the terminal user, four edge tiles adjoin the four edges of the center tile, and four corner tiles adjoin the four corners of the center tile; the resolution of the center tile is the highest resolution that can be provided, the resolution of each edge tile is 50% of the highest resolution, and the resolution of each corner tile is 30% of the highest resolution.
3. The method of claim 2, wherein when the location of the end user changes, the location of the center grid changes, and the edge grid and corner grid of the center grid change accordingly.
4. An end-to-end image rendering fusion system is characterized by comprising a cloud end and a terminal, wherein the cloud end comprises a system management module and a cloud end rendering module;
the system management module comprises:
the server management unit is used for scheduling resources according to the image application service request initiated by the terminal;
the rendering logic division unit is used for logically dividing the corresponding image application according to the principle of generality and user personalization;
the interactive synchronization management unit is used for carrying out interactive synchronization processing on user coordinates and multi-user interaction on the general logic and general image rendering data of the cloud and the user personalized logic and personalized rendering data of the terminal;
the cloud rendering management unit is used for managing logic division and network addressing transmission;
the cloud rendering module comprises:
the general logic processing unit is used for performing general logic operation and calculating before rendering the general image;
the universal image rendering unit is used for rendering the universal image to obtain universal image rendering data;
the adaptive range intercepting unit intercepts the general scene;
the data packaging unit is used for packaging the general image rendering data and the general scene to obtain a general scene data packet and transmitting the general scene data packet to the terminal through network transmission;
the terminal comprises:
the personalized logic processing unit is used for performing user-personalized logic processing: according to the position of the user's game character and the corresponding interaction information issued by the cloud, it calls local character and equipment data and completes the corresponding user attribute and interaction relation processing;
the personalized image rendering unit is used for rendering the local personalized user character, equipment, and the image data associated with them according to the result of the user-personalized logic operation, to obtain personalized rendering data;
the acquisition unit is used for acquiring the general scene data packet from the cloud;
the fusion unit is used for fusing the personalized rendering data with the general image rendering data to complete the rendering fusion of the image;
the shadow synchronization unit is used for applying a consistent shadow effect to the fused image;
and the control processing unit is used for handling the user's control of the image application, and for uploading the corresponding control instructions to the cloud over the network for multi-user interaction processing and user state saving.
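The terminal-side fusion unit described above can be sketched as a simple compositing step: the cloud's general scene is the background layer, and the terminal's locally rendered personalized content (user character, equipment) replaces pixels wherever a mask marks them. This is only an illustrative sketch; the function name `fuse`, the list-of-lists image layout, and the boolean mask are assumptions for illustration, not details from the patent.

```python
def fuse(general_frame, personal_frame, personal_mask):
    """Composite the terminal's personalized layer over the cloud's general layer.

    general_frame  -- 2-D list of pixels rendered by the cloud
    personal_frame -- 2-D list of pixels rendered locally by the terminal
    personal_mask  -- 2-D list of booleans; True where the personalized
                      content should replace the cloud pixel
    """
    fused = []
    for g_row, p_row, m_row in zip(general_frame, personal_frame, personal_mask):
        fused.append([p if m else g for g, p, m in zip(g_row, p_row, m_row)])
    return fused

# Tiny 2x2 example: the user's character occupies only the top-left pixel.
general  = [["sky", "sky"], ["grass", "grass"]]
personal = [["hero", None], [None, None]]
mask     = [[True, False], [False, False]]
print(fuse(general, personal, mask))  # [['hero', 'sky'], ['grass', 'grass']]
```

A real implementation would operate on GPU textures with alpha blending rather than per-pixel Python loops, but the layering order (cloud background, terminal foreground) is the same.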
5. The system of claim 4, wherein, when the data packing unit sends the general scene data packet to the terminal over the network:
the general scene data packet is divided according to a nine-square (3×3) grid to obtain nine data packets, which are transmitted to the terminal over the network;
the area covered by the nine-square grid is twice the maximum resolution of the terminal; the center cell of the grid is the position of the terminal user, the four edge-adjacent cells border the four edges of the center cell, and the four corner-adjacent cells border the four corners of the center cell; the resolution of the center cell is the highest resolution that can be provided, the resolution of the edge cells is 50% of the highest resolution, and the resolution of the corner cells is 30% of the highest resolution.
6. The system of claim 5, wherein, when the position of the terminal user changes, the position of the center cell changes, and the edge cells and corner cells of the grid change accordingly.
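The nine-grid division of claims 5 and 6 can be sketched as follows: the covered area is twice the terminal's maximum resolution, split into a 3×3 grid centered on the user, with the center cell at full resolution, the four edge-adjacent cells at 50%, and the four corner cells at 30%. The function name, coordinate convention, and tuple layout below are assumptions for illustration only.

```python
def nine_grid(user_x, user_y, max_w, max_h):
    """Return the nine cells as (x, y, resolution_factor) tuples.

    The grid spans 2*max_w by 2*max_h, so each cell is one third of that,
    and the center cell is placed at the user's position.
    """
    cell_w, cell_h = 2 * max_w / 3, 2 * max_h / 3
    cells = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                factor = 1.0          # center cell: highest resolution
            elif dx == 0 or dy == 0:
                factor = 0.5          # four edge-adjacent cells
            else:
                factor = 0.3          # four corner cells
            cells.append((user_x + dx * cell_w, user_y + dy * cell_h, factor))
    return cells

# Claim 6's recentering falls out naturally: when the user moves, the grid
# is recomputed around the new position, and the edge and corner cells
# follow the center cell.
grid = nine_grid(100.0, 100.0, 1920, 1080)
assert len(grid) == 9
assert sum(1 for _, _, f in grid if f == 1.0) == 1   # one center cell
assert sum(1 for _, _, f in grid if f == 0.5) == 4   # four edge cells
assert sum(1 for _, _, f in grid if f == 0.3) == 4   # four corner cells
```

Lower-resolution neighbor cells cost less bandwidth while still being available immediately if the user moves toward them, which is the apparent motivation for the tiered 100%/50%/30% scheme.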
CN202111010210.5A 2021-08-31 2021-08-31 End-to-end image rendering fusion method and system Pending CN113440838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111010210.5A CN113440838A (en) 2021-08-31 2021-08-31 End-to-end image rendering fusion method and system

Publications (1)

Publication Number Publication Date
CN113440838A true CN113440838A (en) 2021-09-28

Family

ID=77819273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111010210.5A Pending CN113440838A (en) 2021-08-31 2021-08-31 End-to-end image rendering fusion method and system

Country Status (1)

Country Link
CN (1) CN113440838A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827186A (en) * 2022-02-25 2022-07-29 阿里巴巴(中国)有限公司 Cloud application processing method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110393921A (en) * 2019-08-08 2019-11-01 腾讯科技(深圳)有限公司 Processing method, device, terminal, server and the storage medium of cloud game
CN110891659A (en) * 2017-06-09 2020-03-17 索尼互动娱乐股份有限公司 Optimized delayed illumination and foveal adaptation of particle and simulation models in a point of gaze rendering system
CN111803940A (en) * 2020-01-14 2020-10-23 厦门雅基软件有限公司 Game processing method and device, electronic equipment and computer-readable storage medium
CN113082693A (en) * 2021-03-29 2021-07-09 阿里巴巴新加坡控股有限公司 Rendering method, cloud game rendering method, server and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210928