WO2021164533A1 - Rendering method and apparatus - Google Patents

Rendering method and apparatus

Info

Publication number
WO2021164533A1
WO2021164533A1 (PCT/CN2021/074693; CN2021074693W)
Authority
WO
WIPO (PCT)
Prior art keywords
operation instruction
server
user
screen
rendering
Prior art date
Application number
PCT/CN2021/074693
Other languages
English (en)
French (fr)
Inventor
张希文
周晓鹏
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21756688.4A (EP4088795A4)
Priority to US17/904,661 (US20230094880A1)
Publication of WO2021164533A1

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30: Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35: Details of game servers
    • A63F13/355: Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
    • A63F13/358: Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/67: Generating or modifying game content adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50: Features of games characterized by details of game servers
    • A63F2300/53: Details of basic data processing
    • A63F2300/538: Details of basic data processing for performing operations on behalf of the game client, e.g. rendering

Definitions

  • This application relates to electronic technology, and in particular to a rendering method and device.
  • Cloud gaming is a gaming mode based on cloud computing, in which all games run on the server side.
  • the rendered game screen is video-compressed and transmitted to the client through the network.
  • the user can watch the game screen and operate the game through the client.
  • the operation instruction is transmitted to the server through the network, and the server responds to the operation instruction.
  • the processing delay of cloud games is related to the communication characteristics of the network; once the network fluctuates, the processing delay is prolonged and the game screen freezes.
  • the embodiments of the present application provide a rendering method and device to reduce processing delay and avoid screen freezes.
  • the present application provides a rendering method, including: receiving a first operation instruction from a user; rendering, according to the first operation instruction, a first screen of an application program corresponding to the first operation instruction; predicting a second operation instruction according to the first operation instruction; rendering, according to the second operation instruction, a second screen of the application program corresponding to the second operation instruction; and, if no operation instruction from the user is received within a preset time period after the first operation instruction is received, sending the rendered second screen to the user.
  • In this way, the server predicts the user's operation and can render, in advance, the screen switching caused by the user's operation, reducing the processing delay and avoiding screen freezes.
  • the predicting the second operation instruction according to the first operation instruction includes: using an artificial intelligence method to predict the second operation instruction according to the first operation instruction.
  • predicting the user's operation instructions through the artificial intelligence method can improve the accuracy of the prediction result.
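  • As an illustration of predicting the second operation instruction from the first, the sketch below uses a simple bigram frequency model; this particular model and the instruction names are assumptions, since the application does not specify which artificial intelligence method is used:

```python
from collections import Counter, defaultdict

class InstructionPredictor:
    """Predict the user's next operation instruction from observed pairs.

    A bigram frequency model used only as an illustrative stand-in for
    the unspecified AI method in the application.
    """

    def __init__(self):
        # maps previous instruction -> Counter of follow-up instructions
        self.transitions = defaultdict(Counter)

    def observe(self, prev_instruction, next_instruction):
        """Record that next_instruction followed prev_instruction."""
        self.transitions[prev_instruction][next_instruction] += 1

    def predict(self, first_instruction):
        """Return the most frequent follow-up, or None if unseen."""
        followers = self.transitions.get(first_instruction)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

predictor = InstructionPredictor()
predictor.observe("move_left", "jump")    # hypothetical game instructions
predictor.observe("move_left", "jump")
predictor.observe("move_left", "attack")
print(predictor.predict("move_left"))     # the most frequent follower
```

A production predictor would likely be trained on many users' instruction histories; the interface (observe, then predict the second instruction from the first) is what matters here.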
  • the rendering the first picture of the application program corresponding to the first operation instruction includes: determining the first picture, and rendering the first picture.
  • the rendering the second screen of the application program corresponding to the second operation instruction includes: determining the second screen, and rendering the second screen.
  • the preset duration is 100ms or 150ms.
  • In this application, if no operation instruction is received from the client within a relatively short period, the screen is rendered based on the prediction result, which can avoid screen freezes.
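  • The method of the first aspect can be sketched as a server-side handler; the instruction names, the stand-in render and prediction functions, and the choice of the 100 ms preset duration are illustrative assumptions:

```python
import queue

PRESET_TIMEOUT_S = 0.1  # 100 ms, one of the preset durations mentioned

def render(instruction):
    """Stand-in for GPU rendering: returns a frame labeled by instruction."""
    return f"frame[{instruction}]"

def predict_next(instruction):
    """Stand-in predictor: a hypothetical fixed mapping for illustration."""
    return {"move_left": "jump"}.get(instruction, "idle")

def handle(first_instruction, incoming, send):
    """Render the first screen, pre-render the predicted second screen,
    and send the pre-rendered frame if the user stays silent for the
    preset duration after the first instruction."""
    send(render(first_instruction))              # first screen
    predicted = predict_next(first_instruction)  # second operation instruction
    second_frame = render(predicted)             # pre-rendered second screen
    try:
        nxt = incoming.get(timeout=PRESET_TIMEOUT_S)
        send(render(nxt))        # user acted in time: render the real instruction
    except queue.Empty:
        send(second_frame)       # no input within the preset duration

sent = []
incoming = queue.Queue()         # left empty: the user sends nothing further
handle("move_left", incoming, sent.append)
print(sent)
```

Because the second screen is rendered before the timeout expires, the predicted frame can be sent immediately when the wait ends, which is where the delay saving comes from.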
  • the present application provides an application server, including: a receiving module, configured to receive a first operation instruction from a user; a rendering module, configured to render, according to the first operation instruction, a first screen of an application program corresponding to the first operation instruction; a prediction module, configured to predict a second operation instruction according to the first operation instruction, the rendering module being further configured to render, according to the second operation instruction, a second screen of the application program corresponding to the second operation instruction; and a sending module, configured to send the rendered second screen to the user if no operation instruction from the user is received within a preset time period after the first operation instruction is received.
  • the prediction module is specifically configured to use an artificial intelligence method to predict a second operation instruction according to the first operation instruction.
  • the rendering module is specifically configured to determine the first picture and render the first picture.
  • the rendering module is specifically configured to determine the second picture and render the second picture.
  • the preset duration is 100ms or 150ms.
  • the present application provides a server, including: one or more processors and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of the above-mentioned first aspects.
  • the present application provides a computer-readable storage medium, including a computer program, which when executed on a computer, causes the computer to execute the method described in any one of the first to second aspects.
  • the present application provides a computer program, when the computer program is executed by a computer, it is used to execute the method described in any one of the first to second aspects.
  • Fig. 1 shows an exemplary structural diagram of a communication system
  • FIG. 2 shows an exemplary structural diagram of the server 200
  • FIG. 3 shows an exemplary structural diagram of the terminal device 300
  • FIG. 4 shows an exemplary structural diagram of the software layer of the terminal device 300
  • FIG. 5 is a flowchart of an embodiment of a rendering method of this application.
  • Fig. 6 shows an exemplary schematic diagram of the prediction process of the server
  • Figures 7-11 exemplarily show a schematic diagram of cloud game screen switching
  • FIG. 12 is a schematic structural diagram of an embodiment of an application server of this application.
  • "At least one (item)" refers to one or more, and "multiple" refers to two or more.
  • "And/or" describes an association relationship between associated objects, indicating that three relationships may exist. For example, "A and/or B" can mean: only A, only B, or both A and B, where A and B can be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it. "At least one of the following items" or similar expressions refers to any combination of these items, including any combination of a single item or multiple items.
  • For example, "at least one of a, b, or c" can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c can each be single or multiple.
  • Fig. 1 shows an exemplary structural diagram of a communication system.
  • the communication system includes a server and a terminal device.
  • the communication system may also include multiple servers, and the coverage area of each server may include other numbers of terminal devices, which is not limited in this application.
  • the communication system may also include other network entities such as a network controller and a switching device, and the present application is not limited thereto.
  • the black arrow in Figure 1 indicates that there is a communication connection between the server and the terminal device, that is, data transmission can be realized between the server and the terminal device through a communication network.
  • the above-mentioned communication network may be a local area network, or a wide area network switched through a relay (relay) device, or include a local area network and a wide area network.
  • the communication network may be a short-distance communication network such as a Wi-Fi hotspot network, a Wi-Fi P2P network, a Bluetooth network, a ZigBee network, or a near field communication (NFC) network.
  • the communication network may also be a wide area network, for example, a 3rd-generation mobile communication technology (3G) network, a 4th-generation mobile communication technology (4G) network, a 5th-generation mobile communication technology (5G) network, a public land mobile network (PLMN), or the Internet, etc., which is not limited in the embodiments of the present application.
  • FIG. 1 is only for ease of understanding, and schematically shows a communication system, but this should not constitute any limitation to this application.
  • the communication system may also include a larger number of servers or a larger number of terminal devices.
  • the servers communicating with different terminal devices may be the same server or different servers, and the number of servers communicating with different terminal devices may be the same or different, which is not limited in this application.
  • the server in the communication system may be any device with a transceiver function or a chip that can be installed in the device.
  • FIG. 2 shows an exemplary structural diagram of the server 200.
  • the server includes at least one processor 201, at least one memory 202, and at least one network interface 203.
  • the processor 201, the memory 202, and the network interface 203 are connected, for example, by a bus. In the present application, the connection may include various interfaces, transmission lines, or buses, which are not limited in this embodiment.
  • the network interface 203 is used to connect the server to other communication devices through a communication link, such as an Ethernet interface.
  • the processor 201 is mainly used for processing communication data, controlling the entire server, executing software programs, and processing data of the software programs, for example, for supporting the server to perform the actions described in the embodiments.
  • the processor 201 is mainly used to control the entire server, execute software programs, and process data of the software programs.
  • the server may include multiple processors to enhance its processing capability, and various components of the server may be connected through various buses.
  • the processor 201 may also be expressed as a processing circuit or a processor chip.
  • the memory 202 is mainly used to store software programs and data.
  • the memory 202 may exist independently and is connected to the processor 201.
  • the memory 202 may be integrated with the processor 201, for example, integrated in one chip.
  • the memory 202 can store program codes for executing the technical solutions of the present application, and the processor 201 controls the execution.
  • Various types of computer program codes that are executed can also be regarded as drivers of the processor 201.
  • Figure 2 shows only one memory and one processor. In an actual server, there may be multiple processors and multiple memories.
  • the memory may also be referred to as a storage medium or storage device.
  • the memory may be a storage element on the same chip as the processor, that is, an on-chip storage element, or an independent storage element, which is not limited in this application.
  • terminal equipment in the communication system can also be referred to as user equipment (UE), which can be deployed on land (indoor or outdoor, handheld or vehicle-mounted), on water (such as ships), or in the air (such as airplanes, balloons, and satellites).
  • Terminal devices can be mobile phones, tablets, wearable devices with wireless communication functions (such as smart watches), location trackers with positioning functions, computers with wireless transceiver functions, virtual reality (VR) devices, augmented reality (AR) devices, wireless devices in smart homes, etc.; this application does not limit this.
  • the aforementioned terminal equipment and the chips that can be installed in the aforementioned terminal equipment are collectively referred to as terminal equipment.
  • FIG. 3 shows an exemplary structural diagram of the terminal device 300.
  • the terminal device 300 includes: an application processor 301, a microcontroller unit (MCU) 302, a memory 303, a modem 304, a radio frequency (RF) module 305, a wireless fidelity (Wi-Fi) module 306, a Bluetooth module 307, a sensor 308, an input/output (I/O) device 309, a positioning module 310, and other components.
  • These components can communicate through one or more communication buses or signal lines.
  • the aforementioned communication bus or signal line may be, for example, a CAN bus.
  • the terminal device 300 may include more or fewer components than those shown in the figure, or a combination of certain components, or a different component arrangement.
  • the application processor 301 is the control center of the terminal device 300, and uses various interfaces and buses to connect various components of the terminal device 300.
  • the processor 301 may include one or more processing units.
  • the memory 303 stores computer programs, such as the operating system 311 and the application program 312 shown in FIG. 3.
  • the application processor 301 is configured to execute the computer program in the memory 303, so as to realize the functions defined by the computer program.
  • the application processor 301 executes the operating system 311 to implement various functions of the operating system on the terminal device 300.
  • the memory 303 also stores other data besides the computer program, such as data generated during the running of the operating system 311 and the application program 312.
  • the memory 303 generally includes an internal memory and an external memory. The internal memory includes but is not limited to random access memory (RAM), read-only memory (ROM), or cache.
  • External storage includes, but is not limited to, flash memory (flash memory), hard disks, optical disks, universal serial bus (USB) disks, etc.
  • Computer programs are usually stored in the external memory, and the processor loads the program from the external memory into the internal memory before executing it.
  • the memory 303 may be independent and connected to the application processor 301 through a bus; the memory 303 may also be integrated with the application processor 301 into a chip subsystem.
  • MCU 302 is a co-processor used to acquire and process data from sensor 308.
  • the processing power and power consumption of the MCU 302 are lower than those of the application processor 301, but the MCU 302 is "always on" and can continuously collect and process sensor data while the application processor 301 is in sleep mode, ensuring normal operation of the sensors with extremely low power consumption.
  • the MCU 302 may be a sensor hub chip.
  • the sensor 308 may include a light sensor and a motion sensor.
  • the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display 3091 according to the brightness of the ambient light, and the proximity sensor can turn off the power of the display when the terminal device 300 is moved to the ear.
  • As a motion sensor, the accelerometer can detect the magnitude of acceleration in various directions (usually three axes) and can detect the magnitude and direction of gravity when stationary; the sensor 308 may also include other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which are not described here.
  • the MCU 302 and the sensor 308 may be integrated on the same chip, or may be separate components, connected by a bus.
  • the modem 304 and the radio frequency module 305 constitute the communication subsystem of the terminal device 300, and are used to implement the main functions of the wireless communication standard protocol. Among them, modem 304 is used for encoding and decoding, signal modulation and demodulation, and equalization.
  • the radio frequency module 305 is used for receiving and transmitting wireless signals.
  • the radio frequency module 305 includes but is not limited to an antenna, at least one amplifier, a coupler, a duplexer, and the like.
  • the radio frequency module 305 cooperates with the modem 304 to realize the wireless communication function.
  • the modem 304 can be used as a separate chip, or can be combined with other chips or circuits to form a system-level chip or integrated circuit. These chips or integrated circuits can be applied to all terminal devices that implement wireless communication functions, including: mobile phones, computers, notebooks, tablets, routers, wearable devices, automobiles, home appliances, etc.
  • the terminal device 300 may also use the Wi-Fi module 306, the Bluetooth module 307, etc. to perform wireless communication.
  • the Wi-Fi module 306 is used to provide the terminal device 300 with network access that complies with Wi-Fi related standard protocols.
  • the terminal device 300 can access the Wi-Fi access point through the Wi-Fi module 306, and then access the Internet.
  • the Wi-Fi module 306 can also be used as a Wi-Fi wireless access point, which can provide Wi-Fi network access for other terminal devices.
  • the Bluetooth module 307 is used to implement short-distance communication between the terminal device 300 and other terminal devices (such as mobile phones, smart watches, etc.).
  • the Wi-Fi module 306 in this application may be an integrated circuit or a Wi-Fi chip, and the Bluetooth module 307 may be an integrated circuit or a Bluetooth chip.
  • the positioning module 310 is used to determine the geographic location of the terminal device 300. It is understandable that the positioning module 310 may specifically be a receiver of a positioning system such as a global positioning system (GPS), Beidou satellite navigation system, and Russian GLONASS.
  • the Wi-Fi module 306, the Bluetooth module 307, and the positioning module 310 may be separate chips or integrated circuits, respectively, or they may be integrated together.
  • the Wi-Fi module 306, the Bluetooth module 307 and the positioning module 310 may be integrated on the same chip.
  • the Wi-Fi module 306, the Bluetooth module 307, the positioning module 310, and the MCU 302 can also be integrated into the same chip.
  • the input/output device 309 includes, but is not limited to: a display 3091, a touch screen 3092, an audio circuit 3093, and so on.
  • the touch screen 3092 can collect touch events performed by the user on or near the touch screen 3092 (for example, operations performed by the user with a finger, a stylus, or any other suitable object on or near the touch screen 3092), and send the collected touch events to other devices (for example, the application processor 301).
  • the user's operation near the touch screen 3092 may be called floating touch; through floating touch, the user can select, move, or drag objects (such as icons) without directly touching the touch screen 3092.
  • the touch screen 3092 can be implemented using multiple types of technologies, such as resistive, capacitive, infrared, and surface acoustic wave.
  • the display (also called a display screen) 3091 is used to display information input by the user or information presented to the user.
  • the display can be configured in the form of liquid crystal display, organic light emitting diode, etc.
  • the touch screen 3092 can be overlaid on the display 3091. When the touch screen 3092 detects a touch event, it sends the event to the application processor 301 to determine the type of the touch event, and then the application processor 301 can provide corresponding visual output on the display 3091 according to the type of the touch event.
  • Although in FIG. 3 the touch screen 3092 and the display 3091 are two independent components used to implement the input and output functions of the terminal device 300, in some embodiments the touch screen 3092 and the display 3091 can be integrated to implement the input and output functions of the terminal device 300.
  • the touch screen 3092 and the display 3091 can be configured on the front of the terminal device 300 in the form of a full panel to realize a frameless structure.
  • the audio circuit 3093, the speaker 3094, and the microphone 3095 can provide an audio interface between the user and the terminal device 300.
  • on one hand, the audio circuit 3093 can transmit the electrical signal converted from received audio data to the speaker 3094, which converts it into a sound signal for output; on the other hand, the microphone 3095 converts a collected sound signal into an electrical signal, which the audio circuit 3093 receives and converts into audio data. The audio data is then sent, for example, to another terminal device through the modem 304 and the radio frequency module 305, or output to the memory 303 for further processing.
  • the terminal device 300 may also have a fingerprint recognition function.
  • a fingerprint collection device may be configured on the back of the terminal device 300 (for example, below the rear camera), or a fingerprint collection device may be configured on the front of the terminal device 300 (for example, below the touch screen 3092).
  • a fingerprint collection device can be configured in the touch screen 3092 to realize the fingerprint identification function, that is, the fingerprint collection device can be integrated with the touch screen 3092 to realize the fingerprint identification function of the terminal device 300.
  • the fingerprint collection device is configured in the touch screen 3092, may be a part of the touch screen 3092, or may be configured in the touch screen 3092 in other ways.
  • the main component of the fingerprint acquisition device in this application is a fingerprint sensor, which can use any type of sensing technology, including but not limited to optical, capacitive, piezoelectric or ultrasonic sensing technology.
  • the terminal device 300 can be logically divided into a hardware layer, an operating system 311, and an application program layer.
  • the hardware layer includes hardware resources such as the application processor 301, the MCU 302, the memory 303, the modem 304, the Wi-Fi module 306, the sensor 308, and the positioning module 310 as described above.
  • the operating system 311 carried by the terminal device 300 may be, for example, the Android operating system or another operating system; this application does not impose any restrictions on this.
  • the operating system 311 and the application program layer may be collectively referred to as the software layer of the terminal device 300.
  • FIG. 4 shows an exemplary structural diagram of the software layer of the terminal device 300. As shown in FIG. 4, the Android operating system is taken as an example: as software middleware between the hardware layer and the application layer, the operating system is a computer program that manages and controls hardware and software resources.
  • the application layer includes one or more applications, which can be any type of application such as social applications, e-commerce applications, and browsers. For example, desktop launcher, settings, calendar, camera, photos, calls and text messages, etc.
  • the operating system includes the kernel layer, the Android runtime and system libraries, and the application framework layer.
  • the kernel layer is used to provide underlying system components and services, such as: power management, memory management, thread management, hardware drivers, etc.; hardware drivers include display drivers, camera drivers, audio drivers, and touch drivers.
  • the kernel layer encapsulates the kernel driver, provides an interface to the application framework layer, and shields low-level implementation details.
  • the Android runtime and system library provide the required library files and execution environment for the executable program at runtime.
  • The Android runtime includes a virtual machine or virtual machine instance capable of converting the bytecode of an application into machine code.
  • System library is a program library that provides support for executable programs at runtime, including two-dimensional graphics engine, three-dimensional graphics engine, media library, surface manager, condition monitoring services, etc.
  • the application framework layer is used to provide various basic common components and services for applications in the application layer, including window managers, activity managers, package managers, resource managers, display policy services, and so on.
  • the functions of the various components of the operating system 311 described above can all be implemented by the application processor 301 executing the programs stored in the memory 303.
  • the terminal device 300 may include fewer or more components than those shown in FIG. 3, and the terminal device shown in FIG. 3 only includes components that are more relevant to the multiple implementations disclosed in this application.
  • the rendering method provided in this application is applicable to the communication system shown in FIG. 1.
  • the server may be a server of a provider of cloud computing-based application programs (APP).
  • the APP can adopt a client-server (C/S) structure: the client installed on the user's terminal device is responsible for interacting with the user, and the operation instructions generated when the user operates on the APP's operation interface are sent to the server.
  • the server is responsible for managing APP data, responding to operation instructions from the client, and rendering the screen displayed on the client.
  • the APP in this application can be a cloud game, which is a cloud computing-based game mode.
  • in the cloud game operation mode, all games run on the server; the server video-compresses the rendered game screen and then transmits it to the client through the network.
  • on the client side, the terminal device does not need a high-end processor or graphics card, only basic video decompression capability.
  • Cloud computing is an Internet-based computing method in which shared software and hardware resources and information can be provided to terminal devices on demand; the network that provides the resources is called the "cloud". Cloud games get rid of the dependence on hardware: the server side only needs to improve server performance without developing a new host, and the client side can obtain higher picture quality without equipping high-performance terminal equipment.
  • the process of a cloud game is as follows: first, the user operates the terminal device to connect to a transfer server and selects a game; the transfer server then sends the information of the selected game to a game server. The user's terminal device obtains the uniform resource locator (URL) of the game server and connects to the game server through the URL to start playing the game.
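The launch sequence above (client → transfer server → game-server URL → game server) can be sketched as follows; the game catalogue, server names, and URL scheme are purely illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

# Hypothetical catalogue held by the transfer server: it maps a selected
# game to the URL of the game server that runs it (names and the URL
# scheme are illustrative assumptions).
GAME_SERVERS = {"racing": "game://server-7/racing", "rpg": "game://server-3/rpg"}

@dataclass
class Session:
    user_id: str
    game: str
    url: str

def select_game(user_id: str, game: str) -> Session:
    """Transfer-server step: resolve the chosen game to a game-server URL."""
    return Session(user_id=user_id, game=game, url=GAME_SERVERS[game])

def connect(session: Session) -> str:
    """Client step: connect to the game server through the obtained URL."""
    return f"{session.user_id} connected to {session.url}"
```

From this point on, the client talks to the game server directly; the transfer server is only involved in game selection.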
  • the APP in this application can be a map.
  • the map runs on the server and plans the route.
  • the rendered map screen is video-compressed and then transmitted to the client through the network; the user can view the map screen and the planned route, and operate on the map screen for easier viewing.
  • the APP in this application can be a document editor.
  • the document is edited and managed on the server.
  • the rendered document screen is video-compressed and transmitted to the client through the network.
  • the user views the document page through the terminal device on which the client is installed, and operates on the document page to move related page elements.
  • the APP in this application may also be a cloud IoT, cloud identity, cloud storage, or cloud security application, etc.; this is not specifically limited here.
  • FIG. 5 is a flowchart of an embodiment of a rendering method of this application. As shown in FIG. 5, the method of this embodiment can be applied to the communication system shown in FIG. 1 above.
  • the rendering method may include:
  • Step 501 The client receives a first operation instruction from the user.
  • the above-mentioned first operation instruction is an instruction generated by a user's operation.
  • apps based on cloud computing usually adopt a C/S structure; users of this type of APP need to install the APP's client on the terminal device first, and then tap the APP icon to open the client.
  • the client connects to the server through the communication function of the terminal device and starts to run.
  • the client can store a large amount of resources in the APP.
  • the user inputs operation instructions through the client, and the client translates them into data and sends them to the server.
  • after the server processes the operation instructions, the processing results are sent back to the client and displayed graphically on the screen of the terminal device; the client can thus be seen as an intermediary between the user and the server.
  • the client translates the user's operation, that is, it generates operation instructions that the server can recognize.
  • the user's operations can include clicking, dragging, sliding, and long-pressing on the touch screen of a smart device; clicking and dragging with a computer mouse and input on a keyboard; and related operations on other input devices, etc.; this application does not specifically limit this.
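As a rough illustration of the translation step described above, a client might encode a raw user operation into a server-recognizable instruction as follows; the JSON field names and the set of operation types are assumptions for the sketch, since the text does not fix a wire format.

```python
import json

def make_operation_instruction(op_type: str, start: tuple, end: tuple, seq: int) -> str:
    """Translate a raw user operation (e.g. a drag on the touch screen)
    into an operation instruction the server can recognize.
    The encoding below is hypothetical."""
    if op_type not in {"click", "drag", "slide", "long_press", "zoom"}:
        raise ValueError(f"unsupported operation: {op_type}")
    return json.dumps({
        "seq": seq,            # ordering, so the server replays operations in sequence
        "op": op_type,
        "start": list(start),  # screen coordinates where the gesture began
        "end": list(end),      # screen coordinates where it ended
    })

instr = make_operation_instruction("drag", (120, 340), (200, 260), seq=1)
```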
  • Step 502 The client sends the first operation instruction to the server.
  • after obtaining the corresponding first operation instruction based on the user's operation, the client sends the first operation instruction to the server through the communication network between the terminal device and the server.
  • the user can send the first operation instruction to the server through the client installed on different terminal devices.
  • for example, the user walks on the road and sends the first operation instruction through the client installed on a mobile phone; after returning home, he immediately switches to a computer to continue playing the game, so that subsequent first operation instructions are sent to the server by the client installed on the computer.
  • although the first operation instructions come from different terminal devices, they all correspond to the same user.
  • the destination end of the rendered screen sent by the server can also correspond to different terminal devices.
  • if the user sends the first operation instruction with the client installed on the mobile phone, the rendered screen of the server is sent to the user's mobile phone; if the user sends the first operation instruction with the client installed on the computer, the rendered screen of the server is sent to the user's computer. These all correspond to the same user and do not affect the smoothness of the game.
  • Step 503 The server renders the first screen of the application program corresponding to the first operation instruction according to the first operation instruction.
  • Step 504 The server predicts the second operation instruction according to the first operation instruction.
  • Step 505 The server renders the second screen of the application program corresponding to the second operation instruction according to the second operation instruction.
  • both screen a and screen b are rendered by the server, video-compressed, sent to the client, and displayed on the screen of the terminal device after decompression. That is, any picture displayed on the screen of the terminal device running the client is rendered by the server; therefore, the server needs to know what kind of screen change the user's operation has caused.
  • after receiving the operation instruction from the client, the server performs corresponding processing according to the operation instruction; when the operation instruction causes a screen switch, it triggers the screen switch according to the operation instruction and renders the switched screen.
  • for example, the operation instruction indicates that the target person walks from a first position to a second position; during the movement, the scene where the target person is located changes, which causes the screen to switch from the screen corresponding to the first position to the screen corresponding to the second position.
  • after obtaining the operation instruction, the server therefore needs to render the screen corresponding to the second position.
  • alternatively, the operation instruction represents switching from a first document to a second document, or from the first page of a document to a second page; the document switch changes the page displayed on the screen and thus causes a screen switch, so after obtaining the operation instruction, the server renders the screen corresponding to the second document or the second page.
  • the server may also predict the user's possible future operations based on the currently received operation instructions, determine whether the predicted operation will cause a screen switch based on the predicted user operation, and then render the switched screen in advance.
  • the operation instruction received by the server represents that the target person walks from the first position to the second position, and it can be predicted that the target person will walk from the second position to the third position.
  • before the operation instruction from the client is actually received, the server can render the screen corresponding to the third position in advance.
  • if the received operation instruction then characterizes the target person walking from the second position to the third position, it means that the server's earlier prediction was accurate.
  • in that case, the server can directly send the rendered image corresponding to the third position to the client, saving rendering time.
  • if the received operation instruction instead indicates a fourth position, the server can render the screen corresponding to the fourth position according to the received operation instruction,
  • and the previously rendered screen corresponding to the third position can be discarded.
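The speculative-rendering logic described in the preceding bullets, i.e. reuse the pre-rendered screen on a correct prediction and discard it otherwise, can be sketched as follows (the class and function names are illustrative, not from the patent):

```python
def render(position: str) -> str:
    # Stand-in for the real rendering pipeline: returns a frame label.
    return f"frame@{position}"

class SpeculativeRenderer:
    """Pre-renders the screen for a predicted operation; when the real
    instruction arrives, reuses the frame on a hit and discards it on a miss."""
    def __init__(self):
        self.predicted_position = None
        self.prerendered = None

    def predict_and_prerender(self, predicted_position: str):
        self.predicted_position = predicted_position
        self.prerendered = render(predicted_position)

    def on_instruction(self, actual_position: str) -> str:
        if actual_position == self.predicted_position:
            frame = self.prerendered         # hit: rendering time already saved
        else:
            frame = render(actual_position)  # miss: render afresh...
        self.predicted_position = None       # ...and drop the stale speculation
        self.prerendered = None
        return frame
```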
  • the server may first render the screen of the corresponding application program according to the operation instruction from the user, and then predict the user operation according to the operation instruction, and perform the screen rendering according to the prediction result.
  • if the server's processing capability is sufficient, or doing so does not affect fluency on the user side, the server may process the above actions in parallel: for example, first predicting, and then simultaneously rendering for the received operation instruction and the predicted operation instruction; or first rendering according to the received operation instruction while performing the prediction at the same time, and then rendering the prediction result; or processing in another possible order.
  • if the communication network between the server and the terminal device is unstable, the operation instructions issued by the client may not be received by the server in time, or may even be lost so that the server cannot receive them at all. The server is then unable to perform screen rendering according to the operation instructions from the client, which in turn causes the picture displayed on the screen of the terminal device to freeze or become incoherent. Based on its predicted operations, the server renders the screens that may be switched to in advance.
  • the server can send the rendered images to the client, so that even if the above-mentioned communication network is unstable, the client can still continuously receive the compressed video data and display it on the screen after decompression, maintaining the continuity of the picture.
  • the server does not have to perform prediction operations all the time; instead, it starts a timing mechanism. If the server does not receive instructions, requests, feedback information, handshake data, or the like from the client within a certain period of time (for example, 100ms or 150ms), the link with the terminal device is considered unstable, and the server can initiate a prediction operation for the client and render the possibly switched screens based on the prediction result.
  • if the server has still not received instructions, requests, feedback information, handshake data, or the like from the client after a longer period of time (for example, 1s), the client is considered offline, and the server no longer needs to respond to the client or provide services such as prediction operations for the client's APP.
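A minimal sketch of this two-threshold timing mechanism, using the example values from the text (100 ms of silence triggers prediction, 1 s of silence means the client is offline); the timestamp-polling style is an assumption:

```python
# Thresholds taken from the example values in the text.
PREDICT_AFTER = 0.100   # no client data for 100 ms -> link considered unstable
OFFLINE_AFTER = 1.000   # no client data for 1 s    -> client considered offline

def link_state(now: float, last_client_data: float) -> str:
    """Classify the uplink based on how long the client has been silent."""
    silence = now - last_client_data
    if silence >= OFFLINE_AFTER:
        return "offline"   # stop rendering for this client entirely
    if silence >= PREDICT_AFTER:
        return "predict"   # render from predicted operations
    return "normal"        # render from received operation instructions
```

In a real server the two thresholds would be implemented as timers reset by every message from the client; the stateless classification above is just the decision rule.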
  • Fig. 6 shows an exemplary schematic diagram of the prediction process of the server.
  • the actual operations of the user on the client include user touch operations 1-4.
  • these 4 operations generate 4 operation instructions,
  • and this sequence of operation instructions is sent by the client to the server one by one.
  • after receiving operation instruction 1 generated by user touch operation 1, the server can predict the user's possible future operations based on operation instruction 1 to obtain user touch operation prediction 1; it can then predict based on operation instruction 1 and user touch operation prediction 1 to obtain user touch operation prediction 2, and further obtain user touch operation prediction 3.
  • user touch operation prediction 1 may still be very similar to, or even the same as, the actual user touch operation 2; later, however, if the server cannot obtain the subsequent operation instructions 2-4 in time, the predicted user touch operation predictions 2 and 3 may differ from the actual user touch operations 3 and 4. But even with such a prediction deviation, the user's experience is not affected. On the one hand, such prediction ensures the continuity of the picture, and the server will not stop rendering because it is still waiting for the client's operation instructions; on the other hand, if the communication network quickly returns to stability, the server can resume accepting operation instructions from the client within a short time and adjust the prediction results and rendered pictures according to the actual operation instructions, so this short-term deviation will not affect the user.
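The chained prediction of FIG. 6, where each new prediction is derived from the received instruction plus the predictions made so far, can be sketched as follows; the one-step linear predictor is only a stand-in for whatever model the server actually uses:

```python
def predict_next(history: list) -> tuple:
    """Hypothetical one-step predictor: extrapolates the last movement.
    Any model could be substituted here; the chaining below is the point."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def predict_chain(history: list, steps: int) -> list:
    """Feed each prediction back as input to obtain predictions 1..steps,
    as in FIG. 6: prediction 2 is derived from the received instruction
    plus prediction 1, and so on."""
    extended = list(history)
    predictions = []
    for _ in range(steps):
        nxt = predict_next(extended)
        predictions.append(nxt)
        extended.append(nxt)  # predictions become inputs for later predictions
    return predictions
```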
  • Step 506 If the server does not receive the operation instruction from the user within the preset time period after receiving the first operation instruction, it sends the rendered second picture to the client.
  • the server can perform rendering according to the operation instruction to obtain the rendering result 1.
  • the server may also predict the user's operation according to the operation instruction and perform rendering according to the prediction result to obtain rendering result 2. If the communication network is normal, the server sends rendering result 1 to the client; if the communication network is unstable and an uplink freeze occurs (for example, no user operation instruction is received within the preset time), the server sends rendering result 2 to the client.
  • the server can send the rendered screen to the client through the communication network between the terminal device and the server.
  • Step 507 The client displays the screen of the application program.
  • the client decompresses the received video compression data, and translates it to obtain image data that can be recognized by the terminal device, so as to display a corresponding picture on the screen according to the obtained image data.
  • in this application, when the upstream communication of the communication network between the server and the terminal device (the terminal device sending data to the server) is unstable, the server predicts the user's operation and can render in advance the screen switch caused by the user's operation, which saves processing delay and avoids screen freezing.
  • a cloud game APP is taken as an example to illustrate the rendering method provided in this application.
  • the user installs a certain cloud game APP on the terminal device, and the user opens the game APP to enter the game interface, and plays the game by clicking, dragging, zooming, and long pressing.
  • for example, the user drags the game character to move in the game; as the position of the game character changes, the game screen also changes, so that the user has a visual experience synchronized with the game character.
  • FIGs 7-11 exemplarily show a schematic diagram of cloud game screen switching.
  • the game character stands at point A.
  • the client displays the image rendered by the cloud game server according to the point A where the game character is located.
  • the user drags the game character to the upper right to reach point B.
  • the operation instruction generated during this operation is transmitted to the server through the network.
  • the server renders the screen corresponding to point B according to the game character's movement trajectory indicated by the operation instruction,
  • and the screen is transmitted to the client through the network and displayed to the user.
  • the user drags the game character from point B to the right to reach point C.
  • the operation instructions generated during this operation did not reach the server, and the server did not receive information from the client within the set time; that is, network fluctuation is considered to have made it impossible to receive the instructions in time.
  • the server predicts that the user will drag the game character from point B to point C according to the previous operation of dragging the game character from point A to point B. Therefore, the server renders the screen corresponding to point C according to the predicted operation. The screen is transmitted to the client through the network and displayed to the user.
  • the server can still render the next game screen based on its prediction of the user's operation and send the screen via the network to the client, which displays it to the user. In this way, the user sees continuously switching game screens without any freeze.
  • the user drags the game character from point B to the lower right to reach point D.
  • the operation instruction generated by this operation did not reach the server, and the server did not receive an operation instruction from the client within the set time; that is, network fluctuation is considered to have made it impossible to receive the instruction in time.
  • the server predicts that the user will drag the game character from point B to point C according to the previous operation of dragging the game character from point A to point B. Therefore, the server renders the screen corresponding to point C according to the predicted operation. The screen is transmitted to the client through the network and displayed to the user.
  • the user drags the game character down from point C to point E.
  • the operation instruction generated during this operation is transmitted to the server through the network, and the server can receive the operation instruction again, indicating that the network condition is good.
  • at this time, the server can again render a picture corresponding to point E according to the movement trajectory of the game character indicated by the operation instruction, and the picture is transmitted to the client through the network and displayed to the user. In this way, the server switches back to rendering the screen according to the operation instructions from the client without affecting the continuity of the game screen.
  • the user holds the terminal device, and the game interface is displayed on the screen of the terminal device.
  • the user can perform operations such as clicking, dragging, zooming, and long-pressing on the touch screen of the terminal device; after the client installed on the terminal device obtains these operations, it generates the corresponding operation instructions.
  • the client transmits the operating instructions to the server via the network.
  • the server receives the operation instructions and predicts the user's behavior according to them, that is, it predicts the user's next operations.
  • the server can periodically predict the user's operation based on the received operation instructions; or, whenever it receives an operation instruction from the client, it can predict the user's operation based on the existing operation instructions; or it can predict the user's operation upon detecting network fluctuation; this application does not specifically limit this.
  • the server can use multiple methods to predict the user's operation. For example, the server performs fitting based on the operation instructions the user has already issued, such as the user's historical drag operations, and predicts the location of the user's next operation; this application does not specifically limit this.
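As one example of the fitting mentioned above, a server might least-squares-fit the user's historical drag positions and extend the fitted trend one step to predict where the next operation will land; this particular model is an illustrative assumption, not the method claimed by the patent.

```python
def fit_drag_direction(points: list) -> tuple:
    """Least-squares fit of the per-step displacement of the user's
    historical drag positions (a simple stand-in for the fitting
    mentioned above)."""
    n = len(points)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    var_t = sum((t - mean_t) ** 2 for t in ts)
    slope_x = sum((t - mean_t) * (p[0] - mean_x) for t, p in zip(ts, points)) / var_t
    slope_y = sum((t - mean_t) * (p[1] - mean_y) for t, p in zip(ts, points)) / var_t
    return slope_x, slope_y

def predict_next_position(points: list) -> tuple:
    """Predict the next drag sample by extending the fitted trend one step."""
    dx, dy = fit_drag_direction(points)
    x, y = points[-1]
    return (x + dx, y + dy)
```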
  • when the network condition is good, the server does not need to render the corresponding screen based on the predicted operation after obtaining it, but renders the corresponding screen according to the operation instructions from the client; only when the network fluctuates does the server render the corresponding screen according to the predicted operation.
  • the server can determine network fluctuation based on the reception of handshake data and control signaling with the client. For example, for the acknowledgement (ACK) fed back by the client, the server sets a timer with a duration of, for example, 100ms. The server resets the timer every time it receives an ACK from the client; if the server has not received the next ACK when the timer expires, the server considers that data loss has occurred, indicating that the network is fluctuating and the channel condition is poor.
  • after determining network fluctuation and starting to render the screen using predicted operations, the server needs to set another timer.
  • the duration of this timer is, for example, 1s.
  • the purpose of this timer is that, if the user closes the cloud game APP, shuts down the terminal device, or carries the terminal device into an area without mobile service, the server does not need to continue rendering the game screen for the client; once the timer expires, the server is triggered to no longer render the screen for the client, whether based on operation instructions or on predicted operations.
  • when the network fluctuates, the server renders the screen based on the predicted operation. Since the predicted operation is obtained from the existing operation instructions combined with prediction on the game screen, it cannot fully match the user's actual operation: as shown in Figure 8, the user's actual operation is dragging the game character from point B to point D, but the operation predicted by the server is dragging the game character from point B to point C, that is, the screen rendered by the server may not be the screen corresponding to the user's actual operation. However, as mentioned above, the server executes the rendering method provided in this application only within a period of time after determining the network fluctuation: once the second timer above expires, the server considers the client offline and no longer needs to provide game processing or rendered images for it. Therefore, as long as the second timer has not expired, the server can quickly switch back to rendering the screen based on the operation instructions from the client, and the rendering deviation within this short period will not harm the user's viewing experience.
  • FIG. 12 is a schematic structural diagram of an embodiment of an application server of this application.
  • the server of this embodiment includes: a receiving module 1201, a rendering module 1202, a prediction module 1203, and a sending module 1204.
  • the receiving module 1201 is configured to receive a first operation instruction from the user;
  • the rendering module 1202 is configured to render the first picture of the application corresponding to the first operation instruction according to the first operation instruction;
  • the prediction module 1203 is configured to predict a second operation instruction according to the first operation instruction;
  • the rendering module 1202 is further configured to render a second screen of the application program corresponding to the second operation instruction according to the second operation instruction;
  • the sending module 1204 is configured to send the rendered second picture to the user if the operation instruction from the user is not received within a preset time period after receiving the first operation instruction.
  • the prediction module 1203 is specifically configured to use an artificial intelligence method to predict a second operation instruction according to the first operation instruction.
  • the rendering module 1202 is specifically configured to determine the first picture and render the first picture.
  • the rendering module 1202 is specifically configured to determine the second picture and render the second picture.
  • the preset duration is 100ms or 150ms.
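A minimal sketch of how the four modules of FIG. 12 could cooperate to implement steps 501-507; the method names and the placeholder predictor are assumptions for illustration, not part of the claimed apparatus:

```python
class ApplicationServer:
    """Toy model of FIG. 12: receiving (1201), rendering (1202),
    prediction (1203), and sending (1204) are modeled as methods;
    real implementations would be separate components."""
    def __init__(self, preset_duration: float = 0.100, predictor=None):
        self.preset_duration = preset_duration
        # Placeholder prediction model (module 1203): any model can be plugged in.
        self.predictor = predictor or (lambda instr: instr + "'")
        self.last_instruction_time = None
        self.second_screen = None

    def receive(self, instruction: str, now: float) -> str:  # module 1201
        self.last_instruction_time = now
        first_screen = self.render(instruction)              # module 1202: step 503
        predicted = self.predictor(instruction)              # module 1203: step 504
        self.second_screen = self.render(predicted)          # module 1202: step 505
        return first_screen

    def render(self, instruction: str) -> str:
        return f"screen[{instruction}]"

    def tick(self, now: float):                              # module 1204: step 506
        """If no instruction arrived within the preset duration, send the
        pre-rendered second screen; otherwise there is nothing to send yet."""
        if now - self.last_instruction_time > self.preset_duration:
            return self.second_screen
        return None
```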
  • the device in this embodiment can be used to implement the technical solution of the method embodiment shown in FIG. 5, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the steps of the foregoing method embodiments may be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the processor can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware encoding processor, or executed and completed by a combination of hardware and software modules in the encoding processor.
  • the software module can be located in a storage medium mature in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the memory mentioned in the above embodiments may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or flash memory.
  • the volatile memory may be random access memory (RAM), which is used as an external cache.
  • by way of example but not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchronous connection dynamic random access memory (synchlink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM).
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections between devices or units through some interfaces, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause a computer device (a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A rendering method and apparatus. The rendering method includes: receiving a first operation instruction from a user (501); rendering, according to the first operation instruction, a first screen of the application program corresponding to the first operation instruction (503); predicting a second operation instruction according to the first operation instruction (504); rendering, according to the second operation instruction, a second screen of the application program corresponding to the second operation instruction (505); while the APP is running, if no operation instruction from the terminal-device user is received within a preset time period after the first operation instruction is received, predicting the user's operation according to the operation instructions already received, where an operation instruction is an instruction generated by a user's operation and triggers a screen switch of the APP; rendering the screen of the APP according to the predicted user operation; and sending the rendered screen of the APP to the terminal-device user (506). The method saves processing delay and avoids screen freezing.

Description

Rendering method and apparatus
This application claims priority to Chinese Patent Application No. 202010108932.3, filed with the Chinese Patent Office on February 21, 2020 and entitled "Rendering Method and Apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to electronic technologies, and in particular, to a rendering method and apparatus.
Background
A cloud game is a gaming mode based on cloud computing: all games run on the server side, the rendered game screen is video-compressed and transmitted to the client through the network, the user watches the game screen and operates the game through the client, and the operation instructions thus generated are transmitted to the server through the network and responded to by the server.
It can be seen that the processing delay of a cloud game is related to the communication characteristics of the network; once the network fluctuates, the processing delay is prolonged and the game freezes.
Summary
Embodiments of this application provide a rendering method and apparatus, to save processing delay and avoid screen freezing.
According to a first aspect, this application provides a rendering method, including: receiving a first operation instruction from a user; rendering, according to the first operation instruction, a first screen of an application program corresponding to the first operation instruction; predicting a second operation instruction according to the first operation instruction; rendering, according to the second operation instruction, a second screen of the application program corresponding to the second operation instruction; and if no operation instruction from the user is received within a preset time period after the first operation instruction is received, sending the rendered second screen to the user.
In this application, when the uplink communication of the communication network between the server and the terminal device is unstable (the terminal device sending data to the server), the server predicts the user's operation and can render in advance the screen switch caused by the user's operation, which saves processing delay and avoids screen freezing.
In a possible implementation, the predicting a second operation instruction according to the first operation instruction includes: predicting the second operation instruction according to the first operation instruction by using an artificial intelligence method.
In this application, predicting the user's operation instruction by an artificial intelligence method can improve the accuracy of the prediction result.
In a possible implementation, the rendering a first screen of an application program corresponding to the first operation instruction includes: determining the first screen and rendering the first screen.
In a possible implementation, the rendering a second screen of the application program corresponding to the second operation instruction includes: determining the second screen and rendering the second screen.
In a possible implementation, the preset time period is 100ms or 150ms.
In this application, if no operation instruction is received from the client within a relatively short time, the screen is rendered based on the prediction result, which can avoid screen freezing.
According to a second aspect, this application provides an application server, including: a receiving module configured to receive a first operation instruction from a user; a rendering module configured to render, according to the first operation instruction, a first screen of an application program corresponding to the first operation instruction; and a prediction module configured to predict a second operation instruction according to the first operation instruction; where the rendering module is further configured to render, according to the second operation instruction, a second screen of the application program corresponding to the second operation instruction; and a sending module configured to send the rendered second screen to the user if no operation instruction from the user is received within a preset time period after the first operation instruction is received.
In a possible implementation, the prediction module is specifically configured to predict the second operation instruction according to the first operation instruction by using an artificial intelligence method.
In a possible implementation, the rendering module is specifically configured to determine the first screen and render the first screen.
In a possible implementation, the rendering module is specifically configured to determine the second screen and render the second screen.
In a possible implementation, the preset time period is 100ms or 150ms.
According to a third aspect, this application provides a server, including: one or more processors; and a memory configured to store one or more programs; where when the one or more programs are executed by the one or more processors, the one or more processors are enabled to implement the method according to any one of the first aspect.
According to a fourth aspect, this application provides a computer-readable storage medium, including a computer program, where when the computer program is executed on a computer, the computer is caused to execute the method according to any one of the first to second aspects.
According to a fifth aspect, this application provides a computer program, where when the computer program is executed by a computer, it is used to execute the method according to any one of the first to second aspects.
Brief Description of Drawings
FIG. 1 is a schematic diagram of an exemplary structure of a communication system;
FIG. 2 is a schematic diagram of an exemplary structure of a server 200;
FIG. 3 is a schematic diagram of an exemplary structure of a terminal device 300;
FIG. 4 is a schematic diagram of an exemplary structure of the software layer of the terminal device 300;
FIG. 5 is a flowchart of an embodiment of a rendering method of this application;
FIG. 6 is an exemplary schematic diagram of the prediction process of the server;
FIGs. 7-11 exemplarily show schematic diagrams of cloud game screen switching;
FIG. 12 is a schematic structural diagram of an embodiment of an application server of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the following clearly and completely describes the technical solutions in this application with reference to the accompanying drawings in this application. Apparently, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative efforts shall fall within the protection scope of this application.
The terms "first", "second", and the like in the specification, claims, and accompanying drawings of this application are merely used for the purpose of distinguishing descriptions, and cannot be understood as indicating or implying relative importance, nor as indicating or implying an order. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion, for example, inclusion of a series of steps or units. A method, system, product, or device is not necessarily limited to those steps or units that are clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
It should be understood that in this application, "at least one (item)" means one or more, and "a plurality of" means two or more. "And/or" is used to describe an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate the following three cases: only A exists, only B exists, and both A and B exist, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or a similar expression thereof means any combination of these items, including a single item or any combination of a plurality of items. For example, at least one of a, b, or c may indicate: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be singular or plural.
FIG. 1 is a schematic diagram of an exemplary structure of a communication system. As shown in FIG. 1, the communication system includes a server and a terminal device. Optionally, the communication system may further include a plurality of servers, and the coverage of each server may include another quantity of terminal devices; this is not limited in this application. Optionally, the communication system may further include other network entities such as a network controller and a switching device; this application is not limited thereto. The black arrow in FIG. 1 indicates that there is a communication connection between the server and the terminal device, that is, data can be transmitted between the server and the terminal device through a communication network.
It should be noted that the above communication network may be a local area network, a wide area network switched through a relay device, or a network including a local area network and a wide area network. When the communication network is a local area network, for example, the communication network may be a short-range communication network such as a Wi-Fi hotspot network, a Wi-Fi P2P network, a Bluetooth network, a ZigBee network, or a near field communication (NFC) network. When the communication network is a wide area network, for example, the communication network may be a 3rd-generation wireless telephone technology (3G) network, a 4th generation mobile communication technology (4G) network, a 5th-generation mobile communication technology (5G) network, a future evolved public land mobile network (PLMN), the Internet, or the like; this is not limited in the embodiments of this application.
It should be understood that FIG. 1 schematically shows one communication system merely for ease of understanding, but this shall not constitute any limitation on this application. The communication system may further include a larger quantity of servers and a larger quantity of terminal devices. The servers communicating with different terminal devices may be the same server or different servers, and the quantities of servers communicating with different terminal devices may be the same or different; this is not limited in this application.
It should also be understood that the server in the communication system may be any device having a transceiver function, or a chip that can be disposed in such a device. FIG. 2 is a schematic diagram of an exemplary structure of a server 200; for the structure of the server 200, refer to the structure shown in FIG. 2.
The server includes at least one processor 201, at least one memory 202, and at least one network interface 203. The processor 201, the memory 202, and the network interface 203 are connected, for example, through a bus; in this application, the connection may include various types of interfaces, transmission lines, buses, or the like, which is not limited in this embodiment. The network interface 203 is configured to connect the server to other communication devices through a communication link, for example, an Ethernet interface.
The processor 201 is mainly configured to process communication data, control the entire server, execute software programs, and process data of the software programs, for example, to support the server in performing the actions described in the embodiments. A person skilled in the art can understand that the server may include a plurality of processors to enhance its processing capability, and the components of the server may be connected through various buses. The processor 201 may also be expressed as a processing circuit or a processor chip.
The memory 202 is mainly configured to store software programs and data. The memory 202 may exist independently and be connected to the processor 201. Optionally, the memory 202 may be integrated with the processor 201, for example, within one chip. The memory 202 can store program code for executing the technical solutions of this application, and the processor 201 controls the execution; the various types of executed computer program code may also be regarded as drivers of the processor 201.
FIG. 2 shows only one memory and one processor. In an actual server, there may be a plurality of processors and a plurality of memories. The memory may also be referred to as a storage medium, a storage device, or the like. The memory may be a storage element on the same chip as the processor, that is, an on-chip storage element, or an independent storage element; this is not limited in this application.
It should be further understood that the terminal device in the communication system may also be referred to as user equipment (user equipment, UE), and may be deployed on land (indoor or outdoor, handheld or vehicle-mounted), on water (for example, on a ship), or in the air (for example, on an aircraft, a balloon, or a satellite). The terminal device may be a mobile phone (mobile phone), a tablet computer (pad), a wearable device having a wireless communication function (such as a smartwatch), a location tracker having a positioning function, a computer having a wireless transceiver function, a virtual reality (virtual reality, VR) device, an augmented reality (augmented reality, AR) device, a wireless device in a smart home (smart home), or the like. This is not limited in this application. In this application, the foregoing terminal devices and chips that can be disposed in the foregoing terminal devices are collectively referred to as terminal devices.
FIG. 3 is a schematic diagram of an example structure of a terminal device 300. As shown in FIG. 3, the terminal device 300 includes components such as an application processor 301, a microcontroller unit (microcontroller unit, MCU) 302, a memory 303, a modem (modem) 304, a radio frequency (radio frequency, RF) module 305, a Wireless Fidelity (Wireless-Fidelity, Wi-Fi) module 306, a Bluetooth module 307, a sensor 308, an input/output (input/output, I/O) device 309, and a positioning module 310. These components may communicate over one or more communication buses or signal lines. The foregoing communication bus or signal line may be a CAN bus provided in this application. A person skilled in the art may understand that the terminal device 300 may include more or fewer components than shown in the figure, combine some components, or have a different component arrangement.
The following describes the components of the terminal device 300 in detail with reference to FIG. 3:
The application processor 301 is the control center of the terminal device 300 and connects the components of the terminal device 300 through various interfaces and buses. In some embodiments, the processor 301 may include one or more processing units.
The memory 303 stores computer programs, such as the operating system 311 and the application program 312 shown in FIG. 3. The application processor 301 is configured to execute the computer programs in the memory 303 to implement the functions defined by the computer programs; for example, the application processor 301 executes the operating system 311 to implement various functions of the operating system on the terminal device 300. The memory 303 further stores data other than the computer programs, such as data generated during the running of the operating system 311 and the application program 312. The memory 303 is a non-volatile storage medium and generally includes internal memory and external memory. The internal memory includes but is not limited to random access memory (random access memory, RAM), read-only memory (read-only memory, ROM), cache (cache), and the like. The external memory includes but is not limited to flash memory (flash memory), hard disks, optical discs, universal serial bus (universal serial bus, USB) disks, and the like. A computer program is usually stored in the external memory, and the processor loads the program from the external memory into the internal memory before executing it.
The memory 303 may be independent and connected to the application processor 301 through a bus; alternatively, the memory 303 and the application processor 301 may be integrated into one chip subsystem.
The MCU 302 is a coprocessor configured to acquire and process data from the sensor 308. The processing capability and power consumption of the MCU 302 are lower than those of the application processor 301, but the MCU 302 is "always on (always on)" and can continuously collect and process sensor data while the application processor 301 is in sleep mode, ensuring normal operation of the sensors at extremely low power consumption. In an embodiment, the MCU 302 may be a sensor hub chip. The sensor 308 may include a light sensor and a motion sensor. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display 3091 according to ambient light, and the proximity sensor can power off the display screen when the terminal device 300 is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when the device is stationary. The sensor 308 may further include other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor; details are not described herein. The MCU 302 and the sensor 308 may be integrated on the same chip, or may be separate components connected through a bus.
The modem 304 and the radio frequency module 305 constitute the communication subsystem of the terminal device 300 and implement the main functions of wireless communication standard protocols. The modem 304 is used for encoding/decoding, signal modulation/demodulation, equalization, and the like. The radio frequency module 305 is used for receiving and sending wireless signals, and includes but is not limited to an antenna, at least one amplifier, a coupler, a duplexer, and the like. The radio frequency module 305 cooperates with the modem 304 to implement the wireless communication function. The modem 304 may be a separate chip, or may be combined with other chips or circuits to form a system-on-chip or an integrated circuit. These chips or integrated circuits are applicable to all terminal devices implementing wireless communication functions, including mobile phones, computers, notebooks, tablets, routers, wearable devices, automobiles, home appliances, and the like.
The terminal device 300 may also perform wireless communication by using the Wi-Fi module 306, the Bluetooth module 307, and the like. The Wi-Fi module 306 provides the terminal device 300 with network access that complies with Wi-Fi related standard protocols; the terminal device 300 may access a Wi-Fi access point through the Wi-Fi module 306 and then access the Internet. In some other embodiments, the Wi-Fi module 306 may also serve as a Wi-Fi wireless access point and provide Wi-Fi network access for other terminal devices. The Bluetooth module 307 implements short-range communication between the terminal device 300 and other terminal devices (for example, mobile phones and smartwatches). The Wi-Fi module 306 in this application may be an integrated circuit, a Wi-Fi chip, or the like, and the Bluetooth module 307 may be an integrated circuit, a Bluetooth chip, or the like.
The positioning module 310 is configured to determine the geographic location of the terminal device 300. It may be understood that the positioning module 310 may specifically be a receiver of a positioning system such as the global positioning system (global position system, GPS), the BeiDou navigation satellite system, or Russia's GLONASS.
The Wi-Fi module 306, the Bluetooth module 307, and the positioning module 310 may each be a separate chip or integrated circuit, or may be integrated together. For example, in an embodiment, the Wi-Fi module 306, the Bluetooth module 307, and the positioning module 310 may be integrated on the same chip. In another embodiment, the Wi-Fi module 306, the Bluetooth module 307, the positioning module 310, and the MCU 302 may also be integrated into the same chip.
The input/output device 309 includes but is not limited to a display 3091, a touchscreen 3092, an audio circuit 3093, and the like.
The touchscreen 3092 can capture touch events performed by the user of the terminal device 300 on or near it (for example, operations performed by the user on or near the touchscreen 3092 with a finger, a stylus, or any other suitable object) and send the captured touch events to another component (for example, the application processor 301). Operations performed by the user near the touchscreen 3092 may be referred to as floating touch; through floating touch, the user can select, move, or drag targets (for example, icons) without directly touching the touchscreen 3092. In addition, the touchscreen 3092 may be implemented by using a plurality of types such as resistive, capacitive, infrared, and surface acoustic wave.
The display (also referred to as a display screen) 3091 is configured to display information entered by the user or information presented to the user. The display may be configured in a form such as a liquid crystal display or an organic light-emitting diode display. The touchscreen 3092 may cover the display 3091. After detecting a touch event, the touchscreen 3092 transfers it to the application processor 301 to determine the type of the touch event, and the application processor 301 may then provide corresponding visual output on the display 3091 according to the type of the touch event. Although in FIG. 3 the touchscreen 3092 and the display 3091 are two independent components implementing the input and output functions of the terminal device 300, in some embodiments the touchscreen 3092 and the display 3091 may be integrated to implement the input and output functions of the terminal device 300. In addition, the touchscreen 3092 and the display 3091 may be configured on the front of the terminal device 300 in a full-panel form to achieve a bezel-less structure.
The audio circuit 3093, a speaker 3094, and a microphone 3095 can provide an audio interface between the user and the terminal device 300. The audio circuit 3093 can transmit an electrical signal converted from received audio data to the speaker 3094, and the speaker 3094 converts it into a sound signal for output. In addition, the microphone 3095 converts collected sound signals into electrical signals, which are received by the audio circuit 3093 and converted into audio data; the audio data is then sent to, for example, another terminal device through the modem 304 and the radio frequency module 305, or output to the memory 303 for further processing.
In addition, the terminal device 300 may also have a fingerprint recognition function. For example, a fingerprint collection component may be configured on the back of the terminal device 300 (for example, below the rear camera), or on the front of the terminal device 300 (for example, below the touchscreen 3092). For another example, a fingerprint collection component may be configured in the touchscreen 3092 to implement the fingerprint recognition function; that is, the fingerprint collection component may be integrated with the touchscreen 3092 to implement the fingerprint recognition function of the terminal device 300. In this case, the fingerprint collection component configured in the touchscreen 3092 may be a part of the touchscreen 3092 or may be configured in the touchscreen 3092 in another manner. The main component of the fingerprint collection component in this application is a fingerprint sensor, which may use any type of sensing technology, including but not limited to optical, capacitive, piezoelectric, or ultrasonic sensing technology.
Logically, the terminal device 300 may be divided into a hardware layer, the operating system 311, and an application layer. The hardware layer includes hardware resources such as the foregoing application processor 301, MCU 302, memory 303, modem 304, Wi-Fi module 306, sensor 308, and positioning module 310. The operating system 311 running on the terminal device 300 may be
Figure PCTCN2021074693-appb-000001
Figure PCTCN2021074693-appb-000002
or another operating system. This is not limited in this application.
The operating system 311 and the application layer may be collectively referred to as the software layer of the terminal device 300. FIG. 4 is a schematic diagram of an example structure of the software layer of the terminal device 300. As shown in FIG. 4, using the
Figure PCTCN2021074693-appb-000003
operating system as an example, the operating system serves as software middleware between the hardware layer and the application layer and is a computer program that manages and controls hardware and software resources.
The application layer includes one or more application programs, which may be of any type, such as social applications, e-commerce applications, and browsers, for example, a desktop launcher, settings, calendar, camera, photos, phone, and messaging.
Figure PCTCN2021074693-appb-000004
The operating system includes a kernel layer, an Android runtime and system libraries, and an application framework layer. The kernel layer provides underlying system components and services, such as power management, memory management, thread management, and hardware drivers; the hardware drivers include a display driver, a camera driver, an audio driver, a touch driver, and the like. The kernel layer encapsulates the kernel drivers, provides interfaces to the application framework layer, and shields low-level implementation details.
The Android runtime and system libraries provide the library files and execution environment required by executable programs at runtime, including a virtual machine or virtual machine instance that can convert application bytecode into machine code. The system libraries are program libraries that support executable programs at runtime, including a 2D graphics engine, a 3D graphics engine, a media library, a surface manager, a status monitoring service, and the like.
The application framework layer provides various basic public components and services for the applications in the application layer, including a window manager, an activity manager, a package manager, a resource manager, a display policy service, and the like.
The functions of the components of the operating system 311 described above may all be implemented by the application processor 301 executing programs stored in the memory 303.
A person skilled in the art may understand that the terminal device 300 may include fewer or more components than those shown in FIG. 3; the terminal device shown in FIG. 3 includes only the components more relevant to the implementations disclosed in this application.
The rendering method provided in this application is applicable to the communication system shown in FIG. 1. The server may be a server of a provider of a cloud-computing-based application program (application, APP). It should be noted that the foregoing APP may use a client-server (client-server, C/S) structure: a client installed on the user's terminal device is responsible for interacting with the user and sending the server the operation instructions generated by operations that the user performs on the APP's operation interface; the server is responsible for managing APP data, responding to operation instructions from the client, and rendering the pictures displayed on the client.
For example, the APP in this application may be a cloud game. A cloud game is a gaming mode based on cloud computing (cloud computing). In the cloud game running mode, all games run on the server, and the server compresses the rendered game pictures into video and transmits them to the client over the network. On the client side, the terminal device does not need a high-end processor or graphics card; basic video decompression capability is sufficient. Cloud computing is an Internet-based computing mode in which shared software and hardware resources and information can be provided to terminal devices on demand; the network that provides the resources is called the "cloud". Cloud gaming removes the dependence on hardware: for the server side, only server performance needs to be improved, without developing new consoles; for the client side, higher picture quality can be obtained without a high-performance terminal device. A typical cloud game process is as follows: the user first operates the terminal device to connect to a dispatch server and selects a game; the dispatch server then sends the information of the selected game to a game server; the user's terminal device can then obtain the uniform resource locator (uniform resource locator, URL) of the game server and connect to the game server through the URL to start playing.
For another example, the APP in this application may be a map. The map runs and plans routes on the server; the rendered map pictures are compressed into video and transmitted to the client over the network. The user views the map pictures and walking routes on the terminal device on which the client is installed, and operates on the map pictures for convenient viewing.
For another example, the APP in this application may be document editing. The document is edited and managed on the server; the rendered document pictures are compressed into video and transmitted to the client over the network. The user views the document pages on the terminal device on which the client is installed, and operates on the document pages to move related page elements.
For another example, the APP in this application may further include cloud IoT, cloud identity, cloud storage, cloud security, and the like. This is not specifically limited.
FIG. 5 is a flowchart of an embodiment of a rendering method according to this application. As shown in FIG. 5, the method in this embodiment may be applied to the communication system shown in FIG. 1. The rendering method may include the following steps.
Step 501: The client receives a first operation instruction from a user.
The first operation instruction is an instruction generated by a user operation. As described above, a cloud-computing-based APP usually uses a C/S structure. To use such an APP, the user first installs the APP's client on the terminal device and then taps the APP icon to open the client. The client connects to the server through the communication function of the terminal device and starts running. The client can store a large quantity of APP resources. The user enters operation instructions through the client; the client translates them into data and sends the data to the server; the server processes the operation instructions and sends the processing results to the client; and the client converts the results into graphics and displays them on the screen of the terminal device. The client can be said to be an intermediary between the user and the server. It follows that, while the APP is running, whatever operation the user performs on the client, according to the principle of the foregoing cloud-computing APP, the client translates the user operation (that is, generates an operation instruction that the server can recognize). User operations may include tap, drag, slide, and long-press operations on the touchscreen of a smart device, click and drag operations with a computer mouse, input operations on a keyboard, and related operations on other input devices. This is not specifically limited in this application.
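As an illustration of the client's translating role described above, the following sketch shows how a raw touch event might be converted into a serialized operation instruction that a server can parse. All names and the JSON wire format are hypothetical; the embodiments do not specify an instruction encoding.

```python
import json

def translate_event(event: dict) -> bytes:
    """Translate a raw touch event into a serialized operation
    instruction for the server (hypothetical format)."""
    instruction = {
        "type": event["kind"],          # e.g. "tap", "drag", "long_press"
        "position": event["position"],  # screen coordinates (x, y)
        "timestamp_ms": event["timestamp_ms"],
    }
    return json.dumps(instruction).encode("utf-8")

# A drag gesture as it might be reported by the touchscreen driver:
raw = {"kind": "drag", "position": (120, 340), "timestamp_ms": 1000}
payload = translate_event(raw)      # bytes sent over the network
decoded = json.loads(payload)       # what the server would parse back
```

Note that the JSON round trip turns the coordinate tuple into a list; a real protocol would likely use a fixed binary layout instead.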
Step 502: The client sends the first operation instruction to the server.
After obtaining the corresponding first operation instruction based on the user operation, the client sends the first operation instruction to the server over the communication network between the terminal device and the server.
In this application, the user may send the first operation instruction to the server through clients installed on different terminal devices. For example, the user sends the first operation instruction through a client installed on a mobile phone while walking on the road, and switches to a computer to continue the game immediately after getting home, so that subsequent first operation instructions are sent to the server by the client installed on the computer. Although the first operation instructions come from different terminal devices, they all correspond to the same user. Correspondingly, the destination of the rendered pictures sent by the server may also correspond to different terminal devices: if the user sends the first operation instruction through the client installed on the mobile phone, the server sends the rendered pictures to the user's mobile phone; if the user sends the first operation instruction through the client installed on the computer, the server sends the rendered pictures to the user's computer. All of this corresponds to the same user and does not affect the smoothness of the game.
Step 503: The server renders, according to the first operation instruction, a first picture of the application program corresponding to the first operation instruction.
Step 504: The server predicts a second operation instruction according to the first operation instruction.
Step 505: The server renders, according to the second operation instruction, a second picture of the application program corresponding to the second operation instruction.
In this application, if a user operation triggers picture switching of the application program, for example, switching from picture a to picture b, both picture a and picture b are rendered on the server, compressed into video, sent to the client, decompressed by the client, and displayed on the screen of the terminal device. That is, any picture displayed on the screen of the terminal device running the client is rendered by the server. Therefore, the server needs to know what picture change the user operation causes.
Usually, after receiving an operation instruction from the client, the server performs corresponding processing according to the operation instruction, and when the operation instruction causes picture switching, triggers the picture switching according to the operation instruction and renders the switched picture. For example, an operation instruction indicates that a target character walks from a first position to a second position. During this movement, the scene of the target character changes, so the picture switches from the picture corresponding to the first position to the picture corresponding to the second position, and the server needs to render the picture corresponding to the second position after obtaining the operation instruction. For another example, an operation instruction indicates switching from a first document to a second document, or from a first page of a document to a second page. The document switching changes the page displayed on the screen and therefore causes picture switching, and the server needs to render the picture corresponding to the second document or the second page after obtaining the operation instruction.
The server may also predict, based on the currently received operation instruction, the user's possible future operations, determine based on the predicted user operation whether the predicted operation causes picture switching, and render the switched picture in advance. For example, if the operation instruction received by the server indicates that the target character walks from the first position to the second position, the server can predict that the target character will walk from the second position to a third position. Based on this prediction result, the server can render the picture corresponding to the third position in advance, before actually receiving the operation instruction from the client. If the operation instruction received thereafter indicates that the target character walks from the second position to the third position, the server's earlier prediction was accurate, and the server can directly send the already rendered picture corresponding to the third position to the client, saving rendering time. If the operation instruction received indicates that the target character walks from the second position to a fourth position, the server's earlier prediction deviated from reality; the server can then render the picture corresponding to the fourth position according to the received operation instruction, and the earlier rendering of the picture corresponding to the third position can simply be discarded.
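The predict-ahead behavior described above can be sketched as a small speculative-rendering loop: render the frame for the instruction just received (or reuse a cached frame if the prediction hit), then pre-render the frame for the predicted next instruction. This is a minimal illustration under stated assumptions, not the claimed implementation; `render` and `predict_next` are stand-ins for the real renderer and predictor, and instructions are modeled as plain integers.

```python
def render(instruction):
    # Stand-in for the real renderer: returns a frame label.
    return f"frame[{instruction}]"

def predict_next(instruction):
    # Stand-in predictor: assume the character keeps moving one step.
    return instruction + 1

class SpeculativeRenderer:
    def __init__(self):
        self.cache = {}  # predicted instruction -> pre-rendered frame

    def handle(self, instruction):
        if instruction in self.cache:
            # Prediction was correct: reuse the pre-rendered frame.
            frame = self.cache.pop(instruction)
        else:
            # Prediction missed (or none was made): render now and
            # discard any stale speculative frames.
            self.cache.clear()
            frame = render(instruction)
        # Speculatively pre-render the predicted next frame.
        nxt = predict_next(instruction)
        self.cache[nxt] = render(nxt)
        return frame

r = SpeculativeRenderer()
f1 = r.handle(1)   # rendered on demand; frame for 2 is pre-rendered
f2 = r.handle(2)   # prediction hit: the cached frame is reused
f3 = r.handle(7)   # prediction miss: rendered on demand, cache dropped
```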
The server may first render the picture of the corresponding application program according to the operation instruction from the user, and then predict the user's operation according to that operation instruction and render a picture according to the prediction result. When computing power is sufficient, or when the smoothness on the user side is not affected, the server may process these actions in parallel: for example, first perform the prediction and then simultaneously perform the corresponding rendering for the received operation instruction and the predicted operation instruction; or first render according to the received operation instruction while performing the prediction and then render according to the prediction result; or process them in another possible order.
If the communication network between the server and the terminal device is unstable, the operation instruction sent by the client may not be received by the server in time, or may even be lost and never received. The server then cannot render pictures according to the operation instruction from the client, and the picture displayed on the screen of the terminal device freezes or becomes discontinuous. Based on the server's prediction operation described above, the server renders the possibly switched pictures in advance, and once it finds that no operation instruction from the client has been received within a preset duration (for example, 100 ms or 150 ms), the server can send the rendered picture to the client. Even if the foregoing network instability occurs, the client can still continuously receive compressed video data, decompress it, and display it on the screen, maintaining the continuity of the pictures.
Optionally, to reduce the server's workload, the server does not need to perform the prediction operation all the time, but starts a timing mechanism instead. If the server receives no instruction, request, feedback information, handshake data, or the like from the client within a certain duration (for example, 100 ms or 150 ms), it considers the link to the terminal device unstable; the server can then start the prediction operation for that client and render the possibly switched pictures based on the prediction result.
Optionally, if the server still receives no instruction, request, feedback information, handshake data, or the like from the client after a certain duration (for example, 1 s), it considers the client offline; the server then no longer needs to provide running services, such as APP prediction operations, for that client.
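The two optional timeouts above can be folded into a single link monitor that classifies the client's state from the time since the last message was received. A minimal sketch follows; the class name, the injectable clock, and the exact thresholds (100 ms and 1 s, taken from the examples in the text) are illustrative assumptions.

```python
import time

PREDICT_AFTER_S = 0.10   # e.g. 100 ms without client traffic: start predicting
OFFLINE_AFTER_S = 1.0    # e.g. 1 s without client traffic: treat client as offline

class LinkMonitor:
    def __init__(self, now=time.monotonic):
        self.now = now                # injectable clock, eases testing
        self.last_rx = self.now()

    def on_client_message(self):
        # Any instruction, request, feedback, or handshake resets the timers.
        self.last_rx = self.now()

    def state(self):
        idle = self.now() - self.last_rx
        if idle >= OFFLINE_AFTER_S:
            return "offline"      # stop serving this client
        if idle >= PREDICT_AFTER_S:
            return "predicting"   # render from predicted operations
        return "normal"           # render from received operations

# Drive the monitor with a fake clock to show the three states:
t = [0.0]
m = LinkMonitor(now=lambda: t[0])
t[0] = 0.05
s1 = m.state()
t[0] = 0.30
s2 = m.state()
t[0] = 1.50
s3 = m.state()
```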
It should be noted that the server's prediction operation may be implemented by methods such as artificial intelligence, neural networks, and model training; this is not specifically limited. FIG. 6 is a schematic diagram of an example of the server's prediction process. As shown in FIG. 6, the user's actual operations on the client include user touch operations 1 to 4; in theory, these four operations generate four operation instructions, which are sent to the server by the client one by one in the order of the operations. On the server, after operation instruction 1 generated by user touch operation 1 is received, the user's possible future operations can be predicted according to operation instruction 1, yielding user touch operation prediction 1; then, based on operation instruction 1 and user touch operation prediction 1, user touch operation prediction 2 is obtained, and user touch operation prediction 3 is further obtained. It can be seen that user touch operation prediction 1 is very close to, or even the same as, the actual user touch operation 2; further along, however, if the server cannot obtain the subsequent operation instructions 2 to 4 in time, the predicted results, that is, user touch operation predictions 2 and 3, may deviate from the actual user touch operations 3 and 4. Even such a prediction deviation does not affect the user experience. First, the prediction ensures the continuity of the pictures: the server does not stop rendering because it keeps waiting for operation instructions from the client. Second, if the communication network quickly returns to stability, the server can soon continue receiving operation instructions from the client and adjust the prediction results and the rendered pictures according to the actual operation instructions; the user does not perceive such a short-term deviation. Third, if the communication network cannot recover, then based on the foregoing mechanism, the server that receives no operation instruction from the client for a long time will no longer provide services such as data processing and picture rendering for that client, so the user on the client side can also perceive the communication network problem and handle it in time.
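The iterative prediction chain of FIG. 6, where each prediction is fed back in as if it were the instruction actually received, can be sketched generically as follows. The predictor is a stand-in (a constant-step drift over 1-D positions) and the function names are hypothetical; the embodiments allow any AI or model-based predictor here.

```python
def predict_chain(last_instruction, predict, depth):
    """Extend predictions iteratively: prediction 2 is derived from
    prediction 1, prediction 3 from prediction 2, and so on."""
    chain, current = [], last_instruction
    for _ in range(depth):
        current = predict(current)
        chain.append(current)
    return chain

# Stand-in predictor over 1-D positions: keep moving by the same step.
chain = predict_chain(10, lambda p: p + 5, depth=3)
```

As the text notes, later links in such a chain drift further from the user's actual operations, which is why the server reverts to real instructions as soon as they arrive.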
Step 506: If the server receives no operation instruction from the user within a preset duration after receiving the first operation instruction, the server sends the rendered second picture to the client.
As described above, after receiving an operation instruction, the server can render according to the operation instruction to obtain rendering result 1. The server can also predict the user's operation according to the operation instruction and render according to the prediction result to obtain rendering result 2. If the communication network is normal, the server sends rendering result 1 to the client; if the communication network is unstable and uplink stalling occurs (for example, no user operation instruction is received within the preset duration), the server sends rendering result 2 to the client. The server can send the rendered picture to the client over the communication network between the terminal device and the server.
Step 507: The client displays the picture of the application program.
The client decompresses the received compressed video data and translates it into image data recognizable by the terminal device, and then displays the corresponding picture on the screen according to the obtained image data.
In this embodiment, when the uplink communication of the network between the server and the terminal device is unstable (the terminal device sends data to the server), the server predicts the user's operations and can render in advance the picture switching caused by the user's operations, saving processing delay and avoiding picture stalling.
For example, the rendering method provided in this application is described by using a cloud game APP. Assume that the user has installed a cloud game APP on the terminal device. The user opens the game APP, enters the game interface, and plays the game through operations such as tapping, dragging, zooming, and long-pressing. When the user drags the game character to move in the game, along with the position change of the game character, the game picture also changes, so that the user enjoys a visual experience synchronized with the game character.
FIG. 7 to FIG. 11 are schematic diagrams of examples of cloud game picture switching. As shown in FIG. 7, the game character stands at point A. At this time, the client displays the picture rendered by the cloud game server according to point A, where the game character is located.
As shown in FIG. 8, the user drags the game character toward the upper right to reach point B. The operation instruction generated by this operation is transmitted to the server over the network. The server renders the picture corresponding to point B according to the movement trajectory of the game character indicated by the operation instruction, and the picture is transmitted to the client over the network and displayed to the user.
As shown in FIG. 9, the user drags the game character from point B to the right to reach point C, but the operation instruction generated by this operation does not reach the server. The server receives no information from the client within the set time and therefore considers that network fluctuation prevents timely reception of instructions. At this time, based on the user's earlier operation of dragging the game character from point A to point B, the server predicts that the user will next drag the game character from point B to point C; the server therefore renders the picture corresponding to point C according to the predicted operation, and the picture is transmitted to the client over the network and displayed to the user. Even though the operation instruction generated by dragging the game character to the right to point C is never received by the server, the server can still render the subsequent game picture based on its prediction of the user's operations and transmit the picture to the client for display. The user can thus watch continuously switching game pictures without stalling.
As shown in FIG. 10, the user drags the game character from point B toward the lower right to reach point D, but the operation instruction generated by this operation does not reach the server. The server receives no operation instruction from the client within the set time and therefore considers that network fluctuation prevents timely reception of instructions. At this time, based on the user's earlier operation of dragging the game character from point A to point B, the server predicts that the user will next drag the game character from point B to point C; the server therefore renders the picture corresponding to point C according to the predicted operation, and the picture is transmitted to the client over the network and displayed to the user. The difference from FIG. 9, however, is that in FIG. 10 the predicted operation obtained by the server is inconsistent with the user's actual operation: the user's actual operation is to drag the game character from point B to point D, but the operation predicted by the server is to drag the game character from point B to point C, so the rendered picture corresponds to point C. The user can likewise watch continuously switching game pictures without stalling.
As shown in FIG. 11, the user drags the game character from point C downward to reach point E. The operation instruction generated by this operation is transmitted to the server over the network, and the server can again receive the operation instruction, indicating that the network condition has recovered. At this time, the server can again render the picture corresponding to point E according to the movement trajectory of the game character indicated by the operation instruction, and the picture is transmitted to the client over the network and displayed to the user. The server thus switches back to rendering pictures according to the operation instructions from the client, without affecting the continuity of the game pictures.
In the foregoing process, the user holds the terminal device, the game interface is displayed on the screen of the terminal device, and the user can perform operations such as tapping, dragging, zooming, and long-pressing on the touchscreen of the terminal device. After these operations are captured by the client installed on the terminal device, corresponding operation instructions are generated. The client transmits the operation instructions to the server over the network. The server receives the operation instructions and, according to them, predicts the user's behavior, that is, the user's next operations. It should be noted that the server may periodically predict the user's operations according to the received operation instructions, or may predict the user's operations based on the existing operation instructions whenever an operation instruction from the client is received, or may predict the user's operations only after detecting network fluctuation. This is not specifically limited in this application.
In this application, the server may predict the user's operations by using a plurality of methods. For example, based on the operation instructions that have already occurred, the server performs fitting on the user's historical drag operations and predicts the position at which the user's next operation will occur. This is not specifically limited in this application.
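One simple form of the fitting mentioned above is an ordinary least-squares line over the timestamped drag positions, extrapolated to the next timestamp. This is only a sketch of one possible predictor (the embodiments leave the method open); the function names and the flat `(t, x, y)` history format are assumptions.

```python
def fit_line(ts, xs):
    """Ordinary least-squares fit x = a*t + b over the drag history."""
    n = len(ts)
    mean_t, mean_x = sum(ts) / n, sum(xs) / n
    a = (sum((t - mean_t) * (x - mean_x) for t, x in zip(ts, xs))
         / sum((t - mean_t) ** 2 for t in ts))
    return a, mean_x - a * mean_t

def predict_position(history, t_next):
    """Extrapolate the next touch position from timestamped (t, x, y) points."""
    ts = [t for t, _, _ in history]
    ax, bx = fit_line(ts, [x for _, x, _ in history])
    ay, by = fit_line(ts, [y for _, _, y in history])
    return (ax * t_next + bx, ay * t_next + by)

# A steady rightward drag sampled at t = 0, 1, 2:
history = [(0, 100, 200), (1, 110, 200), (2, 120, 200)]
nxt = predict_position(history, 3)   # extrapolates to (130.0, 200.0)
```

A real predictor would also need to handle curved trajectories and variable drag speed, which is where the AI and model-training methods mentioned in the text come in.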
When the network condition is good, after obtaining a predicted operation, the server does not need to render the corresponding picture based on the predicted operation; instead, it renders the corresponding picture according to the operation instructions from the client. Only when the network fluctuates does the server render the corresponding picture according to the predicted operation. In this application, the server may determine network fluctuation based on the reception of handshake data and control signaling information exchanged with the client. For example, for the acknowledgements (Acknowledge, ACK) fed back by the client, the server sets a timer whose duration is, for example, 100 ms; each time the server receives an ACK fed back by the client, it resets the timer. If the timer expires before the next ACK is received, the server considers that data loss has occurred, indicating that the network is fluctuating and the channel condition is poor.
In addition, after determining network fluctuation and starting to render pictures using predicted operations, the server also needs to set another timer, whose duration is, for example, 1 s. Its purpose is the following: if the user closes the cloud game APP, powers off the terminal device, or carries the terminal device into an area without mobile service, the server no longer needs to continue rendering game pictures for that client. Once this timer expires, the server can be triggered to stop rendering pictures for that client, whether based on operation instructions or on predicted operations.
When the network fluctuates, the server renders pictures based on predicted operations. Because a predicted operation is obtained by prediction from the existing operation instructions combined with the game pictures, it cannot fully match the user's actual operations. As in FIG. 10, the user's actual operation is to drag the game character from point B to point D, but the operation predicted by the server is to drag the game character from point B to point C; that is, the picture rendered by the server may not be the picture corresponding to the user's actual operation. However, as described above, after determining network fluctuation, the server performs the rendering method provided in this application only for a period of time. Once the foregoing second timer expires, the server considers the client offline and no longer needs to provide game processing or picture rendering for that client. Therefore, as long as the second timer has not expired, the server can quickly switch back to rendering pictures based on the operation instructions from the client, and a short-term rendering deviation does not harm the user's viewing experience.
FIG. 12 is a schematic diagram of a structure of an embodiment of an application server according to this application. As shown in FIG. 12, the server in this embodiment includes a receiving module 1201, a rendering module 1202, a prediction module 1203, and a sending module 1204. The receiving module 1201 is configured to receive a first operation instruction from a user. The rendering module 1202 is configured to render, according to the first operation instruction, a first picture of the application program corresponding to the first operation instruction. The prediction module 1203 is configured to predict a second operation instruction according to the first operation instruction. The rendering module 1202 is further configured to render, according to the second operation instruction, a second picture of the application program corresponding to the second operation instruction. The sending module 1204 is configured to: if no operation instruction from the user is received within a preset duration after the first operation instruction is received, send the rendered second picture to the user.
In a possible implementation, the prediction module 1203 is specifically configured to predict the second operation instruction according to the first operation instruction by using an artificial intelligence method.
In a possible implementation, the rendering module 1202 is specifically configured to determine the first picture and render the first picture.
In a possible implementation, the rendering module 1202 is specifically configured to determine the second picture and render the second picture.
In a possible implementation, the preset duration is 100 ms or 150 ms.
The apparatus in this embodiment may be used to execute the technical solution of the method embodiment shown in FIG. 5. The implementation principles and technical effects are similar and are not described again here.
During implementation, the steps of the foregoing method embodiments may be completed by an integrated logic circuit of hardware in a processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of this application may be directly performed by a hardware encoding processor, or performed by a combination of hardware and software modules in an encoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the foregoing methods in combination with its hardware.
The memory mentioned in the foregoing embodiments may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (read-only memory, ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM), used as an external cache. By way of example rather than limitation, many forms of RAM are available, such as a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and a direct rambus random access memory (direct rambus RAM, DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include but are not limited to these and any other suitable types of memory.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation shall not be considered beyond the scope of this application.
It may be clearly understood by a person skilled in the art that, for convenient and brief description, for the specific working processes of the foregoing systems, apparatuses, and units, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described again here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the described apparatus embodiments are merely illustrative. For example, the division into units is merely a logical function division; in actual implementation there may be other division manners, for example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, functional units in the embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or some of the technical solutions may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceived by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (13)

  1. A rendering method, performed by a server, comprising:
    receiving a first operation instruction from a user;
    rendering, according to the first operation instruction, a first picture of an application program corresponding to the first operation instruction;
    predicting a second operation instruction according to the first operation instruction;
    rendering, according to the second operation instruction, a second picture of the application program corresponding to the second operation instruction; and
    if no operation instruction from the user is received within a preset duration after the first operation instruction is received, sending the rendered second picture to the user.
  2. The method according to claim 1, wherein the predicting a second operation instruction according to the first operation instruction comprises:
    predicting the second operation instruction according to the first operation instruction by using an artificial intelligence method.
  3. The method according to claim 1 or 2, wherein the rendering a first picture of an application program corresponding to the first operation instruction comprises:
    determining the first picture, and rendering the first picture.
  4. The method according to any one of claims 1 to 3, wherein the rendering a second picture of the application program corresponding to the second operation instruction comprises:
    determining the second picture, and rendering the second picture.
  5. The method according to any one of claims 1 to 4, wherein the preset duration is 100 ms or 150 ms.
  6. An application server, comprising:
    a receiving module, configured to receive a first operation instruction from a user;
    a rendering module, configured to render, according to the first operation instruction, a first picture of an application program corresponding to the first operation instruction;
    a prediction module, configured to predict a second operation instruction according to the first operation instruction;
    the rendering module being further configured to render, according to the second operation instruction, a second picture of the application program corresponding to the second operation instruction; and
    a sending module, configured to: if no operation instruction from the user is received within a preset duration after the first operation instruction is received, send the rendered second picture to the user.
  7. The server according to claim 6, wherein the prediction module is specifically configured to predict the second operation instruction according to the first operation instruction by using an artificial intelligence method.
  8. The server according to claim 6 or 7, wherein the rendering module is specifically configured to determine the first picture and render the first picture.
  9. The server according to any one of claims 6 to 8, wherein the rendering module is specifically configured to determine the second picture and render the second picture.
  10. The server according to any one of claims 6 to 9, wherein the preset duration is 100 ms or 150 ms.
  11. A server, comprising:
    one or more processors; and
    a memory, configured to store one or more programs,
    wherein when the one or more programs are executed by the one or more processors, the one or more processors are enabled to implement the method according to any one of claims 1 to 5.
  12. A computer-readable storage medium, comprising a computer program, wherein when the computer program is executed on a computer, the computer is enabled to perform the method according to any one of claims 1 to 5.
  13. A computer program, wherein when the computer program is executed by a computer, the computer program is used to perform the method according to any one of claims 1 to 5.
PCT/CN2021/074693 2020-02-21 2021-02-01 Rendering method and apparatus WO2021164533A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21756688.4A EP4088795A4 (en) 2020-02-21 2021-02-01 RENDERING METHOD AND APPARATUS
US17/904,661 US20230094880A1 (en) 2020-02-21 2021-02-01 Rendering Method and Apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010108932.3A CN113289330B (zh) 2020-02-21 2020-02-21 渲染方法和装置
CN202010108932.3 2020-02-21

Publications (1)

Publication Number Publication Date
WO2021164533A1 true WO2021164533A1 (zh) 2021-08-26

Family

ID=77317533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/074693 WO2021164533A1 (zh) 2020-02-21 2021-02-01 渲染方法和装置

Country Status (4)

Country Link
US (1) US20230094880A1 (zh)
EP (1) EP4088795A4 (zh)
CN (1) CN113289330B (zh)
WO (1) WO2021164533A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114880107A * 2021-12-09 2022-08-09 许磊 Efficient and low-cost cloud gaming system
CN114513512B * 2022-02-08 2023-01-24 腾讯科技(深圳)有限公司 Interface rendering method and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022286A * 2017-11-30 2018-05-11 腾讯科技(深圳)有限公司 Picture rendering method and apparatus, and storage medium
CN108379832A * 2018-01-29 2018-08-10 珠海金山网络游戏科技有限公司 Game synchronization method and apparatus
CN109304031A * 2018-09-19 2019-02-05 电子科技大学 Virtualized cloud gaming platform based on heterogeneous intelligent terminals
WO2019026765A1 * 2017-08-02 2019-02-07 株式会社ソニー・インタラクティブエンタテインメント Rendering device, head-mounted display, image transmission method, and image correction method
CN109893857A * 2019-03-14 2019-06-18 腾讯科技(深圳)有限公司 Operation information prediction method, model training method, and related apparatus
US10552752B2 (en) * 2015-11-02 2020-02-04 Microsoft Technology Licensing, Llc Predictive controller for applications

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5453799A (en) * 1993-11-05 1995-09-26 Comsat Corporation Unified motion estimation architecture
JP2981642B2 * 1994-01-07 1999-11-22 富士通株式会社 Video generation apparatus
KR100204478B1 * 1996-05-09 1999-06-15 배순훈 Method and apparatus for compensating empty space caused by global motion
JP3745117B2 * 1998-05-08 2006-02-15 キヤノン株式会社 Image processing apparatus and image processing method
US6415317B1 (en) * 1999-10-01 2002-07-02 Joshua Michael Yelon Software system for reducing the appearance of latency in a multi-user environment
US6868434B1 (en) * 2000-08-07 2005-03-15 Sun Microsystems, Inc. System and method for testing server latencies using multiple concurrent users in a computer system
US6983283B2 (en) * 2001-10-03 2006-01-03 Sun Microsystems, Inc. Managing scene graph memory using data staging
US7515156B2 (en) * 2003-01-08 2009-04-07 Hrl Laboratories, Llc Method and apparatus for parallel speculative rendering of synthetic images
US7240162B2 (en) * 2004-10-22 2007-07-03 Stream Theory, Inc. System and method for predictive streaming
US7934058B2 (en) * 2006-12-14 2011-04-26 Microsoft Corporation Predictive caching of assets to improve level load time on a game console
WO2013070228A2 (en) * 2011-11-10 2013-05-16 Empire Technology Development, Llc Speculative rendering using historical player data
US9564102B2 (en) * 2013-03-14 2017-02-07 Microsoft Technology Licensing, Llc Client side processing of player movement in a remote gaming environment
US9959506B1 (en) * 2014-06-17 2018-05-01 Amazon Technologies, Inc. Predictive content retrieval using device movements
US9756375B2 (en) * 2015-01-22 2017-09-05 Microsoft Technology Licensing, Llc Predictive server-side rendering of scenes
US10962780B2 (en) * 2015-10-26 2021-03-30 Microsoft Technology Licensing, Llc Remote rendering for virtual images
US11403820B1 (en) * 2021-03-11 2022-08-02 International Business Machines Corporation Predictive rendering of an image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10552752B2 (en) * 2015-11-02 2020-02-04 Microsoft Technology Licensing, Llc Predictive controller for applications
WO2019026765A1 * 2017-08-02 2019-02-07 株式会社ソニー・インタラクティブエンタテインメント Rendering device, head-mounted display, image transmission method, and image correction method
CN108022286A * 2017-11-30 2018-05-11 腾讯科技(深圳)有限公司 Picture rendering method and apparatus, and storage medium
CN108379832A * 2018-01-29 2018-08-10 珠海金山网络游戏科技有限公司 Game synchronization method and apparatus
CN109304031A * 2018-09-19 2019-02-05 电子科技大学 Virtualized cloud gaming platform based on heterogeneous intelligent terminals
CN109893857A * 2019-03-14 2019-06-18 腾讯科技(深圳)有限公司 Operation information prediction method, model training method, and related apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4088795A4

Also Published As

Publication number Publication date
EP4088795A1 (en) 2022-11-16
CN113289330B (zh) 2023-12-08
EP4088795A4 (en) 2023-06-14
CN113289330A (zh) 2021-08-24
US20230094880A1 (en) 2023-03-30

Similar Documents

Publication Publication Date Title
WO2021164532A1 (zh) Cloud game live streaming method and apparatus
WO2019024898A1 (zh) File transfer display control method and apparatus, and corresponding terminal
CN106534940B (zh) Method and apparatus for displaying a live streaming entry preview image
WO2018227398A1 (zh) Display method and apparatus
WO2019183788A1 (zh) Method and apparatus for recommending an application based on a scenario
US10397153B2 (en) Electronic device and method for controlling reception of data in electronic device
WO2019080065A1 (zh) Display method and apparatus
CN110168487B (zh) Touch control method and apparatus
US20170024121A1 (en) Operating method for contents searching function and electronic device supporting the same
WO2021164533A1 (zh) Rendering method and apparatus
US11095838B2 (en) Electronic device and method for capturing image in electronic device
JP2016506517A (ja) Navigation system application for mobile devices
KR102090745B1 (ko) Method and apparatus for performing multitasking by using an external display device in an electronic device
WO2017193496A1 (zh) Application data processing method and apparatus, and terminal device
KR102306536B1 (ko) Widget providing system and method
WO2019183997A1 (zh) Video preview method and electronic device
WO2019178865A1 (zh) Application window display method and terminal
CN109408072B (зh) Application deletion method and terminal device
WO2022127661A1 (zh) Application sharing method, electronic device, and storage medium
US20200082631A1 (en) Content output method and electronic device for supporting same
WO2022088974A1 (zh) Remote control method, electronic device, and system
KR20170081976A (ко) Apparatus and method for file transmission and reception in a wireless communication system supporting a cloud storage service
CN112788583B (зh) Device finding method and apparatus, storage medium, and electronic device
CN108702489B (зh) Electronic device including a plurality of cameras and operation method thereof
WO2019061512A1 (зh) Task switching method and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21756688

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021756688

Country of ref document: EP

Effective date: 20220809

NENP Non-entry into the national phase

Ref country code: DE