CN111821688A - Virtual reality game picture processing method and related equipment - Google Patents

Virtual reality game picture processing method and related equipment

Info

Publication number
CN111821688A
CN111821688A
Authority
CN
China
Prior art keywords
virtual reality
picture
user
game picture
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010453997.1A
Other languages
Chinese (zh)
Inventor
袁雪梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202010453997.1A
Publication of CN111821688A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface

Abstract

The application discloses a virtual reality game picture processing method and related equipment. The method includes the following steps: receiving a first picture acquisition request sent by a virtual reality device; acquiring a second virtual reality game picture from a local picture set according to the first picture acquisition request, and determining a prediction operation identifier set according to the second virtual reality game picture, where the prediction operation identifier set indicates the next user operation corresponding to the first user operation; sending the second virtual reality game picture to the virtual reality device so that, in response to the first user operation, the device runs the second virtual reality game picture, and sending the prediction operation identifier set to the cloud server so that the cloud server generates a third virtual reality game picture according to the prediction operation identifier set; and, after receiving the third virtual reality game picture from the cloud server, storing it in the local picture set. This solution can reduce the cost of playing VR games at home.

Description

Virtual reality game picture processing method and related equipment
Technical Field
The present application relates to the field of games, and in particular, to a method and related device for processing a virtual reality game screen.
Background
Virtual Reality (VR) games are a game mode that has emerged with the development of VR technology in recent years. Their principle is to generate a three-dimensional virtual world by computer simulation and provide the user with visual, auditory, tactile, and other sensory simulations, giving the user an immersive experience. Because of the requirements of fidelity and immersion, VR games place high demands on a device's graphics processing capability, so a user who wants to play VR games at home needs a high-performance VR device, which is costly.
Disclosure of Invention
The application provides a virtual reality game picture processing method and related equipment to address the currently high cost of playing VR games at home.
In a first aspect, a virtual reality game picture processing method is provided. The method is applied to an edge computing device and includes the following steps: receiving a first picture acquisition request sent by a virtual reality device, where the first picture acquisition request is sent by the virtual reality device after it acquires a first user operation on a first virtual reality game picture running on the device, and the request is used to request a second virtual reality game picture corresponding to the first user operation; acquiring the second virtual reality game picture from a local picture set according to the first picture acquisition request, and determining a prediction operation identifier set according to the second virtual reality game picture, where the prediction operation identifier set indicates at least one next user operation corresponding to the first user operation; sending the second virtual reality game picture to the virtual reality device so that, in response to the first user operation, the device runs the second virtual reality game picture, and sending the prediction operation identifier set to the cloud server so that the cloud server generates a third virtual reality game picture according to the prediction operation identifier set, where the third virtual reality game picture corresponds to the user operation indicated by the prediction operation identifier set; and, after receiving the third virtual reality game picture from the cloud server, storing it in the local picture set.
In this technical solution, after receiving a first picture acquisition request sent by the virtual reality device based on a user operation, the edge computing device acquires a second virtual reality game picture from the local picture set according to the request and sends it to the virtual reality device, so that the device runs the second virtual reality game picture; this implements the display of the game picture corresponding to the user operation. The edge computing device also determines, from the request, a prediction operation identifier set indicating the next user operation after the first user operation, and sends that set to the cloud server so that the cloud server can generate a third virtual reality game picture from it; the received picture is then stored in the local picture set. In this way, virtual reality game pictures are generated and stored ahead of time.
On the one hand, because the virtual reality device only needs to display the game picture and does not need to render it, the performance requirement on the device is reduced, which lowers the cost of playing VR games at home. On the other hand, because game pictures are generated in advance by the cloud server and stored by the edge computing device, both the interaction latency between the edge computing device and the VR device and the rendering latency of the game picture are reduced. Reducing these two latencies shortens the time from when the VR device acquires a user operation to when it displays the corresponding VR game picture, so the user does not perceive the game as stuttering, and the user experience of the VR game is preserved.
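The request-serve-prefetch loop described above can be sketched as follows. This is an illustrative reading of the scheme, not the patent's implementation; all class and method names, and the cache keyed by (picture id, operation id), are assumptions.

```python
# Sketch of the edge-device flow: serve the requested frame from the
# local picture set, then ask the cloud to pre-render frames for the
# predicted next operations and store them locally.

class EdgeDevice:
    def __init__(self, local_frames, predictions, cloud):
        self.local_frames = local_frames  # (picture_id, op_id) -> frame
        self.predictions = predictions    # picture_id -> set of likely next op ids
        self.cloud = cloud                # object with a render(op_ids) method

    def handle_request(self, picture_id, op_id):
        # 1) serve the second frame from the local picture set
        frame = self.local_frames[(picture_id, op_id)]
        # 2) determine the predicted next operations and have the cloud
        #    pre-render the corresponding third frames
        next_ops = self.predictions.get(frame, set())
        for new_key, new_frame in self.cloud.render(next_ops).items():
            # 3) store the pre-rendered frames back into the local set
            self.local_frames[new_key] = new_frame
        return frame

class FakeCloud:
    """Stand-in for the cloud server's rendering service."""
    def render(self, op_ids):
        return {("B", op): f"frame_B_{op}" for op in op_ids}

edge = EdgeDevice({("A", 1): "B"}, {"B": {1, 2}}, FakeCloud())
served = edge.handle_request("A", 1)  # serve frame "B", prefetch its successors
```

With this shape, the latency-sensitive path (step 1) touches only the local set, while the slow cloud rendering (steps 2 and 3) happens off the critical path.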
With reference to the first aspect, in a possible implementation manner, the method further includes: receiving brain wave feedback information and user visual angle information for the first virtual reality game picture, both collected by the virtual reality device; analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information; generating a parameter adjustment instruction according to the user emotion and the user visual angle information; and sending the parameter adjustment instruction to the virtual reality device so that the device adjusts the operating parameters of the virtual reality game picture running on it. By analyzing the user's emotion toward the running VR game picture from brain wave feedback, and instructing the virtual reality device to adjust the picture's operating parameters according to that emotion and the user's visual angle, the VR game picture can adapt to the user's emotion and viewing angle, giving the user a better experience.
With reference to the first aspect, in a possible implementation manner, the method further includes: receiving brain wave feedback information for the first virtual reality game picture collected by the virtual reality device; and analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information to obtain user emotion indication information. The step of sending the prediction operation identifier set to the cloud server then specifically includes: adding the user emotion indication information to the prediction operation identifier set before sending the set to the cloud server, so that when the cloud server generates the third virtual reality game picture according to the set, it adjusts the image quality parameters of that picture so that the adjusted picture matches the user emotion. Because the indication of the user's emotion is carried in the prediction operation identifier set that instructs the cloud server to generate the VR game picture, the cloud server can adjust the image quality parameters at generation time to match the user's emotion, so that in subsequent gameplay the image quality suits the user's mood, improving the experience of the VR game.
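A minimal sketch of carrying the emotion indication inside the prediction set so that the cloud picks image quality parameters at render time. The emotion labels and the emotion-to-quality mapping below are invented examples for illustration only; the patent does not specify them.

```python
# Cloud-side sketch: pick quality parameters from the emotion indication
# carried in the prediction operation identifier set, then render one
# frame per predicted operation with those parameters.

QUALITY_BY_EMOTION = {
    "tense": {"brightness": 0.8, "saturation": 0.7},  # soften the scene
    "bored": {"brightness": 1.1, "saturation": 1.2},  # make it more vivid
}
DEFAULT_QUALITY = {"brightness": 1.0, "saturation": 1.0}

def render_predicted_frames(prediction_set):
    emotion = prediction_set.get("emotion")
    quality = QUALITY_BY_EMOTION.get(emotion, DEFAULT_QUALITY)
    # each predicted operation gets a frame carrying the adjusted quality
    return {op: {"op": op, **quality} for op in prediction_set["ops"]}

frames = render_predicted_frames({"ops": ["op_1", "op_2"], "emotion": "tense"})
```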
With reference to the first aspect, in a possible implementation manner, the step of analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information specifically includes: extracting features from the brain wave feedback information to obtain a corresponding feature vector, and inputting that feature vector into a preset brain wave analysis model to obtain the model's emotion recognition result. The brain wave analysis model comprises (m - 1) layers, where m is the total number of user emotions the model can recognize, and each layer is composed of a different number of emotion recognition models: the i-th layer has i emotion recognition models, and each emotion recognition model recognizes two emotions. The first emotion recognition model of the i-th layer is connected with the second and third emotion recognition models of the (i + 1)-th layer, where one emotion recognizable by the second model is the same as one emotion recognizable by the first model, one emotion recognizable by the third model is the same as the other emotion recognizable by the first model, and 1 ≤ i ≤ m. The emotion recognition result of the (i + 1)-th layer is associated with that of the i-th layer, and the result of the (m - 1)-th layer is the emotion recognition result of the whole model. The user emotion corresponding to the first virtual reality game picture is then determined from that result. Because the feature vector is identified and analyzed by a multi-layer model with simple recognition logic, operating efficiency can be improved even while recognizing as many user emotions as possible, which facilitates rapid adjustment of the operating parameters.
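One plausible reading of this layered structure is a sequential pairwise elimination: with m candidate emotions, (m - 1) binary decisions are made, each keeping one of two emotions. The toy below illustrates that reading only; the actual model topology, the scoring rule, and the emotion names are assumptions.

```python
# Toy sketch of a layered pairwise emotion model: m emotions are
# narrowed down through (m - 1) binary "keep one of two" decisions.

def classify_emotion(feature_vector, emotions, pairwise_score):
    """Sequentially eliminate emotions with (m - 1) binary decisions."""
    current = emotions[0]
    for challenger in emotions[1:]:  # one binary decision per layer
        # each binary "model" keeps whichever of its two emotions scores higher
        if pairwise_score(feature_vector, challenger) > pairwise_score(feature_vector, current):
            current = challenger
    return current

# Toy scorer: the "feature vector" is a dict of per-emotion evidence.
score = lambda fv, emotion: fv.get(emotion, 0.0)
fv = {"calm": 0.2, "happy": 0.9, "sad": 0.1, "fear": 0.4}
result = classify_emotion(fv, ["calm", "happy", "sad", "fear"], score)
```

Each decision only ever compares two emotions, which is what keeps the recognition logic simple as the number of recognizable emotions grows.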
With reference to the first aspect, in a possible implementation manner, the step of extracting features from the brain wave feedback information to obtain the corresponding feature vector specifically includes: determining a first, a second, and a third feature vector from the brain wave feedback information, where the first feature vector represents the energy distribution of the brain wave feedback information, the second represents its complexity, and the third represents its fractal features; and obtaining the feature vector corresponding to the brain wave feedback information from the first, second, and third feature vectors. By extracting feature vectors along multiple dimensions, the user's emotion can be analyzed jointly across those dimensions, which helps ensure the accuracy of the emotion determination.
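The three feature families can be illustrated with simple stand-ins: mean squared amplitude for energy, Shannon entropy of an amplitude histogram for complexity, and the Katz fractal dimension for the fractal feature. These are simplified proxies chosen for the sketch; the patent does not specify the exact features.

```python
import math

def energy_feature(signal):
    """Stand-in for an energy-distribution feature: mean squared amplitude."""
    return sum(x * x for x in signal) / len(signal)

def amplitude_entropy(signal, bins=8):
    """Stand-in for a complexity feature: Shannon entropy of an amplitude histogram."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant signals
    counts = [0] * bins
    for x in signal:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    probs = [c / len(signal) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)

def katz_fractal_dimension(signal):
    """Katz fractal dimension of the waveform."""
    n = len(signal) - 1
    L = sum(abs(signal[i + 1] - signal[i]) for i in range(n))       # curve length
    d = max(abs(signal[i] - signal[0]) for i in range(1, n + 1))    # max distance from first point
    return math.log10(n) / (math.log10(d / L) + math.log10(n))

# Synthetic "brain wave": a slow oscillation plus a small alternating component.
sig = [math.sin(0.3 * i) + 0.05 * ((-1) ** i) for i in range(128)]
features = [energy_feature(sig), amplitude_entropy(sig), katz_fractal_dimension(sig)]
```

Concatenating the three values yields the combined feature vector fed to the analysis model.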
With reference to the first aspect, in a possible implementation manner, the first picture acquisition request includes a first picture identifier and a first operation identifier, where the first picture identifier identifies the first virtual reality game picture and the first operation identifier identifies the first user operation. The step of acquiring the second virtual reality game picture from the local picture set according to the first picture acquisition request specifically includes: determining a second picture identifier, i.e. the identifier of the second virtual reality game picture, from the first picture identifier and the first operation identifier; acquiring, from the local picture set according to the second picture identifier, a plurality of picture materials corresponding to the second virtual reality game picture; and rendering a three-dimensional picture from those materials to obtain the second virtual reality game picture. By carrying in the request an identifier of the running game picture and an identifier of the executed operation, and then fetching the picture materials to finish rendering, the game picture can be generated quickly and in real time.
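The lookup-then-render step can be sketched as two table lookups followed by composition. The id scheme, table names, and the string-joining "render" step are illustrative placeholders for the real three-dimensional rendering.

```python
# Sketch: derive the second picture id from (first picture id, operation
# id), fetch its materials from the local set, and assemble a frame.

TRANSITIONS = {("pic_A", "op_jump"): "pic_B"}        # (picture id, op id) -> next picture id
MATERIALS = {"pic_B": ["sky", "terrain", "avatar"]}  # picture id -> material list

def build_second_frame(picture_id, op_id):
    second_id = TRANSITIONS[(picture_id, op_id)]     # determine second picture id
    materials = MATERIALS[second_id]                 # fetch materials from local set
    # stand-in for three-dimensional rendering: compose materials into a frame
    return second_id, "+".join(materials)

frame_id, frame = build_second_frame("pic_A", "op_jump")
```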
In a second aspect, an edge computing device is provided, comprising:
the first receiving module is used for receiving a first picture acquisition request sent by the virtual reality device, where the first picture acquisition request is sent by the virtual reality device after it acquires a first user operation on a first virtual reality game picture running on the device, and the request is used to request a second virtual reality game picture corresponding to the first user operation;
the picture acquisition module is used for acquiring a second virtual reality game picture from the local picture set according to the first picture acquisition request and determining a prediction operation identification set according to the second virtual reality game picture, wherein the prediction operation identification set is used for indicating at least one next user operation corresponding to the first user operation;
the sending module is used for sending the second virtual reality game picture to the virtual reality device so that, in response to the first user operation, the device runs the second virtual reality game picture, and for sending the prediction operation identifier set to the cloud server so that the cloud server generates a third virtual reality game picture according to the set, where the third virtual reality game picture corresponds to the user operation indicated by the prediction operation identifier set;
and the picture storage module is used for storing the third virtual reality game picture to the local picture set after receiving the third virtual reality game picture sent by the cloud server.
With reference to the second aspect, in one possible design, the apparatus further includes: the second receiving module is used for receiving brain wave feedback information and user visual angle information which are acquired by the virtual reality equipment and aim at the first virtual reality game picture; the emotion analysis module is used for analyzing the emotion of the user corresponding to the first virtual reality game picture based on the brain wave feedback information; the instruction generating module is used for generating a parameter adjusting instruction according to the user emotion and the user visual angle information; the sending module is further configured to send the parameter adjustment instruction to the virtual reality device, so that the virtual reality device adjusts the operation parameters of the virtual reality game picture running on the virtual reality device.
With reference to the second aspect, in one possible design, the apparatus further includes: the second receiving module is used for receiving brain wave feedback information aiming at the first virtual reality game picture, which is acquired by the virtual reality equipment; the emotion analysis module is used for analyzing the emotion of the user corresponding to the first virtual reality game picture based on the brainwave feedback information to obtain emotion indication information of the user; the sending module is specifically configured to: after the user emotion indication information is added into the prediction operation identification set, the prediction operation identification set is sent to the cloud server, so that when the cloud server generates a third virtual reality game picture according to the prediction operation identification set, the image quality parameters of the third virtual reality game picture are adjusted, and the adjusted third virtual reality game picture is matched with the user emotion.
With reference to the second aspect, in a possible design, the emotion analysis module is specifically configured to: extract features from the brain wave feedback information to obtain a corresponding feature vector, and input that feature vector into a preset brain wave analysis model to obtain the model's emotion recognition result. The brain wave analysis model comprises (m - 1) layers, where m is the total number of user emotions the model can recognize, and each layer is composed of a different number of emotion recognition models: the i-th layer has i emotion recognition models, and each emotion recognition model recognizes two emotions. The first emotion recognition model of the i-th layer is connected with the second and third emotion recognition models of the (i + 1)-th layer, where one emotion recognizable by the second model is the same as one emotion recognizable by the first model, one emotion recognizable by the third model is the same as the other emotion recognizable by the first model, and 1 ≤ i ≤ m. The emotion recognition result of the (i + 1)-th layer is associated with that of the i-th layer, and the result of the (m - 1)-th layer is the emotion recognition result of the whole model. The module then determines the user emotion corresponding to the first virtual reality game picture from that result.
With reference to the second aspect, in a possible design, the emotion analysis module is specifically configured to: respectively determining a first feature vector, a second feature vector and a third feature vector according to the brain wave feedback information, wherein the first feature vector is used for representing the energy distribution of the brain wave feedback information, the second feature vector is used for representing the complexity of the brain wave feedback information, and the third feature vector is used for representing the fractal feature of the brain wave feedback information; and obtaining a feature vector corresponding to the brainwave feedback information according to the first feature vector, the second feature vector and the third feature vector.
With reference to the second aspect, in one possible design, the first picture acquisition request includes a first picture identifier and a first operation identifier, where the first picture identifier identifies the first virtual reality game picture and the first operation identifier identifies the first user operation. The picture acquisition module is specifically configured to: determine a second picture identifier, i.e. the identifier of the second virtual reality game picture, from the first picture identifier and the first operation identifier; acquire, from the local picture set according to the second picture identifier, a plurality of picture materials corresponding to the second virtual reality game picture; and render a three-dimensional picture from those materials to obtain the second virtual reality game picture.
In a third aspect, there is provided another edge computing device, including a memory and one or more processors, the one or more processors being configured to execute one or more computer programs stored in the memory, the one or more processors, when executing the one or more computer programs, causing the device to implement the virtual reality game screen processing method of the first aspect.
In a fourth aspect, there is provided a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the virtual reality game picture processing method of the first aspect.
The application can achieve the following beneficial effects: reducing the cost of playing VR games at home, and shortening the time from when the VR device acquires a user operation to when it displays the corresponding VR game picture, so that the user does not experience the game as stuttering and the experience of the VR game is preserved.
Drawings
Fig. 1 is a schematic diagram of an edge computing-based VR game network system architecture provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a virtual reality game picture processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an association relationship between a VR game screen and a user operation according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another virtual reality game picture processing method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an electroencephalogram analysis model and a recognition logic of the electroencephalogram analysis model according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a method for generating a parameter adjustment instruction according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an edge computing device according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of another edge computing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
The technical solution of the application can be applied to the running scenario of a VR game. Building on that scenario, the application applies edge computing technology to VR games and proposes a new system architecture: the edge-computing-based VR game network architecture 10. As shown in fig. 1, the edge-computing-based VR game network system architecture 10 includes a VR device 101, an edge computing device 102, and a cloud server 103. The VR device 101 is connected with the edge computing device 102 and is used to interact with the user and, based on that interaction, with the edge computing device 102, so as to keep the VR game running. Specifically, the VR device may collect user operations such as voice and gestures, initiate a request to the edge computing device based on the collected operations to obtain the corresponding VR game picture, and then display that picture to the user using VR technology. The edge computing device 102 is connected with the cloud server 103 for communication and interaction, and the cloud server 103 provides various service supports to the edge computing device 102 to ensure the normal running of the VR game.
By applying edge computing in the VR game scenario, functions previously executed on the VR device can be offloaded to the edge computing device, and the low-latency characteristic of edge computing keeps the VR game running normally. The VR device then only needs basic functions (such as display and communication), which reduces the cost of playing VR games at home.
Based on the above-described VR game network system architecture 10 based on edge computing, the technical solution of the present application can be implemented, and the technical solution of the present application is specifically described below.
Referring to fig. 2, fig. 2 is a flowchart illustrating a virtual reality game picture processing method according to an embodiment of the present application, which can be applied to the edge computing device 102 described above, as shown in fig. 2, the method includes the following steps:
s201, receiving a first picture acquisition request sent by virtual reality equipment.
Here, the virtual reality device may be the virtual reality device 101 in fig. 1, and is used to implement interaction with a user, and the virtual reality device may specifically be VR glasses, a wearable device, and the like.
In the embodiment of the application, the first picture acquiring request is a request sent by the virtual reality device after acquiring a first user operation for a first virtual reality game picture running on the virtual reality device, and the first picture acquiring request is used for requesting to acquire a second virtual reality game picture corresponding to the first user operation.
It can be understood that the first virtual reality game screen refers to the VR game screen being displayed on the virtual reality device in real time, that is, the VR game screen presented on the virtual reality device at the current time, where the current time refers to the moment at which the VR game screen is being presented. The first user operation refers to a user operation performed by the user based on the first virtual reality game screen; its actual control object is the first virtual reality game screen, where the actual control object refers to the object actually controlled and acted on by operating various operating peripherals (such as a game pad, a game terminal and the like). The second virtual reality game picture refers to the VR game picture that should be displayed on the virtual reality device after the first user operation is performed on the first virtual reality game picture.
S202, a second virtual reality game picture is obtained from the local picture set according to the first picture obtaining request, and a prediction operation identification set is determined according to the second virtual reality game picture.
In the embodiment of the present application, the local screen set is one or more local folders/local databases used for storing VR game screens generated in advance, or the screen materials of such screens, for the VR game screen being displayed by the virtual reality device at the current time. The local screen set may include the VR game screens, or the VR screen materials, corresponding to all user operations executable on the VR game screen currently displayed by the virtual reality device. A VR screen material is an image material used for rendering and generating a VR game screen.
For example, suppose 5 kinds of operations, i.e., operation 1, operation 2, operation 3, operation 4, and operation 5, can be performed on the VR game screen A being displayed on the virtual reality device. Then, for each of operation 1 through operation 5, the local screen set includes the VR game screen to be presented on the virtual reality device after that operation is performed on VR game screen A, or the VR screen materials required for presenting that VR game screen.
Optionally, in addition to the VR game screens or VR screen materials corresponding to all user operations executable on the VR game screen currently being displayed by the virtual reality device, the local screen set may include various contents used for identification or for establishing association relationships, such as the correspondence between VR game screens and user operations, and the association between the VR game screens stored in the local screen set and the VR game screen currently being displayed by the virtual reality device.
In a first possible scenario, the second virtual reality game screen is saved in the local screen set after being rendered by the cloud server. In this case, the VR game screens corresponding to all the user operations that can be performed on the VR game screen currently displayed by the virtual reality device are stored in the local screen set, and the second virtual reality game screen can be directly acquired from the local screen set according to the first screen acquisition request. In this way, the edge computing device obtains the VR game screen directly, which saves the time for rendering and generating it, thereby reducing the time delay.
Specifically, the first picture acquisition request may carry a first picture identifier and a first operation identifier, where the first picture identifier and the first operation identifier are respectively a picture identifier of the first game picture and an operation identifier of the first user operation, and are respectively used to uniquely indicate the first game picture and the first user operation; the VR game screen corresponding to the screen identifier of the first game screen and the operation identifier of the first user operation may be acquired from the local screen set as the second virtual reality game screen.
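The first scenario described above can be sketched as a plain dictionary lookup: pre-rendered frames sit in the local picture set keyed by the (picture identifier, operation identifier) pair carried in the request. All class, method, and identifier names below are illustrative, not part of the patent.

```python
# Sketch of the first scenario: the edge device answers a first picture
# acquisition request by looking up the pre-rendered second picture in the
# local picture set, keyed by (picture id, operation id). Names are illustrative.

class LocalPictureSet:
    def __init__(self):
        # (picture_id, operation_id) -> pre-rendered VR game picture
        self._frames = {}

    def store(self, picture_id, operation_id, frame):
        self._frames[(picture_id, operation_id)] = frame

    def lookup(self, picture_id, operation_id):
        # Returns the second virtual reality game picture, or None on a miss
        return self._frames.get((picture_id, operation_id))

local_set = LocalPictureSet()
local_set.store("picture_A", "op_1", "frame_after_op_1")

# The request carries the first picture identifier and first operation identifier:
second_picture = local_set.lookup("picture_A", "op_1")
```

A miss would mean the cloud server has not yet delivered the predicted frame, in which case the edge device would have to fall back to requesting it on demand.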
Optionally, the first picture acquisition request may also carry other content related to the first game picture and the first user operation, so that the second virtual reality game picture may be obtained from the local picture set according to that content; the application is not limited in this respect.
In a second possible scenario, the cloud server is responsible for providing the edge computing device with the VR picture materials for rendering and generating the second virtual reality game picture, and the edge computing device renders and generates the second virtual reality game picture according to the request of the virtual reality device. In this case, the first picture acquisition request may carry the first picture identifier and the first operation identifier; a second picture identifier, i.e., the picture identifier of the second virtual reality game picture, is determined according to the first picture identifier and the first operation identifier; a plurality of picture materials corresponding to the second virtual reality game picture are acquired from the local picture set according to the second picture identifier; and a three-dimensional picture is generated based on those picture materials, obtaining the second virtual reality game picture. In this way, the edge computing device renders and generates the VR game picture according to the request of the virtual reality device, and the cloud server only needs to send the picture materials for generating the VR game picture to the edge computing device in advance; on the premise of meeting the requirement of rendering and generating the VR game picture in real time, the operation of the cloud server pre-generating a large number of VR game pictures can be omitted, which saves computing resources of the cloud server to a certain extent.
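The second scenario can be sketched in the same spirit: derive the second picture identifier from the (first picture id, first operation id) pair, fetch the materials, and compose them. The association table, material names, and the string-joining stand-in for the real three-dimensional rendering program are all illustrative assumptions.

```python
# Sketch of the second scenario: the edge device maps (first picture id,
# first operation id) to the second picture identifier, pulls that picture's
# materials from the local set, and "renders" them. The render step is a
# placeholder for a real rendering program.

NEXT_PICTURE = {("picture_A", "op_1"): "picture_B"}          # association table
MATERIALS = {"picture_B": ["sky_mesh", "terrain_tex", "avatar_model"]}

def render_second_picture(first_picture_id, first_operation_id):
    second_id = NEXT_PICTURE[(first_picture_id, first_operation_id)]
    materials = MATERIALS[second_id]
    # Stand-in for the preset three-dimensional rendering program
    return f"{second_id}:" + "+".join(materials)

frame = render_second_picture("picture_A", "op_1")
```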
In the embodiment of the application, the prediction operation identifier set is used for indicating at least one next user operation corresponding to the first user operation. Here, a next user operation refers to an operation performed based on the second virtual reality game screen, that is, the actual control object of the next user operation is the second virtual reality game screen. Generally, the at least one next user operation refers to all user operations executable on the basis of the second virtual reality game screen.
In a specific implementation, an association relationship between each VR game screen and a user operation may be established in advance, and then a prediction operation identifier set may be determined according to the association relationship between the VR game screen and the user operation.
By way of example, referring to fig. 3, fig. 3 shows an association relationship between VR game screens and user operations. As can be seen from fig. 3, if the user operation is user operation a, it can be determined that the next user operations of user operation a are user operation a1, user operation a2, and user operation a3. The prediction operation identifier set is then the set of operation identifiers indicating user operation a1, user operation a2, and user operation a3.
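Determining the prediction operation identifier set from a pre-built association table can be sketched as follows, mirroring the fig. 3 example; the table keys and operation identifiers are illustrative assumptions.

```python
# Sketch of determining the prediction operation identifier set: a pre-built
# association maps each VR game picture to all user operations executable on
# it; the prediction set is simply the entry for the second picture.

NEXT_OPERATIONS = {
    "picture_after_op_a": ["op_a1", "op_a2", "op_a3"],
}

def prediction_operation_ids(second_picture_id):
    # All user operations executable on the second virtual reality picture
    return set(NEXT_OPERATIONS.get(second_picture_id, []))

pred = prediction_operation_ids("picture_after_op_a")
```

The set is then forwarded to the cloud server, which pre-generates one frame (or one set of materials) per identifier.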
S203, sending the second virtual reality game picture to the virtual reality equipment, and sending the prediction operation identification set to the cloud server.
Here, after the second virtual reality game screen is transmitted to the virtual reality device, the virtual reality device executes the second virtual reality game screen on the virtual reality device in response to the first user operation, that is, displays the second virtual reality game screen on the virtual reality device. Because the VR equipment does not need to render by itself to generate a VR game picture, the VR equipment with lower performance can also meet the requirement of a user on using a VR game, and the cost of the user on using the VR game at home can be reduced; because the edge computing device is communicated with the VR device, network time delay can be reduced, and normal operation of the VR game can be guaranteed.
Here, the cloud server may be the cloud server 103 in fig. 1. After the prediction operation identifier set is sent to the cloud server, the cloud server generates a third virtual reality game screen according to the prediction operation identifier set, where the third virtual reality game screen is the virtual reality game screen corresponding to a user operation indicated by the prediction operation identifier set.
It is understood that the third virtual reality game screen is the VR game screen that should be displayed on the virtual reality device after a user operation indicated by the prediction operation identifier set is performed on the second virtual reality game screen. Corresponding to the first possible scenario, the cloud server generating the third virtual reality game screen according to the prediction operation identifier set may mean that the cloud server directly renders and generates the virtual reality game screen corresponding to each user operation indicated by the prediction operation identifier set; alternatively, corresponding to the second possible scenario, it may mean that the cloud server generates, according to the prediction operation identifier set, the screen materials of the virtual reality game screens corresponding to the respective user operations indicated by the set.
It can be seen that before the edge computing device receives the request for acquiring the virtual reality game screen, the cloud server has generated a corresponding virtual reality game screen or screen material for a user operation that the user may perform on the virtual reality device, so that the edge computing device can directly acquire the screen material of the virtual reality game screen or the virtual reality game screen from the local, and the time from the acquisition of the user operation by the VR device to the display of the VR game screen corresponding to the user operation by the VR device can be shortened.
It should be noted that, the aforementioned rendering of the virtual reality game picture by the cloud server or the edge computing device is implemented by a preset rendering program, and any existing method for generating a virtual reality game picture may be adopted, which is not limited in this application.
And S204, after receiving the third virtual reality game picture sent by the cloud server, storing the third virtual reality game picture to the local picture set.
In the technical scheme, after receiving a first picture acquisition request which is sent by the virtual reality equipment based on user operation and is used for acquiring a virtual reality game picture, the edge computing equipment acquires a second virtual reality game picture from a local picture set according to the first picture acquisition request and then sends the second virtual reality game picture to the virtual reality equipment, so that the virtual reality equipment runs the second virtual reality game picture, and the function of displaying the virtual reality game picture corresponding to the user operation is realized; and after receiving the first picture acquisition request, the edge computing device further determines a prediction operation identification set for indicating the next user operation of the first user operation according to the first picture acquisition request, and sends the prediction operation identification set to the cloud server, so that the cloud server can generate a third virtual reality game picture according to the prediction operation identification set, and then stores the received virtual reality game picture in a local picture set, thereby realizing the pre-generation and storage of the virtual reality game picture. 
On one hand, because the virtual reality device only needs to display the virtual reality game picture and does not need to render and generate it, the performance requirement on the virtual reality device is reduced, which can reduce the cost of using VR games at home. On the other hand, because the game picture is generated in advance by the cloud server and stored by the edge computing device, both the interaction delay between the edge computing device and the VR device and the rendering delay of the game picture are reduced; the reduction of these two delays shortens the time from the VR device collecting the user operation to the VR device displaying the corresponding VR game picture, so that game stuttering can be avoided and the user experience of the VR game can be ensured.
In some possible embodiments, the VR device may further collect brainwave information of the user, analyze the emotion of the user based on the brainwave information, and adjust the image quality parameters of the VR game picture based on that emotion. Referring to fig. 4, fig. 4 is a flowchart illustrating another virtual reality game picture processing method provided in the embodiment of the present application, which can be applied to the edge computing device 102 described above. As shown in fig. 4, the method includes the following steps:
S301, a first picture acquisition request sent by the virtual reality device is received.
S302, a second virtual reality game picture is obtained from the local picture set according to the first picture obtaining request, and a prediction operation identification set is determined according to the second virtual reality game picture.
Here, the specific implementation manner of steps S301 to S302 may refer to the description of steps S201 to S202, and is not described herein again.
And S303, receiving brain wave feedback information aiming at the first virtual reality game picture, which is acquired by the virtual reality equipment.
Here, the brain wave feedback information is brain wave signals collected by the virtual reality device for feeding back the emotion of the user with respect to the virtual reality game screen currently displayed on the virtual reality device. Since the user experiences different frame rates, image quality, and the like of the virtual reality game image, the user may generate various emotions expressing the current psychological state of the user, such as dizziness, joy, anger, and worry, which affect the user experience of the virtual reality game. Considering that the virtual reality equipment is generally head-mounted equipment, the emotion of a user can be presented in brain waves, and the brain wave signals corresponding to different emotions are different, the emotion of the user is determined by collecting the brain wave signals of the user, so that a foundation is laid for relieving certain unfavorable user emotion of the user. In specific implementation, the virtual reality device may collect the brain wave signal of the user through a combination of circuits such as a brain wave sensor, a signal amplification circuit, and a filter circuit.
S304, analyzing the emotion of the user corresponding to the first virtual reality game picture based on the brain wave feedback information to obtain emotion indication information of the user.
In the embodiment of the application, a feature vector for feeding back brainwave features, i.e., the feature vector corresponding to the brainwave feedback information, can be obtained by performing feature extraction on the brainwave feedback information using one or more feature extraction methods such as the Common Spatial Pattern (CSP), wavelet transform, auto-regression (AR) model, or power spectral density. The feature vector corresponding to the brainwave feedback information is then classified and identified through a user emotion classification model, so as to determine the user emotion regarding the virtual reality game picture currently displayed on the VR device, i.e., the user emotion corresponding to the first virtual reality game picture. The emotion indication information refers to an identifier for indicating the user emotion, and may specifically be a number, a character code, or the like.
In one possible implementation, the user emotion classification model may be a tree structure-based brainwave analysis model.
In some embodiments, the brainwave analysis model is used for identifying m user emotions, and the brainwave analysis model may include (m-1) layers, each layer of the brainwave analysis model being composed of a different number of emotion recognition models, each emotion recognition model being used for identifying two user emotions. The first emotion recognition model of the ith layer is connected with the second emotion recognition model of the (i +1) th layer and the third emotion recognition model of the (i +1) th layer, wherein one user emotion capable of being recognized by the second emotion recognition model is the same as one user emotion capable of being recognized by the first emotion recognition model, and one user emotion capable of being recognized by the third emotion recognition model is the same as the other user emotion capable of being recognized by the first emotion recognition model.
The specific logic of the brain wave analysis model for classifying and identifying the feature vectors corresponding to the brain wave feedback information may be as follows: taking the 1 st emotion recognition model as a target emotion recognition model of the 1 st layer, inputting the feature vector corresponding to the brainwave feedback information into the target emotion recognition model of the 1 st layer, and determining an emotion recognition result of the target emotion recognition model of the 1 st layer as an emotion recognition result of the 1 st layer; according to the emotion recognition result of the layer 1, determining a target emotion recognition model of the layer 2 from a second emotion recognition model and a third emotion recognition model of the layer 2 which are connected with the target emotion recognition model of the layer 1, inputting a feature vector corresponding to brain wave feedback information into the target emotion recognition model of the layer 2, and determining the emotion recognition result of the target emotion recognition model of the layer 2 as the emotion recognition result of the layer 2; in the same way, until the emotion recognition result of the target emotion recognition model on the (m-1) th layer is obtained, determining the emotion recognition result of the target emotion recognition model on the (m-1) th layer as the emotion recognition result of the brain wave analysis model; and determining the user emotion corresponding to the first virtual reality game picture according to the emotion recognition result of the brain wave analysis model. 
For the target emotion recognition model of the ith layer, if the emotion recognition result of the target emotion recognition model of the ith layer corresponds to one of the user emotions recognizable by the target emotion recognition model of the ith layer, determining the second emotion recognition model of the (i +1) th layer connected with the target emotion recognition model of the ith layer as the target emotion recognition model of the (i +1) th layer; and if the emotion recognition result of the target emotion recognition model of the ith layer corresponds to another user emotion recognizable by the target emotion recognition model of the ith layer, determining the third emotion recognition model of the (i +1) th layer connected with the target emotion recognition model of the ith layer as the target emotion recognition model of the (i +1) th layer.
For example, the brainwave analysis model and its recognition logic are described taking m = 4 as an example, assuming the user emotions are user emotion 1, user emotion 2, user emotion 3, and user emotion 4. Referring to fig. 5, fig. 5 is a schematic diagram of a brainwave analysis model and its recognition logic according to an embodiment of the present application. As shown in fig. 5, each node in fig. 5 is an emotion recognition model, namely emotion recognition models M1-M6, where the emotion recognition model M1 is used to recognize user emotion 1 and user emotion 2, the emotion recognition model M2 is used to recognize user emotion 1 and user emotion 3, the emotion recognition model M3 is used to recognize user emotion 2 and user emotion 4, the emotion recognition model M4 is used to recognize user emotion 1 and user emotion 4, the emotion recognition model M5 is used to recognize user emotion 2 and user emotion 3, and the emotion recognition model M6 is used to recognize user emotion 3 and user emotion 4. When the emotion recognition result of the emotion recognition model M1 corresponds to user emotion 1, the emotion recognition model M2 is determined as the target emotion recognition model of layer 2; when the emotion recognition result of the emotion recognition model M1 corresponds to user emotion 2, the emotion recognition model M3 is determined as the target emotion recognition model of layer 2.
Similarly, if the emotion recognition model M2 is the target emotion recognition model at layer 2, then when the emotion recognition result of the emotion recognition model M2 corresponds to user emotion 1, the emotion recognition model M4 is determined as the target emotion recognition model at layer 3, and when the result corresponds to user emotion 3, the emotion recognition model M5 is determined as the target emotion recognition model at layer 3. If the emotion recognition model M3 is the target emotion recognition model at layer 2, then when the emotion recognition result of the emotion recognition model M3 corresponds to user emotion 2, the emotion recognition model M5 is determined as the target emotion recognition model at layer 3, and when the result corresponds to user emotion 4, the emotion recognition model M6 is determined as the target emotion recognition model at layer 3. Finally, the emotion recognition result of the target emotion recognition model at layer 3 is determined as the emotion recognition result of the brainwave analysis model.
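The layer-by-layer descent described above can be sketched as a tree traversal over the models M1-M6 of fig. 5 (with M3's children following the shared-emotion rule of the model structure). The per-node classifiers are stubbed out as a `predict` callback; all names and the stub behavior are illustrative.

```python
# Sketch of the (m-1)-layer traversal for m = 4 emotions, following fig. 5.
# Each node decides between its two recognizable emotions; the result picks
# which child to descend into; a leaf's result is the final recognition.

TREE = {
    # node: (first emotion, second emotion, child if first, child if second)
    "M1": ("emotion1", "emotion2", "M2", "M3"),
    "M2": ("emotion1", "emotion3", "M4", "M5"),
    "M3": ("emotion2", "emotion4", "M5", "M6"),
    "M4": ("emotion1", "emotion4", None, None),   # layer-3 leaves
    "M5": ("emotion2", "emotion3", None, None),
    "M6": ("emotion3", "emotion4", None, None),
}

def classify(feature_vector, predict):
    """predict(node, x) -> +1 (first emotion) or -1 (second emotion)."""
    node = "M1"
    while True:
        first, second, left, right = TREE[node]
        result = first if predict(node, feature_vector) == 1 else second
        child = left if result == first else right
        if child is None:        # reached layer m-1: result is final
            return result
        node = child

# Stub classifier that always reports each model's first recognizable emotion
emotion = classify([0.1, 0.2], lambda node, x: 1)
```

Note that only m-1 = 3 of the six models run per recognition, which is the efficiency argument made below.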
In a possible implementation, each emotion recognition model may be a classification tree constructed based on a sign function, and the formula of the classification tree may be: f(x) = sign(w·x + b), where x = (x1, x2, …, xh) is the feature vector corresponding to the brainwave feedback information, h is the vector dimension of that feature vector, w = (w1, w2, …, wh) contains the weight parameter of the feature vector in each vector dimension, and b is a bias parameter. Expanding the formula gives f(x) = sign(w1x1 + w2x2 + … + whxh + b). The two emotion recognition results of the emotion recognition model are 1 and -1, which respectively represent the two user emotions recognizable by the model: if the calculated result is 1, the user emotion is one of the two recognizable user emotions, and if the result is -1, it is the other.
The weight parameter in each vector dimension and the bias parameter of each emotion recognition model can be obtained through training on training samples. To train one emotion recognition model, brainwave samples corresponding to the two user emotions recognizable by that model (taking user emotion S1 and user emotion S2 as examples) can be obtained, and feature extraction is performed on the brainwave samples corresponding to user emotion S1 and user emotion S2 respectively, obtaining multiple feature vector samples for each of the two emotions. Then, each feature vector sample corresponding to user emotion S1 is used as the independent variable of the above formula, i.e., as x = (x1, x2, …, xh), with 1 as the dependent variable f(x) (i.e., y), yielding one training sample corresponding to user emotion S1; processing each such feature vector sample in this way yields multiple training samples corresponding to user emotion S1. Likewise, each feature vector sample corresponding to user emotion S2 is used as x = (x1, x2, …, xh) with -1 as the dependent variable, yielding multiple training samples corresponding to user emotion S2.
Then, the training samples corresponding to user emotion S1 and user emotion S2 are mapped into a high-dimensional space, a hyperplane capable of completely separating the two types of elements (elements with different y) is found in that space, and the parameter values corresponding to the hyperplane are determined as the weight parameter in each dimension and the bias parameter.
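The per-node classifier f(x) = sign(w·x + b) can be sketched as follows. The patent's hyperplane search is SVM-like; here a plain perceptron update stands in for it on linearly separable toy samples, which is an assumption for illustration only.

```python
import numpy as np

# Sketch of one emotion recognition model f(x) = sign(w.x + b), with a simple
# perceptron standing in for the hyperplane search on separable toy data.
# Labels are +1 (user emotion S1) and -1 (user emotion S2).

def sign(v):
    return 1 if v >= 0 else -1

def predict(w, b, x):
    return sign(np.dot(w, x) + b)

def fit(samples, labels, epochs=100, lr=0.1):
    w = np.zeros(len(samples[0]))
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if predict(w, b, x) != y:            # misclassified -> nudge the hyperplane
                w = w + lr * y * np.asarray(x)
                b = b + lr * y
    return w, b

# Toy feature-vector samples for emotions S1 (y = +1) and S2 (y = -1)
X = [[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]]
y = [1, 1, -1, -1]
w, b = fit(X, y)
```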
Recognizing the user emotion through the multi-layer tree structure keeps the recognition logic simple. Compared with running every emotion recognition model once to determine the recognition result, the tree structure only needs to run part of the emotion recognition models to obtain the result, which improves operation efficiency when there are many user emotion types to be recognized.
In other possible implementations, the user emotion classification model may also be a model that classifies the feature vector corresponding to the brainwave feedback information based on other structures or recognition logic. Specifically, the user emotion classification model may be a fully-connected classification model, for example a multi-layer perceptron (MLP) classification model; or a classification model based on a convolutional neural network, such as VGG; or a classification model based on a nearest-neighbor algorithm, for example a k-nearest neighbor (KNN) classification model. The model is not limited to the examples herein.
In the embodiment of the application, the feature extraction is carried out on the brain wave feedback information to obtain the feature vector for feeding back the brain wave features, and various implementation modes can be provided.
In a possible implementation manner, a first feature vector, a second feature vector and a third feature vector may be respectively determined according to the brain wave feedback information, where the first feature vector is used to represent energy distribution of the brain wave feedback information, the second feature vector is used to represent complexity of the brain wave feedback information, and the third feature vector is used to represent fractal features of the brain wave feedback information; and finally, splicing the first eigenvector, the second eigenvector and the third eigenvector to obtain the eigenvector corresponding to the brainwave feedback information. It can be understood that the vector dimension of the feature vector corresponding to the brain wave feedback information is equal to the sum of the vector dimensions of the first feature vector, the second feature vector and the third feature vector.
In some possible embodiments, wavelet transformation and reconstruction can be performed on the brainwave feedback information to obtain the wavelet decomposition coefficients and the four rhythm waves (delta waves, theta waves, alpha waves and beta waves) of the brainwave signal; wavelet energy and wavelet entropy are calculated according to the wavelet decomposition coefficients and determined as the first feature vector; the approximate entropies of the four rhythm waves are calculated and determined as the second feature vector; and the Hurst exponents of the four rhythm waves are calculated and determined as the third feature vector.
In other possible embodiments, the energy feature of the brain wave feedback information may also be calculated through discrete fourier transform to obtain a first feature vector; calculating the sample entropy of the brain wave feedback information to obtain a second feature vector; and calculating the fractal characteristics of the brain wave feedback information through a Higuchi algorithm to obtain a third characteristic vector.
In still other possible embodiments, the first feature vector may be a combination of the above wavelet energy, wavelet entropy, and energy features; the second feature vector may be a combination of the above approximate entropy and sample entropy; and the third feature vector may be a combination of the above Hurst exponents and fractal features. That is, wavelet transformation and reconstruction are performed on the brainwave feedback information to obtain the wavelet decomposition coefficients and the four rhythm waves of the brainwave signal; the energy features of the brainwave feedback information are calculated through discrete Fourier transform, the wavelet energy and wavelet entropy are calculated according to the wavelet decomposition coefficients, and these are together determined as the first feature vector; the sample entropy of the brainwave feedback information and the approximate entropies of the four rhythm waves are calculated and together determined as the second feature vector; and the fractal features of the brainwave feedback information are calculated through the Higuchi algorithm, the Hurst exponents of the four rhythm waves are calculated, and these are together determined as the third feature vector.
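One piece of the energy-distribution feature can be sketched concretely: band energies of the four rhythm waves computed with a discrete Fourier transform. The band edges are conventional EEG ranges and the synthetic test signal is an illustrative assumption, not data from the patent.

```python
import numpy as np

# Sketch of an energy-distribution feature: per-band spectral energy of the
# four rhythm waves (delta, theta, alpha, beta), computed with a DFT.

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_energies(signal, fs):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power per frequency bin
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS.values()])

fs = 128                                   # sampling rate in Hz (illustrative)
t = np.arange(fs * 2) / fs                 # two seconds of signal
signal = np.sin(2 * np.pi * 10 * t)        # pure 10 Hz tone -> alpha band
features = band_energies(signal, fs)       # 4-dimensional energy feature
```

Concatenating such per-dimension features (energy, entropy, fractal) yields the spliced feature vector described above.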
The first, second and third feature vectors of the embodiments of the present application are not limited to those listed above. By extracting features of the brain wave feedback information from multiple dimensions and combining them, the user emotion can be determined more reliably, improving the accuracy of emotion recognition. It should be understood that the more feature factors and feature extraction methods considered in the feature extraction process, the more dimensions the extracted feature vectors have, and the more accurate the recognition can be.
Optionally, after the first, second and third feature vectors are determined, they may further be fused to obtain the feature vector corresponding to the brain wave feedback information. The correlation among the three feature vectors may be analyzed by a Principal Component Analysis (PCA) algorithm, a Singular Value Decomposition (SVD) algorithm or the like, feature fusion may be performed in a dimension-reducing manner, and the fused vector taken as the feature vector corresponding to the brain wave feedback information. It can be understood that the dimension of the fused feature vector is smaller than the sum of the dimensions of the first, second and third feature vectors. Feature fusion thus extracts a vector that reflects the brain wave features to the greatest extent while reducing the dimension of the feature vector corresponding to the brain wave feedback information; on the premise that recognition accuracy is guaranteed, the reduced dimension lowers the complexity of subsequent emotion recognition calculation and improves emotion recognition efficiency.
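To illustrate the dimension-reducing fusion, the following sketch implements a tiny PCA by power iteration in pure Python; the sample data are invented for the example, and a production system would use a linear-algebra library rather than this hand-rolled routine:

```python
def pca_project(rows, k=1, iters=200):
    """Project feature rows (samples x features) onto the top-k
    principal components, found by power iteration with deflation
    on the covariance matrix."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    # covariance matrix (d x d) of the centred data
    cov = [[sum(x[s][i] * x[s][j] for s in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]

    def matvec(m, v):
        return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

    comps = []
    for _ in range(k):
        v = [1.0] * d
        for _ in range(iters):
            w = matvec(cov, v)
            norm = sum(c * c for c in w) ** 0.5 or 1.0
            v = [c / norm for c in w]
        comps.append(v)
        # deflate: remove the found component so the next one can emerge
        lam = sum(v[i] * matvec(cov, v)[i] for i in range(d))
        cov = [[cov[i][j] - lam * v[i] * v[j] for j in range(d)] for i in range(d)]
    return [[sum(a * b for a, b in zip(xr, comp)) for comp in comps] for xr in x]
```

Concatenating the first, second and third feature vectors into one row per sample and projecting onto the top components yields the fused, lower-dimensional feature vector described above.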
S305, adding the emotion indication information of the user into the prediction operation identification set.
S306, sending the second virtual reality game picture to the virtual reality equipment, and sending the prediction operation identification set to the cloud server.
Here, after the prediction operation identifier set is sent to the cloud server, the cloud server generates the third virtual reality picture according to the set; for that generation, reference may be made to the description of step S203 above. In the process of generating the third virtual reality picture, the cloud server adjusts the image quality parameters of the third virtual reality game picture according to the user emotion indication information, so that the adjusted picture matches the user emotion. Specifically, the image quality parameters may refer to parameters such as the resolution, brightness and color saturation of the third virtual reality game picture. For example, if the cloud server determines from the user emotion indication information that the user emotion is worried, the color saturation of the third virtual reality game picture is increased to relieve the worry; or, if the cloud server determines that the user emotion is angry, the resolution of the third virtual reality game picture is increased. The matching relationship between the adjusted image quality parameters and the user emotion can be set according to the actual application of the virtual reality game, and the present application is not limited thereto.
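The emotion-to-image-quality matching can be pictured as a simple lookup. The emotion names, parameter names and deltas below are hypothetical, since the application leaves the matching relationship to the game's configuration:

```python
# Hypothetical emotion-to-quality rules; a real game would configure these.
QUALITY_RULES = {
    "worried": {"color_saturation": 0.2},   # soothe by enriching color
    "angry":   {"resolution_scale": 0.25},  # sharpen the picture
}

def adjust_quality(params, emotion):
    """Return a copy of the picture's image quality parameters adjusted
    for the recognised user emotion (unknown emotions: no change)."""
    adjusted = dict(params)
    for key, delta in QUALITY_RULES.get(emotion, {}).items():
        adjusted[key] = adjusted.get(key, 0.0) + delta
    return adjusted
```

Returning a copy keeps the baseline parameters intact, so the next picture can be adjusted from the same starting point if the recognised emotion changes.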
S307, after receiving the third virtual reality game picture sent by the cloud server, storing the third virtual reality game picture in the local picture set.
In the above technical scheme, the user's emotion toward the VR game picture running on the VR device is analyzed according to the brain wave feedback information for that picture, and indication information indicating the user emotion is carried in the prediction operation identifier set that instructs the cloud server to generate the VR game picture. The cloud server can therefore adjust the image quality parameters when generating the VR game picture so that they match the user emotion; in the subsequent VR game process the picture quality accords with the user's emotion, which improves the experience of using the VR game.
Optionally, after the user emotion is obtained through the analysis in steps S303 to S304 above, and before the second virtual reality game picture is sent to the virtual reality device, the image quality parameters of the second virtual reality game picture may also be adjusted according to the user emotion so that they match the user emotion, thereby improving the experience of using the VR game.
In some possible embodiments, the VR device may further collect brainwave information of the user, analyze emotion of the user based on the brainwave information, and adjust operation parameters of a game screen of the VR game based on the emotion of the user. Referring to fig. 6, fig. 6 is a schematic flowchart of a method for generating a parameter adjustment instruction according to an embodiment of the present application, where the method may be executed on the basis of the above-mentioned method embodiment of fig. 2 or fig. 3, and includes the following steps:
S401, receiving brain wave feedback information and user visual angle information for the first virtual reality game picture, collected by the virtual reality device.
Here, for the description of the electroencephalogram feedback information, reference may be made to the description of step S304, and details are not repeated here.
The user visual angle information refers to information collected by the virtual reality device that reflects the user's current visual angle, and corresponds to the picture content on which the user's eyes focus when watching the first virtual reality game picture. In a specific implementation, the virtual reality device may collect the user visual angle information through motion tracking sensors (such as a gyroscope and a speed sensor).
S402, analyzing the emotion of the user corresponding to the first virtual reality game picture based on the brain wave feedback information.
Here, the specific implementation manner of steps S401 to S402 can refer to the description of steps S303 to S304, which is not described herein again.
And S403, generating a parameter adjusting instruction according to the user emotion and the user visual angle information.
In the embodiments of the present application, generating the parameter adjustment instruction according to the user emotion and the user visual angle information specifically means: obtaining, from the user emotion, the user's feedback on the virtual reality game picture displayed at the current visual angle; determining from this the user's comfort-level feedback on the picture at that angle; determining the operation parameters of the virtual reality game picture that match the comfort-level feedback; and generating the parameter adjustment instruction corresponding to those operation parameters, so that the operation parameters adapt to the comfort-level feedback. The operation parameters of the virtual reality game picture may refer to display angle information, frame rate and the like. For example, if the user emotion is vertigo, a parameter adjustment instruction instructing an increase of the frame rate may be generated. The association between the user emotion, the user visual angle information and the parameter adjustment can be set according to the actual situation of the virtual reality game, and the present application is not limited thereto.
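A sketch of step S403 under stated assumptions: the emotion names, field names and frame-rate values below are invented for illustration, since the application leaves the association table to the particular game:

```python
def make_adjustment_instruction(emotion, view_angle_deg):
    """Build a hypothetical parameter-adjustment instruction from the
    recognised emotion and the current viewing angle; the real
    association table is game-specific."""
    instruction = {"view_angle_deg": view_angle_deg}
    if emotion == "dizzy":
        # vertigo feedback: raise the frame rate, as in the example above
        instruction["frame_rate"] = 90
    else:
        instruction["frame_rate"] = 72
    return instruction
```

The instruction carries the visual angle alongside the new operation parameters so the virtual reality device can apply the change to the picture region the user is actually watching.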
And S404, sending the parameter adjusting instruction to the virtual reality equipment.
Here, after the parameter adjustment instruction is sent to the virtual reality device, the virtual reality device adjusts the operation parameters of the virtual game screen according to the parameter adjustment instruction.
In this technical scheme, the user's emotion toward the VR game picture running on the VR device is analyzed according to the brain wave feedback information for that picture, and the virtual reality device is instructed to adjust the running parameters of the VR game picture according to the user emotion and the user visual angle, so that the VR game picture adapts to the user's emotion and viewing angle and brings a better user experience.
The method of the present application is described above, and in order to better carry out the method of the present application, the apparatus of the present application is described next.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an edge computing device according to an embodiment of the present disclosure, where the edge computing device may be the edge computing device 102 in fig. 1, and as shown in the figure, the edge computing device 50 includes:
a first receiving module 501, configured to receive a first image obtaining request sent by a virtual reality device, where the first image obtaining request is a request sent by the virtual reality device after a first user operation on a first virtual reality game image running on the virtual reality device is acquired, and the first image obtaining request is used to request to obtain a second virtual reality game image corresponding to the first user operation;
a picture obtaining module 502, configured to obtain a second virtual reality game picture from the local picture set according to the first picture obtaining request, and determine a prediction operation identifier set according to the second virtual reality game picture, where the prediction operation identifier set is used to indicate at least one next user operation corresponding to the first user operation;
a sending module 503, configured to send the second virtual reality game picture to the virtual reality device, so that the virtual reality device responds to the first user operation, runs the second virtual reality game picture on the virtual reality device, and sends the prediction operation identifier set to the cloud server, so that the cloud server generates a third virtual reality game picture according to the prediction operation identifier set, where the third virtual reality game picture is a virtual reality game picture corresponding to the user operation indicated by the prediction operation identifier set;
the picture saving module 504 is configured to, after receiving the third virtual reality game picture sent by the cloud server, save the third virtual reality game picture to the local picture set.
In one possible design, the apparatus 50 further includes: a second receiving module 505, configured to receive brainwave feedback information and user perspective information for the first virtual reality game picture, which are acquired by the virtual reality device; the emotion analysis module 506 is used for analyzing the emotion of the user corresponding to the first virtual reality game picture based on the brain wave feedback information; the instruction generating module 507 is used for generating a parameter adjusting instruction according to the user emotion and the user visual angle information; the sending module 503 is further configured to send the parameter adjustment instruction to the virtual reality device, so that the virtual reality device adjusts the operation parameter of the virtual reality game picture running on the virtual reality device.
In one possible design, the apparatus further includes: a second receiving module 505, configured to receive brain wave feedback information for the first virtual reality game picture, collected by the virtual reality device; the emotion analysis module 506 is configured to analyze the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information to obtain user emotion indication information; the sending module 503 is specifically configured to: after the user emotion indication information is added to the prediction operation identifier set, send the prediction operation identifier set to the cloud server, so that when the cloud server generates the third virtual reality game picture according to the set, the image quality parameters of the third virtual reality game picture are adjusted and the adjusted picture matches the user emotion.
In one possible design, the emotion analysis module 506 is specifically configured to: extract features of the brain wave feedback information to obtain the feature vector corresponding to the brain wave feedback information, and input that feature vector into a preset brain wave analysis model to obtain the emotion recognition result of the model. The brain wave analysis model comprises (m-1) layers, where m is the total number of user emotions the model can recognize, and each layer of the brain wave analysis model consists of a different number of emotion recognition models. The ith layer has i emotion recognition models, and each emotion recognition model is used to recognize two emotions. The first emotion recognition model of the ith layer is connected with the second emotion recognition model and the third emotion recognition model of the (i+1)th layer, where one user emotion recognizable by the second emotion recognition model is the same as one user emotion recognizable by the first emotion recognition model, one user emotion recognizable by the third emotion recognition model is the same as the other user emotion recognizable by the first emotion recognition model, and i is more than or equal to 1 and less than or equal to m. The emotion recognition result of the (i+1)th layer is associated with that of the ith layer, and the result of the (m-1)th layer is the emotion recognition result of the brain wave analysis model. The user emotion corresponding to the first virtual reality game picture is then determined according to the emotion recognition result of the brain wave analysis model.
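The layered model described above, with i pairwise recognisers on layer i and (m-1) layers for m emotions, has the shape of a decision directed acyclic graph of one-vs-one classifiers. Its evaluation can be sketched as an elimination walk; the `pairwise` callable below is a stand-in for a trained two-class recogniser, and the emotion names in the usage test are invented:

```python
def ddag_classify(emotions, pairwise):
    """Walk a decision DAG of one-vs-one recognisers: each node compares
    the first and last remaining emotions and eliminates the loser, so
    after (m - 1) decisions (one per layer) a single emotion is left.
    pairwise(a, b) must return whichever of a, b it judges more likely."""
    remaining = list(emotions)
    while len(remaining) > 1:                 # one layer per elimination
        winner = pairwise(remaining[0], remaining[-1])
        if winner == remaining[0]:
            remaining.pop()                   # last candidate eliminated
        else:
            remaining.pop(0)                  # first candidate eliminated
    return remaining[0]
```

Each node only needs to distinguish two emotions, which matches the description that every emotion recognition model in the layered structure recognizes exactly two emotions, and the result of one layer determines which model of the next layer is consulted.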
In one possible design, the emotion analysis module 506 is specifically configured to: determine a first feature vector, a second feature vector and a third feature vector from the brain wave feedback information, where the first feature vector represents the energy distribution of the brain wave feedback information, the second feature vector represents its complexity, and the third feature vector represents its fractal features; and obtain the feature vector corresponding to the brain wave feedback information from the first, second and third feature vectors.
In one possible design, the first picture acquisition request includes a first picture identifier and a first operation identifier, where the first picture identifier is the picture identifier of the first virtual reality game picture and the first operation identifier is the operation identifier of the first user operation. The picture obtaining module 502 is specifically configured to: determine a second picture identifier according to the first picture identifier and the first operation identifier, where the second picture identifier is the picture identifier of the second virtual reality game picture; acquire, according to the second picture identifier, a plurality of picture materials corresponding to the second virtual reality game picture from the local picture set; and render a three-dimensional picture based on the plurality of picture materials to obtain the second virtual reality game picture.
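The identifier lookup performed by the picture obtaining module can be pictured as follows; the picture identifiers, operation identifier and material names are hypothetical, and the actual three-dimensional rendering step is omitted:

```python
# Hypothetical mapping from (current picture, operation) to the next
# picture identifier, and from picture identifier to its materials.
NEXT_PICTURE = {
    ("p1", "op_jump"): "p2",
}
MATERIALS = {
    "p2": ["sky.mesh", "bridge.mesh", "player.mesh"],
}

def fetch_next_picture(picture_id, operation_id):
    """Resolve the second picture identifier from the first picture
    identifier and the operation identifier, then gather the picture
    materials from which the 3-D picture would be rendered."""
    next_id = NEXT_PICTURE[(picture_id, operation_id)]
    return next_id, MATERIALS[next_id]
```

Keying the lookup on the (picture, operation) pair is what lets the edge computing device answer a picture acquisition request from its local picture set without a round trip to the cloud server.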
It should be noted that, for the content that is not mentioned in the embodiment corresponding to fig. 7, reference may be made to the description of the method embodiment, and details are not described here again.
After receiving the first picture acquisition request, sent by the virtual reality device based on a user operation, for acquiring a virtual reality game picture, the edge computing device acquires the second virtual reality game picture from the local picture set according to the request and sends it to the virtual reality device, so that the virtual reality device runs the second virtual reality game picture, realizing the function of displaying the virtual reality game picture corresponding to the user operation. After receiving the first picture acquisition request, the edge computing device further determines, according to the request, a prediction operation identifier set indicating the next user operation following the first user operation and sends it to the cloud server, so that the cloud server can generate the third virtual reality game picture according to the set; the received virtual reality game picture is then stored in the local picture set, realizing the pre-generation and storage of virtual reality game pictures.
On the one hand, because the virtual reality device only needs to display the virtual reality game picture and does not need to render it, the performance requirement on the virtual reality device is reduced, which can lower the cost of using VR games at home. On the other hand, because the game picture is generated in advance by the cloud server and stored by the edge computing device, the interaction between the edge computing device and the VR device together with the pre-generation of the game picture reduce both the transmission delay and the rendering delay of the game picture. The reduction of these two delays shortens the time from the VR device acquiring a user operation to it displaying the corresponding VR game picture, which avoids the impression of game stutter and ensures the user experience when using VR games.
Referring to fig. 8, fig. 8 is a schematic structural diagram of another edge computing device provided in this embodiment of the present application, where the edge computing device may be the edge computing device 102 in fig. 1, and as shown in fig. 8, the edge computing device 60 includes a processor 601, a memory 602, and a communication interface 603. The processor 601 is connected to the memory 602 and the communication interface 603, for example, the processor 601 may be connected to the memory 602 and the communication interface 603 through a bus.
The processor 601 is configured to enable the device 60 to perform corresponding functions in the methods of fig. 2-6. The processor 601 may be a Central Processing Unit (CPU), a Network Processor (NP), a hardware chip, or any combination thereof. The hardware chip may be an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The memory 602 is used for storing program codes and the like. The memory 602 may include Volatile Memory (VM), such as Random Access Memory (RAM); the memory 602 may also include a non-volatile memory (NVM), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD); the memory 602 may also comprise a combination of memories of the kind described above.
The communication interface 603 is used for performing communication-related functions such as transmitting data, receiving data, and the like in cooperation with the processor 601.
The processor 601 may call program code stored in memory to perform the following operations:
receiving a first picture acquisition request sent by virtual reality equipment, wherein the first picture acquisition request is a request sent by the virtual reality equipment after a first user operation aiming at a first virtual reality game picture running on the virtual reality equipment is acquired, and the first picture acquisition request is used for requesting to acquire a second virtual reality game picture corresponding to the first user operation; acquiring a second virtual reality game picture from the local picture set according to the first picture acquisition request, and determining a prediction operation identifier set according to the second virtual reality game picture, wherein the prediction operation identifier set is used for indicating at least one next user operation corresponding to the first user operation; sending the second virtual reality game picture to the virtual reality equipment so that the virtual reality equipment responds to the first user operation, running the second virtual reality game picture on the virtual reality equipment, and sending the prediction operation identification set to the cloud server so that the cloud server generates a third virtual reality game picture according to the prediction operation identification set, wherein the third virtual reality game picture is a virtual reality game picture corresponding to the user operation indicated by the prediction operation identification set; and after receiving a third virtual reality game picture sent by the cloud server, storing the third virtual reality game picture to a local picture set.
It should be noted that for the implementation of each operation, reference may also be made to the corresponding description of the above method embodiments; the processor 601 may also cooperate with other functional hardware to perform other operations in the above method embodiments.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program. The computer program comprises program instructions that, when executed by a computer (for example, by the processor 601 described above, where the computer may be part of the above-mentioned edge computing device), cause the computer to perform the methods according to the foregoing embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present application and is not intended to limit its scope; the present application is therefore not limited thereto, and equivalent variations and modifications remain within the scope of the present application.

Claims (10)

1. A virtual reality game picture processing method is applied to an edge computing device and comprises the following steps:
receiving a first picture acquisition request sent by virtual reality equipment; the first picture acquisition request is a request sent by the virtual reality equipment after a first user operation aiming at a first virtual reality game picture running on the virtual reality equipment is acquired; the first picture acquiring request is used for requesting to acquire a second virtual reality game picture corresponding to the first user operation;
acquiring the second virtual reality game picture from a local picture set according to the first picture acquisition request, and determining a prediction operation identification set according to the second virtual reality game picture, wherein the prediction operation identification set is used for indicating at least one next user operation corresponding to the first user operation;
sending the second virtual reality game picture to the virtual reality equipment, so that the virtual reality equipment responds to the first user operation, running the second virtual reality game picture on the virtual reality equipment, and sending the prediction operation identification set to a cloud server, so that the cloud server generates a third virtual reality game picture according to the prediction operation identification set, wherein the third virtual reality game picture is a virtual reality game picture corresponding to the user operation indicated by the prediction operation identification set;
and after receiving the third virtual reality game picture sent by the cloud server, saving the third virtual reality game picture to the local picture set.
2. The method of claim 1, further comprising:
receiving brain wave feedback information and user visual angle information which are acquired by the virtual reality equipment and aim at the first virtual reality game picture;
analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information;
generating a parameter adjusting instruction according to the user emotion and the user visual angle information;
and sending the parameter adjusting instruction to the virtual reality equipment so that the virtual reality equipment adjusts the operation parameters of the virtual reality game picture running on the virtual reality equipment.
3. The method of claim 1, further comprising:
receiving brain wave feedback information aiming at the first virtual reality game picture, which is acquired by the virtual reality equipment;
analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information to obtain user emotion indication information;
the sending the prediction operation identification set to a cloud server includes:
after the user emotion indication information is added into the prediction operation identification set, the prediction operation identification set is sent to the cloud server, so that when the cloud server generates a third virtual reality game picture according to the prediction operation identification set, image quality parameters of the third virtual reality game picture are adjusted, and the adjusted third virtual reality game picture is matched with the user emotion.
4. The method according to claim 2 or 3, wherein the analyzing the user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information comprises:
extracting features of the brain wave feedback information to obtain a feature vector corresponding to the brain wave feedback information;
inputting the characteristic vector into a preset brain wave analysis model to obtain an emotion recognition result of the brain wave analysis model; the brain wave analysis model comprises (m-1) layers, wherein m is the total number of user emotions which can be recognized by the brain wave analysis model; each layer of the brain wave analysis model consists of different numbers of emotion recognition models; the ith layer of the brain wave analysis model is provided with i emotion recognition models, each emotion recognition model is used for recognizing two user emotions, the first emotion recognition model of the ith layer is connected with the second emotion recognition model of the (i +1) th layer and the third emotion recognition model of the (i +1) th layer, wherein one user emotion recognized by the second emotion recognition model is the same as one user emotion recognized by the first emotion recognition model, and one user emotion recognized by the third emotion recognition model is the same as the other user emotion recognized by the first emotion recognition model; each layer of the brain wave analysis model has an emotion recognition result, the emotion recognition result of the (i +1) th layer is associated with the emotion recognition result of the i th layer, and the emotion recognition result of the (m-1) th layer is the emotion recognition result of the brain wave analysis model; i is more than or equal to 1 and less than or equal to m;
and determining the user emotion corresponding to the first virtual reality game picture according to the emotion recognition result of the brain wave analysis model.
5. The method according to claim 4, wherein the performing feature extraction on the brain wave feedback information to obtain a feature vector corresponding to the brain wave feedback information includes:
respectively determining a first feature vector, a second feature vector and a third feature vector according to the brain wave feedback information, wherein the first feature vector is used for representing energy distribution of the brain wave feedback information, the second feature vector is used for representing complexity of the brain wave feedback information, and the third feature vector is used for representing fractal features of the brain wave feedback information;
and obtaining a feature vector corresponding to the brainwave feedback information according to the first feature vector, the second feature vector and the third feature vector.
6. The method according to claim 1, wherein the first screen acquisition request includes a first screen identifier and a first operation identifier, the first screen identifier is a screen identifier of the first virtual reality game screen, and the first operation identifier is an operation identifier of the first user operation;
the acquiring the second virtual reality game picture from the local picture set according to the first picture acquiring request includes:
determining a second picture identifier according to the first picture identifier and the first operation identifier, wherein the second picture identifier is a picture identifier of the second virtual reality game picture;
acquiring a plurality of picture materials corresponding to the second virtual reality game picture from the local picture set according to the second picture identification;
and rendering and generating a three-dimensional picture based on the plurality of picture materials to obtain the second virtual reality game picture.
7. An edge computing device, comprising:
the device comprises a first receiving module, a second receiving module and a display module, wherein the first receiving module is used for receiving a first picture acquiring request sent by virtual reality equipment, the first picture acquiring request is a request sent by the virtual reality equipment after the first picture acquiring request is acquired and is aimed at a first virtual reality game picture running on the virtual reality equipment, and the first picture acquiring request is used for requesting to acquire a second virtual reality game picture corresponding to the first user operation;
a picture obtaining module, configured to obtain the second virtual reality game picture from a local picture set according to the first picture obtaining request, and determine a prediction operation identifier set according to the second virtual reality game picture, where the prediction operation identifier set is used to indicate at least one next user operation corresponding to the first user operation;
a sending module, configured to send the second virtual reality game picture to the virtual reality device, so that the virtual reality device responds to the first user operation, runs the second virtual reality game picture on the virtual reality device, and sends the prediction operation identifier set to a cloud server, so that the cloud server generates a third virtual reality game picture according to the prediction operation identifier set, where the third virtual reality game picture is a virtual reality game picture corresponding to the user operation indicated by the prediction operation identifier set;
and the picture storage module is used for storing the third virtual reality game picture to the local picture set after receiving the third virtual reality game picture sent by the cloud server.
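The modules of claim 7 describe a serve-then-prefetch loop: answer the current request from the local picture set, predict the user's likely next operations, and have the cloud pre-render their pictures so they are already local when requested. A compact sketch; the class and callback names are hypothetical, since the claim specifies behavior rather than an API:

```python
class EdgePictureCache:
    """Illustrative flow of claim 7: serve locally, predict the next
    user operations, and prefetch their pictures from a cloud renderer."""

    def __init__(self, local_picture_set, predictor, cloud_render):
        self.local = local_picture_set    # picture/op id -> rendered picture
        self.predict = predictor          # picture -> set of next op ids
        self.cloud_render = cloud_render  # op ids -> {op_id: picture}

    def handle_request(self, second_picture_id):
        # Serve the requested (second) picture from the local set.
        picture = self.local[second_picture_id]
        # Predict the operations the user is likely to perform next ...
        predicted_ops = self.predict(picture)
        # ... have the cloud pre-render their (third) pictures, and store
        # them locally so the next request is also a cache hit.
        for op_id, third_picture in self.cloud_render(predicted_ops).items():
            self.local[op_id] = third_picture
        return picture

cache = EdgePictureCache(
    {"p2": "rendered-p2"},
    predictor=lambda pic: {"op_a"},
    cloud_render=lambda ops: {op: f"rendered-after-{op}" for op in ops},
)
served = cache.handle_request("p2")
```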
8. The apparatus of claim 7, further comprising:
a second receiving module, configured to receive brain wave feedback information and user visual angle information, acquired by the virtual reality device, for the first virtual reality game picture;
an emotion analysis module, configured to analyze a user emotion corresponding to the first virtual reality game picture based on the brain wave feedback information;
an instruction generating module, configured to generate a parameter adjustment instruction according to the user emotion and the user visual angle information;
the sending module is further configured to send the parameter adjustment instruction to the virtual reality device, so that the virtual reality device adjusts an operation parameter of a virtual reality game picture running on the virtual reality device.
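Claim 8 only requires that some adjustment instruction be derived from the inferred emotion and the view-angle information. A toy mapping in that spirit; the emotion labels, thresholds, and parameter names (`pace`, `brightness`, `field_of_view`) are invented for illustration:

```python
def make_parameter_adjustment(user_emotion, view_angle_deg):
    """Derive a hypothetical parameter-adjustment instruction from the
    user's emotion and view angle, as an illustration of claim 8."""
    adjustment = {}
    if user_emotion == "anxious":
        adjustment["pace"] = "slower"        # ease the game rhythm
        adjustment["brightness"] = "higher"  # soften dark scenes
    elif user_emotion == "bored":
        adjustment["pace"] = "faster"
    # A narrow gaze range may suggest tunnel vision under stress.
    if abs(view_angle_deg) < 15:
        adjustment["field_of_view"] = "wider"
    return adjustment

adj = make_parameter_adjustment("anxious", 10)
```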
9. An edge computing device, comprising a memory and one or more processors configured to execute one or more computer programs stored in the memory, wherein the one or more processors, when executing the one or more computer programs, cause the device to implement the method according to any one of claims 1-6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-6.
CN202010453997.1A 2020-05-26 2020-05-26 Virtual reality game picture processing method and related equipment Pending CN111821688A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010453997.1A CN111821688A (en) 2020-05-26 2020-05-26 Virtual reality game picture processing method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010453997.1A CN111821688A (en) 2020-05-26 2020-05-26 Virtual reality game picture processing method and related equipment

Publications (1)

Publication Number Publication Date
CN111821688A true CN111821688A (en) 2020-10-27

Family

ID=72913971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010453997.1A Pending CN111821688A (en) 2020-05-26 2020-05-26 Virtual reality game picture processing method and related equipment

Country Status (1)

Country Link
CN (1) CN111821688A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134416A (en) * 2021-03-22 2022-09-30 中国联合网络通信集团有限公司 Virtual reality service processing system and method

Similar Documents

Publication Publication Date Title
US20220335711A1 (en) Method for generating pre-trained model, electronic device and storage medium
WO2019242222A1 (en) Method and device for use in generating information
US20190392587A1 (en) System for predicting articulated object feature location
JP6994588B2 (en) Face feature extraction model training method, face feature extraction method, equipment, equipment and storage medium
KR102056806B1 (en) Terminal and server providing a video call service
JP2021507439A (en) Image processing methods and devices, electronic devices, storage media and program products
CN111598168B (en) Image classification method, device, computer equipment and medium
CN113240778B (en) Method, device, electronic equipment and storage medium for generating virtual image
US20220198836A1 (en) Gesture recognition method, electronic device, computer-readable storage medium, and chip
CN112381707B (en) Image generation method, device, equipment and storage medium
CN114187624B (en) Image generation method, device, electronic equipment and storage medium
CN113050860B (en) Control identification method and related device
CN108875931A (en) Neural metwork training and image processing method, device, system
JP2024511171A (en) Action recognition method and device
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
CN116228867B (en) Pose determination method, pose determination device, electronic equipment and medium
US20220100531A1 (en) Hyperparameter tuning method, program trial system, and computer program
CN114332553A (en) Image processing method, device, equipment and storage medium
CN112562045B (en) Method, apparatus, device and storage medium for generating model and generating 3D animation
CN111821688A (en) Virtual reality game picture processing method and related equipment
CN112365957A (en) Psychological treatment system based on virtual reality
CN111821689A (en) Virtual reality game system based on cloud computing technology
CN111488476B (en) Image pushing method, model training method and corresponding devices
CN115705689A (en) Image recognition method, device, equipment and storage medium
CN114332993A (en) Face recognition method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201027
