CN115225926A - Game live broadcast picture processing method and device, computer equipment and storage medium - Google Patents

Game live broadcast picture processing method and device, computer equipment and storage medium

Info

Publication number
CN115225926A
Authority
CN
China
Prior art keywords
game
virtual character
picture
virtual
viewpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210744739.8A
Other languages
Chinese (zh)
Other versions
CN115225926B (en)
Inventor
莫筱羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd filed Critical Guangzhou Boguan Information Technology Co Ltd
Priority to CN202210744739.8A
Publication of CN115225926A
Application granted
Publication of CN115225926B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85Providing additional services to players
    • A63F13/86Watching games played by other players
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781Games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a live game picture processing method and device, a computer device, and a storage medium. The method is applied to a client or a server, where the client carries a target game or a live platform of the target game, and the server is a server corresponding to the target game or the live platform, and the method comprises the following steps: acquiring first data generated when a user watches a live broadcast of the target game; acquiring second data generated by a virtual character of the target game in the live broadcast; matching the first data with the second data to obtain a matching result; and if the matching result is a first matching result, switching the live game picture to a game scene picture of at least one key virtual character, where the key virtual character is the virtual character corresponding to the first matching result. The method can flexibly and accurately adjust the display content of the live game picture according to the viewing preference of the user.

Description

Game live broadcast picture processing method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of internet, in particular to a live game picture processing method and device, computer equipment and a storage medium.
Background
In a live game scene, for example a live event of an MOBA (multiplayer online battle arena) game, each side typically has key virtual characters worth paying attention to before a battle breaks out or while the characters are in a team fight, for example, virtual characters with higher damage output and more crowd control.
However, the switching timing and position of the live game pictures are decided by the on-site director, which is strongly subjective and does not take most users' viewing preferences into account; as a result, some users may have difficulty finding the key virtual characters in the director's pictures, and may even miss important live moments.
Disclosure of Invention
The embodiment of the application provides a live game picture processing method and device, computer equipment and a storage medium, which can efficiently and accurately switch a live game picture to a key virtual character so as to improve the experience of watching the live game by a user.
The embodiment of the application provides a picture processing method for live game, which is applied to a client or a server, wherein the client carries a target game or a live platform of the target game, and the server is a server corresponding to the target game or the live platform to which the target game belongs, and the method comprises the following steps:
acquiring first data generated when a user watches live game of the target game;
acquiring second data generated by the virtual character of the target game in the live game;
matching the first data with the second data to obtain a matching result;
if the matching result is a first matching result, switching the live game picture into a game scene picture of at least one key virtual character; the key virtual role is a virtual role corresponding to the first matching result.
The embodiment of the present application further provides a picture processing device for a live game, which is applied to a client or a server, where the client carries a target game or a live platform of the target game, and the server is a server corresponding to the target game or the live platform, and the device includes:
the first data acquisition module is used for acquiring first data generated when a user watches live game of the target game;
the second data acquisition module is used for acquiring second data generated by the virtual character of the target game in the live game;
the matching result acquisition module is used for matching the first data with the second data to obtain a matching result;
the picture switching module is used for switching the picture of the live game into a game scene picture of at least one key virtual character if the matching result is a first matching result; the key virtual role is a virtual role corresponding to the first matching result.
Optionally, the first data comprises a first position, and the second data comprises a second position, wherein the first position is a position of a viewpoint of the user in the picture; the second position is the position of the virtual character on the picture;
the matching result obtaining module further comprises:
the matching degree obtaining sub-module is used for obtaining the matching degree of the first position and the second position; wherein the matching degree is used for representing the coincidence degree of the first position and the second position;
and a first matching result obtaining submodule, wherein if the matching degree is greater than a preset threshold, the matching result is the first matching result.
Optionally, the matching degree obtaining sub-module is specifically configured to:
connecting the first positions according to the watching time sequence of the user to obtain a first track;
connecting the second positions according to the time sequence corresponding to the virtual character moving to the second positions to obtain a second track;
and acquiring the length of the overlapped part of the first track and the second track, calculating a first ratio of the length of the overlapped part to the corresponding length of the second track, and recording the first ratio as the matching degree.
Optionally, each of the second positions corresponds to a virtual range; the virtual range is determined according to at least one of the volume, the range and the skill range of the virtual character on the picture;
the matching degree obtaining sub-module is further specifically configured to:
acquiring a plurality of virtual ranges corresponding to the plurality of second positions according to the plurality of second positions;
acquiring the number of first positions in the plurality of virtual ranges in the plurality of first positions, calculating a second ratio between the number and the total number of the plurality of first positions, and recording the second ratio as the matching degree; wherein, for a first position in the plurality of virtual ranges, the first position and the virtual range in which the first position is located occur at the same time.
Optionally, the first data comprises a first position, and the second data comprises a second position, wherein the first position is a position of a viewpoint of the user in the picture; the second position is the position of the virtual character on the picture;
the matching result obtaining module further comprises:
the motion state acquisition sub-module is used for acquiring the current motion state of the virtual role;
the distance obtaining sub-module is used for obtaining the distance between the first position and the second position if the current motion state of the virtual character is static and the static duration exceeds the preset duration;
and the matching result judging submodule is used for judging that the matching result is the first matching result if the distance is smaller than a first preset threshold value.
Optionally, the screen switching module further includes:
the weight obtaining sub-module is used for obtaining the weights of the key virtual roles if the key virtual roles exist;
and the picture position adjusting submodule is used for adjusting the positions of the key virtual characters in the picture based on the weight.
Optionally, the weight obtaining sub-module is specifically configured to:
generating a professional degree corresponding to the user according to portrait information of the user; the professional degree is used for representing the user's degree of understanding of the target game;
generating a first weight of the first data according to the specialty; the magnitude of the first weight is proportional to the magnitude of the professionalism;
for each key virtual character in the plurality of key virtual characters, acquiring a plurality of target viewpoints corresponding to each key virtual character in a target viewpoint set, and acquiring a first sum of viewpoint stay time lengths corresponding to the plurality of target viewpoints, wherein the first sum is the total viewpoint stay time length corresponding to the key virtual character;
and calculating a product value of the first weight and the total stay time of the viewpoint, and taking the product value as the weight of the key virtual character.
Optionally, the apparatus further comprises:
the initial stay duration obtaining submodule is used for obtaining each user's initial viewpoints in the display interface of the live game and the stay duration corresponding to each initial viewpoint;
the target stay time obtaining submodule is used for obtaining a second sum of the stay time of the plurality of overlapped initial viewpoints if the plurality of initial viewpoints are overlapped, and taking the plurality of overlapped initial viewpoints as a new initial viewpoint, wherein the second sum of the stay time is the stay time of the new initial viewpoint;
and the target viewpoint set acquisition submodule is used for acquiring at least one target viewpoint with viewpoint stay time longer than a second preset threshold for the plurality of initial viewpoints, and taking the at least one target viewpoint and the viewpoint stay time of the at least one target viewpoint as the target viewpoint set.
Optionally, the second data includes attribute values of the virtual character, and the apparatus further includes:
the screen splitting instruction obtaining submodule is used for obtaining a preset screen splitting instruction, where the screen splitting instruction is used for controlling the client or the server to generate a sub-picture within the picture, the size of the sub-picture being smaller than that of the picture;
and the sub-picture generation submodule is used for receiving the screen splitting instruction in response to the attribute value of the key virtual character being smaller than a second preset threshold, so as to generate and display a sub-picture that follows the key virtual character.
The embodiment of the application also provides computer equipment, which comprises a processor and a memory, wherein the memory stores a plurality of instructions; the processor loads instructions from the memory to execute the steps in the live game picture processing method according to any one of the embodiments.
An embodiment of the present application further provides a computer-readable storage medium, where a plurality of instructions are stored in the computer-readable storage medium, where the instructions are suitable for being loaded by a processor to perform the steps in the live game screen processing method according to any of the above embodiments:
acquiring first data generated when a user watches live game of the target game;
acquiring second data generated by the virtual character of the target game in the live game;
matching the first data with the second data to obtain a matching result;
if the matching result is a first matching result, switching the live game picture into a game scene picture of at least one key virtual character; the key virtual role is a virtual role corresponding to the first matching result.
Therefore, the user's viewing preference for the virtual characters in the live game is obtained by matching the user's viewing data with the data generated by the virtual characters in the game, so that the live game picture can be accurately adjusted according to that viewing preference.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a system diagram of a picture processing apparatus for live game provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a picture processing method for live game provided in an embodiment of the present application;
fig. 3 is another schematic flowchart of a live game picture processing method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of determining a degree of matching according to an embodiment of the present application;
FIG. 5 is another schematic diagram of determining a degree of matching according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating adjustment of positions of a plurality of key virtual characters in a screen according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating generation of a sub-picture in a live game picture according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a picture processing apparatus for live game provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a live game picture processing method and device, a storage medium and computer equipment. Specifically, the picture processing method for a live game in the embodiment of the present application may be executed by a computer device, where the computer device may be a terminal or a server. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), and the like, and the terminal device may further include a client, which may be a game application client, a browser client carrying a game program, or an instant messaging client, and the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
For example, when the live game screen processing method is operated on a terminal, the terminal device stores a game application program and presents part of game scenes in a game through a display component. The terminal device is used for interacting with a user through a graphical user interface, for example, downloading and installing a game application program through the terminal device and running the game application program. The manner in which the terminal device provides the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including a game screen and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for executing the game, generating the graphical user interface, responding to the operation instructions, and controlling display of the graphical user interface on the touch display screen.
For example, when the live game screen processing method runs on a server, the game can be a cloud game. Cloud gaming refers to a gaming mode based on cloud computing. In the running mode of the cloud game, the running main body of the game application program and the game picture presenting main body are separated, and the storage and the running of the picture processing method of the live game are finished on the cloud game server. The game screen presentation is performed at a cloud game client, which is mainly used for receiving and sending game data and presenting game screens, for example, the cloud game client may be a display device with a data transmission function near a user side, such as a mobile terminal, a television, a computer, a palm computer, a personal digital assistant, and the like, but a terminal device for performing a screen processing method for live game is a cloud game server at the cloud end. When a game is played, a user operates the cloud game client to send an operation instruction to the cloud game server, the cloud game server runs the game according to the operation instruction, data such as game pictures and the like are coded and compressed, the data are returned to the cloud game client through a network, and finally the data are decoded through the cloud game client and the game pictures are output.
Referring to fig. 1, fig. 1 is a schematic diagram of a system for processing a live game screen according to an embodiment of the present disclosure. The system may include at least one terminal 1000, at least one server 2000, at least one database 3000, and a network 4000. The terminal 1000 held by the user can be connected to servers of different games through the network 4000. Terminal 1000 can be any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, terminal 1000 can have one or more multi-touch sensitive screens for sensing and obtaining user input through touch or slide operations performed at multiple points on one or more touch sensitive display screens. In addition, when the system includes a plurality of terminals 1000, a plurality of servers 2000, and a plurality of networks 4000, different terminals 1000 may be connected to each other through different networks 4000 and through different servers 2000. The network 4000 may be a wireless network or a wired network, such as a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, and so on. In addition, different terminals 1000 may be connected to other terminals or a server using their own bluetooth network or hotspot network. For example, a plurality of users may be online through different terminals 1000 to be connected and synchronized with each other through a suitable network to support multiplayer games. In addition, the system may include a plurality of databases 3000, the plurality of databases 3000 being coupled to different servers 2000, and information related to the game environment may be continuously stored in the databases 3000 when different users play the multiplayer game online.
The embodiment of the application provides a live game picture processing method, which can be executed by a terminal or a server. The embodiment of the present application is described by taking an example in which a screen processing method for live game is executed by a terminal. The terminal comprises a display component and a processor, wherein the display component is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the display component. When the user operates the graphical user interface through the display component, the graphical user interface can control the local content of the terminal through responding to the received operation instruction, and can also control the content of the opposite-end server through responding to the received operation instruction. For example, the operation instruction generated by the user acting on the graphical user interface comprises an instruction for starting a game application, and the processor is configured to start the game application after receiving the instruction provided by the user for starting the game application. Further, the processor is configured to render and draw a graphical user interface associated with the game on the touch display screen. A touch display screen is a multi-touch sensitive screen capable of sensing a touch or slide operation performed simultaneously at a plurality of points on the screen. The user uses a finger to perform touch operation on the graphical user interface, and when the graphical user interface detects the touch operation, different virtual objects in the graphical user interface of the game are controlled to perform actions corresponding to the touch operation. For example, the game may be any one of a leisure game, an action game, a role playing game, a strategy game, a sports game, a game for developing intelligence, a First Person Shooter (FPS) game, and the like. Wherein the game may include a virtual scene of the game drawn on a graphical user interface. Further, one or more virtual objects, such as virtual characters, controlled by the user (or player) may be included in the virtual scene of the game. Additionally, one or more obstacles, such as railings, ravines, walls, etc., may also be included in the virtual scene of the game to limit movement of the virtual objects, e.g., to limit movement of one or more objects to a particular area within the virtual scene. Optionally, the virtual scene of the game also includes one or more elements, such as skills, points, character health, energy, etc., to provide assistance to the player, provide virtual services, increase points related to player performance, etc. In addition, the graphical user interface may also present one or more indicators to provide instructional information to the player. For example, a game may include a player-controlled virtual object and one or more other virtual objects (such as enemy characters). In one embodiment, one or more other virtual objects are controlled by other players of the game. For example, one or more other virtual objects may be computer controlled, such as a robot using Artificial Intelligence (AI) algorithms, to implement a human-machine fight mode. For example, the virtual objects possess various skills or capabilities that the game player uses to achieve the goal. For example, the virtual object possesses one or more weapons, props, tools, etc. that may be used to eliminate other objects from the game. 
Such skills or capabilities may be activated by a player of the game using one of a plurality of preset touch operations with a touch display screen of the terminal. The processor may be configured to present a corresponding game screen in response to an operation instruction generated by a touch operation of a user.
It should be noted that the system schematic diagram shown in fig. 1 is merely an example. The live game picture processing system and the scene described in the embodiment of the present application are intended to more clearly illustrate the technical solution of the embodiment of the present application, and do not constitute a limitation on it; as a person of ordinary skill in the art knows, with the evolution of the system and the appearance of new service scenes, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
In the present embodiment, the picture processing method for a live game will be described from the perspective of a picture processing apparatus, which can be specifically integrated in a computer device that has computing capability and is equipped with a storage unit and a microprocessor.
Referring to fig. 2, fig. 2 is a schematic flow diagram of a picture processing method for a live game provided in an embodiment of the present application, where the method is applied to a client or a server, the client carries a target game or a live platform of the target game, and the server is a server corresponding to the target game or the live platform to which the target game belongs. The method includes:
step 201, obtaining first data generated by a user watching a live game of the target game.
The first data may include a first position, which may be the position of the user's viewpoint in the picture; the user may be a spectator, a judge, a referee, or the like who watches the target game. The user's viewpoint refers to the intersection point of the user's line of sight with the game picture, and the position of the viewpoint can be expressed as coordinates. For example, the position of viewpoint A can be the coordinates (x1, y1), where the coordinates are a coordinate point on a coordinate system established in the game picture; the coordinate system may take one of the vertices of the game picture as the origin, the lateral direction of the game picture as the x-axis, and the longitudinal direction of the game picture as the y-axis.
Wherein, the position of each coordinate point in the coordinate system can be calculated by taking the pixel point corresponding to the resolution of the game picture as a unit. For example, the resolution of the game screen is 1024 × 768, which means that 1024 pixels are arranged in the width direction of the game screen and 768 pixels are arranged in the height direction of the game screen, 1024 coordinate units and 768 coordinate units can be arranged in the x-axis direction and the y-axis direction of the coordinate system, so that the position of each pixel is used as the position of each coordinate, and when the viewpoint of the user falls into the range of the pixel, the position of the pixel is used as the position of the viewpoint of the user.
It should be noted that a coordinate point may also be determined by taking several pixels as one coordinate unit. For example, two adjacent pixels may be taken along each of the x-axis and the y-axis, so that four pixels form one coordinate unit; when the user's viewpoint falls into that coordinate unit, the corresponding coordinate value is the center point of the four pixels. In addition to determining the first position with the pixel as the smallest unit as described above, the live game picture may be divided into any number of coordinate units of equal size.
In some scenarios, the viewpoint of a user watching the live game picture may be acquired by a wearable device such as a VR device or an eye tracker. Specifically, after the user puts on the VR device, the live game can be watched through the virtual image picture in the VR device, so that the server or the client of the target game can effectively capture the user's eye movement data and obtain the first position from it. Alternatively, the first positions may be displayed as a heat map, where the position of each point in the heat map is a first position, and the heat value of each point or area depends on the dwell duration of the corresponding viewpoints; the dwell duration of a viewpoint can be understood as how long the user's line of sight stays at a certain position. For most live game scenes the number of viewers is large, so the heat values in the heat map can represent the line-of-sight dwell durations of all users at each point and area.
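As a concrete illustration of the coordinate units and dwell-time heat map described above, the following minimal Python sketch (illustrative only; the function names, sampling model, and parameter values are assumptions, not part of the disclosure) snaps raw gaze samples to coordinate units and accumulates per-unit dwell time:

```python
# Minimal sketch of the viewpoint-to-coordinate mapping and dwell-time
# heat map described above. All names and the sampling model are
# illustrative assumptions, not part of the patent.
from collections import defaultdict

def to_coordinate_unit(x: float, y: float, unit: int = 2) -> tuple[int, int]:
    """Snap a gaze sample (in pixels) to a coordinate unit of `unit` x `unit` pixels."""
    return (int(x) // unit, int(y) // unit)

def build_heat_map(gaze_samples, sample_interval_s: float = 0.05, unit: int = 2):
    """Accumulate per-unit dwell time (seconds) over all users' gaze samples.

    `gaze_samples` is an iterable of (x, y) pixel positions, one per
    sampling tick; the dwell time per sample equals the sampling interval.
    """
    heat = defaultdict(float)
    for x, y in gaze_samples:
        heat[to_coordinate_unit(x, y, unit)] += sample_interval_s
    return heat

# Example: a 1024x768 picture sampled at 20 Hz while the viewer fixates near (512, 384).
samples = [(512.3, 384.1), (513.0, 385.2), (700.0, 100.0)]
print(build_heat_map(samples))  # two units: one with 0.1 s dwell, one with 0.05 s
```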
Step 202, obtaining second data generated by the virtual character of the target game in the live game.
The second data may include a second position, which may be the position of the virtual character on the picture. It is understood that in a target game such as an MOBA game, the movement, attacks, and skill releases of a virtual character are controlled by a player, so acquiring the second position makes it possible to accurately grasp the character's movement and the player's operating intent. Matching the second position of the virtual character in the game picture against the first position of the user's viewpoint is therefore a crucial link in obtaining the user's viewing preference for the virtual character.
Specifically, the client or the server of the target game may acquire the position of the virtual character on the game picture in real time based on a preset position acquisition instruction; for example, the real-time position may be sampled at millisecond granularity. The determination manner and the display form of the second position need to be unified with those of the first position so that the two can be compared later; the second position may therefore also be a coordinate point in the game picture's coordinate system. For example, the second position may be the coordinates (x2, y2), indicating that the virtual character is currently at the point (x2, y2) of the game picture.
And 203, matching the first data with the second data to obtain a matching result.
Acquiring the matching degree of the first position and the second position;
and if the matching degree is greater than a preset threshold value, the matching result is the first matching result.
Wherein the matching degree can be used for representing the degree of coincidence of the first position and the second position. It can be understood that, since the first position represents the position of the user's viewpoint in the picture and the second position represents the position of the virtual character in the picture, the user's viewing preference for the virtual character can be judged by comparing the similarity of the two. For example, if the first position coincides with the second position, the user's viewpoint coincides with the current position of the virtual character; if the first position is merely close to the second position, factors such as the attack distance and skill release distance of virtual characters in the target game should be considered. In either case it can be determined whether the virtual character meets the user's viewing preference.
Specifically, the matching degree may be represented by a numerical value, for example, 0.9, 90%, etc., and the greater the value corresponding to the matching degree, the higher the matching degree between the first position and the second position, which means the closer the first position and the second position are. It can be understood that a preset threshold may be preset, and since a higher matching degree indicates that the first location is closer to the second location, whether a matching result of the first location and the second location is a first matching result may be identified by determining whether the matching degree is greater than the preset threshold, where the first matching result may be that the first location is matched with the second location.
In some embodiments, step 203 may further comprise the steps of:
connecting the first positions according to the watching time sequence of the user to obtain a first track;
connecting the second positions according to the time sequence corresponding to the virtual character moving to the second positions to obtain a second track;
and acquiring the length of the overlapped part of the first track and the second track, calculating a first ratio of the length of the overlapped part to the corresponding length of the second track, and recording the first ratio as the matching degree.
Referring to fig. 4, fig. 4 is a schematic diagram of determining a matching degree according to an embodiment of the present application. As shown in fig. 4, the second position of the virtual character changes in real time as the virtual character moves. For example, the position of virtual character A at a first time may be (x2, y2), and at a second time it may have changed to (x3, y3). It can be understood that if the first position of every user were compared with the second position of the virtual character in real time, the server or the client would need to spend a large amount of computing resources; to obtain the matching degree while saving computing resources and improving computing efficiency, the matching degree can instead be calculated by comparing a first track with a second track.
Specifically, the plurality of first positions may be connected in the chronological order in which the user watched to obtain the first track, and the plurality of second positions may be connected to obtain the second track. It can be understood that by associating the generation times of the first positions and the second positions and connecting them in order of generation time, the user's eye movement track and the virtual character's movement track can be obtained accurately. For example, the first track may be the user's eye movement track generated over a given period, and the second track may be the virtual character's movement track generated over the same period.
Specifically, as shown in fig. 4, after the first track and the second track are acquired, the matching degree can be obtained by calculating the degree of overlap of the two tracks, that is, by measuring the length of their overlapped portion. For example, assuming that the lengths of the first track and the second track are both 10 and the length of the overlapped portion is 8, the first ratio is 0.8, indicating that the matching degree between the first track and the second track is 0.8, or 80%.
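The first-ratio computation can be sketched as follows in Python; this is a simplified illustration that assumes both tracks are sampled at identical timestamps and treats a segment as overlapping when its endpoints on the two tracks lie within a small tolerance. All names and the tolerance value are assumptions:

```python
# Simplified sketch of the first-ratio computation: both trajectories are
# assumed to be sampled at the same timestamps, and a segment counts as
# "overlapping" when both of its endpoints on the two tracks are within a
# small tolerance of each other.
import math

def seg_len(p, q):
    return math.dist(p, q)

def matching_degree(first_track, second_track, tol: float = 5.0) -> float:
    """first_track / second_track: lists of (x, y) sampled at identical times.

    Returns the overlap length of the two polylines divided by the length
    of the second (virtual character) track, i.e. the "first ratio".
    """
    total = sum(seg_len(a, b) for a, b in zip(second_track, second_track[1:]))
    if total == 0:
        return 0.0
    overlap = 0.0
    for i in range(len(second_track) - 1):
        close_start = seg_len(first_track[i], second_track[i]) <= tol
        close_end = seg_len(first_track[i + 1], second_track[i + 1]) <= tol
        if close_start and close_end:
            overlap += seg_len(second_track[i], second_track[i + 1])
    return overlap / total

eye = [(0, 0), (10, 0), (20, 0), (30, 10)]
hero = [(0, 1), (10, 1), (20, 1), (40, 40)]
print(f"matching degree: {matching_degree(eye, hero):.2f}")  # ~0.31: only the first two segments overlap
```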
Referring to fig. 5, fig. 5 is another schematic diagram of determining a matching degree according to an embodiment of the present disclosure.
Wherein each second position may correspond to a virtual range, and the virtual range may be determined according to at least one of the volume, attack range, and skill range of the virtual character on the picture. As shown in fig. 5, the dotted rectangle in fig. 5 is the virtual range of the virtual character, which is larger than the area the virtual character itself covers in the game picture. In a live game event, users also pay attention to characteristics such as each virtual character's role positioning and skill attributes, so the attack range and skill range of the virtual character need to be considered when determining its virtual range, and the size of the virtual range can be positively correlated with the character's volume, attack range, and skill range. For example, the larger the volume and attack range of the virtual character, the larger its virtual range. Specifically, the server or the client of the target game may directly acquire the above data from a database storing the data of a plurality of virtual characters, and determine the virtual range accordingly.
In some embodiments, the step of "obtaining the matching degree between the first position and the second position" may further include the steps of:
acquiring a plurality of virtual ranges corresponding to the plurality of second positions according to the plurality of second positions;
acquiring the number of first positions in the plurality of virtual ranges in the plurality of first positions, calculating a second ratio between the number and the total number of the plurality of first positions, and recording the second ratio as the matching degree.
For a first position within the plurality of virtual ranges, the first position and the virtual range in which it is located occur at the same time. As shown in fig. 5, the position of the virtual range changes with the position of the virtual character. Assume that for a certain user three first positions and three second positions are obtained at a first time, a second time, and a third time, and that two of the three first positions fall within the virtual range of the corresponding time; this means that at the second and third of the three times the user's viewpoint is concentrated on the virtual character, so the second ratio is 2/3 and the matching degree between the first positions and the second positions is approximately 66.7%.
It can be understood that, in a scene where a plurality of users watch live games, the comparison result between the first positions and the virtual range generated by all users at a plurality of times may be obtained, for example, a second ratio of all the first positions in the virtual range at the corresponding time may be calculated to determine the matching degree between the first positions and the second positions.
If the distance between two virtual characters in the game picture is short, the virtual ranges corresponding to the two virtual characters may overlap or partially overlap. In that case the matching degree may be further determined by extending the determination time, comparing the actual distance between the first position and each virtual character, and so on. For example, if the preset rule compares the first positions with the virtual ranges over a given time window, the window may be extended before the matching result is determined.
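A minimal sketch of the second-ratio computation, assuming (for illustration only) that each virtual range is an axis-aligned rectangle centered on the character's second position and that first and second positions are time-aligned:

```python
# Sketch of the second-ratio computation: each second position carries a
# virtual range, modeled here as a simple axis-aligned rectangle sized by
# the character's volume/range/skill range. All names are illustrative.
def in_virtual_range(first_pos, second_pos, half_w: float, half_h: float) -> bool:
    """True if the viewpoint falls inside the rectangle centered on the character."""
    dx = abs(first_pos[0] - second_pos[0])
    dy = abs(first_pos[1] - second_pos[1])
    return dx <= half_w and dy <= half_h

def second_ratio(first_positions, second_positions, half_w=30.0, half_h=20.0) -> float:
    """first/second positions are time-aligned lists of (x, y); the ratio is
    the share of viewpoints that fall inside the same-time virtual range."""
    hits = sum(
        in_virtual_range(f, s, half_w, half_h)
        for f, s in zip(first_positions, second_positions)
    )
    return hits / len(first_positions) if first_positions else 0.0

views = [(100, 100), (150, 150), (210, 120)]
hero_pos = [(105, 95), (152, 149), (400, 400)]
print(f"{second_ratio(views, hero_pos):.2%}")  # 2 of 3 viewpoints hit -> 66.67%
```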
Optionally, step 203 may further include the steps of:
acquiring the current motion state of the virtual role;
if the current motion state of the virtual character is static and the static duration exceeds a preset duration, acquiring the distance between the first position and the second position;
and if the distance is smaller than a first preset threshold value, determining that the matching result is the first matching result.
The current motion state of the virtual character refers to the state of the virtual character in the target game scene, such as stationary, moving, accelerating, decelerating, being locked, and the like. It can be understood that the aforementioned ways of obtaining the matching result are all premised on the movement of the virtual character, but in a live game scene the virtual character does not necessarily move all the time. For example, the player may keep the virtual character at a certain position to hold a position, ambush an enemy, or wait for teammates, so the result of matching a stationary virtual character with the first position also needs to be considered.
Specifically, whether the virtual character is stationary may be determined by identifying whether its position changes within a certain time range; for example, if the second position of virtual character A does not change within 1 second, its current motion state can be taken as stationary. After virtual character A is identified as stationary, its stationary duration can be further determined; for example, the identification rule for the matching result is only changed when the virtual character has been stationary for a certain time, so as to improve the accuracy of triggering the matching-result identification instruction and ensure that it is actually needed.
It can be understood that in some scenarios, when the virtual character is continuously stationary, no matter which operation it is performing (holding a position, ambushing enemies, waiting for teammates, and so on), the user's point of attention will usually shift from the virtual character itself to an area near it. Therefore, by determining whether the first position of the user's viewpoint is close to the second position, it can be judged whether the user's current viewing preference is this continuously stationary virtual character. Accordingly, during the period in which a virtual character is continuously stationary, the matching result can be obtained by determining whether the distance between the first position and the second position is smaller than the first preset threshold: if so, the matching result is the first matching result; if not, it is not.
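The stationary-character rule can be sketched as follows; the position-history format, the 1-second stationary duration, and the distance threshold are illustrative assumptions:

```python
# Sketch of the stationary-character match rule: if the character has not
# moved for longer than a preset duration, compare viewpoint-to-character
# distance against a first preset threshold. Parameter values are assumptions.
import math

def is_first_match_when_static(
    second_positions_with_time,   # [(t_seconds, (x, y)), ...], most recent last
    first_position,               # current viewpoint (x, y)
    static_duration_s: float = 1.0,
    distance_threshold: float = 50.0,
) -> bool:
    if len(second_positions_with_time) < 2:
        return False
    t_now, p_now = second_positions_with_time[-1]
    # Find how long the character has occupied its current position.
    t_static_start = t_now
    for t, p in reversed(second_positions_with_time[:-1]):
        if p != p_now:
            break
        t_static_start = t
    if t_now - t_static_start < static_duration_s:
        return False  # not stationary long enough; use the trajectory rules instead
    return math.dist(first_position, p_now) < distance_threshold

history = [(0.0, (200, 200)), (0.5, (200, 200)), (1.2, (200, 200))]
print(is_first_match_when_static(history, (230, 210)))  # True: static 1.2 s, dist ~31.6
```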
It should be noted that, so that the live game picture has a suitable switching frequency, the time intervals for matching-result calculation and for game picture switching can be set as required; for example, the matching result may be calculated every 1 second, while the picture switching interval may be set to 1-10 seconds, so as to prevent the picture from switching too frequently or too rarely and affecting the user's viewing experience.
Therefore, the first position corresponding to the viewpoint of the user and the second position where the virtual character moves in the target game can be obtained, and the matching result of the first position and the second position can be obtained more quickly and intuitively by obtaining the eye movement track and the virtual character movement track of the user. In addition, in the process of obtaining the matching result, the second position of the virtual character in the continuous static state is also considered, so that the accuracy of obtaining the matching result can be improved.
And 204, if the matching result is a first matching result, switching the live game picture into a game scene picture of at least one key virtual character.
Wherein the key virtual character is the virtual character corresponding to the first matching result. As described above, the first matching result means that the first position matches the second position; that is, in the live game scene, the user's viewpoint position matches the virtual character's current position at the corresponding time, so it can be determined that the user's viewing preference is this virtual character, which can then be marked as a "key virtual character". For example, a key virtual character may be a virtual character with attributes such as high damage output, reliable crowd control, or first-hand engage skills in the target game, since such characters are more likely to draw the attention of most users. Subsequently, after the key virtual character is determined, the live picture can be switched to a game scene picture containing the key virtual character, achieving accurate and flexible picture switching, improving the user's viewing experience, and preventing the user from missing key pictures and plays.
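Combining step 204 with the switching-interval note above, a hypothetical switching routine might look like this (the class and method names are invented for illustration; the minimum interval is an assumption within the 1-10 second range mentioned earlier):

```python
# Sketch of step 204 plus the switching-interval note: switch the broadcast
# to the key character(s) on a first match result, but no more often than a
# minimum interval. All names and values are illustrative.
import time

class BroadcastSwitcher:
    def __init__(self, min_switch_interval_s: float = 3.0):
        self.min_interval = min_switch_interval_s
        self.last_switch = float("-inf")
        self.current_focus = None

    def on_match_result(self, is_first_match: bool, key_characters: list[str]):
        now = time.monotonic()
        if not is_first_match or not key_characters:
            return
        if now - self.last_switch < self.min_interval:
            return  # throttle to avoid over-frequent scene switches
        self.current_focus = key_characters
        self.last_switch = now
        print(f"switching live picture to scene of: {key_characters}")

switcher = BroadcastSwitcher()
switcher.on_match_result(True, ["hero_a"])  # switches
switcher.on_match_result(True, ["hero_b"])  # throttled (within 3 s of the last switch)
```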
In some embodiments, step 204 may further include the steps of:
if a plurality of key virtual roles exist, acquiring the weights of the key virtual roles;
based on the weights, adjusting the positions of the plurality of key avatars in the picture.
In some embodiments, the step of "obtaining weights for the plurality of key virtual characters" may comprise the steps of:
generating a professional degree corresponding to the user according to portrait information of the user; the professional degree is used for representing the user's degree of understanding of the target game;
generating a first weight of the first data according to the specialty; the magnitude of the first weight is proportional to the magnitude of the professionalism;
for each key virtual character in the plurality of key virtual characters, acquiring a plurality of target viewpoints corresponding to each key virtual character in a target viewpoint set, and acquiring a first sum of viewpoint stay time lengths corresponding to the plurality of target viewpoints, wherein the first sum is the total viewpoint stay time length corresponding to the key virtual character;
and calculating a product value of the first weight and the total stay time of the viewpoint, and taking the product value as the weight of the key virtual character.
It can be understood that in some scenarios, since many users watch the live game, more than one virtual character may draw attention. For example, virtual character A may be a character whose role is to initiate team fights, while virtual character B is a high-damage character on the opposing side, so that the attack target of virtual character A is likely to be virtual character B, and users may pay attention to both at the same time. A plurality of key virtual characters can therefore usually be determined.
Wherein the first data may further include portrait information of the user. Optionally, the portrait information may include identity characteristics of the user related to the target game; for example, the user may be a guest, a commentator, or a spectator of the live game. Optionally, the portrait information may further include associated characteristics of the user related to the target game, such as the user's proficiency in the target game and the age of the user's game account.
It should be noted that, in specific implementations of the present application, when the above embodiments are applied to specific products or technologies, the collection, use, and processing of related data such as user information and user profile information require the user's permission or consent and must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
The user's portrait information can correspond to the user's professional degree, which is used for representing the user's degree of understanding of the target game. For example, if the portrait information of user A indicates the identity of a commentator and that of user B indicates the identity of a spectator, it can be determined with high probability that user A's professional degree is greater than user B's. Different first weights can then be assigned to the first data generated by user A and user B according to their professional degrees; for example, the first weight corresponding to user A is 1.1 and that corresponding to user B is 0.9.
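A possible mapping from portrait information to the first weight, tuned (as an assumption) so that a commentator receives 1.1 and a spectator 0.9, matching the example above:

```python
# Sketch of deriving a first weight from portrait information; the identity
# scores and the linear mapping are illustrative assumptions.
PROFESSIONALISM_BY_IDENTITY = {
    "commentator": 0.75,
    "guest": 0.6,
    "spectator": 0.25,
}

def first_weight(identity: str, registration_years: float = 0.0) -> float:
    """First weight is proportional to professional degree; seniority adds a bonus."""
    base = PROFESSIONALISM_BY_IDENTITY.get(identity, 0.25)
    professionalism = min(1.0, base + 0.02 * registration_years)
    return round(0.8 + 0.4 * professionalism, 2)  # maps [0, 1] onto [0.8, 1.2]

print(first_weight("commentator"))  # 1.1 for user A (commentator)
print(first_weight("spectator"))    # 0.9 for user B (spectator)
```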
Specifically, for any key virtual character, the plurality of target viewpoints corresponding to it in the target viewpoint set may be acquired, and the first sum of the viewpoint stay durations corresponding to those target viewpoints is the total viewpoint stay duration corresponding to that key virtual character. It can be understood that since the heat value at a first position in the heat map reflects the viewpoint stay duration of users at that position, a longer stay duration indicates greater attention and viewing preference for that position. Therefore, the total viewpoint stay duration of the target viewpoint set generated by a plurality of users can represent those users' attention to, and viewing preference for, the key virtual character.
In some embodiments, before the step "obtaining all target viewpoints corresponding to each key virtual character in the target viewpoint set, and the total duration of viewpoint stay of all target viewpoints", the method in the embodiments of the present application further includes the following steps:
acquiring an initial viewpoint of each user in a display interface of the live game and a dwell time corresponding to each initial viewpoint;
if the plurality of initial viewpoints are overlapped, acquiring a second sum of the stay time lengths of the plurality of overlapped initial viewpoints, and taking the plurality of overlapped initial viewpoints as a new initial viewpoint, wherein the second sum of the stay time lengths is the stay time length of the new initial viewpoint;
for a plurality of initial viewpoints, at least one target viewpoint with viewpoint stay time longer than a second preset threshold is obtained, and the at least one target viewpoint and the viewpoint stay time of the at least one target viewpoint are used as the target viewpoint set.
Wherein the initial viewpoints may be the viewpoints originally generated by the users, before any filtering. It can be understood that in a scene where many users watch the live game, the number and spatial spread of the users' viewpoints are uncertain; if every viewpoint were used to calculate the matching result, a large amount of the server's computing resources would be wasted and computing efficiency would drop. An effective target viewpoint set is therefore constructed first.
Specifically, coincident initial viewpoints are first screened from the viewpoint distribution, and the second sum of the stay durations corresponding to the coincident initial viewpoints is calculated, so that overly scattered initial viewpoints are excluded. Further, target viewpoints whose stay duration is greater than the second preset threshold are screened out. It can be understood that a viewpoint with a short stay duration indicates that the user did not watch that first position for long, so such viewpoint data would harm the accuracy of the subsequent matching-result calculation; excluding short-stay viewpoints in the above manner yields the target viewpoint set.
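A minimal sketch of constructing the target viewpoint set as described above; the merge radius used to decide that two initial viewpoints "coincide" and the dwell threshold are assumptions:

```python
# Sketch of building the target viewpoint set: coincident initial viewpoints
# are merged (their dwell durations summed), then viewpoints whose dwell
# duration exceeds the second preset threshold are kept.
import math

def build_target_viewpoint_set(initial_viewpoints, merge_radius=3.0, min_dwell_s=0.5):
    """initial_viewpoints: list of ((x, y), dwell_seconds), one per user viewpoint."""
    merged = []  # [ [x, y, dwell], ... ]
    for (x, y), dwell in initial_viewpoints:
        for m in merged:
            if math.dist((x, y), (m[0], m[1])) <= merge_radius:
                m[2] += dwell  # coincident: second sum of dwell durations
                break
        else:
            merged.append([x, y, dwell])
    return [((x, y), d) for x, y, d in merged if d > min_dwell_s]

points = [((100, 100), 0.3), ((101, 101), 0.4), ((300, 50), 0.2)]
print(build_target_viewpoint_set(points))  # only the merged (100,100)-area viewpoint survives
```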
Specifically, after the target viewpoint set is obtained, the first sums corresponding to the multiple key virtual characters within a certain time range can be calculated accurately, and the weight of each key virtual character is obtained based on the product of its first sum and the first weight.
For example, within a given period (say, starting from 16:00), suppose the product values calculated for key virtual character a and key virtual character b are 36 and 32, respectively. It can be understood that the weight values may be adjusted as required; for example, the product value of key virtual character c is also 32. For ease of understanding, the sum of the weights of the three key virtual characters may be normalized to 1, so that the weights of key virtual character a, key virtual character b, and key virtual character c are 0.36, 0.32, and 0.32, respectively.
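The aggregation and normalization can be sketched as follows; the data shape (per-viewpoint records already attributed to a key virtual character) and the single shared first weight are simplifying assumptions made for the example:

```python
def key_character_weights(viewpoint_records, first_weight):
    """Turn the target viewpoint set into per-character weights.

    viewpoint_records: list of (key_character_id, stay_duration) pairs, one
    per target viewpoint already attributed to a key virtual character. The
    first sum per character is its total viewpoint stay duration; its
    product with the first weight is the raw weight, normalized to sum to 1.
    """
    first_sums = {}
    for character, stay in viewpoint_records:
        first_sums[character] = first_sums.get(character, 0.0) + stay
    raw = {c: s * first_weight for c, s in first_sums.items()}
    total = sum(raw.values())
    return {c: v / total for c, v in raw.items()}

records = [("a", 20), ("a", 16), ("b", 32), ("c", 32)]
print(key_character_weights(records, first_weight=1.0))
# {'a': 0.36, 'b': 0.32, 'c': 0.32} -- reproduces the example above
```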
As shown in fig. 6, fig. 6 is a schematic diagram of adjusting positions of a plurality of key virtual characters in a screen according to an embodiment of the present application.
After the weights of the plurality of key virtual characters are determined, the positions of those key virtual characters in the picture can be adjusted according to their weights. As shown in fig. 6, when there are multiple key virtual characters, the live game picture is switched to those key virtual characters, and since the weights of the key virtual characters represent the viewing preferences of the users, the live game picture can be adjusted so that the second position of a key virtual character with a higher weight lies closer to the center of the game picture; for example, key virtual character a in fig. 6 is closer to the center of the game picture than key virtual character b. It can be understood that in a target game such as a MOBA, each virtual character has its own attack mode and skill release mode, so the live game picture may be adjusted so that the virtual character is slightly offset from the center of the game picture, such that its attack actions and skill release special effects land exactly at the center of the game picture, giving the user an optimal viewing effect.
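One simple way to realize "higher weight, closer to the center", sketched below under the assumption that the picture center can be placed freely in scene coordinates, is to aim the picture at the weight-weighted centroid of the key virtual characters; the data layout is hypothetical:

```python
def picture_center(key_characters):
    """Weight-weighted centroid of the key virtual characters' positions.

    key_characters: dict mapping a character id to (weight, (x, y)). A
    larger weight pulls the picture center towards that character, so its
    second position ends up nearer the center of the game picture.
    """
    total_w = sum(w for w, _ in key_characters.values())
    cx = sum(w * pos[0] for w, pos in key_characters.values()) / total_w
    cy = sum(w * pos[1] for w, pos in key_characters.values()) / total_w
    return cx, cy

# Character a (weight 0.6) ends up 8 units from the center, b 12 units away.
print(picture_center({"a": (0.6, (10, 0)), "b": (0.4, (-10, 0))}))  # (2.0, 0.0)
```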
The second data may further include attribute values of the virtual character, such as a blood volume value, a magic value, an anger value, and a skill cooldown value.
In some embodiments, the method of the embodiments of the present application may further include the steps of:
acquiring a preset screen splitting instruction, wherein the screen splitting instruction is used for controlling the client or the server to generate a sub-picture within the picture;
and in response to the attribute value of a key virtual character being smaller than a second preset threshold, receiving the screen splitting instruction to generate and display a sub-picture that follows the key virtual character.
Referring to fig. 7, fig. 7 is a schematic diagram of generating a sub-picture in a live game picture according to an embodiment of the present application. The size of the sub-picture is smaller than that of the live game picture, which is equivalent to generating the sub-picture in picture-in-picture form within the current live game picture. The sub-picture can be generated by the server receiving the preset screen splitting instruction; specifically, the screen splitting instruction can be received when the attribute value of a key virtual character is identified as being smaller than the second preset threshold.
In the live game picture shown in fig. 7, assume that at the current time the attribute value of key virtual character c is smaller than the second preset threshold, for example the character's blood volume value is below 100 or below 10% of its maximum; the server may then receive the preset screen splitting instruction and display key virtual character c in the sub-picture. It can be understood that, in some scenarios, when a key virtual character's attribute value is low, the player may control the character to move to a more concealed position, yet a key virtual character with a low attribute value tends to attract a great deal of user attention. Displaying such a character in the sub-picture therefore lets the user follow it without missing the game scene of the main picture, satisfying the viewing preference for the low-attribute key virtual character.
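A minimal sketch of the trigger logic; the threshold values repeat the 100 / 10% example above, while the returned instruction format and its field names are assumptions (the actual screen splitting instruction format is engine- and platform-specific):

```python
def maybe_split_screen(character_id, hp, hp_max,
                       threshold_abs=100, threshold_ratio=0.10):
    """Issue the preset screen splitting instruction when the key virtual
    character's blood volume value falls below 100 or below 10% of maximum."""
    if hp < threshold_abs or hp / hp_max < threshold_ratio:
        return {"type": "split_screen",      # picture-in-picture sub-picture
                "follow": character_id,      # sub-picture follows this character
                "size": "smaller_than_main"}
    return None  # no sub-picture needed

print(maybe_split_screen("key_virtual_character_c", hp=80, hp_max=2000))
```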
In this way, the user's viewing preference for the virtual characters in the live game is obtained by matching the user's viewing data with the data generated by the virtual characters in the game, so that the live game picture can be adjusted accurately according to that viewing preference.
Referring to fig. 3, fig. 3 is another schematic flowchart of a picture processing method for live game according to an embodiment of the present disclosure. The specific flow of the method may be as follows:
step 301, acquiring a viewpoint position set of a user in a live frame of a target game.
Step 302, a first track is obtained according to the connecting line of the viewpoint set position.
And 303, acquiring a moving position set of the virtual character in a live broadcasting picture of the target game.
And 304, obtaining a second track according to the connecting line of the moving position set, or obtaining a plurality of corresponding virtual ranges according to the moving position set.
And 305, acquiring the coincidence degree of the first track and the second track, or the ratio of each viewpoint in the viewpoint position set within the simultaneous virtual range, as the matching degree between the user viewing target and the virtual character.
And step 306, if the matching degree is greater than a preset threshold value, determining that the virtual character is a key virtual character.
And 307, acquiring the weight of each key virtual role.
And 308, adjusting the position of each key virtual character in the live game picture according to the weight.
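Steps 302 to 305 (the track-coincidence branch) can be sketched as follows, under the simplifying assumptions that both tracks are sampled at the same time steps and that a second-track segment counts as coincident when both of its endpoints lie within a tolerance of the corresponding first-track points; the tolerance and threshold values are illustrative:

```python
import math

def matching_degree(first_track, second_track, tol=1.0):
    """First ratio of the overlapped length to the second track's length.

    first_track / second_track: lists of (x, y) points connected in time
    order, sampled at the same moments (an assumption of this sketch).
    """
    total = overlap = 0.0
    for i in range(len(second_track) - 1):
        length = math.dist(second_track[i], second_track[i + 1])
        total += length
        if (math.dist(first_track[i], second_track[i]) <= tol and
                math.dist(first_track[i + 1], second_track[i + 1]) <= tol):
            overlap += length  # this segment of the second track coincides
    return overlap / total if total else 0.0

viewer = [(0, 0), (1, 0), (2, 0), (8, 8)]             # first track (viewpoints)
character = [(0, 0.5), (1, 0.5), (2, 0.5), (3, 0.5)]  # second track
degree = matching_degree(viewer, character)
print(round(degree, 2))  # 0.67: two of three segments were followed
print(degree > 0.5)      # True -> key virtual character at threshold 0.5
```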
In order to better implement the foregoing method, an embodiment of the present application further provides a picture processing apparatus for live game. Referring to fig. 8, fig. 8 is a schematic structural diagram of a picture processing apparatus for live game according to an embodiment of the present application. The apparatus is applied to a client or a server, where the client carries a target game or a live platform of the target game, and the server is the server corresponding to the target game or the live platform. The apparatus includes:
a first data obtaining module 401, configured to obtain first data generated when a user watches live game of the target game;
a second data obtaining module 402, configured to obtain second data generated by a virtual character of the target game in the live game;
a matching result obtaining module 403, configured to match the first data with the second data to obtain a matching result;
a picture switching module 404, configured to switch the live game picture to a game scene picture of at least one key virtual character if the matching result is a first matching result, where the key virtual character is the virtual character corresponding to the first matching result.
Optionally, the first data comprises a first position, and the second data comprises a second position, wherein the first position is a position of a viewpoint of the user in the picture; the second position is the position of the virtual character on the picture;
optionally, the matching result obtaining module 403 may further include:
the matching degree obtaining sub-module is used for obtaining the matching degree of the first position and the second position; wherein the matching degree is used for representing the coincidence degree of the first position and the second position;
and a first matching result obtaining sub-module, configured to determine that the matching result is the first matching result if the matching degree is greater than a preset threshold.
Optionally, the matching degree obtaining sub-module is specifically configured to:
connecting the first positions according to the watching time sequence of the user to obtain a first track;
connecting the second positions according to the time sequence corresponding to the virtual character moving to the second positions to obtain a second track;
and acquiring the length of the overlapped part of the first track and the second track, calculating a first ratio of the length of the overlapped part to the corresponding length of the second track, and recording the first ratio as the matching degree.
Optionally, each of the second positions corresponds to a virtual range; the virtual range is determined according to at least one of the volume, the range and the skill range of the virtual character on the picture;
the matching degree obtaining sub-module is further specifically configured to:
acquiring a plurality of virtual ranges corresponding to the plurality of second positions according to the plurality of second positions;
acquiring the number of first positions in the plurality of virtual ranges in the plurality of first positions, calculating a second ratio between the number and the total number of the plurality of first positions, and recording the second ratio as the matching degree; wherein, for a first position in the plurality of virtual ranges, the first position and the virtual range in which the first position is located occur at the same time.
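A sketch of the virtual-range variant handled by this sub-module; pairing the simultaneous first and second positions into samples and reducing the character's volume, range, and skill range to a single radius are assumptions made for brevity:

```python
import math

def range_matching_degree(samples):
    """Second ratio: first positions inside the virtual ranges / total.

    samples: list of (first_position, second_position, radius) triples taken
    at the same moments, so each first position is tested against the
    virtual range occurring at the same time.
    """
    inside = sum(1 for fp, sp, r in samples if math.dist(fp, sp) <= r)
    return inside / len(samples) if samples else 0.0

samples = [((0, 0), (1, 0), 2.0),   # viewpoint inside the virtual range
           ((5, 5), (1, 0), 2.0),   # outside
           ((1, 1), (1, 0), 2.0)]   # inside
print(round(range_matching_degree(samples), 2))  # 0.67
```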
Optionally, the first data comprises a first position, and the second data comprises a second position, wherein the first position is a position of a viewpoint of the user in the picture; the second position is the position of the virtual character on the picture;
optionally, the matching result obtaining module 403 may further include:
the motion state acquisition sub-module is used for acquiring the current motion state of the virtual character;
the distance obtaining sub-module is used for obtaining the distance between the first position and the second position if the current motion state of the virtual character is static and the static duration exceeds the preset duration;
and the matching result judging submodule is used for judging that the matching result is the first matching result if the distance is smaller than a first preset threshold value.
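The static-character branch implemented by these sub-modules can be sketched as below; the preset duration and the first preset threshold are given illustrative values, since the embodiments above do not fix them:

```python
import math

def static_match(first_pos, second_pos, still_seconds,
                 preset_duration=3.0, first_preset_threshold=1.5):
    """Return True (the first matching result) when the virtual character
    has been static longer than the preset duration and the distance between
    the first and second positions is under the first preset threshold."""
    if still_seconds <= preset_duration:
        return False  # not static long enough to judge
    return math.dist(first_pos, second_pos) < first_preset_threshold

print(static_match((2.0, 1.0), (2.5, 1.0), still_seconds=5.0))  # True
```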
Optionally, the screen switching module 404 may further include:
a weight obtaining sub-module, configured to obtain the weights of a plurality of key virtual characters if the plurality of key virtual characters exist;
and the picture position adjusting submodule is used for adjusting the positions of the key virtual characters in the picture based on the weight.
Optionally, the weight obtaining sub-module is specifically configured to:
generating, according to the portrait information, the professional degree of the user corresponding to the portrait information; the professional degree is used for representing the user's degree of understanding of the target game;
generating a first weight of the first data according to the professional degree; the magnitude of the first weight is proportional to the magnitude of the professional degree;
for each key virtual character in the plurality of key virtual characters, acquiring a plurality of target viewpoints corresponding to each key virtual character in a target viewpoint set, and acquiring a first sum of viewpoint stay time lengths corresponding to the plurality of target viewpoints, wherein the first sum is the total viewpoint stay time length corresponding to the key virtual character;
and calculating a product value of the first weight and the total stay time of the viewpoint, and taking the product value as the weight of the key virtual character.
Optionally, the apparatus further comprises:
an initial stay time obtaining submodule, configured to obtain an initial viewpoint of each user in a display interface of the live game and a stay time corresponding to each initial viewpoint of each user;
the target stay time obtaining submodule is used for obtaining a second sum of the stay time of the plurality of overlapped initial viewpoints if the plurality of initial viewpoints are overlapped, and taking the plurality of overlapped initial viewpoints as a new initial viewpoint, wherein the second sum of the stay time is the stay time of the new initial viewpoint;
and the target viewpoint set acquisition submodule is used for acquiring at least one target viewpoint with viewpoint stay time longer than a second preset threshold for the plurality of initial viewpoints, and taking the at least one target viewpoint and the viewpoint stay time of the at least one target viewpoint as the target viewpoint set.
Optionally, the second data includes attribute values of the virtual character, and the apparatus further includes:
a screen splitting instruction acquisition sub-module, configured to acquire a preset screen splitting instruction, the screen splitting instruction being used for controlling the client or the server to generate a sub-picture within the picture, the size of the sub-picture being smaller than that of the picture;
and a sub-picture generation sub-module, configured to receive the screen splitting instruction in response to the attribute value of a key virtual character being smaller than a second preset threshold, so as to generate and display a sub-picture that follows the key virtual character.
The embodiment of the application further provides a computer device, which includes a processor and a memory, the memory storing a plurality of instructions; the processor loads the instructions from the memory to execute the steps in the picture processing method for live game according to any one of the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium, in which a plurality of instructions are stored, the instructions being suitable for being loaded by a processor to execute the steps in the picture processing method for live game according to any one of the above embodiments:
acquiring first data generated when a user watches live game of the target game;
acquiring second data generated by the virtual character of the target game in the live game;
matching the first data with the second data to obtain a matching result;
if the matching result is a first matching result, switching the live game picture into a game scene picture of at least one key virtual character; the key virtual character is the virtual character corresponding to the first matching result.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
The picture processing apparatus for live game provided by the embodiment of the application acquires first data generated when a user watches the live game of the target game; acquires second data generated by the virtual characters of the target game in the live game; matches the first data with the second data to obtain a matching result; and, if the matching result is a first matching result, switches the live game picture to a game scene picture of at least one key virtual character, where the key virtual character is the virtual character corresponding to the first matching result.
In this way, the user's viewing preference for the virtual characters in the live game is obtained by matching the user's viewing data with the data generated by the virtual characters in the game, so that the live game picture can be adjusted accurately according to that viewing preference.
Correspondingly, the embodiment of the present application further provides a computer device, where the computer device may be a terminal or a server, and the terminal may be a terminal device such as a smartphone, a tablet computer, a notebook computer, a touch screen, a game machine, a personal computer, or a Personal Digital Assistant (PDA).
As shown in fig. 9, fig. 9 is a schematic structural diagram of a computer device 500 provided in this embodiment of the present application. The computer device 500 includes a processor 501 having one or more processing cores, a memory 502 having one or more computer-readable storage media, and a computer program stored in the memory 502 and runnable on the processor 501. The processor 501 is electrically connected to the memory 502. Those skilled in the art will appreciate that the computer device structure illustrated in the figure does not constitute a limitation of the computer device, which may include more or fewer components than those illustrated, combine certain components, or arrange the components differently.
The processor 501 is the control center of the computer device 500 and connects various parts of the entire computer device 500 using various interfaces and lines. By running or loading software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, it performs various functions of the computer device 500 and processes data, thereby monitoring the computer device 500 as a whole.
In this embodiment of the application, the processor 501 in the computer device 500 loads instructions corresponding to processes of one or more applications into the memory 502, and the processor 501 runs the applications stored in the memory 502, so as to implement various functions as follows:
acquiring first data generated when a user watches live game of the target game;
acquiring second data generated by the virtual character of the target game in the live game;
matching the first data with the second data to obtain a matching result;
if the matching result is a first matching result, switching the live game picture into a game scene picture of at least one key virtual character; the key virtual character is the virtual character corresponding to the first matching result.
In this way, the user's viewing preference for the virtual characters in the live game is obtained by matching the user's viewing data with the data generated by the virtual characters in the game, so that the live game picture can be adjusted accurately according to that viewing preference.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 9, the computer device 500 further includes: a touch display screen 503, a radio frequency circuit 504, an audio circuit 505, an input unit 506, and a power supply 507. The processor 501 is electrically connected to the touch display screen 503, the radio frequency circuit 504, the audio circuit 505, the input unit 506, and the power supply 507, respectively. Those skilled in the art will appreciate that the computer device structure illustrated in FIG. 9 does not constitute a limitation of computer devices, which may include more or fewer components than those illustrated, combine certain components, or arrange the components differently.
The touch display screen 503 can be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 503 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user as well as various graphical user interfaces of the computer device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations of the user on or near it (for example, operations performed on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions that drive the execution of corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the direction of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends these to the processor 501, and can receive and execute commands sent by the processor 501. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 501 to determine the type of the touch event, after which the processor 501 provides a corresponding visual output on the display panel according to the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 503 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display screen 503 can also serve as part of the input unit 506 to implement an input function.
In the embodiment of the present application, a game application is executed by the processor 501 to generate a graphical user interface on the touch display screen 503, where a virtual scene on the graphical user interface includes at least one skill control area, and the skill control area includes at least one skill control. The touch display screen 503 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.
The radio frequency circuit 504 may be used to transmit and receive radio frequency signals, so as to establish wireless communication with a network device or another computer device and to exchange signals with that network device or computer device.
The audio circuit 505 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 505 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 505 receives and converts into audio data; the audio data is then processed by the processor 501 and either transmitted to, for example, another computer device via the radio frequency circuit 504, or output to the memory 502 for further processing. The audio circuit 505 may also include an earphone jack to allow a peripheral headset to communicate with the computer device.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 507 is used to supply power to the various components of the computer device 500. Optionally, the power supply 507 may be logically connected to the processor 501 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The power supply 507 may also include one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
Although not shown in fig. 9, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In this way, the user's viewing preference for the virtual characters in the live game is obtained by matching the user's viewing data with the data generated by the virtual characters in the game, so that the live game picture can be adjusted accurately according to that viewing preference.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to execute the steps in any picture processing method for live game provided in the embodiments of the present application. For example, the computer program may perform the following steps:
acquiring first data generated when a user watches live game of the target game;
acquiring second data generated by the virtual character of the target game in the live game;
matching the first data with the second data to obtain a matching result;
if the matching result is a first matching result, switching the live game picture into a game scene picture of at least one key virtual character; the key virtual character is the virtual character corresponding to the first matching result.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
The storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, or the like.
Since the computer programs stored in the storage medium can execute the steps in any picture processing method for live game provided in the embodiments of the present application, they can achieve the beneficial effects achievable by any such method; these are detailed in the foregoing embodiments and are not repeated here.
The picture processing method and apparatus for live game, storage medium, and computer device provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make variations to the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (12)

1. A picture processing method for live game, applied to a client or a server, wherein the client carries a target game or a live platform of the target game, and the server is a server corresponding to the target game or the live platform, the method comprising:
acquiring first data generated when a user watches live game of the target game;
acquiring second data generated by the virtual character of the target game in the live game;
matching the first data with the second data to obtain a matching result;
if the matching result is a first matching result, switching the live game picture into a game scene picture of at least one key virtual character; the key virtual character is the virtual character corresponding to the first matching result.
2. The method of claim 1, wherein the first data comprises a first position and the second data comprises a second position, the first position being the position of the user's viewpoint in the picture and the second position being the position of the virtual character in the picture;
the matching the first data and the second data to obtain a matching result includes:
acquiring the matching degree of the first position and the second position; wherein the matching degree is used for representing the coincidence degree of the first position and the second position;
and if the matching degree is greater than a preset threshold value, the matching result is the first matching result.
3. The method of claim 2, wherein said acquiring the matching degree of the first position and the second position comprises:
connecting the first positions according to the watching time sequence of the user to obtain a first track;
connecting the second positions according to the time sequence corresponding to the virtual character moving to the second positions to obtain a second track;
and acquiring the length of the overlapped part of the first track and the second track, calculating a first ratio of the length of the overlapped part to the corresponding length of the second track, and recording the first ratio as the matching degree.
4. The method of claim 2, wherein each second position corresponds to a virtual range, the virtual range being determined according to at least one of the volume, the range, and the skill range of the virtual character in the picture;
and wherein said acquiring the matching degree of the first position and the second position comprises:
acquiring a plurality of virtual ranges corresponding to the plurality of second positions according to the plurality of second positions;
acquiring the number of first positions in the plurality of virtual ranges in the plurality of first positions, calculating a second ratio between the number and the total number of the plurality of first positions, and recording the second ratio as the matching degree; wherein, for a first position in the plurality of virtual ranges, the first position and the virtual range in which the first position is located occur at the same time.
5. The method of claim 1, wherein the first data comprises a first position and the second data comprises a second position, the first position being the position of the user's viewpoint in the picture and the second position being the position of the virtual character in the picture;
the matching the first data and the second data to obtain a matching result includes:
acquiring the current motion state of the virtual role;
if the current motion state of the virtual character is static and the static duration exceeds a preset duration, acquiring the distance between the first position and the second position;
and if the distance is smaller than a first preset threshold value, determining that the matching result is the first matching result.
6. The method of claim 1, wherein switching the live game picture to a game scene picture of at least one key virtual character comprises:
if a plurality of key virtual characters exist, acquiring the weights of the plurality of key virtual characters;
and adjusting the positions of the plurality of key virtual characters in the picture based on the weights.
7. The method of claim 6, wherein the first data comprises portrait information of the user, and wherein acquiring the weights of the plurality of key virtual characters comprises:
generating, according to the portrait information, the professional degree of the user corresponding to the portrait information; the professional degree is used for representing the user's degree of understanding of the target game;
generating a first weight of the first data according to the professional degree; the magnitude of the first weight is proportional to the magnitude of the professional degree;
for each key virtual character in the plurality of key virtual characters, acquiring a plurality of target viewpoints corresponding to each key virtual character in a target viewpoint set, and acquiring a first sum of viewpoint stay time lengths corresponding to the plurality of target viewpoints, wherein the first sum is the total viewpoint stay time length corresponding to the key virtual character;
and calculating a product value of the first weight and the total stay time of the viewpoint, and taking the product value as the weight of the key virtual character.
8. The method of claim 7, wherein before obtaining all target viewpoints corresponding to each key virtual character in the set of target viewpoints and a total duration of viewpoint dwells for all the target viewpoints, the method further comprises:
acquiring an initial viewpoint of each user in a display interface of the live game and a dwell time corresponding to each initial viewpoint;
if the plurality of initial viewpoints are overlapped, acquiring a second sum of the stay time lengths of the plurality of overlapped initial viewpoints, and taking the plurality of overlapped initial viewpoints as a new initial viewpoint, wherein the second sum of the stay time lengths is the stay time length of the new initial viewpoint;
for a plurality of initial viewpoints, at least one target viewpoint with viewpoint stay time longer than a second preset threshold is obtained, and the at least one target viewpoint and the viewpoint stay time of the at least one target viewpoint are used as the target viewpoint set.
9. The method of claim 1, wherein the second data comprises attribute values of the virtual character, the method further comprising:
acquiring a preset screen splitting instruction, wherein the screen splitting instruction is used for controlling the client or the server to generate a sub-picture within the picture; the size of the sub-picture is smaller than that of the picture;
and in response to the attribute value of the key virtual character being smaller than a second preset threshold, receiving the screen splitting instruction to generate and display a sub-picture that follows the key virtual character.
10. A picture processing apparatus for live game, applied to a client or a server, wherein the client carries a target game or a live platform of the target game, and the server is a server corresponding to the target game or the live platform, the apparatus comprising:
the first data acquisition module is used for acquiring first data generated when a user watches live game of the target game;
the second data acquisition module is used for acquiring second data generated by the virtual character of the target game in the live game;
the matching result acquisition module is used for matching the first data with the second data to obtain a matching result;
a picture switching module, configured to switch the live game picture to a game scene picture of at least one key virtual character if the matching result is a first matching result; the key virtual character is the virtual character corresponding to the first matching result.
11. A computer device, comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads the instructions from the memory to execute the steps in the picture processing method for live game according to any one of claims 1 to 9.
12. A computer-readable storage medium, storing a plurality of instructions suitable for being loaded by a processor to execute the steps in the picture processing method for live game according to any one of claims 1 to 9.
CN202210744739.8A 2022-06-27 2022-06-27 Game live broadcast picture processing method, device, computer equipment and storage medium Active CN115225926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210744739.8A CN115225926B (en) 2022-06-27 2022-06-27 Game live broadcast picture processing method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210744739.8A CN115225926B (en) 2022-06-27 2022-06-27 Game live broadcast picture processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115225926A true CN115225926A (en) 2022-10-21
CN115225926B CN115225926B (en) 2023-12-12

Family

ID=83609695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210744739.8A Active CN115225926B (en) 2022-06-27 2022-06-27 Game live broadcast picture processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115225926B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109005099A (en) * 2017-06-06 2018-12-14 金德奎 A kind of reality scene sharing method and its social activity or method for gaming
CN110456907A (en) * 2019-07-24 2019-11-15 广东虚拟现实科技有限公司 Control method, device, terminal device and the storage medium of virtual screen
CN110597387A (en) * 2019-09-05 2019-12-20 腾讯科技(深圳)有限公司 Artificial intelligence based picture display method and device, computing equipment and storage medium
WO2020143145A1 (en) * 2019-01-10 2020-07-16 网易(杭州)网络有限公司 Display control method and device in game, storage medium, processor, and terminal
JP2020115981A (en) * 2019-01-21 2020-08-06 株式会社スクウェア・エニックス Video game processing program, video game processing device, video game processing method, and program for learning
CN111629225A (en) * 2020-07-14 2020-09-04 腾讯科技(深圳)有限公司 Visual angle switching method, device and equipment for live broadcast of virtual scene and storage medium
CN112947824A (en) * 2021-01-28 2021-06-11 维沃移动通信有限公司 Display parameter adjusting method and device, electronic equipment and medium
CN112967299A (en) * 2021-05-18 2021-06-15 北京每日优鲜电子商务有限公司 Image cropping method and device, electronic equipment and computer readable medium
CN113129112A (en) * 2021-05-11 2021-07-16 杭州海康威视数字技术股份有限公司 Article recommendation method and device and electronic equipment
CN113440846A (en) * 2021-07-15 2021-09-28 网易(杭州)网络有限公司 Game display control method and device, storage medium and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117883789A (en) * 2024-03-15 2024-04-16 腾讯科技(深圳)有限公司 Data acquisition method, apparatus, device, readable storage medium, and program product
CN117883789B (en) * 2024-03-15 2024-05-28 腾讯科技(深圳)有限公司 Data acquisition method, apparatus, device, readable storage medium, and program product

Also Published As

Publication number Publication date
CN115225926B (en) 2023-12-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant