CN111544897A - Video clip display method, apparatus, device, and medium based on a virtual scene


Info

Publication number
CN111544897A (application CN202010432606.8A; granted as CN111544897B)
Authority
CN
China
Prior art keywords
virtual
target
video clip
virtual object
scene
Prior art date
Legal status
Granted
Application number
CN202010432606.8A
Other languages
Chinese (zh)
Other versions
CN111544897B (en)
Inventor
谭敏
李冰
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010432606.8A
Publication of CN111544897A
Application granted
Publication of CN111544897B
Legal status: Active (current)
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85: Providing additional services to players
    • A63F13/86: Watching games played by other players
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50: characterized by details of game servers
    • A63F2300/57: details of game services offered to the player
    • A63F2300/577: for watching a game played by other players

Abstract

The application provides a video clip display method, apparatus, device, and medium based on a virtual scene, and belongs to the field of computer technologies. The method includes: determining at least one target video clip of a target virtual object in a virtual scene, the target video clip being used for describing a virtual service corresponding to the target virtual object in the virtual scene; determining at least one target position in a virtual map; and displaying playing interfaces of the at least one target video clip at the at least one target position of the virtual map, respectively. With the technical solution provided by the embodiments of the application, an end user can see, on the virtual map, their target video clips from the virtual scene. Because the target video clips capture the moments in which the end user performed well, the end user's highlights can be viewed quickly without watching the complete game video, which improves the efficiency of human-computer interaction.

Description

Video clip display method, apparatus, device, and medium based on a virtual scene
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for displaying a video clip based on a virtual scene.
Background
With the development of multimedia technology and the diversification of terminal functions, more and more games can be played on terminals. Shooting games are among the more popular: the terminal displays a virtual scene in its interface and displays virtual objects in the virtual scene. During a game, the end user can control a virtual object to use virtual props to fight against virtual objects controlled by other users.
Some end users want to share their in-game highlights with other users. To do so, they generally have to send the full game video, and the other users see the highlights by watching it.
However, an end user's highlights rarely span the entire game; they occur only in certain segments of the game video, and other users cannot directly tell where those segments are. Watching the game video therefore costs other users considerable time, and the efficiency of human-computer interaction is low.
Disclosure of Invention
The embodiments of the application provide a video clip display method, apparatus, device, and medium based on a virtual scene, which can improve the efficiency of human-computer interaction. The technical solution is as follows:
in one aspect, a method for displaying a video segment based on a virtual scene is provided, the method comprising:
determining at least one target video clip of a target virtual object in a virtual scene, wherein the target video clip is used for describing a virtual service corresponding to the target virtual object in the virtual scene;
determining at least one target location in a virtual map, wherein the target location is determined according to location information of at least one virtual service executed by the target virtual object in the virtual scene;
and displaying playing interfaces of the at least one target video clip at the at least one target position of the virtual map, respectively.
In one aspect, a video clip display apparatus based on a virtual scene is provided, the apparatus including:
an acquisition module, configured to determine at least one target video clip of a target virtual object in a virtual scene, where the target video clip is used for describing a virtual service corresponding to the target virtual object in the virtual scene;
a determining module, configured to determine at least one target position in a virtual map, where the target position is determined according to position information of at least one virtual service executed by the target virtual object in the virtual scene;
and a display module, configured to display playing interfaces of the at least one target video clip at the at least one target position of the virtual map, respectively.
In one possible embodiment, the at least one target position is obtained by:
for any target video clip, determining, according to the position information of the target virtual object in the virtual scene, first position information of the target virtual object at the time the virtual service is executed;
and determining a target position corresponding to the target video clip on the virtual map according to first position information of the target virtual object in the virtual scene.
In one possible implementation, the display module is configured to determine a movement track of the target virtual object in the virtual map, where the movement track is determined based on the position information of the target virtual object in the virtual scene, and to display the playing interfaces of the at least one target video clip at the at least one target position on the movement track, respectively.
In one possible embodiment, the movement track is obtained by: determining a plurality of position points on the virtual map according to the position information of the target virtual object in the virtual scene, where one position point corresponds to one piece of position information; and connecting temporally adjacent position points according to the time order of the pieces of position information, so as to determine the movement track of the target virtual object.
In one possible embodiment, the colors of the line segments of the movement track differ according to the virtual vehicle used by the target virtual object.
In one possible embodiment, the at least one target position is obtained by:
for any target video clip, determining, according to the position information of the target virtual object in the virtual scene, second position information of the target virtual object at the end of the virtual battle;
and determining a target position corresponding to the target video clip on the virtual map according to the second position information of the target virtual object in the virtual scene.
In one possible embodiment, the display module is configured to display at least one anchor point at the at least one target position of the virtual map, one anchor point corresponding to one target position, and to display the playing interface of each of the at least one target video clip at the position of its corresponding anchor point.
In one possible implementation, the playing interface of the target video clip is a playing window, and the playing window further includes at least one of: ranking data of the target virtual object after the virtual service is executed, the number of virtual objects defeated, and the survival duration of the target virtual object in the virtual battle.
In a possible embodiment, the apparatus further comprises:
and a playing module, configured to automatically play the at least one target video clip in a target playing order if the at least one target video clip belongs to at least two different virtual battles.
In one possible embodiment, the target playing order is determined according to at least one of: the ranking of the target virtual object in the corresponding virtual battle, the number of virtual objects defeated, and the survival duration of the target virtual object in the virtual battle.
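As an illustration of this ordering rule, the sketch below sorts clips from several virtual battles by ranking, then by defeats, then by survival duration. It is a minimal sketch only; the record fields (ranking, defeats, survival_s) are assumptions, since the application does not fix a data model.

    # Illustrative sketch only: order target video clips across battles.
    # Field names (ranking, defeats, survival_s) are assumptions.
    def target_play_order(clips):
        # Lower ranking is better; more defeats and longer survival play earlier.
        return sorted(clips, key=lambda c: (c["ranking"], -c["defeats"], -c["survival_s"]))

    clips = [
        {"battle": "b1", "ranking": 3, "defeats": 5, "survival_s": 1200},
        {"battle": "b2", "ranking": 1, "defeats": 7, "survival_s": 1500},
    ]
    print([c["battle"] for c in target_play_order(clips)])  # ['b2', 'b1']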
In one possible implementation, the display module is further configured to display, on the virtual map, at least one of ranking data of the target virtual object after the virtual service is executed, the number of virtual objects defeated, and the survival duration of the target virtual object in the virtual battle, in response to the target video clip not being acquired.
In one possible embodiment, the target video clip is obtained by the following process:
in response to the target virtual object starting to execute the virtual service, starting to record a video of the target virtual object in the virtual scene;
and in response to the target virtual object completing the virtual service, stopping recording the video and taking the recorded video as the target video clip.
In one aspect, a computer device is provided and includes one or more processors and one or more memories having at least one program code stored therein, the program code being loaded and executed by the one or more processors to implement the operations performed by the virtual scene based video clip display method.
In one aspect, a storage medium having at least one program code stored therein is provided, the program code being loaded and executed by a processor to implement the operations performed by the virtual scene-based video clip displaying method.
In one aspect, a computer program product is provided, which stores one or more instructions executable by a processor of a server to perform the above-mentioned virtual scene-based video segment display method.
With the technical solution provided by the embodiments of the application, the terminal can display a playing window for a video clip of the target virtual object in the virtual scene at a target position on the virtual map, so the end user can see their target video clips from the virtual scene directly on the virtual map. Because the target video clips capture the moments in which the end user performed well, the end user's highlights can be viewed quickly without watching the complete game video, which improves the efficiency of human-computer interaction.
Drawings
To illustrate the technical solutions in the embodiments of the application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a virtual scene-based video segment display method according to an embodiment of the present application;
fig. 2 is a flowchart of a method for displaying a video segment based on a virtual scene according to an embodiment of the present application;
fig. 3 is a flowchart of a method for displaying a video segment based on a virtual scene according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a movement trajectory determination provided by an embodiment of the present application;
FIG. 5 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 6 is a schematic view of an interface provided by an embodiment of the present application;
fig. 7 is a schematic diagram of an anchor point corresponding icon provided in an embodiment of the present application;
fig. 8 is a flowchart of a method for displaying a video segment based on a virtual scene according to an embodiment of the present application;
FIG. 9 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 10 is a schematic view of an interface provided by an embodiment of the present application;
fig. 11 is a flowchart of a method for displaying a video segment based on a virtual scene according to an embodiment of the present application;
fig. 12 is a block diagram of a video segment display apparatus based on a virtual scene according to an embodiment of the present application;
fig. 13 is a block diagram of a video segment display apparatus based on a virtual scene according to an embodiment of the present application;
fig. 14 is a block diagram of a terminal according to an embodiment of the present application;
fig. 15 is a block diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Hereinafter, terms related to the present application are explained.
Virtual scene: a scene displayed (or provided) by an application program when it runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. It may be two-dimensional, 2.5-dimensional, or three-dimensional; the dimensionality of the virtual scene is not limited in the embodiments of the application. For example, a virtual scene may include sky, land, and ocean, the land may include environmental elements such as deserts and cities, and the user can control a virtual object to move in the virtual scene.
Virtual object: a movable object in a virtual scene. The movable object may be a virtual character, a virtual animal, an anime character, or the like, such as a character, animal, plant, oil drum, wall, or stone displayed in the virtual scene. The virtual object may be a virtual avatar that represents the user in the virtual scene. A virtual scene may include multiple virtual objects, each with its own shape and volume, occupying part of the space in the virtual scene.
Optionally, the virtual object may be a user character controlled through operations on a client; in the embodiments of the application, such a virtual object is the target virtual object, that is, the virtual object controlled by the end user. A virtual object may also be an Artificial Intelligence (AI) set up for virtual-scene battles through training, or a non-player character (NPC) set up for virtual-scene interaction. Optionally, the virtual object may be a virtual character competing in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control a virtual object to free-fall, glide, or open a parachute to descend in the sky of the virtual scene, to run, jump, crawl, or bend forward on land, or to swim, float, or dive in the ocean; the user may also control the virtual object to move through the virtual scene by riding a virtual vehicle, such as a virtual car, a virtual aircraft, or a virtual yacht. The above scenes are merely examples and are not limiting. The user can also control the virtual object to fight other virtual objects with virtual props, for example throwing-type props such as grenades, cluster grenades, and sticky grenades, or shooting-type props such as machine guns, pistols, and rifles; the type of virtual prop is not limited in the application.
Hereinafter, a system architecture according to the present application will be described.
Fig. 1 is a schematic diagram of an implementation environment of a method for displaying a video segment based on a virtual scene according to an embodiment of the present application, and referring to fig. 1, the implementation environment includes: a terminal 120 and a server 160.
The terminal 120 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, etc.
The terminal 120 installs and runs an application program supporting the display of a virtual scene. The application program may be any one of a First-Person Shooting game (FPS), a Third-Person Shooting game (TPS), a Multiplayer Online Battle Arena (MOBA) game, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The terminal 120 may be a terminal used by a user, who uses it to operate a virtual object in the virtual scene to carry out activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object is a virtual character, such as a simulated person or an anime character.
The terminal 120 also installs and runs an application program supporting the virtual map and video clip display. This application may be an independent application or may be embedded in a social application. If it is an independent application, it may be a browser or a dedicated application for presenting game data. If it is embedded in a social application, the user can conveniently share the virtual map with its video clips to other users through that social application.
The server 160 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms. The server 160 provides background services for applications that support the display of virtual scenes. The terminal 120 may establish a network connection with the server 160 through a wired or wireless network.
Those skilled in the art will appreciate that the number of terminals may be greater or fewer: there may be only one terminal, or tens or hundreds of terminals, or more. The number of terminals and the device types are not limited in the embodiments of the application.
Fig. 2 is a flowchart of a method for displaying a video segment based on a virtual scene according to an embodiment of the present application, and referring to fig. 2, the method may be applied to a terminal, and the method includes:
201. The terminal determines at least one target video clip of the target virtual object in the virtual scene, the target video clip being used for describing the virtual service corresponding to the target virtual object in the virtual scene.
The virtual scene is a virtual space in which virtual objects of different camps confront each other. A virtual service is a type of event used to represent the behavior of a virtual object in the virtual scene. The target video clip may be determined by the server and sent to the terminal, or determined by the terminal itself; this is not limited in the embodiments of the application.
202. The terminal determines at least one target position in the virtual map, wherein the target position is determined according to the position information of the target virtual object executing at least one virtual service in the virtual scene.
The virtual map is used for displaying the terrain of the virtual scene, and the virtual map can be a plan view or a top view of the virtual scene.
203. The terminal displays playing interfaces of the at least one target video clip at the at least one target position of the virtual map, respectively.
The playing interface is used for displaying the target video clip. Its display form may include, but is not limited to, a playing window, a playing link, or even a hidden playing link; playing and browsing of the video clip are realized through this interface.
With the technical solution provided by this embodiment, the terminal can display playing interfaces for video clips of the target virtual object in the virtual scene at target positions on the virtual map, so the end user can see their target video clips from the virtual scene directly on the virtual map. Because the target video clips capture the moments in which the end user performed well, the end user's highlights can be viewed quickly without watching the complete game video, which improves the efficiency of human-computer interaction.
In the embodiments of the application, the server can perform the background processing related to video clip display and send the processing result to the terminal, which displays the target video clips on the virtual map. In other possible embodiments, the technical solution may instead be executed by the terminal alone or the server alone. The following describes the virtual-scene-based video clip display method by taking the interaction between the terminal and the server as an example; referring to fig. 3, the method includes:
301. The server determines at least one target video clip of the target virtual object in the virtual scene, the target video clip being used for describing the virtual service corresponding to the target virtual object in the virtual scene.
The virtual service may include at least one of the following: the target virtual object defeats another virtual object; the target virtual object wins a virtual battle; the target virtual object obtains a powerful virtual prop; a teammate of the target virtual object is defeated; the target virtual object drives a virtual vehicle; the target virtual object is defeated by another virtual object; the target virtual object consecutively defeats at least two other virtual objects; and/or the target virtual object defeats another virtual object from a distance greater than a distance threshold. The target video clip may be at least one video clip describing the target virtual object executing a virtual service in the virtual scene.
In one possible implementation, the server may query a video clip database according to the end-user identifier to obtain at least one target video clip of the target virtual object in the virtual scene. The video clip database is maintained by the server and stores the target video clips of the virtual scenes of the virtual battles corresponding to each end-user identifier. The end user may also ask the server for the target video clips of the virtual scene of a specific virtual battle; in that case, the server queries the video clip database according to the end-user identifier and the identifier of that battle. In this implementation, the server can obtain the target video clips directly and quickly from the end-user identifier, so the efficiency of human-computer interaction is high.
Taking a First-Person Shooting (FPS) game as the virtual battle for explanation: the server determines the end-user identifier, which may be an identification number that uniquely identifies the end user, such as the end user's Identity Document (ID). From the ID "ABCD", the server can determine the identifiers of the virtual battles corresponding to "ABCD" and query the video clip database with them to obtain at least one target video clip of the corresponding virtual scenes. When the end user wants the target video clips of one specific battle, the identifier of that battle can be entered on the terminal; the identifier may include, but is not limited to, the time at which the battle took place. The end user can then send a query instruction to the server through the terminal, carrying the end user's ID, such as "ABCD", and the battle identifier, such as "March 1, 17:00". The server queries the video clip database according to the query instruction to obtain at least one target video clip corresponding to "ABCD" and "March 1, 17:00".
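A minimal sketch of this lookup, under the assumption that the video clip database is keyed by end-user ID and battle identifier (the keying and layout below are illustrative, not specified by the application):

    # Illustrative sketch only: look up target clips by end-user ID and,
    # optionally, by the identifier (e.g., start time) of one virtual battle.
    def query_clips(clip_db, user_id, battle_id=None):
        # clip_db: {(user_id, battle_id): [clip, ...]} -- assumed layout.
        if battle_id is not None:
            return clip_db.get((user_id, battle_id), [])
        return [c for (uid, _bid), clips in clip_db.items()
                if uid == user_id for c in clips]

    db = {("ABCD", "March 1, 17:00"): ["clip1", "clip2"]}
    print(query_clips(db, "ABCD", "March 1, 17:00"))  # ['clip1', 'clip2']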
In one possible implementation, the server may obtain the combat information of the virtual battles, determine the virtual battle whose combat information meets a target condition as the target virtual battle, and determine at least one target video clip of the target virtual object in the virtual scene of that battle. For example, the server may query a combat-information database it maintains, according to the end user's identifier, to obtain the combat information of at least one virtual battle of the target virtual object. The combat information may include at least one of: the number of other virtual objects defeated by the target virtual object, the damage the target virtual object caused to other virtual objects, and the survival duration of the target virtual object in the virtual battle. The server determines the target virtual battle whose combat information meets the target condition, then queries the video clip database according to the user identifier and the identifier of that battle to determine at least one target video clip of the target virtual object in the corresponding virtual scene; the target video clip may carry the occurrence time of the virtual service in the virtual scene. Meeting the target condition may include at least one of: the number of virtual objects defeated is the largest across all battles, the damage caused is the largest across all battles, or the survival duration is the longest across all battles; this is not limited in the embodiments of the application. In this implementation, the server screens the virtual battles before acquiring video clips, that is, it acquires the video clips of the virtual scene of the battle in which the end user performed best. For one virtual battle, only the few video clips showing the end user's best moments need to be displayed rather than the complete battle video, which simplifies the display and improves the efficiency of human-computer interaction. Moreover, when sharing later, the end user can share the best clips directly with other users without manual screening, further improving the efficiency of human-computer interaction.
Again taking an FPS game as the virtual battle: the server can query its combat-information database according to the end user's ID "ABCD" to obtain the combat information of at least one virtual battle of "ABCD". The combat information may include at least one of: the number of other game characters user "ABCD" defeated, the damage user "ABCD" caused to other game characters, and the survival duration of user "ABCD" in the virtual battle. The server screens the combat information of multiple battles to find the target virtual battle meeting the target condition, which may be the battle in which user "ABCD" performed best. The best-performance battle may be at least one of: the battle in which user "ABCD" defeated the most other game characters, the battle in which user "ABCD" caused the most damage to other game characters, or the battle in which user "ABCD" survived the longest; this is not limited in the embodiments of the application. The server then queries its video clip database according to the identifier of the target virtual battle and user "ABCD" to obtain at least one target video clip of the game character of user "ABCD" in the corresponding virtual scene. The target video clip may be at least one of: a clip of user "ABCD" defeating other game characters, a clip of user "ABCD" riding a vehicle, or a clip of user "ABCD" being defeated by other game characters.
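The screening step can be sketched as follows, assuming one combat-information record per battle with illustrative fields (defeats, damage, survival_s); the tie-breaking order is an assumption, since the application allows any of the listed target conditions:

    # Illustrative sketch only: pick the "best-performance" virtual battle
    # from a user's combat-information records.
    def pick_target_battle(battles):
        # Target condition used here: most defeats, then most damage,
        # then longest survival -- one combination of the conditions above.
        return max(battles, key=lambda b: (b["defeats"], b["damage"], b["survival_s"]))

    battles = [
        {"battle_id": "March 1, 17:00", "defeats": 9, "damage": 2100, "survival_s": 1430},
        {"battle_id": "March 2, 20:30", "defeats": 4, "damage": 900, "survival_s": 800},
    ]
    print(pick_target_battle(battles)["battle_id"])  # March 1, 17:00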
In one possible implementation, the server may query the video clip database according to the end-user identifier, obtain at least one video clip of the target virtual object in the virtual scene of the virtual battle, and determine, from the at least one video clip, the target video clips describing the target virtual object executing virtual services in the virtual scene.
Optionally, before executing step 301, the server may generate the target video clips of a virtual battle from the video of that battle, as follows:
In one possible implementation, when the end user controls the target virtual object through a virtual battle, the server may record a complete video of the target virtual object in the virtual scene of that battle. During recording, in response to the target virtual object executing a virtual service, the server may record the type of the virtual service and its completion time. After the complete video of the battle's virtual scene has been recorded, the server can determine at least one time point on the video's time axis according to the completion times, each time point corresponding to one virtual service, and intercept the video of a target duration before each time point as a target video clip. The server may store each target video clip, bound with the user identifier, the corresponding virtual service type, and the occurrence time, in its video clip database. Taking an FPS game as an example again: the server may record a complete video of end user "ABCD" controlling a game character through a game; in response to "ABCD" defeating another game character, the server records the virtual service type as defeating another virtual object and the completion time as "11 minutes 20 seconds". After recording the complete video, the server determines, on its time axis and according to the completion time "11 minutes 20 seconds", the time point corresponding to that virtual service, and intercepts the 30 seconds of video before this time point as a target video clip. The clip is then stored, bound with the user identifier, the virtual service type, and the occurrence time, in the server's video clip database.
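The interception step amounts to computing cut points on the complete video's time axis from the recorded completion times. A minimal sketch, assuming a 30-second target duration as in the example:

    # Illustrative sketch only: turn recorded virtual-service completion
    # times into (start, end) cut points on the complete video's time axis.
    TARGET_DURATION_S = 30  # assumed target duration, as in the example

    def clip_ranges(completion_times_s):
        ranges = []
        for t in completion_times_s:
            start = max(0, t - TARGET_DURATION_S)  # 30 s before the time point
            ranges.append((start, t))
        return ranges

    # "11 minutes 20 seconds" is 680 s, as in the FPS example above.
    print(clip_ranges([680]))  # [(650, 680)]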
It should be noted that the above describes the server recording the complete video and intercepting it. In other possible embodiments, the terminal may record the complete video and send it to the server, which then intercepts it to obtain the target video clips; this is not limited in the embodiments of the application.
In one possible implementation, the server does not record the complete video but records only while the target virtual object is executing a virtual service. Taking server-side recording as an example: when the end user controls the target virtual object through a virtual battle, the server starts recording a video clip of the target virtual object in the virtual scene in response to the target virtual object starting to execute a virtual service, and stops recording in response to the target virtual object completing the virtual service. The recorded clip is itself a target video clip, which the server stores, bound with the user identifier, the virtual service type, and the occurrence time, in its video clip database. Again using an FPS game as the example: the server may start recording a video clip of end user "ABCD" in response to "ABCD" mounting a virtual motorcycle in the virtual scene, and stop recording in response to "ABCD" dismounting. The recorded clip is stored in the video clip database, bound with the user identifier, the virtual service type, and the occurrence time.
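The event-driven variant can be sketched as two callbacks around an assumed recorder object; the start()/stop() hooks and the store() signature are assumptions, not an API defined by the application:

    # Illustrative sketch only: record a clip only while a virtual service
    # is in progress, then bind and store it with its metadata.
    class ServiceRecorder:
        def __init__(self, recorder, clip_db):
            self.recorder = recorder  # assumed to expose start() and stop()
            self.clip_db = clip_db    # assumed to expose store(...)

        def on_service_started(self, user_id, service_type, start_time):
            self.recorder.start()

        def on_service_finished(self, user_id, service_type, end_time):
            clip = self.recorder.stop()  # the recorded segment is the target clip
            # Bind the clip, user identifier, service type, and occurrence time.
            self.clip_db.store(user_id=user_id, service_type=service_type,
                               occurred_at=end_time, clip=clip)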
It should be noted that the above describes the server recording the target video clip. In other possible implementations, the terminal may record the target video clip and send it to the server, which stores it in the video clip database; this is not limited in the application.
302. The server determines at least one target position in the virtual map, wherein the target position is determined according to the position information of the target virtual object executing at least one virtual service in the virtual scene.
The virtual map is used for displaying the terrain of the virtual scene, and the virtual map can be a plan view or a top view of the virtual scene.
In one possible implementation, for any target video clip, the server may determine, according to the position information of the target virtual object in the virtual scene, first position information of the target virtual object at the time the virtual service was executed. The server can then determine, according to this first position information, the target position corresponding to the target video clip on the virtual map. In this embodiment, the position where the virtual service occurred serves as the reference position for subsequently displaying the video clip, clearly indicating on the virtual map where the virtual object was when the virtual service occurred.
In one possible implementation, determining the target position includes: the server obtains, according to the identifier of the target virtual battle, the position information of the virtual objects in the corresponding virtual scene; determines, according to the user identifier, the position information of the target virtual object among them; determines the time point at which the target virtual object executed the virtual service and obtains the target virtual object's position at that time point, which is the first position information; and determines the target position on the virtual map according to the correspondence between positions in the virtual scene and positions on the virtual map.
Taking an FPS game as the virtual battle, the server obtains the position information of the game characters in the virtual scene of the target virtual battle according to the battle's identifier, and queries it with the end user's identifier to find the position information of the game character controlled by the end user. The server determines the time point at which that game character executed the virtual service, for example the time point at which it defeated another game character, queries the character's position information with this time point, and determines the character's position in the virtual scene at that moment, namely the first position information. The server can then convert the first position information into a target position on the virtual map according to the correspondence between the three-dimensional coordinates of the virtual scene and the two-dimensional coordinates of the virtual map.
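The application only states that a correspondence exists between scene coordinates and map coordinates; one common realization is a linear top-view projection, sketched below under that assumption:

    # Illustrative sketch only: project a 3D scene position onto the 2D
    # virtual map, assuming the map is an axis-aligned top view of the scene.
    def scene_to_map(pos, scene_size, map_size):
        x, _height, z = pos     # the height coordinate is dropped in a top view
        sx, sz = scene_size     # scene extent in scene units
        mw, mh = map_size       # map extent in pixels
        return (x / sx * mw, z / sz * mh)

    # A position in an assumed 8000 x 8000 scene on a 1024 x 1024 map:
    print(scene_to_map((4000.0, 120.0, 2000.0), (8000, 8000), (1024, 1024)))
    # (512.0, 256.0)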
303. The server determines the movement track of the target virtual object in the virtual map according to the position information of the target virtual object in the virtual scene.
In one possible embodiment, the server may determine multiple position points on the virtual map according to the position information of the target virtual object in the virtual scene, one position point per piece of position information. The server can then connect temporally adjacent position points, in the time order of the pieces of position information, to determine the movement track of the target virtual object. For example, because the position information includes the target virtual object's positions at different time points, the server can plot the corresponding position points on the virtual map, sort them in time order using the correspondence between positions and time points, and connect temporally adjacent points to obtain the movement track. Referring to fig. 4, there are four position points A, B, C, and D, corresponding to 11 minutes 20 seconds, 11 minutes 23 seconds, 11 minutes 28 seconds, and 11 minutes 35 seconds, respectively. The server connects them in time order to obtain the movement track A-B-C-D.
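A minimal sketch of this construction, using the four timed position points of fig. 4 (the map coordinates are invented for illustration):

    # Illustrative sketch only: sort map position points by time and connect
    # temporally adjacent points into the segments of the movement track.
    def movement_track(points):
        # points: list of (time_s, (map_x, map_y)) pairs
        ordered = sorted(points, key=lambda p: p[0])
        return [(a[1], b[1]) for a, b in zip(ordered, ordered[1:])]

    # Fig. 4: A at 11:20, B at 11:23, C at 11:28, D at 11:35 (in seconds).
    pts = [(680, (10, 10)), (683, (20, 12)), (688, (30, 18)), (695, (42, 25))]
    print(movement_track(pts))  # the three segments A-B, B-C, C-D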
In addition, the colors of the line segments of the movement track may differ according to the virtual vehicle the target virtual object used in the virtual scene. The server may obtain the virtual vehicle information of the target virtual object according to its position information; the vehicle information may include the vehicle type and the time points at which it was used. The server can then determine which vehicle type the target virtual object was using at each position point and set the color of each segment of the track according to that type.
Illustrating with an FPS game: when the end user's game character moves from point A to point B of the virtual scene without a virtual vehicle, that is, by walking, running, or swimming, the server may color that part of the movement track yellow; when the character crosses a virtual river from point B to point C using a first vehicle type, such as a virtual boat, the server may color that segment blue; and when the character moves from point C to point D using a second vehicle type, such as a virtual motorcycle, the server may color that segment green. These colors are given only for ease of understanding; in other possible embodiments the server may choose different colors as appropriate, which is not limited in the embodiments of the application.
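The per-segment coloring reduces to a lookup from the vehicle in use while a segment was traversed; the sketch below mirrors the walking/boat/motorcycle example (the color values themselves are illustrative, as the text notes):

    # Illustrative sketch only: color each track segment by the virtual
    # vehicle used while traversing it.
    SEGMENT_COLORS = {
        None: "yellow",         # on foot: walking, running, or swimming
        "boat": "blue",         # first vehicle type in the example
        "motorcycle": "green",  # second vehicle type in the example
    }

    def segment_color(vehicle_type):
        return SEGMENT_COLORS.get(vehicle_type, "yellow")

    print([segment_color(v) for v in (None, "boat", "motorcycle")])
    # ['yellow', 'blue', 'green']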
Step 303 above is described with the server determining the movement track. In other possible embodiments, the terminal may determine the movement track from the position information; this is not limited in the embodiments of the application.
304. The server sends the at least one target position and the movement track to the terminal.
In one possible implementation, in response to the terminal not having the virtual map stored thereon, the server may further send the virtual map to the terminal.
305. The terminal receives the at least one target position and the movement track sent by the server, and displays a playing interface of each of the at least one target video clip at the corresponding target position on the movement track.
The playing interface is used for displaying the target video clip; its display form may include, but is not limited to, a playing window, a playing link, or even a hidden playing link, and playing and browsing of the video clip are realized through this interface. For ease of understanding, the following takes a playing window as the example. The playing window may include a play button, giving the user an entry point to operate. The playing window is used for playing the target video clip; the terminal may play the clip in response to the play button being clicked, or may play the clip directly once the playing window is displayed, which is not limited in the embodiments of the application. In one possible implementation, besides the play button, a cover image of the target video clip may be displayed in the playing window; other content may of course also be displayed, which is not limited in the embodiments of the application.
In one possible implementation, after receiving the at least one target position and the movement track from the server, the terminal may locate the at least one target position on the movement track according to the correspondence between target positions and track points, and display a playing window of each target video clip at its target position. Referring to fig. 5, a movement track 502 is displayed in a virtual map 501, and a playing window 503 is displayed at a target position on the track. In this implementation, the terminal displays the movement track on the virtual map while displaying the playing windows at target positions on the track; the end user or other users can see from the track how the target virtual object moved during the virtual battle, so the efficiency of human-computer interaction is higher.
In one possible implementation, the terminal may display at least one anchor point at the at least one target position of the virtual map, one anchor point per target position, and display the playing window of each target video clip at the position of its anchor point. In other words, the terminal displays anchor points at target positions on the movement track; because each anchor point corresponds to one target position, which is a position where the target virtual object executed a virtual service, each anchor point corresponds to one virtual service. The playing windows of different target video clips can thus be shown on different anchor points, giving the user entry points for playback and making the video clips convenient to view. Referring to fig. 5, besides the movement track 502 and playing window 503 on the virtual map 501, the terminal may display an anchor point 504 on the movement track 502; the end user can switch between playing windows by clicking different anchor points. Referring to fig. 6, when the user clicks an anchor point 601, the terminal displays the playing window shown at 602 in response.
Optionally, the terminal may use different icons for anchor points according to their corresponding virtual services; the end user can then learn the virtual service type at a glance from the anchor icon, further improving the efficiency of human-computer interaction. Referring to fig. 7, if the virtual service is the target virtual object defeating another virtual object, the terminal may represent the anchor point with icon 701, and use the same icon 701 for other anchor points whose virtual service is also defeating another virtual object; when the virtual service is the target virtual object using a virtual vehicle, the terminal may represent the corresponding anchor point with icon 702. Icons 701 and 702 in fig. 7 are provided only for ease of understanding; in other possible embodiments the terminal may use other icons, which is not limited in the embodiments of the application.
In addition, to prevent playing windows from blocking one another, the terminal may determine the display position of each window according to the distances between anchor points, so that no two windows overlap. In this implementation, the terminal avoids overlapping windows, a cluttered interface, and the resulting inconvenience to the end user. Furthermore, when a virtual battle has a large number of target video clips, the terminal may refrain from displaying the playing windows of all anchor points at once and display only the windows of target anchor points, where the virtual service of a target anchor point is of a target type, for example the target virtual object defeating another virtual object. In this way, when there are many target video clips, the terminal preferentially displays the clips in which the end user performed best, simplifying the display and improving the efficiency of human-computer interaction.
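Both display rules in this paragraph can be sketched together: fall back to target-type anchors when clips are numerous, then place windows only where they keep a minimum distance from those already placed. The cap and distance threshold below are assumptions:

    # Illustrative sketch only: decide which anchors get a playing window.
    MAX_WINDOWS = 5          # assumed cap before falling back to target types
    MIN_ANCHOR_DIST = 80.0   # assumed minimum map distance between windows

    def _dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def anchors_to_show(anchors):
        # anchors: list of {"pos": (x, y), "service_type": str} -- assumed shape
        if len(anchors) > MAX_WINDOWS:
            anchors = [a for a in anchors if a["service_type"] == "defeat"]
        shown = []
        for a in anchors:
            # Skip anchors whose window would overlap an already placed one.
            if all(_dist(a["pos"], s["pos"]) >= MIN_ANCHOR_DIST for s in shown):
                shown.append(a)
        return shown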
In one possible implementation, besides the virtual map, anchor points, playing windows, and movement track, the terminal may also display the combat information of the virtual battle on the interface. The combat information may include at least one of: the number of other virtual objects the target virtual object defeated, the number of enemy virtual objects whose defeat the target virtual object assisted for its own camp, the damage the target virtual object caused to other virtual objects, and the target virtual object's ranking at the end of the virtual battle. Referring to fig. 5, the terminal can display the combat information in frame 505 of the virtual map 501. In this implementation, the end user not only sees the best video clips of the virtual battle in the target time period but also obtains the battle's combat information without logging into a different client, so the efficiency of human-computer interaction is higher.
Taking an FPS game as the example, the combat information may be the number of other game characters the end user's character eliminated, the number of enemy eliminations it assisted for its own camp, the damage it caused to other game characters, its in-game ranking, and the like.
The terminal may also display the elapsed time (that is, the duration) of the virtual battle on the interface, as well as a share button: by clicking it, the end user can share the interface with other users, who can then quickly see the user's highlights in the target time period, so the efficiency of human-computer interaction is high. Further, the terminal may display a barcode on the interface; other users can scan it with a mobile terminal to view the highlights of the user's virtual object in the target time period. Referring to fig. 5, the terminal may display the share button at position 506 of the virtual map 501 and the barcode at position 507.
After step 305, the terminal may check whether the target video clip for a playing window is stored locally. If it is, the terminal plays the clip in response to the end user's play operation. If it is not, the terminal sends the server a target-video-clip acquisition request carrying the clip's identifier; the server queries the video clip database with this identifier, obtains the clip, and sends it to the terminal. The terminal stores the received clip locally and, in response to the end user's play operation, retrieves it from storage and plays it in the corresponding playing window.
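The terminal-side behaviour described above is a cache-then-fetch pattern; a minimal sketch, with the local store and server interfaces assumed for illustration:

    # Illustrative sketch only: play a clip from local storage, fetching it
    # from the server on a cache miss.
    def play_clip(clip_id, local_store, server):
        clip = local_store.get(clip_id)        # assumed get/put interface
        if clip is None:
            # The acquisition request carries the target video clip's identifier.
            clip = server.fetch_clip(clip_id)  # assumed server call
            local_store.put(clip_id, clip)     # store before playback
        return clip                            # played in its playing window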
It should be noted that steps 301-305 can be used to show the target video clips of one virtual battle, which may be a battle that meets the target condition within a target time period or a battle specified by the user. For example, if the target time period is one week, the virtual battle may be the one in which the user performed best that week. This is not limited in the embodiments of the application.
According to the technical scheme provided by the embodiment of the application, the terminal can display the playing interface of the video clip of the target virtual object in the virtual battle on the target position of the virtual map, the terminal user can see the target video clip of the terminal user in the virtual scene on the virtual map, and the terminal user can quickly see the self-wonderful expression without watching the complete game video because the target video clip is the video clip which is better in the virtual scene of the terminal user, so that the efficiency of man-machine interaction is improved. In addition, the terminal can display the moving track of the target virtual object on the virtual map, and a terminal user can review the fighting condition in the virtual scene according to the moving track of the target virtual object, so that the efficiency of man-machine interaction is further improved. Furthermore, the terminal user can share the virtual map with the playing window to other users, the other users can conveniently see the wonderful expression of the terminal user on the virtual map, and the human-computer interaction efficiency is high.
In addition to showing the target video clip of a single virtual match through steps 301-305 described above, the video clip of a virtual match can also be shown through steps 801-805 described below. Fig. 8 is a flowchart of a video clip display method based on a virtual scene according to an embodiment of the present application. Referring to fig. 8, the method includes:
801. The server determines at least one target video clip of the target virtual object in the virtual scene, where the target video clip is used to describe the virtual service corresponding to the target virtual object in the virtual scene.
The target video clip is the most representative video clip in the virtual scene of the virtual battle. The target video clip may be obtained by the server at random from at least one video clip of the target virtual object in the virtual scene of the virtual battle, or the server may take a video clip corresponding to a virtual service of the target type as the target video clip, for example, a video clip in which the target virtual object successively defeats at least two other virtual objects in the virtual scene. Of course, the target video clip may also be determined according to the actual situation, which is not limited in this embodiment of the present application. It should be noted that each virtual match may correspond to one or more target video clips; for ease of understanding, the following description takes the case in which each virtual match corresponds to one target video clip.
In a possible implementation, the server may query the video clip database according to the end user identifier to obtain the target video clip of the target virtual object in the virtual scene of the virtual battle. The video clip database is maintained by the server and stores the target video clips of the virtual scenes of the virtual battles corresponding to each end user identifier. In this implementation, the server can directly and quickly acquire the target video clip according to the end user identifier, so the human-computer interaction efficiency is high. In addition, the end user may instruct the server to obtain the target video clip of the virtual scene of at least one virtual battle within a target time period; correspondingly, the server queries the video clip database according to the end user identifier and the target time period to obtain the target video clip of the target virtual object in the virtual scene of at least one virtual battle within that period. In this case, the subsequent display based on the target video clip is more personalized and the end user experience is better.
Taking the virtual battle being an FPS game as an example: the server may determine the end user's ID, such as "ABCD". According to the ID "ABCD", the server can determine at least one virtual match identifier corresponding to "ABCD" and query the video clip database according to the virtual match identifier to obtain at least one target clip of the virtual scene of the corresponding virtual match. In one possible implementation, when the end user wants to obtain the target video clip of the virtual scene of at least one virtual match within a target time period, the user can input the start and end dates of the target time period on the terminal. The terminal then sends a query instruction to the server carrying the end user's ID, such as "ABCD", and the start and end dates of the target time period, such as "March 1 to March 9". The server queries the target video clip database according to the query instruction to obtain at least one target video clip of the virtual scene of at least one virtual battle corresponding to "ABCD" and March 1 to March 9.
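As one way to picture this query, the sketch below filters an assumed in-memory clip table by user ID and date range; the record layout and the CLIP_DB table are illustrative assumptions, since the disclosure does not specify a database schema:

```python
from datetime import date

# Assumed record layout: one row per target clip of a user's match.
CLIP_DB = [
    {"user_id": "ABCD", "match_id": "m1", "date": date(2020, 3, 2), "clip_id": "c1"},
    {"user_id": "ABCD", "match_id": "m2", "date": date(2020, 3, 8), "clip_id": "c2"},
    {"user_id": "EFGH", "match_id": "m3", "date": date(2020, 3, 5), "clip_id": "c3"},
]

def query_clips(user_id: str, start: date, end: date) -> list:
    """Return the target clips of the user's matches within the time period."""
    return [r["clip_id"] for r in CLIP_DB
            if r["user_id"] == user_id and start <= r["date"] <= end]

# User "ABCD", March 1 to March 9 -> both of that user's clips in the period.
print(query_clips("ABCD", date(2020, 3, 1), date(2020, 3, 9)))  # ['c1', 'c2']
```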
In one possible implementation, the server may obtain the battle information of at least one virtual battle, determine at least one virtual battle whose battle information meets the target condition as a target virtual battle, and determine the target video clip of the target virtual object in the target virtual battle. For example, the server may query a battle information database maintained by the server according to the user identifier of the end user to obtain the battle information of at least one virtual battle corresponding to the target virtual object. The server determines at least one virtual battle whose battle information meets the target condition, and determines the target video clip of the target virtual object in the virtual scene of that battle according to the user identifier and the identifier of the battle. The battle information meeting the target condition may include at least one of the following: the number of other virtual objects defeated by the target virtual object is greater than a number threshold, the damage caused by the target virtual object to other virtual objects is greater than a damage threshold, and the survival time of the target virtual object in the virtual battle is less than a time threshold.
Taking the virtual battle being an FPS game as an example: the server can query the battle information database maintained by the server according to the end user's ID "ABCD" to obtain the battle information of at least one virtual battle corresponding to "ABCD". The server can screen the battle information of multiple virtual battles to obtain at least one target virtual battle meeting the target condition, where a target virtual battle may be one in which the combat performance of the user "ABCD" meets the target performance condition. The combat performance meeting the target performance condition may include at least one of the following: the number of other game characters defeated by the user "ABCD" in the virtual match is greater than a number threshold, the damage caused by the user "ABCD" to other game characters in the virtual match is greater than a damage threshold, and the survival time of the user "ABCD" in the virtual match is less than a time threshold. The server can then query the video clip database maintained by the server according to the identifier of the at least one target virtual match and the user "ABCD" to obtain the target video clip of the game character corresponding to "ABCD" in the virtual scene of the at least one target virtual match.
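A minimal sketch of this screening step follows; the threshold values and the field names of the battle-information records are assumptions chosen for illustration:

```python
# Assumed thresholds; the disclosure leaves the concrete values open.
KILL_THRESHOLD = 5
DAMAGE_THRESHOLD = 1000
TIME_THRESHOLD = 10  # minutes

def meets_target_condition(info: dict) -> bool:
    # At least one of the conditions listed above must hold.
    return (info["kills"] > KILL_THRESHOLD
            or info["damage"] > DAMAGE_THRESHOLD
            or info["survival_minutes"] < TIME_THRESHOLD)

fight_infos = [
    {"match_id": "m1", "kills": 7, "damage": 800, "survival_minutes": 25},
    {"match_id": "m2", "kills": 2, "damage": 300, "survival_minutes": 30},
]
target_matches = [i["match_id"] for i in fight_infos if meets_target_condition(i)]
print(target_matches)  # ['m1'] - only m1 satisfies a target condition
```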
Optionally, before performing step 801, the server may screen the target video clip of the virtual scene of the virtual match out of the video clips of that virtual scene, as follows:
In a possible implementation, the server screens according to the virtual service corresponding to each video clip of the virtual scene and determines the video clip corresponding to a virtual service of the target type as the target video clip. For example, the server queries the video clip database according to the identifier of the virtual match, determines the type of virtual service corresponding to each video clip, and determines the video clip corresponding to the virtual service of the target type as the target video clip, where the target type may be that the target virtual object defeats other virtual objects.
802. The server determines at least one target position on the virtual map, wherein the target position is determined according to the position information of the target virtual object executing at least one virtual service in the virtual scene.
The target position may be a position where the target virtual object is located on the virtual map at the end of the virtual match.
In one possible implementation, for any target video clip, the server may determine, according to the position information of the target virtual object in the virtual scene, second position information of the target virtual object in the virtual scene at the end of the virtual match, and then determine the target position corresponding to the target video clip on the virtual map according to that second position information. In this implementation, the server uses the position of the target virtual object at the end of the virtual match as the reference position for subsequently displaying the video clip, so as to clearly represent where the virtual object was on the virtual map when the virtual match ended.
In a possible implementation, determining the target position includes: the server acquires, according to the identifier of at least one target virtual battle, the position information of a plurality of virtual objects in the virtual scene corresponding to that battle. The server determines the position information of the target virtual object among the position information of the plurality of virtual objects according to the user identifier. The server may determine the time point at which the target virtual battle ends and obtain the position of the target virtual object at that time point from its position information, or the server may directly take the position corresponding to the latest time point in the position information; either way, this position is the second position information. The server then determines the target position on the virtual map based on the correspondence between virtual scene positions and virtual map positions and the second position information.
Taking the virtual battle being an FPS game as an example: the server may obtain the position information of a plurality of game characters in the virtual scene of at least one target virtual match according to the identifier of that match. The server may query the position information of the plurality of game characters according to the identifier of the end user to determine the position information corresponding to the game character controlled by the end user. The server may determine the ending time point of the target virtual match, query the position information of the game character controlled by the end user according to that ending time point, and determine the position of that game character in the virtual scene when the match ends, namely the second position information. The server may then convert the second position information into a target position on the virtual map according to the correspondence between the three-dimensional coordinates of the virtual scene and the two-dimensional coordinates of the virtual map.
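One possible (assumed) form of this coordinate correspondence is a linear scaling from the scene's ground plane onto map pixels, as sketched below; the scene size, map size and axis choice are illustrative assumptions, since the disclosure only requires that some correspondence exist between the two coordinate systems:

```python
SCENE_SIZE = 8000.0  # side length of the square scene, in scene units (assumed)
MAP_SIZE = 512       # side length of the virtual map, in pixels (assumed)

def scene_to_map(x: float, y: float, z: float) -> tuple:
    """Project a 3D scene position onto the 2D map; height (y) is dropped."""
    scale = MAP_SIZE / SCENE_SIZE
    return (round(x * scale), round(z * scale))

# Position of the player-controlled character when the match ends
# (the second position information):
second_position = (4000.0, 35.0, 2000.0)
print(scene_to_map(*second_position))  # (256, 128) - the target position
```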
803. The server sends the at least one target position to the terminal.
In one possible implementation, in response to the terminal not having the virtual map stored thereon, the server may further send the virtual map to the terminal.
804. And the terminal receives at least one target position and respectively displays a playing interface of at least one target video clip on the at least one target position of the virtual map.
The following description takes the playing interface being a play window as an example. In a possible implementation, the terminal may display at least one anchor point on the at least one target position of the virtual map, where one anchor point corresponds to one target position, and display the play window of each target video clip at the position corresponding to its anchor point. Since each target position is the position of the target virtual object on the virtual map when a virtual match ends, the terminal can simultaneously display, on one virtual map, the play windows of target video clips corresponding to the virtual scenes of multiple virtual matches. In this implementation, the play windows of different target video clips are displayed at different anchor points, providing the user with an entry for playback and making it convenient to view the video clips. The user can click an anchor point, and in response to the click operation the terminal displays the target video clip of the corresponding virtual battle. Referring to fig. 9, the terminal may display an anchor point 902 and a play window 903 on a virtual map 901.
In addition, to prevent play windows from blocking each other, the terminal may determine the display position of each play window according to the distance between the anchor points, so that the display positions of the play windows do not overlap, and display the play windows at those positions. In this implementation, the terminal determines the display positions from the distances between anchor points, avoiding a cluttered interface caused by overlapping play windows and the resulting inconvenience for the end user when watching the videos. In addition, when the virtual scene of a virtual battle corresponds to a large number of target video clips, the terminal may display only the play windows corresponding to target anchor points rather than all anchor points simultaneously, where the virtual service corresponding to a target anchor point is a virtual service of the target type, for example, the target virtual object defeating other virtual objects. In this way, when there are many target video clips, the terminal preferentially displays the clips in which the end user performed better, simplifying the display of play windows and improving the efficiency of human-computer interaction.
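A simple (assumed) way to realize this non-overlap rule is to place each window at its anchor point and then nudge it until it no longer intersects any window already placed, as in the sketch below; the window size and nudge step are illustrative choices, not taken from the disclosure:

```python
WIN_W, WIN_H, STEP = 96, 54, 10  # assumed window size and nudge step, in pixels

def overlaps(a: tuple, b: tuple) -> bool:
    # Two equal-sized windows intersect when both axis offsets are small enough.
    ax, ay = a
    bx, by = b
    return abs(ax - bx) < WIN_W and abs(ay - by) < WIN_H

def layout_windows(anchors: list) -> list:
    placed = []
    for ax, ay in anchors:
        pos = (ax, ay)
        while any(overlaps(pos, p) for p in placed):
            pos = (pos[0], pos[1] + STEP)  # move down until the window is clear
        placed.append(pos)
    return placed

anchors = [(100, 100), (110, 105), (400, 300)]  # the first two anchors are close
print(layout_windows(anchors))  # the second window is pushed clear of the first
```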
In addition, the terminal may display at least one of the following in the play window of the target video clip: the ranking data of the target virtual object after executing the virtual service, the number of defeated virtual objects, and the survival time of the target virtual object in the virtual battle. In this implementation, the user can watch the target video clips of the virtual scenes of different virtual battles through the play windows and directly obtain, within each window, at least one of the ranking data, the number of defeated virtual objects, and the survival time of the target virtual object in the corresponding battle, without checking the battle information separately, so the efficiency of human-computer interaction is high.
In one possible implementation, the terminal may display at least two interface switching buttons on the interface, which are used to switch between different interfaces. Each interface may be associated with a different target condition; for example, virtual matches in which the ranking of the target virtual object is above a ranking threshold may correspond to a first interface, and virtual matches in which the survival time of the target virtual object is below a time threshold may correspond to a second interface. The end user can toggle between the first interface and the second interface by touching the corresponding button. Correspondingly, when generating the first interface, the server determines the first target positions corresponding to the virtual battles in which the target virtual object ranked above the ranking threshold and sends at least one first target position to the terminal; after receiving them, the terminal displays the play window of at least one target video clip at each first target position of the virtual map. When generating the second interface, the server determines only the second target positions corresponding to the virtual battles in which the survival time of the target virtual object was less than the time threshold and sends at least one second target position to the terminal; after receiving them, the terminal displays the play window of at least one target video clip at each second target position of the virtual map.
For example, the first interface may be configured to display virtual matches in which the target virtual object ranked above a ranking threshold, and the ranking threshold may be 10; that is, in every target virtual match displayed on the first interface, the target virtual object ranked above 10. The server can determine, according to the battle information of different virtual matches, at least one target virtual match in which the target virtual object ranked above 10, and obtain the first target position corresponding to each such match. The server sends the at least one first target position to the terminal, and after receiving them the terminal displays the play window of at least one target video clip on the virtual map according to the at least one first target position, which constitutes the first interface. Further, the terminal may display on the first interface the number of virtual matches in which the target virtual object ranked above the ranking threshold. Referring to fig. 9, the terminal may display a switch button 905 corresponding to the first interface and an interface switch button 906 corresponding to the second interface on the first interface 904.
The second interface may be configured to display virtual matches in which the survival time of the target virtual object was less than a time threshold, and the time threshold may be 10 minutes; that is, in every target virtual match displayed on the second interface, the target virtual object survived for less than 10 minutes. The server can determine, according to the battle information of different virtual matches, at least one target virtual match in which the survival time of the target virtual object was less than 10 minutes, and obtain the second target position corresponding to each such match. The server sends the at least one second target position to the terminal, and after receiving them the terminal displays the play window of at least one target video clip on the virtual map according to the at least one second target position, which constitutes the second interface. Further, the terminal may display on the second interface the number of virtual matches whose survival time was less than the time threshold. Referring to fig. 10, the terminal may display a play window 1002 on a virtual map 1001, display the survival time in the play window 1002, and display an anchor point 1003 on the virtual map 1001.
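The partition of matches across the two interfaces can be sketched as two filters over the battle information; the thresholds follow the examples above (rank 10, 10 minutes), while the record fields are assumptions:

```python
RANK_THRESHOLD = 10   # ranking above (better than) 10 goes to the first interface
TIME_THRESHOLD = 10   # survival under 10 minutes goes to the second interface

matches = [
    {"id": "m1", "rank": 3,  "survival_minutes": 22},
    {"id": "m2", "rank": 45, "survival_minutes": 6},
    {"id": "m3", "rank": 8,  "survival_minutes": 9},
]

# A smaller rank number means a better (higher) ranking.
first_interface = [m["id"] for m in matches if m["rank"] < RANK_THRESHOLD]
second_interface = [m["id"] for m in matches if m["survival_minutes"] < TIME_THRESHOLD]

print(first_interface)   # ['m1', 'm3'] - ranked above the threshold
print(second_interface)  # ['m2', 'm3'] - survived less than 10 minutes
```

A match may satisfy both conditions, as m3 does here, in which case its play window would appear on both interfaces.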
In one possible implementation, in response to not acquiring the target video segment, the terminal may display on the virtual map at least one of ranking data of the target virtual object after performing the virtual service, the number of defeated virtual objects, and the survival time of the target virtual object in the virtual battle. Referring to fig. 9, in response to not acquiring the target video segment, the terminal may display a box as shown at 907 on the virtual map 901. Referring to fig. 10, in response to not acquiring the target video clip, the terminal may display a box as shown at 1004 on the virtual map 1001.
In one possible implementation, besides the virtual map, the anchor points and the play windows, the terminal can also display on the interface the battle information of the virtual battle in which the end user performed best within the target time period. In this implementation, the end user can not only see the video clips of the multiple virtual battles carried out within the target time period, but also intuitively obtain the battle information of the best-performing virtual battle, without logging in to a different client to obtain it, so the efficiency of human-computer interaction is higher.
Taking an FPS game as an example, the battle information may be the number of other game characters eliminated by the game character controlled by the end user, the number of game characters of a different camp eliminated by same-camp game characters with the assistance of the end user's game character, the damage caused by the end user's game character to other game characters, and the ranking of the end user's game character in the game.
Optionally, after step 804, the terminal may further perform step 805.
805. And the terminal automatically plays at least one target video clip according to the target playing sequence.
The target playing order is determined according to at least one of the ranking of the target virtual object in the corresponding virtual battle, the number of defeated virtual objects, and the survival time of the target virtual object in the virtual battle.
In one possible implementation, the target playing order is determined as follows: the server determines the target playing order according to the ranking of the target virtual object in each virtual battle; the higher the ranking, the earlier the clip plays, and the lower the ranking, the later it plays. When the target virtual object has the same ranking in at least two virtual battles, the server may further order those clips by the number of defeated virtual objects, for example, the more defeated virtual objects, the earlier the clip plays, and the fewer, the later. By analogy, the server can break further ties according to the survival time of the target virtual object in the virtual battle, as sketched below. Of course, the above takes determining the target playing order by ranking as an example; in other embodiments, the order may be determined by the number of defeated virtual objects or by the survival time of the target virtual object, which is not limited in this embodiment of the present application. After determining the target playing order, the server may send it to the terminal, and after receiving it the terminal automatically plays the target video clips in that order.
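A minimal sketch of this ordering is a composite sort key: ranking first, then the number of defeated virtual objects, then survival time as the final tie-breaker. The field names, and the assumption that longer survival sorts earlier, are illustrative:

```python
clips = [
    {"clip_id": "c1", "rank": 2, "kills": 9, "survival_minutes": 21},
    {"clip_id": "c2", "rank": 2, "kills": 9, "survival_minutes": 30},
    {"clip_id": "c3", "rank": 1, "kills": 4, "survival_minutes": 12},
]

# A smaller rank number is a higher ranking, so rank sorts ascending; kills
# and survival time break ties and sort descending (hence the negation).
play_order = sorted(
    clips,
    key=lambda c: (c["rank"], -c["kills"], -c["survival_minutes"]),
)
print([c["clip_id"] for c in play_order])  # ['c3', 'c2', 'c1']
```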
Step 805 above is described by taking the example in which the server determines the target playing order and sends it to the terminal; in other possible embodiments, the terminal may itself determine the target playing order and play the target video clips accordingly, which is not limited in this embodiment of the application.
Before step 805, the terminal may determine whether a target video clip corresponding to the play window is stored in its storage space. In response to the storage space storing the target video clip corresponding to the play window, the terminal plays that clip according to the play operation of the end user. In response to the storage space not storing the target video clip corresponding to the play window, the terminal sends a target video clip acquisition request carrying the identifier of the target video clip to the server. After receiving the request, the server queries the target video clip database according to the identifier, obtains the corresponding target video clip, and sends it to the terminal. Upon receiving the target video clip from the server, the terminal stores it in the storage space; in response to the play operation of the end user, the terminal retrieves the target video clip from the storage space and plays it in the corresponding play window.
It should be noted that the above steps 801-805 can be used to display target video clips of multiple virtual matches, where the multiple virtual matches may be those within a target time period or all virtual matches carried out by the user. If the target time period is one week, the multiple virtual matches in the target time period may be the matches in which the user performed better, or worse, within that week. The embodiment of the present application does not limit this.
According to the technical scheme provided by the embodiment of the application, the terminal can display the play window of the video clip of the target virtual object at the target position of the virtual map, so the end user can see his or her target video clips of the virtual scene on the virtual map. Because each target video clip is a clip in which the end user performed well in the virtual scene, the user can quickly see his or her highlight moments without watching the complete game video, which improves the efficiency of human-computer interaction. In addition, the terminal can display different interfaces according to the performance of the target virtual object, so the end user can review both well-played and poorly-played video clips through the different interfaces, which helps the user improve at virtual battles. Meanwhile, the terminal directly displays the most representative video clips of the virtual scenes of different virtual battles, so the end user does not need to watch the complete videos of those battles, further improving the efficiency of human-computer interaction.
Steps 201-203 above are described with the terminal as the execution subject, and steps 301-305 and 801-805 are described in terms of the interaction between the terminal and the server; the following steps 1101-1104 are described with the server as the execution subject.
Fig. 11 is a flowchart of a method for displaying a video segment based on a virtual scene according to an embodiment of the present application, where the method may be applied to a server, and referring to fig. 11, the method includes:
1101. the server determines at least one target video clip of the target virtual object in the virtual scene, wherein the target video clip is used for describing the virtual service corresponding to the target virtual object in the virtual scene.
1102. The server determines at least one target position on the virtual map according to the position information of the target virtual object in the virtual scene.
In a possible implementation manner, for any target video clip, the server may determine, according to the position information of the target virtual object in the virtual scene, first position information of the target virtual object in the virtual scene when the virtual service is executed; the server can determine a target position corresponding to the target video clip on the virtual map according to the first position information of the target virtual object in the virtual scene.
In one possible implementation manner, for any target video clip, the server may determine second position information of the target virtual object in the virtual scene at the end of the virtual match according to the position information of the target virtual object in the virtual scene. The server can determine a target position corresponding to the target video clip on the virtual map according to the second position information of the target virtual object in the virtual scene.
Optionally, in response to determining the target position from the first position information, the server may perform step 1103; in response to determining the target position from the second position information, the server may perform step 1104.
1103. And the server determines the movement track of the target virtual object in the virtual map according to the position information of the target virtual object in the virtual scene.
In one possible implementation, the server determines a plurality of position points on the virtual map according to the position information of the target virtual object in the virtual scene, where each position point corresponds to one piece of position information, and determines the connecting lines between temporally adjacent position points according to the time order of the pieces of position information, thereby determining the moving track of the target virtual object.
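This track construction amounts to sorting the sampled positions by time and joining temporally adjacent points, as in the sketch below. The per-segment color keyed to the virtual vehicle follows the optional coloring rule described elsewhere in this application; all field names and the two colors are assumptions:

```python
samples = [
    {"t": 2, "point": (120, 80), "vehicle": "on_foot"},
    {"t": 1, "point": (100, 90), "vehicle": "on_foot"},
    {"t": 3, "point": (200, 60), "vehicle": "car"},
]

def build_track(samples: list) -> list:
    ordered = sorted(samples, key=lambda s: s["t"])  # time order of the positions
    segments = []
    for a, b in zip(ordered, ordered[1:]):
        # Connect temporally adjacent position points; color the segment by the
        # virtual vehicle in use when the later point was sampled.
        segments.append({"from": a["point"], "to": b["point"],
                         "color": "blue" if b["vehicle"] == "car" else "green"})
    return segments

for seg in build_track(samples):
    print(seg)  # two segments; the car leg gets a different color
```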
1104. And the server sends the target video clip and the target position to the terminal.
In a possible implementation, the server may send the clip information of the target video clip and the target position to the terminal, and the terminal displays the playing interface of the target video clip at the target position of the virtual map according to the clip information. Taking the playing interface being a play window as an example, the clip information of the target video clip may include a cover and a link of the target video clip. After receiving the clip information and the target position, the terminal may display the play window of the target video clip at the target position of the virtual map, where the play window may include the cover of the target video clip and a play button, and the play button may correspond to the link of the target video clip.
After the terminal displays the playing interface of the target video clip at the target position of the virtual map, the terminal can play the target video clip at that position in response to the play operation of the end user. For example, in response to the play button being triggered, the terminal may send a target video clip acquisition request to the server through the link of the target video clip, where the link carries the identifier of the target video clip. The server receives the request, queries the video clip database according to the identifier of the target video clip, obtains the corresponding target video clip, and sends it to the terminal. After receiving the target video clip, the terminal can play it in the corresponding play window.
In addition, the server can also directly send the target video clip and the target position to the terminal, and after receiving the target video clip the terminal can store it in its storage space. In response to the play button of the play window being triggered, the terminal plays the target video clip in the play window. In one possible embodiment, in response to the server performing step 1103, the server may send the moving track of the target virtual object to the terminal in addition to the target video clip and the target position.
Through steps 1101-1104, the server performs the processes of determining the target video clip, the target position and the moving track in the cloud, and the terminal only needs to carry out the subsequent display according to what the server has determined, which saves the computing resources of the terminal and improves the display efficiency.
Fig. 12 is a block diagram of a video clip display apparatus based on a virtual scene according to an embodiment of the present application, and referring to fig. 12, the apparatus includes: an obtaining module 1201, a determining module 1202, and a displaying module 1203.
An obtaining module 1201, configured to determine at least one target video segment of a target virtual object in a virtual scene, where the target video segment is used to describe a virtual service corresponding to the target virtual object in the virtual scene.
A determining module 1202, configured to determine at least one target position on the virtual map, where the target position is determined according to the position information of at least one virtual service executed by the target virtual object in the virtual scene.
A display module 1203, configured to display, on at least one target position of the virtual map, a playing interface of at least one target video clip respectively.
In one possible embodiment, the at least one target position is obtained as follows:
for any target video clip, determining, according to the position information of the target virtual object in the virtual scene, first position information of the target virtual object in the virtual scene when the virtual service is executed;
and determining the target position corresponding to the target video clip on the virtual map according to the first position information of the target virtual object in the virtual scene.
In one possible implementation, the display module is configured to determine a moving track of the target virtual object on the virtual map, where the moving track is determined based on the position information of the target virtual object in the virtual scene of the virtual battle, and to display the playing interface of the at least one target video clip at the at least one target position on the moving track.
In one possible embodiment, the moving track is obtained as follows: a plurality of position points are determined on the virtual map according to the position information of the target virtual object in the virtual scene of the virtual battle, where each position point corresponds to one piece of position information; the connecting lines between temporally adjacent position points are determined according to the time order of the pieces of position information, thereby determining the moving track of the target virtual object.
In one possible embodiment, the colors of the line segments of the moving track differ according to the virtual vehicle used by the target virtual object.
In one possible embodiment, the at least one target position is obtained as follows:
for any target video clip, determining, according to the position information of the target virtual object in the virtual scene of the competitive virtual battle, second position information of the target virtual object in the virtual scene at the end of the competitive virtual battle;
and determining the target position corresponding to the target video clip on the virtual map according to the second position information of the target virtual object in the virtual scene.
In a possible embodiment, the display module is configured to display at least one anchor point on the at least one target position of the virtual map, where one anchor point corresponds to one target position, and to display the playing interface of the at least one target video clip at the position corresponding to the at least one anchor point.
In a possible implementation, the playing interface of the target video clip is a play window, and the play window further includes at least one of the ranking data of the target virtual object after executing the virtual service, the number of defeated virtual objects, and the survival time of the target virtual object in the competitive virtual battle.
In one possible embodiment, the apparatus further comprises:
and a playing module, configured to automatically play the at least one target video clip according to the target playing order if the at least one target video clip belongs to at least two competitive virtual battles respectively.
In one possible embodiment, the target playing order is determined according to at least one of the ranking of the target virtual objects in the corresponding competitive virtual battle, the number of defeated virtual objects, and the survival time of the target virtual objects in the competitive virtual battle.
In one possible implementation, the display module is further configured to display, on the virtual map, at least one of ranking data of the target virtual object after the virtual service is executed, the number of defeated virtual objects, and the survival time of the target virtual object in the competitive virtual battle in response to the target video clip not being acquired.
In one possible embodiment, the target video segment is obtained by the following process:
recording the video of the target virtual object in the virtual scene in response to the target virtual object starting to execute the virtual service;
and in response to the target virtual object finishing the virtual service, stopping the recording and taking the recorded video as the target video clip.
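These two responses amount to a small recording state machine: start buffering frames when the virtual service begins, and stop and emit the clip when it ends. The sketch below is an assumed illustration; the Recorder helper and its frame representation are not part of the disclosure:

```python
class Recorder:
    def __init__(self):
        self.recording = False
        self.frames = []

    def on_service_start(self):
        self.recording = True
        self.frames = []  # begin a fresh clip for this virtual service

    def capture(self, frame):
        if self.recording:
            self.frames.append(frame)

    def on_service_end(self):
        self.recording = False
        return list(self.frames)  # the recorded video becomes the target clip

rec = Recorder()
rec.on_service_start()
for f in ("f1", "f2", "f3"):
    rec.capture(f)
clip = rec.on_service_end()
print(clip)  # ['f1', 'f2', 'f3']
```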
According to the technical scheme provided by the embodiment of the application, the terminal can display the playing interface of the video clip of the target virtual object at the target position of the virtual map, so the end user can see his or her target video clip of the virtual scene on the virtual map. Because the target video clip is a clip in which the end user performed well in the virtual scene, the user can quickly see his or her highlight moments without watching the complete game video, which improves the efficiency of human-computer interaction.
Fig. 13 is a structural diagram of a video segment display apparatus based on a virtual scene according to an embodiment of the present application, and referring to fig. 13, the apparatus includes: an acquisition module 1301, a target position determination module 1302, and a trajectory determination module 1303.
The obtaining module 1301 is configured to determine at least one target video clip of the target virtual object in the virtual scene, where the target video clip is used to describe a virtual service corresponding to the target virtual object in the virtual scene.
The target position determining module 1302 is configured to determine at least one target position on the virtual map according to the position information of the target virtual object in the virtual scene.
The track determining module 1303 is configured to determine a moving track of the target virtual object in the virtual map according to the position information of the target virtual object in the virtual scene.
In a possible implementation, the target position determining module 1302 is configured to, for any target video clip, determine, according to the position information of the target virtual object in the virtual scene, first position information of the target virtual object in the virtual scene when the virtual service is executed, and determine the target position corresponding to the target video clip on the virtual map according to that first position information.
In a possible implementation, the target position determining module 1302 is configured to, for any target video clip, determine, according to the position information of the target virtual object in the virtual scene, second position information of the target virtual object in the virtual scene at the end of the virtual match, and determine the target position corresponding to the target video clip on the virtual map according to that second position information.
In a possible implementation, the trajectory determining module 1303 is configured to determine a plurality of position points on the virtual map according to the position information of the target virtual object in the virtual scene, where each position point corresponds to one piece of position information, and to determine the connecting lines between temporally adjacent position points according to the time order of the pieces of position information, thereby determining the moving track of the target virtual object.
Through the device provided by the embodiment of the application, the server can execute the process of determining the target video clip, the target position and the moving track at the cloud end, and the terminal only needs to perform subsequent display according to the target video clip, the target position and the moving track determined by the server, so that the computing resources of the terminal are saved, and the computing efficiency is improved.
The computer device in the embodiment of the present application may be implemented as a terminal, and a structure of the terminal is described below. Fig. 14 is a block diagram of a terminal according to an embodiment of the present application. The terminal 1400 may be: a smartphone, a tablet, a laptop, or a desktop computer. Terminal 1400 can also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, terminal 1400 includes: one or more processors 1401 and one or more memories 1402.
Processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). Processor 1401 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1402 may include one or more computer-readable storage media, which may be non-transitory. Memory 1402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1402 is used to store at least one instruction for execution by processor 1401 to implement the virtual scene based video clip display method provided by the method embodiments herein.
In some embodiments, terminal 1400 may further optionally include: a peripheral device interface 1403 and at least one peripheral device. The processor 1401, the memory 1402, and the peripheral device interface 1403 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, a display 1405, a camera 1406, audio circuitry 1407, a positioning component 1408, and a power supply 1409.
The peripheral device interface 1403 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, memory 1402, and peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1401, the memory 1402, and the peripheral device interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to capture touch signals at or above the surface of the display screen 1405. The touch signal may be input to the processor 1401 for processing as a control signal. At this point, the display 1405 may also be used to provide virtual keys and/or a virtual keyboard, also referred to as soft keys and/or a soft keyboard. In some embodiments, the display 1405 may be one, providing the front panel of the terminal 1400; in other embodiments, display 1405 may be at least two, respectively disposed on different surfaces of terminal 1400 or in a folded design; in still other embodiments, display 1405 may be a flexible display disposed on a curved surface or on a folded surface of terminal 1400. Even further, the display 1405 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1405 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 1406 is used to capture images or video. Optionally, camera assembly 1406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1401 for processing or inputting the electric signals to the radio frequency circuit 1404 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is then used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1407 may also include a headphone jack.
The positioning component 1408 serves to locate the current geographic position of the terminal 1400 for navigation or LBS (Location Based Service). The positioning component 1408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1409 is used to power the various components of terminal 1400. The power source 1409 may be alternating current, direct current, disposable or rechargeable. When the power source 1409 comprises a rechargeable battery, the rechargeable battery can support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1400 also includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: acceleration sensor 1411, gyroscope sensor 1412, pressure sensor 1413, fingerprint sensor 1414, optical sensor 1415, and proximity sensor 1416.
The acceleration sensor 1411 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the terminal 1400. For example, the acceleration sensor 1411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1401 can control the display 1405 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411. The acceleration sensor 1411 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1412 may detect a body direction and a rotation angle of the terminal 1400, and the gyro sensor 1412 and the acceleration sensor 1411 may cooperate to collect a 3D motion of the user on the terminal 1400. The processor 1401 can realize the following functions according to the data collected by the gyro sensor 1412: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1413 may be disposed on the side frames of terminal 1400 and/or underlying display 1405. When the pressure sensor 1413 is disposed on the side frame of the terminal 1400, the user's holding signal of the terminal 1400 can be detected, and the processor 1401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1413. When the pressure sensor 1413 is disposed at the lower layer of the display screen 1405, the processor 1401 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1405. The operability control comprises at least one of a key control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1414 is used for collecting a fingerprint of a user, and the processor 1401 identifies the user according to the fingerprint collected by the fingerprint sensor 1414, or the fingerprint sensor 1414 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 1401 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for, and changing settings, etc. Fingerprint sensor 1414 may be disposed on the front, back, or side of terminal 1400. When a physical button or vendor Logo is provided on terminal 1400, fingerprint sensor 1414 may be integrated with the physical button or vendor Logo.
The optical sensor 1415 is used to collect ambient light intensity. In one embodiment, processor 1401 may control the display brightness of display 1405 based on the ambient light intensity collected by optical sensor 1415. Specifically, when the ambient light intensity is high, the display luminance of the display screen 1405 is increased; when the ambient light intensity is low, the display brightness of the display screen 1405 is reduced. In another embodiment, the processor 1401 can also dynamically adjust the shooting parameters of the camera assembly 1406 according to the intensity of the ambient light collected by the optical sensor 1415.
Proximity sensor 1416, also known as a distance sensor, is typically disposed on the front panel of terminal 1400. The proximity sensor 1416 is used to collect the distance between the user and the front surface of the terminal 1400. In one embodiment, when proximity sensor 1416 detects that the distance between the user and the front face of terminal 1400 is gradually decreasing, processor 1401 controls display 1405 to switch from the bright screen state to the dark screen state; when proximity sensor 1416 detects that the distance between the user and the front face of terminal 1400 is gradually increasing, processor 1401 controls display 1405 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 14 is not intended to be limiting with respect to terminal 1400 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The computer device in the embodiment of the present application may be implemented as a server, and a structure of the server is described below. Fig. 15 is a block diagram of a server 1500 according to an embodiment of the present application, where the server 1500 may generate a relatively large difference due to a difference in configuration or performance, and may include one or more processors (CPUs) 1501 and one or more memories 1502, where at least one instruction is stored in the one or more memories 1502, and the at least one instruction is loaded and executed by the one or more processors 1501 to implement the method on the server side. Of course, the server 1500 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input and output, and the server 1500 may also include other components for implementing the functions of the device, which is not described herein again.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, comprising instructions executable by a processor to perform the virtual scene based video clip display method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, which includes one or more instructions executable by a processor of an electronic device to perform the virtual scene based video segment display method provided in the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for displaying a video clip based on a virtual scene, the method comprising:
determining at least one target video clip of a target virtual object in a virtual scene, wherein the target video clip is used for describing a virtual service corresponding to the target virtual object in the virtual scene;
determining at least one target location in a virtual map, wherein the target location is determined according to location information of at least one virtual service executed by the target virtual object in the virtual scene;
and displaying the playing interface of the at least one target video clip on the at least one target position of the virtual map respectively.
2. The method of claim 1, wherein the at least one target location is obtained by:
for any target video clip, determining first position information of the target virtual object in the virtual scene when the virtual service is executed according to the position information of the target virtual object in the virtual scene;
and determining a target position corresponding to the target video clip on the virtual map according to first position information of the target virtual object in the virtual scene.
3. The method of claim 1, wherein the displaying the playing interface of the at least one target video clip on the at least one target location of the virtual map respectively comprises:
determining a movement track of the target virtual object in the virtual map, wherein the movement track is determined based on the position information of the target virtual object in the virtual scene;
and respectively displaying the playing interface of the at least one target video clip at the at least one target position of the movement track.
4. The method of claim 3, wherein the movement trajectory is obtained by:
determining a plurality of position points on the virtual map according to the position information of the target virtual object in the virtual scene, wherein one position point corresponds to one piece of position information;
and determining a connecting line between position points adjacent in time according to the time sequence of the plurality of pieces of position information, so as to determine the movement track of the target virtual object.
5. The method of claim 3, wherein colors of line segments of the movement track differ according to the virtual vehicle used by the target virtual object.
6. The method of claim 1, wherein the at least one target location is obtained by:
for any target video clip, determining second position information of the target virtual object in the virtual scene when the virtual battle is finished according to the position information of the target virtual object in the virtual scene;
and determining a target position corresponding to the target video clip on the virtual map according to second position information of the target virtual object in the virtual scene.
7. The method of claim 1, wherein the displaying the playing interface of the at least one target video clip on the at least one target location of the virtual map respectively comprises:
respectively displaying at least one anchor point on the at least one target position of the virtual map, wherein one anchor point corresponds to one target position;
and respectively displaying the playing interfaces of the at least one target video clip at the corresponding positions of the at least one anchor point.
8. The method according to claim 1, wherein the playing interface of the target video clip is a playing window, and the playing window further includes at least one of ranking data of the target virtual object after executing the virtual service, a number of defeated virtual objects, and a survival time of the target virtual object in a virtual battle.
9. The method of claim 1, wherein after displaying the playing interface of the at least one target video clip on the at least one target location of the virtual map respectively, the method further comprises:
and if the at least one target video clip belongs to at least two virtual battles respectively, automatically playing the at least one target video clip according to a target playing order.
10. The method of claim 9, wherein the target playing order is determined based on at least one of a ranking of the target virtual object in the corresponding virtual battle, a number of defeated virtual objects, and a survival time of the target virtual object in the virtual battle.
11. The method of claim 1, further comprising:
and in response to not acquiring the target video clip, displaying at least one of ranking data of the target virtual object after executing the virtual service, the number of defeated virtual objects and the survival time of the target virtual object in the virtual battle on the virtual map.
12. The method of claim 1, wherein the target video clip is obtained by:
in response to the target virtual object starting to execute the virtual service, recording a video of the target virtual object in the virtual scene;
and in response to the target virtual object finishing the virtual service, stopping the recording and taking the recorded video as the target video clip.
13. An apparatus for displaying a video clip based on a virtual scene, the apparatus comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for determining at least one target video clip of a target virtual object in a virtual scene, and the target video clip is used for describing a virtual service corresponding to the target virtual object in the virtual scene;
the determining module is used for determining at least one target position on a virtual map, wherein the target position is determined according to the position information of at least one virtual service executed by the controlled target virtual object in the virtual scene;
and the display module is used for respectively displaying the playing interface of the at least one target video clip on the at least one target position of the virtual map.
14. A computer device comprising one or more processors and one or more memories having stored therein at least one instruction that is loaded and executed by the one or more processors to perform operations performed by the virtual scene based video clip display method of any one of claims 1 to 12.
15. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor, to implement the operations performed by the virtual scene-based video clip displaying method according to any one of claims 1 to 12.
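Editorial note, not part of the claims: as a non-limiting sketch, the following plain Java shows one way the operations recited in claims 2, 4, and 12 might be realized, namely a linear scene-to-map coordinate mapping, a trajectory built by connecting time-adjacent position points, and a clip recorder bracketed by the start and end of a virtual service. All class and method names, the 512-pixel map size, and the sample data are hypothetical.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class HighlightMapSketch {

    // A sampled position of the target virtual object, with a timestamp (cf. claim 4).
    record ScenePoint(double x, double z, long timeMs) {}

    // Cf. claim 2: linearly map a scene-space position onto virtual-map pixels.
    static int[] toMapPixel(ScenePoint p, double sceneSize, int mapSizePx) {
        int px = (int) (p.x() / sceneSize * mapSizePx);
        int py = (int) (p.z() / sceneSize * mapSizePx);
        return new int[] { px, py };
    }

    // Cf. claim 4: sort samples by time, then connect adjacent points into segments.
    static List<ScenePoint[]> buildTrajectory(List<ScenePoint> points) {
        List<ScenePoint> sorted = new ArrayList<>(points);
        sorted.sort(Comparator.comparingLong(ScenePoint::timeMs));
        List<ScenePoint[]> segments = new ArrayList<>();
        for (int i = 0; i + 1 < sorted.size(); i++) {
            segments.add(new ScenePoint[] { sorted.get(i), sorted.get(i + 1) });
        }
        return segments;
    }

    // Cf. claim 12: a clip is bracketed by the start and end of a virtual service.
    static class ClipRecorder {
        private long startMs = -1;

        void onServiceStarted(long nowMs) { startMs = nowMs; } // start recording

        long[] onServiceFinished(long nowMs) { // stop recording, return clip bounds
            long[] clip = { startMs, nowMs };
            startMs = -1;
            return clip;
        }
    }

    public static void main(String[] args) {
        List<ScenePoint> samples = List.of(
                new ScenePoint(50, 50, 1_000),
                new ScenePoint(100, 200, 2_000),
                new ScenePoint(400, 380, 3_000));
        // Anchor a play window where the service was executed (cf. claim 2).
        int[] anchor = toMapPixel(samples.get(2), 1_000.0, 512);
        System.out.println("anchor=(" + anchor[0] + "," + anchor[1] + ")"
                + ", segments=" + buildTrajectory(samples).size());
    }
}

A renderer would then draw each trajectory segment on the virtual map and open the recorded clip's playing window at the anchored pixel.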
CN202010432606.8A 2020-05-20 2020-05-20 Video clip display method, device, equipment and medium based on virtual scene Active CN111544897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010432606.8A CN111544897B (en) 2020-05-20 2020-05-20 Video clip display method, device, equipment and medium based on virtual scene

Publications (2)

Publication Number Publication Date
CN111544897A (en) 2020-08-18
CN111544897B (en) 2023-03-10

Family

ID=71999882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010432606.8A Active CN111544897B (en) 2020-05-20 2020-05-20 Video clip display method, device, equipment and medium based on virtual scene

Country Status (1)

Country Link
CN (1) CN111544897B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090131177A1 (en) * 2007-01-29 2009-05-21 Sony Online Entertainment Llc System and method for creating, editing, and sharing video content relating to video game events
US20180020243A1 (en) * 2016-07-13 2018-01-18 Yahoo Holdings, Inc. Computerized system and method for automatic highlight detection from live streaming media and rendering within a specialized media player
CN108295468A (en) * 2018-02-28 2018-07-20 网易(杭州)网络有限公司 Information processing method, equipment and the storage medium of game
CN109672899A (en) * 2018-12-13 2019-04-23 南京邮电大学 The Wonderful time of object game live scene identifies and prerecording method in real time
CN110602544A (en) * 2019-09-12 2019-12-20 腾讯科技(深圳)有限公司 Video display method and device, electronic equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022257365A1 (en) * 2021-06-11 2022-12-15 完美世界征奇(上海)多媒体科技有限公司 Video generation method and apparatus, storage medium, and electronic apparatus
CN113559503A (en) * 2021-06-30 2021-10-29 上海掌门科技有限公司 Video generation method, device and computer readable medium
CN113559503B (en) * 2021-06-30 2024-03-12 上海掌门科技有限公司 Video generation method, device and computer readable medium
CN114845136A (en) * 2022-06-28 2022-08-02 北京新唐思创教育科技有限公司 Video synthesis method, device, equipment and storage medium
CN114845136B (en) * 2022-06-28 2022-09-16 北京新唐思创教育科技有限公司 Video synthesis method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111544897B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN110141859B (en) Virtual object control method, device, terminal and storage medium
CN111013142B (en) Interactive effect display method and device, computer equipment and storage medium
CN109529356B (en) Battle result determining method, device and storage medium
CN111921197B (en) Method, device, terminal and storage medium for displaying game playback picture
CN110694273A (en) Method, device, terminal and storage medium for controlling virtual object to use prop
CN111603771B (en) Animation generation method, device, equipment and medium
CN111544897B (en) Video clip display method, device, equipment and medium based on virtual scene
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN111589146A (en) Prop operation method, device, equipment and storage medium based on virtual environment
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN110448908B (en) Method, device and equipment for applying sighting telescope in virtual environment and storage medium
CN113058264A (en) Virtual scene display method, virtual scene processing method, device and equipment
CN111589127A (en) Control method, device and equipment of virtual role and storage medium
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN112717396A (en) Interaction method, device, terminal and storage medium based on virtual pet
CN112569596A (en) Video picture display method and device, computer equipment and storage medium
CN112221142A (en) Control method and device of virtual prop, computer equipment and storage medium
CN111760281A (en) Method and device for playing cut-scene animation, computer equipment and storage medium
CN110833695A (en) Service processing method, device, equipment and storage medium based on virtual scene
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN113181647A (en) Information display method, device, terminal and storage medium
CN112755517A (en) Virtual object control method, device, terminal and storage medium
CN112274936A (en) Method, device, equipment and storage medium for supplementing sub-props of virtual props
CN112843703B (en) Information display method, device, terminal and storage medium
CN113599810B (en) Virtual object-based display control method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40027946)
GR01 Patent grant