CN108924632A - Processing method and apparatus, and storage medium, for an interactive application scene - Google Patents
- Publication number
- CN108924632A (application number CN201810771582.1A)
- Authority
- CN
- China
- Prior art keywords
- client
- server
- state
- scene
- simulated object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4334—Recording operations
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/70—Game security or game management aspects
- A63F13/79—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/70—Game security or game management aspects
- A63F13/79—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
- A63F13/798—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/85—Providing additional services to players
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4781—Games
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/20—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of the game platform
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/55—Details of game data or player data management
- A63F2300/5546—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
- A63F2300/5553—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/55—Details of game data or player data management
- A63F2300/5546—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
- A63F2300/558—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history by assessing the players' skills or ranking
Abstract
Embodiments of the invention disclose a processing method and apparatus, and a storage medium, for an interactive application scene, used to visually play back an interaction process so that the user can understand the entire interaction process in detail. An embodiment of the invention provides a processing method for an interactive application scene, including: a client receives video playback information sent by a server, the video playback information including record data generated while a simulated object executes in the interactive application scene, the simulated object being controlled by the client; the client extracts, from the record data, the scene states and corresponding object states generated in multiple logical frames; the client generates a scene restoration video according to the scene states and object states generated in the multiple logical frames and plays the scene restoration video, the scene restoration video being used to play back the execution process of the simulated object in the interactive application scene.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a processing method and apparatus, and a storage medium, for an interactive application scene.
Background art
A strategy game provides the player with an environment in which to think through and handle relatively complex matters. While playing, the player must ensure that the objects under his or her control reach the goals defined by the game, and, within the limits the game permits, work out methods to accomplish those goals as far as possible.
A strategy game allows the player to freely control, manage, and use the people or things in the game, and to reach the goals the game requires through this freedom combined with the tactics the player devises to confront the enemy. Given these characteristics, the large number of repeated units, game rules, and models involved in a strategy game occupy considerable system resources.
After a battle in a strategy game, the system sends a battle report mail to both sides of the battle, which contains information about the battle; the player can learn the details of the battle by reading this mail. The battle report mail presents the course and result of a battle in the form of text or a list, and in this way reflects the battle result in the strategy game. In the prior art, only the battle result is shown through the battle report mail, so the user cannot understand the details of the battle in depth, and the randomness and tactics within the battle cannot be well conveyed to the user.
Summary of the invention
Embodiments of the invention provide a processing method and apparatus, and a storage medium, for an interactive application scene, used to visually play back an interaction process so that the user can understand the entire interaction process in detail.
Embodiments of the present invention provide the following technical solutions:
In one aspect, an embodiment of the present invention provides a processing method for an interactive application scene, including:
a client receives video playback information sent by a server, the video playback information including: record data generated while a simulated object executes in an interactive application scene, the simulated object being controlled by the client;
the client extracts, from the record data, the scene states and corresponding object states generated in multiple logical frames;
the client generates a scene restoration video according to the scene states and object states generated in the multiple logical frames, and plays the scene restoration video, the scene restoration video being used to play back the execution process of the simulated object in the interactive application scene.
In another aspect, an embodiment of the present invention further provides a processing method for an interactive application scene, including:
a server records the scene states generated by an interactive application scene in multiple logical frames, and records the object state corresponding to each logical frame while a simulated object executes in the interactive application scene, the simulated object being controlled by a client;
the server generates, according to the scene states and corresponding object states generated in the multiple logical frames, record data of the simulated object's execution in the interactive application scene;
the server sends video playback information to the client, the video playback information including: the record data.
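The server-side steps above can be sketched in Python. The dict-based record-data format and all names (`RecordingServer`, `record_logical_frame`, and so on) are illustrative assumptions, not taken from the patent:

```python
import json


class RecordingServer:
    """Minimal sketch of the server-side recording steps (names hypothetical)."""

    def __init__(self):
        self.frames = []  # one entry per state-updating logical frame

    def record_logical_frame(self, frame_no, scene_state, object_states):
        # Record the scene state generated in this logical frame together
        # with the corresponding state of each simulated object.
        self.frames.append({
            "frame": frame_no,
            "scene_state": scene_state,
            "object_states": object_states,  # e.g. {object_id: state}
        })

    def build_video_playback_info(self):
        # Package the per-frame states as the record data that is carried
        # inside the video playback information sent to the client.
        record_data = json.dumps({"frames": self.frames})
        return {"record_data": record_data}


server = RecordingServer()
server.record_logical_frame(1, "interaction_start", {"hero_1": "approach"})
server.record_logical_frame(2, "interaction_in_progress", {"hero_1": "attack"})
info = server.build_video_playback_info()
```

In this sketch the serialized `record_data` string is what a real server would transmit over the network.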
In another aspect, an embodiment of the present invention further provides a client, including:
a receiving module, configured to receive video playback information sent by a server, the video playback information including: record data generated while a simulated object executes in an interactive application scene, the simulated object being controlled by the client;
a state extraction module, configured to extract, from the record data, the scene states and corresponding object states generated in multiple logical frames;
a video restoration module, configured to generate a scene restoration video according to the scene states and object states generated in the multiple logical frames, and to play the scene restoration video, the scene restoration video being used to play back the execution process of the simulated object in the interactive application scene.
In the foregoing aspect, the constituent modules of the client may further perform the steps described in the foregoing aspect and its various possible implementations; for details, see the foregoing description of that aspect and its implementations.
In another aspect, an embodiment of the present invention further provides a server, including:
a state acquisition module, configured to record the scene states generated by an interactive application scene in multiple logical frames, and to record the object state corresponding to each logical frame while a simulated object executes in the interactive application scene, the simulated object being controlled by a client;
a data generation module, configured to generate, according to the scene states and corresponding object states generated in the multiple logical frames, record data of the simulated object's execution in the interactive application scene;
a sending module, configured to send video playback information to the client, the video playback information including: the record data.
In the foregoing aspect, the constituent modules of the server may further perform the steps described in the foregoing aspect and its various possible implementations; for details, see the foregoing description of that aspect and its implementations.
In another aspect, an embodiment of the present invention provides a client, including a processor and a memory; the memory is configured to store instructions, and the processor is configured to execute the instructions in the memory so that the client performs the method of any one of the foregoing aspects.
In another aspect, an embodiment of the present invention provides a server, including a processor and a memory; the memory is configured to store instructions, and the processor is configured to execute the instructions in the memory so that the server performs the method of any one of the foregoing aspects.
In another aspect, an embodiment of the present invention provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the methods described in the above aspects.
In embodiments of the present invention, the client first receives video playback information sent by the server, the video playback information including record data generated while a simulated object executes in an interactive application scene, the simulated object being controlled by the client. The client then extracts, from the record data, the scene states and corresponding object states generated in multiple logical frames. Finally, the client generates a scene restoration video according to those scene states and object states and plays it, the scene restoration video being used to play back the execution process of the simulated object in the interactive application scene. Because the video playback information sent by the server includes the record data, the client can extract from it the scene states and object states generated in multiple logical frames and thereby generate the scene restoration video; when the client plays the scene restoration video, the user can learn the execution process of the simulated object in the interactive application scene. Compared with the battle report mail of the prior art, embodiments of the present invention achieve a visual playback of the interaction process and let the user understand the entire interaction process in detail.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show merely some embodiments of the present invention, and a person skilled in the art may derive other drawings from these drawings.
Fig. 1 is a schematic diagram of a system application architecture of a processing method for an interactive application scene according to an embodiment of the present invention;
Fig. 2 is a schematic flow diagram of a processing method for an interactive application scene according to an embodiment of the present invention;
Fig. 3 is a schematic flow diagram of a processing method for an interactive application scene according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a battle flow according to an embodiment of the present invention;
Fig. 5 is a schematic diagram describing the skills of a simulated object according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a scene implementation of a battle frame in a strategy game scene according to an embodiment of the present invention;
Fig. 7 is a schematic video screenshot of a scene restoration video according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a client according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a server according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a terminal to which a processing method for an interactive application scene according to an embodiment of the present invention is applied;
Fig. 11 is a schematic structural diagram of a server to which a processing method for an interactive application scene according to an embodiment of the present invention is applied.
Specific embodiments
Embodiments of the present invention provide a processing method and apparatus, and a storage medium, for an interactive application scene, used to visually play back an interaction process so that the user can understand the entire interaction process in detail.
To make the objects, features, and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the embodiments described below are only a part, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the present invention.
The terms "comprising" and "having" in the description, claims, and drawings of this specification, and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, product, or device comprising a series of units is not necessarily limited to those units, but may include other units not explicitly listed or inherent to such a process, method, product, or device.
Referring to Fig. 1, it shows a schematic diagram of the system architecture to which the processing method for an interactive application scene provided by the embodiments of the present application is applied. The system may include a server 110 and a client 120, where the server 110 can provide the client 120 with video playback information, the video playback information including record data generated while a simulated object executes in an interactive application scene. The client 120 and the server 110 transmit data over a communication network. The client 120 may specifically be a terminal as shown in Fig. 1; the client 120 may also be, for example, a game client. The terminal may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, and so on.
In embodiments of the present invention, the video playback information sent by the server 110 to the client 120 includes record data. The terminal 120 can obtain the video playback information from the server 110 over the communication network and thereby determine the record data. The client 120 can extract, from the record data, the scene states and object states generated in multiple logical frames, and can thus generate a scene restoration video. When the client 120 plays the scene restoration video, the user can learn, through the scene restoration video, the execution process of the simulated object in the interactive application scene. Compared with the battle report mail of the prior art, embodiments of the present invention achieve a visual playback of the interaction process and let the user understand the entire interaction process in detail.
The method is described in detail below from the perspectives of the client and the server separately. One embodiment of the processing method for an interactive application scene of the present invention can specifically be applied in a scenario of reporting an interaction process to a user in real time. The interactive application scene in embodiments of the present invention may specifically be a game scene, or an interaction scene of an application program. For example, the processing method for an interactive application scene provided by embodiments of the present invention is applicable to scenes built for game roles, and is also applicable to scenes built for user objects in a software application system. A simulated object is displayed in the interactive application scene described in embodiments of the present invention; the simulated object may be a game role in a game scene, or the heroes and soldiers in a game scene. For example, the simulated object may be the people or things controlled by the user in a strategy game, which is not limited here.
Starting from the client side, referring to Fig. 2, a processing method for an interactive application scene provided by an embodiment of the present invention may include the following steps:
201. The client receives video playback information sent by the server, the video playback information including: record data generated while a simulated object executes in an interactive application scene, the simulated object being controlled by the client.
In embodiments of the present invention, the simulated object is controlled by the client. For example, the interactive application scene may be a strategy game, and the simulated object may be a role (such as a hero or a soldier) in the game. When the simulated object executes in the client's interactive application scene, the server stores the record data generated during that execution. The server sends the video playback information to the client, so that the client first receives this video playback information.
In some embodiments of the present invention, before the client receives the video playback information sent by the server in step 201, the method provided by the embodiment further includes:
the client receives result feedback information sent by the server, the result feedback information including: result data generated while the simulated object executes in the interactive application scene, and video playback entrance prompt information, the video playback entrance prompt information being used to prompt whether to perform video playback;
when the client detects a trigger instruction on the video playback entrance, the client sends a video playback request to the server, and then triggers execution of the foregoing step 201: the client receives the video playback information sent by the server.
The server can detect in real time whether the scene interaction is completed; when the interaction is completed, the server can generate the result data produced by the simulated object's execution in the interactive application scene. For example, the result data may be the battle report mail in a strategy game. The server sends the result feedback information to the client, so that the client obtains the result data through the result feedback information sent by the server. For example, taking a strategy game scene as the interactive application scene, after a battle occurs, the server can send result feedback information to the clients of both sides of the battle; the information about the battle may roughly include enemies defeated, own losses, resources plundered, and so on. The player can learn the details of the battle by reading the battle report mail.
In some embodiments of the present invention, the result feedback information sent by the server includes not only the above result data; the server can also prompt the client whether to perform video playback. For example, the server can carry video playback entrance prompt information in the result feedback information, the prompt information being used to ask whether to perform video playback. The client can thus display the video playback entrance prompt information to the user, and the user can decide whether to issue a trigger instruction on the video playback entrance. If the user needs to play back the video, the user can issue the trigger instruction; when the client detects the trigger instruction on the video playback entrance, the client sends a video playback request to the server, and the server can send the video playback information based on this request. This realizes the user's control over the video playback entrance: the server returns the video playback information based on the client's request, and the client handles both the result data and the video playback entrance in a compatible manner.
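The result-feedback / playback-entrance flow described above might look like the following minimal sketch; the field names (`result_data`, `video_playback_entrance`) and the `FakeServer` stand-in are hypothetical, not taken from the patent:

```python
class PlaybackClient:
    """Sketch of the result-feedback / playback-entrance flow (hypothetical API)."""

    def __init__(self, server):
        self.server = server
        self.result_data = None
        self.playback_entrance_shown = False

    def on_result_feedback(self, feedback):
        # The result feedback carries both the battle result data and a
        # prompt indicating that video playback is available.
        self.result_data = feedback["result_data"]
        if feedback.get("video_playback_entrance"):
            self.playback_entrance_shown = True  # show the entrance to the user

    def on_entrance_triggered(self):
        # Only when the user triggers the entrance does the client request
        # the (larger) video playback information from the server.
        return self.server.handle_playback_request()


class FakeServer:
    def handle_playback_request(self):
        return {"record_data": "..."}


client = PlaybackClient(FakeServer())
client.on_result_feedback({"result_data": {"enemies_defeated": 3},
                           "video_playback_entrance": True})
info = client.on_entrance_triggered()
```

The point of the design is that the bulky record data is only fetched on demand, while the lightweight result data is always delivered.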
202. The client extracts, from the record data, the scene states and corresponding object states generated in multiple logical frames.
In embodiments of the present invention, after the client receives the video playback information, the client obtains the record data from it; the record data carries the state data generated in each logical frame. A logical frame is a frame with a state update in a state synchronization algorithm. In embodiments of the present invention, the client can extract from the record data the scene states and object states generated in the logical frames; when there are multiple logical frames with state updates, the scene states and object states generated in the multiple logical frames can be extracted respectively. The scene state refers to the state data of the interactive application scene, and the object state refers to the state data of the simulated object.
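Extraction of the per-logical-frame states from the record data could be sketched as follows, assuming an illustrative JSON record format (the patent does not specify one):

```python
import json


def extract_states(record_data):
    """Extract the scene state and object states generated in each
    state-updating logical frame from the record data (format assumed)."""
    frames = json.loads(record_data)["frames"]
    scene_states = {f["frame"]: f["scene_state"] for f in frames}
    object_states = {f["frame"]: f["object_states"] for f in frames}
    return scene_states, object_states


record_data = json.dumps({"frames": [
    {"frame": 1, "scene_state": "interaction_start",
     "object_states": {"hero_1": "approach"}},
    {"frame": 2, "scene_state": "interaction_in_progress",
     "object_states": {"hero_1": "attack"}},
]})
scene_states, object_states = extract_states(record_data)
```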
In some embodiments of the present invention, the scene state includes: an interaction start stage, an interaction progress stage, and an interaction end stage.
Multiple simulated objects can execute in the interactive application scene under the control of clients, and the interaction can be divided into the following three stages according to its progress: the interaction start stage, the interaction progress stage, and the interaction end stage. In the interaction start stage, when the interaction begins, the object names and formation of the simulated objects on both sides can first be displayed, while the simulated objects of both sides interact in the form of text bubbles; at this time the interaction timer is not yet running. After the start stage ends, timing begins and the formal interaction starts. The interaction progress stage means the simulated objects of both sides formally interact; for example, in a strategy game scene, the interaction progress stage may include: a seek-enemy stage, an attack stage, and judging whether the interaction for a single target point has ended. The interaction end stage means the interaction of both sides' simulated objects is completed; for example, in a strategy game scene, the interaction end stage may include judging whether all targets have been destroyed. Since the scene state in embodiments of the present invention includes the interaction start stage, the interaction progress stage, and the interaction end stage, the state data generated in each logical frame of each stage can be recorded separately, thereby providing raw data for scene restoration.
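The three scene-state stages might be modeled as an enumeration. The rule that timing only runs once formal interaction has begun follows the description above, while the enum values themselves are assumptions:

```python
from enum import Enum


class SceneState(Enum):
    INTERACTION_START = "start"              # show names/formation, text bubbles
    INTERACTION_IN_PROGRESS = "in_progress"  # seek enemy, attack, per-target checks
    INTERACTION_END = "end"                  # check whether all targets are destroyed


def is_timed(state):
    # The interaction timer only runs during the formal interaction,
    # not in the start stage with its text-bubble exchange.
    return state is SceneState.INTERACTION_IN_PROGRESS
```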
In some embodiments of the present invention, the object state includes: an approach stage, an idle standing stage, a moving stage, an attack stage, a skill release stage, and a preparatory action stage.
The object state refers to the state data of the simulated object. For simulated objects designed for different scenes, the object state may include a combination of multiple different stages among the above. The approach stage, idle standing stage, moving stage, attack stage, skill release stage, and preparatory action stage each describe a different condition of the simulated object in the interactive application scene. The approach stage means the simulated object enters the interactive application scene; the idle standing stage means the simulated object, after entering the interactive application scene, idly waits for the next control operation; the moving stage means the simulated object moves under the control of the client (for example, moving forward, backward, left, or right, or jumping); the attack stage means the simulated object attacks the locked target simulated object under the control of the client, the specific attack mode being determined by the skill configuration of the simulated object; the skill release stage means the simulated object releases a skill on the target simulated object under the control of the client, the skill design of the simulated object being detailed in subsequent embodiments; and the preparatory action stage means the simulated object, under the control of the client, prepares and waits before the next action.
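One possible way to model the six object states is a small state machine; the set of allowed transitions below is an assumption for illustration, not specified by the patent:

```python
from enum import Enum, auto


class ObjectState(Enum):
    APPROACH = auto()       # entering the interactive application scene
    IDLE = auto()           # idle, waiting for the next control operation
    MOVE = auto()           # moving under client control
    ATTACK = auto()         # attacking a locked target
    SKILL_RELEASE = auto()  # releasing a skill on a target
    PREPARE = auto()        # preparing/waiting before the next action


# Hypothetical allowed transitions between the states.
TRANSITIONS = {
    ObjectState.APPROACH: {ObjectState.IDLE},
    ObjectState.IDLE: {ObjectState.MOVE, ObjectState.ATTACK,
                       ObjectState.SKILL_RELEASE, ObjectState.PREPARE},
    ObjectState.MOVE: {ObjectState.IDLE, ObjectState.ATTACK},
    ObjectState.ATTACK: {ObjectState.IDLE, ObjectState.SKILL_RELEASE},
    ObjectState.SKILL_RELEASE: {ObjectState.PREPARE},
    ObjectState.PREPARE: {ObjectState.IDLE},
}


def can_transition(src, dst):
    return dst in TRANSITIONS[src]
```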
203. The client generates a scene restoration video according to the scene states and object states generated in the multiple logical frames, and plays the scene restoration video, the scene restoration video being used to play back the execution process of the simulated object in the interactive application scene.
In embodiments of the present invention, after the client extracts the scene states and object states from the record data as described above, the client can restore a segment of video through a state synchronization algorithm according to the scene states and object states generated in the multiple logical frames. This video records the change process of the scene states and the change process of the object states, and can be called the scene restoration video. From the way the video is generated, it follows that the scene restoration video is used to play back the execution process of the simulated object in the interactive application scene.
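Step 203's restoration of a video from the ordered logical-frame states might be sketched like this. Rendering is stood in for by plain dicts, and the frame-to-time mapping (`fps`) is an assumption:

```python
def restore_scene(frames, fps=30):
    """Replay recorded logical-frame states into an ordered sequence of
    render instructions (a stand-in for actual video frames)."""
    rendered = []
    for f in sorted(frames, key=lambda f: f["frame"]):
        # State synchronization: every state-updating logical frame is
        # re-applied in order, restoring both scene and object changes.
        rendered.append({
            "time": f["frame"] / fps,
            "scene": f["scene_state"],
            "objects": dict(f["object_states"]),
        })
    return rendered


frames = [
    {"frame": 2, "scene_state": "in_progress", "object_states": {"hero": "attack"}},
    {"frame": 1, "scene_state": "start", "object_states": {"hero": "approach"}},
]
video = restore_scene(frames)
```

Note that the frames are sorted before replay, so the restoration is deterministic even if the record data arrives out of order.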
In some embodiments of the present invention, after the client receives the video playback information sent by the server in step 201, in addition to the foregoing step 202, the method provided by the embodiment may further perform the following steps:
the client determines the identifier of the simulated object according to the record data;
the client obtains the skill factor information of the simulated object from the server according to the identifier of the simulated object;
the client generates, according to the skill factor information, the skill configuration information of the simulated object corresponding to the multiple logical frames.
The client can also obtain from the server the skill factor information set for the simulated object; the skill factor information can be configured according to the design elements of the simulated object. After obtaining the skill factor information, the client can generate, according to it, the skill configuration information of the simulated object corresponding to the multiple logical frames; this configuration information indicates the specific skills configured for the simulated object in the multiple logical frames.
In some embodiments of the present invention, the skill configuration information of the simulated object in the multiple logical frames includes: a trigger point configuration parameter, an effect target configuration parameter, a damage type configuration parameter, and a skill effect configuration parameter.
The aforementioned skill element information may include: a trigger point element parameter, an effect target element parameter, a damage type element parameter, and a skill effect element parameter. For example, the trigger point may be a probabilistic trigger on attack; the effect target may be all of the opposing side the skill acts on, or part of the locked targets; the damage type may be 100% damage, or damage of some other ratio; and there may be many kinds of skill effects, for example stunning some target simulated objects for 3.5 seconds, or other effects detailed in the illustrations of subsequent embodiments. A skill can thus be specifically configured in four respects: trigger point, effect target, damage type, and skill effect. The client can determine the trigger point configuration parameter, effect target configuration parameter, damage type configuration parameter, and skill effect configuration parameter from the trigger point element parameter, effect target element parameter, damage type element parameter, and skill effect element parameter, respectively.
Further, the trigger point configuration parameter, effect target configuration parameter, damage type configuration parameter, and skill effect configuration parameter are configured in an independently encapsulated manner. Through such separate configuration, skills can be designed and adjusted without changing the logic, so that skill design is flexible and controllable.
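The "independently encapsulated" four-part configuration described above can be sketched as four separate value types composed into one skill configuration; the class and field names below are illustrative assumptions, not the patent's actual data layout:

```python
# Hypothetical sketch of independently encapsulated skill configuration.
from dataclasses import dataclass

@dataclass(frozen=True)
class TriggerPoint:
    chance: float          # e.g. probabilistic trigger on attack

@dataclass(frozen=True)
class EffectTarget:
    scope: str             # e.g. "all-enemies" or "locked-subset"

@dataclass(frozen=True)
class DamageType:
    ratio: float           # 1.0 means 100% damage

@dataclass(frozen=True)
class SkillEffect:
    name: str
    duration_s: float      # e.g. stun for 3.5 seconds

@dataclass(frozen=True)
class SkillConfig:
    # Each part is configured separately, so one part can be adjusted
    # without touching the logic that consumes the others.
    trigger: TriggerPoint
    target: EffectTarget
    damage: DamageType
    effect: SkillEffect

skill = SkillConfig(TriggerPoint(0.3), EffectTarget("all-enemies"),
                    DamageType(1.0), SkillEffect("stun", 3.5))
```

Because each element is its own immutable object, a designer can swap in a different `DamageType` or `SkillEffect` without changing any combat logic, which is the flexibility the text attributes to separate configuration.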
Further, in some embodiments of the present invention, in the case where the client has obtained the skill configuration information of the simulated object, the generation of the scene restoration video in step 203 according to the scene states and object states generated in the multiple logical frames includes:
The client generates, according to the object state generated in each logical frame and the skill configuration information of the simulated object corresponding to that logical frame, a simulated-object update state in each logical frame;
The client generates, according to the scene state generated in each logical frame, an interactive-application-scene update state in each logical frame;
The client generates, according to the simulated-object update state and the interactive-application-scene update state in each logical frame, a video picture corresponding to each logical frame;
The client generates the scene restoration video from the video pictures corresponding to the logical frames, in the update order of the logical frames.
First, if the server side has configured skills for the simulated object, the client first determines the skill configuration information corresponding to the simulated object in each logical frame. Then, for each logical frame, the client can generate the simulated-object update state of that frame from the object state generated in the frame and the skill configuration information corresponding to the frame. The simulated-object update state includes the state data of the simulated object that has been updated relative to the previous logical frame, for example the skill configuration and object state updated in the current logical frame compared with the previous one. Then, for each logical frame, the client can generate the interactive-application-scene update state of that frame from the scene state generated in the frame; this update state includes the state data of the interactive application scene that has been updated relative to the previous logical frame, for example the scene state updated in the current logical frame compared with the previous one.
Next, for each logical frame, the client restores the video picture of the current logical frame from the simulated-object update state and the interactive-application-scene update state of that frame, thereby obtaining the video picture corresponding to each logical frame. After obtaining the video pictures of the multiple logical frames, the client synthesizes them in the update order of the logical frames, thereby generating the scene restoration video. It can be seen from the way the video is generated that the scene restoration video is used to play back the execution process of the simulated object in the interactive application scene.
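Since each update state carries only the fields changed relative to the previous logical frame, restoring a frame amounts to merging that frame's diff into the accumulated full state. A minimal sketch under that assumption (names illustrative):

```python
# Hypothetical sketch: rebuild every logical frame's full state from
# per-frame update states that hold only the changed fields.

def apply_updates(initial_state, update_states):
    """Yield the full state at every logical frame.

    initial_state: dict of state fields before the first logical frame.
    update_states: one dict per logical frame, holding only the fields
                   updated relative to the previous frame.
    """
    state = dict(initial_state)
    for update in update_states:
        state.update(update)   # merge the frame's diff into the full state
        yield dict(state)      # snapshot used to restore this frame's picture

full = list(apply_updates(
    {"hero": "idle", "scene": "interaction-start"},
    [{"hero": "move"},                                       # frame 1 diff
     {"hero": "attack", "scene": "interaction-progress"}],   # frame 2 diff
))
```

Note that frame 1's snapshot still carries `scene="interaction-start"`: unchanged fields survive across frames, which is exactly why only diffs need to be recorded per logical frame.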
It can be seen from the above description of the embodiments of the present invention that the client first receives the video playback information sent by the server, where the video playback information includes the record data produced when the simulated object executes in the interactive application scene, the simulated object being controlled by the client to execute. The client then extracts the scene states and corresponding object states generated in the multiple logical frames from the record data. Finally, the client generates a scene restoration video from the scene states and object states generated in the multiple logical frames, and plays the scene restoration video, which is used to play back the execution process of the simulated object in the interactive application scene. Because the video playback information sent by the server in this embodiment includes the record data, the client can extract from the record data the scene states and object states generated in the multiple logical frames and thus generate the scene restoration video. When the client plays the scene restoration video, the user can learn, through it, the execution process of the simulated object in the interactive application scene. Compared with battle report mail in the prior art, the embodiments of the present invention can realize visualized playback of the interaction process, so that the user learns the entire interaction process in detail.
The foregoing embodiments are described from the client side; a detailed description from the server side follows. Referring to Fig. 3, the method for processing an interactive application scene provided by an embodiment of the present invention may include the following steps:
301: The server records the scene states generated by the interactive application scene in multiple logical frames, and records the object state corresponding to each logical frame when the simulated object executes in the interactive application scene, the simulated object being controlled by the client to execute.
In this embodiment of the present invention, the simulated object is controlled by the client to execute. For example, the interactive application scene may be a strategy game, and the simulated object may be a character in the game (such as a hero or a soldier). When the simulated object executes in the interactive application scene of the client, the client actively reports the scene state and object state to the server according to the time-interval requirement of the logical frames, where a logical frame is a frame with a state update in the state synchronization algorithm. In this embodiment of the present invention, the scene state and object state generated in a logical frame can be extracted by recording the data, and when there are multiple logical frames with state updates, the scene states and object states generated in the multiple logical frames can be extracted respectively. The scene state refers to the state data of the interactive application scene, and the object state refers to the state data of the simulated object.
302: The server generates, according to the scene states and corresponding object states generated in the multiple logical frames, the record data produced when the simulated object executes in the interactive application scene.
In this embodiment of the present invention, the server receives the scene states and corresponding object states reported by the client in the multiple logical frames. Taking the logical frame as the unit, the server can store and record the scene states and object states produced when the simulated object executes in the interactive application scene, thereby generating the record data.
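Taking the logical frame as the unit of storage, the server-side accumulation of record data can be sketched as below; the class, method, and state names are hypothetical, chosen only to illustrate frame-keyed storage:

```python
# Hypothetical sketch of server-side record-data accumulation,
# keyed by logical frame number as the storage unit.

class RecordStore:
    def __init__(self):
        self.frames = {}  # logical frame number -> (scene_state, object_state)

    def report(self, frame_no, scene_state, object_state):
        # Called when the client reports the states of one logical frame.
        self.frames[frame_no] = (scene_state, object_state)

    def record_data(self):
        # The record data handed out later in the video playback information,
        # ordered by logical frame.
        return [(n, *self.frames[n]) for n in sorted(self.frames)]

store = RecordStore()
store.report(2, "interaction-progress", "attack")
store.report(1, "interaction-start", "approach")
```

Keying by frame number makes the store insensitive to report arrival order, while `record_data()` still hands the client a frame-ordered sequence suitable for restoration.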
In some embodiments of the present invention, after step 302 in which the server generates, according to the scene states and corresponding object states generated in the multiple logical frames, the record data produced when the simulated object executes in the interactive application scene, the method provided in this embodiment further includes:
The server judges whether the interaction of the interactive application scene has ended;
When the interaction of the interactive application scene ends, the server sends result feedback information to the client, the result feedback information including: the result data generated when the simulated object executes in the interactive application scene, and video playback entrance prompt information, the video playback entrance prompt information being used to prompt whether to perform video playback;
The server receives the video playback request sent by the client, and performs the following step according to the video playback request: the server sends the video playback information to the client.
The server can detect in real time whether the scene interaction is complete. When the interaction is complete, the server can generate the result data produced when the simulated object executes in the interactive application scene; for example, in a strategy game the result data may be a battle report mail. The server sends the result feedback information to the client, so that the client obtains the result data through the result feedback information sent by the server. For example, taking the interactive application scene as a strategy game scene, after a battle occurs the server may send a result feedback message to the clients of both combat parties, where the information about this battle may roughly include enemies defeated, own losses, resources plundered, and so on. The player can learn the details of this battle by reading the battle report mail.
In some embodiments of the present invention, the result feedback information sent by the server includes not only the above result data; the server can also prompt the client whether to perform video playback. For example, the server can carry the video playback entrance prompt information in the result feedback information, the prompt information being used to prompt whether to perform video playback. The client can thus display the video playback entrance prompt information to the user, and the user can decide whether to issue a trigger instruction for the video playback entrance. If the user needs to play back the video, the user can issue the trigger instruction; when the client detects the trigger instruction of the video playback entrance, the client sends a video playback request to the server, and the server can send the video playback information based on that request. In this way, the user's control over the video playback entrance is realized: the server sends the video playback information based on the client's request, and the client can handle the result data and the video playback entrance compatibly in a duplex manner.
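The exchange described above, where the result data and the playback entrance travel together and record data is fetched only on explicit request, can be sketched as follows; the message shapes and function names are illustrative assumptions, not a disclosed wire format:

```python
# Hypothetical sketch of the result-feedback / playback-request exchange.

def make_result_feedback(result_data):
    # The server bundles the result data with the playback-entrance prompt,
    # so one message serves both the battle report and the replay offer.
    return {"result": result_data, "playback_entrance": True}

def handle_client(feedback, user_wants_replay, fetch_record_data):
    # The client always has the result; record data is pulled only when
    # the user triggers the playback entrance.
    result = feedback["result"]
    if feedback.get("playback_entrance") and user_wants_replay:
        return result, fetch_record_data()
    return result, None

result, record = handle_client(
    make_result_feedback({"enemies_defeated": 5}),
    user_wants_replay=True,
    fetch_record_data=lambda: [(1, "interaction-start", "approach")],
)
```

Deferring the (potentially large) record data until the entrance is triggered keeps the mandatory result mail small, which matches the duplex handling of result data and playback entrance described above.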
303: The server sends the video playback information to the client, the video playback information including: the record data.
In this embodiment of the present invention, the server stores the record data produced when the simulated object executes in the interactive application scene, and the server can then send the video playback information, thereby providing the record data for video playback on the client side. The record data includes the scene states and corresponding object states generated in the multiple logical frames.
In some embodiments of the present invention, in addition to the foregoing steps, the method provided in this embodiment may further perform the following steps:
The server obtains the identifier of the simulated object;
The server configures skill element information for the simulated object and establishes a correspondence between the skill element information and the identifier, the skill element information including: a trigger point element parameter, an effect target element parameter, a damage type element parameter, and a skill effect element parameter.
The aforementioned skill element information may include: a trigger point element parameter, an effect target element parameter, a damage type element parameter, and a skill effect element parameter. For example, the trigger point may be a probabilistic trigger on attack; the effect target may be all of the opposing side the skill acts on, or part of the locked targets; the damage type may be 100% damage, or damage of some other ratio; and there may be many kinds of skill effects, for example stunning some target simulated objects for 3.5 seconds, or other effects detailed in the illustrations of subsequent embodiments. A skill can thus be specifically configured in four respects: trigger point, effect target, damage type, and skill effect.
Further, the trigger point element parameter, effect target element parameter, damage type element parameter, and skill effect element parameter are stored in the server in an independently encapsulated manner.
Through such separate configuration, skills can be designed and adjusted without changing the logic, so that skill design is flexible and controllable.
It can be seen from the above description of the embodiments of the present invention that the server records the scene states generated by the interactive application scene in multiple logical frames, and records the object state corresponding to each logical frame when the simulated object executes in the interactive application scene; the server generates, according to the scene states and corresponding object states generated in the multiple logical frames, the record data produced when the simulated object executes in the interactive application scene; and the server sends the video playback information, which includes the record data, to the client. Because the video playback information sent by the server in this embodiment includes the record data, the client can extract from the record data the scene states and object states generated in the multiple logical frames and thus generate a scene restoration video. When the client plays the scene restoration video, the user can learn, through it, the execution process of the simulated object in the interactive application scene. Compared with battle report mail in the prior art, the embodiments of the present invention can realize visualized playback of the interaction process, so that the user learns the entire interaction process in detail.
To facilitate a better understanding and implementation of the above solutions of the embodiments of the present invention, corresponding application scenarios are illustrated below for specific description.
In this embodiment of the present invention, the interactive application scene is specifically a strategy game (Simulation Game, SLG) as an example. This kind of game provides the player with an environment for pondering problems and handling relatively complex matters, allowing the player to freely control, manage, and use the people or things in the game; through such free means, and by using one's brain to work out methods of fighting the enemy, the player reaches the goals required by the game. In the strategy game scene, the simulated objects are specifically heroes and soldiers, and the client can control a hero to execute in the strategy game scene. A hero is the main combat unit in the game, and a soldier is a unit led into battle by a hero. A hero can be configured with skills; a hero skill refers to a mechanism released by the hero that acts on one's own side or the opposing side and produces a series of effects.
The embodiments of the present invention provide a visualized combat solution for a single battle in an SLG game, and add trigger-type hero skill release to the battle. The embodiments of the present invention can significantly improve the expressiveness of SLG game battles, satisfy certain demands for randomness and strategy while preserving balance, and improve the player experience. This solution designs a battle and hero-skill scheme that can be applied in the battle module of an SLG game to construct the underlying combat of the game. After the underlying combat framework is built on this solution, the skill mechanism can subsequently be extended, giving the game extensibility in later development.
Fig. 4 is a schematic diagram of a battle flow provided by an embodiment of the present invention. After a battle occurs, it can pass through several main stages: battle start, enemy seeking, attack, and battle end. Every battle has heroes participating as the core element of combat, and when heroes participate in battle, hero skills are released. The whole battle has a time limit, to prevent a battle that cannot end.
The battle flow mainly includes the following stages:
1) Battle start.
When the battle starts, the troops and formations of both sides are displayed first, while the heroes of both sides declare war on each other with text bubbles. Combat time is not counted at this point; after the battle start stage, formal combat is entered and timing begins.
2) Enemy seeking.
After combat timing starts, both sides move linearly toward each other and, taking themselves as the center and the enemy-seeking distance as the radius, search for attackable targets. When there is an attackable target within this circle, that target is locked as the attack object, and the unit moves beside the object and starts to attack. In particular:
If there is no attackable target within the circle, the unit continues to move linearly.
If there are multiple attackable targets at the same time, the target is determined by the order in which they entered the enemy-seeking circle; the target that entered the circle first is attacked preferentially.
If targets enter the enemy-seeking circle at the same time, the target is determined by the hero types of one's own side and the opposing side; a hero of a specific type on one's own side preferentially attacks an opposing hero of a certain type.
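The targeting rules above (earlier entry into the enemy-seeking circle wins, with ties broken by hero type) can be sketched as a single ordered selection; the tuple layout and the type-priority table are illustrative assumptions, since the patent does not specify concrete hero types or priorities:

```python
# Hypothetical sketch of enemy-seeking target selection.
# The priority table is an assumed example ordering, not from the patent.
TYPE_PRIORITY = {"cavalry": 0, "archer": 1, "infantry": 2}

def pick_target(candidates):
    """candidates: list of (entered_tick, hero_type, name) inside the circle.

    Earlier entry into the enemy-seeking circle wins; simultaneous
    entries are broken by hero-type priority."""
    if not candidates:
        return None  # no attackable target: keep moving linearly
    return min(candidates,
               key=lambda c: (c[0], TYPE_PRIORITY.get(c[1], 99)))[2]

target = pick_target([(5, "infantry", "A"),
                      (3, "archer", "B"),
                      (3, "cavalry", "C")])
```

Here B and C entered the circle at the same tick, so the type priority decides between them, while A, having entered later, is never considered first.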
3) Attack.
After a target is locked, the hero moves near the target and starts attacking at a certain attack frequency. Each attack can eliminate part of the opposing soldiers. When the number of the target's soldiers reaches 0, the target is judged to be completely eliminated, and the enemy-seeking stage is entered again. A hero and its soldiers form a whole: a hero leads a certain number of soldiers, and the unit is judged to be destroyed only when the number of soldiers it carries is 0.
4) Battle end.
There are 2 conditions for the battle to end; if either one is met, the battle ends: the battle countdown reaches 0, or all the heroes of one side have 0 soldiers.
After the battle, the eliminated soldiers are divided into 3 parts:
a. Recovered troops: these soldiers are retained after the battle.
b. Wounded soldiers: these soldiers enter the hospital after the battle, waiting to be treated.
c. Lost troops: these soldiers are counted as war losses and can neither be retained nor treated.
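The two end conditions and the three-way casualty split above can be sketched as follows; the split ratios are purely illustrative assumptions, since the patent does not give concrete numbers:

```python
# Hypothetical sketch of battle-end detection and casualty splitting.
# The recover/wounded ratios are assumed examples, not from the patent.

def battle_over(countdown, side_a_soldiers, side_b_soldiers):
    # Ends when the countdown reaches 0, or when every hero of one side
    # has 0 soldiers (soldier counts given per hero).
    return (countdown <= 0
            or sum(side_a_soldiers) == 0
            or sum(side_b_soldiers) == 0)

def split_casualties(eliminated, recover_ratio=0.2, wounded_ratio=0.5):
    recovered = int(eliminated * recover_ratio)  # retained after the battle
    wounded = int(eliminated * wounded_ratio)    # hospital, awaiting treatment
    lost = eliminated - recovered - wounded      # permanent war losses
    return recovered, wounded, lost

r, w, l = split_casualties(100)
```

Computing the lost portion as the remainder guarantees the three parts always sum back to the eliminated total, whatever ratios a designer configures.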
In this embodiment of the present invention, after detecting the battle, the server can also send a battle report mail to the clients of both combat parties. In addition to describing the battle result, the mail can provide a playback entrance for the battle video; through this entrance the user can view the battle scene and see the video of the whole battle process, where the battle result can be described in the form of text or a list.
Fig. 5 is a schematic diagram of the skill description of a simulated object provided by an embodiment of the present invention. Taking a hero as the simulated object, the configuration of hero skills is described next. Abstracting a hero skill, we split it into the following elements: trigger point, effect target, damage type, and skill effect. For a hero whose skill is named "First Emperor's Deterrence", one can design the hero type (e.g. active skill), the level (1/10), the hero description on attack (e.g. has a probability to damage and stun all enemies), and the current-level skill effect (e.g. causes 100% damage and stuns for 3 seconds).
Through different combinations of the above elements, hero skills can be designed easily. For example, a hero's skill can be split as:
Trigger point: probabilistic trigger on attack.
Effect target: all of the opposing side.
Damage type: 100% damage.
Skill effect: stun for 3.5 seconds.
By changing any one or more of these conditions, the embodiments of the present invention can generate another skill with other characteristics. For example, a variety of options are provided for each condition in the strategy game scene.
Table 1 below is a classification and encapsulation table for a variety of hero skills:
Through the combination of the above conditions, the embodiments of the present invention have designed hundreds of distinctive skills. Meanwhile, if a new characteristic is designed subsequently, only the conditions on this framework need to be extended, without developing again.
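The element split above, and the claim that changing one or more conditions yields a new skill without new development, can be sketched as below; the dictionary keys and example values are illustrative only:

```python
# Hypothetical sketch: a skill as a combination of four elements, with
# variants derived by overriding conditions rather than writing new code.

BASE_SKILL = {
    "trigger": "chance-on-attack",   # trigger point
    "target": "all-enemies",         # effect target
    "damage": 1.0,                   # damage type: 100% damage
    "effect": ("stun", 3.5),         # skill effect: stun for 3.5 s
}

def variant(base, **changes):
    # Changing any one or more conditions yields another skill with
    # different characteristics; the base skill is left untouched.
    skill = dict(base)
    skill.update(changes)
    return skill

burn_skill = variant(BASE_SKILL, target="locked-subset", effect=("burn", 5.0))
```

With a handful of options per condition, the cross product of choices already yields the "hundreds of skills" the text mentions, and a new condition is just one more key in the combination.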
Fig. 6 is a schematic diagram of an implementation scene of the combat framework in a strategy game scene provided by an embodiment of the present invention. The combat framework mainly includes: a battle module, a skill module, and a playback module, where the battle module is connected to the skill module and to the playback module respectively.
The playback module records the basic information of both combat parties, the game state, and the hero state; the game state and the hero state are two parallel states and are stored in the form of text or a list. The state record data packet output by the playback module is stored in the server, and the client can pull it from the server when it needs to use the record data. For example, the client obtains the state record data packet from the server, extracts the game states and hero states generated in the multiple logical frames, and then generates the scene restoration video; Fig. 7 is a schematic screenshot of a scene restoration video provided by an embodiment of the present invention.
In some embodiments of the present invention, the game state includes: a wall-attack stage, a two-armies-facing-off stage, and battle end. The object state includes: an approach stage, an idle stage, a movement stage, an attack stage, a skill release stage, and a preparatory stage. In an actual design, the state is updated once every 10 frames (0.1 seconds); the server outputs a state record data packet, in which multiple pieces of state data are stored at frame 1 and frame 10, respectively.
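The "one state update every 10 frames (0.1 seconds)" cadence can be sketched as below; the alignment of updates to frames 1, 11, 21, … is an assumption chosen for illustration, as the text only states the interval:

```python
# Hypothetical sketch of the logical-frame cadence: a state update every
# 10 render frames, i.e. every 0.1 s. Alignment to frame 1 is assumed.

RENDER_FRAMES_PER_STATE = 10   # from the described design
STATE_INTERVAL_S = 0.1

def logical_frames(total_render_frames):
    # Frames 1, 11, 21, ... carry a state update under this alignment.
    return [f for f in range(1, total_render_frames + 1)
            if (f - 1) % RENDER_FRAMES_PER_STATE == 0]

ticks = logical_frames(35)
elapsed = len(ticks) * STATE_INTERVAL_S  # battle time covered by the packet
```

This interval is what keeps the state record data packet compact: only one state snapshot per 0.1 s needs to be stored and replayed, not every render frame.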
In this embodiment of the present invention, the entire course of the battle is computed in the background of the server, handled by the battle module and the skill module respectively, so that a battle is computed completely. The entire computation flow is an update of logical frames, and each logical frame computes the action states of the heroes of both sides, such as movement, attack, and skills.
In the skill module, the trigger point, effect target, damage, skill effect, and so on can be encapsulated separately, and through table-based configuration the design and adjustment of skills can be realized without changing the logic.
It can be seen from the foregoing illustration that the embodiments of the present invention realize battle visualization: the playback of logical frames is realized through the state record data packet, so that battle visualization can be realized. For SLG game players, battle visualization makes battle details better visible, with better expressiveness and a better experience. For designers, it can convey to players the randomness and strategy they intend to design, letting players understand how a battle occurs. As for the hero skill scheme used in the embodiments of the present invention, for game players a hero skill intuitively embodies a hero's characteristics and role in battle, and players can make their own skill combinations to obtain strategic enjoyment. For designers, this skill implementation scheme can cover most skills: through combination and numerical adjustment, countless skills can be composed, greatly improving the depth of the game. Meanwhile, this scheme can be extended continuously; new conditions and new effects can be added and iterated under this framework, and their combinations create more unique skills.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of action combinations; however, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
To facilitate better implementation of the above solutions of the embodiments of the present invention, related apparatuses for implementing the above solutions are also provided below.
Referring to Fig. 8, a client 800 provided by an embodiment of the present invention may include: a receiving module 801, a state extraction module 802, and a video restoration module 803, where:
the receiving module 801 is configured to receive video playback information sent by a server, the video playback information including: record data produced when a simulated object executes in an interactive application scene, the simulated object being controlled by the client to execute;
the state extraction module 802 is configured to extract, from the record data, scene states generated in multiple logical frames and corresponding object states;
the video restoration module 803 is configured to generate a scene restoration video according to the scene states and object states generated in the multiple logical frames, and play the scene restoration video, the scene restoration video being used to play back the execution process of the simulated object in the interactive application scene.
In some embodiments of the present invention, before receiving the video playback information sent by the server, the client is further configured to receive result feedback information sent by the server, the result feedback information including: result data generated when the simulated object executes in the interactive application scene, and video playback entrance prompt information, the video playback entrance prompt information being used to prompt whether to perform video playback; when the client detects a trigger instruction of the video playback entrance, the client sends a video playback request to the server, and then triggers execution of the following step: receiving the video playback information sent by the server.
In some embodiments of the present invention, the scene state includes: an interaction start stage, an interaction progress stage, and an interaction end stage;
the object state includes: an approach stage, an idle stage, a movement stage, an attack stage, a skill release stage, and a preparatory stage.
In some embodiments of the present invention, after receiving the video playback information sent by the server, the client is further configured to determine the identifier of the simulated object according to the record data; obtain the skill element information of the simulated object from the server according to the identifier of the simulated object; and generate, according to the skill element information, the skill configuration information corresponding to the simulated object in the multiple logical frames.
In some embodiments of the present invention, the client is further configured to generate, according to the object state generated in each logical frame and the skill configuration information of the simulated object corresponding to each logical frame, a simulated-object update state in each logical frame; generate, according to the scene state generated in each logical frame, an interactive-application-scene update state in each logical frame; generate, according to the simulated-object update state and the interactive-application-scene update state in each logical frame, a video picture corresponding to each logical frame; and generate the scene restoration video from the video pictures corresponding to the logical frames, in the update order of the logical frames.
In some embodiments of the present invention, the skill configuration information of the simulated object in the multiple logical frames includes: a trigger point configuration parameter, an effect target configuration parameter, a damage type configuration parameter, and a skill effect configuration parameter.
In some embodiments of the present invention, the trigger point configuration parameter, the effect target configuration parameter, the damage type configuration parameter, and the skill effect configuration parameter are configured in an independently encapsulated manner.
From the above description of the embodiments of the present invention, it can be seen that the client first receives video playback information sent by the server, the video playback information including record data generated when the simulated object executes in the interactive application scene, the simulated object being controlled and executed by the client. The client then extracts, from the record data, the scene states and corresponding object states generated in multiple logical frames. Finally, the client generates a scene restoration video according to the scene states and object states generated in the multiple logical frames, and plays the scene restoration video, which plays back the execution process of the simulated object in the interactive application scene. Because the video playback information sent by the server in the embodiments of the present invention includes the record data, the client can extract from it the scene states and object states generated in the multiple logical frames and thereby generate the scene restoration video. When the client plays the scene restoration video, the user can learn from it the execution process of the simulated object in the interactive application scene. Compared with the battle-report email of the prior art, the embodiments of the present invention achieve a visualized playback of the interaction process, so that the user learns the entire interaction process in detail.
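As a minimal illustration of this client-side flow (not part of the patent: the record-data format, the field names, and the textual stand-in for a "video picture" below are all assumptions), the extraction of per-logical-frame states and their assembly into restoration frames might be sketched as:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FrameRecord:
    """States recovered from the record data for one logical frame."""
    frame_id: int                    # index of the logical frame
    scene_state: str                 # e.g. "interaction_in_progress"
    object_states: Dict[str, str] = field(default_factory=dict)  # object id -> state


def extract_states(record_data: List[dict]) -> List[FrameRecord]:
    """Parse raw record data into per-logical-frame scene and object states."""
    frames = []
    for raw in sorted(record_data, key=lambda r: r["frame_id"]):
        frames.append(FrameRecord(raw["frame_id"], raw["scene_state"],
                                  dict(raw.get("object_states", {}))))
    return frames


def build_restoration_frames(frames: List[FrameRecord]) -> List[str]:
    """Turn each logical frame's states into one 'video picture' description."""
    return [f"frame {f.frame_id}: scene={f.scene_state}, objects={f.object_states}"
            for f in frames]
```

Here each video picture is reduced to a text line; a real client would instead render the updated scene and object states into an actual image for the scene restoration video.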
Referring to Fig. 9, a server 900 provided by an embodiment of the present invention may include: a state acquisition module 901, a data generation module 902, and a sending module 903, wherein:
the state acquisition module 901 is configured to record the scene states generated by the interactive application scene in multiple logical frames, and to record the object state corresponding to each logical frame while the simulated object executes in the interactive application scene, the simulated object being controlled and executed by a client;
the data generation module 902 is configured to generate, according to the scene states and corresponding object states generated in the multiple logical frames, the record data of the simulated object executing in the interactive application scene;
the sending module 903 is configured to send video playback information to the client, the video playback information including: the record data.
In some embodiments of the invention, the server is further configured to: after generating, according to the scene states and corresponding object states generated in the multiple logical frames, the record data of the simulated object executing in the interactive application scene, judge whether the interaction of the interactive application scene has ended; when the interaction of the interactive application scene ends, send result feedback information to the client, the result feedback information including: the result data generated when the simulated object executes in the interactive application scene, and video playback entrance prompt information used to prompt whether to perform video playback; and receive a video playback request sent by the client and, according to the video playback request, perform the following step: the server sends the video playback information to the client.
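The request/response exchange described in this embodiment could be sketched as follows; the message dictionaries, field names, and in-memory call-based "transport" are illustrative assumptions, not the patent's wire format:

```python
class Server:
    """Minimal server side of the flow: result feedback -> playback request -> playback info."""

    def __init__(self, record_data):
        self.record_data = record_data

    def on_interaction_end(self):
        # Sent once the interaction ends: result data plus a playback-entrance prompt.
        return {"type": "result_feedback",
                "result_data": {"outcome": "win"},
                "playback_entrance": True}

    def on_playback_request(self, request):
        # On a playback request, reply with video playback information carrying the record data.
        assert request["type"] == "video_playback_request"
        return {"type": "video_playback_info", "record_data": self.record_data}


class Client:
    """Minimal client side: a trigger on the playback entrance yields a playback request."""

    def handle_result_feedback(self, msg):
        if msg.get("playback_entrance"):
            return {"type": "video_playback_request"}
        return None
```

A round trip then runs: the server emits result feedback, the client answers with a playback request, and the server replies with the playback information containing the record data.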
In some embodiments of the invention, the server is further configured to obtain an identifier of the simulated object, configure skill element information for the simulated object, and establish a correspondence between the skill element information and the identifier, the skill element information including: a trigger point element parameter, an effect target element parameter, a damage type element parameter, and a skill effect element parameter.
In some embodiments of the invention, the trigger point element parameter, the effect target element parameter, the damage type element parameter, and the skill effect element parameter are each stored in the server in an independently encapsulated manner.
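One hedged way to picture this "independent encapsulation" of the four skill element parameters (all class and field names below are assumptions for illustration, not defined by the patent) is to give each element its own type and key the set by the simulated object's identifier:

```python
from dataclasses import dataclass


@dataclass
class TriggerPoint:
    frame_offset: int        # logical frame at which the skill fires


@dataclass
class EffectTarget:
    selector: str            # e.g. "nearest_enemy"


@dataclass
class DamageType:
    kind: str                # e.g. "physical" or "magical"


@dataclass
class SkillEffect:
    amount: int              # e.g. a damage or healing amount


class SkillFactorStore:
    """Maps a simulated object's identifier to its independently stored skill elements."""

    def __init__(self):
        self._by_id = {}

    def configure(self, object_id, trigger, target, damage, effect):
        # Each element stays its own object, so one kind can change without the others.
        self._by_id[object_id] = {"trigger": trigger, "target": target,
                                  "damage": damage, "effect": effect}

    def lookup(self, object_id):
        return self._by_id[object_id]
```

Because each element parameter is its own encapsulated type, one kind of parameter can be reconfigured or versioned without touching the other three.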
From the above description of the embodiments of the present invention, it can be seen that the server records the scene states generated by the interactive application scene in multiple logical frames, and records the object state corresponding to each logical frame while the simulated object executes in the interactive application scene. According to the scene states and corresponding object states generated in the multiple logical frames, the server generates the record data of the simulated object executing in the interactive application scene, and then sends video playback information, which includes the record data, to the client. Because the video playback information sent by the server includes the record data, the client can extract the scene states and object states generated in the multiple logical frames and thereby generate a scene restoration video. When the client plays the scene restoration video, the user can learn from it the execution process of the simulated object in the interactive application scene. Compared with the battle-report email of the prior art, the embodiments of the present invention achieve a visualized playback of the interaction process, so that the user learns the entire interaction process in detail.
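The server-side recording summarized above might be sketched as follows, assuming a simple list-of-dicts record format (the field names and the source of logical-frame ticks are illustrative assumptions, not specified by the patent):

```python
def record_interaction(logical_frames):
    """Build record data from an interaction run.

    logical_frames: iterable of (scene_state, {object_id: object_state}) pairs,
    one per logical frame, in execution order.
    """
    record_data = []
    for frame_id, (scene_state, object_states) in enumerate(logical_frames):
        # One entry per logical frame: the scene state plus each object's state.
        record_data.append({"frame_id": frame_id,
                            "scene_state": scene_state,
                            "object_states": dict(object_states)})
    return record_data
```

The resulting list is exactly what the video playback information would carry, and what the client-side extraction step consumes.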
An embodiment of the present invention further provides another terminal. As shown in Figure 10, for ease of description, only the parts relevant to the embodiment of the present invention are illustrated; for specific technical details not disclosed, please refer to the method part of the embodiments of the present invention. The terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and the like. Taking a mobile phone as the terminal as an example:
Figure 10 shows a block diagram of part of the structure of a mobile phone related to the terminal provided by an embodiment of the present invention. Referring to Figure 10, the mobile phone includes components such as a radio frequency (RF) circuit 1010, a memory 1020, an input unit 1030, a display unit 1040, a sensor 1050, an audio circuit 1060, a wireless fidelity (WiFi) module 1070, a processor 1080, and a power supply 1090. Those skilled in the art will understand that the mobile phone structure shown in Figure 10 does not limit the mobile phone, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
Each component of the mobile phone is specifically introduced below with reference to Figure 10:
The RF circuit 1010 may be used to receive and send signals during message transmission/reception or a call; in particular, after receiving downlink information from a base station, it delivers the information to the processor 1080 for processing, and it sends uplink data to the base station. In general, the RF circuit 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1010 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1020 may be used to store software programs and modules; the processor 1080 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 1020 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The input unit 1030 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1030 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, also called a touch screen, collects touch operations of the user on or near it (such as operations performed by the user on or near the touch panel 1031 with a finger, a stylus, or any other suitable object or attachment) and drives the corresponding connected apparatus according to a preset program. Optionally, the touch panel 1031 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1080, and can receive and execute commands sent by the processor 1080. In addition, the touch panel 1031 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1031, the input unit 1030 may also include the other input devices 1032, which may specifically include but are not limited to one or more of a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 1040 may be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The display unit 1040 may include a display panel 1041, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1031 may cover the display panel 1041; after detecting a touch operation on or near it, the touch panel 1031 transmits the operation to the processor 1080 to determine the type of the touch event, and the processor 1080 then provides a corresponding visual output on the display panel 1041 according to the type of the touch event. Although in Figure 10 the touch panel 1031 and the display panel 1041 implement the input and output functions of the mobile phone as two separate components, in some embodiments the touch panel 1031 and the display panel 1041 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 1050, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 1041 according to the ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone is moved close to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when static, and can be used for applications that recognize the mobile phone posture (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer and tapping). Other sensors that may also be configured in the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 1060, a speaker 1061, and a microphone 1062 may provide an audio interface between the user and the mobile phone. The audio circuit 1060 may transmit the electrical signal converted from received audio data to the speaker 1061, which converts it into a sound signal for output; on the other hand, the microphone 1062 converts a collected sound signal into an electrical signal, which the audio circuit 1060 receives and converts into audio data. After the audio data is output to the processor 1080 for processing, it is sent through the RF circuit 1010 to, for example, another mobile phone, or output to the memory 1020 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1070, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and the like; it provides the user with wireless broadband Internet access. Although Figure 10 shows the WiFi module 1070, it is understood that it is not a necessary component of the mobile phone and may be omitted as needed within the scope that does not change the essence of the invention.
The processor 1080 is the control center of the mobile phone. It connects the various parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 1020 and calling the data stored in the memory 1020, thereby monitoring the mobile phone as a whole. Optionally, the processor 1080 may include one or more processing units; preferably, the processor 1080 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 1080.
The mobile phone further includes the power supply 1090 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 1080 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may also include a camera, a Bluetooth module, and the like, which are not described here.
In the embodiments of the present invention, the processor 1080 included in the terminal also controls execution of the flow of the processing method of the interactive application scene performed by the terminal described above.
Figure 11 is a schematic structural diagram of a server provided by an embodiment of the present invention. The server 1100 may vary greatly due to differences in configuration or performance, and may include one or more central processing units (CPUs) 1122 (for example, one or more processors), a memory 1132, and one or more storage media 1130 (such as one or more mass storage devices) storing an application 1142 or data 1144. The memory 1132 and the storage medium 1130 may provide transient storage or persistent storage. The program stored in the storage medium 1130 may include one or more modules (not marked in the figure), and each module may include a series of instruction operations for the server. Further, the central processing unit 1122 may be configured to communicate with the storage medium 1130 and to execute, on the server 1100, the series of instruction operations in the storage medium 1130.
The server 1100 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input/output interfaces 1158, and/or one or more operating systems 1141, for example, Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps of the processing method of the interactive application scene performed by the server in the above embodiments may be based on the server structure shown in Figure 11.
In addition, it should be noted that the apparatus embodiments described above are merely exemplary. The units described as separate parts may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Moreover, in the accompanying drawings of the apparatus embodiments provided by the present invention, a connection relationship between modules indicates that they have a communication connection, which may be specifically implemented as one or more communication buses or signal lines. A person of ordinary skill in the art can understand and implement the solution without creative efforts.
Through the description of the above embodiments, it is clear to those skilled in the art that the present invention may be implemented by software plus the necessary general-purpose hardware, and certainly may also be implemented by dedicated hardware, including an application-specific integrated circuit, a dedicated CPU, a dedicated memory, dedicated components, and the like. In general, any function completed by a computer program can easily be implemented by corresponding hardware; moreover, the specific hardware structures used to implement the same function may take many forms, such as analog circuits, digital circuits, or dedicated circuits. However, for the present invention, a software program implementation is the better embodiment in most cases. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a computer floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
In conclusion, the above embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the above embodiments, a person skilled in the art should understand that the technical solutions recorded in the above embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (16)
1. A processing method of an interactive application scene, characterized in that it comprises:
a client receiving video playback information sent by a server, the video playback information including: record data generated when a simulated object executes in an interactive application scene, the simulated object being controlled and executed by the client;
the client extracting, from the record data, scene states and corresponding object states generated in multiple logical frames;
the client generating a scene restoration video according to the scene states and object states generated in the multiple logical frames, and playing the scene restoration video, the scene restoration video being used to play back the execution process of the simulated object in the interactive application scene.
2. The method according to claim 1, characterized in that, before the client receives the video playback information sent by the server, the method further comprises:
the client receiving result feedback information sent by the server, the result feedback information including: result data generated when the simulated object executes in the interactive application scene, and video playback entrance prompt information used to prompt whether to perform video playback;
when the client detects a trigger instruction on the video playback entrance, the client sending a video playback request to the server, thereby triggering execution of the following step: the client receives the video playback information sent by the server.
3. The method according to claim 1, characterized in that the scene states include: an interaction start stage, an interaction in-progress stage, and an interaction end stage;
the object states include: an entry stage, an idle state, a moving stage, an attacking stage, a skill release stage, and a preparatory action stage.
4. The method according to claim 1, characterized in that, after the client receives the video playback information sent by the server, the method further comprises:
the client determining an identifier of the simulated object according to the record data;
the client obtaining skill element information of the simulated object from the server according to the identifier of the simulated object;
the client generating, according to the skill element information, skill configuration information of the simulated object corresponding to the multiple logical frames.
5. The method according to claim 4, characterized in that the client generating the scene restoration video according to the scene states and object states generated in the multiple logical frames comprises:
the client generating an update state of the simulated object in each logical frame according to the object state generated in each logical frame and the skill configuration information of the simulated object corresponding to each logical frame;
the client generating an update state of the interactive application scene in each logical frame according to the scene state generated in each logical frame;
the client generating a video picture corresponding to each logical frame according to the update state of the simulated object and the update state of the interactive application scene in each logical frame;
the client generating the scene restoration video from the video pictures corresponding to the logical frames in the update order of the logical frames.
6. The method according to claim 4 or 5, characterized in that the skill configuration information of the simulated object corresponding to the multiple logical frames includes: a trigger point configuration parameter, an effect target configuration parameter, a damage type configuration parameter, and a skill effect configuration parameter.
7. The method according to claim 6, characterized in that the trigger point configuration parameter, the effect target configuration parameter, the damage type configuration parameter, and the skill effect configuration parameter are each configured in an independently encapsulated manner.
8. A processing method of an interactive application scene, characterized in that it comprises:
a server recording scene states generated by an interactive application scene in multiple logical frames, and recording an object state corresponding to each logical frame while a simulated object executes in the interactive application scene, the simulated object being controlled and executed by a client;
the server generating, according to the scene states and corresponding object states generated in the multiple logical frames, record data of the simulated object executing in the interactive application scene;
the server sending video playback information to the client, the video playback information including: the record data.
9. The method according to claim 8, characterized in that, after the server generates, according to the scene states and corresponding object states generated in the multiple logical frames, the record data of the simulated object executing in the interactive application scene, the method further comprises:
the server judging whether the interaction of the interactive application scene has ended;
when the interaction of the interactive application scene ends, the server sending result feedback information to the client, the result feedback information including: result data generated when the simulated object executes in the interactive application scene, and video playback entrance prompt information used to prompt whether to perform video playback;
the server receiving a video playback request sent by the client, and performing the following step according to the video playback request: the server sends the video playback information to the client.
10. The method according to claim 8 or 9, characterized in that the method further comprises:
the server obtaining an identifier of the simulated object;
the server configuring skill element information for the simulated object, and establishing a correspondence between the skill element information and the identifier, the skill element information including: a trigger point element parameter, an effect target element parameter, a damage type element parameter, and a skill effect element parameter.
11. The method according to claim 10, characterized in that the trigger point element parameter, the effect target element parameter, the damage type element parameter, and the skill effect element parameter are each stored in the server in an independently encapsulated manner.
12. A client, characterized in that it comprises:
a receiving module, configured to receive video playback information sent by a server, the video playback information including: record data generated when a simulated object executes in an interactive application scene, the simulated object being controlled and executed by the client;
a state extraction module, configured to extract, from the record data, scene states and corresponding object states generated in multiple logical frames;
a video restoration module, configured to generate a scene restoration video according to the scene states and object states generated in the multiple logical frames, and to play the scene restoration video, the scene restoration video being used to play back the execution process of the simulated object in the interactive application scene.
13. A server, characterized in that the server comprises:
a state acquisition module, configured to record scene states generated by an interactive application scene in multiple logical frames, and to record an object state corresponding to each logical frame while a simulated object executes in the interactive application scene, the simulated object being controlled and executed by a client;
a data generation module, configured to generate, according to the scene states and corresponding object states generated in the multiple logical frames, record data of the simulated object executing in the interactive application scene;
a sending module, configured to send video playback information to the client, the video playback information including: the record data.
14. A computer-readable storage medium including instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 7 or 8 to 11.
15. A client, characterized in that the client comprises: a processor and a memory;
the memory being configured to store instructions;
the processor being configured to execute the instructions in the memory to perform the method according to any one of claims 1 to 7.
16. A server, characterized in that the server comprises: a processor and a memory;
the memory being configured to store instructions;
the processor being configured to execute the instructions in the memory to perform the method according to any one of claims 8 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810771582.1A CN108924632A (en) | 2018-07-13 | 2018-07-13 | A kind for the treatment of method and apparatus and storage medium of interactive application scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108924632A true CN108924632A (en) | 2018-11-30 |
Family
ID=64411983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810771582.1A Pending CN108924632A (en) | 2018-07-13 | 2018-07-13 | A kind for the treatment of method and apparatus and storage medium of interactive application scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108924632A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109876444A (en) * | 2019-03-21 | 2019-06-14 | 腾讯科技(深圳)有限公司 | Method for exhibiting data and device, storage medium and electronic device |
CN113867734A (en) * | 2021-10-20 | 2021-12-31 | 北京思明启创科技有限公司 | Code block interpretation execution method and device, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1798594A (en) * | 2003-06-02 | 2006-07-05 | 迪斯尼实业公司 | System and method of interactive video playback |
US20090288076A1 (en) * | 2008-05-16 | 2009-11-19 | Mark Rogers Johnson | Managing Updates In A Virtual File System |
US20100083324A1 (en) * | 2008-09-30 | 2010-04-01 | Microsoft Corporation | Synchronized Video Playback Among Multiple Users Across A Network |
CN104899912A (en) * | 2014-03-07 | 2015-09-09 | 腾讯科技(深圳)有限公司 | Cartoon manufacture method, playback method and equipment |
CN105013174A (en) * | 2015-07-28 | 2015-11-04 | 珠海金山网络游戏科技有限公司 | Method and system for playing back game video |
CN107050850A (en) * | 2017-05-18 | 2017-08-18 | 腾讯科技(深圳)有限公司 | The recording and back method of virtual scene, device and playback system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181130 |