CN108833818A - video recording method, device, terminal and storage medium - Google Patents
Video recording method, device, terminal and storage medium
- Publication number
- Publication number: CN108833818A (application number CN201810688229.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- icon
- result
- interaction
- special effect
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4334—Recording operations
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a video recording method, apparatus, terminal and storage medium, belonging to the field of Internet technology. The method includes: when a video recording instruction is received, capturing images of a target object in real time and displaying an interactive effect in the recording interface, the interactive effect including at least an icon of an interactive object with which the target object interacts through a target action; detecting the action information of the target object in real time during the interaction; determining, according to the action information of the target object, the result effect corresponding to the target action, the result effect indicating the outcome of the interaction between the target action and the interactive object; displaying the result effect on the recording interface; and generating a video file from the images captured in real time and the interactive effect and result effect displayed during the interaction. By adding this interactive process, user engagement is increased, the video file captures multiple highlight moments of interaction, the video content is enriched, and the video becomes more entertaining.
Description
Technical field
The present invention relates to the field of Internet technology, and in particular to a video recording method, apparatus, terminal and storage medium.
Background technique
With the development of Internet technology, users can record videos in a video application and share them in real time on the application's network platform. A user can also use the beautification function provided by the video application to beautify faces in the video frames.
In the related art, the video recording process is as follows: the user performs various expressions, movements and the like in front of the camera, and the terminal captures multiple frames of the user in real time. During recording, the user can trigger the terminal to enable the beautification function, and the terminal applies the selected beautification to the images, for example whitening or smoothing the face, or adding icons at facial positions, such as a dog-nose icon on the nose or rabbit-ear icons at the top of the head. The terminal then generates the recorded video file from the processed frames.
Before recording, the user has to spend time designing the expressions and movements to perform, so user enthusiasm is not high and user activity in the video application is low. Moreover, the above recording process is one-sided: the terminal merely records while the user merely performs, and the terminal only applies beautification to the images, so videos recorded by the above method are not very entertaining.
Summary of the invention
Embodiments of the present invention provide a video recording method, apparatus, terminal and storage medium, which can solve the problem in the related art that recorded videos are not entertaining. The technical solution is as follows:
In one aspect, a video recording method is provided. The method includes: when a video recording instruction is received, capturing images of a target object in real time and displaying an interactive effect in a recording interface, the interactive effect including at least an icon of an interactive object with which the target object interacts through a target action; detecting the action information of the target object in real time during the interaction; determining, according to the action information of the target object, the result effect corresponding to the target action, the result effect indicating the outcome of the interaction between the target action and the interactive object; displaying the result effect on the recording interface; and generating a video file from the images captured in real time and the interactive effect and result effect displayed during the interaction.
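Taken together, the steps above describe a per-frame loop: capture an image, detect the target action, resolve the interaction result, and compose everything into the output file. The following is a minimal Python sketch of that control flow only; the names (`Frame`, `RecordingSession`) and the threshold-based hit rule are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    index: int
    head_angle: float  # degrees of head sway; sign convention is an assumption

@dataclass
class RecordingSession:
    """Sketch of the claimed loop: capture, detect action, resolve result, compose."""
    target_angle: float = 20.0          # sway angle that counts as reaching the icon
    frames: list = field(default_factory=list)
    effects: list = field(default_factory=list)

    def on_frame(self, frame: Frame) -> str:
        self.frames.append(frame)
        # Detect the target action and resolve the interaction result for this frame.
        result = "hit" if abs(frame.head_angle) >= self.target_angle else "miss"
        self.effects.append(result)
        return result

    def generate_video_file(self) -> dict:
        # Stand-in for muxing captured frames plus overlaid effects into a file.
        return {"frames": len(self.frames), "effects": self.effects}

session = RecordingSession()
session.on_frame(Frame(0, 5.0))    # below threshold: "miss"
session.on_frame(Frame(1, 25.0))   # at/above threshold: "hit"
video = session.generate_video_file()
```

Here `generate_video_file` only summarizes state; a real implementation would encode the frames and overlays with a media encoder.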
In another aspect, a video recording apparatus is provided. The apparatus includes: a display module, configured to capture images of a target object in real time when a video recording instruction is received and display an interactive effect in a recording interface, the interactive effect including at least an icon of an interactive object with which the target object interacts through a target action; a detection module, configured to detect the action information of the target object in real time during the interaction; a determining module, configured to determine, according to the action information of the target object, the result effect corresponding to the target action, the result effect indicating the outcome of the interaction between the target action and the interactive object; the display module being further configured to display the result effect on the recording interface; and a generation module, configured to generate a video file from the images captured in real time and the interactive effect and result effect displayed during the interaction.
In another aspect, a terminal is provided. The terminal includes a processor and a memory storing at least one instruction, and the instruction is loaded and executed by the processor to implement the operations performed by the above video recording method.
In another aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, and the instruction is loaded and executed by a processor to implement the operations performed by the above video recording method.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects: when a video recording instruction is received, the terminal displays an interactive effect in the recording interface so that the target object can interact with the interactive object in the effect; during the interaction, the terminal determines the result effect corresponding to the target action in real time from the action information of the target object and displays it. Adding this interactive process enriches the target object's movements during recording, makes recording more entertaining, and increases the target object's engagement. The terminal generates a video file from the images captured in real time and the interactive and result effects displayed during the interaction. The video file records the target object's many highlight moments of interaction, which greatly enriches the video content, makes the video more entertaining, and increases its information content.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a kind of schematic diagram of implementation environment provided in an embodiment of the present invention;
Fig. 2 is a kind of flow chart of video recording method provided in an embodiment of the present invention;
Fig. 3 is a kind of interface schematic diagram of video record provided in an embodiment of the present invention;
Fig. 4 is a kind of interface schematic diagram of video record provided in an embodiment of the present invention;
Fig. 5 is a kind of interface schematic diagram of video record provided in an embodiment of the present invention;
Fig. 6 is a kind of interface schematic diagram of video record provided in an embodiment of the present invention;
Fig. 7 is a kind of interface schematic diagram of video record provided in an embodiment of the present invention;
Fig. 8 is a kind of interface schematic diagram of video record provided in an embodiment of the present invention;
Fig. 9 is a kind of interface schematic diagram of video record provided in an embodiment of the present invention;
Figure 10 is a kind of interface schematic diagram of video record provided in an embodiment of the present invention;
Figure 11 is a kind of structural schematic diagram of video recording device provided in an embodiment of the present invention;
Figure 12 is a kind of structural schematic diagram of terminal provided in an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present invention. The environment includes a terminal 101 and a server 102. A video application can be installed on the terminal 101; the terminal records video in the video application and exchanges data with the server 102 through the application.

A user can open the video application and trigger the terminal 101 to start recording a video of the target object. During recording, the terminal 101 displays an interactive effect in the recording interface so that the target object interacts with the interactive object through a target action, and the terminal 101 also displays a result effect based on the outcome of that action. Through the interactive effect and the result effect, the terminal realizes the interaction with the target object. Finally, the terminal 101 generates a video file from the frames captured in real time and the interactive and result effects displayed during the interaction. The terminal 101 can also send the video file to the server 102, and the server 102 shares the video with other users on the application platform.

The video application may be a live-streaming application, a short-video application, a social application with a video recording function, or the like. The server 102 is the background server of the video application.
Fig. 2 is a flowchart of a video recording method provided by an embodiment of the present invention. The method is executed by a terminal. Referring to Fig. 2, the method includes:
201. When a video recording instruction is received, the terminal captures images of the target object and displays an interactive effect in the recording interface.

The interactive effect includes at least an icon of the interactive object with which the target object interacts through a target action. In this step, when the terminal receives the video recording instruction, it opens the camera, starts capturing images of the target object, and displays the interactive effect at its display position on the recording interface. The user can trigger the terminal's video recording instruction in the video application: when the video application is launched, the terminal can display a record button in the current interface, and when the terminal detects that the record button is triggered, the terminal receives the video recording instruction.
The terminal can display the interactive effect at a random position. In addition, the interactive effect can also include an icon indicating the change process of the target action, and the terminal can display the interactive effect in combination with the target action of the target object. Accordingly, the terminal displays the icon of the interactive object on the recording interface in either of the following two ways.
In the first way, when the interactive effect includes the icon of the interactive object, the terminal displays the icon at any position in the recording interface. The terminal can randomly select a display position in the recording interface and render the icon of the interactive object at that position. The icon can be, for example, a balloon icon or a gold coin icon that flashes at random on the recording interface; the embodiments of the present invention do not specifically limit this.
In the second way, when the interactive effect includes the icon of the interactive object and an action icon, the terminal displays the action icon in the recording interface according to the location information of the target part of the target object in the image, and displays the icon of the interactive object at any position in the recording interface.

The target part is the body part that performs the target action, and the action icon indicates the change process of the target part while it performs the target action. In this step, based on the images captured in real time, the terminal can recognize the target part in an image and display the action icon in the recording interface according to the location information of the target part in the image.

In one possible embodiment, the target part is the head of the target object and the target action is a head sway. The terminal then displays the action icon in the recording interface according to the location information of the head region in the image as follows: according to the position of the head of the target object in the image, the action icon is displayed above the head, and the action icon indicates the angle and direction of the head during the sway.
In the embodiment of the present invention, when the interactive effect is displayed in the recording interface, the target object can perform the corresponding target action with the target part to interact with the terminal, which makes the recorded video more entertaining. The action feedback of the head can be a head-sway movement. The terminal displays the action icon above the head in the image; when the head of the target object sways from side to side, the action icon sways along with the head's angle and direction, realizing the interaction with the terminal interface. The icon of the interactive object can appear at any random position in the recording interface, and the target object can sway the head toward the position of that icon. For example, the terminal can divide the top half of the recording-interface screen into a nine-square grid and display the icon of the interactive object at random in any cell of the grid.
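The nine-square-grid placement described above can be sketched as follows. Restricting the grid to the top half of the screen follows the description; the screen size and function names are illustrative assumptions.

```python
import random

def nine_grid_cells(screen_w, screen_h):
    """Partition the top half of the screen into a 3x3 grid of cells (x, y, w, h)."""
    cell_w = screen_w / 3
    cell_h = (screen_h / 2) / 3  # only the top half is used, per the description
    return [(col * cell_w, row * cell_h, cell_w, cell_h)
            for row in range(3) for col in range(3)]

def random_icon_position(screen_w, screen_h, rng=random):
    """Pick a random cell and return the icon's centre point inside it."""
    x, y, w, h = rng.choice(nine_grid_cells(screen_w, screen_h))
    return (x + w / 2, y + h / 2)

cells = nine_grid_cells(1080, 1920)      # 9 cells of 360 x 320 px on a 1080x1920 screen
pos = random_icon_position(1080, 1920)   # always lands in the top half
```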
In one possible embodiment, the interactive effect further includes an interactive expression on the icon of the interactive object. According to the location information of the face in the icon of the interactive object, the terminal displays the interactive expression, such as an exaggerated expression, at the face position of the interactive object. The target object can then imitate the interactive expression on the icon and make a matching expression, realizing an expression-based interaction with the terminal.
It should be noted that the terminal can determine the location information of the target part in the image either through the server or by itself. The process can be as follows: the terminal sends the image of the target object to the server; the server receives the image, recognizes the target part in the image through a preset recognition algorithm, obtains the location information of the target part in the image, and sends the location information back to the terminal, which receives it. Alternatively, based on the captured image of the target object, the terminal itself recognizes the target part in the image through a preset recognition algorithm and obtains its location information in the image.

When the terminal opens the camera, it sends the captured video data to the server in real time; after receiving the video data, the server converts it into images and then performs the above process of determining the location information of the target part based on the images. The target part can be the head of the target object. The preset recognition algorithm can be set as needed, which is not specifically limited in the embodiments of the present invention; for example, it can be the Adaboost (iterative) algorithm.
Further, the terminal can also beautify the image. Taking the head of the target object as the target part as an example, the terminal can preprocess the face region, for example by performing light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening, so that the face in the processed image looks better.
In one possible embodiment, a score icon is displayed in the recording interface, and the score icon indicates the score corresponding to the result of the interaction between the target action and the interactive object. In addition, the terminal displays a time icon in the current recording interface, and the time icon indicates the current duration of the interaction; for example, the terminal can display a countdown icon in the recording interface. When the terminal starts recording, it starts timing the current duration of the interaction and displays the time icon in the current interface.
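The score and countdown icons are backed by simple state that the terminal updates while recording. A hedged sketch follows; the one-minute duration comes from the later example, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HudState:
    """State behind the score icon and the bar-shaped countdown icon."""
    total_seconds: int = 60   # one-minute interaction, as in the whack-a-mole example
    elapsed: float = 0.0
    score: int = 0

    def tick(self, dt: float) -> None:
        # Advance the interaction clock, clamped at the total duration.
        self.elapsed = min(self.total_seconds, self.elapsed + dt)

    def remaining(self) -> float:
        return self.total_seconds - self.elapsed

    def bar_fraction(self) -> float:
        # Fraction of the countdown bar still filled.
        return self.remaining() / self.total_seconds

hud = HudState()
hud.tick(15)      # 15 seconds into the interaction
hud.score += 10   # one successful hit, illustrative point value
```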
In the embodiment of the present invention, the icon of the interactive object, the action icon and the time icon can all be set as needed, which is not specifically limited in the embodiments of the present invention. For example, the interaction can be a whack-a-mole game: the action icon can be a hammer icon, the interactive-object icon can be a mole icon, and the time icon can be a bar-shaped countdown icon. Of course, the interaction can also be an aircraft-battle game, a coin-catching game, or the like. For an aircraft-battle game, the action icon can be an airplane icon, and the icons of the interactive objects can be enemy airplane icons to be shot down, falling obstacle icons and the like; for a coin-catching game, the action icon can be a cornucopia icon, and the icons of the interactive objects can be falling gold coin icons. The target part can also be another part of the target object; for example, it can be the hand of the target object, with which the target object plays a balloon-pricking game with the terminal. The embodiments of the present invention are not specifically limited in this respect. In addition, the terminal can also prompt the target object, through prompt information, to imitate the expression in the icon of the interactive object during the action feedback so as to obtain a higher score.
As shown in Fig. 3, taking the whack-a-mole game as an example, the terminal displays a hammer icon above the crown of the target object's head and displays mole icons at random in the nine-square grid of the recording interface. The target object can sway the head from side to side; the terminal synchronizes the head sway of the target object to the hammer icon, and the swung hammer icon hits the mole icons that appear at random around it. The terminal can also display, at the bottom of the recording interface, a bar-shaped one-minute countdown icon and the score icon of the target object's interaction, to show the current duration of the interaction and the score achieved so far.
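The whack-a-mole example boils down to a proximity test between the hammer icon (synchronized to the head) and the mole icon, followed by a score update. A sketch under assumed screen coordinates; the hit radius and point value are illustrative, not specified by the patent.

```python
def icons_overlap(hammer, mole, hit_radius=80.0):
    """A hit occurs when the swung hammer icon lands within hit_radius of the mole icon."""
    (hx, hy), (mx, my) = hammer, mole
    return ((hx - mx) ** 2 + (hy - my) ** 2) ** 0.5 <= hit_radius

def score_swing(hammer_pos, mole_pos, score, points=10):
    """Return the updated score and the result effect to display for this swing."""
    if icons_overlap(hammer_pos, mole_pos):
        return score + points, "hit"
    return score, "miss"

# Hammer lands 20 px from the mole: within the 80 px radius, so it scores.
score, effect = score_swing((180, 160), (200, 160), score=0)
```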
In addition, as shown in Fig. 4, which is an actual interface screenshot of the terminal, the real interactive scene can be shown more vividly.
In addition, when the video application is opened for the first time, the terminal can display guide information for video recording in the current interface, and the guide information introduces the video recording process. As shown in Fig. 5, when the video recording interface is opened for the first time, the terminal can display guide information on the initial page: "Moles will appear from the nine-square grid; move your neck and give it a shake." Meanwhile, the terminal can also display a start button in the recording interface, for example a GO button; when the GO button is triggered, the terminal starts recording. In addition, as shown in Fig. 6, which is an actual interface screenshot of the terminal, the real interactive scene can be shown more vividly.
202. During the interaction, the terminal detects the action information of the target object in real time.

The action information can include the location information of the target part that performs the target action. In the embodiment of the present invention, while the target object interacts through the target action, the terminal can obtain the location information of the target part in real time so as to later judge the interaction result based on that location information.

In the embodiment of the present invention, the target action can be an action in which the target part of the target object directly triggers the terminal, for example a finger pricking a balloon icon by touching the terminal screen. The target action can also be a contactless action performed by the target part itself without touching the terminal, for example a head sway. Accordingly, this step can be implemented in either of the following two ways.
First way, terminal obtain the position of target site triggering based on the trigger position being triggered on terminal screen
Confidence breath.
When interactive process starts, target object can be interacted by triggering terminal screen with terminal, and the terminal is real
When acquire the trigger position being triggered on the terminal screen, and using location information of the trigger position in terminal screen as should
The location information of target site triggering.For example, in the balloon icon shown on the finger triggering terminal screen of target object, it should
Terminal obtains the location information of the finger triggering terminal screen.
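This first way reduces to a point-in-circle hit test between the reported touch position and each balloon icon on screen. A minimal sketch; the balloon radius and function names are illustrative, not from the patent.

```python
def balloon_hit(touch, balloon_center, radius=60.0):
    """A balloon pops when the touch position falls inside its circle."""
    tx, ty = touch
    bx, by = balloon_center
    return (tx - bx) ** 2 + (ty - by) ** 2 <= radius ** 2

def handle_touch(touch, balloons):
    """Return the indices of balloons popped by this touch event."""
    return [i for i, b in enumerate(balloons) if balloon_hit(touch, b)]

# The touch at (100, 100) is close to the first balloon but far from the second.
popped = handle_touch((100, 100), [(90, 110), (400, 500)])
```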
In the second way, the terminal obtains the location information of the target part in the image based on the image of the target object collected in real time.
In this step, the terminal can determine the location information by itself, or the location information can be obtained through a server. The process can be: the terminal sends the image of the target object to the server in real time and receives the location information returned by the server, the server determining, based on the image, the location of the target part in the image and returning it to the terminal; alternatively, the terminal identifies the target part in the image of the target object collected in real time, and obtains the location information of the target part in the image.
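The two acquisition paths just described can be sketched as below. This is a hedged illustration only: `local_detector` and `server_request` are hypothetical callables standing in for an on-device detection model and a network round trip, neither of which is specified by the patent.

```python
# Illustrative sketch of the two acquisition paths: the terminal either runs
# a local detector on the frame, or asks a server for the location.
def locate_target_part(frame, local_detector=None, server_request=None):
    """Return the target part's location in the frame."""
    if local_detector is not None:
        return local_detector(frame)   # terminal identifies the part itself
    if server_request is not None:
        return server_request(frame)   # server returns the location info
    raise ValueError("no detector available")

fake_frame = [[0] * 4 for _ in range(4)]
loc = locate_target_part(fake_frame, local_detector=lambda f: (1, 2))
```

Injecting the detector as a callable keeps the rest of the pipeline identical whichever side performs the recognition.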
The location information reflects how the target part changes during the interactive process. When the target part is the head of the target object and the target action is a head-swinging action, the location information can include the position coordinates of the head region, the deviation angle of the head, and/or the deviation direction. For example, the location information can be "deviated 20 degrees to the right", indicating that during the swing the head has swung 20 degrees to the right.
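One plausible way to produce a deviation angle and direction like the example above is to measure the tilt of a line between two facial landmarks, for example the eye centers. The landmark choice and the direction convention are assumptions for illustration, not taken from the patent.

```python
# Illustrative sketch: derive the head's deviation angle and direction from
# the line between the two eye centers (assumed landmarks).
import math

def head_deviation(left_eye, right_eye):
    """Return (angle_degrees, direction) of the eye line relative to level."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))
    direction = "right" if angle > 0 else "left" if angle < 0 else "center"
    return round(abs(angle), 1), direction
```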
203. The terminal determines the result special effect corresponding to the target action according to the action information of the target object.
The result special effect is used to indicate the interaction result of the target action interacting with the interactive object. In this step, the terminal judges, according to the action information of the target object, the interaction result between the target object and the interactive object, and obtains, according to the interaction result, the result special effect corresponding to the interaction result.
Based on the two ways of step 202, the action information of the target object can be the location information of the target part in the image, or the location information of the target part triggering the terminal screen. Correspondingly, this step can be implemented in the following two ways.
In the first way, when the action information is the location information of the target part in the image, the terminal synchronizes, according to the location information of the target part in the image, the target action performed by the target object onto the action icon of the interactive special effect, and obtains the location information of the action icon; the terminal obtains, according to the location information of the action icon, the interaction result corresponding to the target action; and the terminal obtains, according to the interaction result, the result special effect corresponding to the interaction result.
When the interactive special effect includes an action icon and the icon of the interactive object, the target object performs the target action with the head, and the target action is a head-swinging action, the process in which the terminal obtains the interaction result corresponding to the target action can be: the terminal judges, according to the location information of the action icon in the interactive special effect and the location information of the icon of the interactive object, whether the action icon hits the icon of the interactive object; when the action icon hits the icon of the interactive object, the terminal obtains a first result, and when the action icon does not hit the icon of the interactive object, the terminal obtains a second result. The first result is used to indicate that the action icon hits the icon of the interactive object; the second result is used to indicate that the action icon does not hit the icon of the interactive object.
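The hit judgment above can be sketched as a rectangle-overlap test. The bounding-box representation and the result labels are illustrative assumptions; any collision test between the two icons' display regions would serve.

```python
# Illustrative sketch: the action icon "hits" the interactive object's icon
# when their bounding rectangles overlap.
def rects_overlap(a, b):
    """Each rect is (x, y, w, h); True when the two rectangles intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def judge_hit(action_icon_rect, object_icon_rect):
    # First result: hit; second result: miss.
    if rects_overlap(action_icon_rect, object_icon_rect):
        return "first_result"
    return "second_result"
```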
The process of synchronizing the target action of the target object onto the action icon can be: the terminal obtains in real time, based on the collected frames, the swing angle and swing direction of the target part, and controls the action icon to move according to the swing angle and swing direction, so that the action icon can reflect in real time how the action of the target part changes. The location information of the action icon includes but is not limited to: the position coordinates of the action icon, the deviation angle of the action icon, and the like.
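The synchronization step can be sketched as a per-frame mapping from swing angle and direction to an icon offset. The pixels-per-degree scale and the purely horizontal motion are arbitrary assumptions made for illustration.

```python
# Illustrative sketch: move the action icon around its anchor point according
# to this frame's head swing angle and direction.
def sync_action_icon(anchor_x, anchor_y, angle_deg, direction, px_per_deg=4):
    """Return the action icon's (x, y) for this frame."""
    offset = angle_deg * px_per_deg
    if direction == "left":
        offset = -offset
    return (anchor_x + offset, anchor_y)  # vertical position stays anchored

pos = sync_action_icon(100, 50, 20, "right")  # swing 20 degrees to the right
```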
In a possible embodiment, when the icon of the interactive object further includes an interactive expression, the terminal can also judge, according to the expression of the target object in the image and the interactive expression in the icon of the interactive object, whether the expression matches the interactive expression; when the expression matches the interactive expression, a third result is obtained, the third result being used to indicate that the expression matches the interactive expression; when the expression does not match the interactive expression, a fourth result is obtained, the fourth result being used to indicate that the expression does not match the interactive expression.
In this step, the terminal stores correspondences between multiple interaction results and multiple interactive special effects, and the terminal obtains, according to the interaction result, the interactive special effect corresponding to the interaction result. In addition, the terminal can store correspondences between the multiple interaction results and multiple scores, and the terminal can also obtain, according to the interaction result, the score corresponding to the interaction result from the correspondence between interaction results and scores.
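The stored correspondences can be sketched as two lookup tables keyed by interaction result. The concrete table contents below are illustrative, not prescribed by the patent.

```python
# Illustrative sketch of the stored correspondences: interaction result ->
# special effect, and interaction result -> score.
RESULT_TO_EFFECT = {
    "first_result": "boom_effect",   # hit: explosion special effect
    "second_result": "miss_effect",  # miss
}
RESULT_TO_SCORE = {
    "first_result": 10,
    "second_result": 0,
}

def lookup(result):
    """Return the (special effect, score) pair for an interaction result."""
    return RESULT_TO_EFFECT[result], RESULT_TO_SCORE[result]
```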
The process of recognizing the expression of the target object can be executed by the terminal, or can be executed by the server. Taking obtaining the expression of the target object through the server as an example, the process can be: the server, based on the image of the target object and through a preset recognition algorithm, identifies the face region in the image. The server extracts the current facial features in the face region and, according to the current facial features, obtains the expression corresponding to the current facial features of the target object from a correspondence between expressions and facial features.
The facial features include but are not limited to: the position coordinates of the facial organs, the relative positions between the facial organs, and the like. For example, the coordinates of the mouth can represent the upward curve of the mouth corners, and the relative position between the eyes and the mouth can be used. The terminal matches the current facial features of the target object against the facial features corresponding to multiple preset expressions, determines the similarity between the current facial features and the facial features corresponding to each expression, and determines the expression whose similarity is not less than a preset threshold as the expression corresponding to the current facial features of the target object. The expression can include but is not limited to: smiling, laughing, anger, sadness, excitement, and the like.
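The threshold-based matching just described can be sketched as below. The feature vectors and the cosine-similarity measure are assumptions for illustration; the patent only requires some similarity score compared against a preset threshold.

```python
# Illustrative sketch: compare the current facial-feature vector with preset
# per-expression vectors and accept the best match above a threshold.
import math

PRESET_FEATURES = {
    "smile":   [0.9, 0.2, 0.1],
    "sadness": [0.1, 0.8, 0.3],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_expression(current, threshold=0.9):
    """Return the matched expression, or None when no similarity reaches the threshold."""
    best = max(PRESET_FEATURES, key=lambda e: cosine(current, PRESET_FEATURES[e]))
    return best if cosine(current, PRESET_FEATURES[best]) >= threshold else None
```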
In the second way, when the action information is the location information of the target part triggering the terminal screen, the terminal obtains, according to the location information triggered by the target part and the location information of the icon of the interactive object, the interaction result corresponding to the target action; the terminal obtains, according to the interaction result, the result special effect corresponding to the interaction result.
The target action can be an action in which the target part triggers the terminal screen. In a possible embodiment, the terminal can judge according to the location information triggered by the target part and the location information of the icon of the interactive object: when the location information triggered by the target part matches the location information of the icon of the interactive object, the terminal obtains a fifth result; when the location information triggered by the target part does not match the location information of the icon of the interactive object, the terminal obtains a sixth result. The fifth result is used to indicate that the target action of the target part hits the icon of the interactive object; the sixth result is used to indicate that the target action of the target part does not hit the icon of the interactive object.
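The matching judgment in the second way reduces to a point-in-rectangle test between the trigger location and the icon's display region. The rectangle representation and result labels are illustrative assumptions.

```python
# Illustrative sketch: the trigger location matches the icon when the touch
# point falls inside the icon's rectangle.
def touch_hit(touch_xy, icon_rect):
    """icon_rect is (x, y, w, h); fifth result = hit, sixth result = miss."""
    tx, ty = touch_xy
    x, y, w, h = icon_rect
    inside = x <= tx <= x + w and y <= ty <= y + h
    return "fifth_result" if inside else "sixth_result"
```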
In the second way, the process in which the terminal obtains the result special effect corresponding to the interaction result is similar to that in the first way described above, and is not repeated here.
It should be noted that, through the interactive special effect, the terminal presents a game interaction between the target object and the interactive object based on the target action of the target object, which greatly increases the interest of the video recording process without the target object having to design the action content of the recorded video by itself. This enriches the video content, improves the enthusiasm of the target object for recording video based on this method in the video application, and thus improves the activity of target objects in the video application.
204. The terminal displays the result special effect on the recording interface.
The terminal can display the result special effect at the corresponding display position on the recording interface according to the display position of the result special effect.
In a possible embodiment, the terminal can also display the score of the interaction result on the recording interface. Further, the terminal can accumulate, based on the score corresponding to each interaction result, the score of the target object in the interactive process, and record the currently accumulated score of the target object through a score icon displayed on the recording interface.
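The score accumulation can be sketched as a small running-total holder; the class shape is an illustrative assumption.

```python
# Illustrative sketch: accumulate per-result scores over the interactive
# process; the running total is what the score icon displays.
class ScoreBoard:
    def __init__(self):
        self.total = 0

    def add(self, result_score):
        """Accumulate the score of one interaction result and return the total."""
        self.total += result_score
        return self.total

board = ScoreBoard()
board.add(10)           # a hit
board.add(0)            # a miss
current = board.add(10) # running total after three interactions
```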
As shown in Fig. 7, before the gopher icon falls, when the hammer icon hits the gopher icon, the terminal can display a "BOOM" icon on the gopher icon. Of course, when the expression of the target object matches the interactive expression of the gopher, the terminal can also display icons such as "expression matched" or "expression in place". In addition, as shown in Fig. 8, Fig. 8 is an actual interface diagram of the terminal corresponding to Fig. 7, which shows the actual interactive scene more realistically.
As shown in Fig. 9, the terminal can also display a screen-shattering special effect in the lower right corner of the recording interface. In addition, the terminal can display the current score on a score icon below the recording interface. As shown in Fig. 10, Fig. 10 is an actual interface diagram of the terminal corresponding to Fig. 9, which shows the actual interactive scene more realistically.
205. The terminal generates a video file according to the images collected in real time and the interactive special effect and the result special effect displayed in the interactive process.
When the terminal receives a recording end instruction, the terminal adds, according to the multiple frames of images collected in real time, the interactive special effect and the result special effect displayed in the interactive process to the corresponding images, and generates the video file from the multiple frames of images to which the interactive special effect and the result special effect have been added.
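Step 205 can be sketched as pairing each collected frame with whatever effects were on screen at that moment and compositing them. Here `composite` only records the pairing; real rendering of the overlays and encoding into a video container are stand-ins, not specified by the patent.

```python
# Illustrative sketch: attach the per-frame special effects to the collected
# frames; the composited sequence is what the video file is generated from.
def composite(frames, effects_by_index):
    """Overlay per-frame effects, returning the frame list for the video file."""
    out = []
    for i, frame in enumerate(frames):
        overlays = effects_by_index.get(i, [])
        out.append({"frame": frame, "overlays": list(overlays)})  # "draw" step
    return out

video = composite(["f0", "f1"], {1: ["boom_effect"]})
```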
The recording end instruction can be triggered by the target object, for example, triggered by the target object through a recording end button, or triggered through a specified voice instruction. In addition, the recording end instruction can also be triggered by the terminal itself, for example, the terminal triggers the generation of the recording end instruction based on the interaction duration.
The step in which the terminal receives the recording end instruction can be: when the terminal detects that the recording end button is triggered; or when the terminal detects the specified voice; or, while the terminal is timing the interaction duration, when the current timing reaches the interaction duration; the terminal receives the recording end instruction.
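The three end conditions above can be sketched as a single predicate. The inputs are simple flags and values standing in for real event sources, and the stop phrase is an assumed example.

```python
# Illustrative sketch of the three recording-end triggers: end button pressed,
# a specified voice phrase heard, or the interaction duration elapsed.
def should_stop(end_button_pressed, heard_phrase, elapsed_s, interaction_s,
                stop_phrase="stop recording"):
    """True when any of the three recording-end conditions holds."""
    return (end_button_pressed
            or heard_phrase == stop_phrase
            or elapsed_s >= interaction_s)
```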
It should be noted that, when the terminal generates the video file, the terminal can play the video file in a preview interface. The user can also cut the video file based on the multiple frames of images included in the video file, choosing part of the multiple frames of images as the video file. The terminal can send the video file to the server, and the server shares the video file to the video application platform.
In the embodiment of the present invention, when a video recording instruction is received, the terminal can display an interactive special effect on the recording interface, so that the target object interacts with the interactive object in the special effect. In the interactive process, the terminal can determine in real time, according to the action information of the target object, the result special effect corresponding to the target action, and display the result special effect. By adding the interactive process, the actions of the target object during video recording are enriched, the interest of video recording is improved, and the activity of the target object is improved. The terminal generates the video file according to the images collected in real time and the interactive special effect and the result special effect displayed in the interactive process. The video file records multiple highlight moments of the target object in the interaction, thereby greatly enriching the video content of the recorded video, improving the interest of the video, and increasing the amount of information in the video.
Figure 11 is a schematic structural diagram of a video recording device provided by an embodiment of the present invention. Referring to Fig. 11, the device includes: a display module 1101, a detection module 1102, a determining module 1103, and a generation module 1104.
The display module 1101 is configured to, when a video recording instruction is received, collect the image of the target object in real time and display an interactive special effect on the recording interface, the interactive special effect including at least the icon of the interactive object with which the target object interacts through the target action;
the detection module 1102 is configured to detect the action information of the target object in real time in the interactive process;
the determining module 1103 is configured to determine, according to the action information of the target object, the result special effect corresponding to the target action, the result special effect being used to indicate the interaction result of the target action interacting with the interactive object;
the display module 1101 is further configured to display the result special effect on the recording interface;
the generation module 1104 is configured to generate a video file according to the images collected in real time and the interactive special effect and the result special effect displayed in the interactive process.
Optionally, the interactive special effect includes the icon of the interactive object and/or an action icon, and the display module 1101 includes:
a first display unit, configured to display the icon of the interactive object at any position in the recording interface;
a second display unit, configured to display the action icon in the recording interface according to the location information of the target part of the target object in the image, and display the icon of the interactive object at any position in the recording interface;
the target part being the part that performs the target action, and the action icon being used to indicate the change process of the target part when performing the target action.
Optionally, the target part is the head of the target object, the target action is a head-swinging action, and the second display unit is further configured to display the action icon above the head according to the location information of the head of the target object in the image, the action icon being used to indicate the angle and direction of the head in the swinging process.
Optionally, the interactive special effect further includes an interactive expression on the icon of the interactive object.
Optionally, the action information includes the location information of the target part that performs the target action, and the detection module 1102 includes:
a first acquisition unit, configured to obtain, based on the image of the target object collected in real time, the location information of the target part in the image;
a second acquisition unit, configured to obtain, based on the trigger position touched on the terminal screen, the location information triggered by the target part.
Optionally, the first acquisition unit is configured to send the image of the target object to the server in real time and receive the location information returned by the server, the server determining, based on the image, the location of the target part in the image and returning it to the terminal; or to identify, based on the image of the target object collected in real time, the target part in the image and obtain the location information of the target part in the image.
Optionally, the action information of the target object includes the location information of the target part of the target object in the image, and the determining module 1103 includes:
a synchronization unit, configured to synchronize, according to the location information of the target part in the image, the target action performed by the target object onto the action icon of the interactive special effect, obtaining the location information of the action icon;
an acquisition unit, configured to obtain, according to the location information of the action icon, the interaction result corresponding to the target action;
the acquisition unit being further configured to obtain, according to the interaction result, the result special effect corresponding to the interaction result.
Optionally, the target object performs the target action with the head, the target action is a head-swinging action, and the interactive special effect includes the action icon and the icon of the interactive object; the acquisition unit is further configured to judge, according to the location information of the action icon and the location information of the icon of the interactive object, whether the action icon hits the icon of the interactive object; when the action icon hits the icon of the interactive object, to obtain a first result, the first result being used to indicate that the action icon hits the icon of the interactive object; and when the action icon does not hit the icon of the interactive object, to obtain a second result, the second result being used to indicate that the action icon does not hit the icon of the interactive object.
Optionally, the device further includes:
a judgment module, configured to judge, according to the expression of the target object in the image and the interactive expression in the icon of the interactive object, whether the expression matches the interactive expression;
an acquisition module, configured to obtain, when the expression matches the interactive expression, a third result, the third result being used to indicate that the expression matches the interactive expression;
the acquisition module being further configured to obtain, when the expression does not match the interactive expression, a fourth result, the fourth result being used to indicate that the expression does not match the interactive expression.
Optionally, the action information of the target object includes the location information of the target part of the target object triggering the terminal screen, and the determining module 1103 is configured to obtain, according to the location information triggered by the target part and the location information of the icon of the interactive object, the interaction result corresponding to the target action, and to obtain, according to the interaction result, the result special effect corresponding to the interaction result.
Optionally, the display module 1101 is further configured to display a score icon in the recording interface, the score icon being used to indicate the score corresponding to the interaction result of the target action interacting with the interactive object.
In the embodiment of the present invention, when a video recording instruction is received, the terminal can display an interactive special effect on the recording interface, so that the target object interacts with the interactive object in the special effect. In the interactive process, the terminal can determine in real time, according to the action information of the target object, the result special effect corresponding to the target action, and display the result special effect. By adding the interactive process, the actions of the target object during video recording are enriched, the interest of video recording is improved, and the activity of the target object is improved. The terminal generates the video file according to the images collected in real time and the interactive special effect and the result special effect displayed in the interactive process. The video file records multiple highlight moments of the target object in the interaction, thereby greatly enriching the video content of the recorded video, improving the interest of the video, and increasing the amount of information in the video.
All the above optional technical solutions can be combined in any manner to form optional embodiments of the present disclosure, which are not described one by one here.
It should be noted that, when the video recording device provided by the above embodiment records video, the division of the above functional modules is used only as an example for illustration; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the video recording device provided by the above embodiment and the video recording method embodiment belong to the same concept; for the specific implementation process, refer to the method embodiment, which is not repeated here.
Figure 12 is a schematic structural diagram of a terminal provided by an embodiment of the present invention. The terminal 1200 can be: a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer. The terminal 1200 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, the terminal 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 can include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1201 can be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1201 can also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1201 can be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1201 can also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1202 can include one or more computer-readable storage media, which can be non-transitory. The memory 1202 can also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1202 is used to store at least one instruction, the at least one instruction being executed by the processor 1201 to implement the video recording method provided by the method embodiments of the present application.
In some embodiments, the terminal 1200 optionally further includes: a peripheral device interface 1203 and at least one peripheral device. The processor 1201, the memory 1202, and the peripheral device interface 1203 can be connected through a bus or a signal line. Each peripheral device can be connected to the peripheral device interface 1203 through a bus, a signal line, or a circuit board. Specifically, the peripheral devices include: at least one of a radio frequency circuit 1204, a touch display screen 1205, a camera 1206, an audio circuit 1207, a positioning component 1208, and a power supply 1209.
The peripheral device interface 1203 can be used to connect at least one I/O (Input/Output) related peripheral device to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, the memory 1202, and the peripheral device interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202, and the peripheral device interface 1203 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1204 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1204 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes but is not limited to: a metropolitan area network, mobile communication networks of various generations (2G, 3G, 4G, and 5G), a wireless local area network, and/or a WiFi (Wireless Fidelity) network. In some embodiments, the radio frequency circuit 1204 can also include a circuit related to NFC (Near Field Communication), which is not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI can include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to collect a touch signal on or above the surface of the display screen 1205. The touch signal can be input to the processor 1201 as a control signal for processing. At this time, the display screen 1205 can also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there can be one display screen 1205, arranged on the front panel of the terminal 1200; in some other embodiments, there can be at least two display screens 1205, arranged separately on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display screen 1205 can be a flexible display screen arranged on a curved surface or a folded surface of the terminal 1200. The display screen 1205 can even be set to a non-rectangular irregular shape, that is, a special-shaped screen. The display screen 1205 can be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1206 is used to collect images or video. Optionally, the camera assembly 1206 includes a front camera and a rear camera. In general, the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function through the fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting functions through the fusion of the main camera and the wide-angle camera, or other fused shooting functions. In some embodiments, the camera assembly 1206 can also include a flash. The flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1207 can include a microphone and a loudspeaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electric signals, and input them to the processor 1201 for processing, or input them to the radio frequency circuit 1204 to realize voice communication. For stereo collection or noise reduction purposes, there can be multiple microphones, arranged at different parts of the terminal 1200. The microphone can also be an array microphone or an omnidirectional collection microphone. The loudspeaker is used to convert the electric signal from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker, or can be a piezoelectric ceramic loudspeaker. When the loudspeaker is a piezoelectric ceramic loudspeaker, it can not only convert an electric signal into sound waves audible to human beings, but also convert an electric signal into sound waves inaudible to human beings for purposes such as ranging. In some embodiments, the audio circuit 1207 can also include a headphone jack.
The positioning component 1208 is used to locate the current geographic position of the terminal 1200 to implement navigation or LBS (Location Based Service). The positioning component 1208 can be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1209 is used to supply power to the various components in the terminal 1200. The power supply 1209 can be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1209 includes a rechargeable battery, the rechargeable battery can support wired charging or wireless charging. The rechargeable battery can also be used to support fast charging technology.
In some embodiments, the terminal 1200 further includes one or more sensors 1210. The one or more sensors 1210 include but are not limited to: an acceleration sensor 1211, a gyroscope sensor 1212, a pressure sensor 1213, a fingerprint sensor 1214, an optical sensor 1215, and a proximity sensor 1216.
The acceleration sensor 1211 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1201 can control the touch display screen 1205 to display the user interface in landscape view or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 can also be used for collecting game or user motion data.
The gyroscope sensor 1212 can detect the body direction and rotation angle of the terminal 1200, and the gyroscope sensor 1212 can cooperate with the acceleration sensor 1211 to collect the 3D actions of the user on the terminal 1200. According to the data collected by the gyroscope sensor 1212, the processor 1201 can implement the following functions: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1213 can be arranged on the side frame of the terminal 1200 and/or on the lower layer of the touch display screen 1205. When the pressure sensor 1213 is arranged on the side frame of the terminal 1200, the user's grip signal on the terminal 1200 can be detected, and the processor 1201 performs left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 1213. When the pressure sensor 1213 is arranged on the lower layer of the touch display screen 1205, the processor 1201 controls the operability controls on the UI according to the user's pressure operation on the touch display screen 1205. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1214 acquires the user's fingerprint; either the processor 1201 identifies the user's identity from the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 itself identifies the user's identity from the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1214 may be arranged on the front, back, or side of the terminal 1200. When a physical button or manufacturer logo is provided on the terminal 1200, the fingerprint sensor 1214 can be integrated with the physical button or manufacturer logo.
The optical sensor 1215 acquires the ambient light intensity. In one embodiment, the processor 1201 controls the display brightness of the touch display screen 1205 according to the ambient light intensity acquired by the optical sensor 1215: when the ambient light intensity is high, the display brightness of the touch display screen 1205 is turned up; when the ambient light intensity is low, the display brightness is turned down. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 according to the ambient light intensity acquired by the optical sensor 1215.
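The brightness adjustment just described can be pictured as a clamped linear mapping from ambient lux to display brightness. A minimal sketch; the function name, thresholds, and brightness range are illustrative assumptions only:

```python
def display_brightness(lux: float, lo: float = 50.0, hi: float = 1000.0) -> float:
    """Map ambient light intensity (lux) to a display brightness in
    [0.2, 1.0]: dim in the dark, full brightness in bright light.
    Thresholds and range are assumptions, not from the application."""
    if lux <= lo:
        return 0.2
    if lux >= hi:
        return 1.0
    # Linear interpolation between the two thresholds.
    return 0.2 + 0.8 * (lux - lo) / (hi - lo)
```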
The proximity sensor 1216, also called a distance sensor, is usually arranged on the front panel of the terminal 1200 and acquires the distance between the user and the front of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front of the terminal 1200 is gradually decreasing, the processor 1201 controls the touch display screen 1205 to switch from the screen-on state to the screen-off state; when the proximity sensor 1216 detects that the distance between the user and the front of the terminal 1200 is gradually increasing, the processor 1201 controls the touch display screen 1205 to switch from the screen-off state to the screen-on state.
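The screen-state switching reads naturally as a small state machine. The sketch below adds hysteresis (two thresholds) so the state does not flicker around a single distance; the threshold values and function name are assumptions beyond the text:

```python
def next_screen_state(state: str, distance_cm: float,
                      near: float = 3.0, far: float = 5.0) -> str:
    """Return the next screen state ("on"/"off") from the current one
    and the measured user-to-screen distance. Two thresholds give
    hysteresis; the specific values are illustrative assumptions."""
    if state == "on" and distance_cm < near:
        return "off"   # face approaching: blank the screen
    if state == "off" and distance_cm > far:
        return "on"    # face receding: light the screen again
    return state       # between thresholds: keep the current state
```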
Those skilled in the art will understand that the structure shown in Figure 12 does not limit the terminal 1200, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory including instructions, which can be executed by the processor in the terminal to complete the video recording method in the above embodiments. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely preferred embodiments of the present invention and is not intended to limit the invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the invention shall be included in the protection scope of the present invention.
Claims (15)
1. A video recording method, characterized in that the method comprises:
when a video recording instruction is received, acquiring an image of a target object in real time and displaying an interaction special effect in a recording interface, the interaction special effect including at least an icon of an interactive object with which the target object interacts through a target action;
during the interaction, detecting action information of the target object in real time;
determining, according to the action information of the target object, a result special effect corresponding to the target action, the result special effect indicating the interaction result of the target action with the interactive object;
displaying the result special effect on the recording interface; and
generating a video file according to the images acquired in real time and the interaction special effect and result special effect displayed during the interaction.
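The steps of claim 1 can be sketched as a per-frame pipeline. Every name here (`record_video`, `detect_action`, `pick_result_effect`, `overlay`) is hypothetical: the claim fixes only the order of operations, not an API:

```python
def record_video(frames, detect_action, pick_result_effect, overlay):
    """Per-frame pipeline mirroring claim 1: detect the target action on
    each captured frame, choose the corresponding result effect, and
    composite it onto the frame. All callables are caller-supplied;
    their behavior is not specified by the claim."""
    video = []
    for frame in frames:
        action = detect_action(frame)          # real-time detection
        effect = pick_result_effect(action)    # action -> result effect
        video.append(overlay(frame, effect))   # show effect on the frame
    return video
```

With trivial stand-in callables, two input frames yield two composited frames in order.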
2. The method according to claim 1, wherein the interaction special effect includes an icon of the interactive object and/or an action icon, and displaying the interaction special effect in the recording interface comprises:
displaying the icon of the interactive object at any position of the recording interface; or,
displaying the action icon in the recording interface according to position information of a target part of the target object in the image, and displaying the icon of the interactive object at any position of the recording interface;
wherein the target part is the part that performs the target action, and the action icon indicates the change process of the target part when performing the target action.
3. The method according to claim 2, wherein the target part is the head of the target object and the target action is a head-swing action, and displaying the action icon in the recording interface according to the position information of the target part of the target object in the image comprises:
displaying the action icon above the head according to the position information of the head of the target object in the image, the action icon indicating the angle and direction of the head during the swing.
4. The method according to claim 3, wherein the interaction special effect further includes an interaction expression on the icon of the interactive object.
5. The method according to claim 1, wherein the action information includes position information of the target part that performs the target action, and detecting the action information of the target object in real time during the interaction comprises:
obtaining the position information of the target part in the image based on the image of the target object acquired in real time; or,
obtaining the position information of the target part's trigger based on the trigger position triggered on the terminal screen.
6. The method according to claim 5, wherein obtaining the position information of the target part in the image based on the image of the target object acquired in real time comprises:
sending the image of the target object to a server in real time and receiving the position information returned by the server, the position information being the position of the target part in the image that the server returns to the terminal based on the image; or,
identifying the target part in the image based on the image of the target object acquired in real time, to obtain the position information of the target part in the image.
7. The method according to claim 1, wherein the action information of the target object includes position information of the target part of the target object in the image, and determining the result special effect corresponding to the target action according to the action information of the target object comprises:
synchronizing the target action performed by the target part onto the action icon of the interaction special effect according to the position information of the target part in the image, and obtaining position information of the action icon;
obtaining the interaction result corresponding to the target action according to the position information of the action icon; and
obtaining the result special effect corresponding to the interaction result according to the interaction result.
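Synchronizing the target action onto the action icon, the first step of claim 7, amounts to deriving the icon's position from the detected target-part position. A minimal sketch assuming a y-down screen coordinate system; the fixed offset and function name are illustrative assumptions:

```python
def sync_action_icon(head_xy, offset=(0, -40)):
    """Place the action icon relative to the detected head position;
    claim 3 shows it above the head, so the default offset moves it
    upward in y-down screen coordinates. The offset value is an
    illustrative assumption, not from the claims."""
    x, y = head_xy
    dx, dy = offset
    return (x + dx, y + dy)
```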
8. The method according to claim 7, wherein the target object performs the target action with its head, the target action is a head-swing action, and the interaction special effect includes the action icon and the icon of the interactive object; obtaining the interaction result corresponding to the target action according to the position information of the action icon comprises:
judging, according to the position information of the action icon and the position information of the icon of the interactive object, whether the action icon hits the icon of the interactive object;
when the action icon hits the icon of the interactive object, obtaining a first result, the first result indicating that the action icon hits the icon of the interactive object; and
when the action icon does not hit the icon of the interactive object, obtaining a second result, the second result indicating that the action icon does not hit the icon of the interactive object.
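The hit judgment of claim 8 can be modeled as an axis-aligned bounding-box overlap test between the two icons. The `(x, y, w, h)` box representation and the `"hit"`/`"miss"` labels standing in for the first and second results are assumptions for illustration:

```python
def icons_hit(action_box, target_box) -> bool:
    """Axis-aligned overlap test; each box is (x, y, w, h) with the
    origin at the top-left corner (an assumed representation)."""
    ax, ay, aw, ah = action_box
    tx, ty, tw, th = target_box
    return ax < tx + tw and tx < ax + aw and ay < ty + th and ty < ay + ah

def interaction_result(action_box, target_box) -> str:
    # Claim 8's two outcomes: first result (hit) or second result (miss).
    return "hit" if icons_hit(action_box, target_box) else "miss"
```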
9. The method according to claim 8, wherein the method further comprises:
judging, according to the expression of the target object in the image and the interaction expression on the icon of the interactive object, whether the expression matches the interaction expression;
when the expression matches the interaction expression, obtaining a third result, the third result indicating that the expression matches the interaction expression; and
when the expression does not match the interaction expression, obtaining a fourth result, the fourth result indicating that the expression does not match the interaction expression.
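One plausible way to realize claim 9's match/mismatch decision is a similarity comparison between expression feature vectors. The feature representation, cosine measure, and threshold below are all assumptions; the claim only requires that a match or mismatch be decided:

```python
import math

def expressions_match(feat_a, feat_b, threshold: float = 0.9) -> bool:
    """Compare two expression feature vectors by cosine similarity and
    call them a match above the threshold. Representation, metric, and
    threshold are illustrative assumptions, not part of the claim."""
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum(a * a for a in feat_a))
    norm_b = math.sqrt(sum(b * b for b in feat_b))
    return dot / (norm_a * norm_b) >= threshold
```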
10. The method according to claim 1, wherein the action information of the target object includes position information of the target part of the target object triggering the terminal screen, and determining the result special effect corresponding to the target action according to the action information of the target object comprises:
obtaining the interaction result corresponding to the target action according to the position information of the target part's trigger and the position information of the icon of the interactive object; and
obtaining the result special effect corresponding to the interaction result according to the interaction result.
11. The method according to claim 1, wherein the method further comprises:
flashing a score icon in the recording interface, the score icon indicating the score corresponding to the interaction result of the target action with the interactive object.
12. A video recording apparatus, characterized in that the apparatus comprises:
a display module, configured to acquire an image of a target object in real time when a video recording instruction is received and display an interaction special effect in a recording interface, the interaction special effect including at least an icon of an interactive object with which the target object interacts through a target action;
a detection module, configured to detect action information of the target object in real time during the interaction;
a determining module, configured to determine, according to the action information of the target object, a result special effect corresponding to the target action, the result special effect indicating the interaction result of the target action with the interactive object;
the display module being further configured to display the result special effect on the recording interface; and
a generation module, configured to generate a video file according to the images acquired in real time and the interaction special effect and result special effect displayed during the interaction.
13. The apparatus according to claim 12, wherein the interaction special effect includes an icon of the interactive object and/or an action icon, and the display module comprises:
a first display unit, configured to display the icon of the interactive object at any position of the recording interface; and
a second display unit, configured to display the action icon in the recording interface according to position information of a target part of the target object in the image, and display the icon of the interactive object at any position of the recording interface;
wherein the target part is the part that performs the target action, and the action icon indicates the change process of the target part when performing the target action.
14. A terminal, characterized in that the terminal comprises a processor and a memory, the memory storing at least one instruction, and the instruction being loaded and executed by the processor to implement the operations performed by the video recording method according to any one of claims 1 to 11.
15. A computer-readable storage medium, characterized in that the storage medium stores at least one instruction, the instruction being loaded and executed by a processor to implement the operations performed by the video recording method according to any one of claims 1 to 11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810688229.7A CN108833818B (en) | 2018-06-28 | 2018-06-28 | Video recording method, device, terminal and storage medium |
CN202110296689.7A CN112911182B (en) | 2018-06-28 | 2018-06-28 | Game interaction method, device, terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810688229.7A CN108833818B (en) | 2018-06-28 | 2018-06-28 | Video recording method, device, terminal and storage medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110296689.7A Division CN112911182B (en) | 2018-06-28 | 2018-06-28 | Game interaction method, device, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108833818A true CN108833818A (en) | 2018-11-16 |
CN108833818B CN108833818B (en) | 2021-03-26 |
Family
ID=64133599
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110296689.7A Active CN112911182B (en) | 2018-06-28 | 2018-06-28 | Game interaction method, device, terminal and storage medium |
CN201810688229.7A Active CN108833818B (en) | 2018-06-28 | 2018-06-28 | Video recording method, device, terminal and storage medium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110296689.7A Active CN112911182B (en) | 2018-06-28 | 2018-06-28 | Game interaction method, device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN112911182B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109348277A (en) * | 2018-11-29 | 2019-02-15 | 北京字节跳动网络技术有限公司 | Move pixel special video effect adding method, device, terminal device and storage medium |
CN109803165A (en) * | 2019-02-01 | 2019-05-24 | 北京达佳互联信息技术有限公司 | Method, apparatus, terminal and the storage medium of video processing |
CN109889893A (en) * | 2019-04-16 | 2019-06-14 | 北京字节跳动网络技术有限公司 | Method for processing video frequency, device and equipment |
CN110110142A (en) * | 2019-04-19 | 2019-08-09 | 北京大米科技有限公司 | Method for processing video frequency, device, electronic equipment and medium |
CN111258415A (en) * | 2018-11-30 | 2020-06-09 | 北京字节跳动网络技术有限公司 | Video-based limb movement detection method, device, terminal and medium |
CN111586423A (en) * | 2020-04-24 | 2020-08-25 | 腾讯科技(深圳)有限公司 | Live broadcast room interaction method and device, storage medium and electronic device |
CN111659114A (en) * | 2019-03-08 | 2020-09-15 | 阿里巴巴集团控股有限公司 | Interactive game generation method and device, interactive game processing method and device and electronic equipment |
CN111695376A (en) * | 2019-03-13 | 2020-09-22 | 阿里巴巴集团控股有限公司 | Video processing method, video processing device and electronic equipment |
CN111857923A (en) * | 2020-07-17 | 2020-10-30 | 北京字节跳动网络技术有限公司 | Special effect display method and device, electronic equipment and computer readable medium |
CN111914523A (en) * | 2020-08-19 | 2020-11-10 | 腾讯科技(深圳)有限公司 | Multimedia processing method and device based on artificial intelligence and electronic equipment |
CN112148188A (en) * | 2020-09-23 | 2020-12-29 | 北京市商汤科技开发有限公司 | Interaction method and device in augmented reality scene, electronic equipment and storage medium |
CN112243065A (en) * | 2020-10-19 | 2021-01-19 | 维沃移动通信有限公司 | Video recording method and device |
CN112396676A (en) * | 2019-08-16 | 2021-02-23 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN112702625A (en) * | 2020-12-23 | 2021-04-23 | Oppo广东移动通信有限公司 | Video processing method and device, electronic equipment and storage medium |
CN112887631A (en) * | 2019-11-29 | 2021-06-01 | 北京字节跳动网络技术有限公司 | Method and device for displaying object in video, electronic equipment and computer-readable storage medium |
CN113014949A (en) * | 2021-03-10 | 2021-06-22 | 读书郎教育科技有限公司 | Student privacy protection system and method for smart classroom course playback |
CN114567805A (en) * | 2022-02-24 | 2022-05-31 | 北京字跳网络技术有限公司 | Method and device for determining special effect video, electronic equipment and storage medium |
WO2022116751A1 (en) * | 2020-12-02 | 2022-06-09 | 北京字节跳动网络技术有限公司 | Interaction method and apparatus, and terminal, server and storage medium |
WO2022194097A1 (en) * | 2021-03-15 | 2022-09-22 | 北京字跳网络技术有限公司 | Object control method and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070162854A1 (en) * | 2006-01-12 | 2007-07-12 | Dan Kikinis | System and Method for Interactive Creation of and Collaboration on Video Stories |
US20120189168A1 (en) * | 2006-02-07 | 2012-07-26 | Qualcomm Incorporated | Multi-mode region-of-interest video object segmentation |
CN103413468A (en) * | 2013-08-20 | 2013-11-27 | 苏州跨界软件科技有限公司 | Parent-child educational method based on a virtual character |
CN105617658A (en) * | 2015-12-25 | 2016-06-01 | 新浪网技术(中国)有限公司 | Multiplayer moving shooting game system based on real indoor environment |
CN106231415A (en) * | 2016-08-18 | 2016-12-14 | 北京奇虎科技有限公司 | A kind of interactive method and device adding face's specially good effect in net cast |
CN106730815A (en) * | 2016-12-09 | 2017-05-31 | 福建星网视易信息系统有限公司 | The body-sensing interactive approach and system of a kind of easy realization |
CN107944397A (en) * | 2017-11-27 | 2018-04-20 | 腾讯音乐娱乐科技(深圳)有限公司 | Video recording method, device and computer-readable recording medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1689172B1 (en) * | 2001-06-05 | 2016-03-09 | Microsoft Technology Licensing, LLC | Interactive video display system |
US8549442B2 (en) * | 2005-12-12 | 2013-10-01 | Sony Computer Entertainment Inc. | Voice and video control of interactive electronically simulated environment |
CN107613310B (en) * | 2017-09-08 | 2020-08-04 | 广州华多网络科技有限公司 | Live broadcast method and device and electronic equipment |
2018
- 2018-06-28 CN CN202110296689.7A patent/CN112911182B/en active Active
- 2018-06-28 CN CN201810688229.7A patent/CN108833818B/en active Active
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109348277A (en) * | 2018-11-29 | 2019-02-15 | 北京字节跳动网络技术有限公司 | Move pixel special video effect adding method, device, terminal device and storage medium |
CN109348277B (en) * | 2018-11-29 | 2020-02-07 | 北京字节跳动网络技术有限公司 | Motion pixel video special effect adding method and device, terminal equipment and storage medium |
CN111258415B (en) * | 2018-11-30 | 2021-05-07 | 北京字节跳动网络技术有限公司 | Video-based limb movement detection method, device, terminal and medium |
CN111258415A (en) * | 2018-11-30 | 2020-06-09 | 北京字节跳动网络技术有限公司 | Video-based limb movement detection method, device, terminal and medium |
CN109803165A (en) * | 2019-02-01 | 2019-05-24 | 北京达佳互联信息技术有限公司 | Method, apparatus, terminal and the storage medium of video processing |
CN111659114A (en) * | 2019-03-08 | 2020-09-15 | 阿里巴巴集团控股有限公司 | Interactive game generation method and device, interactive game processing method and device and electronic equipment |
CN111659114B (en) * | 2019-03-08 | 2023-09-15 | 阿里巴巴集团控股有限公司 | Interactive game generation method and device, interactive game processing method and device and electronic equipment |
CN111695376A (en) * | 2019-03-13 | 2020-09-22 | 阿里巴巴集团控股有限公司 | Video processing method, video processing device and electronic equipment |
WO2020211422A1 (en) * | 2019-04-16 | 2020-10-22 | 北京字节跳动网络技术有限公司 | Video processing method and apparatus, and device |
CN109889893A (en) * | 2019-04-16 | 2019-06-14 | 北京字节跳动网络技术有限公司 | Method for processing video frequency, device and equipment |
CN110110142A (en) * | 2019-04-19 | 2019-08-09 | 北京大米科技有限公司 | Method for processing video frequency, device, electronic equipment and medium |
JP2022545394A (en) * | 2019-08-16 | 2022-10-27 | 北京字節跳動網絡技術有限公司 | IMAGE PROCESSING METHOD, DEVICE, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM |
US11516411B2 (en) | 2019-08-16 | 2022-11-29 | Beijing Bytedance Network Technology Co., Ltd. | Image processing method and apparatus, electronic device and computer-readable storage medium |
JP7338041B2 (en) | 2019-08-16 | 2023-09-04 | 北京字節跳動網絡技術有限公司 | IMAGE PROCESSING METHOD, DEVICE, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM |
CN112396676A (en) * | 2019-08-16 | 2021-02-23 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
WO2021031847A1 (en) * | 2019-08-16 | 2021-02-25 | 北京字节跳动网络技术有限公司 | Image processing method and apparatus, electronic device and computer-readable storage medium |
CN112396676B (en) * | 2019-08-16 | 2024-04-02 | 北京字节跳动网络技术有限公司 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
CN112887631B (en) * | 2019-11-29 | 2022-08-12 | 北京字节跳动网络技术有限公司 | Method and device for displaying object in video, electronic equipment and computer-readable storage medium |
CN112887631A (en) * | 2019-11-29 | 2021-06-01 | 北京字节跳动网络技术有限公司 | Method and device for displaying object in video, electronic equipment and computer-readable storage medium |
CN111586423A (en) * | 2020-04-24 | 2020-08-25 | 腾讯科技(深圳)有限公司 | Live broadcast room interaction method and device, storage medium and electronic device |
CN111586423B (en) * | 2020-04-24 | 2021-09-10 | 腾讯科技(深圳)有限公司 | Live broadcast room interaction method and device, storage medium and electronic device |
CN111857923A (en) * | 2020-07-17 | 2020-10-30 | 北京字节跳动网络技术有限公司 | Special effect display method and device, electronic equipment and computer readable medium |
CN111914523A (en) * | 2020-08-19 | 2020-11-10 | 腾讯科技(深圳)有限公司 | Multimedia processing method and device based on artificial intelligence and electronic equipment |
CN112148188A (en) * | 2020-09-23 | 2020-12-29 | 北京市商汤科技开发有限公司 | Interaction method and device in augmented reality scene, electronic equipment and storage medium |
CN112243065B (en) * | 2020-10-19 | 2022-02-01 | 维沃移动通信有限公司 | Video recording method and device |
CN112243065A (en) * | 2020-10-19 | 2021-01-19 | 维沃移动通信有限公司 | Video recording method and device |
WO2022116751A1 (en) * | 2020-12-02 | 2022-06-09 | 北京字节跳动网络技术有限公司 | Interaction method and apparatus, and terminal, server and storage medium |
CN112702625B (en) * | 2020-12-23 | 2024-01-02 | Oppo广东移动通信有限公司 | Video processing method, device, electronic equipment and storage medium |
CN112702625A (en) * | 2020-12-23 | 2021-04-23 | Oppo广东移动通信有限公司 | Video processing method and device, electronic equipment and storage medium |
CN113014949B (en) * | 2021-03-10 | 2022-05-06 | 读书郎教育科技有限公司 | Student privacy protection system and method for smart classroom course playback |
CN113014949A (en) * | 2021-03-10 | 2021-06-22 | 读书郎教育科技有限公司 | Student privacy protection system and method for smart classroom course playback |
WO2022194097A1 (en) * | 2021-03-15 | 2022-09-22 | 北京字跳网络技术有限公司 | Object control method and device |
CN114567805A (en) * | 2022-02-24 | 2022-05-31 | 北京字跳网络技术有限公司 | Method and device for determining special effect video, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112911182B (en) | 2022-08-23 |
CN112911182A (en) | 2021-06-04 |
CN108833818B (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108833818A (en) | video recording method, device, terminal and storage medium | |
US11710351B2 (en) | Action recognition method and apparatus, and human-machine interaction method and apparatus | |
US20170255767A1 (en) | Identity Authentication Method, Identity Authentication Device, And Terminal | |
CN111382624B (en) | Action recognition method, device, equipment and readable storage medium | |
CN107967706A (en) | Processing method, device and the computer-readable recording medium of multi-medium data | |
CN110087123A (en) | Video file production method, device, equipment and readable storage medium storing program for executing | |
CN107844781A (en) | Face character recognition methods and device, electronic equipment and storage medium | |
CN108401124A (en) | The method and apparatus of video record | |
CN110141857A (en) | Facial display methods, device, equipment and the storage medium of virtual role | |
CN108900858A (en) | A kind of method and apparatus for giving virtual present | |
CN108256505A (en) | Image processing method and device | |
CN108829881A (en) | video title generation method and device | |
US20220309836A1 (en) | Ai-based face recognition method and apparatus, device, and medium | |
CN110222789A (en) | Image-recognizing method and storage medium | |
CN110166786A (en) | Virtual objects transfer method and device | |
CN110956580B (en) | Method, device, computer equipment and storage medium for changing face of image | |
CN110136228B (en) | Face replacement method, device, terminal and storage medium for virtual character | |
CN109634489A (en) | Method, apparatus, equipment and the readable storage medium storing program for executing made comments | |
CN110533585A (en) | A kind of method, apparatus that image is changed face, system, equipment and storage medium | |
CN109646944A (en) | Control information processing method, device, electronic equipment and storage medium | |
CN110263617A (en) | Three-dimensional face model acquisition methods and device | |
CN108965922A (en) | Video cover generation method, device and storage medium | |
CN109327608A (en) | Method, terminal, server and the system that song is shared | |
CN110135336A (en) | Training method, device and the storage medium of pedestrian's generation model | |
CN109522863A (en) | Ear's critical point detection method, apparatus and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |