CN108833818B - Video recording method, device, terminal and storage medium - Google Patents


Publication number
CN108833818B
Authority: CN (China)
Prior art keywords: icon, target, interactive, action, target object
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201810688229.7A
Other languages: Chinese (zh)
Other versions: CN108833818A
Inventors: 陈春勇, 余墉林, 陈骢
Current assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority applications: CN201810688229.7A (CN108833818B), CN202110296689.7A (CN112911182B)
Publications: CN108833818A (application), CN108833818B (grant)

Classifications

    • H04N 5/76 — Television signal recording (H04N 5/00, Details of television systems)
    • G06F 3/04817 — Interaction techniques based on graphical user interfaces [GUI] using icons (G06F 3/01, input arrangements for interaction between user and computer)
    • G06V 40/10 — Recognition of human or animal bodies or body parts in image or video data
    • H04N 21/234 — Processing of video elementary streams, e.g. splicing of video streams (server-side)
    • H04N 21/235 — Processing of additional data, e.g. content descriptors (server-side)
    • H04N 21/4334 — Recording operations (client-side content storage)

Abstract

The invention discloses a video recording method, a video recording apparatus, a terminal, and a storage medium, and belongs to the field of internet technology. The method comprises the following steps: when a video recording instruction is received, capturing images of a target object in real time and displaying an interactive special effect in a recording interface, where the interactive special effect comprises at least an icon of an interactive object with which the target object interacts through a target action; detecting action information of the target object in real time during the interaction; determining, according to the action information of the target object, a result special effect corresponding to the target action, where the result special effect indicates the interaction result between the target action and the interactive object; displaying the result special effect in the recording interface; and generating a video file from the images captured in real time together with the interactive special effect and the result special effect displayed during the interaction. By adding an interactive process, the method raises user engagement, and the resulting video file captures multiple highlight moments of the interaction, which enriches the video content and makes the video more entertaining.

Description

Video recording method, device, terminal and storage medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a video recording method, apparatus, terminal, and storage medium.
Background
With the development of internet technology, a user can record a video in a video application and share it in real time on the application's network platform. The user can also beautify the face in the video picture using the beautification ("beauty") functions built into the video application.
In the related art, the video recording process is as follows: the user performs various expressions, actions, and so on in front of the camera, and the terminal captures multiple frames of the user in real time. The user may also enable the beauty function, and during recording the terminal processes each image according to the beauty options the user selected, for example whitening or smoothing the face region in the image. Alternatively, icons are added to the facial features in the image, for example a dog nose on the nose or rabbit ears on the top of the head. The terminal then generates the recorded video file from the processed frames.
Before recording a video, the user must spend time designing the expressions and actions to perform, so user enthusiasm is low and user activity in the video application is correspondingly low. Moreover, the recording process is in effect one-way: the terminal records while the user performs, and the terminal does nothing beyond beauty processing, so a video recorded in this way offers little interest.
Disclosure of Invention
Embodiments of the present invention provide a video recording method, apparatus, terminal, and storage medium, which can solve the problem in the related art that recorded videos lack interest. The technical solution is as follows:
in one aspect, a video recording method is provided, the method comprising:
when a video recording instruction is received, capturing images of a target object in real time, and displaying an interactive special effect in a recording interface, where the interactive special effect comprises at least an icon of an interactive object with which the target object interacts through a target action;
detecting action information of the target object in real time during the interaction;
determining, according to the action information of the target object, a result special effect corresponding to the target action, where the result special effect indicates the interaction result between the target action and the interactive object;
displaying the result special effect in the recording interface; and
generating a video file from the images captured in real time and the interactive special effect and result special effect displayed during the interaction.
In another aspect, a video recording apparatus is provided, the apparatus comprising:
a display module, configured to capture images of a target object in real time when a video recording instruction is received, and display an interactive special effect in a recording interface, where the interactive special effect comprises at least an icon of an interactive object with which the target object interacts through a target action;
a detection module, configured to detect action information of the target object in real time during the interaction;
a determining module, configured to determine, according to the action information of the target object, a result special effect corresponding to the target action, where the result special effect indicates the interaction result between the target action and the interactive object;
the display module being further configured to display the result special effect in the recording interface; and
a generating module, configured to generate a video file from the images captured in real time and the interactive special effect and result special effect displayed during the interaction.
In another aspect, a terminal is provided, comprising a processor and a memory, where at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the operations performed by the video recording method described above.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the operations performed by the video recording method described above.
The technical solution provided by the embodiments of the present invention has the following beneficial effects:
When a video recording instruction is received, the terminal can display an interactive special effect in the recording interface so that the target object interacts with the interactive object in the special effect. During the interaction, the terminal determines in real time, according to the action information of the target object, the result special effect corresponding to the target action and displays it. Adding this interactive process enriches the actions of the target object during recording, makes recording more entertaining, and raises the target object's engagement. The terminal then generates a video file from the images captured in real time and the interactive and result special effects displayed during the interaction. Because the video file records multiple highlight moments of the target object's interaction, the recorded video is much richer in content, more entertaining, and carries more information.
Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a video recording method according to an embodiment of the present invention;
Figs. 3 to 10 are schematic diagrams of video recording interfaces according to embodiments of the present invention;
Fig. 11 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present invention. The implementation environment includes a terminal 101 and a server 102. A video application can be installed on the terminal 101; the terminal 101 records videos in the video application and exchanges data with the server 102 through the video application.
The user can start the video application and trigger the terminal 101 to begin recording a video of the target object. During recording, the terminal 101 displays an interactive special effect on the recording interface so that the target object interacts with the interactive object through a target action, and the terminal 101 also displays a result special effect based on the interaction result of that action. In this way, the terminal carries out an interactive process with the target object through the interactive special effect and the result special effect. Finally, the terminal 101 generates a video file from the multi-frame images captured in real time and the interactive and result special effects displayed during the interaction. The terminal 101 may also send the video file to the server 102, through which it can be shared with other users on the video application platform.
The video application may be a live-streaming application with a video recording function, a short-video application, a social application, or the like. The server 102 is the background server of the video application.
Fig. 2 is a flowchart of a video recording method according to an embodiment of the present invention. The execution subject of the embodiment of the present invention is a terminal, and referring to fig. 2, the method includes:
201. When a video recording instruction is received, the terminal captures images of the target object and displays the interactive special effect in the recording interface.
The interactive special effect comprises at least an icon of an interactive object with which the target object interacts through a target action. In this step, when the terminal receives the video recording instruction, it starts the camera to begin capturing images of the target object and renders the interactive special effect on the recording interface at the effect's display position. The user can trigger the video recording instruction in the video application: when the video application starts, the terminal displays a record button on the current interface, and when the terminal detects that the record button is triggered, it receives the video recording instruction.
The terminal may display the interactive special effect at random positions. In addition, the interactive special effect may further include an icon representing the change process of the target action, and the terminal may display the interactive special effect in combination with the target action of the target object. Correspondingly, the terminal can display the icon of the interactive object on the recording interface in the following two ways.
In a first mode, when the interactive special effect comprises the icon of the interactive object, the terminal displays the icon of the interactive object at any position of the recording interface.
The terminal can randomly select a display position on the recording interface, and the icon of the interactive object is rendered at the display position. The icon of the interactive object may be a balloon icon, a gold coin icon, or the like that randomly flashes on the recording interface, which is not specifically limited in this embodiment of the present invention.
In a second mode, when the interactive special effect comprises an icon of the interactive object and an action icon, the terminal displays the action icon in the recording interface according to the position information of the target part of the target object in the image, and displays the icon of the interactive object at any position of the recording interface.
The target part is the part that performs the target action, and the action icon represents the change process of the target part as it performs the target action. In this step, the terminal may identify the target part in the image captured in real time and display the action icon in the recording interface according to the position information of the target part in the image.
In a possible implementation, the target part is the head of the target object and the target action is a head-swinging action. Displaying the action icon in the recording interface according to the position information of the target part in the image then comprises: displaying the action icon above the head according to the position information of the head of the target object in the image, where the action icon represents the angle and direction of the head during the swing.
In the embodiment of the invention, when the interactive special effect is displayed in the recording interface, the target object can make the corresponding target action with the target part, realizing an interactive process with the terminal and making the recorded video more entertaining. The action feedback of the head may be the head-swinging action: the terminal displays the action icon above the head in the image, and when the head of the target object swings left and right, the action icon swings along with it, following the angle and direction of the swing, thereby realizing interaction with the terminal interface. The icon of the interactive object can appear at random positions on the recording interface, and the target object can swing the head toward the position of that icon. For example, the terminal may divide the upper half of the screen of the recording interface into a nine-square (3x3) grid and randomly display the icon of the interactive object in any cell of the grid.
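The nine-square-grid layout above can be sketched as follows. This is a minimal illustration with assumed names (`grid_cell`, `spawn_cell`) and an assumed 1080x1920 portrait screen; the patent does not prescribe any particular implementation.

```python
import random

def grid_cell(x, y, width, height):
    """Map a point in the upper half of the screen to one of the
    3x3 grid cells, numbered 0..8 left-to-right, top-to-bottom.
    Values outside the grid are clamped to the nearest cell."""
    col = min(2, int(x / (width / 3)))
    row = min(2, int(y / ((height / 2) / 3)))  # grid covers the upper half
    return row * 3 + col

def spawn_cell():
    """Pick a random cell in which to show the interactive-object icon."""
    return random.randrange(9)

# a hammer icon near the centre of the upper half lands in the middle cell
print(grid_cell(540, 480, 1080, 1920))  # → 4
```

Comparing the cell of the action icon with the spawn cell of the interactive-object icon would give one possible (cell-based) hit test.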
In one possible embodiment, the interactive special effect further comprises an interactive expression on the icon of the interactive object. The terminal renders the interactive expression at the face position of the interactive object according to the position information of the face in the icon. The target object can then imitate the interactive expression shown on the icon, making an expression consistent with it and thus realizing expression-based interaction with the terminal.
The terminal may determine the position information of the target part through the server, or determine it by itself. The process may be: the terminal sends the image of the target object to a server; the server receives the image, identifies the target part in the image through a preset recognition algorithm, obtains the position information of the target part in the image, and returns that position information to the terminal, which receives it. Alternatively, the terminal itself runs the preset recognition algorithm on the captured image of the target object, identifies the target part, and obtains its position information in the image.
When the terminal starts the camera, it sends the captured video data to the server in real time; after receiving the video data, the server converts it into image frames and then performs the above process of determining the position information of the target part on those frames. The target part may be the head of the target object. The preset recognition algorithm may be chosen as needed and is not specifically limited in this embodiment; for example, it may be the Adaboost algorithm (an iterative boosting algorithm).
Further, the server may also beautify the image. Where the target part is the head of the target object, the beautification may comprise preprocessing the face region, for example with light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening, so that the face in the processed image looks better.
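Histogram equalization, one of the preprocessing steps listed above, can be sketched in a few lines. This is a generic textbook version operating on a flat list of grayscale values, not the server's actual beautification code.

```python
def equalize_histogram(pixels, levels=256):
    """Spread the gray levels of a face region over the full range.

    `pixels` is a flat list of integer gray values in [0, levels).
    A generic sketch of the preprocessing step, not the patent's code.
    """
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:                     # cumulative distribution
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                       # constant image: nothing to spread
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

print(equalize_histogram([0, 64, 128, 255]))  # → [0, 85, 170, 255]
```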
In one possible implementation, a score icon is displayed in the recording interface, representing the score corresponding to the interaction results of the target action with the interactive object. In addition, the terminal displays a time icon in the current recording interface, indicating the current duration of the interaction process; for example, the terminal may display a countdown icon. When recording starts, the terminal begins timing the interaction process and displays the time icon on the current interface.
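The time and score icons imply some per-session state. A minimal sketch, with assumed names and a 60-second default, of what the terminal might track behind them:

```python
class InteractionSession:
    """Tracks the countdown and score shown by the time and score icons.
    A hypothetical structure; the patent does not specify one."""

    def __init__(self, duration_seconds=60):
        self.remaining = duration_seconds
        self.score = 0

    def tick(self, seconds=1):
        """Advance the countdown; returns False once time is up."""
        self.remaining = max(0, self.remaining - seconds)
        return self.remaining > 0

    def record_hit(self, points=10):
        """Called when the action icon hits the interactive object."""
        self.score += points

session = InteractionSession(duration_seconds=60)
session.record_hit()
session.tick(59)
print(session.score, session.remaining)  # → 10 1
```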
In the embodiment of the present invention, the icon of the interactive object, the action icon, and the time icon may be chosen as needed and are not specifically limited. For example, the interaction may be a whack-a-mole game, in which case the action icon may be a hammer icon, the icon of the interactive object a mole icon, and the time icon a countdown bar. The interaction may equally be an aircraft-battle game or a coin-catching game: in an aircraft-battle game, the action icon may be an aircraft icon and the icons of the interactive objects may be enemy aircraft to be hit, falling obstacles, and so on; in a coin-catching game, the action icon may be a treasure-bowl icon and the icon of the interactive object a falling gold-coin icon. The target part may also be another part of the target object; for example, it may be the hand, with which the target object plays a balloon-popping game with the terminal, which is likewise not specifically limited in this embodiment. In addition, during the action feedback process, the terminal can prompt the target object to imitate the expression in the icon of the interactive object in order to obtain a higher score.
As shown in fig. 3, taking a whack-a-mole game as an example, the terminal displays a hammer icon above the head of the target object and randomly displays mole icons in the nine-square grid area of the recording interface. The target object swings the head left and right, the terminal synchronizes the head-swinging motion to the hammer icon, and the user can hit the randomly appearing mole icons with the swinging hammer. Below the recording interface, the terminal can also display a bar icon for a one-minute countdown and a score icon for the ongoing interaction, to show the target object the current duration of the interaction and the result achieved so far. Fig. 4 is an actual interface screenshot of the terminal, which shows a real interactive scene more concretely.
When the video application is opened for the first time, the terminal can also display guide information for video recording on the current interface, introducing the recording process. As shown in fig. 5, when the recording interface is opened for the first time, the terminal may display guide information on the initial page, such as: "A mole pops up in the nine-square grid; swing your head and neck to whack it." The terminal can also display a start button, such as a "GO" button, on the recording interface, and when the GO button is triggered, recording begins. Fig. 6 is the corresponding actual interface screenshot, showing a real interactive scene more concretely.
202. During the interaction, the terminal detects the action information of the target object in real time.
The action information may include the position information of the target part that performs the target action. In the embodiment of the invention, while the target object interacts through the target action, the terminal collects the position information of the target part in real time so that the interaction result can be judged from it.
In the embodiment of the present invention, the target action may be an action in which the target part directly triggers the terminal, for example a finger tapping a balloon icon on the terminal screen. The target action may also be an action performed by the target part without contacting the terminal, for example a head-swinging action. Accordingly, this step can be implemented in the following two ways.
In the first mode, the terminal obtains the position information triggered by the target part from the triggered position on the terminal screen.
When the interaction starts, the target object can interact with the terminal by touching the terminal screen. The terminal collects the triggered positions on the screen in real time and uses the position of each trigger point on the screen as the position information triggered by the target part. For example, when a finger of the target object taps a balloon icon displayed on the screen, the terminal obtains the position at which the finger touched the screen.
In the second mode, the terminal acquires the position information of the target part in the image based on the image of the target object acquired in real time.
In this step, the terminal may determine the position information by itself or obtain it from the server. The process may be: the terminal sends images of the target object to the server in real time and receives the position information returned by the server, which identifies the target part in each image and returns its position information to the terminal; or the terminal itself identifies the target part in the images captured in real time and obtains the position information of the target part in the image.
The position information reflects the changes of the target part during the interaction. When the target part is the head of the target object and the target action is a head-swinging action, the position information may include position coordinates indicating the head region, the head offset angle, and/or the head direction. For example, the position information may be "offset 20° to the right", indicating that the head has swung 20° to the right.
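The offset angle and direction can be estimated from facial landmarks. A sketch assuming two eye coordinates in image space (y grows downward); the patent only states that the position information includes an angle and a direction, not how they are computed.

```python
import math

def head_swing(left_eye, right_eye):
    """Estimate the head's swing angle and direction from two facial
    landmarks (hypothetical inputs; any landmark pair spanning the
    face would do). Returns (angle_in_degrees, 'left'|'right'|'upright')."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))
    if angle > 0:
        return abs(angle), 'right'   # screen y grows downward
    if angle < 0:
        return abs(angle), 'left'
    return 0.0, 'upright'

# eyes level -> no swing
print(head_swing((100, 200), (200, 200)))  # → (0.0, 'upright')
```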
203. The terminal determines the result special effect corresponding to the target action according to the action information of the target object.
The result special effect indicates the interaction result of the target action with the interactive object. In this step, the terminal determines the interaction result between the target object and the interactive object according to the action information of the target object, and obtains the corresponding result special effect from that interaction result.
Corresponding to the two modes of step 202, the action information of the target object may be the position information of the target part in the image, or the position information of the point at which the target part triggers the terminal screen.
In the first mode, when the motion information is the position information of a target part in an image, the terminal synchronizes a target motion executed by the target part to a motion icon of the interactive special effect according to the position information of the target part in the image to obtain the position information of the motion icon; and the terminal acquires an interaction result corresponding to the target action according to the position information of the action icon, and acquires a result special effect corresponding to the interaction result according to the interaction result.
When the interactive special effect includes an action icon and an icon of the interactive object, the target object performs the target action with its head, and the target action is a head swing action, the terminal obtains the interaction result corresponding to the target action as follows: the terminal judges whether the action icon hits the icon of the interactive object according to the position information of the action icon in the interactive special effect and the position information of the icon of the interactive object; when the action icon hits the icon of the interactive object, the terminal obtains a first result, and when the action icon misses the icon of the interactive object, the terminal obtains a second result. The first result is used for indicating that the action icon hits the icon of the interactive object; the second result is used for indicating that the action icon misses the icon of the interactive object.
The process of the terminal synchronizing the target action of the target object to the action icon may be: the terminal obtains the swing angle and swing direction of the target part in real time based on the collected multi-frame images, and controls the action icon to move according to that swing angle and swing direction, so that the action icon reflects the action change of the target part in real time. The position information of the action icon includes, but is not limited to: the position coordinates of the action icon, the offset angle of the action icon, and the like.
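The synchronization and hit-judgment steps above can be sketched as follows. This is an illustrative simplification, not the patent's implementation: the action icon is moved horizontally in proportion to the head swing angle, and the hit test is a one-dimensional overlap check; the scale factor and coordinates are invented.

```python
def sync_action_icon(icon_x, head_angle, pixels_per_degree=4):
    """Move the action icon in proportion to the head swing angle
    (positive angle = swing to the right)."""
    return icon_x + head_angle * pixels_per_degree

def hits(icon_x, icon_w, target_x, target_w):
    """Overlap test between the action icon and the interactive object's icon."""
    return icon_x < target_x + target_w and target_x < icon_x + icon_w

# Head swings 20 degrees to the right; icon follows.
x = sync_action_icon(icon_x=100, head_angle=20)
print(x)                                       # 180
print(hits(x, 60, target_x=200, target_w=80))  # True -> first result (hit)
```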
In a possible implementation manner, when the icon of the interactive object further includes an interactive expression, the terminal may further determine whether the expression is matched with the interactive expression according to the expression of the target object in the image and the interactive expression in the icon of the interactive object; when the expression is matched with the interactive expression, acquiring a third result, wherein the third result is used for indicating that the expression is matched with the interactive expression; and when the expression is not matched with the interactive expression, acquiring a fourth result, wherein the fourth result is used for indicating that the expression is not matched with the interactive expression.
In this step, the terminal stores a corresponding relationship between multiple interaction results and multiple result special effects, and the terminal obtains the result special effect corresponding to an interaction result from that relationship. In addition, the terminal may also store a corresponding relationship between the multiple interaction results and multiple scores, and obtain the score corresponding to an interaction result from that relationship.
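A minimal sketch of the stored correspondences described above; the result names, effect names, and score values are invented for illustration, since the patent does not specify them.

```python
# Hypothetical correspondence tables between interaction results,
# result special effects, and scores.
EFFECT_FOR_RESULT = {
    "hit": "BOOM",
    "miss": "MISS",
    "expression_matched": "expression in place",
}
SCORE_FOR_RESULT = {"hit": 10, "miss": 0, "expression_matched": 5}

def lookup(result):
    """Return the (effect, score) pair for an interaction result."""
    return EFFECT_FOR_RESULT[result], SCORE_FOR_RESULT[result]

print(lookup("hit"))  # ('BOOM', 10)
```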
The process of recognizing the expression of the target object by the terminal can be executed by the terminal or the server. Taking the example of obtaining the expression of the target object through the server, the process may be: the server identifies the face part in the image based on the image of the target object through a preset identification algorithm. The server extracts the current features of the five sense organs in the face part, and obtains the expression corresponding to the current features of the five sense organs of the target object from the corresponding relation between the expression and the features of the five sense organs according to the current features of the five sense organs.
The features of the five sense organs include, but are not limited to: the position coordinates of the five sense organs, the relative positions between the five sense organs, and the like. For example, the coordinates of the mouth may indicate the arc by which the corners of the mouth are raised, or the relative position between the eyes and the mouth. The terminal matches the current facial features of the target object against the facial features corresponding to a plurality of preset expressions, determines the similarity between the current features and the features corresponding to each expression, and determines the expression whose similarity is not less than a preset threshold as the expression corresponding to the current facial features of the target object. The expressions may include, but are not limited to: smiling, laughing, anger, sadness, and the like.
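The matching-by-similarity step can be sketched as below. This is a hedged illustration only: the patent does not specify a similarity measure, so cosine similarity is assumed here, and the feature vectors and expression set are invented stand-ins for real facial-feature data.

```python
import math

# Hypothetical per-expression feature vectors (illustrative values).
EXPRESSION_FEATURES = {
    "smile": [0.8, 0.2, 0.6],
    "anger": [0.1, 0.9, 0.3],
    "sadness": [0.2, 0.3, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(features, threshold=0.9):
    """Pick the most similar preset expression if it clears the threshold."""
    best = max(EXPRESSION_FEATURES, key=lambda e: cosine(features, EXPRESSION_FEATURES[e]))
    return best if cosine(features, EXPRESSION_FEATURES[best]) >= threshold else None

print(classify([0.79, 0.22, 0.58]))  # smile
```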
In a second mode, when the action information is position information of a target part triggering a terminal screen, the terminal acquires an interaction result corresponding to the target action according to the position information triggered by the target part and the position information of the icon of the interaction object; and the terminal acquires a result special effect corresponding to the interaction result according to the interaction result.
In a possible implementation manner, the terminal compares the position information triggered by the target part with the position information of the icon of the interactive object: when the two match, the terminal obtains a fifth result; when they do not match, the terminal obtains a sixth result. The fifth result is used for indicating that the target action of the target part hits the icon of the interactive object; the sixth result is used for indicating that the target action of the target part misses the icon of the interactive object.
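A sketch of this second mode: test whether the screen position triggered by the target part falls within the bounds of the interactive object's icon. The rectangle representation and coordinates are illustrative assumptions.

```python
def touch_hits_icon(touch, icon_rect):
    """touch = (x, y) screen position triggered by the target part;
    icon_rect = (x, y, w, h) bounds of the interactive object's icon."""
    tx, ty = touch
    ix, iy, w, h = icon_rect
    return ix <= tx <= ix + w and iy <= ty <= iy + h

print(touch_hits_icon((250, 410), (200, 380, 120, 90)))  # True  -> fifth result
print(touch_hits_icon((50, 40), (200, 380, 120, 90)))    # False -> sixth result
```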
In the second manner, the process of the terminal obtaining the result special effect corresponding to the interaction result is the same as the process of the first manner, and is not described herein again.
It should be noted that the terminal guides the target object to perform game interaction with the interactive object through the interactive special effect. Basing the interaction on the target action of the target object greatly increases the interest of the video recording process: the target object does not need to design the action content of the recording itself, the video content is enriched, the target object's enthusiasm for recording videos in this way in the video application is improved, and the activity of the video application's users is further improved.
204. And the terminal displays the result special effect on the recording interface.
The terminal can display the result special effect at the display position on the recording interface according to the display position of the result special effect.
In a possible implementation manner, the terminal may also display the score of the interaction result on the recording interface. Furthermore, the terminal may accumulate the scores of the target object during the interaction based on the score corresponding to each interaction result, and record the currently accumulated score of the target object through a score icon displayed on the recording interface.
As shown in fig. 7, when the hammer icon hits the squirrel icon before the squirrel icon falls, the terminal may display a "BOOM" icon on the squirrel icon. Of course, when the expression of the target object matches the interactive expression of the squirrel, the terminal may also display icons such as "expression matched" or "expression in place". In addition, as shown in fig. 8, fig. 8 is the actual interface diagram of the terminal corresponding to fig. 7, which shows the actual interactive scene more realistically.
As shown in fig. 9, the terminal may also display a special effect of breaking the screen in the lower right corner of the recording interface. In addition, the terminal can also display the current score on a score icon below the recording interface. As shown in fig. 10, fig. 10 is an actual interface diagram of the terminal corresponding to fig. 9, which can show an actual interaction scene more truly.
205. And the terminal generates a video file according to the image acquired in real time, the interaction special effect displayed in the interaction process and the result special effect.
When the terminal receives a recording end instruction, the terminal adds the interactive special effect and the result special effect displayed during the interaction to the corresponding frames of the multi-frame image collected in real time, and generates the video file from the multi-frame image to which the interactive special effect and the result special effect have been added.
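The per-frame compositing step above can be sketched as follows. This is a hypothetical outline only: frames and effects are stand-in values, and a real implementation would blend pixel data and encode the composed frames (e.g. to MP4) rather than collect dictionaries.

```python
def compose_video(frames, effects_by_frame):
    """Attach each frame's effects to the corresponding captured frame."""
    composed = []
    for index, frame in enumerate(frames):
        overlays = effects_by_frame.get(index, [])
        composed.append({"frame": frame, "overlays": overlays})
    return composed

video = compose_video(
    frames=["img0", "img1", "img2"],                 # images collected in real time
    effects_by_frame={1: ["hammer_icon", "BOOM"]},   # effects shown during frame 1
)
print(video[1]["overlays"])  # ['hammer_icon', 'BOOM']
```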
The recording end instruction may be triggered by the target object, for example by the target object pressing a recording end button or issuing a specified voice instruction. The recording end instruction may also be triggered by the terminal, for example, the terminal generates the recording end instruction based on the interaction duration.
The terminal may receive the recording end instruction when: the terminal detects that the recording end button is triggered; or the terminal detects the specified voice; or, while the terminal times the interaction duration, the current timing reaches the interaction duration.
It should be noted that, after the terminal generates the video file, the terminal may play the video file on a preview interface, and the user may further clip the video file based on the multiple frames of images it contains, selecting some of those frames as the video file. The terminal may send the video file to a server, and the server shares the video file to the video application platform.
In the embodiment of the invention, when a video recording instruction is received, the terminal can display the interactive special effect in the recording interface so as to enable the target object to interact with the interactive object in the interactive special effect; in the interaction process, the terminal can determine a result special effect corresponding to the target action in real time according to the action information of the target object and display the result special effect; by adding the interaction process, the actions of the target object in the video recording process are enriched, the interestingness of video recording is improved, and the activity of the target object is improved. And the terminal generates a video file according to the image acquired in real time, the interaction special effect displayed in the interaction process and the result special effect. A plurality of wonderful moments of the target object during interaction are recorded in the video file, so that the video content of the recorded video is greatly enriched, the interestingness of the video is improved, and the information content of the video is increased.
Fig. 11 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present invention. Referring to fig. 11, the apparatus includes: the device comprises a display module 1101, a detection module 1102, a determination module 1103 and a generation module 1104.
The display module 1101 is configured to, when a video recording instruction is received, acquire an image of a target object in real time, and display an interactive special effect in a recording interface, where the interactive special effect at least includes an icon of an interactive object where the target object interacts through a target action;
the detection module 1102 is configured to detect motion information of the target object in real time during an interaction process;
a determining module 1103, configured to determine, according to the motion information of the target object, a result special effect corresponding to the target motion, where the result special effect is used to indicate an interaction result of the target motion interacting with the interaction object;
the display module 1101 is further configured to display the result special effect on the recording interface;
and the generating module 1104 is configured to generate a video file according to the image acquired in real time, the interactive special effect displayed in the interactive process, and the result special effect.
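The four modules of fig. 11 can be outlined structurally as below. The method bodies are placeholders, since the patent specifies each module's role but not its implementation; all names here are illustrative.

```python
class VideoRecordingApparatus:
    """Structural sketch of the apparatus in fig. 11 (modules 1101-1104)."""

    def display(self, recording_interface, effect):
        """Display module 1101: show the interactive or result special effect."""
        ...

    def detect(self, target_object):
        """Detection module 1102: track the action information in real time."""
        ...

    def determine(self, action_info):
        """Determining module 1103: map the target action to a result special effect."""
        ...

    def generate(self, frames, effects):
        """Generating module 1104: assemble frames and effects into a video file."""
        ...
```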
Optionally, the interactive special effect includes an icon of an interactive object and/or an action icon, and the display module 1101 includes:
the first display unit is used for displaying the icon of the interactive object at any position of the recording interface;
the second display unit is used for displaying an action icon in the recording interface according to the position information of the target part of the target object in the image, and displaying the icon of the interactive object at any position of the recording interface;
the target part is a part for executing the target action, and the action icon is used for representing a change process when the target part executes the target action.
Optionally, the target portion is a head of the target object, the target motion is a head swing motion, and the second display unit is further configured to display the motion icon above the head according to position information of the head of the target object in the image, where the motion icon is used to indicate an angle and a direction of the head in a swing process.
Optionally, the interactive special effect further includes an interactive expression on an icon of the interactive object.
Optionally, the action information includes position information of a target portion executing the target action, and the detecting module 1102 includes:
the first acquisition unit is used for acquiring the position information of the target part in the image based on the image of the target object acquired in real time;
and the second acquisition unit is used for acquiring the position information triggered by the target part based on the triggered triggering position on the terminal screen.
Optionally, the first obtaining unit is configured to send the image of the target object to a server in real time and receive the position information returned by the server, where the server identifies the target part in the image and returns the position information of the target part in the image to the terminal; or, to identify the target part in the image of the target object acquired in real time and obtain the position information of the target part in the image.
Optionally, the motion information of the target object includes position information of a target portion of the target object in the image, and the determining module 1103 includes:
the synchronization unit is used for synchronizing the target action executed by the target part to the action icon of the interactive special effect according to the position information of the target part in the image to obtain the position information of the action icon;
the acquisition unit is used for acquiring an interaction result corresponding to the target action according to the position information of the action icon;
the obtaining unit is further configured to obtain a result special effect corresponding to the interaction result according to the interaction result.
Optionally, the target object executes the target action through the head, the target action is a head swing action, the interactive special effect includes an action icon and an icon of the interactive object, and the obtaining unit is further configured to determine whether the action icon hits the icon of the interactive object according to the position information of the action icon and the position information of the icon of the interactive object; when the action icon hits the icon of the interactive object, obtain a first result, where the first result is used for indicating that the action icon hits the icon of the interactive object; and when the action icon misses the icon of the interactive object, obtain a second result, where the second result is used for indicating that the action icon misses the icon of the interactive object.
Optionally, the apparatus further comprises:
the judging module is used for judging whether the expression is matched with the interactive expression or not according to the expression of the target object in the image and the interactive expression in the icon of the interactive object;
the acquisition module is used for acquiring a third result when the expression is matched with the interactive expression, and the third result is used for indicating that the expression is matched with the interactive expression;
the obtaining module is further configured to obtain a fourth result when the expression is not matched with the interactive expression, and the fourth result is used for indicating that the expression is not matched with the interactive expression.
Optionally, the action information of the target object includes position information of a target portion of the target object triggering a terminal screen, and the determining module 1103 is configured to obtain an interaction result corresponding to the target action according to the position information triggered by the target portion and the position information of an icon of the interaction object; and obtaining a result special effect corresponding to the interaction result according to the interaction result.
Optionally, the display module 1101 is further configured to display a score icon in the recording interface, where the score icon is used to indicate a score corresponding to an interaction result of the target action interacting with the interaction object.
In the embodiment of the invention, when a video recording instruction is received, the terminal can display the interactive special effect in the recording interface so as to enable the target object to interact with the interactive object in the interactive special effect; in the interaction process, the terminal can determine a result special effect corresponding to the target action in real time according to the action information of the target object and display the result special effect; by adding the interaction process, the actions of the target object in the video recording process are enriched, the interestingness of video recording is improved, and the activity of the target object is improved. And the terminal generates a video file according to the image acquired in real time, the interaction special effect displayed in the interaction process and the result special effect. A plurality of wonderful moments of the target object during interaction are recorded in the video file, so that the video content of the recorded video is greatly enriched, the interestingness of the video is improved, and the information content of the video is increased.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the video recording apparatus provided in the foregoing embodiment, when recording a video, only the division of the functional modules is described as an example, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the video recording apparatus and the video recording method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
Fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal 1200 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1200 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
In general, terminal 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1202 is used to store at least one instruction for execution by processor 1201 to implement a video recording method as provided by method embodiments herein.
In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, touch display 1205, camera 1206, audio circuitry 1207, pointing component 1208, and power source 1209.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1204 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over the surface of the display screen 1205. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1205 may be one, providing the front panel of the terminal 1200; in other embodiments, the display 1205 can be at least two, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 1200. Even further, the display screen 1205 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display panel 1205 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of terminal 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The positioning component 1208 is configured to determine the current geographic location of the terminal 1200 to implement navigation or LBS (Location Based Service). The positioning component 1208 may be a positioning component based on the United States' GPS (Global Positioning System), the Chinese BeiDou system, the Russian GLONASS system, or the European Union's Galileo system.
The power supply 1209 is used to provide power to various components within the terminal 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable. When the power source 1209 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 can detect magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1201 may control the touch display 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211. The processor 1201 can implement the following functions according to the data collected by the gyro sensor 1212: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1213 may be disposed on a side bezel of terminal 1200 and/or an underlying layer of touch display 1205. When the pressure sensor 1213 is disposed on the side frame of the terminal 1200, the user's holding signal of the terminal 1200 can be detected, and the processor 1201 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at a lower layer of the touch display screen 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1205. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1214 is used for collecting a fingerprint of the user, and the processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1214 may be provided on the front, back, or side of the terminal 1200. When a physical button or vendor Logo is provided on the terminal 1200, the fingerprint sensor 1214 may be integrated with the physical button or vendor Logo.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the touch display 1205 according to the ambient light intensity collected by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the touch display 1205 is increased; when the ambient light intensity is low, the display brightness of the touch display 1205 is decreased. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 based on the ambient light intensity collected by the optical sensor 1215.
The proximity sensor 1216, also known as a distance sensor, is typically disposed on the front panel of the terminal 1200. The proximity sensor 1216 is configured to collect the distance between the user and the front surface of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually decreases, the processor 1201 controls the touch display screen 1205 to switch from a bright screen state to a dark screen state; when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually increases, the processor 1201 controls the touch display screen 1205 to switch from the dark screen state to the bright screen state.
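The proximity-driven screen switching can be sketched as a small state decision over two successive distance readings. The function name, the threshold value, and the string states are illustrative assumptions only; the patent does not specify them.

```python
def next_screen_state(current_state, previous_distance_cm,
                      current_distance_cm, threshold_cm=5.0):
    """Decide the screen state from two successive proximity readings.

    Moving the device toward the face (distance decreasing below the
    threshold) darkens the screen; moving it away brightens it again.
    """
    if (current_distance_cm < previous_distance_cm
            and current_distance_cm <= threshold_cm):
        return "dark"    # device approaching the user's face
    if (current_distance_cm > previous_distance_cm
            and current_distance_cm > threshold_cm):
        return "bright"  # device moving away from the face
    return current_state # no decisive change; keep the current state
```

The hysteresis-like threshold prevents the screen from flickering between states when the reading hovers near the switching point.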
Those skilled in the art will appreciate that the configuration shown in FIG. 12 does not limit the terminal 1200, which may include more or fewer components than those shown, combine some components, or adopt a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium, for example, a memory including instructions executable by a processor in a terminal to perform the video recording method in the above embodiments, is also provided. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements, and the like made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (18)

1. A method for video recording, the method comprising:
when a video recording instruction is received, acquiring an image of a target object in real time, and displaying the target object, a scene where the target object is located, and an interactive special effect in a recording interface, wherein the interactive special effect comprises an icon of an interactive object with which the target object interacts through a target action and an action icon rendered, according to the target object, in the scene where the target object is located, the action icon is used for representing a change process when a target part executes the target action, and the target part is a part for executing the target action;
detecting action information of the target object in real time during an interaction process, wherein the action information of the target object comprises position information of a target part of the target object in the image;
determining an interaction result between the action icon of the target object and the interactive object according to the action information of the target object, and acquiring, according to the interaction result, a result special effect corresponding to the interaction result, wherein the result special effect is used for indicating the interaction result of the target action interacting with the interactive object;
displaying the result special effect on the recording interface;
generating a video file according to the image acquired in real time, the icon of the interactive object displayed in the interactive process, the action icon of the target object and the result special effect;
wherein the determining of the interaction result between the action icon of the target object and the interactive object comprises:
synchronizing, according to the position information of the target part in the image, the target action executed by the target part to the action icon of the interactive special effect to obtain position information of the action icon; judging, according to the position information of the action icon and the position information of the icon of the interactive object, whether the action icon hits the icon of the interactive object; when the action icon hits the icon of the interactive object, obtaining a first result, wherein the first result is used for indicating that the action icon hits the icon of the interactive object; and when the action icon misses the icon of the interactive object, obtaining a second result, wherein the second result is used for indicating that the action icon misses the icon of the interactive object.
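The hit judgment in claim 1 — comparing the position information of the action icon against that of the interactive object's icon — can be sketched as an axis-aligned bounding-box overlap test. The `IconBox` type, the function names, and the string results are hypothetical names introduced for illustration; the patent does not prescribe a particular geometry test.

```python
from dataclasses import dataclass

@dataclass
class IconBox:
    """Position information of an on-screen icon (top-left corner + size)."""
    x: float
    y: float
    w: float
    h: float

def icons_overlap(a: IconBox, b: IconBox) -> bool:
    """Axis-aligned overlap test between two icon bounding boxes."""
    return not (a.x + a.w <= b.x or b.x + b.w <= a.x or
                a.y + a.h <= b.y or b.y + b.h <= a.y)

def interaction_result(action_icon: IconBox, target_icon: IconBox) -> str:
    """Return the first result ("hit") on overlap, else the second ("miss")."""
    return "hit" if icons_overlap(action_icon, target_icon) else "miss"
```

Each frame, the action icon's box is updated from the synchronized target-part position and tested against the interactive object's box to pick the first or second result.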
2. The method of claim 1, wherein displaying the target object, the scene in which the target object is located, and the interactive special effect in the recording interface comprises:
displaying the image of the target object in the recording interface, wherein the image of the target object comprises the target object and the scene where the target object is located; displaying an action icon in the recording interface according to the position information of the target part of the target object in the image; dividing the recording interface into a nine-square grid; and randomly displaying the icon of the interactive object in any cell of the nine-square grid.
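The nine-square-grid placement in claim 2 amounts to splitting the recording interface into a 3x3 grid and choosing one cell at random for the interactive object's icon. The function name and return convention below are assumptions made for this sketch.

```python
import random

def random_grid_cell(width, height, rng=random):
    """Divide a recording interface into a 3x3 (nine-square) grid and
    pick one cell uniformly at random.

    Returns (x, y, cell_width, cell_height) of the chosen cell, with
    (x, y) the cell's top-left corner in interface coordinates.
    """
    cell_w, cell_h = width / 3, height / 3
    row, col = rng.randrange(3), rng.randrange(3)
    return (col * cell_w, row * cell_h, cell_w, cell_h)
```

The interactive object's icon would then be rendered centered inside the returned cell, so successive icons appear at unpredictable grid positions.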
3. The method of claim 2, wherein the displaying an action icon in the recording interface according to the position information of the target part of the target object in the image comprises:
displaying the action icon above the head according to position information of the head of the target object in the image, wherein the action icon is used for representing the angle and the direction of the head during a swinging process.
4. The method of claim 3, wherein the interactive special effects further comprise interactive emoticons on an icon of the interactive object.
5. The method of claim 1, wherein detecting the motion information of the target object in real time during the interaction comprises:
acquiring the position information of the target part in the image based on the image of the target object acquired in real time; or acquiring the position information triggered by the target part based on a triggered trigger position on a terminal screen.
6. The method of claim 5, wherein the obtaining position information of a target portion in the image based on the image of the target object acquired in real time comprises:
sending the image of the target object to a server in real time, and receiving position information returned by the server, wherein the image is used by the server to return the position information of the target part in the image to the terminal; or,
identifying the target part in the image based on the image of the target object acquired in real time, and acquiring the position information of the target part in the image.
7. The method of claim 1, further comprising:
judging whether the expression matches the interactive expression according to the expression of the target object in the image and the interactive expression in the icon of the interactive object;
when the expression matches the interactive expression, obtaining a third result, wherein the third result is used for indicating that the expression matches the interactive expression;
and when the expression does not match the interactive expression, obtaining a fourth result, wherein the fourth result is used for indicating that the expression does not match the interactive expression.
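The expression-matching judgment of claim 7 can be sketched as comparing a classifier's confidence for the interactive expression's label against a threshold. The score-dictionary interface, the label strings, and the threshold are all assumptions for illustration; the patent does not specify how expression recognition is performed.

```python
def expression_result(expression_scores, interactive_expression,
                      threshold=0.5):
    """Compare detected expression scores against the interactive expression.

    expression_scores: mapping of expression label -> confidence,
    e.g. {"smile": 0.8}, as might come from a face-expression classifier.
    Returns "match" (the third result) when the confidence for the
    interactive expression's label meets the threshold, else "no_match"
    (the fourth result).
    """
    score = expression_scores.get(interactive_expression, 0.0)
    return "match" if score >= threshold else "no_match"
```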
8. The method of claim 1, further comprising:
displaying a score icon in the recording interface, wherein the score icon is used for representing a score corresponding to an interaction result of the target action interacting with the interaction object.
9. A video recording apparatus, characterized in that the apparatus comprises:
the display module is used for acquiring images of a target object in real time when a video recording instruction is received, and displaying the target object, a scene where the target object is located, and an interactive special effect in a recording interface, wherein the interactive special effect comprises an icon of an interactive object with which the target object interacts through a target action and an action icon rendered, according to the target object, in the scene where the target object is located, the action icon is used for representing a change process when a target part executes the target action, and the target part is a part for executing the target action;
the detection module is used for detecting the action information of the target object in real time in the interaction process, wherein the action information of the target object comprises the position information of a target part of the target object in the image;
the determining module is used for determining an interaction result between the action icon of the target object and the interactive object according to the action information of the target object, and acquiring, according to the interaction result, a result special effect corresponding to the interaction result, wherein the result special effect is used for indicating the interaction result of the target action interacting with the interactive object;
the display module is further used for displaying the result special effect on the recording interface;
the generating module is used for generating a video file according to the image acquired in real time, the icon of the interactive object displayed in the interactive process, the action icon of the target object and the result special effect;
the determining module is further configured to synchronize, according to the position information of the target part in the image, the target action executed by the target part to the action icon of the interactive special effect to obtain position information of the action icon; judge, according to the position information of the action icon and the position information of the icon of the interactive object, whether the action icon hits the icon of the interactive object; when the action icon hits the icon of the interactive object, obtain a first result, wherein the first result is used for indicating that the action icon hits the icon of the interactive object; and when the action icon misses the icon of the interactive object, obtain a second result, wherein the second result is used for indicating that the action icon misses the icon of the interactive object.
10. The apparatus of claim 9, wherein the display module comprises:
the second display unit is used for displaying the image of the target object in the recording interface, wherein the image of the target object comprises the target object and the scene where the target object is located; displaying an action icon in the recording interface according to the position information of the target part of the target object in the image; dividing the recording interface into a nine-square grid; and randomly displaying the icon of the interactive object in any cell of the nine-square grid.
11. The apparatus according to claim 10, wherein the second display unit is further configured to display the action icon above the head according to position information of the head of the target object in the image, and the action icon is configured to indicate an angle and a direction of the head during the swing.
12. The apparatus of claim 11, wherein the interactive special effect further comprises an interactive expression on an icon of the interactive object.
13. The apparatus of claim 9, wherein the detection module comprises:
the first acquisition unit is used for acquiring the position information of a target part in the image based on the image of the target object acquired in real time;
and the second acquisition unit is used for acquiring the position information triggered by the target part based on the triggered triggering position on the terminal screen.
14. The apparatus according to claim 13, wherein the first obtaining unit is configured to send an image of the target object to a server in real time, and receive location information returned by the server, where the image is used by the server to return location information of the target portion in the image to a terminal based on the image; or, identifying a target part in the image based on the image of the target object acquired in real time, and acquiring the position information of the target part in the image.
15. The apparatus of claim 9, further comprising:
the judging module is used for judging whether the expression matches the interactive expression according to the expression of the target object in the image and the interactive expression in the icon of the interactive object;
the obtaining module is used for obtaining a third result when the expression matches the interactive expression, wherein the third result is used for indicating that the expression matches the interactive expression;
the obtaining module is further used for obtaining a fourth result when the expression does not match the interactive expression, wherein the fourth result is used for indicating that the expression does not match the interactive expression.
16. The device of claim 9, wherein the display module is further configured to display a score icon in the recording interface, where the score icon is used to represent a score corresponding to an interaction result of the target action interacting with the interactive object.
17. A terminal, characterized in that the terminal comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the operations performed by the video recording method according to any one of claims 1 to 8.
18. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform operations performed by a video recording method according to any one of claims 1 to 8.
CN201810688229.7A 2018-06-28 2018-06-28 Video recording method, device, terminal and storage medium Active CN108833818B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810688229.7A CN108833818B (en) 2018-06-28 2018-06-28 Video recording method, device, terminal and storage medium
CN202110296689.7A CN112911182B (en) 2018-06-28 2018-06-28 Game interaction method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810688229.7A CN108833818B (en) 2018-06-28 2018-06-28 Video recording method, device, terminal and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110296689.7A Division CN112911182B (en) 2018-06-28 2018-06-28 Game interaction method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN108833818A CN108833818A (en) 2018-11-16
CN108833818B true CN108833818B (en) 2021-03-26

Family

ID=64133599

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110296689.7A Active CN112911182B (en) 2018-06-28 2018-06-28 Game interaction method, device, terminal and storage medium
CN201810688229.7A Active CN108833818B (en) 2018-06-28 2018-06-28 Video recording method, device, terminal and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110296689.7A Active CN112911182B (en) 2018-06-28 2018-06-28 Game interaction method, device, terminal and storage medium

Country Status (1)

Country Link
CN (2) CN112911182B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109348277B (en) * 2018-11-29 2020-02-07 北京字节跳动网络技术有限公司 Motion pixel video special effect adding method and device, terminal equipment and storage medium
CN111258415B (en) * 2018-11-30 2021-05-07 北京字节跳动网络技术有限公司 Video-based limb movement detection method, device, terminal and medium
CN109803165A (en) * 2019-02-01 2019-05-24 北京达佳互联信息技术有限公司 Method, apparatus, terminal and the storage medium of video processing
CN111659114B (en) * 2019-03-08 2023-09-15 阿里巴巴集团控股有限公司 Interactive game generation method and device, interactive game processing method and device and electronic equipment
CN111695376A (en) * 2019-03-13 2020-09-22 阿里巴巴集团控股有限公司 Video processing method, video processing device and electronic equipment
CN109889893A (en) * 2019-04-16 2019-06-14 北京字节跳动网络技术有限公司 Method for processing video frequency, device and equipment
CN110110142A (en) * 2019-04-19 2019-08-09 北京大米科技有限公司 Method for processing video frequency, device, electronic equipment and medium
CN112396676B (en) * 2019-08-16 2024-04-02 北京字节跳动网络技术有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium
CN112887631B (en) * 2019-11-29 2022-08-12 北京字节跳动网络技术有限公司 Method and device for displaying object in video, electronic equipment and computer-readable storage medium
CN111586423B (en) * 2020-04-24 2021-09-10 腾讯科技(深圳)有限公司 Live broadcast room interaction method and device, storage medium and electronic device
CN111857923B (en) * 2020-07-17 2022-10-28 北京字节跳动网络技术有限公司 Special effect display method and device, electronic equipment and computer readable medium
CN111914523B (en) * 2020-08-19 2021-12-14 腾讯科技(深圳)有限公司 Multimedia processing method and device based on artificial intelligence and electronic equipment
CN112148188A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 Interaction method and device in augmented reality scene, electronic equipment and storage medium
CN112243065B (en) * 2020-10-19 2022-02-01 维沃移动通信有限公司 Video recording method and device
CN112560605B (en) * 2020-12-02 2023-04-18 北京字节跳动网络技术有限公司 Interaction method, device, terminal, server and storage medium
CN112702625B (en) * 2020-12-23 2024-01-02 Oppo广东移动通信有限公司 Video processing method, device, electronic equipment and storage medium
CN113014949B (en) * 2021-03-10 2022-05-06 读书郎教育科技有限公司 Student privacy protection system and method for smart classroom course playback
CN112988027B (en) * 2021-03-15 2023-06-27 北京字跳网络技术有限公司 Object control method and device
CN114567805A (en) * 2022-02-24 2022-05-31 北京字跳网络技术有限公司 Method and device for determining special effect video, electronic equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN106730815A (en) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 The body-sensing interactive approach and system of a kind of easy realization

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
EP1689172B1 (en) * 2001-06-05 2016-03-09 Microsoft Technology Licensing, LLC Interactive video display system
US8549442B2 (en) * 2005-12-12 2013-10-01 Sony Computer Entertainment Inc. Voice and video control of interactive electronically simulated environment
US20070162854A1 (en) * 2006-01-12 2007-07-12 Dan Kikinis System and Method for Interactive Creation of and Collaboration on Video Stories
US8150155B2 (en) * 2006-02-07 2012-04-03 Qualcomm Incorporated Multi-mode region-of-interest video object segmentation
CN103413468A (en) * 2013-08-20 2013-11-27 苏州跨界软件科技有限公司 Parent-child educational method based on a virtual character
CN105617658A (en) * 2015-12-25 2016-06-01 新浪网技术(中国)有限公司 Multiplayer moving shooting game system based on real indoor environment
CN106231415A (en) * 2016-08-18 2016-12-14 北京奇虎科技有限公司 A kind of interactive method and device adding face's specially good effect in net cast
CN107613310B (en) * 2017-09-08 2020-08-04 广州华多网络科技有限公司 Live broadcast method and device and electronic equipment
CN107944397A (en) * 2017-11-27 2018-04-20 腾讯音乐娱乐科技(深圳)有限公司 Video recording method, device and computer-readable recording medium

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN106730815A (en) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 The body-sensing interactive approach and system of a kind of easy realization

Also Published As

Publication number Publication date
CN112911182B (en) 2022-08-23
CN112911182A (en) 2021-06-04
CN108833818A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108833818B (en) Video recording method, device, terminal and storage medium
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN108401124B (en) Video recording method and device
CN110572711B (en) Video cover generation method and device, computer equipment and storage medium
CN110865754B (en) Information display method and device and terminal
CN110300274B (en) Video file recording method, device and storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN108848394A (en) Net cast method, apparatus, terminal and storage medium
CN112044065B (en) Virtual resource display method, device, equipment and storage medium
CN112929654B (en) Method, device and equipment for detecting sound and picture synchronization and storage medium
CN110956580A (en) Image face changing method and device, computer equipment and storage medium
CN112581358A (en) Training method of image processing model, image processing method and device
CN111437600A (en) Plot showing method, plot showing device, plot showing equipment and storage medium
CN111028566A (en) Live broadcast teaching method, device, terminal and storage medium
CN111131867B (en) Song singing method, device, terminal and storage medium
CN110891181B (en) Live broadcast picture display method and device, storage medium and terminal
CN109819308B (en) Virtual resource acquisition method, device, terminal, server and storage medium
CN110152309B (en) Voice communication method, device, electronic equipment and storage medium
CN110300275B (en) Video recording and playing method, device, terminal and storage medium
CN112511889B (en) Video playing method, device, terminal and storage medium
CN114388001A (en) Multimedia file playing method, device, equipment and storage medium
CN111898488A (en) Video image identification method and device, terminal and storage medium
CN111986700A (en) Method, device, equipment and storage medium for triggering non-contact operation
CN111367492A (en) Webpage display method and device and storage medium
CN110942426A (en) Image processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant