CN111611941B - Special effect processing method and related equipment - Google Patents

Special effect processing method and related equipment

Info

Publication number
CN111611941B
CN111611941B (application CN202010443569.0A)
Authority
CN
China
Prior art keywords
action
special effect
accuracy
initialized
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010443569.0A
Other languages
Chinese (zh)
Other versions
CN111611941A (en)
Inventor
吴雪蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010443569.0A priority Critical patent/CN111611941B/en
Publication of CN111611941A publication Critical patent/CN111611941A/en
Application granted granted Critical
Publication of CN111611941B publication Critical patent/CN111611941B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Social Psychology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of artificial intelligence and provides a special effect processing method and related equipment. The method includes: performing image detection on an original video to obtain at least two target video frames that include a moving target; performing action recognition on the moving target based on the at least two target video frames; when an action recognition result is obtained, determining the action sequence of an initialized special effect; matching each action in the action sequence of the initialized special effect against each action in the action recognition result, determining the action accuracy between each action in the action sequence of the initialized special effect and the corresponding matched action in the action recognition result, and determining the special effect accuracy of the initialized special effect from the calculated action accuracies; and acquiring and displaying the target special effect associated with the initialized special effect based on the special effect accuracy. Implementing the application improves the accuracy of special effect matching and enhances the interactivity of the special effect matching process.

Description

Special effect processing method and related equipment
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a special effect processing method and related equipment.
Background
At present, when a device is used to shoot a video and special effects need to be added, the video is either edited with a video editor after shooting is finished, or processed during shooting with additional functions built into the shooting application. Such special-effect functions usually work by recognizing the actions exhibited by a target object in the captured video images and matching special effects to those actions.
However, in the prior art only a single action is generally recognized and a special effect is then matched to that single action; the accuracy of this matching method is low and the interactivity is weak.
Disclosure of Invention
The application provides a special effect processing method and related equipment that can solve at least one of the above technical problems. The technical scheme is as follows:
in a first aspect, a special effect processing method is provided, including: performing image detection on an original video to obtain at least two target video frames comprising a moving target; performing motion recognition on the moving target based on the at least two target video frames; when an action recognition result is obtained, determining an action sequence for initializing the special effect; performing matching calculation on each action in the action sequence of the initialization special effect and each action in the action recognition result, determining the action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result, and determining the special effect accuracy of the initialization special effect according to the action accuracy obtained by calculation; and acquiring a target special effect associated with the initialized special effect based on the special effect accuracy and displaying the target special effect.
With reference to the first aspect, in a first implementation manner of the first aspect, the performing, based on the at least two target video frames, motion recognition on the moving target includes any one of the following: the motion change trend of the moving targets in the front and rear two adjacent target video frames is recognized in sequence, so that the motion recognition of the moving targets is completed; selecting at least two video frames to be identified from the at least two target video frames at a preset frequency, and identifying the motion change trend of a moving target in the at least two video frames to be identified so as to finish motion identification of the moving target.
With reference to the first aspect, in a second implementation manner of the first aspect, when the action recognition result is obtained, determining an action sequence of initializing the special effect includes any one of the following: when an action recognition result is obtained, determining at least one action sequence for initializing the special effects based on the first action in the action recognition result according to time sequencing; when an action recognition result is obtained, taking a preset action sequence of the special effect as an action sequence of the initialized special effect; the preset special effects include at least one.
With reference to the first aspect, in a third implementation manner of the first aspect, the performing a matching calculation on each action in the action sequence of the initialized special effect and each action in the action recognition result, determining an action accuracy between each action in the action sequence of the initialized special effect and a corresponding action matched in the action recognition result, and determining the special effect accuracy of the initialized special effect according to the calculated action accuracy includes: sequentially matching each action in the action sequence of the initialization special effect with each action in the action recognition result according to time sequence; respectively calculating the action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result; when each action in the action sequence of the initialized special effect is matched with the corresponding action in the action recognition result, determining the special effect accuracy of the initialized special effect according to the calculated action accuracy.
With reference to the first aspect, in a fourth implementation manner of the first aspect, the action accuracy includes action point position accuracy, action trigger time accuracy, and action duration accuracy; the determining the special effect accuracy of the initialization special effect according to the calculated action accuracy comprises the following steps: calculating the sum of the accuracy of action points, the sum of the accuracy of action triggering time and the sum of the accuracy of action duration of all actions in the action sequence of the initialization special effect; and calculating the accuracy sum of the action points, the accuracy sum of the action triggering time and the accuracy sum of the action duration based on preset weights to obtain the special effect accuracy of the initialization special effect.
With reference to the second implementation manner of the first aspect, in a fifth implementation manner of the first aspect, when one initialized special effect is included, the acquiring and displaying, based on the special effect accuracy, of a target special effect associated with the initialized special effect includes: adjusting the display effect of the initialized special effect based on the special effect accuracy, and displaying the initialized special effect with the adjusted display effect as the target special effect.
With reference to the second implementation manner of the first aspect, in a sixth implementation manner of the first aspect, when at least two initialized special effects are included, the acquiring and displaying, based on the special effect accuracy, of a target special effect associated with the initialized special effects includes: determining the initialized special effect with the highest special effect accuracy as the target special effect, and displaying the target special effect.
In a second aspect, there is provided a special effect processing apparatus including: the detection module is used for carrying out image detection on the original video and obtaining at least two target video frames comprising a moving target; the identification module is used for identifying the motion target based on the at least two target video frames; the determining module is used for determining an action sequence of the initialization special effect when the action recognition result is obtained; the calculation module is used for carrying out matching calculation on each action in the action sequence of the initialization special effect and each action in the action recognition result, determining the action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result, and determining the special effect accuracy of the initialization special effect according to the action accuracy obtained by calculation; and the display module is used for acquiring and displaying the target special effect associated with the initialized special effect based on the special effect accuracy.
With reference to the second aspect, in a first implementation manner of the second aspect, the identification module includes any one of the following units: the first recognition unit is used for recognizing motion change trends of moving targets in two adjacent front and rear target video frames in sequence to finish motion recognition of the moving targets; the second recognition unit is used for selecting at least two video frames to be recognized from the at least two target video frames at a preset frequency, and recognizing the motion change trend of the moving target in the at least two video frames to be recognized so as to complete motion recognition of the moving target.
With reference to the second aspect, in a second implementation manner of the second aspect, the determining module includes any one of the following units: the first determining unit is used for determining at least one action sequence for initializing the special effects based on the first action in the action recognition result according to time sequence when the action recognition result is obtained; the second determining unit is used for taking the action sequence of the preset special effect as the action sequence of the initialized special effect when the action recognition result is obtained; the preset special effects include at least one.
With reference to the second aspect, in a third implementation manner of the second aspect, the computing module includes: the matching unit is used for sequentially matching each action in the action sequence of the initialization special effect with each action in the action recognition result according to time sequence; the action calculation unit is used for calculating action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result respectively; and the special effect calculation unit is used for determining the special effect accuracy of the initialized special effect according to the calculated action accuracy when each action in the action sequence of the initialized special effect is matched with the corresponding action in the action recognition result.
With reference to the second aspect, in a fourth implementation manner of the second aspect, the action accuracy includes action point position accuracy, action trigger time accuracy, and action duration accuracy; the computing module includes: the first calculating unit is used for calculating the action point position accuracy sum, the action triggering time accuracy sum and the action duration accuracy sum of all actions in the action sequence of the initialization special effect; and the second calculation unit is used for calculating the action point position accuracy sum, the action trigger time accuracy sum and the action duration time accuracy sum based on preset weights to obtain the special effect accuracy of the initialization special effect.
With reference to the second implementation manner of the second aspect, in a fifth implementation manner of the second aspect, when an initialized special effect is included, the display module includes a first display unit, configured to adjust a display effect of the initialized special effect based on the special effect accuracy, and display the initialized special effect after the display effect is adjusted as a target special effect.
With reference to the second implementation manner of the second aspect, in a sixth implementation manner of the second aspect, when at least two initialized special effects are included, the display module includes a second display unit, configured to determine, as a target special effect, the initialized special effect with the highest special effect accuracy, and display the target special effect.
In a third aspect, an electronic device is provided, comprising: one or more processors; a memory; one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to: the special effect processing method described in any implementation manner of the first aspect is executed.
In a fourth aspect, there is provided a computer readable storage medium storing at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the special effect processing method of any one of the embodiments of the first aspect and the first aspect.
The technical scheme provided by the application has the beneficial effects that:
the method comprises the steps of obtaining at least two target video frames comprising a moving target through image detection of an original video, and identifying the moving target based on the at least two target video frames; when an action recognition result is obtained, determining an action sequence for initializing the special effects, namely triggering special effect recognition corresponding to a series of actions; performing matching calculation on each action in the action sequence of the initialized special effect and each action in the action recognition result, determining action accuracy between each action in the action sequence of the initialized special effect and the corresponding action matched in the action recognition result, and determining the special effect accuracy of the initialized special effect according to the action accuracy obtained by calculation, namely determining the special effect accuracy of the initialized special effect by the action accuracy of at least one action; and acquiring and displaying the target special effect associated with the initialized special effect based on the special effect accuracy. According to the method, the action sequence corresponding to the initialized special effect and comprising at least one action is set, the action recognition result comprising at least one action is obtained by recognizing the action of the moving target in the original video, each action in the action sequence of the initialized special effect is matched with each action in the action recognition result, and finally the special effect accuracy is determined according to the action accuracy of each action matched with the corresponding action in the action recognition result in the action sequence of the initialized special effect, so that the target special effect is obtained according to the special effect accuracy for displaying, the accuracy of special effect matching is improved, and the interactivity of the special effect matching process is enhanced.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flow chart of a special effect processing method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a special effect processing method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a special effect processing method according to an embodiment of the present application;
fig. 4 is a schematic flow chart of a special effect processing method according to an embodiment of the present application;
fig. 5 is a schematic flow chart of a special effect processing method according to an embodiment of the present application;
fig. 6 is a schematic flow chart of a special effect processing method according to an embodiment of the present application;
fig. 7 is a schematic flow chart of a special effect processing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of displaying special effects on a display interface according to an embodiment of the present application;
fig. 9 is an application flowchart of a special effect processing method provided in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a special effect processing device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Artificial Intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results. Computer Vision (CV) is the branch of artificial intelligence that studies how to make machines "see": replacing human eyes with cameras and computers to recognize, track and measure targets, and further processing the results into images better suited for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision researches related theory and technology in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition techniques such as face recognition, gesture recognition and fingerprint recognition.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
The special effect processing method of the embodiments of the application may be executed by the electronic device of the embodiments of the application, and the electronic device may specifically be a mobile terminal. As shown in fig. 1, an embodiment of the present application includes the following steps:
s101, performing image detection on an original video to obtain at least two target video frames comprising a moving target;
s102, performing action recognition on the moving target based on the at least two target video frames;
s103, when an action recognition result is obtained, determining an action sequence for initializing the special effect;
s104, performing matching calculation on each action in the action sequence of the initialized special effect and each action in the action recognition result, determining the action accuracy between each action in the action sequence of the initialized special effect and the corresponding action matched in the action recognition result, and determining the special effect accuracy of the initialized special effect according to the action accuracy obtained by calculation;
S105, acquiring a target special effect associated with the initialized special effect based on the special effect accuracy and displaying the target special effect.
In step S101, image detection is performed on an original video to obtain at least two target video frames that include a moving target. Specifically, the original video is content shot by an electronic device. When image detection is performed on the original video, a model that combines an optimized lightweight shuffle-block network structure with a multi-scale feature fusion method may be used to detect the video frames of the original video. The moving target is the target object of the image detection and may be a human hand, a human face, a limb, an object with biological characteristics, and the like. Because the moving target does not always appear in the shot images during video capture, and in order to reduce the computation needed for action recognition, image detection is performed on the original video and at least two target video frames that include the moving target are acquired as the basis for action recognition. In an embodiment, assume the moving target is a human hand. When the original video is acquired, gesture detection is performed on each video frame in turn; if a hand is detected, gesture tracking is triggered (gesture tracking means that, for one end of the video stream, an initial gesture region of a certain video frame is obtained through the detection algorithm, or the gesture region of a certain video frame is predicted, so that an accurate gesture region of the next video frame is obtained). When gesture tracking fails, image detection is performed again on the next video frame. In this way, the embodiment of the application obtains at least two target video frames that include the moving target through gesture tracking.
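A minimal sketch of the frame-collection loop described above, assuming hypothetical detect_hand and track_hand helpers (they stand in for the shuffle-block detection model and the gesture-tracking step; neither name comes from the patent):

def collect_target_frames(frames, detect_hand, track_hand):
    """Collect video frames that contain the moving target (a hand here).

    detect_hand(frame) -> region or None               # full-image detection
    track_hand(frame, prev_region) -> region or None   # cheaper tracking step
    """
    target_frames = []
    region = None
    for frame in frames:
        if region is None:
            # No tracked region yet (or tracking failed): run full detection.
            region = detect_hand(frame)
        else:
            # A hand was found earlier: predict its region in this frame.
            region = track_hand(frame, region)
        if region is not None:
            target_frames.append((frame, region))
        # If region is None here, the next iteration falls back to detection.
    return target_frames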
In step S102, action recognition is performed on the moving target based on the at least two target video frames. Specifically, an action recognition model designed with an optimized lightweight shuffle-block network structure is used to recognize the actions of the moving target; when the moving target is a human hand, the model is designed as a gesture recognition model to recognize hand gestures. A key-point localization method may also be used for action recognition: following the face-detector blazeface network structure, an action recognition model is designed for the key-point localization task; if the moving target is a human hand, a gesture-detection blazehand model is designed. When the moving target is a human hand, because the finger joints have a large freedom of movement, a regression loss of the local joint points (for example, 21 joint points) and a structure loss of the global joint points are designed to constrain the positional relationship between the local joint points and the joint points of the whole hand, so as to enhance the stability of the model's gesture key-point regression. In the embodiments of the application, action recognition on the video frames that include the moving target specifically means recognizing the category of the moving target in the target video frames; for example, when the moving target is a human hand, the gesture in a certain target video frame is recognized as a fist, and by recognizing at least two target video frames, at least one action of the moving target, such as making a fist or throwing a punch, can be determined.
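One simple way to turn the per-frame recognition into a sequence of actions is to classify each target frame (e.g., "fist", "palm") and start a new action whenever the class changes between frames; the class names and the classify_gesture helper below are illustrative assumptions, not taken from the patent:

def recognize_actions(target_frames, classify_gesture):
    """Turn per-frame gesture labels into a coarse action sequence.

    classify_gesture(frame, region) -> str, e.g. "fist" or "palm".
    Returns a list of (label, start_index, end_index) runs; a change of
    label between consecutive frames marks the start of a new action.
    """
    actions = []
    current_label, start = None, 0
    for i, (frame, region) in enumerate(target_frames):
        label = classify_gesture(frame, region)
        if label != current_label:
            if current_label is not None:
                actions.append((current_label, start, i - 1))
            current_label, start = label, i
    if current_label is not None:
        actions.append((current_label, start, len(target_frames) - 1))
    return actions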
In step S103, when the action recognition result is obtained, the action sequence of the initialized special effect is determined. Specifically, while action recognition is performed on the moving target, special effect matching is triggered once at least one action displayed by the moving target during shooting has been determined. In the embodiments of the application, one special effect corresponds to a plurality of actions, that is, the special effect is triggered after the moving target displays several consecutive actions. For example, for a special effect named "Buddha's Palm", the corresponding actions include making a fist, extending the fist from front to back, extending the palm, and so on; when the moving target displays the corresponding actions, the action recognition result of the moving target matches the "Buddha's Palm" special effect, and a giant palm effect appears on the display interface. In step S103, when the action recognition result is obtained, the action sequence of the initialized special effect is determined so that the actions in that sequence can be matched against the actions in the action recognition result. Optionally, when the moving target is a human hand, the initialized special effect may be a special effect related to martial arts. A martial-arts special effect consists of a series of martial-arts actions, and each action has a corresponding completion time, action point-position accuracy and duration accuracy; specifically, a common martial-arts move is disassembled into action details that are combined into a specific action sequence. The initialized special effect may be a preset special effect; an interface for setting variables may be provided externally, and the initialized special effect is set according to the special-effect material.
In step S104, each action in the action sequence of the initialized special effect is matched against each action in the action recognition result, the action accuracy between each action in the action sequence of the initialized special effect and the corresponding matched action in the action recognition result is determined, and the special effect accuracy with which the moving target completed all actions in the action sequence of the initialized special effect is determined from the calculated action accuracies. Specifically, each action in the action sequence of the initialized special effect is matched against each action in the action recognition result by an action recognizer. The action recognizer judges whether the current action conforms to the action pattern preset by the system (an action in the action sequence of the initialized special effect) and whether the action is accurate (whether the point positions of the action are accurate, whether the action trigger time is accurate, and whether the action duration is accurate; each of the three variables has an accuracy value). A special effect recognizer then calculates the special effect accuracy of the initialized special effect from the action accuracies. Specifically, the special effect recognizer contains a sequence of action recognizers and judges the accuracy of the whole action sequence; when the action recognition result is obtained in step S103, the special effect recognizer is started, the actions are recognized one by one in the order of the action-recognizer sequence, and the special effect accuracy of the initialized special effect is calculated from the point-position accuracy, trigger-time accuracy and duration accuracy of each action.
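The recognizer structure described here can be sketched as plain data classes: each action recognizer carries the expected key-point layout, trigger time and duration for one action, and the special effect recognizer is an ordered list of them plus the per-action accuracy results. The field names are assumptions for illustration only:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ActionSpec:
    name: str                            # e.g. "fist"
    expected_points: Tuple[float, ...]   # reference key-point layout
    trigger_time_ms: float               # when the action should start
    duration_ms: float                   # how long it should be held

@dataclass
class ActionMatch:
    point_accuracy: float                # each value normalized to [0, 1]
    trigger_accuracy: float
    duration_accuracy: float

@dataclass
class EffectRecognizer:
    effect_name: str
    actions: List[ActionSpec] = field(default_factory=list)   # ordered sequence
    matches: List[ActionMatch] = field(default_factory=list)  # filled during matching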
In step S105, the target special effect associated with the initialized special effect is acquired and displayed based on the special effect accuracy. Specifically, the target special effect associated with the initialized special effect is acquired and displayed based on the special effect accuracy, calculated in step S104, of the actions completed by the moving target. Optionally, a standard value of the special effect accuracy of the initialized special effect is set in the special effect recognizer; when the special effect accuracy with which the current moving target completed the action sequence of the initialized special effect is determined in step S104, the deviation between the special effect accuracy of the actually completed actions and the set standard value can be calculated, and the target special effect associated with the initialized special effect is acquired and displayed according to that deviation. The target special effect is obtained by adjusting or selecting the initialized special effect according to the special effect accuracy. The special effect accuracy can be expressed as a normalized value. For example, if the initialized special effect makes a halo appear around the moving target, then when the special effect accuracy is 0.8, the size and/or rendering degree of the halo is adjusted to 80% of the original display effect, and the adjusted halo is displayed as the target special effect.
According to the method, the device and the system, the action sequence corresponding to the initialized special effect and comprising at least one action is set, the action recognition result comprising at least one action is obtained by recognizing the action of the moving target in the original video, each action in the action sequence of the initialized special effect is matched with each action in the action recognition result, and finally the special effect accuracy is determined according to the action accuracy between each action in the action sequence of the initialized special effect and the corresponding action matched in the action recognition result, so that the target special effect is obtained according to the special effect accuracy for display, the accuracy of special effect matching is improved, and the interactivity of the special effect matching process is enhanced.
In one embodiment, as shown in fig. 2, step S102 performs motion recognition on the moving object based on the at least two object video frames, including any one of the following:
s201, sequentially identifying motion change trends of moving targets in two adjacent front and rear target video frames to finish motion identification of the moving targets;
s202, selecting at least two video frames to be identified from the at least two target video frames at a preset frequency, and identifying the motion change trend of a moving target in the at least two video frames to be identified so as to complete motion identification of the moving target.
Specifically, when the motion recognition is performed on the moving object, in step S201, the motion change trend of the moving object in two adjacent front and rear object video frames may be sequentially recognized; in view of reducing the complexity of calculation, in step S202, at least two video frames to be identified may be selected from at least two target video frames including a moving target at a predetermined frequency, and the motion change trend of the moving target in the at least two video frames to be identified may be identified; if 7 frames are taken as a period, extracting target video frames such as a first frame, a seventh frame, a fourteenth frame and the like from the at least two target video frames for identification. In an embodiment, the at least two target video frames are consecutive video frames each comprising a moving target.
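With a 7-frame period as in the example above, selecting the frames to identify is just a stride over the collected target frames; a sketch with a configurable period (the helper name is an assumption):

def sample_frames(target_frames, period=7):
    # Keep one frame per period (here every 7th target frame) to reduce computation.
    return target_frames[::period]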
In one embodiment, as shown in fig. 3, step S103 determines an action sequence for initializing the special effects when the action recognition result is obtained, including any one of the following:
s301, when an action recognition result is obtained, determining an action sequence of at least one initialized special effect based on the first action in time sequence in the action recognition result;
s302, when an action recognition result is obtained, taking a preset action sequence of the special effect as an action sequence of the initialized special effect; the preset special effects include at least one.
In step S301, when the action recognition result is obtained, the action sequence of at least one initialized special effect is determined based on the first action, in time order, in the action recognition result. Specifically, when the action recognition result is acquired, at least one action sequence of initialized special effects is determined based on the first acquired action. For example, if the first action in the current action recognition result is making a fist, the action sequences whose first action is fist-related are obtained from the stored special effects and used as the action sequences of the initialized special effects. Optionally, guidance for triggering a special effect is displayed on the shooting interface of the terminal; if a special effect such as "Buddha's Palm" needs to be triggered, the user is guided to make a fist as the starting action, and when the user completes that action, the action sequence of the initialized special effect is determined, that is, the action sequence corresponding to the "Buddha's Palm" special effect is used as the action sequence of the initialized special effect, and the action recognizer and the special effect recognizer are triggered. That is, in step S301, the action sequence of the initialized special effect is related to the first action completed by the moving target.
In step S302, when the action recognition result is obtained, the action sequence of a preset special effect is used as the action sequence of the initialized special effect; there is at least one preset special effect. Specifically, when the action recognition result is obtained, the action sequence of the preset special effect is used as the action sequence of the initialized special effect, that is, the action sequence of the initialized special effect is unrelated to the action currently displayed by the moving target. The action sequence of the preset special effect includes at least one of the following: (1) a fixed action sequence, such as fist - palm - fist; (2) a randomly generated action sequence, for example, when the action pool contains a plurality of actions, any number of actions are randomly drawn from it and combined into an action sequence; (3) the action sequences of the special effects in the special effect pool, for example, the special effect pool contains 3 special effects a, b and c, each corresponding to its own action sequence. In an embodiment, in step S302, using the action sequence of the preset special effect as the action sequence of the initialized special effect includes using the action sequences of all special effects in the special effect pool as the action sequences of the initialized special effects.
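The three sources of preset action sequences listed above (fixed, randomly assembled from an action pool, or taken from the effect pool) could be modelled as below; the pools, effect names and action labels are illustrative assumptions:

import random

FIXED_SEQUENCE = ["fist", "palm", "fist"]              # (1) fixed sequence
ACTION_POOL = ["fist", "palm", "point", "wave", "ok"]  # (2) pool of single actions
EFFECT_POOL = {                                        # (3) per-effect sequences
    "effect_a": ["fist", "palm"],
    "effect_b": ["point", "point", "fist"],
    "effect_c": ["wave", "palm", "fist"],
}

def init_sequences(mode: str):
    """Return the action sequence(s) used to initialize special effects."""
    if mode == "fixed":
        return [FIXED_SEQUENCE]
    if mode == "random":
        # Draw a random number of distinct actions from the pool.
        return [random.sample(ACTION_POOL, k=random.randint(2, len(ACTION_POOL)))]
    if mode == "pool":
        return list(EFFECT_POOL.values())   # all sequences in the effect pool
    raise ValueError(f"unknown mode: {mode}")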
In an embodiment, as shown in fig. 4, step S104 performs matching calculation on each action in the action sequence of the initialized special effect and each action in the action recognition result, determines an action accuracy between each action in the action sequence of the initialized special effect and a corresponding action matched in the action recognition result, and determines the special effect accuracy of the initialized special effect according to the calculated action accuracy, including:
s401, sequentially matching each action in the action sequence of the initialization special effect with each action in the action recognition result according to time sequence;
s402, respectively calculating the action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result;
s403, when each action in the action sequence of the initialized special effect is matched with the corresponding action in the action recognition result, determining the special effect accuracy of the initialized special effect according to the calculated action accuracy.
Specifically, action recognition and action matching are performed while the moving target displays the actions; that is, after the action sequence of the initialized special effect is determined in step S103, the actions in that sequence are used as the reference against which the actions completed by the moving target are matched. When one action in the action sequence of the initialized special effect is completed, the action accuracy is calculated for that action; in other words, the action accuracy of each action is calculated as the actions are matched in order. In one embodiment, in step S401, if, for example, the third action in the action sequence of the initialized special effect cannot be matched with any action in the action recognition result, the action matching is exited, that is, the action recognizer and the special effect recognizer are turned off and the subsequent steps are no longer executed; action matching proceeds sequentially, and once any action in the action sequence of the initialized special effect cannot be completed, the subsequent actions are no longer matched. Optionally, when an action in the action sequence of the initialized special effect cannot be completed, in step S402 the action accuracy of that action is set to 0 and matching of the subsequent actions continues. In step S402, the action accuracy of the actions in the action sequence of the initialized special effect that the moving target completed is calculated; one action in the action sequence of the initialized special effect corresponds to only one action in the action recognition result, and action matching is performed in time order. In step S403, when it is determined that each action in the action sequence of the initialized special effect matches the corresponding action in the action recognition result, the special effect accuracy of the initialized special effect is determined from the calculated action accuracy of each action.
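A sketch of the sequential matching in S401-S403, showing both behaviours mentioned above on a failed action: abort the remaining matching, or score that action 0 and continue. The function names and the score_action callback are assumptions:

def match_sequence(effect_actions, recognized_actions, score_action,
                   zero_fill_on_miss=False):
    """Match effect actions to recognized actions in time order.

    score_action(expected, actual) -> accuracy in [0, 1], or None if the
    recognized action does not match the expected one at all.
    Returns the per-action accuracies, or None if matching was aborted.
    """
    accuracies = []
    for i, expected in enumerate(effect_actions):
        actual = recognized_actions[i] if i < len(recognized_actions) else None
        acc = score_action(expected, actual) if actual is not None else None
        if acc is None:
            if zero_fill_on_miss:
                accuracies.append(0.0)      # count the miss and keep going
                continue
            return None                     # abort: later actions are not matched
        accuracies.append(acc)
    return accuracies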
In an embodiment, when there is only one action sequence of the initialized special effect, the user can be guided to make the actions in that sequence by displaying action guidance on the display interface of the terminal. Suppose the initialized special effect is "Buddha's Palm" and the corresponding action sequence contains three actions: making a fist, extending the fist from front to back, and extending the palm. Assuming the user triggers the matching of the "Buddha's Palm" special effect by completing the fist action, step S402 calculates the action accuracy between the first action of the initialized special effect and the first action completed by the moving target. A guidance diagram of the second action (extending the fist from the front) is then displayed on the terminal interface to guide the user to complete the second action in the action sequence of the initialized special effect; when step S401 determines that the second action of the moving target matches the second action of the initialized special effect, the action accuracy between the second action of the initialized special effect and the second action completed by the moving target is calculated through step S402. A guidance diagram of the third action (extending the palm) is then displayed on the terminal interface to guide the user to complete the third action in the action sequence of the initialized special effect; when step S401 determines that the third action of the moving target matches the third action of the initialized special effect, the action accuracy between the third action of the initialized special effect and the third action completed by the moving target is calculated through step S402. At this point, step S403 determines that each action in the action sequence of the initialized special effect matches the corresponding action in the action recognition result, and determines the special effect accuracy of the initialized special effect from the action accuracies of the three actions.
In one embodiment, as shown in FIG. 5, the action accuracy includes action point location accuracy, action trigger time accuracy, and action duration accuracy; step S104 determines the special effect accuracy of the initialized special effect according to the calculated action accuracy, including:
s501, calculating the sum of the accuracy of action points, the sum of the accuracy of action triggering time and the sum of the accuracy of action duration of all actions in the action sequence of the initialization special effect;
s502, calculating the accuracy sum of the action points, the accuracy sum of the action triggering time and the accuracy sum of the action duration based on preset weights to obtain the special effect accuracy of the initialization special effect.
Specifically, the action accuracy consists of three variables: action point-position accuracy, action trigger-time accuracy and action duration accuracy. The action point-position accuracy can be determined through key-point localization; the action trigger-time accuracy is determined by the display timestamp of the first video frame of each action; and the action duration accuracy is determined by the time span from the first to the last video frame of each completed action. The action accuracy of each action may be represented by a normalized value, and the special effect accuracy determined from the action accuracies is illustrated with table 1 below:
TABLE 1
As can be seen from table 1, the action sequence of the initialized special effect includes 3 actions, and the action recognition result contains 3 corresponding actions. The values above may be calculated from absolute differences; for example, assuming the standard duration for completing the first action of the initialized special effect is 5 ms, if the moving target takes 4 ms or 6 ms to complete the first action, the action duration accuracy of the first action is 0.8 (1 - |4 - 5| / 5 = 0.8; 1 - |6 - 5| / 5 = 0.8). The other two variables (action point-position accuracy and action trigger-time accuracy) are calculated in the same way. In step S501, the sum of the action point-position accuracies of all actions in the action sequence of the initialized special effect is calculated to be 2.4, the sum of the action trigger-time accuracies 2.5, and the sum of the action duration accuracies 2.7; in step S502, the special effect accuracy of the initialized special effect calculated with the preset weights of the respective variables is 2.51. When the accuracy values of the three variables are represented by normalized values, the standard value of the special effect accuracy is 3. When two or more action sequences of initialized special effects are determined in step S103, each initialized special effect has its own corresponding special effect accuracy.
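A worked version of the example above. The per-action duration accuracy follows 1 - |actual - standard| / standard, and the weights (0.4, 0.35, 0.25) are purely an assumption chosen so that the sums 2.4, 2.5 and 2.7 reproduce the 2.51 in the text; the patent does not state the actual preset weights:

def duration_accuracy(actual_ms, standard_ms):
    # 1 - |4 - 5| / 5 = 0.8, as in the example (clamped to [0, 1]).
    return max(0.0, 1.0 - abs(actual_ms - standard_ms) / standard_ms)

def effect_accuracy(point_sum, trigger_sum, duration_sum,
                    weights=(0.4, 0.35, 0.25)):
    # Weighted combination of the three accuracy sums.
    w_point, w_trigger, w_duration = weights
    return w_point * point_sum + w_trigger * trigger_sum + w_duration * duration_sum

print(duration_accuracy(4, 5))           # 0.8
print(effect_accuracy(2.4, 2.5, 2.7))    # approximately 2.51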
Optionally, when only one action sequence of the initialized special effect is determined in step S103, a standard-value calculation mode may be adopted for the accuracy values of the three variables (action point-position accuracy, action trigger-time accuracy, action duration accuracy). Taking the action duration accuracy as an example, suppose the standard range of the action duration of the second action of the initialized special effect is 5 ms-10 ms; if the moving target takes 7 ms to complete the second action, the action duration accuracy of the second action is 1. That is, the accuracy is 1 as long as the duration with which the moving target completes the second action falls within the standard range; if it does not, it is judged that the moving target did not complete the second action of the initialized special effect, the action recognizer and the special effect recognizer are exited, and it is determined that action recognition has failed. The other two variables (action point-position accuracy and action trigger-time accuracy) are calculated in the same way. When two or more action sequences of initialized special effects are determined in step S103, suppose there are three such sequences, corresponding to initialized special effect A, initialized special effect B and initialized special effect C. If the first action completed by the moving target matches the first action of all three initialized special effects, the action accuracy of the first action of the three initialized special effects is 1; if the second action completed by the moving target matches only initialized special effect A and initialized special effect B, action matching for initialized special effect C is exited, and the action accuracy of initialized special effect A and initialized special effect B on the second action is 3; if the third action completed by the moving target matches only initialized special effect A, action matching for initialized special effect B is exited, the action accuracy of initialized special effect A on the third action is 3, and if initialized special effect A also contains a fourth action or more, matching against the actions completed by the moving target continues. If initialized special effect A has only three actions, the special effect accuracy of the actions completed by the moving target for initialized special effect A is 3. Since the action accuracy consists of three variables, the standard value of the special effect accuracy is 3.
In one embodiment, as shown in fig. 6, when an initialized special effect is included, step S105 obtains a target special effect associated with the initialized special effect based on the special effect accuracy and displays the target special effect, including:
s601, adjusting the display effect of the initialized special effect based on the special effect accuracy, and displaying the initialized special effect after adjusting the display effect as a target special effect.
Specifically, when only one action sequence of the initialized special effect is determined in step S103, the display effect of the initialized special effect is adjusted based on the special effect accuracy, and the adjusted initialized special effect is taken as the target special effect. For example, assume the standard value of the special effect accuracy of the initialized special effect is 3 and the actual value calculated in step S104 is 2.51; the display effect of the initialized special effect is then adjusted according to the ratio of the actual value to the standard value. If the display effect of the initialized special effect at the standard value is a halo of size 7x7, the adjusted display effect is a halo of about 5.8x5.8, and the adjusted halo is displayed on the display interface as the target special effect.
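The adjustment in S601 is essentially a linear scale by the ratio of the achieved accuracy to the standard value: 2.51 / 3 ≈ 0.84, so a 7x7 halo becomes roughly the 5.8x5.8 halo of the example. A sketch, with the halo represented only by its size (an assumption for illustration):

def adjust_halo(base_size, effect_accuracy, standard_accuracy=3.0):
    """Scale the initialized effect's halo by achieved / standard accuracy."""
    ratio = min(effect_accuracy / standard_accuracy, 1.0)
    return (base_size[0] * ratio, base_size[1] * ratio)

print(adjust_halo((7.0, 7.0), 2.51))   # approximately (5.86, 5.86)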
In one embodiment, as shown in fig. 7, when at least two initialized special effects are included, step S105 obtains a target special effect associated with the initialized special effect based on the special effect accuracy and displays the target special effect, including:
S701, determining the initialized special effect with the highest special effect accuracy as a target special effect, and displaying the target special effect.
Specifically, when two or more action sequences of initialized special effects are determined in step S103, the initialized special effect with the highest actual value of special effect accuracy calculated in step S104 is determined as the target special effect. For example, if the action sequences of three initialized special effects are determined in step S103, corresponding to initialized special effect A, initialized special effect B and initialized special effect C, the special effect accuracy of the moving target when completing the actions is calculated for each of them; assume the special effect accuracy corresponding to initialized special effect A is 2.7, that of initialized special effect B is 2.63 and that of initialized special effect C is 2.82. Initialized special effect C is then acquired as the target special effect and displayed. In one embodiment, as shown in fig. 8, assuming the action sequence corresponding to initialized special effect C is the action sequence of the martial-arts move "One Yang Finger", a special effect related to "One Yang Finger" appears on the display interface.
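Selecting the target effect among several initialized effects (step S701) is then a simple arg-max over the computed accuracies; the dictionary keys below mirror the A/B/C example and are illustrative only:

def pick_target_effect(effect_accuracies):
    """Return the name of the initialized effect with the highest accuracy."""
    return max(effect_accuracies, key=effect_accuracies.get)

print(pick_target_effect({"effect_a": 2.7, "effect_b": 2.63, "effect_c": 2.82}))
# -> "effect_c"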
In an application example, as shown in fig. 9, it is assumed that the moving object is a human hand. A user starts a video shooting application program on a terminal to shoot video; in the shooting process, a user wants to increase the interestingness of a shot video in a special effect adding mode, for example, if the user wants to add a positive effect related to martial arts, wherein the action sequence of the positive effect comprises two, namely, a fist starting, extending one finger and keeping a preset duration, and the preset duration is 3 seconds, the fist is displayed in a video shooting area, and the special effect matching is triggered (a trigger gesture of a plurality of special effects can be displayed on a terminal display interface, and the user performs different special effect matching by displaying different gesture triggers, and sets the trigger gesture of the positive effect as the fist); at this time, an action guiding diagram of a first action appears on a terminal display interface (the action guiding diagram can be a word or an action path; if the word is a prompt such as "fist playing"), when a user completes the current first action according to the action guiding diagram, an action guiding diagram of a second action appears on the display interface (the word in the action guiding diagram of the second action can be expressed as "stretch a finger and hold for 3 seconds"; the terminal calculates the action accuracy of the user completing the first action at the same time); when the user completes the second action according to the action guidance chart (the terminal calculates the action accuracy of the user completing the second action, calculates the special effect accuracy based on the action accuracy of each action, further adjusts the display effect of the special effect of the male finger based on the special effect accuracy, and displays the special effect of the male finger after the display effect is adjusted as the target special effect), a sticker of the special effect of the male finger appears on the terminal display interface, as shown in fig. 8.
Optionally, the gesture that triggers special effect matching may be the first action in the action sequence of the initialized special effect, that is, the user triggers matching of the corresponding special effect by completing the first action. In the above embodiment, the user triggers matching of the One Yang Finger special effect by completing the fist action; at this time, the terminal display interface displays the action guidance diagram of the second action in the action sequence (extending one finger and holding for the preset duration), while the terminal calculates the action accuracy with which the user completed the first action. The subsequent steps are the same as in the above embodiment and are not repeated here.
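The guided flow described in the two examples above can be sketched roughly as follows. This is a hedged Python sketch; the Action structure, the helper callbacks detect_action and score_action, and the One Yang Finger sequence shown here are assumptions for illustration only, not interfaces defined by the present application:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str            # e.g. "fist"
        guidance: str        # text shown as the action guidance diagram
        hold_seconds: float = 0.0

    # Assumed two-action sequence of the One Yang Finger special effect.
    ONE_YANG_FINGER = [
        Action("fist", "Make a fist"),
        Action("one_finger", "Extend one finger and hold for 3 seconds", 3.0),
    ]

    def run_guided_matching(sequence, detect_action, score_action):
        """Show the guidance diagram for each action in turn, score the user's
        completed action, and collect the per-action accuracies."""
        accuracies = []
        for action in sequence:
            print("Guidance:", action.guidance)    # display the guidance diagram
            observed = detect_action(action)       # wait for and recognize the action
            accuracies.append(score_action(action, observed))
        return accuracies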
In one embodiment, as shown in fig. 10, there is provided a special effects processing apparatus 100, including:
the detection module 101 is configured to perform image detection on an original video, and obtain at least two target video frames including a moving target;
an identification module 102, configured to perform motion recognition on the moving target based on the at least two target video frames;
a determining module 103, configured to determine an action sequence for initializing the special effect when the action recognition result is obtained;
the computing module 104 is configured to perform matching computation on each action in the action sequence of the initialized special effect and each action in the action recognition result, determine action accuracy between each action in the action sequence of the initialized special effect and a corresponding action matched in the action recognition result, and determine special effect accuracy of the initialized special effect according to the action accuracy obtained by computation;
and the display module 105, configured to acquire and display the target special effect associated with the initialized special effect based on the special effect accuracy.
In one embodiment, the identification module 102 includes any one of the following units: a first recognition unit, configured to recognize the motion change trend of the moving target in each pair of adjacent target video frames in sequence, so as to complete motion recognition of the moving target; a second recognition unit, configured to select at least two video frames to be recognized from the at least two target video frames at a preset frequency, and recognize the motion change trend of the moving target in the at least two video frames to be recognized, so as to complete motion recognition of the moving target.
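A hedged sketch of the two recognition strategies follows; the recognize_trend helper is a placeholder assumption, and frames stands for the target video frames in temporal order:

    def recognize_adjacent(frames, recognize_trend):
        """First strategy: recognize the motion change trend of the moving
        target between every pair of adjacent target video frames, in order."""
        return [recognize_trend(prev, curr) for prev, curr in zip(frames, frames[1:])]

    def recognize_sampled(frames, recognize_trend, step=5):
        """Second strategy: select frames to be recognized at a preset
        frequency (here, every `step`-th frame) and recognize the motion
        change trend between the selected frames."""
        sampled = frames[::step]
        return [recognize_trend(prev, curr) for prev, curr in zip(sampled, sampled[1:])]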
In an embodiment, the determining module 103 includes any one of the following units: a first determining unit, configured to determine, when the action recognition result is obtained, at least one action sequence of the initialized special effect based on the first action, in time order, of the action recognition result; a second determining unit, configured to take the action sequence of a preset special effect as the action sequence of the initialized special effect when the action recognition result is obtained; there is at least one preset special effect.
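A rough sketch of the first strategy (the data shapes are assumptions): the candidate initialized special effects are those whose action sequence begins with the first recognized action.

    def candidate_initialized_effects(recognition_result, effect_library):
        """recognition_result: recognized actions ordered by time.
        effect_library: mapping from special effect name to its action sequence.
        Returns the effects whose sequence starts with the first recognized action."""
        first_action = recognition_result[0]
        return {
            name: sequence
            for name, sequence in effect_library.items()
            if sequence and sequence[0] == first_action
        }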
In one embodiment, the computing module 104 includes: the matching unit is used for sequentially matching each action in the action sequence of the initialization special effect with each action in the action recognition result according to time sequence; the action calculation unit is used for calculating action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result respectively; and the special effect calculation unit is used for determining the special effect accuracy of the initialized special effect according to the calculated action accuracy when each action in the action sequence of the initialized special effect is matched with the corresponding action in the action recognition result.
In an embodiment, the action accuracy includes action point location accuracy, action trigger time accuracy, and action duration accuracy; the calculation module 104 includes a first calculation unit, configured to calculate a sum of accuracy of action points, a sum of accuracy of action trigger time, and a sum of accuracy of action duration of all actions in the action sequence of the initialization special effect; and the second calculation unit is used for calculating the action point position accuracy sum, the action trigger time accuracy sum and the action duration time accuracy sum based on preset weights to obtain the special effect accuracy of the initialization special effect.
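In code form, the weighted combination might look like the following sketch; the weight values are illustrative assumptions, since the present application only states that preset weights are used:

    def special_effect_accuracy(point_acc, trigger_acc, duration_acc,
                                w_point=0.5, w_trigger=0.3, w_duration=0.2):
        """point_acc, trigger_acc, duration_acc: per-action accuracy lists for
        all actions in the action sequence of the initialized special effect.
        Each kind of accuracy is summed over the actions, then the three sums
        are combined with preset weights."""
        return (w_point * sum(point_acc)
                + w_trigger * sum(trigger_acc)
                + w_duration * sum(duration_acc))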
In an embodiment, when an initialized special effect is included, the display module 105 includes a first display unit, configured to adjust a display effect of the initialized special effect based on the accuracy of the special effect, and display the initialized special effect after the adjustment of the display effect as a target special effect.
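One possible, purely illustrative adjustment is to scale a display parameter of the effect by the normalized special effect accuracy; the present application does not prescribe a specific mapping, so the sketch below is an assumption:

    def adjust_display_effect(base_intensity, accuracy, max_accuracy=3.0):
        """Hypothetical mapping: scale a display parameter (e.g. intensity or
        opacity) of the initialized special effect by the special effect
        accuracy, clamped to the range [0, 1]."""
        ratio = max(0.0, min(accuracy / max_accuracy, 1.0))
        return base_intensity * ratio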
In an embodiment, when at least two initialized special effects are included, the display module 105 includes a second display unit, configured to determine the initialized special effect with the highest accuracy of the special effects as a target special effect, and display the target special effect.
The special effect processing apparatus of the embodiments of the present application may execute the special effect processing method provided by the embodiments of the present application, and its implementation principle is similar. The operations executed by each module of the special effect processing apparatus correspond to the steps of the special effect processing method of the embodiments of the present application; for a detailed functional description of each module, reference may be made to the description of the corresponding special effect processing method shown above, which is not repeated here.
Based on the same principles as the methods shown in the embodiments of the present application, there is also provided in the embodiments of the present application an electronic device, which may include, but is not limited to: a processor and a memory; a memory for storing computer operating instructions; and the processor is used for executing the special effect processing method shown in the embodiment by calling the computer operation instruction.
In an alternative embodiment, an electronic device is provided. As shown in fig. 11, the electronic device 4000 includes a processor 4001 and a memory 4003, where the processor 4001 is coupled to the memory 4003, for example via a bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004. It should be noted that, in practical applications, the transceiver 4004 is not limited to one, and the structure of the electronic device 4000 does not limit the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various exemplary logic blocks, modules and circuits described in connection with the present disclosure. The processor 4001 may also be a combination that implements computing functionality, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The processor 4001 executes, by calling computer operation instructions, the special effect processing method shown in the above embodiments, the method including: performing image detection on an original video to obtain at least two target video frames comprising a moving target; performing motion recognition on the moving target based on the at least two target video frames; when an action recognition result is obtained, determining an action sequence of the initialized special effect; performing matching calculation on each action in the action sequence of the initialized special effect and each action in the action recognition result, determining the action accuracy between each action in the action sequence of the initialized special effect and the corresponding matched action in the action recognition result, and determining the special effect accuracy of the initialized special effect according to the calculated action accuracy; and acquiring and displaying a target special effect associated with the initialized special effect based on the special effect accuracy.
Bus 4002 may include a path for transferring information between the aforementioned components. The bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 11, but this does not mean that there is only one bus or only one type of bus.
Memory 4003 may be, but is not limited to, ROM (Read Only Memory) or another type of static storage device that can store static information and instructions, RAM (Random Access Memory) or another type of dynamic storage device that can store information and instructions, EEPROM (Electrically Erasable Programmable Read Only Memory), CD-ROM (Compact Disc Read Only Memory) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 4003 is used to store application program code for executing the solution of the present application, and its execution is controlled by the processor 4001. The processor 4001 is configured to execute the application program code stored in the memory 4003 to implement what is shown in the foregoing method embodiments.
The electronic device includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in fig. 11 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium having a computer program stored thereon, which when run on a computer, causes the computer to perform the corresponding method embodiments described above.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
The computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out the operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. The name of the module is not limited to the module itself in some cases, for example, the detection module may also be described as "a module for performing image detection on an original video and acquiring at least two target video frames including a moving target".
The above description is only illustrative of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure of the present application is not limited to the specific combinations of the technical features described above, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the concept of the disclosure, for example, technical solutions in which the above features are replaced with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (13)

1. A special effect processing method, characterized by comprising:
performing image detection on an original video to obtain at least two target video frames comprising a moving target;
performing motion recognition on the moving target based on the at least two target video frames;
when an action recognition result is obtained, determining an action sequence for initializing the special effect;
performing matching calculation on each action in the action sequence of the initialization special effect and each action in the action recognition result, and determining the action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result; the action accuracy comprises action point position accuracy, action triggering time accuracy and action duration accuracy;
Determining the special effect accuracy of the initialized special effect according to the calculated action accuracy, including: calculating the sum of the accuracy of action points, the sum of the accuracy of action triggering time and the sum of the accuracy of action duration of all actions in the action sequence of the initialization special effect; calculating the accuracy sum of the action points, the accuracy sum of the action triggering time and the accuracy sum of the action duration based on preset weights to obtain the special effect accuracy of the initialization special effect;
and acquiring a target special effect associated with the initialized special effect based on the special effect accuracy and displaying the target special effect.
2. The method of claim 1, wherein the motion recognition of the moving object based on the at least two object video frames comprises any one of:
the motion change trend of the moving targets in the front and rear two adjacent target video frames is recognized in sequence, so that the motion recognition of the moving targets is completed;
selecting at least two video frames to be identified from the at least two target video frames at a preset frequency, and identifying the motion change trend of a moving target in the at least two video frames to be identified so as to finish motion identification of the moving target.
3. The method according to claim 1, wherein when the action recognition result is obtained, determining the action sequence of initializing the special effects comprises any one of the following:
when an action recognition result is obtained, determining at least one action sequence for initializing the special effects based on the first action in the action recognition result according to time sequencing;
when an action recognition result is obtained, taking the action sequence of a preset special effect as the action sequence of the initialized special effect; there is at least one preset special effect.
4. The method according to claim 1, wherein the performing a matching calculation on each action in the action sequence of the initialized special effect and each action in the action recognition result, determining an action accuracy between each action in the action sequence of the initialized special effect and a corresponding action matched in the action recognition result, and determining the special effect accuracy of the initialized special effect according to the calculated action accuracy, includes:
sequentially matching each action in the action sequence of the initialized special effect with each action in the action recognition result according to time sequence;
respectively calculating the action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result;
When each action in the action sequence of the initialized special effect is matched with the corresponding action in the action recognition result, determining the special effect accuracy of the initialized special effect according to the calculated action accuracy.
5. The method according to claim 3, wherein, when one initialized special effect is included, the acquiring and displaying a target special effect associated with the initialized special effect based on the special effect accuracy comprises:
and adjusting the display effect of the initialized special effect based on the special effect accuracy, and displaying the initialized special effect after adjusting the display effect as a target special effect.
6. The method according to claim 3, wherein, when at least two initialized special effects are included, the acquiring and displaying a target special effect associated with the initialized special effect based on the special effect accuracy comprises:
and determining the initialization special effect with the highest special effect accuracy as a target special effect, and displaying the target special effect.
7. A special effect processing apparatus, characterized by comprising:
the detection module is used for carrying out image detection on the original video and obtaining at least two target video frames comprising a moving target;
the identification module is used for performing motion recognition on the moving target based on the at least two target video frames;
the determining module is used for determining an action sequence of the initialization special effect when the action recognition result is obtained;
the calculation module is used for carrying out matching calculation on each action in the action sequence of the initialization special effect and each action in the action recognition result, and determining the action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result; the action accuracy comprises action point position accuracy, action triggering time accuracy and action duration accuracy; determining the special effect accuracy of the initialized special effect according to the calculated action accuracy; the computing module comprises a first computing unit and a second computing unit which are specifically applied when the special effect accuracy of the initialization special effect is determined according to the action accuracy obtained by computing; the first calculating unit is used for calculating the sum of the accuracy of action points of all actions in the action sequence of the initialization special effect, the sum of the accuracy of action triggering time and the sum of the accuracy of action duration; the second calculating unit is used for calculating the action point position accuracy sum, the action triggering time accuracy sum and the action duration accuracy sum based on preset weights to obtain the special effect accuracy of the initialization special effect;
And the display module is used for acquiring and displaying the target special effect associated with the initialized special effect based on the special effect accuracy.
8. The apparatus of claim 7, wherein the identification module comprises any one of:
the first recognition unit is used for recognizing motion change trends of moving targets in two adjacent front and rear target video frames in sequence to finish motion recognition of the moving targets;
the second recognition unit is used for selecting at least two video frames to be recognized from the at least two target video frames at a preset frequency, and recognizing the motion change trend of the moving target in the at least two video frames to be recognized so as to complete motion recognition of the moving target.
9. The apparatus of claim 7, wherein the determining module comprises any one of:
the first determining unit is used for determining at least one action sequence for initializing the special effects based on the first action in the action recognition result according to time sequence when the action recognition result is obtained;
the second determining unit is used for taking the action sequence of the preset special effect as the action sequence of the initialized special effect when the action recognition result is obtained; the preset special effects include at least one.
10. The apparatus of claim 7, wherein the computing module comprises:
the matching unit is used for sequentially matching each action in the action sequence of the initialization special effect with each action in the action recognition result according to time sequence;
the action calculation unit is used for calculating action accuracy between each action in the action sequence of the initialization special effect and the corresponding action matched in the action recognition result respectively;
and the special effect calculation unit is used for determining the special effect accuracy of the initialized special effect according to the calculated action accuracy when each action in the action sequence of the initialized special effect is matched with the corresponding action in the action recognition result.
11. The apparatus of claim 9, wherein when an initialized special effect is included, the display module includes a first display unit configured to adjust a display effect of the initialized special effect based on the special effect accuracy, and display the initialized special effect after the adjustment of the display effect as a target special effect.
12. An electronic device, comprising:
one or more processors;
A memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the special effect processing method according to any one of claims 1 to 6.
13. A computer readable storage medium storing at least one instruction, at least one program, code set, or instruction set, the at least one instruction, the at least one program, the code set, or instruction set being loaded and executed by a processor to implement the special effects processing method of any one of claims 1-6.
CN202010443569.0A 2020-05-22 2020-05-22 Special effect processing method and related equipment Active CN111611941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010443569.0A CN111611941B (en) 2020-05-22 2020-05-22 Special effect processing method and related equipment

Publications (2)

Publication Number Publication Date
CN111611941A CN111611941A (en) 2020-09-01
CN111611941B true CN111611941B (en) 2023-09-19

Family

ID=72199320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010443569.0A Active CN111611941B (en) 2020-05-22 2020-05-22 Special effect processing method and related equipment

Country Status (1)

Country Link
CN (1) CN111611941B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112333473B (en) * 2020-10-30 2022-08-23 北京字跳网络技术有限公司 Interaction method, interaction device and computer storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017152794A1 (en) * 2016-03-10 2017-09-14 Zhejiang Shenghui Lighting Co., Ltd. Method and device for target tracking
WO2019024750A1 (en) * 2017-08-03 2019-02-07 腾讯科技(深圳)有限公司 Video communications method and apparatus, terminal, and computer readable storage medium
CN109600559A (en) * 2018-11-29 2019-04-09 北京字节跳动网络技术有限公司 A kind of special video effect adding method, device, terminal device and storage medium
CN109618183A (en) * 2018-11-29 2019-04-12 北京字节跳动网络技术有限公司 A kind of special video effect adding method, device, terminal device and storage medium
CN110222576A (en) * 2019-05-07 2019-09-10 北京字节跳动网络技术有限公司 Punch action recognition methods, device and electronic equipment
CN110472531A (en) * 2019-07-29 2019-11-19 腾讯科技(深圳)有限公司 Method for processing video frequency, device, electronic equipment and storage medium
WO2020020156A1 (en) * 2018-07-23 2020-01-30 腾讯科技(深圳)有限公司 Video processing method and apparatus, terminal device, server, and storage medium
CN110913205A (en) * 2019-11-27 2020-03-24 腾讯科技(深圳)有限公司 Video special effect verification method and device
CN111104930A (en) * 2019-12-31 2020-05-05 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111611941A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN109584276B (en) Key point detection method, device, equipment and readable medium
CN109657533A (en) Pedestrian recognition methods and Related product again
US20190116323A1 (en) Method and system for providing camera effect
CN110610154A (en) Behavior recognition method and apparatus, computer device, and storage medium
CN112148197A (en) Augmented reality AR interaction method and device, electronic equipment and storage medium
CN113034652A (en) Virtual image driving method, device, equipment and storage medium
US20210281744A1 (en) Action recognition method and device for target object, and electronic apparatus
EP3617934A1 (en) Image recognition method and device, electronic apparatus, and readable storage medium
EP3987443A1 (en) Recurrent multi-task convolutional neural network architecture
CN108096833B (en) Motion sensing game control method and device based on cascade neural network and computing equipment
WO2022174594A1 (en) Multi-camera-based bare hand tracking and display method and system, and apparatus
CN111273772B (en) Augmented reality interaction method and device based on slam mapping method
CN112560622B (en) Virtual object action control method and device and electronic equipment
US20230401799A1 (en) Augmented reality method and related device
CN114241597A (en) Posture recognition method and related equipment thereof
CN111611941B (en) Special effect processing method and related equipment
CN114202454A (en) Graph optimization method, system, computer program product and storage medium
CN111274489B (en) Information processing method, device, equipment and storage medium
WO2019022829A1 (en) Human feedback in 3d model fitting
CN116452946A (en) Model training method and electronic equipment
CN111104911A (en) Pedestrian re-identification method and device based on big data training
CN115497094A (en) Image processing method and device, electronic equipment and storage medium
CN110263743B (en) Method and device for recognizing images
CN114648556A (en) Visual tracking method and device and electronic equipment
CN112418153A (en) Image processing method, image processing device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40028078

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant