CN111263084B - Video-based gesture jitter detection method, device, terminal and medium - Google Patents


Info

Publication number
CN111263084B
Authority
CN
China
Prior art keywords
gesture
video
key position
determining
central point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811458369.1A
Other languages
Chinese (zh)
Other versions
CN111263084A (en)
Inventor
郑微
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201811458369.1A
Publication of CN111263084A
Application granted
Publication of CN111263084B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Abstract

Embodiments of the disclosure disclose a video-based gesture shake detection method, device, terminal and medium. The method includes: acquiring at least two first key position points of a gesture in a current data frame of a captured video; determining a first central point of the gesture according to the at least two first key position points; determining at least two second key position points of the gesture in the next frame data of the current data frame; determining a second central point of the gesture according to the at least two second key position points; and determining whether the gesture shakes during video shooting according to the relationship between the deviation value between the first central point and the second central point and a preset threshold value. The embodiments of the disclosure solve the problem in the prior art that detection of user gesture changes during video shooting is inaccurate, accurately detect whether the user gesture shakes, and ensure the display effect of the video special effect.

Description

Video-based gesture jitter detection method, device, terminal and medium
Technical Field
Embodiments of the disclosure relate to the field of Internet technologies, and in particular to a video-based gesture shake detection method, device, terminal and medium.
Background
The development of network technology has made video interaction applications very popular in people's daily lives.
As application functions increase, a user can add various video special effects to a video through gesture control. The accuracy of user gesture detection therefore directly influences how a video special effect is displayed. For example, during video shooting a user may want to use gestures to make a flame shown in the video shrink slowly from large to small, yet in the finally shot video the position of the flame keeps shaking; this happens because the terminal did not accurately detect the gesture changes in the video data during shooting.
Therefore, how to accurately detect gestures in video data so as to ensure the display effect of video special effects remains a problem to be solved.
Summary
Embodiments of the disclosure provide a video-based gesture shake detection method, device, terminal and medium, so that whether a user gesture shakes can be accurately detected during video shooting.
In a first aspect, an embodiment of the present disclosure provides a video-based gesture shake detection method, where the method includes:
acquiring at least two first key position points of a gesture in a current data frame in a shooting video according to a preset marking relation between the gesture and key position points;
determining a first central point of the gesture according to the at least two first key position points;
determining at least two second key location points of the gesture in data next to the current data frame;
determining a second central point of the gesture according to the at least two second key position points;
and determining whether the gesture shakes in the video shooting process according to the relation between the deviation value between the first central point and the second central point and a preset threshold value.
Optionally, the method further includes:
and triggering a gesture control event corresponding to the determination result in the next frame of data according to the determination result of the gesture jitter.
Optionally, determining whether the gesture shakes during the video shooting process according to a relationship between a deviation value between the first central point and the second central point and a preset threshold, including:
if the deviation value is smaller than or equal to the preset threshold value, the gesture does not shake in the video shooting process;
and if the deviation value is larger than the preset threshold value, the gesture shakes in the video shooting process.
Optionally, the triggering, according to the determination result of the gesture jitter, the gesture control event corresponding to the determination result in the next frame of data includes:
if the gesture does not shake, maintaining the same video effect as that in the current data frame in the next frame data;
and if the gesture shakes, triggering the video effect controlled after the gesture shakes in the next frame data.
Optionally, the determination of the gesture center point includes:
determining the center of a geometric figure formed by at least two key position points of the gesture in the video data frame as a gesture center point; or
determining the center of a circumscribed figure associated with the geometric figure formed by at least two key position points of the gesture in the video data frame as the gesture center point.
In a second aspect, an embodiment of the present disclosure further provides a video-based gesture shake detection apparatus, where the apparatus includes:
the first key position point acquisition module is used for acquiring at least two first key position points of a gesture in a current data frame in a shooting video according to a preset marking relation between the gesture and the key position points;
the first central point determining module is used for determining a first central point of the gesture according to the at least two first key position points;
the second key position point acquisition module is used for determining at least two second key position points of the gesture in the next frame data of the current data frame;
the second central point determining module is used for determining a second central point of the gesture according to the at least two second key position points;
and the gesture shaking result determining module is used for determining whether the gesture shakes in the video shooting process according to the relation between the deviation value between the first central point and the second central point and a preset threshold value.
Optionally, the apparatus further comprises:
and the gesture control event triggering module is used for triggering a gesture control event corresponding to the determination result in the next frame data according to the determination result of the gesture jitter.
Optionally, the gesture shaking result determining module is specifically configured to:
if the deviation value is smaller than or equal to the preset threshold value, the gesture does not shake in the video shooting process;
if the deviation value is larger than the preset threshold value, the gesture shakes in the video shooting process.
Optionally, the gesture control event triggering module is specifically configured to:
if the gesture does not shake, maintaining the same video effect as that in the current data frame in the next frame data;
and if the gesture shakes, triggering the video effect controlled after the gesture shakes in the next frame data.
Optionally, the first central point determining module or the second central point determining module is specifically configured to:
determining the center of a geometric figure formed by at least two key position points of the gesture in the video data frame as a gesture center point; or
determining the center of a circumscribed figure associated with the geometric figure formed by at least two key position points of the gesture in the video data frame as the gesture center point.
In a third aspect, an embodiment of the present disclosure further provides a terminal, including:
one or more processing devices;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the video-based gesture shake detection method according to any embodiment of the present disclosure.
In a fourth aspect, the embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by an apparatus, implements a video-based gesture shake detection method according to any embodiment of the present disclosure.
According to the embodiments of the disclosure, based on the preset marking relationship between a gesture and its key position points, the key position points of the user gesture in the current data frame and in the next frame data are obtained, the central point of the user gesture in each frame of data is calculated, and whether the user gesture shakes during video shooting is determined according to the relationship between the deviation value of the two central points and the preset threshold value.
Drawings
Fig. 1 is a schematic flowchart of a video-based gesture shake detection method according to an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of another video-based gesture shaking detection method provided by an embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of a video-based gesture shaking detection apparatus according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not limiting of the disclosure. It should be further noted that, for the convenience of description, only some of the structures relevant to the present disclosure are shown in the drawings, not all of them.
Optional features and examples are provided in each of the embodiments described below, and each of the features described in the embodiments may be combined to form multiple alternatives.
Fig. 1 is a schematic flowchart of a video-based gesture shake detection method according to an embodiment of the present disclosure. The present embodiment is applicable to detecting whether a user gesture shakes during video shooting. The method may be executed by a video-based gesture shake detection apparatus, which may be implemented in software and/or hardware and may be integrated on any terminal with a network communication function, such as a smartphone, a computer or an iPad.
As shown in fig. 1, a video-based gesture shaking detection method provided by an embodiment of the present disclosure may include:
s110, at least two first key position points of the gesture in the current data frame in the shooting video are obtained.
The user can start an application with a video shooting function installed on the terminal, such as a video interaction application, to shoot a video. The application allows the user to add a video effect through gesture control during shooting, where a video effect refers to an editing effect applied to the video on top of the original captured footage, such as adding an animation special effect to the video, changing the display form of a video special effect, or adding a video filter.
The key position points on a gesture are used to mark the gesture, and the marking relationship between key position points and gestures can be preset. When a user uses a gesture to control the video effect in the video currently being shot, the gesture can be recognized by acquiring at least two key position points on it, and whether the gesture position changes can be determined from the key position points of the gesture in different data frames, so that the video effect corresponding to the gesture position is triggered in the corresponding data frames. Different gestures are marked with different key position points, which this embodiment does not specifically limit. The key position points may be represented in screen coordinates.
Illustratively, the gestures in the present embodiment include gestures composed of different numbers of fingers and gestures formed by fingers and a palm together; a single palm may of course also serve as a gesture. If a gesture is composed of a number of fingers, its key position points include, but are not limited to, the fingertip position point of each finger, the joint position points of each finger, and the four vertices of a rectangle circumscribing the pulp of each finger; if the gesture is formed by fingers and a palm together, its key position points may also include position points on the outline of the palm. In this embodiment, a key position point of the current gesture in the current data frame of the captured video is referred to as a first key position point; the two terms denote essentially the same points.
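As a purely illustrative sketch of such a marking relationship (the gesture names, key-point labels, Python representation and coordinate values below are assumptions for illustration, not part of the disclosure), the preset mapping between gesture types and their key position points, with each point expressed in screen coordinates, might look like this:

```python
# Hypothetical preset marking relationship between gesture types and the key
# position points used to mark them; all names are illustrative assumptions.
GESTURE_KEY_POINTS = {
    "thumb": [
        "thumb_pulp_rect_top_left", "thumb_pulp_rect_top_right",
        "thumb_pulp_rect_bottom_right", "thumb_pulp_rect_bottom_left",
    ],
    "two_fingers": ["index_tip", "index_joint", "middle_tip", "middle_joint"],
    "palm": ["palm_contour_left", "palm_contour_top",
             "palm_contour_right", "palm_contour_bottom"],
}

# A detection result for one data frame: each key position point of the
# recognized gesture mapped to its screen coordinates (x, y) in pixels.
current_frame_detection = {
    "gesture": "thumb",
    "key_points": {
        "thumb_pulp_rect_top_left": (412.0, 300.5),
        "thumb_pulp_rect_top_right": (455.0, 300.5),
        "thumb_pulp_rect_bottom_right": (455.0, 362.0),
        "thumb_pulp_rect_bottom_left": (412.0, 362.0),
    },
}
```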
S120, determining a first central point of the gesture according to the at least two first key position points.
Corresponding to the first key location point of the current gesture, the location of the center point of the current gesture in the current data frame is referred to as a first center point in this embodiment. The position change of the gesture in different data frames can be accurately represented according to the position change of the central point of the same gesture in different data frames. For different gestures, different calculation rules may be employed to determine the center point thereof.
Optionally, the determination of the gesture center point includes:
determining the center of a geometric figure formed by at least two key position points of the gesture in the video data frame as a gesture center point; or
determining the center of a circumscribed figure associated with the geometric figure formed by at least two key position points of the gesture in the video data frame as the gesture center point.
For example, if a plurality of key location points on the acquired gesture constitute a regular geometric figure, the geometric center of the regular geometric figure, for example, the center of a line segment, the geometric center of a rectangle, the center of a circle, and the like, is determined as the gesture center point. If a plurality of key position points on the acquired gesture form an irregular geometric figure, the geometric center of a circumscribed figure associated with the irregular geometric figure, for example, the geometric center of a circumscribed circle or a circumscribed rectangle, can be determined as the gesture center point.
For example, during video shooting the current gesture may be formed by a thumb: the first key position points of the gesture in the current data frame are the four vertices of a rectangle circumscribing the pulp of the thumb, and the first central point of the gesture is the geometric center of that circumscribed rectangle in the current data frame. Alternatively, several first key position points of the current gesture may form a triangle, and the center of the circle circumscribing the triangle is determined as the first central point of the current gesture.
How the gesture center point is determined depends on the specific gesture type; as long as the center point can be determined accurately, the determination mode can be selected flexibly.
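The two center-point rules above can be sketched as follows; this is only an illustration under assumptions (NumPy, the function names, and the mean-of-points shortcut for regular figures are choices of this sketch, not prescribed by the disclosure):

```python
import numpy as np

def center_of_regular_figure(points):
    """Geometric center of a regular figure formed by the key position points:
    the mean of the points, which gives the midpoint of a line segment or the
    center of a rectangle. A circumscribed-circle center (e.g. for a triangle)
    would need its own formula and is not shown here."""
    pts = np.asarray(points, dtype=float)   # shape (N, 2), screen coordinates
    return pts.mean(axis=0)

def center_of_circumscribed_rectangle(points):
    """Center of an axis-aligned rectangle circumscribing the key position points,
    usable when the points form an irregular geometric figure."""
    pts = np.asarray(points, dtype=float)
    return (pts.min(axis=0) + pts.max(axis=0)) / 2.0

# Example: four vertices of the rectangle circumscribing a thumb pulp.
thumb_points = [(412.0, 300.5), (455.0, 300.5), (455.0, 362.0), (412.0, 362.0)]
first_center = center_of_regular_figure(thumb_points)   # array([433.5, 331.25])
```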
S130, determining at least two second key position points of the gesture in the next frame data of the current data frame.
The second key position point is a key position point of the current gesture in the frame data next to the current data frame. The first key position point and the second key position point are the position coordinates of the same part of the gesture in different data frames. As video capture progresses, the user gesture does not always stay in one location; therefore, the corresponding key position points of the same gesture differ to varying degrees between different data frames.
S140, determining a second central point of the gesture according to the at least two second key position points.
The second center point refers to a position of the current gesture center point in data of a frame next to the current data frame, and the second center point of the gesture can be determined by the same method as the method for determining the first center point of the current gesture.
S150, determining whether the gesture shakes in the video shooting process according to the relation between the deviation value between the first central point and the second central point and a preset threshold value.
The preset threshold is a tolerance threshold for deciding whether the user gesture shakes during gesture detection: if the deviation value between the first central point and the second central point of the current gesture does not exceed this tolerance, it is determined that the user gesture does not shake, that is, the slight change in the position of the user gesture still falls within the gesture range that controls the same video effect.
Specifically, determining whether the gesture shakes in the video shooting process according to the relationship between the deviation value between the first central point and the second central point and a preset threshold value, includes:
if the deviation value is smaller than or equal to the preset threshold value, the gesture does not shake in the video shooting process;
if the deviation value is larger than the preset threshold value, the gesture shakes in the video shooting process.
Compared with judging whether the gesture shakes directly from the difference between corresponding key position points of the gesture in different data frames, determining shake based on the gesture center point, as in this embodiment, gives a more accurate result.
For example, consider a gesture composed of fingers and a palm. The position of one finger may change in the frame following the current data frame while the overall position of the gesture does not actually change. Judging directly from the differences between corresponding key position points in the two consecutive frames of data would conclude that the user gesture shakes, whereas under the present scheme the gesture center point in the two consecutive frames does not change even though one finger moved, so the user gesture is determined not to shake. The gesture shake detection scheme of this embodiment is therefore more stable and more accurate.
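Putting S110 to S150 together for one pair of consecutive frames, a minimal sketch could look like the following (the Euclidean deviation measure, the pixel threshold of 10, and all names are assumptions of this sketch, not prescribed by the disclosure):

```python
import numpy as np

def gesture_center(points):
    """Gesture center point; here simply the mean of the key position points
    (the circumscribed-rectangle variant from the earlier sketch could be used instead)."""
    return np.asarray(points, dtype=float).mean(axis=0)

def gesture_shakes(first_key_points, second_key_points, threshold=10.0):
    """Return True if the gesture is considered to shake between the current
    data frame and the next frame data.

    first_key_points  -- key position points of the gesture in the current data frame
    second_key_points -- key position points of the same gesture in the next frame data
    threshold         -- preset tolerance threshold (assumed here to be in pixels)
    """
    first_center = gesture_center(first_key_points)
    second_center = gesture_center(second_key_points)
    deviation = float(np.linalg.norm(second_center - first_center))
    return deviation > threshold   # <= threshold: no shake; > threshold: shake
```

With a center-based deviation like this, the single-finger movement described above barely changes the center point, so the comparison stays below the threshold and no shake is reported.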
According to the technical scheme of this embodiment, based on the preset marking relationship between the gesture and its key position points, the key position points of the user gesture in the current data frame and in the next frame data are obtained during video shooting, the center point of the user gesture in each frame of data is calculated, and whether the user gesture shakes is determined according to the relationship between the deviation value of the two center points and the preset threshold. This solves the prior-art problem that detection of user gesture changes during video shooting is inaccurate, accurately detects whether the user gesture shakes, and ensures the display effect of the video special effect.
Fig. 2 is a schematic flowchart of another video-based gesture shaking detection method provided by an embodiment of the present disclosure; this embodiment expands on the above embodiment and can be combined with its various alternatives. As shown in Fig. 2, the method may include:
s210, at least two first key position points of the gesture in the current data frame in the shooting video are obtained.
S220, determining a first central point of the gesture according to the at least two first key position points.
S230, determining at least two second key position points of the gesture in the next frame data of the current data frame.
S240, determining a second central point of the gesture according to the at least two second key position points.
S250, determining whether the gesture shakes in the video shooting process according to the relation between the deviation value between the first central point and the second central point and a preset threshold value.
S260, triggering a gesture control event corresponding to the determination result in the next frame of data according to the determination result of the gesture jitter.
A gesture control event means triggering a corresponding video effect according to a correspondence between gesture position changes and video effects. This correspondence can be preset when the terminal application is developed, or, based on materials provided by the terminal application, the user can define the corresponding video effect before starting to shoot.
Optionally, triggering a gesture control event corresponding to the determination result in the next frame of data according to the determination result of the gesture jitter includes:
if the gesture does not shake, maintaining the same video effect as that in the current data frame in the next frame data;
and if the gesture shakes, triggering the video effect controlled after the gesture shakes in the next frame data.
Optionally, the shake of the gesture includes a position change of the palm and a position change of the fingers in the gesture composed of different fingers. The gesture jitter may have different manifestations depending on the composition of the gesture.
Example one: if the deviation value between the center points of the same gesture in two consecutive frames of data is smaller than the preset threshold, the gesture does not shake, and the same video effect is maintained in the two consecutive frames of data. This avoids the video effect changing because of a slight change in the gesture position, for example the displayed video effect shaking or jumping along with a slight gesture movement, which the user does not actually wish to trigger.
Example two: when it is determined that the gesture shakes, a change in the position of the palm, a change in the position of a finger in a gesture composed of fingers, or a change in the overall position of a gesture formed by fingers and a palm triggers a corresponding video effect, for example adding to the shot video a video effect in which a balloon sways left and right.
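Purely as an illustrative sketch of S260 (the effect names, the stand-in frame class and its apply_effect interface are assumptions; the disclosure does not define such an API), the gesture control event could be dispatched like this:

```python
def trigger_gesture_control_event(gesture_shook, current_effect, shake_effect, next_frame):
    """Trigger the gesture control event corresponding to the shake determination.

    gesture_shook  -- result of the shake determination for this pair of frames
    current_effect -- video effect already displayed in the current data frame
    shake_effect   -- video effect controlled after the gesture shakes
    next_frame     -- next frame data; assumed to expose an apply_effect() method
    """
    if not gesture_shook:
        # No shake: maintain in the next frame data the same video effect
        # as in the current data frame.
        next_frame.apply_effect(current_effect)
        return current_effect
    # Shake: trigger the video effect controlled after the gesture shakes,
    # e.g. a balloon swaying left and right in the shot video.
    next_frame.apply_effect(shake_effect)
    return shake_effect

class _FrameStub:
    """Minimal stand-in for the next frame data, used only for this sketch."""
    def apply_effect(self, effect):
        print(f"applying effect: {effect}")

# Example: keep the current effect because no shake was detected.
trigger_gesture_control_event(False, "small_flame", "balloon_sway", _FrameStub())
```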
According to this technical scheme, whether the user gesture shakes is determined from the relationship between the deviation value of the center points of the same gesture in different data frames and the preset threshold, and the gesture control event corresponding to the determination result is triggered in the next frame of data. This solves the prior-art problem that detection of user gesture changes during video shooting is inaccurate, accurately detects whether the user gesture shakes, ensures the display effect of the video special effect, and avoids displaying video effects that the user does not want to trigger.
Fig. 3 is a schematic structural diagram of a video-based gesture shaking detection apparatus according to an embodiment of the present disclosure, which is applicable to detecting whether a user gesture shakes during a video shooting process. The device can be realized by adopting a software and/or hardware mode, and can be integrated on any terminal with a network communication function.
As shown in fig. 3, the video-based gesture shaking detection apparatus provided in the embodiment of the present disclosure includes a first key location point obtaining module 310, a first central point determining module 320, a second key location point obtaining module 330, a second central point determining module 340, and a gesture shaking result determining module 350, where:
a first key position point obtaining module 310, configured to obtain at least two first key position points of a gesture in a current data frame in a captured video;
a first center point determining module 320, configured to determine a first center point of the gesture according to the at least two first key location points;
a second key location point obtaining module 330, configured to determine at least two second key location points of the gesture in data of a next frame of the current data frame;
the second central point determining module 340 is configured to determine a second central point of the gesture according to the at least two second key location points;
and a gesture shaking result determining module 350, configured to determine whether the gesture shakes in the video shooting process according to a relationship between the deviation value between the first central point and the second central point and a preset threshold.
Optionally, the gesture shaking detection apparatus further includes:
and the gesture control event triggering module is used for triggering a gesture control event corresponding to the determination result in the next frame data according to the determination result of the gesture jitter.
Optionally, the gesture shaking result determining module 350 is specifically configured to:
if the deviation value is smaller than or equal to the preset threshold value, the gesture does not shake in the video shooting process;
if the deviation value is larger than the preset threshold value, the gesture shakes in the video shooting process.
Optionally, the gesture control event triggering module is specifically configured to:
if the gesture does not shake, maintaining the same video effect as that in the current data frame in the next frame data;
and if the gesture shakes, triggering the video effect controlled after the gesture shakes in the next frame data.
Optionally, the shake of the gesture includes a position change of the palm and a position change of the fingers in the gesture composed of different fingers.
Optionally, the first central point determining module 320 or the second central point determining module 340 is specifically configured to:
determining the center of a geometric figure formed by at least two key position points of the gesture in the video data frame as a gesture center point; or
determining the center of a circumscribed figure associated with the geometric figure formed by at least two key position points of the gesture in the video data frame as the gesture center point.
The video-based gesture shake detection device can execute the video-based gesture shake detection method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
Fig. 4 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present disclosure. Referring now to fig. 4, a block diagram of a terminal 400 suitable for use in implementing embodiments of the present disclosure is shown. The terminal in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The terminal shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the terminal 400 may include one or more processing devices (e.g., central processing units, graphics processors, etc.) 401, and a storage device 408 for storing one or more programs. Among other things, the processing device 401 may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the terminal 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the terminal 400 to communicate with other devices, either wirelessly or by wire, for exchanging data. While fig. 4 illustrates a terminal 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the terminal; or may exist separately and not be assembled into the terminal.
The computer readable medium carries one or more programs which, when executed by the terminal, cause the terminal to: acquiring at least two first key position points of a gesture in a current data frame in a shooting video; determining a first central point of the gesture according to the at least two first key position points; determining at least two second key location points of the gesture in data next to the current data frame; determining a second central point of the gesture according to the at least two second key position points; and determining whether the gesture shakes in the video shooting process according to the relation between the deviation value between the first central point and the second central point and a preset threshold value.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure are also encompassed.

Claims (8)

1. A video-based gesture shake detection method is characterized by comprising the following steps:
acquiring at least two first key position points of a gesture in a current data frame in a shooting video;
determining a first central point of the gesture according to the at least two first key position points;
determining at least two second key location points of the gesture in data next to the current data frame;
determining a second central point of the gesture according to the at least two second key position points;
determining whether the gesture shakes in the video shooting process according to the relation between the deviation value between the first central point and the second central point and a preset threshold value;
if the gesture does not shake, maintaining the same video effect as that in the current data frame in the next frame data;
and if the gesture shakes, triggering a video effect controlled after the gesture shakes in the next frame of data, wherein the video effect is a video effect corresponding to the position change of the gesture.
2. The method of claim 1, wherein determining whether the gesture shakes during the video shooting process according to a relationship between a deviation value between the first central point and the second central point and a preset threshold comprises:
if the deviation value is smaller than or equal to the preset threshold value, the gesture does not shake in the video shooting process;
and if the deviation value is larger than the preset threshold value, the gesture shakes in the video shooting process.
3. The method according to any one of claims 1-2, wherein the determination of the gesture center point comprises:
if the plurality of key position points on the acquired gesture form a regular geometric figure, determining the center of the geometric figure formed by at least two key position points of the gesture in the video data frame as a gesture center point; or
if the plurality of key position points on the acquired gesture form an irregular geometric figure, determining the center of a circumscribed figure associated with the geometric figure formed by at least two key position points of the gesture in the video data frame as the gesture center point.
4. A video-based gesture shake detection apparatus, comprising:
the first key position point acquisition module is used for acquiring at least two first key position points of a gesture in a current data frame in a shooting video;
the first central point determining module is used for determining a first central point of the gesture according to the at least two first key position points;
the second key position point acquisition module is used for determining at least two second key position points of the gesture in the next frame data of the current data frame;
the second central point determining module is used for determining a second central point of the gesture according to the at least two second key position points;
the gesture shaking result determining module is used for determining whether the gesture shakes in the video shooting process according to the relation between the deviation value between the first central point and the second central point and a preset threshold value;
the gesture control event triggering module is used for maintaining the same video effect as that in the current data frame in the next frame data if the gesture does not shake;
and if the gesture shakes, triggering a video effect controlled after the gesture shakes in the next frame of data, wherein the video effect controlled after the gesture shakes is a video effect corresponding to the position change of the gesture.
5. The apparatus of claim 4, wherein the gesture shaking result determination module is specifically configured to:
if the deviation value is smaller than or equal to the preset threshold value, the gesture does not shake in the video shooting process;
and if the deviation value is larger than the preset threshold value, the gesture shakes in the video shooting process.
6. The apparatus according to any of claims 4-5, wherein the first or second centroid determining module is specifically configured to:
if the plurality of key position points on the acquired gesture form a regular geometric figure, determining the center of the geometric figure formed by at least two key position points of the gesture in the video data frame as a gesture center point; or
if the plurality of key position points on the acquired gesture form an irregular geometric figure, determining the center of a circumscribed figure associated with the geometric figure formed by at least two key position points of the gesture in the video data frame as the gesture center point.
7. A terminal, comprising:
one or more processing devices;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the video-based gesture shake detection method according to any one of claims 1-3.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processing device, carries out a video-based gesture shake detection method according to any one of claims 1-3.
CN201811458369.1A 2018-11-30 2018-11-30 Video-based gesture jitter detection method, device, terminal and medium Active CN111263084B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811458369.1A CN111263084B (en) 2018-11-30 2018-11-30 Video-based gesture jitter detection method, device, terminal and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811458369.1A CN111263084B (en) 2018-11-30 2018-11-30 Video-based gesture jitter detection method, device, terminal and medium

Publications (2)

Publication Number Publication Date
CN111263084A CN111263084A (en) 2020-06-09
CN111263084B true CN111263084B (en) 2021-02-05

Family

ID=70944821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811458369.1A Active CN111263084B (en) 2018-11-30 2018-11-30 Video-based gesture jitter detection method, device, terminal and medium

Country Status (1)

Country Link
CN (1) CN111263084B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114694269A (en) * 2022-02-28 2022-07-01 江西中业智能科技有限公司 Human behavior monitoring method, system and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107589850A (en) * 2017-09-26 2018-01-16 深圳睛灵科技有限公司 A kind of recognition methods of gesture moving direction and system
CN108197596A (en) * 2018-01-24 2018-06-22 京东方科技集团股份有限公司 A kind of gesture identification method and device
CN108255285A (en) * 2016-12-29 2018-07-06 广州映博智能科技有限公司 It is a kind of based on the motion gesture detection method that detection is put between the palm
CN108446657A (en) * 2018-03-28 2018-08-24 京东方科技集团股份有限公司 Gesture shakes recognition methods and device, gesture identification method
CN108882025A (en) * 2018-08-07 2018-11-23 北京字节跳动网络技术有限公司 Video frame treating method and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140282275A1 (en) * 2013-03-15 2014-09-18 Qualcomm Incorporated Detection of a zooming gesture

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108255285A (en) * 2016-12-29 2018-07-06 广州映博智能科技有限公司 It is a kind of based on the motion gesture detection method that detection is put between the palm
CN107589850A (en) * 2017-09-26 2018-01-16 深圳睛灵科技有限公司 A kind of recognition methods of gesture moving direction and system
CN108197596A (en) * 2018-01-24 2018-06-22 京东方科技集团股份有限公司 A kind of gesture identification method and device
CN108446657A (en) * 2018-03-28 2018-08-24 京东方科技集团股份有限公司 Gesture shakes recognition methods and device, gesture identification method
CN108882025A (en) * 2018-08-07 2018-11-23 北京字节跳动网络技术有限公司 Video frame treating method and apparatus

Also Published As

Publication number Publication date
CN111263084A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
JP7181375B2 (en) Target object motion recognition method, device and electronic device
CN109542304B (en) Display content loading method, device, equipment and storage medium
CN111177137B (en) Method, device, equipment and storage medium for data deduplication
US20180283873A1 (en) Calibration method based on dead reckoning technology and portable electronic device
CN113721807B (en) Information display method and device, electronic equipment and storage medium
CN112306235B (en) Gesture operation method, device, equipment and storage medium
CN112667118A (en) Method, apparatus and computer readable medium for displaying historical chat messages
US9665232B2 (en) Information-processing device, storage medium, information-processing method, and information-processing system for enlarging or reducing an image displayed on a display device
CN111263084B (en) Video-based gesture jitter detection method, device, terminal and medium
WO2023186009A1 (en) Step counting method and apparatus
CN111597797A (en) Method, device, equipment and medium for editing social circle message
CN108874141B (en) Somatosensory browsing method and device
CN107977147B (en) Sliding track display method and device
US8694509B2 (en) Method and apparatus for managing for handwritten memo data
EP4170588A2 (en) Video photographing method and apparatus, and device and storage medium
CN110618776B (en) Picture scaling method, device, equipment and storage medium
CN110807164B (en) Automatic image area adjusting method and device, electronic equipment and computer readable storage medium
CN110807728B (en) Object display method and device, electronic equipment and computer-readable storage medium
CN110035231B (en) Shooting method, device, equipment and medium
CN111435442B (en) Character selection method and device, point reading equipment, electronic equipment and storage medium
CN109472873B (en) Three-dimensional model generation method, device and hardware device
CN111258415B (en) Video-based limb movement detection method, device, terminal and medium
CN111259694B (en) Gesture moving direction identification method, device, terminal and medium based on video
CN111813473A (en) Screen capturing method and device and electronic equipment
CN110070600B (en) Three-dimensional model generation method, device and hardware device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant