CN108121963A - Method, apparatus and computing device for processing video data - Google Patents
Method, apparatus and computing device for processing video data
- Publication number
- CN108121963A CN108121963A CN201711394200.XA CN201711394200A CN108121963A CN 108121963 A CN108121963 A CN 108121963A CN 201711394200 A CN201711394200 A CN 201711394200A CN 108121963 A CN108121963 A CN 108121963A
- Authority
- CN
- China
- Prior art keywords
- data
- combined action
- human region
- video data
- data set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
- G06F16/784—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Human Computer Interaction (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method, apparatus and computing device for processing video data. The method includes: performing human body segmentation on multiple image frames in the video data to obtain multiple human region data corresponding to the multiple image frames; comparing the multiple human region data respectively with multiple body-sensing action data contained in a preset combined action data set; when the comparison result is determined to meet a preset matching rule, obtaining the combined action processing rule corresponding to the combined action data set that matches the multiple human region data; and processing the video data according to the combined action processing rule and displaying the processed video data. This approach can quickly and accurately capture body-sensing actions of a human body and process the video data driven by those actions. The capture does not depend on video shot by high-precision, high-depth cameras, so the method is suitable for any mobile terminal with a camera, has strong resistance to infrared interference, and is low in cost.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a method, apparatus and computing device for processing video data.
Background technology
With the development of science and technology, advanced human-computer interaction theory places ever higher requirements on interaction modes. In a body-sensing interaction mode, for example, people can interact with surrounding devices or environments directly through limb movements, without any complicated control device, so that the interaction feels immersive.

However, the inventors found in the course of implementing the present invention that body-sensing interaction modes in the prior art usually need to capture the user's body-sensing actions precisely, for example by locating the joints of the human body to determine the user's action. Furthermore, prior-art body-sensing interaction modes often rely on high-precision, high-depth cameras to predict the user's actions; such cameras are costly and can only be used in the absence of strong infrared interference, so interaction modes based on them are difficult to popularize on mobile terminals. In addition, body-sensing action capture based on RGB images generally requires a very large amount of computation. It can be seen that the prior art lacks a method that can solve the above problems well.
The content of the invention
In view of the above problems, the present invention is proposed to provide a method, apparatus and computing device for processing video data that overcome, or at least partly solve, the above problems.
According to one aspect of the invention, a method for processing video data is provided, including: performing human body segmentation on multiple image frames in the video data to obtain multiple human region data corresponding to the multiple image frames; comparing the multiple human region data respectively with multiple body-sensing action data contained in a preset combined action data set; when the comparison result is determined to meet a preset matching rule, obtaining the combined action processing rule corresponding to the combined action data set that matches the multiple human region data; and processing the video data according to the combined action processing rule and displaying the processed video data.
Optionally, the preset combined action data set includes multiple combined action data sets stored in a preset body-sensing action library, and each combined action data set contains at least two body-sensing action data.

The step of comparing the multiple human region data respectively with the multiple body-sensing action data contained in the preset combined action data set then specifically includes: comparing the multiple human region data respectively with the multiple body-sensing action data contained in each combined action data set stored in the body-sensing action library.
Optionally, the preset matching rule includes: when M human region data contained in the multiple human region data respectively match M body-sensing action data contained in a combined action data set to be compared, determining that the multiple human region data and the combined action data set to be compared meet the matching rule; wherein the total number of the multiple human region data is greater than or equal to M, the total number of body-sensing action data contained in the combined action data set to be compared is greater than or equal to M, and M is a natural number greater than 1.
Optionally, each body-sensing action data contained in the combined action data set to be compared carries a time sequence number identifier. The step of matching the M human region data contained in the multiple human region data respectively with the M body-sensing action data contained in the combined action data set to be compared then specifically includes: judging whether the order of appearance, in the video data, of the M human region data contained in the multiple human region data matches the time sequence number identifiers of the M body-sensing action data contained in the combined action data set to be compared; if so, determining that the M human region data contained in the multiple human region data respectively match the M body-sensing action data contained in the combined action data set to be compared.
Optionally, the step of performing human body segmentation on the multiple image frames in the video data to obtain the multiple human region data corresponding to the multiple image frames specifically includes: according to the order of appearance of each image frame in the video data, acquiring in real time the currently pending image frame contained in the video data, and performing human body segmentation on the currently pending image frame to obtain the human region data corresponding to the currently pending image frame.
Optionally, the step of comparing the multiple human region data respectively with the multiple body-sensing action data contained in the preset combined action data set specifically includes:

comparing the human region data corresponding to the currently pending image frame respectively with the multiple body-sensing action data contained in each combined action data set;

determining a body-sensing action data whose comparison result is successful as first action data, and determining the combined action data set to which the first action data belongs as a first action data set;

comparing the human region data corresponding to the N image frames following the currently pending image frame with each body-sensing action data contained in the first action data set, wherein N is a natural number greater than or equal to 1.
Optionally, the step of obtaining the combined action processing rule corresponding to the combined action data set that matches the multiple human region data specifically includes: determining, according to a preset combined action processing library, the combined action processing rule corresponding to the combined action data set that matches the multiple human region data; wherein the combined action processing library is used to store the combined action processing rule corresponding to each combined action data set.
Optionally, the combined action processing rule includes: processing the video data according to the effect texture corresponding to the combined action data set.
Optionally, the step of processing the video data according to the combined action processing rule specifically includes: processing the currently pending image frame and/or the L image frames following the currently pending image frame, wherein L is a natural number greater than 1.
Optionally, the video data includes: video data captured in real time by an image capture device, and/or video data contained in a human-computer interaction game.
According to another aspect of the present invention, an apparatus for processing video data is provided, including: a segmentation module, adapted to perform human body segmentation on multiple image frames in the video data to obtain multiple human region data corresponding to the multiple image frames; a comparison module, adapted to compare the multiple human region data respectively with multiple body-sensing action data contained in a preset combined action data set; a processing rule acquisition module, adapted to obtain, when the comparison result is determined to meet a preset matching rule, the combined action processing rule corresponding to the combined action data set that matches the multiple human region data; a processing module, adapted to process the video data according to the combined action processing rule; and a display module, adapted to display the processed video data.
Optionally, the preset combined action data set includes multiple combined action data sets stored in a preset body-sensing action library, and each combined action data set contains at least two body-sensing action data.

The comparison module is further adapted to compare the multiple human region data respectively with the multiple body-sensing action data contained in each combined action data set stored in the body-sensing action library.
Optionally, the preset matching rule includes: when M human region data contained in the multiple human region data respectively match M body-sensing action data contained in a combined action data set to be compared, determining that the multiple human region data and the combined action data set to be compared meet the matching rule; wherein the total number of the multiple human region data is greater than or equal to M, the total number of body-sensing action data contained in the combined action data set to be compared is greater than or equal to M, and M is a natural number greater than 1.
Optionally, each body-sensing action data contained in the combined action data set to be compared carries a time sequence number identifier, and the comparison module is further adapted to: judge whether the order of appearance, in the video data, of the M human region data contained in the multiple human region data matches the time sequence number identifiers of the M body-sensing action data contained in the combined action data set to be compared; and if so, determine that the M human region data contained in the multiple human region data respectively match the M body-sensing action data contained in the combined action data set to be compared.
Optionally, the segmentation module is further adapted to: according to the order of appearance of each image frame in the video data, acquire in real time the currently pending image frame contained in the video data, and perform human body segmentation on the currently pending image frame to obtain the human region data corresponding to the currently pending image frame.
Optionally, the comparison module is further adapted to: compare the human region data corresponding to the currently pending image frame respectively with the multiple body-sensing action data contained in each combined action data set; determine a body-sensing action data whose comparison result is successful as first action data, and determine the combined action data set to which the first action data belongs as a first action data set; and compare the human region data corresponding to the N image frames following the currently pending image frame with each body-sensing action data contained in the first action data set, wherein N is a natural number greater than or equal to 1.
Optionally, the processing rule acquisition module is further adapted to: determine, according to a preset combined action processing library, the combined action processing rule corresponding to the combined action data set that matches the multiple human region data; wherein the combined action processing library is used to store the combined action processing rule corresponding to each combined action data set.
Optionally, the combined action processing rule includes: processing the video data according to the effect texture corresponding to the combined action data set.
Optionally, the processing module is further adapted to: process the currently pending image frame and/or the L image frames following the currently pending image frame, wherein L is a natural number greater than 1.
Optionally, the video data includes: video data captured in real time by an image capture device, and/or video data contained in a human-computer interaction game.
According to another aspect of the invention, a computing device is provided, including a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other via the communication bus; the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above method for processing video data.

According to yet another aspect of the invention, a computer storage medium is provided, in which at least one executable instruction is stored, and the executable instruction causes a processor to perform the operations corresponding to the above method for processing video data.
According to the method, apparatus and computing device for processing video data provided by the present invention, body-sensing actions of a human body can be captured quickly and accurately, and the video data can be processed with those actions as the driver. The capture does not depend on video shot by high-precision, high-depth cameras, so the method is suitable for any mobile terminal with a camera, has strong resistance to infrared interference, and is low in cost. A human-computer interaction mode driven by body-sensing actions and based on human region segmentation is provided, which can quickly determine the processing rule for the video data according to the body-sensing actions and display the processed video data, thereby improving the display effect of the video data.

The above description is only an overview of the technical solution of the present invention. In order to understand the technical means of the present invention more clearly so that it can be implemented according to the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more comprehensible, specific embodiments of the present invention are set forth below.
Description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become apparent to those of ordinary skill in the art. The accompanying drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a flow chart of the method for processing video data according to an embodiment of the present invention;
Fig. 2 shows a flow chart of the method for processing video data according to another embodiment of the present invention;
Fig. 3 shows a schematic flow diagram of the sub-steps included in step S220;
Fig. 4 shows a schematic structural diagram of the apparatus for processing video data according to a further embodiment of the present invention;
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the present invention.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope conveyed completely to those skilled in the art.
Fig. 1 shows a flow chart of the method for processing video data according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:

Step S110: perform human body segmentation on multiple image frames in the video data to obtain multiple human region data corresponding to the multiple image frames.
The video data may be real-time video data shot by a camera, video data previously recorded by a camera and stored locally or in the cloud, or video data composed of multiple pictures. The multiple image frames may be consecutive image frames, or image frames spaced at a preset time interval in the video data; the present invention does not limit the specific form or source of the video data.

Human body segmentation of the multiple image frames may specifically be implemented as follows. First, the human region in each image frame is detected; specifically, the pixels contained in each image frame may be classified to determine the human region in each image frame. Then, the human region is segmented from the corresponding image frame; specifically, the pixels corresponding to the human region may be separated out, and multiple human region data corresponding to each image frame are thereby obtained.
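The segmentation step above — classify each pixel, then collect the human-region pixels together with their coordinates — can be sketched as follows. This is a minimal illustration only: the per-pixel classifier is a stand-in for whatever classification model is actually used, and all names are illustrative.

```python
def segment_human_region(frame, is_human_pixel):
    """Return the human region data for one frame: a list of
    ((row, col), pixel) pairs for every pixel classified as human."""
    region = []
    for r, row in enumerate(frame):
        for c, pixel in enumerate(row):
            if is_human_pixel(pixel):  # stand-in for the real classifier
                region.append(((r, c), pixel))
    return region

# Toy 2x3 frame: 0 = background, 1 = human
frame = [[0, 1, 0],
         [1, 1, 0]]
data = segment_human_region(frame, lambda p: p == 1)
# data -> [((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

The returned pairs correspond to the "pixels and coordinate positions of pixels" that the human region data is said to contain.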
Step S120: compare the multiple human region data respectively with multiple body-sensing action data contained in a preset combined action data set.

In the method of this embodiment, the processing of the video data is triggered by a combination of body-sensing actions, so it is necessary to judge whether the multiple human region data meet the trigger condition; the multiple body-sensing action data contained in the preset combined action data set serve as the basis for this judgment. The human region data may include the pixels contained in the human region and the coordinate positions of those pixels. This step may specifically judge whether the multiple human region data are respectively consistent with the multiple body-sensing action data, or whether the matching degree between the multiple human region data and the multiple body-sensing action data exceeds a preset matching degree threshold. For example, a preset "Eighteen Dragon-Subduing Palms" combined action data set contains multiple body-sensing action data, and the multiple human region data are compared with those body-sensing action data respectively.
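One way to realize the matching-degree comparison just described is to measure how much of an action template's pixel coordinates the segmented human region covers, and test that ratio against the preset threshold. The patent does not specify the measure, so this is a hedged sketch with an assumed overlap-ratio metric and illustrative data:

```python
def matching_degree(region, template):
    """Fraction of the template's pixel coordinates that the segmented
    human region also covers (an assumed stand-in for the patent's
    unspecified matching-degree measure)."""
    region_coords = {coord for coord, _ in region}
    hits = sum(1 for coord in template if coord in region_coords)
    return hits / len(template)

def matches(region, template, threshold=0.8):
    """Comparison succeeds when the matching degree reaches the
    preset matching-degree threshold."""
    return matching_degree(region, template) >= threshold

# Human region data: ((row, col), pixel) pairs; template: pose coordinates.
region = [((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
template = [(0, 1), (1, 1)]
# matching_degree(region, template) -> 1.0, so matches(...) -> True
```

The threshold would be tuned per scenario, as discussed for the preset matching rule below.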
Step S130: when the comparison result is determined to meet a preset matching rule, obtain the combined action processing rule corresponding to the combined action data set that matches the multiple human region data.

The preset matching rule can be configured for the specific application scenario. For example, in game or live-streaming scenarios with strong real-time and interactivity requirements, the preset matching rule may be considered met even at a relatively low matching degree between the multiple human region data and the multiple body-sensing action data; in post-processing scenarios for video data, the preset matching rule may be considered met only at a relatively high matching degree. In a specific application, those skilled in the art can configure this according to actual needs. When it is judged according to the preset matching rule that the multiple human region data match a combined action data set, the combined action processing rule corresponding to that combined action data set is obtained. Following the example above, if the multiple human region data match the "Eighteen Dragon-Subduing Palms" action data set, the combined action processing rule corresponding to that data set is obtained.
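The rule lookup in step S130 amounts to a mapping from combined action data sets to their processing rules, which the combined action processing library stores. A minimal sketch of such a library follows; the set names and rule contents are purely illustrative assumptions:

```python
# Hypothetical combined action processing library: maps each combined
# action data set (identified by name here) to its processing rule.
PROCESSING_LIBRARY = {
    "dragon_palms": {"effect": "palm_flame_overlay"},
    "whip_crack":   {"effect": "motion_blur"},
}

def get_processing_rule(matched_set_name):
    """Return the combined action processing rule for the matched
    combined action data set, or None if no rule is stored."""
    return PROCESSING_LIBRARY.get(matched_set_name)

# get_processing_rule("dragon_palms") -> {"effect": "palm_flame_overlay"}
```

In practice the keys would be the combined action data sets themselves (or their identifiers), and the rules could carry effect textures or any other processing parameters.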
Step S140: process the video data according to the combined action processing rule, and display the processed video data.

Processing the video data specifically means processing the image frames contained in the video data. The combined action processing rule may be any kind of processing rule, such as a rule for adding a special effect. For example, each image frame contained in the video data is processed according to the combined action processing rule corresponding to the "Eighteen Dragon-Subduing Palms" action data set, and the processed video data is displayed, so that the displayed video data contains the "Eighteen Dragon-Subduing Palms" special effect. The present invention does not limit the specific rules of video processing, as long as the display effect of the video can be improved.
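Applying a processing rule to the current frame and the L frames after it, as the optional clause above describes, can be sketched like this. The "effect" is a toy pixel-tagging operation standing in for a real effect texture; names and data are illustrative:

```python
def apply_effect(frames, rule, start, length):
    """Apply a toy effect to `length` frames beginning at `start`,
    mirroring the processing of the current frame and the L frames
    after it. Frames are plain lists of pixel values here."""
    effect = rule["effect"]
    out = []
    for i, frame in enumerate(frames):
        if start <= i < start + length:
            # Tag each pixel as a stand-in for compositing an effect texture.
            out.append([f"{effect}:{px}" for px in frame])
        else:
            out.append(list(frame))
    return out

frames = [[1, 2], [3, 4], [5, 6]]
processed = apply_effect(frames, {"effect": "glow"}, start=1, length=2)
# processed -> [[1, 2], ['glow:3', 'glow:4'], ['glow:5', 'glow:6']]
```

A real implementation would composite the effect texture over the frame pixels instead of tagging them.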
According to the video data processing method provided in this embodiment, human body segmentation is performed on multiple image frames in the video data to obtain multiple human region data corresponding to the multiple image frames; the multiple human region data are compared respectively with multiple body-sensing action data contained in a preset combined action data set; when the comparison result is determined to meet the preset matching rule, the combined action processing rule corresponding to the combined action data set that matches the multiple human region data is obtained; and the video data is processed according to the combined action processing rule and the processed video data is displayed. This approach can quickly and accurately capture body-sensing actions of a human body and process the video data driven by those actions; the capture does not depend on video shot by high-precision, high-depth cameras, so the method is suitable for any mobile terminal with a camera, has strong resistance to infrared interference, and is low in cost.
Fig. 2 shows a flow chart of the method for processing video data according to another embodiment of the present invention. As shown in Fig. 2, the method includes:

Step S210: perform human body segmentation on multiple image frames in the video data to obtain multiple human region data corresponding to the multiple image frames.

Specifically, according to the order of appearance of each image frame in the video data, the currently pending image frame contained in the video data is acquired in real time, and human body segmentation is performed on the currently pending image frame to obtain the human region data corresponding to the currently pending image frame.
The video data may be video data shot by a camera, and the currently pending image frame contained in the video data is acquired in real time according to the order in which the image frames appear in the video data. Since the method of this embodiment triggers the processing of the video data according to multiple body-sensing actions, multiple image frames contained in the video data need to be acquired and processed. The video data may also be video data recorded in advance; in this case the method performs post-processing on the video data, and each image frame contained in a specified time period of the video data may be determined in turn, in chronological order, as the currently pending image frame. The currently pending image frame may also be determined by a detection algorithm: specifically, the image frames containing a human region are detected by the detection algorithm, and that frame and the subsequent frames containing a human region are determined in turn as the currently pending image frame. The video data may also include video data captured in real time by an image capture device and/or video data contained in a human-computer interaction game, for example video data collected in real time by a vision device in a live-streaming scenario or a somatosensory game interaction scenario; in this case each frame contained in the video data is determined in turn, in time order, as the currently pending image frame. The present invention does not limit this.
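The frame acquisition just described is an ordered iteration over the stream, optionally filtered by the detection algorithm so that only frames containing a human region become pending. A minimal sketch, with a plain iterable standing in for a camera or decoder and a toy detection predicate:

```python
def pending_frames(frame_source, contains_human=None):
    """Yield (index, frame) pairs in order of appearance, so each frame
    becomes the 'currently pending image frame' in turn. If a detection
    predicate is given, only frames containing a human region qualify
    (the detection-algorithm variant described above)."""
    for index, frame in enumerate(frame_source):
        if contains_human is None or contains_human(frame):
            yield index, frame

# Stand-in for camera / decoder output with a toy human-presence flag.
stream = [{"human": False}, {"human": True}, {"human": True}]
picked = [i for i, _ in pending_frames(stream, lambda f: f["human"])]
# picked -> [1, 2]
```

Without the predicate, every frame is processed in time order, matching the live-streaming and somatosensory-game cases.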
Human body segmentation of the currently pending image frame may specifically be implemented as follows. First, the human region in the currently pending image frame is detected; specifically, the human region contained in the currently pending image frame may be detected by a neural network algorithm. The neural network algorithm can continuously learn the features of human regions through deep learning and the like, and detect the human region contained in the currently pending image frame according to the learning result. Then, the detected human region is segmented from the currently pending image frame; specifically, the pixels corresponding to the human region may be separated out, and multiple human region data corresponding to each image frame are obtained, where the human region data include the pixels corresponding to the human region, the position information of the pixels, the color information of the pixels, and other information.

The above way of detecting the human region contained in the currently pending image frame by a neural network algorithm is a detection approach. In addition to the detection approach, this step can also be combined with a tracking approach implemented by a tracking algorithm to perform human body segmentation on the currently pending image frame. Specifically, after the human region in the currently pending image frame is detected by the detection approach, the position information of the human region is supplied to a tracker, and the tracker tracks the human region in subsequent image frames according to the position of the human region in the currently pending image frame. Since, under usual circumstances, the same region in consecutive image frames of video data is correlated, the tracking approach can accelerate the detection of subsequent image frames. Moreover, the tracker can also supply the tracking result to the detector used for detection, so that the detector determines a local region of the whole frame as the detection range and only detects within that range, thereby improving detection efficiency. In short, the combined use of the detection approach and the tracking approach can improve both the efficiency and the precision of detection.
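The detect-then-track loop above — detect once, then let the last known position restrict the detector's search range in subsequent frames, falling back to full-frame detection if the target is lost — might be sketched like this. The detector is a toy stand-in (it finds the value 1 in a 1-D frame); a real system would use a neural-network detector and a visual tracker:

```python
def detect(frame, search_window=None):
    """Toy detector: find the value 1 (the 'human') in a 1-D frame,
    optionally restricted to a (start, end) search window."""
    lo, hi = search_window if search_window else (0, len(frame))
    for i in range(lo, min(hi, len(frame))):
        if frame[i] == 1:
            return i
    return None

def track_video(frames, margin=1):
    """Detect in the first frame, then use the last known position to
    restrict detection in later frames (the tracking approach); fall
    back to full-frame detection when the restricted search fails."""
    positions, window = [], None
    for frame in frames:
        pos = detect(frame, window)
        if pos is None:                  # lost: full-frame detection again
            pos = detect(frame)
        positions.append(pos)
        if pos is not None:
            window = (max(0, pos - margin), pos + margin + 1)
    return positions

frames = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# track_video(frames) -> [1, 2, 3]
```

The restricted window is what makes the combined scheme faster than running full-frame detection on every frame.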
Step S220, by multiple human region data multiple body-sensings with being included in default combinative movement data set respectively
Action data is compared.
The preset combinative movement data sets include multiple combinative movement data sets stored in a preset body-sensing maneuver library, and each combinative movement data set contains at least two body-sensing action data. The step of comparing the multiple human region data respectively with the multiple body-sensing action data contained in the preset combinative movement data sets then specifically includes: comparing the multiple human region data respectively with the multiple body-sensing action data contained in each combinative movement data set stored in the body-sensing maneuver library.
The body-sensing maneuver library is set in advance. Since the method of this embodiment triggers the operation of processing the video data according to a set of consecutive detected body-sensing actions, rather than triggering that operation by a single body-sensing action, in this embodiment at least two body-sensing action data are determined as one combinative movement data set, and the association between the combinative movement data set and its corresponding at least two body-sensing action data is stored in the body-sensing maneuver library. After the multiple human region data are segmented from the image frames, the multiple human region data are compared respectively with the multiple body-sensing action data, so as to determine the combinative movement data set corresponding to the multiple human region data.
The multiple body-sensing action data contained in the preset combinative movement data sets each carry a time sequence number identifier. For example, a combinative movement data set contains the two body-sensing action data "raise the right hand" and "lower the right hand": raising the right hand and then lowering it corresponds to a "beat the ground" combinative movement data set, while lowering the right hand and then raising it corresponds to a "swing the whip" combinative movement data set. It follows that different combinative movement data sets may contain the same body-sensing action data, and setting a time sequence number identifier for each body-sensing action data makes it possible to distinguish the combinative movement data sets.
Specifically, in this embodiment, this step further includes multiple sub-steps. Fig. 3 shows a schematic flowchart of the sub-steps included in step S220. As shown in Fig. 3, step S220 specifically includes:
Sub-step S221: comparing the human region data corresponding to the currently pending image frame respectively with the multiple body-sensing action data contained in each combinative movement data set.
For the human region data segmented from the currently pending image frame, the human region data are compared with each body-sensing action data respectively. Specifically, the contour and/or area of the human region may be determined from the pixel information contained in the human region data, and the contour and/or area of the human region is compared with the contour and/or area of the human region corresponding to each body-sensing action data contained in each combinative movement data set. In addition, to improve matching efficiency, the human region data corresponding to the currently pending image frame may be compared only with the first body-sensing action data of each combinative movement data set, or only with the several body-sensing action data whose order is foremost in each combinative movement data set.
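The contour/area comparison can be reduced to a small predicate. This is a sketch under stated assumptions: regions are represented here as a (contour signature, area) pair, and the similarity measure and both thresholds are placeholders, not values from the patent.

```python
# Sketch of the contour/area comparison in sub-step S221.

def regions_match(region, template,
                  contour_threshold=0.8, area_threshold=500.0):
    """Compare a segmented human region against the region stored for one
    body-sensing action data. Both are (contour_signature, area) pairs."""
    contour_a, area_a = region
    contour_b, area_b = template
    # Contour matching degree: fraction of agreeing signature entries
    # (an assumed metric standing in for real shape matching).
    matches = sum(1 for a, b in zip(contour_a, contour_b) if a == b)
    similarity = matches / max(len(contour_a), len(contour_b))
    # Area criterion: the difference must not exceed the preset threshold.
    area_close = abs(area_a - area_b) <= area_threshold
    return similarity >= contour_threshold and area_close
```

A real system would likely use a proper shape-matching routine over pixel masks; the predicate structure — matching degree above one threshold, area difference below another — is the part taken from the passage.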
Sub-step S222: determining a body-sensing action data whose comparison result is successful as first action data, and determining the combinative movement data set containing the first action data as a first action data set.
According to the comparison of sub-step S221, in this embodiment, if the contour of the human region corresponding to the currently pending image frame is consistent with the contour of the human region corresponding to a body-sensing action data, or their matching degree exceeds a preset contour matching threshold, and/or the difference between the area of the human region corresponding to the currently pending image frame and the area corresponding to a body-sensing action data is zero or less than a preset difference threshold, then the comparison result between the human region data of the currently pending image frame and that body-sensing action data is regarded as successful. The body-sensing action data whose comparison result is successful is determined as the first action data, and the combinative movement data set containing the first action data is determined as the first action data set.
Sub-step S223: comparing the human region data corresponding to the N image frames following the currently pending image frame with each body-sensing action data contained in the first action data set; where N is a natural number greater than or equal to 1.
The human region data corresponding to the N image frames following the currently pending image frame in the video data are respectively compared with each body-sensing action data contained in the first action data set. The comparison may be performed in the manner of sub-step S221, which is not repeated here. For example, the currently pending image frame is segmented to obtain the corresponding human region data, and the human region data are compared with each body-sensing action data contained in each combinative movement data set; if there is a body-sensing action data whose comparison result with the human region data is successful, the combinative movement data set containing that body-sensing action data is determined as the first action data set, and the human region data corresponding to each image frame following the currently pending image frame are then compared only with the body-sensing action data contained in the first action data set. This narrows the range of comparison objects and speeds up the process of querying the action data set corresponding to each image frame.
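The narrowing described in sub-steps S221–S223 can be sketched as follows. As an assumption, matching is collapsed to name equality and the per-frame human region data are pre-recognized action names; the structure — first hit fixes a candidate set, later frames are checked only against that set — is the technique from the passage.

```python
# Sketch: the first successful comparison fixes the "first action data set";
# the next n_follow_up frames are compared only against that set's actions.

def find_candidate_set(library, frame_actions, n_follow_up=3):
    """library: set name -> ordered action list.
    frame_actions: per-frame recognized human-region data (action names)."""
    for i, action in enumerate(frame_actions):
        candidates = [name for name, actions in library.items()
                      if action in actions]
        if candidates:
            first_set = candidates[0]      # first action data set
            follow_up = frame_actions[i + 1:i + 1 + n_follow_up]
            # Subsequent frames: compare only within the candidate set.
            hits = [a for a in follow_up if a in library[first_set]]
            return first_set, hits
    return None, []
```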
Step S230: when it is determined that the comparison results meet a preset matching rule, determining, according to a preset combinative movement processing library, the combinative movement processing rule corresponding to the combinative movement data set matching the multiple human region data; where the combinative movement processing library stores the combinative movement processing rule corresponding to each combinative movement data set.
The preset matching rule includes: when M human region data contained in the multiple human region data respectively match M body-sensing action data contained in a combinative movement data set to be compared, determining that the multiple human region data and the combinative movement data set to be compared meet the matching rule; where the total quantity of the multiple human region data is greater than or equal to M, the total quantity of the multiple body-sensing action data contained in the combinative movement data set to be compared is greater than or equal to M, and M is a natural number greater than 1.
The combinative movement data set to be compared refers to a preset combinative movement data set, and a human region data matching a body-sensing action data means that the comparison result between the two is successful. In practical applications, the user's multiple body-sensing actions may not fully coincide with the body-sensing actions corresponding to a combinative movement data set; for example, relative to the multiple body-sensing actions corresponding to the combinative movement data set, the detected body-sensing actions of the user may contain a wrong body-sensing action or omit one. If the operation of processing the video data were triggered only when the user's body-sensing actions were strictly identical to the multiple body-sensing actions corresponding to the combinative movement data set, this would inconvenience the user and degrade the body-sensing interaction experience.
Therefore, those skilled in the art may set the preset matching rule according to the specific application scenario. For example, a matching ratio threshold may be set, where the matching ratio refers to the ratio of the quantity of human region data whose comparison results with the multiple body-sensing action data contained in a combinative movement data set are successful to the quantity of those body-sensing action data; if the matching ratio is not less than the matching ratio threshold, it is determined that the comparison results meet the preset matching rule. For example, if a combinative movement data set contains five body-sensing action data, and the above steps determine that the comparison results of four human region data with four of the five body-sensing action data are each successful, the matching ratio is 80%, and the four human region data are considered to match the combinative movement data set. Alternatively, a priority sequence number may be set for each body-sensing action data in a combinative movement data set; if the comparison results of the multiple human region data with the several higher-priority body-sensing action data in the combinative movement data set are all successful, the multiple human region data are considered to match the combinative movement data set.
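The matching-ratio variant of the rule is a one-liner; the sketch below uses a 0.8 threshold to mirror the 4-out-of-5 example above, but the threshold is a tunable parameter, not a value fixed by the patent.

```python
# Sketch of the matching-ratio rule: the fraction of a candidate set's
# actions matched by some human region data must reach a threshold.

def meets_matching_rule(matched_flags, threshold=0.8):
    """matched_flags: one boolean per body-sensing action data in the
    candidate set, True when some human region datum compared successfully
    against that action."""
    ratio = sum(matched_flags) / len(matched_flags)
    return ratio >= threshold
```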
The combinative movement processing rule corresponding to the combinative movement data set matching the multiple human region data is then determined. The combinative movement processing rule may be an effect-adding processing rule, an effect-sticker-adding processing rule, or an animation display processing rule. For example, in a live-streaming scene, the user makes the body-sensing actions of raising the right hand and then lowering it; the corresponding combinative movement processing rule is determined to be an effect-adding processing rule, so effect processing is added to the video data, presented as displaying a "beat the ground" effect picture. As another example, in a body-sensing game, the user makes the body-sensing action of hitting a tennis ball with the right hand; the corresponding combinative movement processing rule is determined to be an animation display processing rule, so animation is added to the video data and displayed, presented as displaying an animation of the right hand hitting a tennis ball. The present invention does not limit the content of the combinative movement processing rule.
Where each body-sensing action data contained in the combinative movement data set to be compared carries a time sequence number identifier, the step of the M human region data contained in the multiple human region data respectively matching the M body-sensing action data contained in the combinative movement data set to be compared specifically includes:
judging whether the order of appearance, in the video data, of the M human region data contained in the multiple human region data matches the time sequence number identifiers of the M body-sensing action data contained in the combinative movement data set to be compared; and if so, determining that the M human region data contained in the multiple human region data respectively match the M body-sensing action data contained in the combinative movement data set to be compared.
Since the multiple body-sensing action data contained in the preset combinative movement data sets each carry a time sequence number identifier, the order of appearance of each human region data needs to be compared against the multiple body-sensing action data. For example, a combinative movement data set contains the two body-sensing action data "raise the right hand" and "lower the right hand": raising the right hand and then lowering it corresponds to the "beat the ground" combinative movement data set, while lowering the right hand and then raising it corresponds to the "swing the whip" combinative movement data set. It follows that different combinative movement data sets may contain the same multiple body-sensing action data, and the combinative movement data sets can be distinguished by the time sequence number identifier set for each body-sensing action data. Correspondingly, when querying the combinative movement data set matching the multiple human region data, it is necessary not only to determine the comparison results of the multiple human region data with the multiple body-sensing action data respectively, but also to determine whether the order in which the multiple human region data appear in the video data matches the time sequence number identifiers of the multiple body-sensing action data contained in the combinative movement data set to be compared. Only when the comparison results of the multiple human region data with the multiple body-sensing action data contained in a combinative movement data set meet the preset matching rule, and the order in which the multiple human region data appear in the video data matches the time sequence number identifiers of the multiple body-sensing action data contained in that combinative movement data set, is it determined that the multiple human region data match that combinative movement data set.
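The order check itself is simple: the matched actions, taken in the order their frames appear in the video, must have non-decreasing sequence numbers. The pair-based representation below is an illustrative assumption.

```python
# Sketch of the time-sequence check: matched actions must occur in the
# video in the same order as their sequence-number identifiers in the set.

def order_matches(matched_pairs):
    """matched_pairs: (frame_index, sequence_number) for each matched
    action, in arbitrary order. The match is valid when sorting by frame
    index also leaves the sequence numbers sorted."""
    by_frame = sorted(matched_pairs)
    seq = [s for _, s in by_frame]
    return seq == sorted(seq)
```

With this check, "raise then lower" and "lower then raise" resolve to different combinative movement data sets even though both comparisons succeed on the same two actions.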
Step S240: processing the video data according to the combinative movement processing rule, and displaying the processed video data.
Specifically, the video data are processed according to the effect map corresponding to the combinative movement data set, and the processed video data are displayed. Processing the video data means processing the image frames contained in the video data according to the combinative movement processing rule; for example, under the effect-adding processing rule mentioned above, effect processing is added to the video data and the processed video data are displayed. For instance, according to the combinative movement processing rule corresponding to an "Eighteen Dragon-Subduing Palms" combinative movement data set, the corresponding image frames in the video data are processed and the processed video data are displayed, so that the displayed video data contain the "Eighteen Dragon-Subduing Palms" effect.
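Step S240 amounts to a lookup followed by a per-frame transformation. In the sketch below, the processing library contents, the rule fields, and the "apply" step (attaching the effect asset to each frame dict) are illustrative assumptions; a real implementation would composite the effect into the frame pixels.

```python
# Sketch: look up the matched set's processing rule in a combinative
# movement processing library, then apply it to the relevant frames.

processing_library = {
    "beat_ground": {"kind": "special_effect", "asset": "ground_crack.png"},
    "whip":        {"kind": "animation",      "asset": "whip_swing.gif"},
}

def apply_rule(frames, matched_set):
    rule = processing_library[matched_set]
    # Processing is reduced here to tagging each frame with the effect asset.
    return [dict(frame, effect=rule["asset"]) for frame in frames]
```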
Specifically, the currently pending image frame and/or the L image frames following the currently pending image frame are processed, where L is a natural number greater than 1. In practical applications, there are cases in which the corresponding combinative movement processing rule is determined according to the currently pending image frame alone, in which case the currently pending image frame and its subsequent L image frames are processed accordingly; alternatively, there are cases in which the corresponding combinative movement processing rule is determined jointly according to the currently pending image frame and several image frames following it, in which case the L image frames following the currently pending image frame are processed.
According to the video data processing method provided in this embodiment, this approach, based on human body segmentation, can quickly and accurately capture the body-sensing actions of the human body and process the video data driven by those body-sensing actions. Moreover, since the human region is detected by a neural network and segmented from the image, the approach imposes no special requirements on the camera device: it does not depend on video data captured by a high-precision, high-depth camera, is applicable to any mobile terminal with a camera, is strongly resistant to infrared interference, and is low in cost. Furthermore, since the approach triggers the corresponding effect through the combination of actions corresponding to the human regions in multiple image frames, and subsequent steps are performed only on the premise that the multiple image frames all match successfully, the accuracy of processing is improved and the false-trigger rate is reduced. A human-computer interaction mode based on human region segmentation and driven by body-sensing actions is thus provided, which can quickly determine the rule for processing the video data according to the body-sensing actions and display the processed video data, improving the display effect of the video data.
Fig. 4 shows a schematic structural diagram of a video data processing apparatus according to a further embodiment of the present invention. As shown in Fig. 4, the apparatus includes:
a segmentation module 41, adapted to perform human body segmentation on multiple image frames in the video data to obtain multiple human region data corresponding to the multiple image frames;
a comparison module 42, adapted to compare the multiple human region data respectively with the multiple body-sensing action data contained in preset combinative movement data sets;
a processing rule acquisition module 43, adapted to, when it is determined that the comparison results meet a preset matching rule, acquire the combinative movement processing rule corresponding to the combinative movement data set matching the multiple human region data;
a processing module 44, adapted to process the video data according to the combinative movement processing rule; and
a display module 45, adapted to display the processed video data.
Optionally, the preset combinative movement data sets include multiple combinative movement data sets stored in a preset body-sensing maneuver library, each combinative movement data set containing at least two body-sensing action data; the comparison module 42 is then further adapted to:
compare the multiple human region data respectively with the multiple body-sensing action data contained in each combinative movement data set stored in the body-sensing maneuver library.
Optionally, the preset matching rule includes:
when M human region data contained in the multiple human region data respectively match M body-sensing action data contained in the combinative movement data set to be compared, determining that the multiple human region data and the combinative movement data set to be compared meet the matching rule;
where the total quantity of the multiple human region data is greater than or equal to M, the total quantity of the multiple body-sensing action data contained in the combinative movement data set to be compared is greater than or equal to M, and M is a natural number greater than 1.
Optionally, each body-sensing action data contained in the combinative movement data set to be compared carries a time sequence number identifier, and the comparison module 42 is further adapted to:
judge whether the order of appearance, in the video data, of the M human region data contained in the multiple human region data matches the time sequence number identifiers of the M body-sensing action data contained in the combinative movement data set to be compared;
and if so, determine that the M human region data contained in the multiple human region data respectively match the M body-sensing action data contained in the combinative movement data set to be compared.
Optionally, the segmentation module 41 is further adapted to:
according to the order of appearance of each image frame in the video data, acquire in real time the currently pending image frame contained in the video data, perform human body segmentation on the currently pending image frame, and obtain the human region data corresponding to the currently pending image frame.
Optionally, the comparison module 42 is further adapted to:
compare the human region data corresponding to the currently pending image frame respectively with the multiple body-sensing action data contained in each combinative movement data set;
determine a body-sensing action data whose comparison result is successful as first action data, and determine the combinative movement data set containing the first action data as a first action data set; and
compare the human region data corresponding to the N image frames following the currently pending image frame with each body-sensing action data contained in the first action data set; where N is a natural number greater than or equal to 1.
Optionally, the processing rule acquisition module 43 is further adapted to:
determine, according to a preset combinative movement processing library, the combinative movement processing rule corresponding to the combinative movement data set matching the multiple human region data;
where the combinative movement processing library stores the combinative movement processing rule corresponding to each combinative movement data set.
Optionally, the combinative movement processing rule includes: processing the video data according to the effect map corresponding to the combinative movement data set.
Optionally, the processing module 44 is further adapted to:
process the currently pending image frame and/or the L image frames following the currently pending image frame; where L is a natural number greater than 1.
Optionally, the video data include: video data captured in real time by an image acquisition device, and/or video data contained in a human-computer interaction game.
For the specific structure and working principle of each of the above modules, reference may be made to the description of the corresponding steps in the method embodiments, which is not repeated here.
Another embodiment of the present application provides a non-volatile computer storage medium storing at least one executable instruction, where the executable instruction can execute the video data processing method in any of the above method embodiments.
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in Fig. 5, the computing device may include: a processor (processor) 502, a communications interface (Communications Interface) 504, a memory (memory) 506, and a communication bus 508, where:
the processor 502, the communications interface 504, and the memory 506 communicate with one another through the communication bus 508;
the communications interface 504 is used for communicating with network elements of other devices, such as clients or other servers; and
the processor 502 is used for executing a program 510, and may specifically execute the relevant steps in the above embodiments of the video data processing method.
Specifically, the program 510 may include program code, and the program code includes computer operation instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC: Application Specific Integrated Circuit), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used for storing the program 510. The memory 506 may include a high-speed RAM memory, and may further include a non-volatile memory, for example at least one disk memory.
The program 510 may specifically be used to cause the processor 502 to perform the following operations: performing human body segmentation on multiple image frames in the video data to obtain multiple human region data corresponding to the multiple image frames; comparing the multiple human region data respectively with the multiple body-sensing action data contained in preset combinative movement data sets; when it is determined that the comparison results meet a preset matching rule, acquiring the combinative movement processing rule corresponding to the combinative movement data set matching the multiple human region data; and processing the video data according to the combinative movement processing rule and displaying the processed video data.
In an optional mode, the preset combinative movement data sets include multiple combinative movement data sets stored in a preset body-sensing maneuver library, each combinative movement data set containing at least two body-sensing action data; the program 510 may then specifically be further used to cause the processor 502 to perform the following operation: comparing the multiple human region data respectively with the multiple body-sensing action data contained in each combinative movement data set stored in the body-sensing maneuver library.
In an optional mode, the preset matching rule includes: when M human region data contained in the multiple human region data respectively match M body-sensing action data contained in the combinative movement data set to be compared, determining that the multiple human region data and the combinative movement data set to be compared meet the matching rule; where the total quantity of the multiple human region data is greater than or equal to M, the total quantity of the multiple body-sensing action data contained in the combinative movement data set to be compared is greater than or equal to M, and M is a natural number greater than 1.
In an optional mode, each body-sensing action data contained in the combinative movement data set to be compared carries a time sequence number identifier, and the program 510 may specifically be further used to cause the processor 502 to perform the following operations: judging whether the order of appearance, in the video data, of the M human region data contained in the multiple human region data matches the time sequence number identifiers of the M body-sensing action data contained in the combinative movement data set to be compared; and if so, determining that the M human region data contained in the multiple human region data respectively match the M body-sensing action data contained in the combinative movement data set to be compared.
In an optional mode, the program 510 may specifically be further used to cause the processor 502 to perform the following operations: according to the order of appearance of each image frame in the video data, acquiring in real time the currently pending image frame contained in the video data, performing human body segmentation on the currently pending image frame, and obtaining the human region data corresponding to the currently pending image frame.
In an optional mode, the program 510 may specifically be further used to cause the processor 502 to perform the following operations: comparing the human region data corresponding to the currently pending image frame respectively with the multiple body-sensing action data contained in each combinative movement data set; determining a body-sensing action data whose comparison result is successful as first action data, and determining the combinative movement data set containing the first action data as a first action data set; and comparing the human region data corresponding to the N image frames following the currently pending image frame with each body-sensing action data contained in the first action data set; where N is a natural number greater than or equal to 1.
In an optional mode, the program 510 may specifically be further used to cause the processor 502 to perform the following operations: determining, according to a preset combinative movement processing library, the combinative movement processing rule corresponding to the combinative movement data set matching the multiple human region data; where the combinative movement processing library stores the combinative movement processing rule corresponding to each combinative movement data set.
In an optional mode, the combinative movement processing rule includes: processing the video data according to the effect map corresponding to the combinative movement data set.
In an optional mode, the program 510 may specifically be further used to cause the processor 502 to perform the following operation: processing the currently pending image frame and/or the L image frames following the currently pending image frame; where L is a natural number greater than 1.
In an optional mode, the video data include: video data captured in real time by an image acquisition device, and/or video data contained in a human-computer interaction game.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such systems is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and the above description of specific languages is provided to disclose the best mode of carrying out the invention.
In the specification provided herein, numerous specific details are set forth. It is to be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules, units, or components in an embodiment may be combined into one module, unit, or component, and may furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the video data processing computing device according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
Claims (10)
1. A method for processing video data, comprising:
performing human body segmentation on a plurality of image frames in the video data to obtain a plurality of human region data corresponding to the plurality of image frames;
comparing the plurality of human region data respectively with a plurality of motion-sensing action data included in a preset combined action data set;
when it is determined that a comparison result satisfies a preset matching rule, obtaining a combined action processing rule corresponding to the combined action data set that matches the plurality of human region data; and
processing the video data according to the combined action processing rule, and displaying the processed video data.
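The claimed processing flow can be sketched as a short pipeline. This is an illustrative outline only, not the patented implementation: the function names, the dictionary layout of the action data set, and the pluggable `segment`/`matches` callbacks are all invented for the example.

```python
def process_video(frames, action_set, rule_table, segment, matches):
    """Sketch of claim 1: segment each frame, compare the resulting human
    region data against a preset combined action data set, and on a match
    apply the corresponding processing rule before display.

    frames:     list of image frames (here, plain strings as stand-ins)
    action_set: a combined action data set, e.g. {"id": ..., "actions": [...]}
    rule_table: maps an action-set id to its processing rule (a callable)
    segment:    human-body-segmentation function (assumed, pluggable)
    matches:    preset matching rule (assumed, pluggable)
    """
    regions = [segment(f) for f in frames]        # human body segmentation
    if matches(regions, action_set):              # preset matching rule met?
        rule = rule_table[action_set["id"]]       # combined action processing rule
        return [rule(f) for f in frames]          # process the video data
    return frames                                 # no match: leave unchanged

# Toy usage: "segmentation" is the identity, the rule uppercases each frame.
frames = ["wave", "jump"]
action_set = {"id": "greet", "actions": ["wave", "jump"]}
out = process_video(
    frames, action_set, {"greet": str.upper},
    segment=lambda f: f,
    matches=lambda regs, s: all(a in regs for a in s["actions"]),
)
print(out)  # ['WAVE', 'JUMP']
```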
2. The method according to claim 1, wherein the preset combined action data set comprises a plurality of combined action data sets stored in a preset motion-sensing action library, each combined action data set containing at least two motion-sensing action data;
and wherein the step of comparing the plurality of human region data respectively with the plurality of motion-sensing action data included in the preset combined action data set specifically comprises:
comparing the plurality of human region data respectively with the plurality of motion-sensing action data included in each combined action data set stored in the motion-sensing action library.
3. The method according to claim 1 or 2, wherein the preset matching rule comprises:
determining that the plurality of human region data and a combined action data set to be compared satisfy the matching rule when M human region data included in the plurality of human region data respectively match M motion-sensing action data included in the combined action data set to be compared;
wherein the total number of the plurality of human region data is greater than or equal to M, the total number of motion-sensing action data included in the combined action data set to be compared is greater than or equal to M, and M is a natural number greater than 1.
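The matching rule of claim 3 (M region data items each matching one of M action data items, with both totals at least M and M > 1) can be sketched as follows. The comparison function `is_match` is an assumption standing in for whatever pose-similarity test an implementation would use.

```python
def satisfies_matching_rule(regions, actions, M, is_match):
    """Illustrative check of the claim-3 rule: the rule is met when at least
    M of the human region data each match a distinct motion-sensing action
    data item, subject to len(regions) >= M, len(actions) >= M, and M > 1."""
    if M <= 1 or len(regions) < M or len(actions) < M:
        return False
    matched, used = 0, set()
    for r in regions:
        for i, a in enumerate(actions):
            if i not in used and is_match(r, a):   # each action used at most once
                used.add(i)
                matched += 1
                break
    return matched >= M

# Toy usage with string equality standing in for pose comparison.
print(satisfies_matching_rule(
    ["squat", "wave", "jump"], ["wave", "squat"], 2,
    lambda r, a: r == a))  # True
```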
4. The method according to claim 3, wherein each motion-sensing action data included in the combined action data set to be compared carries a time sequence number identifier, and the step of the M human region data included in the plurality of human region data respectively matching the M motion-sensing action data included in the combined action data set to be compared specifically comprises:
judging whether the order of appearance in the video data of the M human region data included in the plurality of human region data matches the time sequence number identifiers of the M motion-sensing action data included in the combined action data set to be compared;
and if so, determining that the M human region data included in the plurality of human region data respectively match the M motion-sensing action data included in the combined action data set to be compared.
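The time-sequence condition of claim 4 amounts to checking that the matched actions occur in the video in the order given by their sequence numbers. One plausible reading is an ordered-subsequence scan; this is an assumption about the intended semantics, not the patented algorithm.

```python
def order_matches(regions, seq_actions, is_match):
    """Illustrative claim-4 check: seq_actions is a list of
    (time_sequence_number, action_data) pairs; the actions must be matched
    by region data appearing in the video in ascending sequence-number order."""
    actions = [a for _, a in sorted(seq_actions)]   # order by sequence number
    i = 0
    for r in regions:                               # regions in appearance order
        if i < len(actions) and is_match(r, actions[i]):
            i += 1                                  # next expected action
    return i == len(actions)                        # all matched, in order

# Toy usage: "wave" (sequence 1) must precede "jump" (sequence 2).
print(order_matches(["noise", "wave", "noise", "jump"],
                    [(2, "jump"), (1, "wave")],
                    lambda r, a: r == a))  # True
```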
5. The method according to any one of claims 1-4, wherein the step of performing human body segmentation on the plurality of image frames in the video data to obtain the plurality of human region data corresponding to the plurality of image frames specifically comprises:
acquiring, in real time and according to the order of appearance of each image frame in the video data, the currently pending image frame contained in the video data, and performing human body segmentation on the currently pending image frame to obtain the human region data corresponding to the currently pending image frame.
6. The method according to claim 5, wherein the step of comparing the plurality of human region data respectively with the plurality of motion-sensing action data included in the preset combined action data set specifically comprises:
comparing the human region data corresponding to the currently pending image frame respectively with the plurality of motion-sensing action data included in each combined action data set;
determining a motion-sensing action data for which the comparison result is successful as first action data, and determining the combined action data set containing the first action data as a first action data set; and
comparing the human region data corresponding to each of the N image frames following the currently pending image frame with each motion-sensing action data included in the first action data set, wherein N is a natural number greater than or equal to 1.
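Claims 5-6 describe a streaming strategy: frames are segmented as they arrive, the first matched action selects a candidate set, and only the next N frames are compared against that set's remaining actions. A minimal sketch under those assumptions (all names and the give-up-after-N behavior are illustrative):

```python
def stream_match(frame_regions, action_sets, N, is_match):
    """Illustrative claims-5/6 flow: frame_regions arrive in appearance order;
    a first match selects a "first action data set", and the following N
    frames are compared only against that set's remaining actions."""
    candidate, remaining, budget = None, [], 0
    for region in frame_regions:
        if candidate is None:
            for s in action_sets:                 # compare against every set
                if any(is_match(region, a) for a in s["actions"]):
                    candidate = s                 # first action data set
                    remaining = [a for a in s["actions"]
                                 if not is_match(region, a)]
                    budget = N                    # N subsequent frames to check
                    break
            if candidate is not None and not remaining:
                return candidate["id"]
        else:
            budget -= 1
            remaining = [a for a in remaining if not is_match(region, a)]
            if not remaining:
                return candidate["id"]            # full combined action matched
            if budget == 0:
                candidate = None                  # window exhausted, rescan
    return None

# Toy usage: the "greet" set requires a wave followed (within 2 frames) by a jump.
sets = [{"id": "greet", "actions": ["wave", "jump"]}]
print(stream_match(["idle", "wave", "jump"], sets, 2, lambda r, a: r == a))  # greet
```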
7. The method according to any one of claims 1-6, wherein the step of obtaining the combined action processing rule corresponding to the combined action data set that matches the plurality of human region data specifically comprises:
determining, according to a preset combined action processing library, the combined action processing rule corresponding to the combined action data set that matches the plurality of human region data;
wherein the combined action processing library is used to store the combined action processing rule corresponding to each combined action data set.
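The "combined action processing library" of claim 7 can be modeled, at its simplest, as a mapping from action-set identifiers to processing rules. The identifiers and the effects below (a sticker overlay, a slow-motion pass) are invented placeholders, not effects named by the patent.

```python
# Hypothetical combined action processing library: action-set id -> rule.
PROCESSING_LIBRARY = {
    "greet": lambda frame: frame + "+sticker",   # e.g. overlay a sticker
    "dance": lambda frame: frame + "+slowmo",    # e.g. apply slow motion
}

def lookup_rule(matched_set_id):
    """Return the processing rule for a matched combined action data set,
    or None if the library holds no rule for it."""
    return PROCESSING_LIBRARY.get(matched_set_id)

rule = lookup_rule("greet")
print(rule("frame_001"))  # frame_001+sticker
```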
8. A device for processing video data, comprising:
a segmentation module, adapted to perform human body segmentation on a plurality of image frames in the video data to obtain a plurality of human region data corresponding to the plurality of image frames;
a comparison module, adapted to compare the plurality of human region data respectively with a plurality of motion-sensing action data included in a preset combined action data set;
a processing rule acquisition module, adapted to obtain, when it is determined that a comparison result satisfies a preset matching rule, a combined action processing rule corresponding to the combined action data set that matches the plurality of human region data;
a processing module, adapted to process the video data according to the combined action processing rule; and
a display module, adapted to display the processed video data.
9. A computing device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;
and wherein the memory is used to store at least one executable instruction that causes the processor to perform operations corresponding to the method for processing video data according to any one of claims 1-7.
10. A computer storage medium, wherein at least one executable instruction is stored in the storage medium, the executable instruction causing a processor to perform operations corresponding to the method for processing video data according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711394200.XA CN108121963B (en) | 2017-12-21 | 2017-12-21 | Video data processing method and device and computing equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711394200.XA CN108121963B (en) | 2017-12-21 | 2017-12-21 | Video data processing method and device and computing equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108121963A true CN108121963A (en) | 2018-06-05 |
CN108121963B CN108121963B (en) | 2021-08-24 |
Family
ID=62230850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711394200.XA Active CN108121963B (en) | 2017-12-21 | 2017-12-21 | Video data processing method and device and computing equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108121963B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114885210A (en) * | 2022-04-22 | 2022-08-09 | 海信集团控股股份有限公司 | Course video processing method, server and display equipment |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101989326A (en) * | 2009-07-31 | 2011-03-23 | 三星电子株式会社 | Human posture recognition method and device |
CN102724449A (en) * | 2011-03-31 | 2012-10-10 | 青岛海信电器股份有限公司 | Interactive TV and method for realizing interaction with user by utilizing display device |
CN104268138A (en) * | 2014-05-15 | 2015-01-07 | 西安工业大学 | Method for capturing human motion by aid of fused depth images and three-dimensional models |
CN104834913A (en) * | 2015-05-14 | 2015-08-12 | 中国人民解放军理工大学 | Flag signal identification method and apparatus based on depth image |
CN105100672A (en) * | 2014-05-09 | 2015-11-25 | 三星电子株式会社 | Display apparatus and method for performing videotelephony using the same |
CN105930072A (en) * | 2015-02-28 | 2016-09-07 | 三星电子株式会社 | Electronic Device And Control Method Thereof |
CN106060676A (en) * | 2016-05-17 | 2016-10-26 | 腾讯科技(深圳)有限公司 | Online interaction method and apparatus based on live streaming |
CN106096062A (en) * | 2016-07-15 | 2016-11-09 | 乐视控股(北京)有限公司 | video interactive method and device |
CN106331801A (en) * | 2016-08-31 | 2017-01-11 | 北京乐动卓越科技有限公司 | Man-machine interaction method and system of smart television motion sensing game |
CN106804007A (en) * | 2017-03-20 | 2017-06-06 | 合网络技术(北京)有限公司 | The method of Auto-matching special efficacy, system and equipment in a kind of network direct broadcasting |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114885210A (en) * | 2022-04-22 | 2022-08-09 | 海信集团控股股份有限公司 | Course video processing method, server and display equipment |
CN114885210B (en) * | 2022-04-22 | 2023-11-28 | 海信集团控股股份有限公司 | Tutorial video processing method, server and display device |
Also Published As
Publication number | Publication date |
---|---|
CN108121963B (en) | 2021-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Molchanov et al. | Online detection and classification of dynamic hand gestures with recurrent 3d convolutional neural network | |
US10974152B2 (en) | System and method for toy recognition | |
CN108154105B (en) | Underwater biological detection and identification method and device, server and terminal equipment | |
CN107204012A (en) | Reduce the power consumption of time-of-flight depth imaging | |
CN108090561B (en) | Storage medium, electronic device, and method and device for executing game operation | |
CN107995442A (en) | Processing method, device and the computing device of video data | |
CN103353935A (en) | 3D dynamic gesture identification method for intelligent home system | |
CN110812845B (en) | Plug-in detection method, plug-in recognition model training method and related device | |
Hsieh et al. | A kinect-based people-flow counting system | |
CN108256404A (en) | Pedestrian detection method and device | |
Frintrop et al. | A cognitive approach for object discovery | |
CN112527113A (en) | Method and apparatus for training gesture recognition and gesture recognition network, medium, and device | |
CN110532883A (en) | On-line tracking is improved using off-line tracking algorithm | |
CN103020885A (en) | Depth image compression | |
Vieriu et al. | On HMM static hand gesture recognition | |
CN109242000B (en) | Image processing method, device, equipment and computer readable storage medium | |
Cong et al. | Reinforcement learning with vision-proprioception model for robot planar pushing | |
Kirkland et al. | Perception understanding action: adding understanding to the perception action cycle with spiking segmentation | |
CN107077730B (en) | Silhouette-based limb finder determination | |
CN108121963A (en) | Processing method, device and the computing device of video data | |
Ruiz-Santaquiteria et al. | Improving handgun detection through a combination of visual features and body pose-based data | |
De Geest et al. | Dense interest features for video processing | |
CN114445898B (en) | Face living body detection method, device, equipment, storage medium and program product | |
CN103839032A (en) | Identification method and electronic device | |
CN117809009A (en) | Identity recognition method, equipment, device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||