CN110532984A - Keypoint detection method, gesture recognition method, apparatus and system - Google Patents
Keypoint detection method, gesture recognition method, apparatus and system
- Publication number: CN110532984A (application CN201910830741.5A)
- Authority
- CN
- China
- Prior art keywords
- frame
- target object
- target
- bounding box
- keypoint
- Prior art date
- Legal status (an assumption, not a legal conclusion): Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Abstract
The present invention provides a keypoint detection method, a gesture recognition method, an apparatus and a system, relating to the field of artificial intelligence. The method comprises: obtaining an image to be detected, the image containing target objects; performing target detection on the image to obtain a bounding box for each target object; merging the bounding boxes of the target objects to obtain a merged target box, wherein the image region corresponding to the merged target box contains at least two target objects that are in contact; performing keypoint detection on the image to be detected based on the merged target box to obtain multiple heatmaps for each target object, wherein different heatmaps characterize keypoints at different locations on a target object; and determining the keypoints of each contacting target object from the heatmaps. The present invention can effectively improve the detection accuracy of keypoints.
Description
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a keypoint detection method, a gesture recognition method, an apparatus and a system.
Background art
Detecting target objects such as gestures and faces in images is an important application of artificial intelligence. Keypoint detection on a target object is a key link in target detection, and the keypoint information of a target object helps determine its posture more accurately. In recent years, with the development of deep learning methods for target-object pose estimation, combining target detection with keypoint estimation has become the most common and computationally efficient keypoint detection approach. However, this approach can only handle an individual target object or multiple clearly separated target objects. Once target objects come into contact with one another — for example, when two hands cross, or one hand writes in the palm of the other — it becomes difficult to detect their keypoints accurately.
Summary of the invention
In view of this, the purpose of the present invention is to provide a keypoint detection method, a gesture recognition method, an apparatus and a system that can effectively improve the detection accuracy of keypoints.
To achieve the above goal, the technical solutions adopted in the embodiments of the present invention are as follows:
In a first aspect, an embodiment of the present invention provides a keypoint detection method, comprising: obtaining an image to be detected, the image containing target objects; performing target detection on the image to obtain a bounding box for each target object; merging the bounding boxes of the target objects to obtain a merged target box, wherein the image region corresponding to the merged target box contains at least two target objects that are in contact; performing keypoint detection on the image to be detected based on the merged target box to obtain multiple heatmaps for each target object, wherein different heatmaps characterize keypoints at different locations on the target object; and determining the keypoints of each contacting target object based on the heatmaps.
Further, the step of merging the bounding boxes of the target objects to obtain a merged target box comprises: repeatedly applying a preset merge operation to the bounding boxes until the overlap degree between any two boxes is no greater than a preset overlap threshold, yielding the merged target box.
Further, the step of merging the bounding boxes of the target objects to obtain a merged target box comprises: repeatedly applying a preset merge operation to the bounding boxes until the overlap degree between any two boxes is no greater than a preset overlap threshold, yielding candidate merged boxes; and screening the candidate merged boxes according to the number of target objects each contains, to obtain the merged target box.
Further, the merge operation comprises: determining a pair of boxes to be merged from the multiple bounding boxes, the pair consisting of two boxes; calculating the overlap degree between the two boxes of the pair; and, if the calculated overlap degree is greater than a preset overlap threshold, merging the two boxes of the pair into a new box, wherein the boundary of the new box is determined from the boundaries of the two boxes in the pair.
Further, the step of determining a pair of boxes to be merged from the multiple bounding boxes comprises: obtaining the confidence of each box; ranking the boxes by confidence to obtain a ranking result; and determining the pair of boxes to be merged according to the ranking result.
Further, the step of obtaining the confidence of a box comprises: obtaining the confidences of the two boxes in a pair, and deriving the confidence of the new merged box from the confidences of those two boxes.
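The confidence handling in the claims above can be sketched as follows. This is an illustrative Python fragment, not part of the patent; in particular, taking the maximum as the merged box's confidence is only one plausible choice that the claim leaves open.

```python
# Hypothetical sketch of confidence-based pairing and merged-box confidence.
# A box is a dict {'xyxy': (x1, y1, x2, y2), 'conf': float}.

def sort_boxes_by_confidence(boxes):
    # Rank boxes from most to least confident to pick merge candidates first.
    return sorted(boxes, key=lambda b: b["conf"], reverse=True)

def merged_confidence(conf_a, conf_b):
    # One reasonable rule (an assumption): the merged box is at least as
    # trustworthy as its more confident member.
    return max(conf_a, conf_b)
```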
Further, the step of performing keypoint detection on the image to be detected based on the merged target box to obtain multiple heatmaps for each target object comprises: cropping a local image from the image to be detected based on the merged target box, the local image containing the contacting target objects; resizing the local image; and performing keypoint detection on the resized local image to obtain the multiple heatmaps for each target object.
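The crop-and-resize step can be sketched as below: a minimal numpy illustration with nearest-neighbour resizing, assuming integer box coordinates. The patent does not prescribe an interpolation method, so nearest-neighbour is an assumption chosen for simplicity.

```python
import numpy as np

def crop_and_resize(image, box, out_size):
    """Crop `box` = (x1, y1, x2, y2) from an HxWxC numpy image and
    resize the patch to (out_size, out_size) with nearest-neighbour
    sampling (illustrative; a real system would likely use bilinear)."""
    x1, y1, x2, y2 = box
    patch = image[y1:y2, x1:x2]
    h, w = patch.shape[:2]
    # Index arrays mapping output pixels back to source pixels.
    ys = (np.arange(out_size) * h / out_size).astype(int)
    xs = (np.arange(out_size) * w / out_size).astype(int)
    return patch[ys][:, xs]
```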
Further, the step of performing keypoint detection on the resized local image to obtain multiple heatmaps for each target object comprises: detecting keypoints in the resized local image with a trained detection model to obtain the multiple heatmaps for each target object.
Further, the method also comprises: feeding a detection model to be trained with training images annotated with multiple keypoint positions of target objects, wherein each training image contains at least two contacting target objects and the overlap degree between any two of these target objects reaches a preset overlap threshold; detecting the training images with the detection model to be trained and outputting heatmaps for each target object in the training images; obtaining the keypoint positions of each target object in a training image from its heatmaps; and optimizing the parameters of the detection model to be trained based on the keypoint positions it produces and the annotated keypoint positions, until the matching degree between the produced and annotated keypoint positions reaches a preset matching degree, at which point training ends and the trained detection model is obtained.
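A common way to realize such heatmap supervision — not specified by the patent, so an assumption — is to render each annotated keypoint as a small Gaussian and regress the predicted heatmaps against these targets. A minimal sketch:

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    """Render the training target for one keypoint: a 2-D Gaussian
    centred on the annotated position (cx, cy), peak value 1."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def heatmap_mse(pred, target):
    # Per-pixel mean-squared error, a typical loss for heatmap regression.
    return float(np.mean((pred - target) ** 2))
```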
Further, the step of determining the keypoints of each contacting target object based on the heatmaps comprises: obtaining the brightness value of each pixel in a heatmap, wherein the brightness value characterizes the confidence of the corresponding keypoint in the heatmap; filtering the heatmap according to a preset keypoint brightness threshold and the maximum brightness value obtained; and determining the keypoints of each contacting target object from the filtered heatmaps.
In a second aspect, an embodiment of the present invention also provides a gesture recognition method, comprising: performing keypoint detection on a hand image to be detected using the keypoint detection method of any item of the first aspect, to obtain the keypoints of each hand; and recognizing the gesture category from the keypoints of each hand.
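As a toy illustration of recognizing a gesture from hand keypoints — the patent does not specify a classifier — a crude finger-counting heuristic over the conventional 21-point hand model might look like this. The joint indices follow the common MediaPipe-style layout and are an assumption, as is the distance heuristic itself.

```python
import math

WRIST = 0                        # index of the wrist in a 21-point hand model
FINGERTIPS = (4, 8, 12, 16, 20)  # thumb, index, middle, ring, little tips
KNUCKLES = (2, 5, 9, 13, 17)     # matching lower joints

def count_extended_fingers(keypoints):
    """keypoints: list of 21 (x, y) pairs. A finger counts as extended
    when its tip lies farther from the wrist than its knuckle — a crude
    heuristic for illustration, not the patent's classifier."""
    wx, wy = keypoints[WRIST]
    def dist(i):
        x, y = keypoints[i]
        return math.hypot(x - wx, y - wy)
    return sum(1 for tip, kn in zip(FINGERTIPS, KNUCKLES) if dist(tip) > dist(kn))
```

The count could then be mapped to gesture categories such as "fist" (0 extended) or "open palm" (5 extended).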
In a third aspect, an embodiment of the present invention also provides a keypoint detection device, comprising: an image acquisition module for obtaining an image to be detected containing target objects; a target detection module for performing target detection on the image to obtain a bounding box for each target object; a box merging module for merging the bounding boxes of the target objects to obtain a merged target box, wherein the image region corresponding to the merged target box contains at least two contacting target objects; a keypoint detection module for performing keypoint detection on the image based on the merged target box to obtain multiple heatmaps for each target object, wherein different heatmaps characterize keypoints at different locations on the target object; and a keypoint determining module for determining the keypoints of each contacting target object based on the heatmaps.
In a fourth aspect, an embodiment of the present invention also provides a gesture recognition device, comprising: a hand keypoint detection module for performing keypoint detection on a hand image to be detected using the keypoint detection method of any item of the first aspect, to obtain the keypoints of each hand; and a gesture recognition module for recognizing the gesture category from the keypoints of each hand.
In a fifth aspect, an embodiment of the present invention provides a keypoint detection system, comprising an image acquisition device, a processor and a storage device. The image acquisition device is used for acquiring the image to be detected; a computer program is stored on the storage device, and when run by the processor the computer program executes the keypoint detection method of any item of the first aspect and the gesture recognition method of the second aspect.
In a sixth aspect, an embodiment of the present invention provides an electronic device comprising a memory and a processor. The memory stores a computer program runnable on the processor, and when executing the computer program the processor implements the steps of the keypoint detection method of any item of the first aspect and of the gesture recognition method of the second aspect.
In a seventh aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored. When run by a processor, the computer program executes the steps of the keypoint detection method of any item of the first aspect and of the gesture recognition method of the second aspect.
The embodiments of the present invention provide a keypoint detection method, a gesture recognition method, an apparatus and a system. Target detection is first performed on the image to be detected to obtain a bounding box for each target object; the bounding boxes are then merged into a merged target box whose corresponding image region contains at least two contacting target objects; keypoint detection is then performed on the image based on the merged target box, yielding multiple heatmaps for each target object, where different heatmaps characterize keypoints at different locations on a target object; finally, the keypoints of each contacting target object are determined from the heatmaps. Because keypoint detection is performed on the merged target box corresponding to at least two contacting target objects, heatmaps characterizing the keypoints at different locations on the contacting objects can be obtained and the keypoints determined from them. This approach effectively alleviates the inaccuracy of keypoint detection caused by contact between target objects, improving the detection accuracy of keypoints.
Other features and advantages of the present invention will be set forth in the following description; alternatively, some features and advantages may be deduced or unambiguously determined from the specification, or learnt by implementing the techniques of the disclosure.
To make the above objects, features and advantages of the present invention clearer and more comprehensible, preferred embodiments are described in detail below with reference to the appended drawings.
Brief description of the drawings
To illustrate the specific embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed for describing the specific embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show some embodiments of the present invention; those of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 shows a structural schematic diagram of an electronic device provided by an embodiment of the present invention;
Fig. 2 shows a flow diagram of a keypoint detection method provided by an embodiment of the present invention;
Fig. 3 shows schematic diagrams of two hands at three different degrees of contact, provided by an embodiment of the present invention;
Fig. 4 shows a method flow diagram of repeatedly performing the merge operation, provided by an embodiment of the present invention;
Fig. 5 shows a merging schematic diagram of bounding boxes, provided by an embodiment of the present invention;
Fig. 6 shows another method flow diagram of repeatedly performing the merge operation, provided by an embodiment of the present invention;
Fig. 7 shows a schematic diagram of a hand heatmap provided by an embodiment of the present invention;
Fig. 8 shows a structural block diagram of a keypoint detection device provided by an embodiment of the present invention.
Specific embodiments
To make the objects, technical solutions and advantages of the embodiments of the invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art, based on the embodiments of the present invention and without creative work, shall fall within the protection scope of the present invention.
In view of the difficulty that existing keypoint detection methods have in accurately detecting keypoints on objects in contact with one another, embodiments of the present invention provide a keypoint detection method, a gesture recognition method, an apparatus and a system. The technique can be applied in the various fields that use keypoint detection, such as human-computer interaction and gesture recognition. For ease of understanding, the embodiments of the present invention are described in detail below.
Embodiment one:
First, an exemplary electronic device 100 for implementing the keypoint detection method, gesture recognition method, apparatus and system of the embodiments of the present invention is described with reference to Fig. 1.
As shown in the structural schematic diagram of Fig. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108 and an image acquisition device 110, interconnected by a bus system 112 and/or other forms of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are illustrative rather than restrictive; the electronic device may have other components and structures as needed.
The processor 102 may be a central processing unit (CPU) or another form of processing unit with data-handling and/or instruction-execution capability, and may control the other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to realize the client functionality (implemented by the processor) of the embodiments of the invention described below and/or other desired functions. Various application programs and various data, such as data used and/or generated by the application programs, may also be stored on the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen and the like.
The output device 108 can export various information (for example, image or sound) to external (for example, user), and
It and may include one or more of display, loudspeaker etc..
The image acquisition device 110 may capture images desired by the user (such as photos and videos) and store the captured images in the storage device 104 for use by other components.
Illustratively, the exemplary electronic device for realizing the keypoint detection method, gesture recognition method, apparatus and system according to the embodiments of the present invention may be implemented as an intelligent terminal such as a smartphone, tablet computer, computer, VR device or camera.
Embodiment two:
Referring to the flow chart of a keypoint detection method shown in Fig. 2, the method specifically comprises the following steps:
Step S202: obtain an image to be detected containing target objects. The image to be detected may be an original image shot by the image acquisition device, or an image downloaded via a network, stored locally or uploaded manually. The image may contain at least two contacting target objects; a target object may be a person, a face, a hand, a vehicle and the like. For at least two contacting target objects, refer to the two hands at three different degrees of contact shown in Fig. 3: in the left view the two hands are adjacent, in the middle view they partially cross, and in the right view they almost completely overlap. Of course, besides the contacting target objects, the image may also contain other, separated target objects; this is not restricted here.
Step S204: perform target detection on the image to be detected to obtain a bounding box for each target object.
In some optional embodiments, target detection may be performed on the image to be detected with a neural network model such as a CNN (Convolutional Neural Network) model, an R-CNN (Region-CNN) model or a SegNet model, to obtain the bounding box of each target object in the image.
Step S206: merge the bounding boxes of the target objects to obtain a merged target box, wherein the image region corresponding to the merged target box contains at least two contacting target objects.
At least two target objects in contact with one another may overlap, and the bounding boxes of such target objects will then also overlap one another; the bounding boxes of at least two mutually overlapping target objects are merged to obtain the merged target box. It can be understood that, for a merged target box obtained from mutually overlapping bounding boxes, the number of target objects contained in the corresponding image region will not exceed the number of contacting target objects in the image to be detected; target objects with a small contact area may not fall within the image region corresponding to the merged target box.
Step S208: perform keypoint detection on the image to be detected based on the merged target box, obtaining multiple heatmaps for each target object, wherein different heatmaps characterize keypoints at different locations on the target object.
In the present embodiment, performing keypoint detection on the image to be detected based on the merged target box can be understood as follows: for the image region inside the merged target box, the keypoints of each target object are detected in a bottom-up manner, yielding the heatmaps of each target object in the merged target box. A heatmap is a diagram that shows a keypoint position in a special form such as highlighting or colour. In the process of obtaining the heatmaps of each target object based on the merged target box, one heatmap may be generated for each keypoint on the current target object; that is, each heatmap embodies only one keypoint. For example, when the target object is a hand, 21 keypoints are conventionally used to localize a hand, so 21 heatmaps are obtained for every hand, different heatmaps corresponding to keypoints at different locations on the hand.
Step S210: determine the keypoints of each contacting target object based on the heatmaps. The position coordinates of the keypoint characterized by each heatmap may be obtained first; then, according to the preset mapping relation between a heatmap and the target object, the original position coordinates on the target object corresponding to the keypoint coordinates in the heatmap are determined; finally, the keypoints of each contacting target object are determined from the original position coordinates of the keypoints.
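The heatmap-to-keypoint recovery of step S210 can be sketched as follows: a hypothetical numpy fragment assuming a linear mapping between heatmap pixels and the merged-box region, with the peak brightness doubling as the keypoint's confidence.

```python
import numpy as np

def keypoint_from_heatmap(heatmap, box, threshold=0.1):
    """Recover one keypoint from its heatmap. `box` = (x1, y1, x2, y2)
    is the merged box the heatmap was predicted for, so the peak is
    mapped back into original-image coordinates. Returns None when the
    peak brightness is below `threshold` (no confident detection)."""
    peak = float(heatmap.max())
    if peak < threshold:
        return None
    py, px = np.unravel_index(int(heatmap.argmax()), heatmap.shape)
    x1, y1, x2, y2 = box
    h, w = heatmap.shape
    # Linear mapping from heatmap pixels to image pixels (an assumption).
    return (x1 + px * (x2 - x1) / w, y1 + py * (y2 - y1) / h, peak)
```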
In the keypoint detection method provided by this embodiment of the present invention, target detection is first performed on the image to be detected to obtain a bounding box for each target object; the bounding boxes are then merged to obtain a merged target box whose corresponding image region contains at least two contacting target objects; keypoint detection is then performed on the image based on the merged target box, obtaining multiple heatmaps for each target object, different heatmaps characterizing keypoints at different locations on the target object; finally, the keypoints of each contacting target object are determined from the heatmaps. Because this embodiment performs keypoint detection on the merged target box corresponding to at least two contacting target objects, heatmaps characterizing the keypoints at different locations on the contacting objects can be obtained and the keypoints of those objects determined from them. This approach effectively alleviates the inaccuracy of keypoint detection caused by contact between target objects, improving the detection accuracy of keypoints.
When executing step S206 above, either of the following two box-merging modes may be used, depending on the actual detection scene, to obtain the merged target box:
Merging mode one: based on the bounding boxes of the target objects, repeatedly apply a preset merge operation to the boxes until the overlap degree between any two boxes is no greater than a preset overlap threshold, yielding the merged target box. Merging mode one suits relatively simple detection scenes, for example where only one merged target box is obtained, or where more than two merged target boxes are obtained and each contains the same number of target objects.
For a more complicated detection scene with multiple groups of contacting target objects, consider an image to be detected that contains both two contacting target objects A and B and three contacting target objects C, D and E, with C, D, E separated from A, B. In such a scene, the merged target box can be obtained with the following merging mode two.
Merging mode two: based on the bounding boxes of the target objects, first repeatedly apply a preset merge operation to the boxes until the overlap degree between any two boxes is no greater than a preset overlap threshold, yielding candidate merged boxes; there are at least two candidate merged boxes, and the numbers of target objects they contain are not identical. Then screen the candidate merged boxes by the number of target objects each contains, to obtain the merged target boxes. The present embodiment may classify the candidate merged boxes by the number of target objects they contain, candidates of the same class containing the same number of objects, and determine each class of candidates as one kind of merged target box; alternatively, according to the quantity of target objects actually to be detected (say, two specified target objects), the candidates containing the required number of target objects are determined as the merged target boxes.
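The screening step of merging mode two — keeping candidates by how many original boxes they contain — might be sketched like this. The fragment is illustrative only; counting objects by box containment is one plausible realization the patent leaves unspecified.

```python
def contains(outer, inner):
    """True if box `inner` lies inside box `outer`; both are (x1, y1, x2, y2)."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def select_target_boxes(candidates, original_boxes, wanted_count):
    """Keep the candidate merged boxes that contain exactly
    `wanted_count` of the original per-object bounding boxes."""
    selected = []
    for cand in candidates:
        n = sum(1 for b in original_boxes if contains(cand, b))
        if n == wanted_count:
            selected.append(cand)
    return selected
```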
Using merging mode two makes it easy, when executing step S208, to select a keypoint detection mode matched to the merged target box. For example, when keypoint detection is carried out with keypoint detection models based on convolutional neural networks, different models can be selected according to the number of target objects contained in the merged target box, so that the selected model has a higher matching degree with the keypoint detection task at hand, improving the accuracy of the detection results.
To facilitate understanding of the two merging modes above, the present embodiment describes a possible implementation of the merge operation. Referring to the method flow diagram of repeatedly performing the merge operation shown in Fig. 4, the merge operation includes the following steps S402 to S406:
Step S402: determine a pair of boxes to be merged from the multiple bounding boxes; the pair consists of two boxes, and in the first round of the merge operation the pair consists of the bounding boxes of two target objects. Every two boxes among the multiple boxes are determined as one pair to be merged: for boxes a, b, c and d, the pairs determined by pairwise combination are {ab}, {ac}, {ad}, {bc}, {bd} and {cd}.
Step S404: calculate the degree of overlap between the position frames of each position frame pair. For each position frame pair, the ratio of the overlapping area of the two position frames to the total area of the two position frames is calculated, giving the degree of overlap of the two position frames.
Step S406: if the calculated degree of overlap is greater than a preset overlap threshold, merge the position frames of the pair into a new position frame, whose boundary is determined by the boundaries of the two position frames of the pair. For example, if the degree of overlap of position frames a and b is greater than the preset overlap threshold (e.g., 70%), position frames a and b are merged to generate a new position frame. As illustrated by the position frame merging diagram in Fig. 5, the boundary of the new position frame is obtained by extending and intersecting the boundaries of the non-overlapping parts of position frames a and b, i.e., it is the smallest rectangle enclosing both.
It can be understood that after a new position frame is obtained by executing step S406, the flow returns to step S402 to continue determining position frame pairs to be merged from the new position frame and the remaining position frames; that is, steps S402 to S406 are re-executed over the position frames until the degree of overlap between any two position frames no longer exceeds the preset overlap threshold, at which point the candidate merged frames, or directly the target merged frames, are obtained.
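Steps S402 to S406 and the loop above can be sketched as follows. This is a minimal illustration, not the patented implementation: boxes are assumed axis-aligned `(x, y, w, h)` tuples, and the "ratio of overlapping area to total area" is paraphrased here as intersection-over-union.

```python
def overlap_degree(a, b):
    """Step S404: ratio of intersection area to total (union) area of
    boxes a and b, given as (x, y, w, h) tuples. The exact overlap
    metric of the patent is paraphrased as intersection-over-union."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def merge_boxes(a, b):
    """Step S406: the new position frame is taken as the smallest
    rectangle enclosing both boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = min(ax, bx), min(ay, by)
    x2, y2 = max(ax + aw, bx + bw), max(ay + ah, by + bh)
    return (x1, y1, x2 - x1, y2 - y1)

def merge_until_stable(boxes, thresh=0.7):
    """Repeat steps S402-S406: merge any pair whose overlap exceeds
    thresh, then restart, until no pair exceeds the threshold."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if overlap_degree(boxes[i], boxes[j]) > thresh:
                    new = merge_boxes(boxes[i], boxes[j])
                    boxes = [b for k, b in enumerate(boxes) if k not in (i, j)]
                    boxes.append(new)
                    merged = True
                    break
            if merged:
                break
    return boxes
```

Two heavily overlapping frames collapse into one enclosing frame, while a distant frame survives untouched, which is exactly the termination condition described above.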
In practical applications, some position frames may be inaccurately located and thus deviate from the target object; merging such position frames both reduces merging efficiency and impairs the accuracy of the merged result. For this reason, when step S402 is executed to determine position frame pairs to be merged from the multiple position frames, the confidence of the position frames may further be taken into account to mitigate this problem, as detailed below.
First, the confidence of each position frame is obtained. When target detection is performed on the image to be detected by a neural network model, a confidence can be generated for the position frame of each target object.
Then the position frames are sorted by confidence to obtain a position frame ranking result. When the position frames are sorted from high to low confidence, the frames ranked toward the end have lower confidence, indicating that they may deviate from the target and are less reliable; some of them can be screened out on this basis. In a specific implementation, position frames with lower confidence may be filtered out based on a preset confidence threshold; alternatively, position frames below a specified rank in the original ranking may be filtered out; the ranking result is then determined from the remaining position frames. Suppose the ranking result is position frames a, b, c and d, arranged from high to low confidence. Sorting the position frames by confidence in this embodiment helps reduce the involvement of low-confidence frames in subsequent merging, thereby effectively improving the efficiency and accuracy of position frame merging.
Finally, position frame pairs to be merged are determined from the multiple position frames according to the ranking result, each pair containing two position frames. The ranking result reflects the accuracy and reliability of each position frame; on the one hand, it identifies the pairs best suited to be merged first, and on the other hand, determining the pairs in ranked order ensures that no position frame is missed. Continuing the example of the ranking result being position frames a, b, c and d, the position frame pairs to be merged in this embodiment may include {ab}, {ac}, {ad}, {bc}, {bd} and {cd}.
On the basis of determining position frame pairs by confidence as above, this embodiment can provide another possible implementation of the merge operation. Referring to the method flowchart in Fig. 6, in which the merge operation is again performed multiple times, the merge operation includes the following steps S602 to S610:
Step S602: obtain the confidence of each position frame.
Step S604: sort the position frames by confidence to obtain a position frame ranking result.
Step S606: determine position frame pairs to be merged from the multiple position frames according to the ranking result.
Step S608: calculate the degree of overlap between the position frames of each pair.
Step S610: if the calculated degree of overlap is greater than the preset overlap threshold, merge the position frames of the pair into a new position frame.
It can be understood that after a new position frame is obtained by executing step S610, the flow returns to step S602; that is, steps S602 to S610 are re-executed over the position frames until the degree of overlap between any two position frames no longer exceeds the preset overlap threshold, at which point the candidate merged frames, or directly the target merged frames, are obtained.
When a new round of step S602 is executed, the confidence of a new position frame can be obtained as follows: obtain the confidences of the two position frames of the pair from which it was merged, and derive the confidence of the new position frame from them.
Considering that the confidence-based ranking result is used to determine the pairs best suited to be merged first, this embodiment may determine the higher of the two confidences as the confidence of the new position frame. Of course, there are various other ways to determine the confidence of the new position frame, for example: taking the lower of the two confidences, randomly selecting one of the two confidences, or taking the average of the two confidences as the confidence of the new position frame.
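The four confidence-derivation strategies just listed can be written out directly; the strategy names below are illustrative labels, not terms from the patent.

```python
import random

def merged_confidence(c1, c2, strategy="max"):
    """Confidence of a new position frame derived from the two merged
    frames' confidences. 'max' is the strategy preferred in the text;
    the others are the listed alternatives."""
    if strategy == "max":
        return max(c1, c2)
    if strategy == "min":
        return min(c1, c2)
    if strategy == "mean":
        return (c1 + c2) / 2.0
    if strategy == "random":
        return random.choice((c1, c2))
    raise ValueError(f"unknown strategy: {strategy}")
```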
To reduce the keypoint detection load on the image to be detected, when keypoint detection is performed on the image to be detected based on the target merged frame in step S208, this embodiment may first process the image to be detected as follows: a local image is cropped from the image to be detected based on the target merged frame, the local image containing the target objects in contact. Specifically, the location parameters of the target merged frame within the image to be detected are obtained; these may include the top-left vertex coordinates (x, y) of the frame and its height and width. A function such as CvRect in OpenCV (Open Source Computer Vision Library) may then be applied to these location parameters to output the image region they define, which is the local image cropped from the image to be detected.
The cropped local images usually differ in size. To adapt to a variety of keypoint detection scenarios, the local image may be resized, and keypoint detection then performed on the resized local image to obtain the multiple heatmaps of each target object. In practice, the size of the local image (its height and width) may be directly reset to a target size (a target height and target width); this resizing method is simple and efficient. Alternatively, to preserve the original aspect ratio of the local image, its height may be adjusted to the target height h and, where its width then falls short of the target width, the width padded with zeros up to the target width, so that the adjusted content within the target height and width still respects the original aspect ratio.
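The crop and the aspect-preserving resize described above can be sketched in plain NumPy (standing in for OpenCV's rect-based cropping and interpolation, which the patent mentions but does not mandate); the nearest-neighbour scaling here is an illustrative simplification.

```python
import numpy as np

def crop_local_image(image, box):
    """Crop the region given by the target merged frame's location
    parameters (x, y, w, h) -- the pure-NumPy analogue of applying
    an OpenCV rect to the image to be detected."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def resize_keep_ratio(local, target_h, target_w):
    """Scale the local image so its height equals target_h, preserving
    aspect ratio, then zero-pad the width up to target_w. Nearest-
    neighbour scaling stands in for a real interpolation routine."""
    src_h, src_w = local.shape[:2]
    scale = target_h / src_h
    new_w = min(target_w, max(1, int(round(src_w * scale))))
    rows = (np.arange(target_h) / scale).astype(int).clip(0, src_h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, src_w - 1)
    scaled = local[rows][:, cols]
    out = np.zeros((target_h, target_w) + local.shape[2:], dtype=local.dtype)
    out[:, :new_w] = scaled  # remaining columns stay zero-filled
    return out
```

The zero-filled right margin is exactly the "pad with 0" behaviour described in the text for local images narrower than the target width.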
The above steps of cropping the local image and adjusting its height and width by resetting or padding mean that, when learning keypoint detection, the model need only devote its learning capacity to fixed-size local images containing the target objects (e.g., hands), effectively easing the model's learning burden for keypoint detection. On this basis, this embodiment provides an example of performing keypoint detection on the resized local image to obtain the multiple heatmaps of each target object: the resized local image is subjected to keypoint detection by a trained detection model, yielding the multiple heatmaps of each target object.
The detection model may be a keypoint detection model based on a convolutional neural network. Its input is a local image of size (h, w), and suppose the number of target objects in contact contained in that local image is two; the output of the detection model is then 2*n heatmaps of size (h/s, w/s). Each heatmap characterizes the confidence of one keypoint, and the position of highest confidence within a heatmap is the position of the corresponding keypoint. Here s is the downsampling rate of the heatmap relative to the local image, which is fixed by the network structure design of the detection model; n is the number of keypoints of a single target object, this number being the same for every target object. The 2*n heatmaps represent all keypoint positions of the two target objects (e.g., the left and right hands): the first n heatmaps represent the keypoints of the first target object (e.g., the left hand) and the last n heatmaps those of the second target object (e.g., the right hand), and which keypoint each of a target object's n heatmaps characterizes is fixed. Since the number of heatmaps output by the detection model provided in this embodiment is twice the number of keypoints of a single target object, the detection model may be called a dual-channel detection model. Of course, it should be understood that when the number of target objects in contact contained in the local image is M (M = 3, 4, 5, ...), the number of heatmaps output by the correspondingly trained detection model is M times the number of keypoints of a single target object, and the detection model may then be called an M-channel detection model.
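The channel layout just described can be decoded mechanically: split the stack of M*n heatmaps into per-object groups of n, then read each keypoint off its heatmap. A minimal sketch, assuming the model output arrives as an (M*n, H, W) array and using a plain argmax per channel:

```python
import numpy as np

def decode_heatmaps(heatmaps, n_keypoints):
    """Split an (M*n, H, W) heatmap stack into per-object groups and
    read each keypoint off as the argmax position of its heatmap.
    Channel layout follows the text: channels [0, n) belong to the
    first object, [n, 2n) to the second, and so on."""
    m = heatmaps.shape[0] // n_keypoints
    objects = []
    for obj in range(m):
        chans = heatmaps[obj * n_keypoints:(obj + 1) * n_keypoints]
        pts = []
        for hm in chans:
            y, x = np.unravel_index(np.argmax(hm), hm.shape)
            pts.append((int(x), int(y)))
        objects.append(pts)
    return objects
```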
For ease of understanding, the heatmaps and the keypoints they characterize can be described taking the hand as the target object. Referring to the schematic of hand heatmaps shown in Fig. 7: the 1st heatmap of one hand (the right hand) may represent the location of the thumb-tip keypoint, and the (n+1)th heatmap then correspondingly represents the thumb-tip keypoint of the other hand (the left hand). Likewise, the 2nd heatmap represents the location of the right hand's index-fingertip keypoint, and the (n+2)th heatmap that of the left hand's index-fingertip keypoint.
The step of determining the keypoints of each target object in contact based on the heatmaps may include the following steps 1) to 3):
Step 1): calculate the brightness value of each pixel in the heatmap, where brightness characterizes the confidence of the corresponding keypoint in the heatmap. A heatmap is in essence an h*w two-dimensional matrix; the larger the value at a matrix position, the higher the brightness of the corresponding pixel when the heatmap is visualized. The position coordinates of the pixel with the maximum brightness value are found in the two-dimensional matrix using the following formula (1):
(x, y) = argmax(hx, hy)   (1)
where argmax(hx, hy) denotes traversing the heatmap from its reference coordinate (hx, hy) to determine the position coordinates (x, y) of the pixel with the maximum brightness value.
Step 2): filter the heatmaps according to a preset keypoint brightness threshold and the obtained maximum brightness value, per the following formula (2); that is, if the position coordinates (x, y) of the pixel with the maximum brightness value do not satisfy formula (2), the heatmap containing that pixel is filtered out:
heatmap[x, y] > conf_thresh   (2)
where heatmap[x, y] denotes the obtained maximum brightness value, i.e., the brightness at position (x, y), and conf_thresh is the preset keypoint brightness threshold. If the brightness at (x, y) falls below the preset threshold, the brighter pixels in that heatmap are attributable to interference such as noise and do not correspond to a keypoint of the target object.
Step 3): determine the keypoints of each target object in contact from the filtered heatmaps. First, for each filtered heatmap, the coordinates of the keypoint in that heatmap may be calculated using an algorithm such as the soft-argmax function; then, according to the preset mapping between heatmaps and target objects, the coordinates computed in the heatmap are converted into original position coordinates on the target object; finally, the keypoints of each target object are determined from the converted original position coordinates of all keypoints.
After the keypoints of each target object are determined, posture recovery may be performed over all the keypoints of the target object, so that its posture, expression and other content can be recognized from the recovery result.
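Steps 1) to 3) can be sketched as follows. The softmax-weighted average below is one common form of soft-argmax, not necessarily the patent's; the coordinate mapping assumes the heatmap was produced from a crop box (x, y, w, h) at downsampling rate s, as described earlier.

```python
import numpy as np

def soft_argmax(heatmap, conf_thresh=0.1):
    """Keypoint decoding per steps 1)-3): reject the heatmap if its
    peak brightness is below conf_thresh (formula (2)), otherwise
    return sub-pixel coordinates as the softmax-weighted average of
    all pixel positions (one common form of soft-argmax)."""
    if heatmap.max() <= conf_thresh:
        return None  # heatmap filtered out as noise
    w = np.exp(heatmap - heatmap.max())
    w /= w.sum()
    ys, xs = np.mgrid[0:heatmap.shape[0], 0:heatmap.shape[1]]
    return float((w * xs).sum()), float((w * ys).sum())

def to_original_coords(point, box, downsample):
    """Map heatmap coordinates back into the image to be detected
    using the crop box (x, y, w, h) and the downsampling rate s."""
    px, py = point
    bx, by, _, _ = box
    return bx + px * downsample, by + py * downsample
```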
In addition, the training process of the above detection model can follow these four steps:
First, multiple training images annotated with the keypoint positions of target objects are input to the detection model to be trained, each training image containing at least two target objects in contact, with the degree of overlap between any two target objects reaching the preset overlap threshold.
Second, the training images are detected by the detection model to be trained, which outputs the heatmaps of each target object in the training images.
Third, the keypoint positions of each target object in the training images are obtained from those heatmaps.
Fourth, the parameters of the detection model to be trained are optimized based on the keypoint positions it produced and the annotated keypoint positions, until the degree of matching between the two reaches a preset matching degree, at which point training is determined to have ended and the trained detection model is obtained.
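The supervision signal and stop criterion of the fourth step can be sketched as below. This is only an illustration under stated assumptions: the patent does not fix a target encoding or loss, so Gaussian target heatmaps and a mean-squared error are borrowed from common practice, and the optimizer update itself is elided.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=1.5):
    """Ground-truth heatmap for one annotated keypoint: a Gaussian
    bump at the labelled (x, y) position (a common encoding; the
    patent does not fix the target representation)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def heatmap_loss(pred, keypoints, shape):
    """Step four's supervision signal: mean-squared error between the
    predicted heatmap stack and the Gaussian targets rendered from
    the annotated keypoint positions."""
    target = np.stack([gaussian_heatmap(shape, kp) for kp in keypoints])
    return float(((pred - target) ** 2).mean())

def matched(pred, keypoints, tol=1):
    """Stop criterion ('preset matching degree'): every predicted
    peak lies within tol pixels of its annotated keypoint."""
    for hm, (cx, cy) in zip(pred, keypoints):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        if abs(x - cx) > tol or abs(y - cy) > tol:
            return False
    return True
```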
In summary, the above embodiment performs keypoint detection based on a target merged frame corresponding to at least two target objects in contact, obtaining multiple heatmaps that characterize the keypoints at different locations on those target objects, and determines the keypoints of the target objects from the heatmaps. This approach effectively mitigates the inaccurate keypoint detection caused by contact interaction between target objects, improving the detection accuracy of keypoints.
Embodiment three:
For the keypoint detection method provided in embodiment two, an embodiment of the invention provides a gesture recognition method, which includes the following steps 1 and 2:
Step 1: perform keypoint detection on a hand image to be detected using the keypoint detection method provided in embodiment two, obtaining the keypoints of each hand. For brevity, for this step of obtaining hand keypoints refer to the corresponding content of the foregoing method embodiment two.
Step 2: recognize the gesture category from the keypoints of each hand. In one specific implementation, posture recovery may be performed over all the keypoints of each hand, and gesture categories of that hand such as handshake, fist and grasp recognized from the recovery result.
With the gesture recognition method provided in this embodiment, the keypoints of each hand are first obtained using the keypoint detection method provided in embodiment two, and the gesture category is then recognized from those keypoints. Because this embodiment builds on the improved detection accuracy of that keypoint detection method, it effectively improves the accuracy of gesture category recognition.
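Step 2 above can be illustrated with a deliberately simple classifier. Everything here is an assumption for illustration: the 21-point hand layout (index 0 = wrist, indices 4/8/12/16/20 = fingertips) is borrowed from common hand-keypoint schemes, and the distance threshold stands in for the posture-recovery procedure the patent leaves unspecified.

```python
import math

def classify_gesture(keypoints, threshold=50.0):
    """Toy stand-in for step 2: classify a hand as 'fist', 'open' or
    'other' by counting fingertips lying farther than threshold
    pixels from the wrist point. Indices follow the common 21-point
    hand layout -- an assumption, since the patent does not fix a
    keypoint scheme."""
    px, py = keypoints[0]
    extended = sum(
        1 for i in (4, 8, 12, 16, 20)
        if math.hypot(keypoints[i][0] - px, keypoints[i][1] - py) > threshold
    )
    if extended == 0:
        return "fist"
    if extended >= 4:
        return "open"
    return "other"
```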
Example four:
For the keypoint detection method provided in embodiment two, an embodiment of the invention provides a keypoint detection apparatus. Referring to the structural block diagram of the keypoint detection apparatus shown in Fig. 8, the apparatus includes the following modules:
An image acquisition module 802, configured to obtain an image to be detected, the image to be detected containing target objects.
A target detection module 804, configured to perform target detection on the image to be detected and obtain the position frame of each target object.
A position frame merging module 806, configured to merge position frames based on the position frame of each target object and obtain a target merged frame; wherein the image corresponding to the target merged frame contains at least two target objects in contact.
A keypoint detection module 808, configured to perform keypoint detection on the image to be detected based on the target merged frame and obtain multiple heatmaps of each target object; wherein the different heatmaps characterize keypoints located at different positions on the target object.
A keypoint determination module 810, configured to determine the keypoints of each target object in contact based on the heatmaps.
The keypoint detection apparatus provided by this embodiment of the invention first performs target detection on the image to be detected to obtain the position frame of each target object; it then merges position frames based on those frames to obtain a target merged frame, the image corresponding to the target merged frame containing at least two target objects in contact; it next performs keypoint detection on the image to be detected based on the target merged frame to obtain multiple heatmaps of each target object, the different heatmaps characterizing keypoints at different locations on the target object; and it finally determines the keypoints of each target object in contact based on the heatmaps. By performing keypoint detection on a target merged frame corresponding to at least two target objects in contact, this embodiment obtains heatmaps that characterize keypoints at different locations on those target objects and determines the keypoints from them, effectively mitigating the inaccurate keypoint detection caused by contact interaction between target objects and improving the detection accuracy of keypoints.
In some embodiments, the position frame merging module 806 is further configured to: based on the position frame of each target object, repeat a preset merge operation on the position frames until the degree of overlap between any two position frames does not exceed a preset overlap threshold, obtaining the target merged frame.
In some embodiments, the position frame merging module 806 is further configured to: based on the position frame of each target object, repeat a preset merge operation on the position frames until the degree of overlap between any two position frames does not exceed a preset overlap threshold, obtaining candidate merged frames; and screen the candidate merged frames for the target merged frame according to the number of target objects contained in the candidate merged frames.
In some embodiments, the merge operation includes a position-frame-pair determination sub-operation and a new-position-frame merging sub-operation. The determination sub-operation includes: determining a position frame pair to be merged from the multiple position frames, the pair including two position frames. The merging sub-operation includes: calculating the degree of overlap between the position frames of the pair, and, if the calculated degree of overlap is greater than the preset overlap threshold, merging the position frames of the pair into a new position frame whose boundary is determined according to the boundaries of the two position frames of the pair.
In some embodiments, the determination sub-operation further includes: obtaining the confidence of each position frame; sorting the position frames by confidence to obtain a position frame ranking result; and determining the position frame pair to be merged from the multiple position frames according to the ranking result.
In some embodiments, the determination sub-operation further includes: obtaining the confidences of the two position frames of a pair, and deriving the confidence of the new position frame from the confidences of the two position frames.
In some embodiments, the keypoint detection module 808 is further configured to: crop a local image containing the target objects in contact from the image to be detected based on the target merged frame; resize the local image; and perform keypoint detection on the resized local image to obtain the multiple heatmaps of each target object.
In some embodiments, the keypoint detection module 808 is further configured to: perform keypoint detection on the resized local image through a trained detection model, obtaining the multiple heatmaps of each target object.
In some embodiments, the keypoint detection apparatus further includes a model training module (not shown in the figure), configured to: input multiple training images annotated with the keypoint positions of target objects to the detection model to be trained, each training image containing at least two target objects in contact with the degree of overlap between any two target objects reaching the preset overlap threshold; detect the training images with the detection model to be trained and output the heatmaps of each target object in the training images; obtain the keypoint positions of each target object in the training images from those heatmaps; and optimize the parameters of the detection model to be trained based on the keypoint positions it produced and the annotated keypoint positions, until the degree of matching between the two reaches a preset matching degree, at which point training is determined to have ended and the trained detection model is obtained.
In some embodiments, the keypoint determination module 810 is further configured to: obtain the brightness value of each pixel in a heatmap, the brightness characterizing the confidence of the corresponding keypoint in the heatmap; filter the heatmaps according to a preset keypoint brightness threshold and the obtained maximum brightness value; and determine the keypoints of each target object in contact from the filtered heatmaps.
The apparatus provided by this embodiment has the same implementation principle and technical effect as the foregoing embodiment two; for brevity, where this embodiment is silent, refer to the corresponding content of the foregoing method embodiment two.
Embodiment five:
For the gesture recognition method provided in embodiment three, an embodiment of the invention provides a gesture recognition apparatus, which includes:
A hand keypoint detection module, configured to perform keypoint detection on a hand image to be detected using the keypoint detection method provided in embodiment two, obtaining the keypoints of each hand.
A gesture recognition module, configured to recognize the gesture category from the keypoints of each hand.
Embodiment six:
Based on the foregoing embodiments, this embodiment provides a keypoint detection system, which includes an image acquisition device, a processor, and a storage device. The image acquisition device is configured to acquire the image to be detected; the storage device stores a computer program which, when run by the processor, executes any of the keypoint detection methods provided by embodiment two.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working process of the system described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Further, this embodiment also provides an electronic device including a memory and a processor, the memory storing a computer program runnable on the processor; when the processor executes the computer program, it carries out the steps of any of the keypoint detection methods provided by embodiment two above, or the steps of the gesture recognition method provided by embodiment three.
Further, this embodiment also provides a computer-readable storage medium storing a computer program which, when run by a processing device, executes the steps of any of the keypoint detection methods provided by embodiment two above, or the steps of the gesture recognition method provided by embodiment three.
The computer program product of the keypoint detection method, gesture recognition method, apparatus and system provided by the embodiments of the invention includes a computer-readable storage medium storing program code, the instructions of which can be used to execute the methods described in the foregoing method embodiments; for the specific implementation, see the method embodiments, which are not repeated here.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the invention in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the methods described in the embodiments of the invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), and a magnetic or optical disk.
Finally, it should be noted that the embodiments described above are only specific embodiments of the invention, used to illustrate its technical solution rather than to limit it, and the scope of protection of the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field may, within the technical scope disclosed by the invention, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or make equivalent substitutions for some of the technical features; such modifications, variations or substitutions do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the invention, and shall all be covered within the scope of protection of the invention. Therefore, the scope of protection of the invention shall be subject to the scope of protection of the claims.
Claims (16)
1. A key point detection method, characterized in that the method comprises:
acquiring an image to be detected, the image to be detected containing target objects;
performing target detection on the image to be detected to obtain a position box for each target object;
merging position boxes based on the position box of each target object to obtain a target merged box, wherein the image region corresponding to the target merged box contains at least two target objects that are in contact with each other;
performing key point detection on the image to be detected based on the target merged box to obtain multiple heat maps for each target object, wherein different heat maps characterize key points located at different positions on the target object; and
determining the key points of each target object in contact based on the heat maps.
2. The method according to claim 1, characterized in that the step of merging position boxes based on the position box of each target object to obtain the target merged box comprises:
based on the position box of each target object, repeatedly performing a preset merge operation on the position boxes until the overlap degree between any two position boxes is no greater than a preset overlap threshold, to obtain the target merged box.
3. The method according to claim 1, characterized in that the step of merging position boxes based on the position box of each target object to obtain the target merged box comprises:
based on the position box of each target object, repeatedly performing a preset merge operation on the position boxes until the overlap degree between any two position boxes is no greater than a preset overlap threshold, to obtain candidate merged boxes; and
screening the candidate merged boxes according to the number of target objects contained in each candidate merged box, to obtain the target merged box.
4. The method according to claim 2 or 3, characterized in that the merge operation comprises:
determining a position box pair to be merged from the multiple position boxes, wherein the position box pair comprises two position boxes;
calculating the overlap degree between the two position boxes of the position box pair; and
if the calculated overlap degree is greater than the preset overlap threshold, merging the position boxes of the pair into a new position box, wherein the boundary of the new position box is determined according to the boundaries of the two position boxes of the pair.
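Claims 2 to 4 describe an iterative merge: compute the pairwise overlap between position boxes and fuse any pair above a threshold until no pair overlaps too much. A minimal sketch of that loop, assuming axis-aligned boxes in (x1, y1, x2, y2) form and intersection-over-union as the overlap measure (the patent does not fix a particular overlap metric):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def merge_boxes(boxes, thresh=0.1):
    """Repeat the merge operation until no pair of boxes overlaps by more
    than `thresh`, so touching objects end up inside one merged box."""
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > thresh:
                    a, b = boxes[i], boxes[j]
                    # boundary of the new box spans both old boundaries
                    boxes[j] = [min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3])]
                    del boxes[i]
                    merged = True
                    break
            if merged:
                break
    return boxes
```

With two overlapping hand boxes and one far-away box, the first two collapse into a single merged box while the isolated box is left untouched.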
5. The method according to claim 4, characterized in that the step of determining a position box pair to be merged from the multiple position boxes comprises:
obtaining the confidence of each position box;
sorting the position boxes according to their confidences to obtain a position box ranking result; and
determining the position box pair to be merged from the multiple position boxes according to the position box ranking result.
6. The method according to claim 5, characterized in that the step of obtaining the confidence of each position box comprises:
obtaining the confidences of the two position boxes of the position box pair, and obtaining the confidence of the new position box according to the confidences of the two position boxes.
7. The method according to claim 1, characterized in that the step of performing key point detection on the image to be detected based on the target merged box to obtain multiple heat maps for each target object comprises:
extracting a local image from the image to be detected based on the target merged box, the local image containing the target objects that are in contact; and
resizing the local image, and performing key point detection on the resized local image to obtain the multiple heat maps for each target object.
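The crop-and-resize step of claim 7 can be sketched in plain NumPy. Nearest-neighbour resampling here stands in for whatever interpolation a real pipeline would use, and `out_size` is an assumed model input size, neither of which the patent specifies:

```python
import numpy as np

def crop_and_resize(image, box, out_size=(256, 256)):
    """Crop the region of `image` covered by the merged box (x1, y1, x2, y2),
    then resize it with nearest-neighbour sampling to `out_size` (h, w)."""
    x1, y1, x2, y2 = box
    patch = image[y1:y2, x1:x2]
    h, w = patch.shape[:2]
    out_h, out_w = out_size
    # nearest-neighbour index maps from output pixels back into the patch
    ys = (np.arange(out_h) * h // out_h).clip(0, h - 1)
    xs = (np.arange(out_w) * w // out_w).clip(0, w - 1)
    return patch[ys[:, None], xs[None, :]]
```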
8. The method according to claim 7, characterized in that the step of performing key point detection on the resized local image to obtain multiple heat maps for each target object comprises:
performing key point detection on the resized local image with a trained detection model, to obtain the multiple heat maps for each target object.
9. The method according to claim 8, characterized in that the method further comprises:
inputting multiple training images labeled with the key point positions of target objects into a detection model to be trained, wherein each training image contains at least two target objects in contact, and the overlap degree between any two of the target objects reaches the preset overlap threshold;
detecting the training images with the detection model to be trained, and outputting a heat map for each target object in the training images;
obtaining the key point position of each target object in the training images based on the heat map of each target object; and
optimizing the parameters of the detection model to be trained based on the key point positions obtained by the model and the labeled key point positions, until the matching degree between the key point positions obtained by the detection model and the labeled key point positions reaches a preset matching degree, at which point training is determined to be finished and the trained detection model is obtained.
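The claim leaves open how the labeled heat maps are built. A common choice in heat-map-based keypoint training (an assumption here, not stated in the patent) is one map per key point with a 2-D Gaussian peak at the labeled position, so pixel brightness encodes key-point confidence:

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=2.0):
    """One training-target heat map: a 2-D Gaussian whose peak sits on the
    labeled key point (cy, cx); brightness falls off with distance."""
    h, w = shape
    cy, cx = center
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
```

The model is then regressed against one such map per key point, and the peak of each predicted map is read back out as that key point's position.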
10. The method according to claim 1, characterized in that the step of determining the key points of each target object in contact based on the heat maps comprises:
obtaining the luminance value of each pixel in the heat map, wherein the luminance value characterizes the confidence of the corresponding key point in the heat map;
filtering the heat map according to a preset key point luminance threshold and the obtained maximum luminance value; and
determining the key points of each target object in contact according to the filtered heat map.
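A sketch of claim 10's filtering step. It assumes luminance normalized to [0, 1] and treats the preset threshold as relative to the map's maximum brightness; both are assumptions, since the patent only says the map is filtered against a threshold and the maximum luminance:

```python
import numpy as np

def heatmap_to_keypoint(heatmap, luminance_thresh=0.5):
    """Filter the heat map against a preset luminance threshold and return
    the key point as (x, y, confidence), or None when even the brightest
    response is too weak to count as a key point."""
    max_val = heatmap.max()
    if max_val < luminance_thresh:
        return None
    # keep only pixels close enough to the maximum brightness
    filtered = np.where(heatmap >= luminance_thresh * max_val, heatmap, 0.0)
    y, x = np.unravel_index(filtered.argmax(), filtered.shape)
    return int(x), int(y), float(max_val)
```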
11. A gesture recognition method, characterized in that the method comprises:
performing key point detection on a hand image to be detected using the key point detection method according to any one of claims 1 to 10, to obtain the key points of each hand; and
recognizing a gesture category according to the key points of each hand.
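Claim 11 leaves the classification step open. A deliberately tiny rule-based sketch, assuming a hypothetical 21-point hand layout with the wrist at index 0 and fingertips at indices 4, 8, 12, 16, 20 (none of this layout, and no particular classifier, is specified by the patent):

```python
import math

# Hypothetical 21-point hand layout: index 0 is the wrist, indices
# 4, 8, 12, 16, 20 are the five fingertips (an assumed convention).
FINGERTIPS = (4, 8, 12, 16, 20)

def classify_gesture(keypoints, open_thresh=0.6):
    """Toy classifier: a hand whose fingertips sit far from the wrist,
    relative to the hand's bounding-box diagonal, reads as an open palm;
    otherwise it reads as a fist."""
    wx, wy = keypoints[0]
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    diag = math.hypot(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    mean_reach = sum(math.hypot(keypoints[i][0] - wx, keypoints[i][1] - wy)
                     for i in FINGERTIPS) / len(FINGERTIPS)
    return "open_palm" if mean_reach / diag > open_thresh else "fist"
```

A production system would more likely feed normalized key points into a learned classifier; the rule above only illustrates that the key points from claims 1 to 10 carry enough geometry to separate coarse gesture categories.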
12. A key point detection apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to acquire an image to be detected, the image to be detected containing target objects;
a target detection module, configured to perform target detection on the image to be detected to obtain a position box for each target object;
a position box merging module, configured to merge position boxes based on the position box of each target object to obtain a target merged box, wherein the image region corresponding to the target merged box contains at least two target objects that are in contact;
a key point detection module, configured to perform key point detection on the image to be detected based on the target merged box to obtain multiple heat maps for each target object, wherein different heat maps characterize key points located at different positions on the target object; and
a key point determining module, configured to determine the key points of each target object in contact based on the heat maps.
13. A gesture recognition apparatus, characterized in that the apparatus comprises:
a hand key point detection module, configured to perform key point detection on a hand image to be detected using the key point detection method according to any one of claims 1 to 10, to obtain the key points of each hand; and
a gesture recognition module, configured to recognize a gesture category according to the key points of each hand.
14. A key point detection system, characterized in that the system comprises an image acquisition device, a processor, and a storage device;
the image acquisition device is configured to acquire an image to be detected; and
a computer program is stored on the storage device which, when run by the processor, performs the key point detection method according to any one of claims 1 to 10 or the gesture recognition method according to claim 11.
15. An electronic device, comprising a memory and a processor, the memory storing a computer program runnable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the key point detection method according to any one of claims 1 to 10 or of the gesture recognition method according to claim 11.
16. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when run by a processor, executes the steps of the key point detection method according to any one of claims 1 to 10 or of the gesture recognition method according to claim 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910830741.5A CN110532984B (en) | 2019-09-02 | 2019-09-02 | Key point detection method, gesture recognition method, device and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110532984A true CN110532984A (en) | 2019-12-03 |
CN110532984B CN110532984B (en) | 2022-10-11 |
Family
ID=68666665
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910830741.5A Active CN110532984B (en) | 2019-09-02 | 2019-09-02 | Key point detection method, gesture recognition method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110532984B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015219892A (en) * | 2014-05-21 | 2015-12-07 | 大日本印刷株式会社 | Gaze analysis system and gaze analysis device |
CN106778585A (en) * | 2016-12-08 | 2017-05-31 | 腾讯科技(上海)有限公司 | Face key point tracking method and device |
CN108268869A (en) * | 2018-02-13 | 2018-07-10 | 北京旷视科技有限公司 | Object detection method, apparatus and system |
CN108875482A (en) * | 2017-09-14 | 2018-11-23 | 北京旷视科技有限公司 | Object detecting method and device, neural network training method and device |
CN108985259A (en) * | 2018-08-03 | 2018-12-11 | 百度在线网络技术(北京)有限公司 | Human motion recognition method and device |
CN109509222A (en) * | 2018-10-26 | 2019-03-22 | 北京陌上花科技有限公司 | The detection method and device of straight line type objects |
CN109801335A (en) * | 2019-01-08 | 2019-05-24 | 北京旷视科技有限公司 | Image processing method, device, electronic equipment and computer storage medium |
CN110047095A (en) * | 2019-03-06 | 2019-07-23 | 平安科技(深圳)有限公司 | Tracking method, device and terminal device based on target detection |
Non-Patent Citations (2)
Title |
---|
MANDAR HALDEKAR et al.: "Identifying Spatial Relations in Images using Convolutional Neural Networks", 2017 International Joint Conference on Neural Networks (IJCNN) *
XIA Hansheng et al.: "Distracted driving behavior recognition based on human body key points", Computer Technology and Development (《计算机技术与发展》) *
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111178192A (en) * | 2019-12-18 | 2020-05-19 | 北京达佳互联信息技术有限公司 | Position identification method and device for target object in image |
CN111178192B (en) * | 2019-12-18 | 2023-08-22 | 北京达佳互联信息技术有限公司 | Method and device for identifying position of target object in image |
CN113012089B (en) * | 2019-12-19 | 2024-07-09 | 北京金山云网络技术有限公司 | Image quality evaluation method and device |
CN113012089A (en) * | 2019-12-19 | 2021-06-22 | 北京金山云网络技术有限公司 | Image quality evaluation method and device |
CN111208509A (en) * | 2020-01-15 | 2020-05-29 | 中国人民解放军国防科技大学 | Ultra-wideband radar human body target posture visualization enhancing method |
CN111208509B (en) * | 2020-01-15 | 2020-12-29 | 中国人民解放军国防科技大学 | Ultra-wideband radar human body target posture visualization enhancing method |
CN111325171A (en) * | 2020-02-28 | 2020-06-23 | 深圳市商汤科技有限公司 | Abnormal parking monitoring method and related product |
CN111524188A (en) * | 2020-04-24 | 2020-08-11 | 杭州健培科技有限公司 | Lumbar positioning point acquisition method, equipment and medium |
CN111767792A (en) * | 2020-05-22 | 2020-10-13 | 上海大学 | Multi-person key point detection network and method based on classroom scene |
WO2022001106A1 (en) * | 2020-06-30 | 2022-01-06 | 北京市商汤科技开发有限公司 | Key point detection method and apparatus, and electronic device, and storage medium |
CN111783882A (en) * | 2020-06-30 | 2020-10-16 | 北京市商汤科技开发有限公司 | Key point detection method and device, electronic equipment and storage medium |
CN111948609B (en) * | 2020-08-26 | 2022-02-18 | 东南大学 | Binaural sound source positioning method based on Soft-argmax regression device |
CN111948609A (en) * | 2020-08-26 | 2020-11-17 | 东南大学 | Binaural sound source positioning method based on Soft-argmax regression device |
CN112464753B (en) * | 2020-11-13 | 2024-05-24 | 深圳市优必选科技股份有限公司 | Method and device for detecting key points in image and terminal equipment |
CN112464753A (en) * | 2020-11-13 | 2021-03-09 | 深圳市优必选科技股份有限公司 | Method and device for detecting key points in image and terminal equipment |
CN112714253A (en) * | 2020-12-28 | 2021-04-27 | 维沃移动通信有限公司 | Video recording method and device, electronic equipment and readable storage medium |
CN112784765A (en) * | 2021-01-27 | 2021-05-11 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for recognizing motion |
CN112861678A (en) * | 2021-01-29 | 2021-05-28 | 上海依图网络科技有限公司 | Image identification method and device |
CN112861678B (en) * | 2021-01-29 | 2024-04-19 | 上海依图网络科技有限公司 | Image recognition method and device |
CN112836745A (en) * | 2021-02-02 | 2021-05-25 | 歌尔股份有限公司 | Target detection method and device |
CN112836745B (en) * | 2021-02-02 | 2022-12-09 | 歌尔股份有限公司 | Target detection method and device |
CN113128383A (en) * | 2021-04-07 | 2021-07-16 | 杭州海宴科技有限公司 | Recognition method for campus student cheating behavior |
CN113128436A (en) * | 2021-04-27 | 2021-07-16 | 北京百度网讯科技有限公司 | Method and device for detecting key points |
CN113972006A (en) * | 2021-10-22 | 2022-01-25 | 中冶赛迪重庆信息技术有限公司 | Live animal health detection method and system based on infrared temperature measurement and image recognition |
CN115166790A (en) * | 2022-05-23 | 2022-10-11 | 集度科技有限公司 | Road data processing method, device, equipment and storage medium |
CN114998424A (en) * | 2022-08-04 | 2022-09-02 | 中国第一汽车股份有限公司 | Vehicle window position determining method and device and vehicle |
CN115100691B (en) * | 2022-08-24 | 2023-08-08 | 腾讯科技(深圳)有限公司 | Method, device and equipment for acquiring key point detection model and detecting key point |
CN115100691A (en) * | 2022-08-24 | 2022-09-23 | 腾讯科技(深圳)有限公司 | Method, device and equipment for acquiring key point detection model and detecting key points |
CN117079242A (en) * | 2023-09-28 | 2023-11-17 | 比亚迪股份有限公司 | Deceleration strip determining method and device, storage medium, electronic equipment and vehicle |
CN117079242B (en) * | 2023-09-28 | 2024-01-26 | 比亚迪股份有限公司 | Deceleration strip determining method and device, storage medium, electronic equipment and vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110532984A (en) | Key point detection method, gesture recognition method, apparatus and system | |
CN107808143B (en) | Dynamic gesture recognition method based on computer vision | |
US11379996B2 (en) | Deformable object tracking | |
CN104350509B (en) | Fast pose detector | |
TW201814435A (en) | Method and system for gesture-based interactions | |
CN107679455A (en) | Target tracker, method and computer-readable recording medium | |
CN109829368B (en) | Palm feature recognition method and device, computer equipment and storage medium | |
CN108520229A (en) | Image detecting method, device, electronic equipment and computer-readable medium | |
CN108764024A (en) | Face recognition model generation apparatus, method and computer-readable storage medium | |
CN109034397A (en) | Model training method, device, computer equipment and storage medium | |
CN106537305A (en) | Touch classification | |
CN110517319A (en) | Method and related apparatus for determining camera pose information | |
CN109685037A (en) | Real-time action recognition method, apparatus and electronic device | |
CN108292362A (en) | gesture recognition for cursor control | |
CN102073414B (en) | Multi-touch tracking method based on machine vision | |
KR20150108888A (en) | Part and state detection for gesture recognition | |
CN109933206B (en) | Finger non-contact drawing method and system based on Leap Motion | |
CN109886951A (en) | Video processing method, device and electronic equipment | |
CN109934065A (en) | Method and apparatus for gesture recognition | |
CN109325456A (en) | Target identification method, device, target identification equipment and storage medium | |
CN108960192A (en) | Action identification method and its neural network generation method, device and electronic equipment | |
CN109598198A (en) | Method, apparatus, medium, program and device for recognizing the movement direction of a gesture | |
CN111598149B (en) | Loop detection method based on attention mechanism | |
CN106200971A (en) | Human-computer interaction system based on gesture recognition and its operation method | |
CN114138121B (en) | User gesture recognition method, device and system, storage medium and computing equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||