CN105844659B - The tracking and device of moving component - Google Patents
Method and device for tracking a moving component
- Publication number: CN105844659B (application CN201510018291.1A)
- Authority: CN (China)
- Prior art keywords
- moving component
- feature
- classification
- component
- event point
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Databases & Information Systems (AREA)
Abstract
The present invention provides a method and device for tracking a moving component, wherein the method includes: acquiring a signal with a dynamic vision sensor and outputting detected event points; identifying, with a classifier, the class and position of a moving component from the currently detected event points; and determining a motion trajectory from the successively identified positions of the moving component of that class, as the tracking result for the moving component of that class; wherein the classifier is trained in advance on sample signals acquired by the dynamic vision sensor. With the present invention, the tracking process has low energy consumption and can capture moving targets quickly and accurately.
Description
Technical field
The present invention relates to the technical field of intelligent devices, and in particular to a method and device for tracking a moving component.
Background art
Moving-target tracking is a research hotspot in the field of computer vision and is widely applied in fields such as military reconnaissance, surveillance systems, and human-computer interaction. For example, a moving target in the environment can be captured, tracked, and analyzed, and the operating mode of a terminal device can be switched according to the analyzed motion pattern of the moving target, thereby improving the human-computer interaction experience. The terminal device may be a mobile terminal, a camera, smart glasses, a smart TV, and the like.
At present, existing moving-target tracking methods mainly use a conventional imaging device based on a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensor to acquire images of the scene in which the moving target is located; then, after the moving region and the background region are segmented out of the acquired images, the moving target is recognized from the segmented moving region and tracked on that basis. Segmenting the moving target out of the scene takes a long time, so existing moving-target tracking methods cannot be applied in situations that require fast tracking.
Moreover, existing moving-target tracking methods require the imaging device to remain switched on at all times in order to capture fast-moving targets in the environment quickly, and a conventional CCD- or CMOS-based imaging device consumes considerable energy. The tracking process therefore has high energy consumption, which is unfavorable for application on portable devices such as mobile phones and smart glasses.
Therefore, it is necessary to provide a moving-target tracking method that has low energy consumption and can capture moving targets quickly.
Summary of the invention
The purpose of the present invention is to solve at least one of the above technical deficiencies, in particular the problems of high energy consumption of the tracking process and the inability to capture moving targets quickly and accurately.
The present invention provides a method for tracking a moving component, comprising:
acquiring a signal with a dynamic vision sensor and outputting detected event points;
identifying, with a classifier, the class and position of a moving component from the currently detected event points;
determining a motion trajectory from the successively identified positions of the moving component of that class, as the tracking result for the moving component of that class;
wherein the classifier is trained in advance on sample signals acquired by the dynamic vision sensor.
The present invention also provides a device for tracking a moving component, comprising:
a signal acquisition unit, configured to acquire a signal with a dynamic vision sensor and output detected event points;
a component recognition unit, configured to identify, with a classifier, the class and position of a moving component from the event points currently detected by the signal acquisition unit, wherein the classifier is trained in advance on sample signals acquired by the dynamic vision sensor;
a motion tracking unit, configured to determine a motion trajectory from the positions of the moving component of a class successively identified by the component recognition unit, as the tracking result for the moving component of that class.
In the scheme of this embodiment, for the event points currently detected by the dynamic vision sensor, a classifier trained in advance can identify the class and position of a moving component; for each class of moving component, a motion trajectory is determined from the successively identified positions of the moving component of that class, serving as the tracking result for the moving component of that class. Further, a corresponding action command can be recognized from the tracking result, and a corresponding response operation performed according to the action command.
Compared with existing moving-target tracking methods, on the one hand, the dynamic vision sensor in the scheme provided by the present invention can respond quickly to fast-moving components and has low energy consumption; on the other hand, since the dynamic vision sensor responds only to event points where the pixel brightness changes beyond a certain degree, the event points can be detected directly in the signal acquired by the dynamic vision sensor, without any operation of segmenting the moving object out of the scene, which effectively improves the speed and precision of tracking the moving component.
Additional aspects and advantages of the present invention are set forth in part in the following description, and in part will become apparent from the description or be learned through practice of the invention.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a classifier training method provided in an embodiment of the present invention;
Fig. 2a is a schematic flowchart of the method for tracking a moving component provided in an embodiment of the present invention;
Fig. 2b is a schematic diagram of an image acquired by the dynamic vision sensor provided in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the device for tracking a moving component provided in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the internal structure of the classifier training unit provided in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the internal structure of the component recognition unit provided in an embodiment of the present invention;
Fig. 6 is a schematic diagram of the internal structure of the motion response unit provided in an embodiment of the present invention.
Specific embodiment
The technical solutions of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Terms such as "module" and "system" used in this application are intended to include computer-related entities, such as, but not limited to, hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a module may be, but is not limited to: a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. For example, both an application running on a computing device and the computing device itself may be modules. One or more modules may reside within one process and/or thread of execution, and a module may also be located on one computer and/or distributed between two or more computers.
The inventors of the present invention found that the key reason why existing moving-target tracking methods cannot be applied in situations requiring fast tracking is: in existing moving-target tracking methods, the acquired images must undergo complicated image processing, and the moving target must be segmented out of the scene in the captured images before tracking can be performed; and segmenting the moving target out of the scene takes a long time.
Therefore, in situations requiring fast tracking, fast-moving targets can easily fail to be captured in time. For example, during bird watching, birds fly past quickly; since the above conventional imaging devices usually lack fast-reaction capture capability, the flight trajectory of a bird typically has to be tracked manually, which places heavy technical demands on the photographer.
Further, the inventors of the present invention found that a dynamic vision sensor responds only to event points where the pixel brightness changes beyond a certain degree, and has the characteristics of low energy consumption and tolerance of a wide range of illumination conditions. Its low energy consumption allows it to remain in working state while a terminal such as a mobile device is on standby, so that moving targets can be captured promptly, quickly, and accurately; once an external condition required by the user is met, the responding device can be controlled automatically to switch modes, or a warning can be issued in time when a dangerous situation is found. Its wide illumination tolerance allows the dynamic vision sensor to work effectively against different environmental backgrounds; even in a dark environment with a very weak light source, moving components can still be captured.
It is therefore contemplated that a dynamic vision sensor can be used to acquire a signal, and the event points in the acquired signal where the pixel brightness changes beyond a set range can be detected and output. The position of such an event point usually corresponds to an object moving in the scene. Based on the detected event points and a classifier trained in advance to determine the class of the moving component to which an event point belongs, the class and position of the moving component can be identified and its motion trajectory determined.
Compared with existing moving-target tracking methods, on the one hand, the dynamic vision sensor in the scheme provided by the present invention can respond quickly to fast-moving objects and has low energy consumption; on the other hand, since the dynamic vision sensor responds only to event points where the pixel brightness changes beyond a certain degree, the event points can be detected directly in the acquired signal, without any operation of segmenting the moving object out of the scene, which effectively improves the speed and precision of tracking the moving component.
The technical solutions of the present invention are described in detail below with reference to the accompanying drawings.
In an embodiment of the present invention, before a moving component is tracked, a classifier for identifying the class of the moving component to which an event point belongs can be trained in advance. As shown in Fig. 1, the training may specifically comprise the following steps:
S101: generating training samples from the event points output by the dynamic vision sensor when acquiring sample signals.
In this step, sample signals can first be collected in advance with the dynamic vision sensor. For example, for each class of moving component, the various motion processes of moving components of that class can be filmed with the dynamic vision sensor to acquire the sample signals.
Since the dynamic vision sensor responds only to event points where the pixel brightness changes beyond a certain degree, and transmits and stores the event points it responds to, the event points can be output directly from the sample signals acquired by the dynamic vision sensor; these output event points are taken as sample event points.
In practical applications, each event point has a corresponding spatial position, and the event points near the position of a given event point may be called the neighbor points of that event point. The sample signal acquired by the dynamic vision sensor describes the motion contour of an object fairly well, and the contour information generated by motion also expresses the shape information of the object itself. The neighbor points of an event point therefore provide a good structural description of it, which helps to judge which part of the moving object the event point belongs to, i.e., the class of its moving component. For example, human motion generates many event points, and from an event point and its neighbor points within a certain range it can be judged whether the event point belongs to the head, a hand, the body, and so on.
Therefore, after the event points output from the sample signals acquired by the dynamic vision sensor are taken as sample event points, the neighbor points of the currently output sample event point can be determined, and the sample event point can be classified according to the positions of the sample event point and its neighbor points, that is, the class of the moving component to which the sample event point belongs is judged. The classes of moving components may specifically be head, hand, body, and so on.
In this way, the currently output sample event point together with its neighbor points can serve as one training sample, and the judged class of the moving component to which the sample event point belongs is used to label the class of the moving component for that training sample. Specifically, for the currently output sample event point, a set number of neighbor points can be selected by sampling within a set range around the sample event point; the sample event point together with the selected neighbor points then forms one sample for training or testing.
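As a concrete illustration of forming one training sample from a sample event point and its sampled neighbor points, a minimal Python sketch follows. The function name `make_training_sample`, the sampling range, the neighbor count, and the use of relative offsets are illustrative assumptions, not the patent's actual implementation.

```python
import random

def make_training_sample(event, events, radius=40, k=8, label=None, rng=None):
    """Pair a sample event point with up to k sampled neighbours to form one sample.

    event  : (x, y) position of the current sample event point
    events : list of (x, y) event points from the same sample signal
    radius : half-size of the square sampling range around the event point
    k      : set number of neighbour points to keep
    label  : judged class of the moving component (e.g. 'head', 'hand', 'body')
    """
    rng = rng or random.Random(0)
    x, y = event
    neighbours = [p for p in events
                  if p != event and abs(p[0] - x) <= radius and abs(p[1] - y) <= radius]
    if len(neighbours) > k:
        neighbours = rng.sample(neighbours, k)
    # Store neighbour offsets relative to the event point: the structure around
    # the point, rather than its absolute position, describes the local shape.
    offsets = [(px - x, py - y) for px, py in neighbours]
    return {"event": event, "neighbours": offsets, "label": label}

sample = make_training_sample((100, 100),
                              [(100, 100), (105, 98), (90, 110), (300, 300)],
                              label="hand")
```

The far-away point (300, 300) falls outside the sampling range and is ignored; the remaining neighbours are stored as offsets from the event point.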
S102: training a deep belief network with the generated training samples and their labels to obtain the classifier.
Here, the label of a training sample is the class of the moving component assigned to that training sample.
In this step, the multiple training samples generated in step S101 form a training sample set; the deep belief network is trained with the training sample set and the label of each training sample in it, yielding the classification model. As for how to train the deep belief network, common technical means available to those skilled in the art can be used.
For example, the deep belief network is trained through multiple iterations with the generated training samples and their labels. One iteration of training specifically includes: taking the training sample set formed by the multiple training samples as the input of the deep belief network; then comparing the output of the deep belief network with the label of each training sample; and, according to the comparison result, adjusting the layer parameters of the deep belief network and continuing with the next iteration, or stopping the iteration and obtaining the classifier.
The output of the deep belief network is in fact a guess at the class of the moving component to which a sample event point belongs. By comparing the guessed class with the label assigned in advance, the error between the two can be back-propagated to adjust the parameters of each layer of the deep belief network, improving the classification accuracy of the finally obtained classifier and thereby facilitating accurate tracking of the moving component.
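The iterative compare-and-adjust loop described above can be sketched as follows. The patent trains a deep belief network; as a simplified, hypothetical stand-in, this sketch fine-tunes a tiny two-layer network with backpropagation on synthetic two-class data — the data, architecture, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 2-D features per "event point", two component classes
# (e.g. head vs. hand) separated along the first feature dimension.
y = np.arange(200) % 2                 # labels: class of the moving component
X = rng.normal(size=(200, 2))
X[:, 0] += 4.0 * y

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

losses = []
for it in range(200):                  # one pass = one training iteration
    h, p = forward(X)                  # network output: guessed class
    losses.append(-np.log(p[np.arange(len(y)), y]).mean())
    grad = p.copy(); grad[np.arange(len(y)), y] -= 1; grad /= len(y)
    # Back-propagate the error between guess and label and adjust each
    # layer's parameters, as the text describes.
    gW2 = h.T @ grad; gb2 = grad.sum(0)
    gh = grad @ W2.T * (1 - h ** 2)
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

accuracy = (forward(X)[1].argmax(axis=1) == y).mean()
```

In the patent's scheme a real deep belief network (layer-wise pre-training plus fine-tuning) would replace this toy network, but the compare-output-with-label / adjust-parameters cycle is the same.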
Based on the above classifier, the present invention provides a method for tracking a moving component. As shown in Fig. 2a, the process may specifically include the following steps:
S201: acquiring a signal with the dynamic vision sensor and outputting the detected event points.
Specifically, the dynamic vision sensor can acquire a signal in real time and output the detected event points. Each event point corresponds to one position, but the same position may correspond to several event points. Therefore, before the detected event points are output, duplicate event points need to be discarded, keeping and outputting only the most recently generated event point.
In practical applications, the acquired signal may contain noise caused by the system, the environment, and so on. Noise can therefore be removed from the event-stream signal composed of the multiple acquired event points according to the temporal order in which the events occur and their spatial adjacency.
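One common way to realize denoising by temporal order and spatial adjacency is to keep an event only if another event occurred nearby shortly before it. The sketch below assumes an `(t, x, y)` event format and illustrative thresholds `dt` and `radius`; note that an isolated first event is always discarded, a known limitation of this simple scheme.

```python
def denoise(events, dt=3000, radius=2):
    """Keep an event only if a supporting event occurred nearby shortly before.

    events : list of (t, x, y) in order of occurrence, t in microseconds
    dt     : how far back in time to look for a supporting neighbour
    radius : spatial adjacency threshold in pixels (Chebyshev distance)
    """
    kept = []
    for i, (t, x, y) in enumerate(events):
        for (tp, xp, yp) in reversed(events[:i]):
            if t - tp > dt:
                break                   # too old: all earlier events are older still
            if abs(x - xp) <= radius and abs(y - yp) <= radius:
                kept.append((t, x, y))  # supported by a recent spatial neighbour
                break
    return kept

stream = [(0, 10, 10), (1000, 11, 10), (1500, 50, 50), (2000, 12, 11)]
clean = denoise(stream)
```

Here the isolated event at (50, 50) has no recent neighbour and is removed as noise, while the cluster around (10, 10) survives.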
Further, by accumulating the event stream over a certain time interval (for example, 20 ms) and combining the positions of the event points, the event-stream signal acquired by the dynamic vision sensor can be converted into an image signal. As shown in Fig. 2b, the converted image signal essentially reflects only the contour and texture information of the moving target, and directly ignores the non-moving objects in the background.
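Accumulating one 20 ms window of the event stream into an image signal could look roughly like the following sketch; the `(t, x, y)` event format and the sensor resolution are assumptions for illustration.

```python
import numpy as np

def events_to_frame(events, width, height, t0, window=20_000):
    """Accumulate the events of one time window (e.g. 20 ms) into an image.

    events : iterable of (t, x, y), t in microseconds
    Returns a 2-D array counting events per pixel. Only moving contours
    appear, since a static background produces no events.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y in events:
        if t0 <= t < t0 + window:
            frame[y, x] += 1
    return frame

frame = events_to_frame([(0, 3, 2), (5_000, 3, 2), (19_999, 7, 1), (25_000, 0, 0)],
                        width=8, height=4, t0=0)
```

The event at t = 25 ms falls outside the window and does not contribute to this frame.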
S202: identifying, with the classifier, the class and position of the moving component from the currently detected event points.
In this step, the classifier trained in advance on the sample signals acquired by the dynamic vision sensor can determine the class of the moving component to which an event point belongs from the neighbor points of the currently detected event point. Given the currently detected event point and its neighbor points as input, the classifier outputs the class of the moving component to which that event point belongs.
The neighbor points of an event point can be determined as follows: determine the event-stream signals acquired by the dynamic vision sensor within a set time interval before the current event-stream signal (for example, all event-stream signals collected within 20 ms); then, for each determined event-stream signal, take the event points within a set spatial range around the currently detected event point (for example, a rectangle of 80 × 80 pixels) as the neighbor points of that event point.
Further, after the classes of the moving components to which all detected event points belong have been determined, for each class of moving component, the position of the moving component of that class can be determined from the positions of the event points belonging to the moving component of that class.
For example, the center position of the event points belonging to the same class of moving component can be computed, and the computed center position used as the position of the moving component of that class. In practical applications, any common clustering method known to those skilled in the art can be used to obtain the center of the event points belonging to the same class of moving component, i.e., the position of the moving component. As an example, K-means clustering can be used in an embodiment of the present invention to assign the detected event points to different moving components and obtain the centers of the moving components, so as to facilitate the subsequent accurate tracking of the moving components.
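A minimal K-means sketch in the spirit of this clustering step follows. The greedy farthest-point initialization is chosen purely to keep the example deterministic and reproducible; it is not claimed to be the patent's initialization, and empty clusters are not handled.

```python
import numpy as np

def kmeans_centers(points, k, iters=20):
    """Assign event points to k moving components; return component centers.

    points : list of (x, y) event-point positions
    Returns (centers, labels): cluster means and per-point cluster indices.
    """
    pts = np.asarray(points, dtype=float)
    # Deterministic farthest-point initialisation (for reproducibility only).
    centers = [pts[0]]
    for _ in range(k - 1):
        d = np.min([((pts - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(pts[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)                 # nearest-center assignment
        centers = np.array([pts[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

# Two well-separated groups of event points, as from two moving components.
points = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centers, labels = kmeans_centers(points, k=2)
```

Each returned center serves as the position of one moving component, as described above.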
S203: determining a motion trajectory from the successively identified positions of the moving component, as the tracking result for the moving component.
Specifically, after the class and position of the moving component are identified in step S202, the motion trajectory of the moving component of a class can be determined from the successively identified positions of the moving component of that class. In practical applications, common general tracking algorithms known to those skilled in the art can be used to determine the motion trajectory of the moving component, for example smoothing filters, temporal tracking algorithms, and so on, which are not detailed here.
More preferably, in an embodiment of the present invention, after the class and position of the moving component are identified, a plausibility check can also be applied to the identified class of the moving component, so as to exclude the positions of wrongly judged moving components, thereby improving the tracking efficiency of the moving component and increasing the tracking speed.
Specifically, after the classes of the moving components to which all detected event points belong are determined in step S202, for each class of moving component, the shape of the moving component of that class can also be determined and recorded from the positions of the event points belonging to it. It can then be judged whether the determined shape falls within the plausible shape range of the moving component of that class: if so, verification passes; otherwise, verification fails. The plausible shape range is determined from the last recorded shape of the moving component of that class and prior knowledge of the shape of moving components of that class.
More preferably, after the position of the moving component is identified in step S202, it can also be judged whether the position of the currently identified class of moving component lies within a plausible region: if so, verification passes; otherwise, verification fails. The plausible region is determined from the last recorded position of the moving component of that class and prior knowledge of the position range of moving components of that class. For example, when the moving component is a specific body part such as the head or a hand, the distance between the currently identified position of the moving component of that class (for example, the head or a hand) and the last recorded position of the moving component of that class can be computed; if the distance satisfies a certain condition and is consistent with prior knowledge of normal human form, the position of the currently identified moving component of that class is within the plausible region.
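The position plausibility check can be sketched as a simple distance test against the last recorded position. The threshold `max_step` stands in for the prior knowledge mentioned above and, like the function name, is an illustrative assumption.

```python
import math

def position_plausible(current, last, max_step=30.0):
    """Check that a newly identified component position lies within a
    plausible region around the last recorded position.

    current, last : (x, y) positions of the component (e.g. head or hand)
    max_step      : prior knowledge: largest displacement the component can
                    plausibly make between two successive detections
    """
    if last is None:                    # nothing recorded yet: accept
        return True
    dx = current[0] - last[0]
    dy = current[1] - last[1]
    return math.hypot(dx, dy) <= max_step

ok = position_plausible((105, 100), (100, 98))     # small move: passes
jump = position_plausible((400, 400), (100, 98))   # implausible jump: fails
```

A position that fails this check would be excluded rather than recorded in the tracking-unit list.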
Then, if the class of the identified moving component passes verification, the identified class and position of the moving component are recorded correspondingly, for example in a tracking-unit list built in advance, in which the positions of moving components are tracked and recorded. In this way, the motion trajectory can be determined from the successively recorded positions of the moving component of that class in the tracking-unit list, as the tracking result for the moving component.
In practical applications, because of the particular nature of dynamic-vision-sensor imaging, when a moving component pauses briefly, the moving component reflected by the event points detected by the dynamic vision sensor may show a brief disappearance of its motion trajectory. Therefore, continuous tracking of different moving components can be achieved by maintaining one tracking-unit list and smoothing the motion positions. Common smoothing means, such as Kalman filtering, can be used for the smoothing.
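One way to realize the smoothing and to bridge a brief disappearance is a constant-velocity Kalman filter that runs only its prediction step when no measurement is available (here a `None` entry). The 1-D formulation and the noise parameters below are illustrative assumptions.

```python
import numpy as np

def kalman_smooth(measurements, q=0.01, r=1.0):
    """Constant-velocity Kalman filter over 1-D component positions.

    A None measurement (component briefly vanished) triggers the prediction
    step only, so the track is bridged instead of breaking.
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: pos += vel
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise
    x = np.array([measurements[0] or 0.0, 0.0])
    P = np.eye(2) * 100.0                    # large initial uncertainty
    out = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        if z is not None:                    # update only when measured
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out

track = kalman_smooth([0.0, 1.0, 2.0, None, None, 5.0, 6.0])
```

During the two missing detections the filter coasts at the estimated velocity, so the trajectory passes smoothly through the gap.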
S204: recognizing an action command from the tracking result.
Specifically, the tracking result obtained in step S203 can be divided into action segments, features can be extracted from the action-sequence segments obtained by the division, and the corresponding action command can then be recognized from the extracted features. The extracted features include at least one of the following: position features, path features, motion-direction features, velocity features, acceleration features, and so on.
For example, it can be judged whether the instruction database stores, for the moving component of that class, a feature matching the extracted feature; if so, the action command recorded in the instruction database in correspondence with that feature is recognized as the action command corresponding to the tracking result. The instruction database is built in advance by a technician: for each class of moving component, it stores features of motion trajectories matching the moving component of that class, together with correspondingly recorded preset action commands, such as mode switching and danger alerts.
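The lookup against a pre-built instruction database can be sketched as a nearest-template match; the database contents, the `(direction, speed)` feature vectors, the tolerance, and all names below are hypothetical, for illustration only.

```python
def recognize_command(component_class, feature, instruction_db, tol=0.5):
    """Look up the action command whose stored trajectory feature matches
    the feature extracted from the tracking result.

    instruction_db maps a component class to (stored_feature, command)
    pairs; a match is the nearest stored feature within tolerance tol.
    """
    best = None
    for stored, command in instruction_db.get(component_class, []):
        dist = sum((a - b) ** 2 for a, b in zip(stored, feature)) ** 0.5
        if dist <= tol and (best is None or dist < best[0]):
            best = (dist, command)
    return best[1] if best else None

# Hypothetical database: for the 'hand' component, two stored trajectory
# features, each recorded together with a preset action command.
db = {"hand": [((1.0, 0.0), "switch_mode"), ((0.0, 1.0), "danger_alert")]}
cmd = recognize_command("hand", (0.9, 0.1), db)
none_cmd = recognize_command("head", (0.9, 0.1), db)
```

When no stored feature for that class lies within tolerance, no action command is recognized.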
In practical applications, some action commands may need to be triggered by the moving components of multiple classes each following a certain motion trajectory, or by the motion trajectory of the entire moving object.
Therefore, more preferably, moving components of different classes can also be combined into a moving object, and the motion trajectory of the moving object determined from the motion trajectories of the moving components of each class; features are then extracted from the determined motion trajectory of the moving object, and the corresponding action command is recognized from the extracted features.
As for how to determine the motion trajectory of the moving object from the motion trajectories of the moving components of each class: specifically, distances between the positions of the components in the tracking-unit list can be computed; if the distance between two components satisfies a set matching condition, the two are considered to match; the motion trajectories of the matching components can then be fitted to obtain the motion trajectory of the moving object.
Extracting features from the determined motion trajectory of the moving object specifically includes: dividing the determined motion trajectory of the moving object into action segments, and extracting features from the action-sequence segments obtained by the division.
S205: performing the corresponding response operation according to the recognized action command.
Specifically, after the action command is recognized in step S204, the corresponding operation can be performed according to the action command, for example, starting the recording function of a terminal device, rotating the lens of an imaging device, and so on.
Based on the above method for tracking a moving component, an embodiment of the present invention provides a device for tracking a moving component. As shown in Fig. 3, it may specifically include: a signal acquisition unit 301, a component recognition unit 302, and a motion tracking unit 303.
The signal acquisition unit 301 is configured to acquire a signal with the dynamic vision sensor and output the detected event points.
The component recognition unit 302 is configured to identify, with the classifier, the class and position of the moving component from the event points currently detected by the signal acquisition unit 301. The classifier is trained in advance on sample signals acquired by the dynamic vision sensor.
The motion tracking unit 303 is configured to determine, for each class of moving component, a motion trajectory from the successively identified positions of the moving component of that class, as the tracking result for the moving component of that class.
More preferably, in an embodiment of the present invention, the device for tracking a moving component may further include: a motion response unit 304.
The motion response unit 304 is configured to recognize the corresponding action command from the tracking result output by the motion tracking unit 303, and to perform the corresponding response operation according to the recognized action command.
In practical applications, the classifier may be trained in advance by another device from the sample signals acquired by the dynamic vision sensor and then stored in the device for tracking a moving component; alternatively, it may be trained in advance by the device for tracking a moving component itself before tracking.
Therefore, more preferably, in an embodiment of the present invention, the device for tracking a moving component may further include: a classifier training unit 305.
The classifier training unit 305 is configured to generate training samples from the event points output by the dynamic vision sensor when acquiring sample signals, and to train the deep belief network with the generated training samples and their labels to obtain the classifier. The label of a training sample is the class of the moving component assigned to that training sample.
Specifically, as shown in Fig. 4, the classifier training unit 305 may specifically include: a training sample collection subunit 401 and an iterative training subunit 402.
The training sample collection subunit 401 is configured to take the event points output by the signal acquisition unit 301 as sample event points; determine the neighbor points of the currently output sample event point; and take the currently output sample event point together with its neighbor points as one training sample.
The iterative training subunit 402 is configured to train the deep belief network through multiple iterations with the training samples generated by the training sample collection subunit 401 and their labels.
Specifically, repetitive exercise subelement 402 can use training sample collect subelement 401 generate training sample and
Its calibration result, when carrying out successive ignition training to depth confidence network, in an iteration training process, by multiple trained samples
Input of the training sample set of this composition as depth confidence network;By the output of depth confidence network and the mark of each training sample
Determine result to be compared;Continue next iteration according to the level parameter of comparison result percentage regulation confidence network, or stops iteration
Obtain classifier.
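The sample construction and the iterate/compare/adjust/stop control flow described above can be sketched as follows. This is a minimal illustration only, not the patented implementation: a simple softmax classifier stands in for the deep belief network, event points are treated as plain 2-D coordinates, and the names and parameters (`make_training_samples`, `k`, `epochs`, `lr`) are assumptions.

```python
import numpy as np

def make_training_samples(events, k=4):
    """One training sample = a sample event point plus its k nearest
    neighbour points, concatenated into a single feature vector."""
    pts = np.asarray(events, dtype=float)            # shape (N, 2): x, y
    samples = []
    for p in pts:
        d = np.linalg.norm(pts - p, axis=1)
        nbrs = np.argsort(d)[1:k + 1]                # skip the point itself
        samples.append(np.concatenate([p, pts[nbrs].ravel()]))
    return np.stack(samples)                         # shape (N, 2 * (k + 1))

def train_classifier(X, y, n_classes, epochs=200, lr=0.1):
    """Iterative training: feed the sample set in, compare outputs with the
    calibration results, adjust parameters and repeat, or stop early once
    the outputs match the labels."""
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.01, (X.shape[1], n_classes))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        if (p.argmax(axis=1) == y).all():            # stop iterating
            break
        err = p - np.eye(n_classes)[y]               # compare with labels
        W -= lr * (X.T @ err) / len(X)               # adjust parameters
        b -= lr * err.mean(axis=0)
    return W, b
```

In the patent the network is a deep belief network whose layer parameters are adjusted across iterations; the stand-in above only mirrors that control flow, not the network architecture.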
Preferably, as shown in Fig. 5, the component identification unit 302 may include a component category identification subunit 501 and a component position identification subunit 502.
The component category identification subunit 501 is configured to determine, with the classifier, the category of the moving component to which an event point belongs, according to the neighbour points of the event point currently detected by the signal acquisition unit 301.
The component position identification subunit 502 is configured to determine, for each category of moving component, the position of the moving component of that category according to the positions of the event points that the component category identification subunit 501 has assigned to that category. For example, the component position identification subunit 502 may compute the centre of the event points belonging to the same category of moving component and use the computed centre as the position of the moving component of that category.
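Computing each component position as the centre of that category's event points, as in the example above, can be sketched as follows (a minimal illustration; the dictionary-of-arrays return layout is an assumption):

```python
import numpy as np

def component_positions(points, labels):
    """Position of each component category = the centre (mean) of the
    event points assigned to that category by the classifier."""
    pts = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    return {c: pts[labels == c].mean(axis=0) for c in np.unique(labels)}
```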
More preferably, the component identification unit 302 may further include a plausibility verification subunit 503.
The plausibility verification subunit 503 is configured to perform a plausibility check on the category of moving component identified by the component category identification subunit 501, and, if the check passes, to record the category identified by the component category identification subunit 501 together with the position identified by the component position identification subunit 502.
Specifically, the plausibility verification subunit 503 may judge whether the position currently identified by the component position identification subunit 502 for a given category of moving component lies within a plausible region; if so, the check passes, otherwise it fails. The plausible region is determined from the last recorded position of the moving component of that category together with prior knowledge of the position range of that category of moving component.
Correspondingly, the motion tracking unit 303 may, for each category of moving component, determine the motion trajectory according to the positions of that category successively recorded by the plausibility verification subunit 503, as the tracking result of the moving component of that category.
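A minimal sketch of such a plausibility check, assuming the prior knowledge is parameterised as a maximum plausible displacement between detections (`max_step` is an illustrative name, not the patent's):

```python
import math

def is_plausible(new_pos, last_pos, max_step):
    """Pass only if the newly identified position lies within the plausible
    region around the last recorded position of this category."""
    dx = new_pos[0] - last_pos[0]
    dy = new_pos[1] - last_pos[1]
    return math.hypot(dx, dy) <= max_step
```

A tracker would record the new position only when the check passes; the successively recorded positions then form the motion trajectory.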
Preferably, as shown in Fig. 6, the action response unit 304 may include a feature extraction subunit 601, an action instruction identification subunit 602 and an action instruction response subunit 603.
The feature extraction subunit 601 is configured to divide the tracking result output by the motion tracking unit 303 into action segments and to extract features from each resulting action sequence segment. The extracted features include at least one of the following: a position feature, a route feature, a movement direction feature, a velocity feature and an acceleration feature.
More preferably, after the moving components of different categories identified by the component identification unit 302 are combined into a moving object, the feature extraction subunit 601 may further determine the motion trajectory of the moving object from the motion trajectories of the moving components of all categories output by the motion tracking unit 303, and extract features from the motion trajectory of the moving object.
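The feature kinds listed above can be computed from one trajectory segment roughly as follows (an illustrative sketch; the sampling interval `dt` and the dictionary keys are assumptions):

```python
import numpy as np

def trajectory_features(track, dt=1.0):
    """Extract position, route, movement-direction, velocity and
    acceleration features from one action sequence segment."""
    p = np.asarray(track, dtype=float)               # (T, 2) positions
    v = np.diff(p, axis=0) / dt                      # per-step velocity
    speed = np.linalg.norm(v, axis=1)
    accel = np.diff(speed) / dt
    return {
        "start": p[0], "end": p[-1],                            # position feature
        "route_length": float(speed.sum() * dt),                # route feature
        "direction": float(np.arctan2(*v.mean(axis=0)[::-1])),  # direction feature
        "mean_speed": float(speed.mean()),                      # velocity feature
        "mean_accel": float(accel.mean()) if accel.size else 0.0,
    }
```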
The action instruction identification subunit 602 is configured to identify a corresponding action instruction according to the features extracted by the feature extraction subunit 601. Specifically, for each category of moving component, the action instruction identification subunit 602 may judge whether the instruction library stores, for the moving component of that category or for the moving object, a feature matching the feature extracted by the feature extraction subunit 601; if so, the action instruction recorded in the instruction library in correspondence with that feature is identified as the action instruction corresponding to the tracking result.
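The instruction-library lookup can be sketched as a match search over stored (feature, instruction) pairs. Everything here is an assumed illustration of the matching step, not the patent's data model: the scalar feature, the pair layout and the tolerance `tol` are all placeholders.

```python
def match_instruction(feature, library, tol=0.5):
    """Return the action instruction recorded with the first stored feature
    that matches the extracted feature, or None if nothing matches."""
    for stored, instruction in library:
        if abs(stored - feature) <= tol:
            return instruction
    return None
```

For instance, a stored mean-speed feature paired with "open camera" would be matched by any extracted mean speed within the tolerance.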
The action instruction response subunit 603 is configured to perform a corresponding response operation according to the action instruction identified by the action instruction identification subunit 602.
In embodiments of the present invention, the concrete functions of the units of the tracking apparatus for the moving component, and of the subunits under each unit, can be implemented with reference to the corresponding steps of the tracking method for the moving component described above, and are not detailed again here.
The present inventors have further observed that smart glasses can absorb a large share of the user's attention while in use, making it easy for the user to miss events occurring nearby, including hazardous events such as a fast-moving vehicle.
Preferably, therefore, in practical applications the above tracking method and apparatus for moving components can be applied to smart glasses. For example, smart glasses equipped with the above tracking apparatus can issue a danger alert while the user is watching a virtually displayed picture:
Specifically, while the smart glasses are in use, the dynamic vision sensor arranged on the glasses can be switched on for monitoring. In this way, while the user watches the virtually displayed picture, the above tracking apparatus can acquire signals from the surroundings monitored in real time by the dynamic vision sensor, and identify a corresponding action instruction according to the tracking result it determines. Here, the tracking result may express that an object is approaching, and the corresponding action instruction may be a danger alert. Further, upon identifying the danger-alert instruction, the smart glasses can promptly exit the virtually displayed picture the user is currently watching and issue a danger alert, informing the user of the approaching danger.
Under normal conditions, the mobile monitoring device in a mobile monitoring system, being a mobile device without an external power supply, is constrained by energy and storage space and is unsuitable for remaining switched on for long periods. It is therefore conceivable to apply the above tracking method and apparatus for moving components to a mobile monitoring system. For example, a mobile monitoring system equipped with the above tracking apparatus can achieve real-time monitoring at low energy consumption:
A dynamic vision sensor can be deployed in the mobile monitoring system. Since the dynamic vision sensor consumes extremely little energy, it can be left switched on at all times to monitor and acquire signals while the mobile monitoring device itself stays switched off. In this way, a fast-moving component can be tracked by the above tracking apparatus, and a corresponding action instruction identified according to the tracking result it determines. Here, the determined tracking result may express that an outsider enters or leaves a set region, and the corresponding identified action instruction may be to switch the camera device on or off. A mobile monitoring system equipped with the tracking apparatus provided by the present invention can thus not only monitor continuously at low energy consumption, but also greatly save the storage space used for video recording while improving the usefulness of the stored content.
Further, it is noted that photographers such as sports reporters or bird-watching enthusiasts generally have to wait for long periods for an event to occur in order to capture a splendid moment, and must snap it the instant it arrives; this process is arduous and has a low success rate.
The above tracking method and apparatus for moving components are also applicable to a fast-motion shooting system. For example, a fast-motion shooting system equipped with the above tracking apparatus can automatically follow-shoot a fast-moving tracked target:
A dynamic vision sensor is arranged in the fast-motion shooting system and kept permanently in a monitoring state to monitor the tracked target (such as a bird or a vehicle). Once the tracking apparatus provided by the present invention has identified a tracking result, features can be extracted from that tracking result and a corresponding action instruction identified according to the extracted features. Here, the extracted features include the movement direction feature and the velocity feature of the tracked target, and the corresponding action instruction may be to rotate or move the camera device. Automatic follow-shooting of the fast-moving tracked target is thus achieved, lowering the demands placed on the photographer.
In the technical solution of the present invention, for the event points currently detected by the dynamic vision sensor, a classifier trained in advance can be used to identify the category and position of a moving component; and, for each category of moving component, the motion trajectory is determined from the successively identified positions of that category, as the tracking result of the moving component of that category. Further, after a corresponding action instruction is identified according to the tracking result, a corresponding response operation can be performed according to the action instruction.
Compared with existing moving-target tracking methods, on the one hand, the dynamic vision sensor in the solution provided by the present invention can respond quickly to fast-moving components and consumes little energy; on the other hand, the dynamic vision sensor responds only to event points whose pixel brightness changes beyond a certain degree, so event points can be detected directly in the signals it acquires, without any operation of segmenting moving objects out of the scene, which effectively improves the speed and precision of tracking moving components.
Those skilled in the art will appreciate that the present invention covers apparatus for performing one or more of the operations described herein. Such apparatus may be specially designed and manufactured for the required purposes, or may comprise known devices in general-purpose computers, selectively activated or reconfigured by computer programs stored in them. Such computer programs may be stored in a device-readable (for example, computer-readable) medium, or in any type of medium suitable for storing electronic instructions and coupled to a bus, the computer-readable medium including, but not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards or optical cards. That is, a readable medium includes any medium in which information is stored or transmitted by a device (for example, a computer) in a readable form.
Those skilled in the art will appreciate that each block of these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of such blocks, can be implemented by computer program instructions. Those skilled in the art will appreciate that these computer program instructions can be supplied to the processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus to be executed, so that the solutions specified in a block or blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are carried out by the processor of the computer or other programmable data processing apparatus.
Those skilled in the art will appreciate that the various operations, methods, and the steps, measures and schemes in the processes discussed in the present invention can be alternated, changed, combined or deleted. Further, other steps, measures and schemes in the various operations, methods and processes discussed in the present invention may also be alternated, changed, rearranged, decomposed, combined or deleted. Further, steps, measures and schemes in the prior art that correspond to those in the various operations, methods and processes disclosed in the present invention may also be alternated, changed, rearranged, decomposed, combined or deleted.
The above are only preferred embodiments of the present invention. It should be noted that persons of ordinary skill in the art may make several improvements and refinements without departing from the principle of the present invention, and these improvements and refinements shall also be regarded as falling within the protection scope of the present invention.
Claims (31)
1. A tracking method for a moving component, characterized by comprising:
acquiring a signal with a dynamic vision sensor and outputting detected event points;
identifying, with a classifier, the category and position of a moving component for the currently detected event points;
determining a motion trajectory according to the successively identified positions of the moving component of the category, as the tracking result of the moving component of the category;
wherein identifying, with the classifier, the category and position of the moving component for the currently detected event points comprises:
determining, with the classifier, the category of the moving component to which an event point belongs according to the neighbour points of the currently detected event point;
determining the position of the moving component of the category according to the positions of the event points belonging to the moving component of the category.
2. The method according to claim 1, characterized in that the classifier is obtained by training in advance according to sample signals acquired by the dynamic vision sensor.
3. The method according to claim 2, characterized in that the classifier is obtained by training as follows:
generating training samples according to the event points output by the dynamic vision sensor acquiring a sample signal;
training a deep belief network with the generated training samples and their calibration results to obtain the classifier;
wherein the calibration result of a training sample refers to the category of moving component labeled for that training sample.
4. The method according to claim 3, characterized in that generating the training samples according to the event points output by the dynamic vision sensor acquiring the sample signal comprises:
taking the event points output by the dynamic vision sensor acquiring the sample signal as sample event points;
determining the neighbour points of the currently output sample event point;
taking the currently output sample event point and its neighbour points as one training sample.
5. The method according to claim 4, characterized in that training the deep belief network with the generated training samples and their calibration results comprises:
performing multiple iterations of training on the deep belief network with the generated training samples and their calibration results.
6. The method according to claim 5, characterized in that one iteration of training specifically comprises:
taking a training set composed of multiple training samples as the input of the deep belief network;
comparing the output of the deep belief network with the calibration result of each training sample;
according to the comparison result, adjusting the layer parameters of the deep belief network and continuing with the next iteration, or stopping iteration and obtaining the classifier.
7. The method according to claim 1, characterized in that determining the position of the moving component of the category according to the positions of the event points belonging to the moving component of the category comprises:
computing the centre of the event points belonging to the same category of moving component;
taking the computed centre as the position of the moving component of the category.
8. The method according to any one of claims 1-6, characterized in that, after identifying the category and position of the moving component, the method further comprises:
performing a plausibility check on the identified category of moving component;
if the check passes, recording the identified category of the moving component and its position in correspondence with each other.
9. The method according to claim 8, characterized in that performing the plausibility check on the identified category of moving component comprises:
judging whether the currently identified position of the moving component of the category lies within a plausible region; if so, the check passes; otherwise, the check fails;
wherein the plausible region is determined according to the last recorded position of the moving component of the category and prior knowledge of the position range of the moving component of the category.
10. The method according to any one of claims 1-6, characterized in that, after determining the motion trajectory according to the successively identified positions of the moving component of the category as the tracking result of the moving component of the category, the method further comprises:
identifying a corresponding action instruction according to the tracking result;
performing a corresponding operation according to the action instruction.
11. The method according to claim 10, characterized in that identifying the corresponding action instruction according to the tracking result comprises:
dividing the tracking result into action segments;
extracting features from the action sequence segments obtained by the division;
identifying the corresponding action instruction according to the extracted features;
wherein the extracted features include at least one of the following: a position feature, a route feature, a movement direction feature, a velocity feature and an acceleration feature.
12. The method according to claim 10, characterized in that identifying the corresponding action instruction according to the tracking result comprises:
after moving components of different categories are combined into a moving object, determining the motion trajectory of the moving object according to the motion trajectories of the moving components of all categories;
extracting features from the motion trajectory of the moving object;
identifying the corresponding action instruction according to the extracted features;
wherein the extracted features include at least one of the following: a position feature, a route feature, a movement direction feature, a velocity feature and an acceleration feature.
13. The method according to claim 11 or 12, characterized in that identifying the corresponding action instruction according to the extracted features comprises:
judging whether an instruction library stores, for the moving component of the category or for the moving object, a feature matching the extracted feature;
if so, identifying the action instruction recorded in the instruction library in correspondence with that feature as the action instruction corresponding to the tracking result.
14. The method according to claim 10, characterized in that
the tracking result expresses that an outsider enters or leaves a set region, and the corresponding action instruction is to switch a camera device on or off;
or, the tracking result expresses that an object is approaching, and the corresponding action instruction is a danger alert.
15. The method according to claim 11 or 12, characterized in that
the corresponding action instruction is to rotate or move a camera device.
16. A tracking apparatus for a moving component, characterized by comprising:
a signal acquisition unit, configured to acquire a signal with a dynamic vision sensor and output detected event points;
a component identification unit, configured to identify, with a classifier, the category and position of a moving component for the event points currently detected by the signal acquisition unit;
a motion tracking unit, configured to determine a motion trajectory according to the positions of the moving component of the category successively identified by the component identification unit, as the tracking result of the moving component of the category;
wherein the component identification unit comprises:
a component category identification subunit, configured to determine, with the classifier, the category of the moving component to which an event point belongs, according to the neighbour points of the event point currently detected by the signal acquisition unit;
a component position identification subunit, configured to determine, for each category of moving component, the position of the moving component of the category according to the positions of the event points determined by the component category identification subunit to belong to the moving component of the category.
17. The apparatus according to claim 16, characterized in that the classifier is obtained by training in advance according to sample signals acquired by the dynamic vision sensor.
18. The apparatus according to claim 17, characterized by further comprising:
a classifier training unit, configured to generate training samples according to the event points output by the dynamic vision sensor acquiring a sample signal, and to train a deep belief network with the generated training samples and their calibration results to obtain the classifier;
wherein the calibration result of a training sample refers to the category of moving component labeled for that training sample.
19. The apparatus according to claim 18, characterized in that the classifier training unit comprises:
a training sample collection subunit, configured to take the event points output by the signal acquisition unit as sample event points, determine the neighbour points of the currently output sample event point, and take the currently output sample event point and its neighbour points as one training sample.
20. The apparatus according to claim 19, characterized in that the classifier training unit comprises:
an iterative training subunit, configured to perform multiple iterations of training on the deep belief network with the training samples collected by the training sample collection subunit and their calibration results.
21. The apparatus according to claim 20, characterized in that, for one iteration of training,
the iterative training subunit is configured to take a training set composed of multiple training samples as the input of the deep belief network, compare the output of the deep belief network with the calibration result of each training sample, and, according to the comparison result, adjust the layer parameters of the deep belief network and continue with the next iteration, or stop iteration and obtain the classifier.
22. The apparatus according to claim 16, characterized in that the component position identification subunit is configured to compute the centre of the event points belonging to the same category of moving component, and take the computed centre as the position of the moving component of the category.
23. The apparatus according to any one of claims 16-21, characterized in that the component identification unit further comprises:
a plausibility verification subunit, configured to perform a plausibility check on the category of moving component identified by the component category identification subunit, and, when the check passes, record the category identified by the component category identification subunit in correspondence with the position identified by the component position identification subunit.
24. The apparatus according to claim 23, characterized in that the plausibility verification subunit is configured to judge whether the currently identified position of the moving component of the category lies within a plausible region, and, when the currently identified position of the moving component of the category lies within the plausible region, pass the check; otherwise, fail the check;
wherein the plausible region is determined according to the last recorded position of the moving component of the category and prior knowledge of the position range of the moving component of the category.
25. The apparatus according to any one of claims 16-21, characterized by further comprising:
an action response unit, configured to identify a corresponding action instruction according to the tracking result output by the motion tracking unit, and to perform a corresponding response operation according to the action instruction.
26. The apparatus according to claim 25, characterized in that the action response unit comprises:
a feature extraction subunit, configured to divide the tracking result output by the motion tracking unit into action segments and extract features from the action sequence segments obtained by the division; wherein the extracted features include at least one of the following: a position feature, a route feature, a movement direction feature, a velocity feature and an acceleration feature;
an action instruction identification subunit, configured to identify a corresponding action instruction according to the features extracted by the feature extraction subunit;
an action instruction response subunit, configured to perform a corresponding response operation according to the action instruction identified by the action instruction identification subunit.
27. The apparatus according to claim 25, characterized in that
the action response unit is configured to, after moving components of different categories are combined into a moving object, determine the motion trajectory of the moving object according to the motion trajectories of the moving components of all categories, extract features from the motion trajectory of the moving object, and identify a corresponding action instruction according to the extracted features;
wherein the extracted features include at least one of the following: a position feature, a route feature, a movement direction feature, a velocity feature and an acceleration feature.
28. The apparatus according to claim 26, characterized in that
the action instruction identification subunit is configured to judge whether an instruction library stores, for the moving component of the category, a feature matching the feature extracted by the feature extraction subunit; and, if so, identify the action instruction recorded in the instruction library in correspondence with that feature as the action instruction corresponding to the tracking result.
29. The apparatus according to claim 25, characterized in that
the tracking result expresses that an outsider enters or leaves a set region, and the corresponding action instruction is to switch a camera device on or off;
or, the tracking result expresses that an object is approaching, and the corresponding action instruction is a danger alert.
30. The apparatus according to claim 26 or 27, characterized in that
the corresponding action instruction is to rotate or move a camera device.
31. An electronic device, characterized by comprising:
a processor; and
a memory configured to store a program;
wherein the processor is configured to execute the program to implement the tracking method for a moving component according to any one of claims 1-15.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510018291.1A CN105844659B (en) | 2015-01-14 | 2015-01-14 | The tracking and device of moving component |
KR1020150173974A KR102595604B1 (en) | 2015-01-14 | 2015-12-08 | Method and apparatus of detecting object using event-based sensor |
US14/995,262 US10043064B2 (en) | 2015-01-14 | 2016-01-14 | Method and apparatus of detecting object using event-based sensor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510018291.1A CN105844659B (en) | 2015-01-14 | 2015-01-14 | The tracking and device of moving component |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105844659A CN105844659A (en) | 2016-08-10 |
CN105844659B true CN105844659B (en) | 2019-04-26 |
Family
ID=56579870
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510018291.1A Active CN105844659B (en) | 2015-01-14 | 2015-01-14 | The tracking and device of moving component |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102595604B1 (en) |
CN (1) | CN105844659B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106384130A (en) * | 2016-09-22 | 2017-02-08 | Ningbo University | Fault detection method based on data multi-neighbor-local-feature embedding |
CN108073929B (en) | 2016-11-15 | 2023-11-24 | Beijing Samsung Telecommunication Technology Research Co., Ltd. | Object detection method and device based on dynamic vision sensor |
CN111149350B (en) * | 2017-09-28 | 2022-02-22 | Apple Inc. | Generating still images using event cameras |
EP3543898A1 (en) * | 2018-03-21 | 2019-09-25 | Robert Bosch Gmbh | Fast detection of secondary objects that may intersect the trajectory of a moving primary object |
DE102018211042A1 (en) * | 2018-07-04 | 2020-01-09 | Robert Bosch Gmbh | Rapid detection of dangerous or endangered objects around a vehicle |
WO2020102021A2 (en) | 2018-11-13 | 2020-05-22 | Nvidia Corporation | Determining associations between objects and persons using machine learning models |
CN109544590B (en) * | 2018-11-27 | 2020-05-15 | Shanghai CelePixel Technology Co., Ltd. | Target tracking method and computing device |
CN111988493B (en) * | 2019-05-21 | 2021-11-30 | Beijing Xiaomi Mobile Software Co., Ltd. | Interaction processing method, device, equipment and storage medium |
US11610330B2 (en) | 2019-10-08 | 2023-03-21 | Samsung Electronics Co., Ltd. | Method and apparatus with pose tracking |
CN110782492B (en) * | 2019-10-08 | 2023-03-28 | Samsung (China) Semiconductor Co., Ltd. | Pose tracking method and device |
CN111083354A (en) * | 2019-11-27 | 2020-04-28 | Vivo Mobile Communication Co., Ltd. | Video recording method and electronic device |
CN112949512B (en) | 2021-03-08 | 2022-07-08 | OmniVision CelePixel Sensor (Shanghai) Co., Ltd. | Dynamic gesture recognition method, gesture interaction method and interaction system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103533263A (en) * | 2012-07-03 | 2014-01-22 | Samsung Electronics Co., Ltd. | Image sensor chip, operation method, and system having the same |
CN103533234A (en) * | 2012-07-05 | 2014-01-22 | Samsung Electronics Co., Ltd. | Image sensor chip, method of operating the same, and system including the image sensor chip |
CN103732287A (en) * | 2011-05-12 | 2014-04-16 | Université Pierre et Marie Curie (Paris VI) | Method and device for controlling a device for aiding vision |
CN103813156A (en) * | 2012-11-02 | 2014-05-21 | Samsung Electronics Co., Ltd. | Motion sensor array device and depth sensing system and methods of using the same |
CN104007814A (en) * | 2013-02-22 | 2014-08-27 | Samsung Electronics Co., Ltd. | Apparatus and method for recognizing proximity motion using sensors |
CN104272723A (en) * | 2011-12-19 | 2015-01-07 | University of Zurich | Photoarray, particularly for combining sampled brightness sensing with asynchronous detection of time-dependent image data |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3078166B2 (en) * | 1994-02-02 | 2000-08-21 | Canon Inc. | Object recognition method |
KR101880998B1 (en) * | 2011-10-14 | 2018-07-24 | Samsung Electronics Co., Ltd. | Apparatus and method for motion recognition with an event-based vision sensor |
KR102227494B1 (en) * | 2013-05-29 | 2021-03-15 | Samsung Electronics Co., Ltd. | Apparatus and method for processing a user input using movement of an object |
US9696812B2 (en) * | 2013-05-29 | 2017-07-04 | Samsung Electronics Co., Ltd. | Apparatus and method for processing user input using motion of object |
2015
- 2015-01-14: filed in CN as application CN201510018291.1A (granted as CN105844659B, status: Active)
- 2015-12-08: filed in KR as application KR1020150173974A (granted as KR102595604B1, status: Active, IP Right Grant)
Also Published As
Publication number | Publication date |
---|---|
CN105844659A (en) | 2016-08-10 |
KR102595604B1 (en) | 2023-10-30 |
KR20160087738A (en) | 2016-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105844659B (en) | Tracking method and device for a moving component | |
CN109117827B (en) | Video-based method for automatically identifying wearing state of work clothes and work cap and alarm system | |
CN101699862B (en) | Acquisition method of high-resolution region-of-interest image of PTZ camera | |
CN104506819B (en) | The mutual feedback tracking system and method for a kind of multi-cam real-time linkage | |
CN112396658B (en) | Indoor personnel positioning method and system based on video | |
CN108319926A (en) | Safety helmet wearing detection system and detection method for construction sites | |
CN110633612B (en) | Monitoring method and system for inspection robot | |
CN102915638A (en) | Surveillance video-based intelligent parking lot management system | |
WO2021139049A1 (en) | Detection method, detection apparatus, monitoring device, and computer readable storage medium | |
CN102065275B (en) | Multi-target tracking method in intelligent video monitoring system | |
CN111163285A (en) | High-altitude falling object monitoring method and system and computer readable storage medium | |
CN110490043A (en) | Forest fire and smoke detection method based on region division and feature extraction | |
CN108230607B (en) | Image fire detection method based on regional characteristic analysis | |
CN103646250A (en) | Pedestrian monitoring method and device based on distance image head and shoulder features | |
CN104346802A (en) | Method and device for monitoring off-job behaviors of personnel | |
Lengvenis et al. | Application of computer vision systems for passenger counting in public transport | |
US20190096066A1 (en) | System and Method for Segmenting Out Multiple Body Parts | |
US11756303B2 (en) | Training of an object recognition neural network | |
CN110750152A (en) | Human-computer interaction method and system based on lip action | |
CN103514429A (en) | Method for detecting specific part of object and image processing equipment | |
CN109961031A (en) | Face fusion recognition, target person information display method, and early-warning supervision method and system | |
KR101375186B1 (en) | Method for detecting disturbance of monitoring camera | |
CN107547865A (en) | Intelligent control method for cross-regional human video object tracking | |
CN110705453A (en) | Real-time fatigue driving detection method | |
CN109145758A (en) | Face recognition algorithm based on video surveillance | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||