CN107247946A - Activity recognition method and device


Info

Publication number
CN107247946A
Authority
CN
China
Prior art keywords
pedestrian
region
tracked
behavior
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710544459.1A
Other languages
Chinese (zh)
Other versions
CN107247946B (en)
Inventor
陶铁牛
王帼筊
张丽媛
赵爱巧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing anningwell emergency fire safety technology Co.,Ltd.
Original Assignee
Beijing Anning Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Anning Technology Development Co Ltd
Priority to CN201710544459.1A (granted as CN107247946B)
Publication of CN107247946A
Application granted
Publication of CN107247946B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Abstract

The present invention provides a behavior recognition method and device. The method is applied to a computing device that includes a recognition model for performing behavior recognition. The method includes: performing image processing on collected visible-light and infrared images to obtain a target region to be tracked; detecting whether the target region to be tracked contains a pedestrian; when a pedestrian is present, tracking the pedestrian and detecting the pedestrian's edge during tracking to obtain a pedestrian region to be recognized from the target region to be tracked; and inputting the pedestrian region to be recognized into the recognition model to obtain a behavior recognition result for the pedestrian. In this way, the behavior of pedestrians in the image is recognized automatically.

Description

Activity recognition method and device
Technical field
The present invention relates to the technical field of computer vision, and in particular to a behavior recognition method and device.
Background art
With the improvement of public safety awareness and the increase in the accidents society faces (such as fires), safety monitoring has received growing attention from society, organizations, and individuals. Traditional safety monitoring systems rely mainly on manual observation to monitor a scene and lack the ability to monitor the environment in real time and on their own initiative.
For example, when a fire occurs, its cause can only be determined afterwards through video retrieval; causes of fire include deliberate arson and fires caused by electrical faults or aging equipment. Conventional methods cannot effectively predict the behavioral traits of people in a scene, cannot judge whether monitored persons are engaged in illegal or prohibited behavior, and cannot achieve the goal of real-time monitoring and alarming. Therefore, how to automatically recognize human behavior from surveillance video is a problem that those skilled in the art continue to work on.
Summary of the invention
To overcome the above deficiencies in the prior art, the technical problem to be solved by the invention is to provide a behavior recognition method and device that can automatically perform behavior recognition on pedestrians in an image, so as to prevent accidents.
A preferred embodiment of the present invention provides a behavior recognition method applied to a computing device, the computing device including a recognition model for performing behavior recognition. The method includes:
performing image processing on collected visible-light and infrared images to obtain a target region to be tracked;
detecting whether the target region to be tracked contains a pedestrian;
when a pedestrian is present, tracking the pedestrian, and detecting the pedestrian's edge during tracking to obtain a pedestrian region to be recognized from the target region to be tracked; and
inputting the pedestrian region to be recognized into the recognition model to obtain a behavior recognition result for the pedestrian.
A preferred embodiment of the present invention further provides a behavior recognition device applied to a computing device, the computing device including a recognition model for performing behavior recognition. The device includes:
a processing module, configured to perform image processing on collected visible-light and infrared images to obtain a target region to be tracked;
a detection module, configured to detect whether the target region to be tracked contains a pedestrian;
the processing module being further configured to, when a pedestrian is present, track the pedestrian and detect the pedestrian's edge during tracking to obtain a pedestrian region to be recognized from the target region to be tracked; and
a recognition module, configured to input the pedestrian region to be recognized into the recognition model to obtain a behavior recognition result for the pedestrian.
Compared with the prior art, the present invention has the following advantages:
A preferred embodiment of the present invention provides a behavior recognition method and device. The method is applied to a computing device that includes a recognition model for performing behavior recognition. After visible-light and infrared images are obtained, image processing is performed on them to obtain a target region to be tracked. When a pedestrian is detected in the target region to be tracked, the pedestrian is tracked, and the pedestrian's edge is detected during tracking to obtain a pedestrian region to be recognized. By inputting the pedestrian region to be recognized into the recognition model, the behavior recognition result for the pedestrian is obtained. In this way, behavior recognition is performed automatically on pedestrians in the image, which can help prevent accidents.
To make the above objects, features, and advantages of the invention more apparent and understandable, preferred embodiments of the invention are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings show only certain embodiments of the invention and should not be regarded as limiting its scope; those of ordinary skill in the art can derive other related drawings from these drawings without creative effort.
Fig. 1 is a block diagram of the computing device provided by a preferred embodiment of the present invention.
Fig. 2 is the first schematic flowchart of the behavior recognition method provided by a preferred embodiment of the present invention.
Fig. 3 is a schematic flowchart of the sub-steps included in step S120 in Fig. 2.
Fig. 4 is a schematic flowchart of the sub-steps included in sub-step S123 in Fig. 3.
Fig. 5 is a schematic flowchart of the sub-steps included in step S150 in Fig. 2.
Fig. 6 is a schematic flowchart of the sub-steps included in sub-step S154 in Fig. 5.
Fig. 7 is the second schematic flowchart of the behavior recognition method provided by a preferred embodiment of the present invention.
Fig. 8 is the third schematic flowchart of the behavior recognition method provided by a preferred embodiment of the present invention.
Fig. 9 is a schematic flowchart of some of the sub-steps included in step S110 in Fig. 8.
Fig. 10 is a schematic flowchart of the remaining sub-steps included in step S110 in Fig. 8.
Fig. 11 is a block diagram of the behavior recognition device provided by a preferred embodiment of the present invention.
Reference numerals: 100 - computing device; 110 - memory; 120 - storage controller; 130 - processor; 200 - behavior recognition device; 220 - processing module; 230 - detection module; 250 - recognition module.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the invention. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a variety of configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art without creative effort based on these embodiments fall within the scope of protection of the invention.
It should be noted that similar reference numerals and letters denote similar items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. In the description of the invention, the terms "first", "second", and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and their features may be combined with one another provided they do not conflict.
Referring to Fig. 1, Fig. 1 is a block diagram of the computing device 100 provided by a preferred embodiment of the present invention. The computing device 100 may be, but is not limited to, a computer, a server, or the like. As shown in Fig. 1, the computing device 100 includes a memory 110, a storage controller 120, a processor 130, and a behavior recognition device 200.
The memory 110, the storage controller 120, and the processor 130 are electrically connected to one another, directly or indirectly, to enable the transmission and exchange of data; for example, these elements may be electrically connected through one or more communication buses or signal lines. The behavior recognition device 200 is stored in the memory 110 and includes at least one software function module that may be stored in the memory 110 in the form of software or firmware. The processor 130 runs the software programs and modules stored in the memory 110, such as the behavior recognition device 200 of the embodiment of the present invention, thereby executing various function applications and data processing, that is, implementing the behavior recognition method of the embodiment of the present invention.
The memory 110 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or the like. The memory 110 is used to store a program, and the processor 130 executes the program after receiving an execution instruction. Access to the memory 110 by the processor 130, and by other possible components, may be performed under the control of the storage controller 120.
The processor 130 may be an integrated circuit chip with signal processing capability. The processor 130 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It can be understood that the structure shown in Fig. 1 is only schematic; the computing device 100 may include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1. Each component shown in Fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to Fig. 2, Fig. 2 is the first schematic flowchart of the behavior recognition method provided by a preferred embodiment of the present invention. The method is applied to the computing device 100. The specific flow of the behavior recognition method is described in detail below.
Step S120: performing image processing on collected visible-light and infrared images to obtain a target region to be tracked.
In this embodiment, the computing device 100 may be communicatively connected to a visible-light camera and an infrared (thermal) camera, respectively, through which it obtains the visible-light images and infrared images.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of the sub-steps included in step S120 in Fig. 2. Step S120 may include sub-step S121, sub-step S122, and sub-step S123.
Sub-step S121: collecting multiple frames of visible-light background images to establish a first background image, and processing the current frame image against the first background image to obtain a first suspicious motion region.
In this embodiment, the computing device 100 establishes the first background image from the collected visible-light frames using a Gaussian model, and then obtains the first suspicious motion region by comparing the current frame image with the first background image. During tracking, the first background image can be updated automatically according to a received update instruction or at a fixed interval (for example, every 0.5 s).
Here, a Gaussian model quantifies the observed scene with a Gaussian probability density function (normal distribution curve), decomposing the scene into models formed from such distributions.
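The per-pixel Gaussian background model described above can be sketched in a few lines of numpy. The update rule, learning rate `alpha`, and deviation threshold `k` below are illustrative assumptions, since the patent does not fix these details:

```python
import numpy as np

def update_gaussian_background(mean, var, frame, alpha=0.05):
    """Update a per-pixel Gaussian background (running mean and variance)."""
    diff = frame - mean
    mean = mean + alpha * diff
    var = (1 - alpha) * (var + alpha * diff ** 2)
    return mean, var

def foreground_mask(mean, var, frame, k=2.5):
    """Pixels deviating more than k standard deviations are foreground."""
    return np.abs(frame - mean) > k * np.sqrt(var)

# Toy 4x4 grayscale scene: a static background and one bright "moving" pixel.
bg = np.full((4, 4), 50.0)
mean, var = bg.copy(), np.full((4, 4), 4.0)
mean, var = update_gaussian_background(mean, var, bg)  # background frame refines the model
frame = bg.copy()
frame[1, 2] = 200.0                                    # the moving-object pixel
mask = foreground_mask(mean, var, frame)
print(mask[1, 2], mask[0, 0])                          # True False
```

A real deployment would feed every incoming frame through the update, so the model slowly absorbs lighting changes while fast-moving pedestrians stay in the foreground mask.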
The following example illustrates how the first suspicious motion region is obtained from the current frame image and the first background image.
For example, the difference between the current frame image and the first background image is calculated over a certain area on both sides of the video surveillance region (i.e. the entrance and exit). The computed difference is compared with a preset difference threshold, and processing proceeds according to the comparison result: when the difference exceeds the preset threshold, the area above the threshold is taken as the first suspicious motion region; when the difference is below the threshold, no further processing is performed. The preset difference threshold can be configured according to actual conditions (for example, 10 brightness levels).
Image differencing subtracts the corresponding pixel values of two images to suppress their similar parts and highlight the changed parts. For example, a difference image readily reveals the contour of a moving target and can extract traces of flickering motion.
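The differencing step above, with the 10-brightness-level threshold mentioned as an example, can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def motion_suspicious_region(frame, background, diff_threshold=10):
    """Absolute difference against the background image; pixels whose
    change exceeds the threshold form the candidate suspicious region."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > diff_threshold

background = np.full((3, 3), 100, dtype=np.uint8)
frame = background.copy()
frame[0, 0] = 130   # large change -> counted as motion
frame[2, 2] = 105   # small change -> suppressed by the threshold
region = motion_suspicious_region(frame, background)
print(region.sum())  # 1: only the strongly changed pixel survives
```

Casting to a signed type before subtracting avoids the wrap-around that unsigned 8-bit subtraction would produce.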
Sub-step S122: collecting multiple frames of infrared background images to establish a second background image, and processing the current infrared frame image against the second background image to obtain a second suspicious motion region.
In this embodiment, the second suspicious motion region is obtained in the same way as the first suspicious motion region, so the details are not repeated here.
Sub-step S123: obtaining the target region to be tracked from the first suspicious motion region and the second suspicious motion region.
Referring to Fig. 4, Fig. 4 is a schematic flowchart of the sub-steps included in sub-step S123 in Fig. 3. Sub-step S123 may include sub-steps S1231, S1232, S1233, and S1234.
Sub-step S1231: calculating the degree of overlap between the first suspicious motion region and the second suspicious motion region.
In this embodiment, because the infrared image depends on temperature, the second suspicious motion region can be used to help judge whether the first suspicious motion region contains a pedestrian. The degree of overlap between the first and second suspicious motion regions is calculated first and then compared with a preset overlap threshold.
Sub-step S1232: judging whether the degree of overlap is greater than the preset overlap threshold.
The preset overlap threshold can be set according to actual conditions (for example, 30%).
If the degree of overlap is greater than the preset overlap threshold, sub-step S1233 is performed.
Sub-step S1233: processing the overlapping area of the first suspicious motion region and the second suspicious motion region to obtain the target region to be tracked.
When the degree of overlap exceeds the preset threshold, the coverage of the first suspicious motion region can be adjusted according to the second suspicious motion region. Morphological operations are then used to fill holes in the first suspicious motion region and remove areas smaller than a preset size, yielding the target to be tracked. The target region to be tracked is then preliminarily determined from features such as the camera installation position, the image resolution, the position of the target in the image, and typical pedestrian height and width.
If the degree of overlap is less than the preset overlap threshold, sub-step S1234 is performed.
Sub-step S1234: stopping the tracking judgment.
A degree of overlap below the preset threshold indicates that the first suspicious motion region is a false region, and no subsequent processing is performed on it.
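The text does not fix a formula for the degree of overlap; one plausible reading, sketched here, is intersection-over-union between bounding boxes of the two suspicious regions, compared against the 30% preset:

```python
def overlap_ratio(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def fuse_regions(visible_box, infrared_box, preset_overlap=0.30):
    """Keep the visible-light region only when the infrared region confirms it."""
    if overlap_ratio(visible_box, infrared_box) > preset_overlap:
        return visible_box   # candidate target region to be tracked
    return None              # false region: stop the tracking judgment

print(fuse_regions((0, 0, 10, 10), (2, 2, 12, 12)))    # boxes overlap ~47% -> kept
print(fuse_regions((0, 0, 10, 10), (20, 20, 30, 30)))  # no overlap -> None
```

The subsequent morphological hole-filling and size filtering of sub-step S1233 would then operate on the pixel mask inside the surviving box.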
Step S130: detecting whether the target region to be tracked contains a pedestrian.
In this embodiment, the HOG (Histogram of Oriented Gradients) algorithm is used to detect whether the target region to be tracked contains a pedestrian. HOG is a feature descriptor used for object detection in computer vision and image processing; it builds features by computing and accumulating gradient orientation histograms over local regions of the image. Its main idea is that, even when the exact position of an edge is unknown, the distribution of edge directions can still represent the appearance and contour of a pedestrian target well. In this way, it can be determined whether the image contains a pedestrian. If no pedestrian is found, no subsequent steps are performed; if a pedestrian is found, step S140 is performed.
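The gradient-orientation histogram at the heart of HOG can be illustrated for a single cell. This is a simplified numpy sketch (one cell, unsigned 9-bin orientations, L2 normalization), not the full block-structured descriptor or a trained pedestrian detector:

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Magnitude-weighted gradient-orientation histogram for one HOG cell.
    Orientations are unsigned, i.e. folded into the 0-180 degree range."""
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = (angle / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist / (np.linalg.norm(hist) + 1e-6)   # L2 normalization

# A vertical edge produces strong horizontal gradients (orientation near 0 deg).
cell = np.zeros((8, 8))
cell[:, 4:] = 255.0
h = hog_cell_histogram(cell)
print(int(np.argmax(h)))   # 0 -> the dominant gradient direction is horizontal
```

A full detector tiles the image with such cells, normalizes them in overlapping blocks, and feeds the concatenated vector to a classifier; OpenCV's `HOGDescriptor` bundles all of that.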
Step S140: when a pedestrian is present, tracking the pedestrian, and detecting the pedestrian's edge during tracking to obtain a pedestrian region to be recognized from the target region to be tracked.
In this embodiment, when a pedestrian is present, the Mean-shift algorithm can be used to keep track of each pedestrian and thus monitor the motion within the region. Mean-shift is a target tracking algorithm based on mean drift: descriptions of the target model and the candidate model are obtained by calculating the feature-value probabilities of pixels in the target region and the candidate region respectively; a similarity function then measures the similarity between the target model of the initial frame and the candidate template of the current frame, and the candidate model that maximizes the similarity function yields the Mean-shift vector on the target model, which is exactly the vector along which the target moves from its initial position toward the correct position. Owing to the fast convergence of the mean-shift algorithm, by iteratively computing the Mean-shift vector the algorithm finally converges to the true position of the target, achieving tracking.
Meanwhile, during tracking, the Snake algorithm can be used to detect the edge of the pedestrian in the target region to be tracked, and thereby obtain the pedestrian region to be recognized. Starting from an initial contour (for example, the target region to be tracked), the Snake algorithm iterates, moving the contour in the direction of decreasing energy, and finally obtains an optimized boundary.
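The core of the Mean-shift iteration can be sketched with numpy. This toy version shifts a window toward the centroid of a precomputed weight map (standing in for the back-projected feature probabilities described above); it is a simplification of the full kernel-weighted algorithm, and the Snake contour step is not reproduced here:

```python
import numpy as np

def mean_shift(weights, window, n_iter=20):
    """Shift a tracking window (x, y, w, h) toward the local centroid of
    the weight image until the window position stops moving."""
    x, y, w, h = window
    for _ in range(n_iter):
        patch = weights[y:y + h, x:x + w]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        cx = (xs * patch).sum() / total            # centroid inside the window
        cy = (ys * patch).sum() / total
        nx = int(round(x + cx - w / 2))            # re-center the window
        ny = int(round(y + cy - h / 2))
        if (nx, ny) == (x, y):                     # converged
            break
        x, y = max(nx, 0), max(ny, 0)
    return x, y, w, h

# A blob of probability mass around row 12-17, col 14-19; start the window away.
weights = np.zeros((32, 32))
weights[12:18, 14:20] = 1.0
print(mean_shift(weights, (8, 6, 8, 8)))   # the window drifts onto the blob
```

In a real tracker the weight map is recomputed per frame from the pedestrian's color histogram, so the same few iterations follow the target as it moves.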
Step S150: inputting the pedestrian region to be recognized into the recognition model to obtain the behavior recognition result for the pedestrian.
Referring to Fig. 5, Fig. 5 is a schematic flowchart of the sub-steps included in step S150 in Fig. 2. The recognition model includes a first recognition model. Step S150 may include the following sub-steps.
Sub-step S151: inputting the pedestrian region to be recognized into the first recognition model to obtain a first output result, the first output result including multiple behaviors and the weight corresponding to each behavior.
In this embodiment, the pedestrian region to be recognized is input into the first recognition model, which computes the relative weight of each behavior for this region and then produces the first output result. The first output result includes multiple behaviors and the weight corresponding to each behavior; the behaviors can be arranged in descending or ascending order of their weights, for example: normal walking 0.5; striding 0.3; and so on.
Sub-step S152: taking the behaviors whose weights are greater than a preset weight threshold as the first screening result.
In this embodiment, the preset weight threshold can be set according to actual conditions.
Sub-step S153: if the first screening result includes only one behavior, taking that behavior as the recognition result.
Sub-step S154: if the first screening result includes multiple behaviors, determining the recognition result from among them.
In one implementation of this embodiment, the behaviors in the first output result are arranged in descending order. When the difference between the weights of a predetermined number (for example, 2) of behaviors in the first output result (for example, the difference between the maximum weight and the weight adjacent to it) is greater than a preset weight difference (for example, 0.4), the behavior corresponding to the maximum weight is taken as the recognition result. When the difference between the weights of the predetermined number of behaviors is less than the preset weight difference, the recognition result is determined from among those behaviors.
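The screening logic of sub-steps S151 through S154 can be sketched as a small function. The weight threshold of 0.25 is an illustrative assumption, while the 0.4 weight gap follows the example in the text:

```python
def first_stage_decision(output, weight_threshold=0.25, weight_gap=0.4):
    """Filter the first model's (behavior, weight) output and decide whether
    a single behavior can be reported or the second stage is needed."""
    shortlisted = [(b, w) for b, w in output if w > weight_threshold]
    shortlisted.sort(key=lambda bw: bw[1], reverse=True)
    if len(shortlisted) == 1:
        return shortlisted[0][0]   # S153: unambiguous result
    if len(shortlisted) >= 2 and shortlisted[0][1] - shortlisted[1][1] > weight_gap:
        return shortlisted[0][0]   # clear margin over the runner-up
    return None                    # S154: ambiguous, defer to the second model

print(first_stage_decision([("normal walking", 0.8), ("striding", 0.1)]))  # normal walking
print(first_stage_decision([("kicking", 0.45), ("punching", 0.40)]))       # None
```

Returning `None` signals that the feature-based second recognition model of Fig. 6 should arbitrate among the shortlisted behaviors.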
Referring to Fig. 6, Fig. 6 is a schematic flowchart of the sub-steps included in sub-step S154 in Fig. 5. The recognition model further includes a second recognition model. Sub-step S154 may include sub-steps S1541, S1542, S1543, and S1544.
Sub-step S1541: extracting the first feature set of the pedestrian region to be recognized according to the edge and centroid of the region.
In this embodiment, the edge and the centroid of the pedestrian region to be recognized are calculated, giving the distance from each edge point to the centroid. Taking the horizontal direction through the centroid as the starting point and rotating clockwise, every 30 degrees forms one sector; the average centroid-to-edge distance within each sector is computed, yielding the first feature set of the pedestrian region to be recognized. The first feature set may thus include 12 features.
Here, the centroid is defined for an abstract geometric body; for a real object of uniform density, the center of mass and the centroid coincide.
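Assuming the edge is available as a list of (x, y) points, the 12-sector centroid-distance feature can be computed as follows. This is a sketch: the patent does not specify how empty sectors are handled, so they are left at zero here:

```python
import numpy as np

def edge_distance_features(edge_points, n_bins=12):
    """First feature set: mean centroid-to-edge distance in 30-degree
    sectors, starting from the horizontal through the centroid."""
    pts = np.asarray(edge_points, dtype=float)
    centroid = pts.mean(axis=0)
    vec = pts - centroid
    dist = np.hypot(vec[:, 0], vec[:, 1])
    ang = np.rad2deg(np.arctan2(vec[:, 1], vec[:, 0])) % 360.0
    bins = (ang // (360.0 / n_bins)).astype(int)
    feats = np.zeros(n_bins)
    for b in range(n_bins):
        sel = bins == b
        if sel.any():
            feats[b] = dist[sel].mean()   # empty sectors stay at 0
    return feats

# Sanity check: for a circle of radius 5 every sector mean distance is ~5.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.stack([5 * np.cos(theta), 5 * np.sin(theta)], axis=1)
f = edge_distance_features(circle)
print(f.shape, np.allclose(f, 5.0, atol=0.01))  # (12,) True
```

For a pedestrian silhouette the 12 values encode how far the limbs and torso extend in each direction, which is what makes the feature sensitive to poses such as kicking or stretching the arms.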
Sub-step S1542: calculating the geometric moments of the pedestrian region to be recognized to extract its second feature set.
Moment features mainly characterize the geometric properties of an image region and are also called geometric moments; they are invariant to rotation, translation, scale, and similar transformations. The geometric moments include the Hu moments, which construct seven invariants from the second- and third-order central moments; these remain constant under translation, scaling, and rotation of a continuous image, thereby yielding the second feature set. The second feature set may thus include 7 features.
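The first two of the seven Hu invariants can be computed from normalized central moments with plain numpy. This sketch verifies their translation invariance on a toy image; the full second feature set would use all seven invariants (available, for example, via OpenCV's `cv2.HuMoments`):

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grayscale image."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc = (xs * img).sum() / m00
    yc = (ys * img).sum() / m00
    return (((xs - xc) ** p) * ((ys - yc) ** q) * img).sum()

def hu_first_two(img):
    """First two Hu invariants from scale-normalized central moments."""
    mu00 = central_moment(img, 0, 0)
    def eta(p, q):
        return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)
    hu1 = eta(2, 0) + eta(0, 2)
    hu2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return hu1, hu2

# The same square blob placed at two positions yields identical invariants.
a = np.zeros((20, 20)); a[2:8, 2:8] = 1.0
b = np.zeros((20, 20)); b[10:16, 12:18] = 1.0   # translated copy
print(np.allclose(hu_first_two(a), hu_first_two(b)))  # True
```

Because the invariants do not change when the pedestrian shifts, scales, or rotates in the frame, they complement the direction-sensitive centroid-distance features of sub-step S1541.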
Sub-step S1543: inputting the first feature set and the second feature set into the second recognition model to obtain a second output result.
Sub-step S1544: determining the recognition result from the multiple behaviors according to the second output result.
In this implementation of the embodiment, the behavior corresponding to the maximum weight in the second output result is taken as the second screening result. If the second screening result is one of the multiple behaviors, that behavior is taken as the recognition result. If the second screening result is not any of the multiple behaviors, the target in the image is taken as a key monitoring target and continues to be monitored and judged in subsequent frames.
Behavior recognition is repeated a preset number of times (for example, 3); if no recognition result can be obtained after the preset number of repetitions, an alarm is generated. The preset number of times can be set according to actual conditions.
Referring to Fig. 7, Fig. 7 is the second schematic flowchart of the behavior recognition method provided by a preferred embodiment of the present invention. The method may further include step S160.
Step S160: executing a preset strategy according to the recognition result.
In this embodiment, whether a preset abnormal behavior condition is met is judged according to the recognition result. Preset abnormal behaviors may include, but are not limited to, behaviors such as lighting a lighter or kicking. If the recognition result does not meet the preset abnormal behavior condition, key monitoring of the corresponding behavior is stopped. If the recognition result meets the preset abnormal behavior condition, indicating that the corresponding pedestrian has a destructive tendency, key monitoring is applied to that pedestrian and an alarm is generated.
For example, in a fire protection system, the recognition model is used to recognize the behavior of pedestrians. When the recognition result indicates that a pedestrian in the scene has a destructive tendency, monitoring personnel are promptly alerted, so that the pedestrian can be watched closely and/or protective measures can be taken.
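The preset strategy of step S160 amounts to a lookup against a configured list of abnormal behaviors. The behavior names and the returned flags below are illustrative assumptions, not values fixed by the patent:

```python
def apply_preset_strategy(recognition_result,
                          abnormal_behaviors=("lighting a lighter", "kicking")):
    """Escalate only behaviors on the preset abnormal list: flag the
    pedestrian for key monitoring and raise an alarm."""
    if recognition_result in abnormal_behaviors:
        return {"key_monitoring": True, "alarm": True}
    return {"key_monitoring": False, "alarm": False}

print(apply_preset_strategy("kicking"))         # {'key_monitoring': True, 'alarm': True}
print(apply_preset_strategy("normal walking"))  # both flags False
```

Keeping the abnormal list as configuration rather than code matches the text's note that the behaviors of interest depend on the monitored setting (for example, fire protection).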
Fig. 8 is refer to, Fig. 8 is the three of the schematic flow sheet for the Activity recognition method that present pre-ferred embodiments are provided.Institute Step S110 can also be included before step S120 by stating method.
Step S110, training is identified model.
It refer to Fig. 9, Fig. 9 is the schematic flow sheet of the part sub-step that step S110 includes in Fig. 8.The step S110 Sub-step S111, sub-step S112, sub-step S113 and sub-step S114 can be included.
Sub-step S111, the behavior figure picture of the different behaviors of collection pedestrian is used as sample image.
In the present embodiment, different behavior figure pictures can be gathered according to the occasion and target of monitoring.Such as, applied to disappearing Anti- system, then can gather different behaviors for fire-fighting, including normally walk, stride away, single leg spring, both legs are jumped, bend over, stretched Open up arm, kicking, punch, body collision etc..
Sub-step S112, the first pedestrian area that image procossing obtains including pedestrian is carried out to the sample image.
In the present embodiment, detect that the pedestrian in sample image is interval by Hog algorithms, so as to obtain the first pedestrian area Domain.First pedestrian area can also be corrected by manually detecting.
Sub-step S113, is detected the pedestrian edge of pedestrian in first pedestrian area, is obtained based on the pedestrian edge Second pedestrian area, regard the second pedestrian area as pretreated sample image.
In the present embodiment, the pedestrian edge of pedestrian can be further detected using Snake models, so as to remove background area The interference in domain, can be set to 0 by background area.The pedestrian edge can also be corrected by manually detecting.
The interval position in pedestrian upper and lower, left and right can also be determined using horizontal and vertical shadow casting technique again.Interval position can With with bi(i=0,1,2,3) are represented.
Sub-step S114: establishing a model library from the preprocessed sample images, and training on the preprocessed sample images corresponding to each behavior in the model library to obtain the first identification model.
In the present embodiment, a normalized sample library is built from the preprocessed sample images together with their edge position information. Different behavior sample model libraries Dataj (j=0,…,N-1) are established from the sample images in the normalized sample library, where N is the total number of behavior sample types. Multiple samples (for example, 100) can be selected per class. A part of the samples in each model library Dataj (for example, 4/5 of the total samples in Dataj) is input into a CNN model for training, so as to obtain an initial first identification model. The trained CNN model can extract features of an image.
The initial first identification model is then verified with the remaining samples in Dataj. If the first identification error rate is greater than a first error rate threshold, the initial first identification model is adjusted according to the verification samples, so as to obtain a first identification model meeting the requirement. The first error rate threshold can be set according to actual conditions.
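The per-class 4/5 train / 1/5 verify split described above can be sketched as follows (plain NumPy; the CNN itself is omitted, and the split ratio is the one given in the text):

```python
import numpy as np

def split_model_library(samples, train_fraction=0.8, seed=0):
    """Split one behavior library Data_j into training and verification
    parts (4/5 train, 1/5 verify, as in the embodiment)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(len(samples) * train_fraction)
    train = [samples[i] for i in idx[:cut]]
    verify = [samples[i] for i in idx[cut:]]
    return train, verify

# 100 samples per class, as the text suggests
library = list(range(100))
train, verify = split_model_library(library)
print(len(train), len(verify))  # 80 20
```

The training part would be fed to the CNN and the verification part used to measure the first identification error rate against the threshold.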
Referring to Figure 10, Figure 10 is a schematic flowchart of another part of the sub-steps included in step S110 in Fig. 8. Step S110 can also include sub-step S116 and sub-step S117.
Sub-step S116: extracting a first feature set and a second feature set of the preprocessed sample images respectively.
The edge and image centroid of each sample in each model library Dataj (j=0,…,N-1) are obtained, and the distance from the edge to the centroid is calculated. Taking the horizontal direction through the centroid as the starting point and rotating clockwise, every 30 degrees forms a sector; the average edge-to-centroid distance within each sector is calculated and used as a feature of sample Sij (i=0~M, j=0~N). This gives the first feature set, which can include 12 features, denoted fi (i=0,…,11). Here M is the number of samples per class and N is the total number of sample types.
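A sketch of the 12 sector features, assuming the edge is given as a set of (x, y) points; the exact sector orientation (clockwise vs. counter-clockwise in image coordinates) is a simplifying assumption here.

```python
import numpy as np

def sector_features(edge_points, num_sectors=12):
    """First feature set f0..f11: mean edge-to-centroid distance in each
    30-degree sector around the centroid of the edge points."""
    pts = np.asarray(edge_points, dtype=float)
    centroid = pts.mean(axis=0)
    d = pts - centroid
    dist = np.hypot(d[:, 0], d[:, 1])
    ang = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    sector = (ang // (2 * np.pi / num_sectors)).astype(int) % num_sectors
    feats = np.zeros(num_sectors)
    for s in range(num_sectors):
        in_s = sector == s
        if in_s.any():
            feats[s] = dist[in_s].mean()   # empty sectors stay 0
    return feats

# Sanity check: the edge of a unit circle gives ~1.0 in every sector.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
feats = sector_features(circle)
print(feats.round(3))  # ~1.0 in every sector
```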
The geometric moments of the sample are calculated to obtain the second feature set of the sample. The second feature set can include 7 features, denoted fi (i=12,…,18).
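The patent only says "geometric moments"; a common choice for exactly seven moment features is Hu's seven invariant moments, sketched below in plain NumPy under that assumption.

```python
import numpy as np

def hu_moments(mask):
    """Second feature set f12..f18: Hu's seven moment invariants of a
    binary mask (a plausible concrete reading of the "7 geometric
    moment features" in the text)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)
    cx, cy = xs.mean(), ys.mean()
    x, y = xs - cx, ys - cy

    def eta(p, q):
        # normalized central moment eta_pq = mu_pq / mu_00^(1+(p+q)/2)
        return (x**p * y**q).sum() / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11**2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

mask = np.zeros((20, 20), dtype=np.uint8)
mask[5:15, 8:12] = 1                     # a 10x4 rectangle
print(hu_moments(mask).shape)            # (7,)
```

For a doubly symmetric shape like this rectangle, the odd-order invariants h3, h5, h7 vanish, which is a quick way to sanity-check the implementation.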
Sub-step S117: training according to the first feature set and second feature set corresponding to each behavior to obtain the second identification model.
The features fi (i=0,…,18) composed of the first feature set and the second feature set form the training samples. A random forest is trained with a part of the samples in each model library Dataj (for example, 4/5 of the total samples in Dataj) to obtain a random forest model. The random forest model is then verified and adjusted with the remaining samples in Dataj, so as to obtain the second identification model.
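Training the second identification model on the 19 features f0..f18 can be sketched with scikit-learn's `RandomForestClassifier`; the estimator count and the synthetic class data below are assumptions, standing in for the real behavior libraries.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for N behavior classes: 19 features per sample
# (12 sector distances + 7 moment features), 100 samples per class.
n_classes, per_class = 3, 100
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(per_class, 19))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

# 4/5 train, 1/5 verify, mirroring the embodiment's split
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_tr, y_tr)
print(round(forest.score(X_va, y_va), 2))  # verification accuracy
```

The verification accuracy on the held-out 1/5 plays the role of the error-rate check used to decide whether the model needs further adjustment.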
Referring to Figure 11, Figure 11 is a block diagram of the behavior recognition device 200 provided by the preferred embodiment of the present invention. The behavior recognition device 200 is applied to the computing device 100, where the computing device 100 includes an identification model for performing behavior recognition. The behavior recognition device 200 includes a processing module 220, a detection module 230 and an identification module 250.
The processing module 220 is configured to perform image processing on the collected visible light image and infrared image to obtain a target region to be tracked.
In the present embodiment, the processing module 220 is configured to perform step S120 in Fig. 2; for a specific description of the processing module 220, refer to the description of step S120 in Fig. 2.
The detection module 230 is configured to detect whether the target region to be tracked includes a pedestrian.
In the present embodiment, the detection module 230 is configured to perform step S130 in Fig. 2; for a specific description of the detection module 230, refer to the description of step S130 in Fig. 2.
The processing module 220 is further configured to, when a pedestrian is included, track the pedestrian, and detect the pedestrian edge during tracking so as to obtain a pedestrian region to be identified from the target region to be tracked.
In the present embodiment, the processing module 220 is further configured to perform step S140 in Fig. 2; for a specific description of the processing module 220, refer to the description of step S140 in Fig. 2.
The identification module 250 is configured to input the pedestrian region to be identified into the identification model to obtain the behavior recognition result of the pedestrian.
In the present embodiment, the identification module 250 is configured to perform step S150 in Fig. 2; for a specific description of the identification module 250, refer to the description of step S150 in Fig. 2.
In summary, the present invention provides a behavior recognition method and device. The method is applied to a computing device, and the computing device includes an identification model for performing behavior recognition. After the visible light image and the infrared image are obtained, image processing is performed on the images to obtain a target region to be tracked. When it is detected that the target region to be tracked includes a pedestrian, the pedestrian is tracked, and the pedestrian edge is detected during tracking to obtain a pedestrian region to be identified. By inputting the pedestrian region to be identified into the identification model, the behavior recognition result of the pedestrian can be obtained. Behavior recognition is thus performed automatically on pedestrians in the image, which can prevent the occurrence of accidents and reduce harm.
The above are only preferred embodiments of the present invention and are not intended to limit the invention; various modifications and variations will occur to those skilled in the art. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (10)

1. A behavior recognition method applied to a computing device, characterized in that the computing device includes an identification model for performing behavior recognition, and the method includes:
performing image processing on a collected visible light image and infrared image to obtain a target region to be tracked;
detecting whether the target region to be tracked includes a pedestrian;
when a pedestrian is included, tracking the pedestrian, and detecting a pedestrian edge during tracking to obtain a pedestrian region to be identified from the target region to be tracked;
inputting the pedestrian region to be identified into the identification model to obtain a behavior recognition result of the pedestrian.
2. The method according to claim 1, characterized in that the step of performing image processing on the collected visible light image and infrared image to obtain the target region to be tracked includes:
collecting multiple frames of visible light background images to establish a first background image, and processing a current frame image with the first background image to obtain a first target motion suspicious region;
collecting multiple frames of infrared background images to establish a second background image, and processing a current infrared frame image with the second background image to obtain a second target motion suspicious region;
obtaining the target region to be tracked from the first target motion suspicious region and the second target motion suspicious region.
3. The method according to claim 2, characterized in that the step of obtaining the target region to be tracked from the first target motion suspicious region and the second target motion suspicious region includes:
calculating the degree of overlap between the first target motion suspicious region and the second target motion suspicious region;
if the degree of overlap is less than a preset degree of overlap, stopping tracking and judgment;
if the degree of overlap is greater than the preset degree of overlap, processing the overlapping region of the first target motion suspicious region and the second target motion suspicious region to obtain the target region to be tracked.
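The degree-of-overlap test in claim 3 can be sketched as an intersection-over-union comparison of the two suspicious regions (IoU is an assumption; the claim does not fix the exact overlap measure, nor the function names used here):

```python
def overlap_degree(box_a, box_b):
    """Degree of overlap between two (x1, y1, x2, y2) boxes, here taken
    as intersection-over-union."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def fuse_regions(visible_box, infrared_box, threshold=0.5):
    """Claim 3: below the preset overlap, stop tracking and judgment;
    above it, keep the overlapping region as the target region to be tracked."""
    if overlap_degree(visible_box, infrared_box) < threshold:
        return None  # stop tracking and judgment
    return (max(visible_box[0], infrared_box[0]),
            max(visible_box[1], infrared_box[1]),
            min(visible_box[2], infrared_box[2]),
            min(visible_box[3], infrared_box[3]))

print(fuse_regions((0, 0, 10, 10), (1, 1, 11, 11)))  # (1, 1, 10, 10)
print(fuse_regions((0, 0, 10, 10), (9, 9, 20, 20)))  # None: too little overlap
```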
4. The method according to claim 1, characterized in that the identification model includes a first identification model, and the step of inputting the pedestrian region to be identified into the identification model to obtain the behavior recognition result includes:
inputting the pedestrian region to be identified into the first identification model to obtain a first output result, the first output result including multiple behaviors and a weight corresponding to each behavior;
taking the behaviors whose weights are greater than a preset weight threshold as a first screening result;
if the first screening result includes only one behavior, taking that behavior as the recognition result;
if the first screening result includes multiple behaviors, determining the recognition result from the multiple behaviors.
5. The method according to claim 4, characterized in that the identification model further includes a second identification model, and the step of determining the recognition result from the multiple behaviors if the first screening result includes multiple behaviors includes:
extracting a first feature set of the pedestrian region to be identified according to the edge and centroid of the pedestrian region to be identified;
calculating geometric moments of the pedestrian region to be identified to extract a second feature set of the pedestrian region to be identified;
inputting the first feature set and the second feature set into the second identification model to obtain a second output result;
determining the recognition result from the multiple behaviors according to the second output result.
6. The method according to claim 5, characterized in that the behavior corresponding to the maximum weight in the second output result is taken as a second screening result, and the step of determining the recognition result from the multiple behaviors according to the second output result includes:
if the second screening result is one of the multiple behaviors, taking that behavior as the recognition result;
if the second screening result is not any one of the multiple behaviors, repeating the behavior recognition a preset number of times;
if the behavior recognition result still cannot be obtained after repeating the preset number of times, generating an alarm.
7. The method according to claim 1, characterized in that the method further includes:
executing a preset strategy according to the recognition result;
the step of executing the preset strategy according to the recognition result including:
judging whether the recognition result meets a preset abnormal behavior condition;
if the recognition result meets the preset abnormal behavior condition, performing key monitoring on the pedestrian corresponding to the recognition result and generating an alarm at the same time.
8. The method according to claim 1, characterized in that the method further includes:
training the identification model;
the step of training the identification model including:
collecting behavior images of different behaviors of pedestrians as sample images;
performing image processing on the sample images to obtain a first pedestrian region including a pedestrian;
detecting the pedestrian edge of the pedestrian in the first pedestrian region, obtaining a second pedestrian region based on the pedestrian edge, and taking the second pedestrian region as a preprocessed sample image;
establishing a model library from the preprocessed sample images, and training on the preprocessed sample images corresponding to each behavior in the model library to obtain the first identification model.
9. The method according to claim 8, characterized in that the step of training the identification model further includes:
extracting a first feature set and a second feature set of the preprocessed sample images respectively;
training according to the first feature set and the second feature set corresponding to each behavior to obtain a second identification model.
10. A behavior recognition device applied to a computing device, characterized in that the computing device includes an identification model for performing behavior recognition, and the device includes:
a processing module, configured to perform image processing on a collected visible light image and infrared image to obtain a target region to be tracked;
a detection module, configured to detect whether the target region to be tracked includes a pedestrian;
the processing module being further configured to, when a pedestrian is included, track the pedestrian, and detect the pedestrian edge during tracking to obtain a pedestrian region to be identified from the target region to be tracked;
an identification module, configured to input the pedestrian region to be identified into the identification model to obtain the behavior recognition result of the pedestrian.
CN201710544459.1A 2017-07-06 2017-07-06 Behavior recognition method and device Active CN107247946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710544459.1A CN107247946B (en) 2017-07-06 2017-07-06 Behavior recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710544459.1A CN107247946B (en) 2017-07-06 2017-07-06 Behavior recognition method and device

Publications (2)

Publication Number Publication Date
CN107247946A true CN107247946A (en) 2017-10-13
CN107247946B CN107247946B (en) 2021-01-26

Family

ID=60013968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710544459.1A Active CN107247946B (en) 2017-07-06 2017-07-06 Behavior recognition method and device

Country Status (1)

Country Link
CN (1) CN107247946B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229337A (en) * 2017-12-14 2018-06-29 阿里巴巴集团控股有限公司 The method, apparatus and equipment of a kind of data processing
CN108983956A (en) * 2017-11-30 2018-12-11 成都通甲优博科技有限责任公司 Body feeling interaction method and device
CN109377769A (en) * 2018-10-24 2019-02-22 东北林业大学 A kind of walker signal lamp timing system control method based on infrared thermal imaging technique
CN109709546A (en) * 2019-01-14 2019-05-03 珠海格力电器股份有限公司 Pet state monitoring method and device
CN109870250A (en) * 2019-01-27 2019-06-11 武汉星巡智能科技有限公司 Region exception body temperature monitoring method, device and computer readable storage medium
CN110059531A (en) * 2018-12-19 2019-07-26 浙江宇视科技有限公司 Behavioral value method and device of fighting based on video image
CN110223325A (en) * 2019-06-18 2019-09-10 北京字节跳动网络技术有限公司 Method for tracing object, device and equipment
CN110751034A (en) * 2019-09-16 2020-02-04 平安科技(深圳)有限公司 Pedestrian behavior identification method and terminal equipment
CN112084882A (en) * 2020-08-18 2020-12-15 深圳英飞拓科技股份有限公司 Behavior detection method and device and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096803A (en) * 2010-11-29 2011-06-15 吉林大学 Safe state recognition system for people on basis of machine vision
US20130343071A1 (en) * 2012-06-26 2013-12-26 Honda Motor Co., Ltd. Light distribution controller
CN105469054A (en) * 2015-11-25 2016-04-06 天津光电高斯通信工程技术股份有限公司 Model construction method of normal behaviors and detection method of abnormal behaviors
CN106327461A (en) * 2015-06-16 2017-01-11 浙江大华技术股份有限公司 Image processing method and device used for monitoring
CN106571014A (en) * 2016-10-24 2017-04-19 上海伟赛智能科技有限公司 Method for identifying abnormal motion in video and system thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096803A (en) * 2010-11-29 2011-06-15 吉林大学 Safe state recognition system for people on basis of machine vision
US20130343071A1 (en) * 2012-06-26 2013-12-26 Honda Motor Co., Ltd. Light distribution controller
CN106327461A (en) * 2015-06-16 2017-01-11 浙江大华技术股份有限公司 Image processing method and device used for monitoring
CN105469054A (en) * 2015-11-25 2016-04-06 天津光电高斯通信工程技术股份有限公司 Model construction method of normal behaviors and detection method of abnormal behaviors
CN106571014A (en) * 2016-10-24 2017-04-19 上海伟赛智能科技有限公司 Method for identifying abnormal motion in video and system thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孟彩霞: "基于融合双通道视频的暴恐人员检测仿真", 《计算机仿真》 *
马建平 等: "Android 智能手机自适应手势识别方法", 《小型微型计算机系统》 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108983956A (en) * 2017-11-30 2018-12-11 成都通甲优博科技有限责任公司 Body feeling interaction method and device
CN108983956B (en) * 2017-11-30 2021-07-06 成都通甲优博科技有限责任公司 Somatosensory interaction method and device
CN108229337A (en) * 2017-12-14 2018-06-29 阿里巴巴集团控股有限公司 The method, apparatus and equipment of a kind of data processing
US11106909B2 (en) 2017-12-14 2021-08-31 Advanced New Technologies Co., Ltd. Recognizing carbon-saving behaviors from images
US10878239B2 (en) 2017-12-14 2020-12-29 Advanced New Technologies Co., Ltd. Recognizing carbon-saving behaviors from images
CN108229337B (en) * 2017-12-14 2021-03-30 创新先进技术有限公司 Data processing method, device and equipment
CN109377769A (en) * 2018-10-24 2019-02-22 东北林业大学 A kind of walker signal lamp timing system control method based on infrared thermal imaging technique
CN109377769B (en) * 2018-10-24 2022-03-15 东北林业大学 Pedestrian signal lamp timing system control method based on infrared thermal imaging technology
CN110059531B (en) * 2018-12-19 2021-06-01 浙江宇视科技有限公司 Frame-fighting behavior detection method and device based on video images
CN110059531A (en) * 2018-12-19 2019-07-26 浙江宇视科技有限公司 Behavioral value method and device of fighting based on video image
CN109709546A (en) * 2019-01-14 2019-05-03 珠海格力电器股份有限公司 Pet state monitoring method and device
CN109870250A (en) * 2019-01-27 2019-06-11 武汉星巡智能科技有限公司 Region exception body temperature monitoring method, device and computer readable storage medium
CN110223325B (en) * 2019-06-18 2021-04-27 北京字节跳动网络技术有限公司 Object tracking method, device and equipment
CN110223325A (en) * 2019-06-18 2019-09-10 北京字节跳动网络技术有限公司 Method for tracing object, device and equipment
CN110751034A (en) * 2019-09-16 2020-02-04 平安科技(深圳)有限公司 Pedestrian behavior identification method and terminal equipment
CN110751034B (en) * 2019-09-16 2023-09-01 平安科技(深圳)有限公司 Pedestrian behavior recognition method and terminal equipment
CN112084882A (en) * 2020-08-18 2020-12-15 深圳英飞拓科技股份有限公司 Behavior detection method and device and computer readable storage medium

Also Published As

Publication number Publication date
CN107247946B (en) 2021-01-26

Similar Documents

Publication Publication Date Title
CN107247946A (en) Activity recognition method and device
CN106412501B (en) A kind of the construction safety behavior intelligent monitor system and its monitoring method of video
US10007850B2 (en) System and method for event monitoring and detection
JP6942029B2 (en) Fire monitoring system
CN110414400B (en) Automatic detection method and system for wearing of safety helmet on construction site
US8611664B2 (en) Method for detecting fire-flame using fuzzy finite automata
Liao et al. Slip and fall event detection using Bayesian Belief Network
EP3159859B1 (en) Human presence detection in a home surveillance system
CN106571014A (en) Method for identifying abnormal motion in video and system thereof
KR20190046351A (en) Method and Apparatus for Detecting Intruder
CN111126153B (en) Safety monitoring method, system, server and storage medium based on deep learning
CN106210634A (en) A kind of wisdom gold eyeball identification personnel fall down to the ground alarm method and device
CN108564069A (en) A kind of industry safe wearing cap video detecting method
CN110633643A (en) Abnormal behavior detection method and system for smart community
CN105575034A (en) Image processing and analysis method of double-waveband forest fireproof intelligent monitoring software
CN104463253A (en) Fire fighting access safety detection method based on self-adaptation background study
CN113469654B (en) Multi-level safety control system of transformer substation based on intelligent algorithm fuses
JP2020057236A (en) Smoke detection device and smoke identification method
CN103945197A (en) Electric power facility external damage prevention warming scheme based on video motion detecting technology
CN116563776A (en) Method, system, medium and equipment for warning illegal behaviors based on artificial intelligence
KR101311148B1 (en) Visual surveillance system and method for detecting object in the visual surveillance system
De Venâncio et al. Fire detection based on a two-dimensional convolutional neural network and temporal analysis
CN107330884A (en) Ignition point detection method and device
CN105451235A (en) Wireless sensor network intrusion detection method based on background updating
Deepak et al. Design and utilization of bounding box in human detection and activity identification

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100080 Room 301, 3 / F, 8 caihefang Road, Haidian District, Beijing

Patentee after: Beijing anningwell emergency fire safety technology Co.,Ltd.

Address before: 100080 Room 301, 3 / F, 8 caihefang Road, Haidian District, Beijing

Patentee before: BEIJING ANYWELL TECHNOLOGY DEVELOPMENT Co.,Ltd.

CP01 Change in the name or title of a patent holder