CN109190601A - Target object recognition method and device in a surveillance scene - Google Patents

Target object recognition method and device in a surveillance scene

Info

Publication number
CN109190601A
CN109190601A (application number CN201811223348.1A)
Authority
CN
China
Prior art keywords
feature information
surveillance video
facial features
feature
identity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811223348.1A
Other languages
Chinese (zh)
Inventor
张曼
黄永祯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yinhe shuidi Technology (Ningbo) Co.,Ltd.
Original Assignee
Watrix Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Watrix Technology Beijing Co Ltd filed Critical Watrix Technology Beijing Co Ltd
Priority to CN201811223348.1A priority Critical patent/CN109190601A/en
Publication of CN109190601A publication Critical patent/CN109190601A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

This application provides a target object recognition method and device for a surveillance scene. The method comprises: obtaining first feature information of a first object and, according to the first feature information, determining the first object in a surveillance video; determining, in the surveillance video, second objects located within a preset distance of the first object; obtaining, according to second feature information of each second object, the number of appearances, or the duration of appearance, of the second object in the surveillance video, and filtering out selected second objects accordingly; and matching the second feature information of each selected second object against third feature information of each sample object in a database to determine the target object. Embodiments of the present application can locate suspicious target objects in surveillance video by image processing techniques, addressing the low efficiency and low accuracy of screening suspects by manually reviewing surveillance video in the prior art, thereby improving the efficiency and accuracy of surveillance video review.

Description

Target object recognition method and device in a surveillance scene
Technical field
This application relates to the technical field of image processing, and in particular to a target object recognition method and device for a surveillance scene.
Background art
In places with large flows of people, such as subway and bus stations, criminal offences such as theft occur easily. Currently, when a criminal offence such as theft occurs at such a place, suspects are mainly screened by obtaining the video of the monitoring devices installed around the place and reviewing it manually. However, screening suspects by manually reviewing surveillance video is not only inefficient, but also prone to failing to identify the suspect accurately.
Summary of the invention
In view of this, embodiments of the present application aim to provide a target object recognition method and device for a surveillance scene, which can locate suspicious target objects in surveillance video by image processing techniques, addressing the low efficiency and low accuracy of screening suspects by manually reviewing surveillance video in the prior art, thereby improving the efficiency and accuracy of surveillance video review.
In a first aspect, an embodiment of the present application provides a target object recognition method for a surveillance scene, comprising:
obtaining first feature information of a first object, the first feature information including a gait feature and/or facial features of the first object;
determining the first object in a surveillance video according to the first feature information;
determining, from the surveillance video, at least one second object within a preset distance of the first object, and extracting second feature information of each second object, the second feature information including a gait feature and/or facial features of the second object;
obtaining, according to the second feature information of each second object, the number of appearances, or the duration of appearance, of each second object in the surveillance video within a set time range;
filtering out second objects whose number of appearances is not less than a preset count, or whose duration of appearance is not less than a preset duration threshold, and taking them as selected second objects;
matching the second feature information of each selected second object against third feature information of each sample object included in a database, and determining a selected second object that matches successfully as the target object; wherein the third feature information includes a gait feature and/or facial features of the sample object.
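The claimed method is, in essence, a filtering pipeline over the people seen near the first object. As a rough illustration only — the patent specifies no data structures or concrete thresholds, so every name and value below is an assumption — the selection and database-matching steps might be sketched as:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """One person seen near the first object (hypothetical structure)."""
    obj_id: str
    appearances: int = 0      # times seen within the set time range
    duration_s: float = 0.0   # total seconds on screen

def select_second_objects(candidates, min_count=3, min_duration_s=60.0):
    """Keep second objects appearing at least `min_count` times OR for at
    least `min_duration_s` seconds (the patent's either/or filter)."""
    return [c for c in candidates
            if c.appearances >= min_count or c.duration_s >= min_duration_s]

def pick_targets(selected, database_match):
    """Return the selected second objects whose features match a database
    sample; `database_match` is a stand-in predicate for feature matching."""
    return [s for s in selected if database_match(s)]

if __name__ == "__main__":
    nearby = [TrackedObject("A", appearances=5, duration_s=12.0),
              TrackedObject("B", appearances=1, duration_s=200.0),
              TrackedObject("C", appearances=1, duration_s=5.0)]
    selected = select_second_objects(nearby)
    print([s.obj_id for s in selected])                        # ['A', 'B']
    print([t.obj_id for t in pick_targets(selected, lambda s: s.obj_id == "A")])
```

Here "A" survives on appearance count, "B" on duration, and "C" is discarded; the database match then narrows the survivors to the target object.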
With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect, wherein determining the first object in the surveillance video according to the first feature information comprises:
obtaining the facial features of each object in the surveillance video;
detecting, for each object, whether its facial features satisfy a face recognition condition;
matching the facial features of the objects that satisfy the face recognition condition against the facial features of the first object, and determining an object whose facial features match successfully as the first object;
extracting the gait features of the objects whose facial features do not satisfy the face recognition condition, matching the extracted gait features against the gait feature of the first object, and determining an object whose gait feature matches successfully as the first object.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect, wherein determining the first object in the surveillance video according to the first feature information comprises:
obtaining the facial features of each object in the surveillance video;
matching the facial features of each object against the facial features of the first object, and determining an object whose facial features match successfully as the first object.
With reference to the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect, wherein determining the first object in the surveillance video according to the first feature information comprises:
obtaining the gait feature of each object in the surveillance video;
matching the gait feature of each object against the gait feature of the first object, and determining an object whose gait feature matches successfully as the first object.
With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect, wherein obtaining, according to the second feature information of each second object, the number of appearances of each second object in the surveillance video within the set time range comprises:
performing the following operations for each second object:
obtaining fourth feature information of each object appearing in the surveillance video within the set time range;
matching the fourth feature information against the second feature information of the second object;
determining an object that matches successfully as the second object, and tracking the second object;
when the second object can no longer be tracked in the surveillance video, counting one appearance of the second object, so as to determine the number of appearances of the second object in the surveillance video.
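The counting rule above — one appearance per tracked interval, closed when the track is lost — can be sketched as follows. The per-frame sets of matched object ids stand in for the feature-matching and tracking steps, which the patent leaves unspecified:

```python
def count_appearances(frame_detections, obj_id):
    """Count appearances of `obj_id`: an appearance runs from the frame
    where the object is first matched until the frame where tracking is
    lost. `frame_detections` is a list of per-frame sets of matched ids."""
    count = 0
    tracking = False
    for detected in frame_detections:
        if obj_id in detected:
            tracking = True           # object enters (or stays in) the video
        elif tracking:
            count += 1                # track lost -> one appearance counted
            tracking = False
    if tracking:                      # still on screen at the end of the range
        count += 1
    return count

frames = [{"p1"}, {"p1"}, set(), {"p1", "p2"}, set(), {"p2"}]
print(count_appearances(frames, "p1"))  # 2
```

"p1" is seen in frames 0-1 and again in frame 3, with track losses in between, so two appearances are counted.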
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation of the first aspect, wherein obtaining, according to the second feature information of each second object, the number of appearances of each second object in the surveillance video within the set time range further comprises:
performing the following operations for each second object:
obtaining, in each of multiple video segments within the set time range of the surveillance video, the fourth feature information of each object in the segment;
matching the fourth feature information against the second feature information of the second object, and determining an object that matches successfully as the second object;
counting the number of video segments in which the second object appears, so as to determine the number of appearances of the second object in the surveillance video.
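This segment-based variant counts the number of segments containing a match rather than tracked intervals. A minimal sketch, with a toy predicate standing in for the feature-matching step (the patent does not define how a match is decided):

```python
def count_segment_appearances(segments, matches):
    """Count in how many video segments the second object appears.
    `segments` is a list of segments, each a list of fourth-feature
    entries; `matches(feat)` decides whether an entry matches the second
    object's second feature information."""
    return sum(1 for seg in segments if any(matches(f) for f in seg))

# Toy features: the second object is the one whose feature equals "X".
segs = [["X", "Y"], ["Y"], ["X"], []]
print(count_segment_appearances(segs, lambda f: f == "X"))  # 2
```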
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation of the first aspect, wherein obtaining, according to the second feature information of each second object, the duration of appearance of each second object in the surveillance video within the set time range comprises:
performing the following operations for each second object:
obtaining fourth feature information of each object appearing in the surveillance video within the set time range;
matching the fourth feature information against the second feature information of the second object;
determining an object that matches successfully as the second object, tracking the second object, and counting the number of consecutive image frames in which the second object appears in the surveillance video;
determining the duration of appearance of the second object in the surveillance video from the number of image frames.
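Converting a consecutive-frame count into a duration only needs the video's frame rate. The patent does not name a frame rate, so the 25 fps used here is purely illustrative:

```python
def longest_consecutive_run(frame_flags):
    """Longest run of frames in which the object is continuously present."""
    best = cur = 0
    for present in frame_flags:
        cur = cur + 1 if present else 0
        best = max(best, cur)
    return best

def appearance_duration_s(consecutive_frames, fps=25.0):
    """Duration in seconds implied by a consecutive-frame count; the fps
    value is an assumption, not specified by the patent."""
    return consecutive_frames / fps

flags = [True, True, True, False, True, True]
run = longest_consecutive_run(flags)        # 3 frames
print(appearance_duration_s(run, fps=25.0)) # 0.12
```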
With reference to the first aspect, an embodiment of the present application provides a seventh possible implementation of the first aspect, further comprising:
inputting the gait feature of each selected second object into a pre-trained gait recognition model to predict a first identity of the selected second object, and obtaining the prediction probability of the first identity of each selected second object; and
inputting the facial features of each selected second object into a pre-trained face recognition model to predict a second identity of the selected second object, and obtaining the prediction probability of the second identity of each selected second object;
calculating, according to a preset weight of the first identity and a preset weight of the second identity, a target prediction probability for each candidate identity of each selected second object;
filtering out the target object from the selected second objects according to the target prediction probabilities of the candidate identities of each selected second object.
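The weighted combination of the two models' per-identity probabilities can be sketched as below. The 0.6/0.4 weights and the identity names are illustrative assumptions; the patent only states that the weights are preset:

```python
def fuse_identity_probabilities(gait_probs, face_probs, w_gait=0.6, w_face=0.4):
    """Combine per-identity probabilities from a gait model and a face
    model into target prediction probabilities by a preset weighting."""
    identities = set(gait_probs) | set(face_probs)
    return {i: w_gait * gait_probs.get(i, 0.0) + w_face * face_probs.get(i, 0.0)
            for i in identities}

gait = {"alice": 0.7, "bob": 0.3}   # gait model favors alice
face = {"alice": 0.1, "bob": 0.9}   # face model favors bob
fused = fuse_identity_probabilities(gait, face)
best = max(fused, key=fused.get)
print(best)  # bob (0.54 vs 0.46)
```

With these weights the face model's stronger evidence for "bob" (0.4 x 0.9 = 0.36) outweighs the gait model's preference for "alice", so "bob" gets the highest target prediction probability.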
In a second aspect, an embodiment of the present application provides a target object recognition device for a surveillance scene, comprising:
a first object information obtaining module, configured to obtain first feature information of a first object, the first feature information including a gait feature and/or facial features of the first object;
a first object determining module, configured to determine the first object in a surveillance video according to the first feature information;
a second object determining module, configured to determine, from the surveillance video, at least one second object within a preset distance of the first object, and to extract second feature information of each second object, the second feature information including a gait feature and/or facial features of the second object;
a second object information obtaining module, configured to obtain, according to the second feature information of each second object, the number of appearances, or the duration of appearance, of each second object in the surveillance video within a set time range;
a selected second object determining module, configured to filter out second objects whose number of appearances is not less than a preset count, or whose duration of appearance is not less than a preset duration threshold, as selected second objects;
a target object determining module, configured to match the second feature information of each selected second object against third feature information of each sample object included in a database, and to determine a selected second object that matches successfully as the target object.
With reference to the second aspect, an embodiment of the present application provides a first possible implementation of the second aspect, further comprising another target object determining module;
the other target object determining module is configured to determine the target object by the following steps:
inputting the gait feature of each selected second object into a pre-trained gait recognition model to predict a first identity of the selected second object, and obtaining the prediction probability of the first identity of each selected second object; and
inputting the facial features of each selected second object into a pre-trained face recognition model to predict a second identity of the selected second object, and obtaining the prediction probability of the second identity of each selected second object;
calculating, according to a preset weight of the first identity and a preset weight of the second identity, a target prediction probability for each candidate identity of each selected second object;
filtering out the target object from the selected second objects according to the target prediction probabilities of the candidate identities of each selected second object.
In the target object recognition method and device for a surveillance scene provided by embodiments of the present application, first feature information of a first object is obtained, and the first object is determined in a surveillance video according to the first feature information; at least one second object within a preset distance of the first object is determined from the surveillance video, and second feature information of each second object is extracted; according to the second feature information of each second object, the number of appearances, or the duration of appearance, of each second object in the surveillance video within a set time range is obtained; second objects whose number of appearances is not less than a preset count, or whose duration of appearance is not less than a preset duration threshold, are filtered out as selected second objects; finally, the second feature information of each selected second object is matched against the third feature information of each sample object included in a database, and a selected second object that matches successfully is determined as the target object. Embodiments of the present application can locate suspicious target objects in surveillance video by image processing techniques, addressing the low efficiency and low accuracy of screening suspects by manually reviewing surveillance video in the prior art, thereby improving the efficiency and accuracy of surveillance video review.
To make the above objects, features, and advantages of the present application clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope. Those of ordinary skill in the art can obtain other related drawings from these drawings without creative effort.
Fig. 1 shows a flowchart of a target object recognition method for a surveillance scene provided by an embodiment of the present application;
Fig. 2 shows a flowchart of a first way of determining the first object in the target object recognition method for a surveillance scene provided by an embodiment of the present application;
Fig. 3 shows a flowchart of a second way of determining the first object in the target object recognition method for a surveillance scene provided by an embodiment of the present application;
Fig. 4 shows a flowchart of a third way of determining the first object in the target object recognition method for a surveillance scene provided by an embodiment of the present application;
Fig. 5 shows a flowchart of a first way of determining the number of appearances of a second object in the surveillance video in the target object recognition method provided by an embodiment of the present application;
Fig. 6 shows a flowchart of a second way of determining the number of appearances of a second object in the surveillance video in the target object recognition method provided by an embodiment of the present application;
Fig. 7 shows a flowchart of determining the duration of appearance of a second object in the surveillance video in the target object recognition method provided by an embodiment of the present application;
Fig. 8 shows a flowchart of another way of determining the target object in the target object recognition method provided by an embodiment of the present application;
Fig. 9 shows a schematic structural diagram of a target object recognition device for a surveillance scene provided by an embodiment of the present application;
Fig. 10 shows a schematic structural diagram of a computer device provided by an embodiment of the present application.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the drawings, can be arranged and designed in many different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the claimed scope of the present application, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of this application.
At present, screening suspects by manually reviewing surveillance video is not only inefficient, but also prone to failing to identify the suspect accurately. Based on this, the target object recognition method and device for a surveillance scene provided by the present application can locate suspicious target objects in surveillance video by image processing techniques, addressing the low efficiency and low accuracy of screening suspects by manually reviewing surveillance video in the prior art, thereby improving the efficiency and accuracy of surveillance video review.
To facilitate understanding of the present embodiment, the target object recognition method for a surveillance scene disclosed in the embodiments of the present application is first described in detail.
As shown in Fig. 1, the target object recognition method for a surveillance scene provided by an embodiment of the present application includes steps S101 to S106:
S101: obtain first feature information of a first object.
Here, the first feature information includes a gait feature and/or facial features of the first object.
In one possible application scenario, after a theft occurs in a certain area, an object in the area's surveillance video can be chosen as the first object; for example, the victim can be chosen as the first object.
S102: determine the first object in the surveillance video according to the first feature information.
Here, the goal is to find suspicious target objects in the surveillance video, i.e., target objects that may have committed theft or another criminal offence against the first object. It is therefore first necessary to judge whether each frame of the surveillance video contains the first object, and to locate the first object in each frame of the surveillance video.
Optionally, the surveillance video is the video of the place where the first object may have been stolen from, for example the video of a location in a subway or bus station.
In specific implementations, embodiments of the present application determine the first object in the following three ways:
Way one:
As shown in Fig. 2, an embodiment of the present application determines the first object in the following way:
S201: obtain the facial features of each object in the surveillance video.
In a specific implementation, the facial features of each object in each frame of the surveillance video are obtained.
Optionally, the human body region of each object in each frame to be processed is determined first; then, the face region within the human body region is determined based on the geometric relationship between the face and the body; finally, the image of the face region is obtained as the face image. This approach does not perform facial feature recognition on the whole image; it first identifies the human body region and then uses it to locate the face region, shortening the recognition time of facial feature recognition and improving recognition efficiency.
S202: for each object, detect whether its facial features satisfy the face recognition condition.
Here, because the face image of an object captured in the surveillance video may be blurry and thus unsuitable for facial feature recognition, it is necessary to detect whether the facial features of each object satisfy the face recognition condition.
For example, the facial features include: the left eye, the right eye, the nose, the left mouth corner, and the right mouth corner.
In a specific implementation, optionally, a score for each of these five facial features is obtained with a fully convolutional network, and the sum of the scores is compared with a preset score threshold. If the sum of the scores is greater than or equal to the preset score threshold, the face recognition condition is satisfied; if the sum of the scores is less than the preset score threshold, the face recognition condition is not satisfied.
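The threshold check on the five landmark scores is straightforward once the scores exist. In the patent the scores come from a fully convolutional network; in this sketch they are supplied directly, and the 3.5 threshold is an arbitrary assumption:

```python
def passes_face_condition(landmark_scores, threshold=3.5):
    """Decide whether a face is clear enough for recognition by summing
    the confidence scores of the five facial landmarks and comparing the
    sum with a preset threshold. Missing landmarks score 0."""
    required = ("left_eye", "right_eye", "nose", "left_mouth", "right_mouth")
    total = sum(landmark_scores.get(k, 0.0) for k in required)
    return total >= threshold

clear = {"left_eye": 0.9, "right_eye": 0.9, "nose": 0.8,
         "left_mouth": 0.7, "right_mouth": 0.7}      # sum = 4.0
blurry = {"left_eye": 0.4, "right_eye": 0.3, "nose": 0.5,
          "left_mouth": 0.2, "right_mouth": 0.2}     # sum = 1.6
print(passes_face_condition(clear), passes_face_condition(blurry))  # True False
```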
S203: match the facial features of the objects that satisfy the face recognition condition against the facial features of the first object, and determine an object whose facial features match successfully as the first object.
In a specific implementation, the similarity between the feature vector of the facial features of each object that satisfies the face recognition condition and the feature vector of the facial features of the first object is calculated. If the maximum similarity among all objects is greater than a preset similarity threshold, the match is deemed successful, and the object with the maximum similarity is determined as the first object.
Optionally, when extracting the feature vector of the facial features, a convolutional neural network or the scale-invariant feature transform (SIFT) algorithm, among others, can be used.
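The max-over-candidates matching rule can be sketched with cosine similarity, one common choice for comparing feature vectors — the patent does not fix the similarity measure, and the 0.8 threshold below is an assumption:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_first_object(candidates, first_obj_vec, threshold=0.8):
    """Return the candidate id with maximum similarity to the first
    object's feature vector, or None if even the best similarity does
    not exceed the preset threshold."""
    best_id, best_sim = None, -1.0
    for obj_id, vec in candidates.items():
        sim = cosine_similarity(vec, first_obj_vec)
        if sim > best_sim:
            best_id, best_sim = obj_id, sim
    return best_id if best_sim > threshold else None

cands = {"a": [1.0, 0.0], "b": [0.0, 1.0]}
print(match_first_object(cands, [1.0, 0.05]))  # a
```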
S204: extract the gait features of the objects whose facial features do not satisfy the face recognition condition, match the extracted gait features against the gait feature of the first object, and determine an object whose gait feature matches successfully as the first object.
Here, the facial features of an object in the surveillance video sometimes cannot be captured, or the captured facial features do not satisfy the face recognition condition. In that case, the object's gait feature can be captured, and the first object can be determined in the surveillance video by matching the captured gait feature against the gait feature of the first object. Matching gait features requires multiple frames. Embodiments of the present application determine an object's gait feature as follows:
First, from at least one associated image of the N-th frame, extract the gait feature of an object whose facial features in the N-th frame do not satisfy the face recognition condition.
Here, the associated images include, but are not limited to: the X consecutive frames immediately preceding the N-th frame in the surveillance video; the Y consecutive frames immediately following the N-th frame; or both the X consecutive preceding frames and the Y consecutive following frames.
Optionally, when extracting gait features, a Hough-transform-based gait feature extraction algorithm, a particle-filter-tracking-based gait feature extraction algorithm, or the like can be used.
Then, the extracted gait feature is matched against the gait feature of the first object. For example, the similarity between the extracted gait feature and the gait feature of the first object is calculated; if the maximum similarity among all objects is greater than a preset similarity threshold, the match is deemed successful, and the object with the maximum similarity is determined as the first object.
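Selecting the associated frames around frame N is a small windowing step. The clipping-to-video-bounds behavior below is an assumption; the patent only lists the three window shapes (X before, Y after, or both):

```python
def associated_frames(n, x, y, total_frames):
    """Indices of the frames associated with frame n for gait feature
    extraction: the x consecutive frames before n and the y consecutive
    frames after n, clipped to the valid frame range [0, total_frames)."""
    before = list(range(max(0, n - x), n))
    after = list(range(n + 1, min(total_frames, n + 1 + y)))
    return before + [n] + after

print(associated_frames(10, x=3, y=2, total_frames=100))  # [7, 8, 9, 10, 11, 12]
print(associated_frames(1, x=3, y=2, total_frames=100))   # [0, 1, 2, 3]
```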
Way two:
As shown in Fig. 3, an embodiment of the present application determines the first object in the following way:
S301: obtain the facial features of each object in the surveillance video.
S302: match the facial features of each object against the facial features of the first object, and determine an object whose facial features match successfully as the first object.
Way three:
As shown in Fig. 4, an embodiment of the present application determines the first object in the following way:
S401: obtain the gait feature of each object in the surveillance video.
S402: match the gait feature of each object against the gait feature of the first object, and determine an object whose gait feature matches successfully as the first object.
It should be noted that, in specific implementations, ways two and three only match the facial features or gait feature of each object against the facial features or gait feature of the first object, whereas way one first matches the facial features of the objects satisfying the face recognition condition against the facial features of the first object, and falls back to matching the gait feature of each object against the gait feature of the first object when the facial features do not satisfy the face recognition condition. It can be seen that way one is a preferred implementation with higher precision, while ways two and three have lower precision.
After the first object has been determined in the surveillance video through the above steps, embodiments of the present application execute the following steps S103 to S106.
S103: determine, from the surveillance video, at least one second object within a preset distance of the first object, and extract second feature information of each second object.
Specifically, the second feature information includes a gait feature and/or facial features of the second object.
In one possible implementation, since a suspect needs to appear near the victim when committing theft or another criminal offence, at least one second object within a preset distance of the first object needs to be determined from the surveillance video.
In a specific implementation, this processing is performed on each frame of the surveillance video in which the first object appears: at least one second object within the preset distance of the first object is determined from the surveillance video, and the second feature information of each second object is extracted.
S104: according to the second feature information of each second object, obtain the number of appearances, or the appearance duration, of each second object in the surveillance video within a set time range.

If a person frequently appears within a certain period of time in a place with a large flow of people, that person's behavior is suspicious and the person is likely to be a suspect. Therefore, by counting the number of appearances, or the appearance duration, of each second object in the surveillance video within the set time range, second objects with a high number of appearances or a long appearance duration can be filtered out, narrowing the range in which the target object is sought.

It should be noted that, optionally, the surveillance video used here for determining the number of appearances or the appearance duration of the second object may be the same local surveillance video as that in steps S102 and S103, or surveillance video from a different place. The surveillance video in steps S102 and S103 serves primarily to determine the first object and the second object, and is therefore determined by where the first object appears, whereas the surveillance video of step S104 serves to determine the number of appearances, or the appearance duration, of the second object at a certain place; that place may be, for example, the entrance of a subway station, or some location in another subway station.
In specific implementation, the embodiment of the present application determines the number of appearances of the second object in the surveillance video in the following two modes:
Mode one:
Referring to Figure 5, for each second object, the following operations are performed:

S501: obtain the fourth feature information of each object in the surveillance video within the set time range.

Specifically, the fourth feature information includes the gait feature and/or facial features of the object.

S502: match the fourth feature information against the second feature information of the second object.

S503: determine the object that matches successfully as the second object, and track the second object.
S504: when the second object can no longer be tracked in the surveillance video, count the appearance of the second object as one occurrence, so as to determine the number of appearances of the second object in the surveillance video.
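Steps S501 to S504 can be sketched as follows: each tracked-then-lost episode counts as one appearance. The per-frame `match` predicate stands in for the feature matching of S502 and is a hypothetical interface.

```python
def count_appearances(frames, match):
    """frames: iterable of per-frame object lists; match(obj) -> bool.
    Count one appearance each time the matched object is tracked and then lost."""
    count = 0
    tracking = False
    for objs in frames:
        present = any(match(o) for o in objs)
        if present and not tracking:
            tracking = True          # start of a new appearance
        elif not present and tracking:
            tracking = False         # track lost: close the episode
            count += 1
    if tracking:                     # video ended while still tracked
        count += 1
    return count
```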
Mode two:
Referring to Figure 6, for each second object, the following operations are performed:

S601: in multiple video segments within the set time range of the surveillance video, obtain the fourth feature information of each object in each video segment.

Specifically, the fourth feature information includes the gait feature and/or facial features of the object.

S602: match the fourth feature information against the second feature information of the second object, and determine the object that matches successfully as the second object.
S603: determine the number of appearances of the second object in the surveillance video by counting the number of video segments in which the second object appears.
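Steps S601 to S603 amount to counting the video segments in which the second object matches at least once, which can be sketched as follows (the `match` predicate is again a hypothetical stand-in for the feature matching of S602):

```python
def count_appearances_by_segment(segments, match):
    """segments: list of per-segment object lists; match(obj) -> bool.
    The appearance count is the number of segments with at least one match."""
    return sum(1 for seg in segments if any(match(o) for o in seg))
```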
Referring to Figure 7, in specific implementation, the embodiment of the present application determines the appearance duration of the second object in the surveillance video in the following manner:

S701: obtain the fourth feature information of each object in the surveillance video within the set time range.

Specifically, the fourth feature information includes the gait feature and/or facial features of the object.

S702: match the fourth feature information against the second feature information of the second object.

S703: determine the object that matches successfully as the second object, track the second object, and count the number of consecutive image frames in the surveillance video in which the second object appears.
S704: determine, according to the number of image frames, the appearance duration of the second object in the surveillance video.
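Steps S701 to S704 can be sketched as follows, taking the longest consecutive run of matching frames and converting it to seconds. The frame-rate value and the choice of the longest run (rather than, say, the sum of all runs) are interpretive assumptions not fixed by the embodiment.

```python
def appearance_duration(frames, match, fps=25.0):
    """Longest consecutive run of frames containing the second object,
    converted to seconds at the given frame rate."""
    best = run = 0
    for objs in frames:
        if any(match(o) for o in objs):
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best / fps
```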
After the number of appearances, or the appearance duration, of the second object in the surveillance video has been obtained through the above steps, the target object recognition method under a monitoring scene provided by the embodiments of the present application further includes the following steps S105 and S106.
S105: filter out second objects whose number of appearances is not less than a preset number of times, or whose appearance duration is not less than a preset duration threshold, and determine them as selected second objects.
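The threshold filtering of S105 can be sketched as follows; the threshold values and the per-candidate summary dictionaries are illustrative assumptions.

```python
def select_candidates(stats, min_count=3, min_duration=60.0):
    """stats: list of dicts with 'id', 'count', 'duration' (seconds).
    Keep candidates whose appearance count or duration meets its threshold."""
    return [s["id"] for s in stats
            if s["count"] >= min_count or s["duration"] >= min_duration]
```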
S106: match the second feature information of the selected second objects against the third feature information of each sample object included in a database, and determine the selected second object that matches successfully as the target object.

Wherein, the third feature information includes the gait feature and/or facial features of the sample object.

Optionally, the sample objects included in the database include people with criminal records.

Optionally, referring to Figure 8, the embodiment of the present application also provides another mode of determining the target object:

S801: input the gait feature of each selected second object into a pre-trained gait recognition model to predict the first identity of each selected second object, and obtain the prediction probability of the first identity of each selected second object.

Here, the training samples of the gait recognition model are the identity labels and gait features of multiple sample objects with criminal records. Through the gait recognition model, the prediction probability that the selected second object is each sample object with a criminal record can be obtained; optionally, the identity labels of the sample objects whose prediction probabilities rank among the top few, in descending order of prediction probability, are taken as the first identities of the selected second object.

For example, the first identities of a selected second object and their prediction probabilities are (identity A, prediction probability 1), (identity B, prediction probability 2), (identity C, prediction probability 3), (identity D, prediction probability 4).

S802: input the facial features of each selected second object into a pre-trained face recognition model to predict the second identity of each selected second object, and obtain the prediction probability of the second identity of each selected second object.

Here, the training samples of the face recognition model are the identity labels and facial features of multiple sample objects with criminal records. Through the face recognition model, the prediction probability that the selected second object is each sample object with a criminal record can be obtained; optionally, the identity labels of the sample objects whose prediction probabilities rank among the top few, in descending order of prediction probability, are taken as the second identities of the selected second object.

For example, the second identities of a selected second object and their prediction probabilities are (identity A, prediction probability 5), (identity B, prediction probability 6), (identity D, prediction probability 7), (identity E, prediction probability 8).

S803: according to the preset weight of the first identity and the preset weight of the second identity, calculate the target prediction probability of each different identity corresponding to each selected second object.

For example, if the weight of the first identity is a and the weight of the second identity is b, the target prediction probability that a selected second object is identity A is a × prediction probability 1 + b × prediction probability 5, and so on, yielding the target prediction probabilities of the different identities corresponding to each selected second object.

S804: filter out the target object from the selected second objects according to the target prediction probabilities of the different identities corresponding to each selected second object.

In specific implementation, among the target prediction probabilities of the different identities corresponding to a selected second object, the identity with the highest target prediction probability is determined as the target prediction identity of that selected second object; among all selected second objects, those whose target prediction probability corresponding to the target prediction identity is greater than a preset target prediction probability threshold are determined as target objects.
For example, if the target prediction probability of identity A is the highest for a selected second object, the identity of the selected second object is identity A; and if the target prediction probability of identity A is greater than the preset target prediction probability threshold, the selected second object is determined as a target object.
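Steps S803 and S804 can be sketched as follows; the weights `a` and `b`, the threshold, and the dictionary interface for the per-identity probabilities are illustrative assumptions.

```python
def fuse_and_select(gait_probs, face_probs, a=0.5, b=0.5, threshold=0.6):
    """gait_probs / face_probs: dicts mapping identity -> probability.
    Fuse the two models with weights a and b, then return the best fused
    identity and its probability if it exceeds the threshold, else None."""
    identities = set(gait_probs) | set(face_probs)
    fused = {i: a * gait_probs.get(i, 0.0) + b * face_probs.get(i, 0.0)
             for i in identities}
    best_id = max(fused, key=fused.get)
    return (best_id, fused[best_id]) if fused[best_id] > threshold else None
```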
Optionally, the face recognition model and gait recognition model provided by the embodiments of the present application may be obtained by training a neural network model or a support vector machine.

Optionally, after the target object has been determined, the embodiment of the present application may mark the target object in the surveillance video, so that when the surveillance video is reviewed manually, attention is focused on the marked target object, narrowing the range in which a suspect is sought and improving processing efficiency and accuracy.

According to the target object recognition method under a monitoring scene provided by the embodiments of the present application, the first feature information of a first object is first obtained, and the first object is determined in a surveillance video according to the first feature information; at least one second object within a preset distance range from the first object is determined from the surveillance video, and the second feature information of each second object is extracted respectively; according to the second feature information of each second object, the number of appearances, or the appearance duration, of each second object in the surveillance video within a set time range is obtained; second objects whose number of appearances is not less than a preset number of times, or whose appearance duration is not less than a preset duration threshold, are filtered out and determined as selected second objects; finally, the second feature information of the selected second objects is matched against the third feature information of each sample object included in a database, and the selected second object that matches successfully is determined as the target object. The embodiments of the present application can find suspicious target objects in surveillance video through image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually reviewing surveillance video, thereby improving the processing efficiency and accuracy of surveillance video review.

Based on the same inventive concept, the embodiments of the present application further provide a target object recognition device under a monitoring scene corresponding to the target object recognition method under a monitoring scene. Since the principle by which the device in the embodiments of the present application solves the problem is similar to the above target object recognition method under a monitoring scene, the implementation of the device may refer to the implementation of the method, and repeated content is not described again.
Referring to Figure 9, the target object recognition device under a monitoring scene provided by the embodiment of the present application comprises:

a first object information obtaining module 91, configured to obtain the first feature information of a first object, the first feature information including the gait feature and/or facial features of the first object;

a first object determining module 92, configured to determine the first object in a surveillance video according to the first feature information;

a second object determining module 93, configured to determine, from the surveillance video, at least one second object within a preset distance range from the first object, and to extract the second feature information of each second object respectively, the second feature information including the gait feature and/or facial features of the second object;

a second object appearance information obtaining module 94, configured to obtain, according to the second feature information of each second object, the number of appearances, or the appearance duration, of each second object in the surveillance video within a set time range;

a selected second object determining module 95, configured to filter out second objects whose number of appearances is not less than a preset number of times, or whose appearance duration is not less than a preset duration threshold, and determine them as selected second objects;

a target object determining module 96, configured to match the second feature information of the selected second objects against the third feature information of each sample object included in a database, and determine the selected second object that matches successfully as the target object.
Optionally, the target object recognition device under a monitoring scene provided by the embodiment of the present application further comprises another target object determining module 97.

Optionally, the other target object determining module 97 is specifically configured to determine the target object using the following steps:

inputting the gait feature of each selected second object into a pre-trained gait recognition model to predict the first identity of each selected second object, and obtaining the prediction probability of the first identity of each selected second object; and

inputting the facial features of each selected second object into a pre-trained face recognition model to predict the second identity of each selected second object, and obtaining the prediction probability of the second identity of each selected second object;

calculating, according to the preset weight of the first identity and the preset weight of the second identity, the target prediction probability of each different identity corresponding to each selected second object;

filtering out the target object from the selected second objects according to the target prediction probabilities of the different identities corresponding to each selected second object.
Optionally, the first object determining module 92 is specifically configured to determine the first object in the following manner:

obtaining the facial features of each object in the surveillance video;

detecting, for the facial features of each object, whether the facial features of the object satisfy a face recognition condition;

matching the facial features of objects that satisfy the face recognition condition against the facial features of the first object, and determining the object whose facial features match successfully as the first object;

extracting the gait features of objects whose facial features do not satisfy the face recognition condition, matching the extracted gait features against the gait feature of the first object, and determining the object whose gait feature matches successfully as the first object.

Optionally, the first object determining module 92 is also configured to determine the first object in the following manner:

obtaining the facial features of each object in the surveillance video;

matching the facial features of each object against the facial features of the first object, and determining the object whose facial features match successfully as the first object.

Optionally, the first object determining module 92 is also configured to determine the first object in the following manner:

obtaining the gait feature of each object in the surveillance video;

matching the gait feature of each object against the gait feature of the first object, and determining the object whose gait feature matches successfully as the first object.
Optionally, the second object appearance information obtaining module 94 is specifically configured to obtain the number of appearances of each second object in the surveillance video within the set time range in the following manner:

performing the following operations for each second object:

obtaining the fourth feature information of each object in the surveillance video within the set time range;

matching the fourth feature information against the second feature information of the second object;

determining the object that matches successfully as the second object, and tracking the second object;

when the second object can no longer be tracked in the surveillance video, counting the appearance of the second object as one occurrence, so as to determine the number of appearances of the second object in the surveillance video.

Optionally, the second object appearance information obtaining module 94 is also configured to obtain the number of appearances of each second object in the surveillance video within the set time range in the following manner:

performing the following operations for each second object:

in multiple video segments within the set time range of the surveillance video, obtaining the fourth feature information of each object in each video segment;

matching the fourth feature information against the second feature information of the second object, and determining the object that matches successfully as the second object;

determining the number of appearances of the second object in the surveillance video by counting the number of video segments in which the second object appears.

Optionally, the second object appearance information obtaining module 94 is specifically configured to obtain the appearance duration of each second object in the surveillance video within the set time range in the following manner:

performing the following operations for each second object:

obtaining the fourth feature information of each object in the surveillance video within the set time range;

matching the fourth feature information against the second feature information of the second object;

determining the object that matches successfully as the second object, tracking the second object, and counting the number of consecutive image frames in the surveillance video in which the second object appears;

determining, according to the number of image frames, the appearance duration of the second object in the surveillance video.
When the target object recognition device under a monitoring scene provided by the embodiments of the present application identifies a target object, the first feature information of a first object is obtained, and the first object is determined in a surveillance video according to the first feature information; at least one second object within a preset distance range from the first object is determined from the surveillance video, and the second feature information of each second object is extracted respectively; according to the second feature information of each second object, the number of appearances, or the appearance duration, of each second object in the surveillance video within a set time range is obtained; second objects whose number of appearances is not less than a preset number of times, or whose appearance duration is not less than a preset duration threshold, are filtered out and determined as selected second objects; finally, the second feature information of the selected second objects is matched against the third feature information of each sample object included in a database, and the selected second object that matches successfully is determined as the target object. The embodiments of the present application can find suspicious target objects in surveillance video through image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually reviewing surveillance video, thereby improving the processing efficiency and accuracy of surveillance video review.
The embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when run by a processor, the computer program executes the steps of the above target object recognition method under a monitoring scene.

Specifically, the storage medium may be a general storage medium such as a mobile disk or a hard disk. When the computer program on the storage medium is run, the above target object recognition method under a monitoring scene can be executed, so that suspicious target objects in surveillance video can be found through image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually reviewing surveillance video, and thereby improving the processing efficiency and accuracy of surveillance video review.

Corresponding to the target object recognition method under a monitoring scene in Fig. 1, the embodiments of the present application also provide a computer device. As shown in Figure 10, the device comprises a memory 1000, a processor 2000, and a computer program stored on the memory 1000 and runnable on the processor 2000, wherein the processor 2000, when executing the computer program, implements the steps of the above target object recognition method under a monitoring scene.

Specifically, the memory 1000 and the processor 2000 may be a general memory and a general processor, which are not specifically limited here. When the processor 2000 runs the computer program stored in the memory 1000, the above target object recognition method under a monitoring scene can be executed, so that suspicious target objects in surveillance video can be found through image processing techniques, solving the prior-art problems of low processing efficiency and low accuracy when suspects are screened by manually reviewing surveillance video, and thereby improving the processing efficiency and accuracy of surveillance video review.

The computer program product of the target object recognition method and device under a monitoring scene provided by the embodiments of the present application includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments, and specific implementation may refer to the method embodiments and is not repeated here.
In all examples illustrated and described herein, any specific value should be construed as merely illustrative rather than limiting; therefore, other examples of the exemplary embodiments may have different values.

It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and device described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely schematic.

The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.

If the functions are realized in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Finally, it should be noted that the embodiments described above are only specific embodiments of the present application, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field may, within the technical scope disclosed in the present application, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or make equivalent replacements of some of the technical features; such modifications, variations, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A target object recognition method under a monitoring scene, characterized by comprising:

obtaining first feature information of a first object, the first feature information comprising a gait feature and/or facial features of the first object;

determining the first object in a surveillance video according to the first feature information;

determining, from the surveillance video, at least one second object within a preset distance range from the first object, and extracting second feature information of each second object respectively, the second feature information comprising a gait feature and/or facial features of the second object;

obtaining, according to the second feature information of each second object, a number of appearances, or an appearance duration, of each second object in the surveillance video within a set time range;

filtering out second objects whose number of appearances is not less than a preset number of times, or whose appearance duration is not less than a preset duration threshold, and determining them as selected second objects; and

matching the second feature information of the selected second objects against third feature information of each sample object included in a database, and determining the selected second object that matches successfully as the target object; wherein the third feature information comprises a gait feature and/or facial features of the sample object.
2. The method according to claim 1, wherein determining the first object in the surveillance video according to the first feature information comprises:

obtaining the facial features of each object in the surveillance video;

detecting, for the facial features of each object, whether the facial features of the object satisfy a face recognition condition;

matching the facial features of objects that satisfy the face recognition condition against the facial features of the first object, and determining the object whose facial features match successfully as the first object; and

extracting gait features of objects whose facial features do not satisfy the face recognition condition, matching the extracted gait features against the gait feature of the first object, and determining the object whose gait feature matches successfully as the first object.
3. The method according to claim 1, wherein determining the first object in the surveillance video according to the first feature information comprises:

obtaining the facial features of each object in the surveillance video;

matching the facial features of each object against the facial features of the first object, and determining the object whose facial features match successfully as the first object.
4. The method according to claim 1, wherein determining the first object in the surveillance video according to the first feature information comprises:

obtaining the gait feature of each object in the surveillance video;

matching the gait feature of each object against the gait feature of the first object, and determining the object whose gait feature matches successfully as the first object.
5. The method according to claim 1, wherein obtaining, according to the second feature information of each second object, the number of appearances of each second object in the surveillance video within the set time range comprises:

performing the following operations for each second object:

obtaining fourth feature information of each object in the surveillance video within the set time range;

matching the fourth feature information against the second feature information of the second object;

determining the object that matches successfully as the second object, and tracking the second object; and

when the second object can no longer be tracked in the surveillance video, counting the appearance of the second object as one occurrence, so as to determine the number of appearances of the second object in the surveillance video.
6. The method according to claim 1, wherein obtaining, according to the second feature information of each second object, the number of appearances of each second object in the surveillance video within the set time range further comprises:

performing the following operations for each second object:

in multiple video segments within the set time range of the surveillance video, obtaining fourth feature information of each object in each video segment;

matching the fourth feature information against the second feature information of the second object, and determining the object that matches successfully as the second object; and

determining the number of appearances of the second object in the surveillance video by counting the number of video segments in which the second object appears.
7. The method according to claim 1, wherein obtaining, according to the second feature information of each second object, the appearance duration of each second object in the surveillance video within the set time range comprises:

performing the following operations for each second object:

obtaining fourth feature information of each object in the surveillance video within the set time range;

matching the fourth feature information against the second feature information of the second object;

determining the object that matches successfully as the second object, tracking the second object, and counting the number of consecutive image frames in the surveillance video in which the second object appears; and

determining, according to the number of image frames, the appearance duration of the second object in the surveillance video.
8. The method according to claim 1, further comprising:
inputting the gait feature of each selected second object into a pre-trained gait recognition model to predict a first identity of the selected second object, and obtaining a prediction probability of the first identity of each selected second object; and
inputting the facial feature of each selected second object into a pre-trained face recognition model to predict a second identity of the selected second object, and obtaining a prediction probability of the second identity of each selected second object;
calculating, according to a preset weight of the first identity and a preset weight of the second identity, a target prediction probability for each candidate identity of each selected second object;
screening out the target object from the selected second objects according to the target prediction probabilities of the candidate identities of each selected second object.
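The weighted fusion in this claim reduces to a convex combination of the two models' per-identity probabilities. The sketch below assumes illustrative 0.6/0.4 weights and dictionary-valued model outputs, neither of which is fixed by the claim:

```python
def fuse_identity_probs(gait_probs, face_probs, w_gait=0.6, w_face=0.4):
    """Combine per-identity probabilities from a gait recognition model
    and a face recognition model into target prediction probabilities
    using preset weights."""
    identities = set(gait_probs) | set(face_probs)
    return {i: w_gait * gait_probs.get(i, 0.0) + w_face * face_probs.get(i, 0.0)
            for i in identities}

def best_identity(gait_probs, face_probs, **weights):
    # Screen by the highest fused (target) prediction probability.
    fused = fuse_identity_probs(gait_probs, face_probs, **weights)
    return max(fused, key=fused.get)
```

With this weighting, a confident gait match can outvote a weaker face match, which is the point of fusing the two modalities in low-resolution surveillance footage.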
9. A target object recognition apparatus under a monitoring scene, comprising:
a first object information obtaining module, configured to obtain first feature information of a first object, the first feature information comprising a gait feature and/or a facial feature of the first object;
a first object determining module, configured to determine the first object in a monitor video according to the first feature information;
a second object determining module, configured to determine, from the monitor video, at least one second object within a preset distance from the first object, and to extract second feature information of each second object, the second feature information comprising a gait feature and/or a facial feature of the second object;
a second object appearance information obtaining module, configured to obtain, according to the second feature information of each second object, the number of occurrences, or the appearance duration, of each second object in the monitor video within a set time range;
a selected second object determining module, configured to screen out second objects whose number of occurrences is not less than a preset number or whose appearance duration is not less than a preset duration threshold, and to determine them as selected second objects;
a target object determining module, configured to match the second feature information of each selected second object against third feature information of each sample object contained in a database, and to determine a successfully matched selected second object as the target object.
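The screening module's threshold test can be sketched as follows; the tuple layout and the threshold values are illustrative assumptions, not values from the claim:

```python
def select_second_objects(stats, min_count=3, min_duration=5.0):
    """Screen second objects whose number of occurrences is not less than
    a preset number, or whose appearance duration is not less than a
    preset duration threshold.

    stats: iterable of (object_id, occurrence_count, duration_seconds).
    Returns the ids of the selected second objects.
    """
    return [oid for oid, count, duration in stats
            if count >= min_count or duration >= min_duration]
```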
10. The apparatus according to claim 9, further comprising an alternative target object determining module;
the alternative target object determining module being configured to determine the target object by:
inputting the gait feature of each selected second object into a pre-trained gait recognition model to predict a first identity of the selected second object, and obtaining a prediction probability of the first identity of each selected second object; and
inputting the facial feature of each selected second object into a pre-trained face recognition model to predict a second identity of the selected second object, and obtaining a prediction probability of the second identity of each selected second object;
calculating, according to a preset weight of the first identity and a preset weight of the second identity, a target prediction probability for each candidate identity of each selected second object;
screening out the target object from the selected second objects according to the target prediction probabilities of the candidate identities of each selected second object.
CN201811223348.1A 2018-10-19 2018-10-19 Target object recognition method and apparatus under a monitoring scene Pending CN109190601A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811223348.1A CN109190601A (en) 2018-10-19 2018-10-19 Target object recognition method and apparatus under a monitoring scene

Publications (1)

Publication Number Publication Date
CN109190601A true CN109190601A (en) 2019-01-11

Family

ID=64946324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811223348.1A Pending CN109190601A (en) 2018-10-19 2018-10-19 Target object recognition method and apparatus under a monitoring scene

Country Status (1)

Country Link
CN (1) CN109190601A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794481A (en) * 2009-02-04 2010-08-04 深圳市先进智能技术研究所 ATM (Automatic teller machine) self-service bank monitoring system and method
US20110142282A1 (en) * 2009-12-14 2011-06-16 Indian Institute Of Technology Bombay Visual object tracking with scale and orientation adaptation
CN103246869A (en) * 2013-04-19 2013-08-14 福建亿榕信息技术有限公司 Crime monitoring method based on face recognition technology and behavior and sound recognition
CN103680147A (en) * 2013-12-15 2014-03-26 天津恒远达科技有限公司 Automobile tracing recognition device
CN104539874A (en) * 2014-06-17 2015-04-22 武汉理工大学 Human body mixed monitoring system and method fusing pyroelectric sensing with cameras
US20150286872A1 (en) * 2011-06-20 2015-10-08 University Of Southern California Visual tracking in video images in unconstrained environments by exploiting on-the-fly context using supporters and distracters
CN105933650A (en) * 2016-04-25 2016-09-07 北京旷视科技有限公司 Video monitoring system and method
CN106778655A (en) * 2016-12-27 2017-05-31 华侨大学 Tailgating-entry detection method based on the human skeleton

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
中国人工智能学会 (Chinese Association for Artificial Intelligence), ed.: "Progress of Artificial Intelligence in China (2009)", 31 December 2009, Beijing University of Posts and Telecommunications Press *
猛犸: "未来在现实的第几层" ("How Many Layers of Reality Away Is the Future"), 31 July 2011, Beijing: Printing Industry Press *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871783A (en) * 2019-01-28 2019-06-11 武汉恩特拉信息技术有限公司 Monitoring method and monitoring system based on video images
CN110070687A (en) * 2019-03-18 2019-07-30 苏州凸现信息科技有限公司 Intelligent security monitoring system based on motion analysis and working method thereof
CN110278414A (en) * 2019-06-28 2019-09-24 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
WO2021000644A1 (en) * 2019-07-04 2021-01-07 深圳壹账通智能科技有限公司 Video processing method and apparatus, computer device and storage medium
CN110458052A (en) * 2019-07-25 2019-11-15 Oppo广东移动通信有限公司 Target object identification method, device, equipment and medium based on augmented reality
CN110458052B (en) * 2019-07-25 2023-04-07 Oppo广东移动通信有限公司 Target object identification method, device, equipment and medium based on augmented reality
CN112347808A (en) * 2019-08-07 2021-02-09 中国电信股份有限公司 Method, device and system for identifying characteristic behaviors of target object
CN110728815A (en) * 2019-10-25 2020-01-24 深圳市商汤科技有限公司 Early warning method and device based on video analysis, electronic equipment and storage medium
CN110807393A (en) * 2019-10-25 2020-02-18 深圳市商汤科技有限公司 Early warning method and device based on video analysis, electronic equipment and storage medium
CN112989893A (en) * 2019-12-17 2021-06-18 广州慧睿思通科技股份有限公司 Monitoring method and device
CN111354013A (en) * 2020-03-13 2020-06-30 北京字节跳动网络技术有限公司 Target detection method and device, equipment and storage medium
CN111767782A (en) * 2020-04-15 2020-10-13 上海摩象网络科技有限公司 Tracking target determining method and device and handheld camera
WO2021208252A1 (en) * 2020-04-15 2021-10-21 上海摩象网络科技有限公司 Tracking target determination method, device, and hand-held camera
CN111767782B (en) * 2020-04-15 2022-01-11 上海摩象网络科技有限公司 Tracking target determining method and device and handheld camera
CN111460182A (en) * 2020-04-17 2020-07-28 支付宝(杭州)信息技术有限公司 Multimedia evaluation method and device
CN112598704A (en) * 2020-12-15 2021-04-02 中标慧安信息技术股份有限公司 Target positioning and tracking method and system for public place
CN115116009A (en) * 2022-08-26 2022-09-27 中国工业互联网研究院 Security management method, device, equipment and storage medium based on industrial internet

Similar Documents

Publication Publication Date Title
CN109190601A (en) Target object recognition method and apparatus under a monitoring scene
US11188783B2 (en) Reverse neural network for object re-identification
US10402627B2 (en) Method and apparatus for determining identity identifier of face in face image, and terminal
Tran et al. Anomaly detection using a convolutional winner-take-all autoencoder
Topkaya et al. Counting people by clustering person detector outputs
US9008365B2 (en) Systems and methods for pedestrian detection in images
CN108229335A (en) Associated face recognition method and apparatus, electronic device, storage medium, and program
CN109446936A (en) Identity recognition method and apparatus for a monitoring scene
WO2019114145A1 (en) Head count detection method and device in surveillance video
CN110569731A (en) face recognition method and device and electronic equipment
CN108875481B (en) Method, device, system and storage medium for pedestrian detection
CN110287889A (en) Identity recognition method and apparatus
CN104268586A (en) Multi-visual-angle action recognition method
CN105893963B (en) Method for screening the most recognizable frame of a single pedestrian target in a video
CN109359620A (en) Method and device for identifying suspicious object
CN104966305A (en) Foreground detection method based on motion vector division
CN110322472A (en) Multi-object tracking method and terminal device
CN110020618A (en) Crowd abnormal-behavior monitoring method applicable to multiple shooting angles
CN109508645A (en) Identity recognition method and apparatus under a monitoring scene
CN111814690A (en) Target re-identification method and device and computer readable storage medium
Hoshen et al. Egocentric video biometrics
CN111950507B (en) Data processing and model training method, device, equipment and medium
CN113469135A (en) Method and device for determining object identity information, storage medium and electronic device
CN114764895A (en) Abnormal behavior detection device and method
KR101766467B1 (en) Alarming apparatus and method for event occurrence, and providing method of event occurrence determination model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210126

Address after: 315000 9-3, building 91, 16 Buzheng lane, Haishu District, Ningbo City, Zhejiang Province

Applicant after: Yinhe shuidi Technology (Ningbo) Co.,Ltd.

Address before: 101500 Room 501-1753, Office Building, Development Zone, No. 8, Xingsheng South Road, Economic Development Zone, Miyun District, Beijing (Economic Development Zone Centralized Office Area)

Applicant before: Yinhe waterdrop Technology (Beijing) Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190111
