CN109858308A - Video retrieval device, video retrieval method and storage medium - Google Patents
Video retrieval device, video retrieval method and storage medium
- Publication number
- CN109858308A CN109858308A CN201711236903.XA CN201711236903A CN109858308A CN 109858308 A CN109858308 A CN 109858308A CN 201711236903 A CN201711236903 A CN 201711236903A CN 109858308 A CN109858308 A CN 109858308A
- Authority
- CN
- China
- Prior art keywords
- person
- target person
- weight
- candidate
- candidate person
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a video retrieval device, a video retrieval method, and a storage medium with high accuracy, belonging to the field of video retrieval. The device comprises: a person acquisition unit (901, 902) that obtains images of a target person and of candidate persons; a person segmentation unit (903) that segments the target person and the candidate persons at identical segmentation positions; a feature extraction unit (904) that extracts features from each segmented part of the target person and of each candidate person; a local similarity calculation unit (905) that computes feature distances between the target person and each candidate person part by part to obtain local similarities; a weight calculation unit (906) that computes a weight for each segmented part of each candidate person; and a similarity result output unit (1001) that outputs the result of the similarity judgment between the target person and the candidate persons based on the weights computed by the weight calculation unit and the local similarities computed by the local similarity calculation unit.
Description
Technical field
The present invention relates to the field of video retrieval, in particular to pedestrian retrieval, and especially to techniques for retrieving a target person from video captured by a camera network.
Background technique
In recent years, pedestrian retrieval (also referred to as pedestrian re-identification) has become a hot research topic that is both valuable and highly challenging.
Driven by the needs of society, monitoring systems built from large-scale camera networks have been deployed in more and more public spaces such as airports, shopping centers, banks, stations, harbours, campuses, squares, office buildings, and residential areas. These camera networks span large geographical areas with non-overlapping fields of view and provide enormous volumes of video data. By analyzing such video data with a video retrieval device and retrieving a specific target person from it, for example a previously designated person captured by some camera, or a person in an image supplied to the video retrieval device, one can learn at which moments the target person appeared within the coverage of which cameras, and thereby grasp the target person's course of action, current location, and so on.
In essence, pedestrian retrieval is an image matching problem. Because of these characteristics, and given today's complex domestic and international environment, it has enormous application prospects in criminal investigation, surveillance, and security, from people's daily lives to national defense, and improvements to the technology accordingly receive great attention.
Existing video retrieval techniques typically perform pedestrian retrieval based on face recognition. However, because of camera mounting angles and positions, occlusion by objects, lighting conditions, and so on, a pedestrian's face often cannot be recognized or is not captured by the camera at all, so effective pedestrian retrieval cannot be carried out.
Patent document 1 discloses a pedestrian retrieval method based on bidirectional ranking, intended to improve the accuracy of matching the same pedestrian across multiple cameras. Specifically, in the technique of patent document 1, the gallery of pedestrians to be tested is first ranked by feature extraction and metric learning on the query pedestrian; then each pedestrian in the gallery is in turn treated as a query, and the bidirectional content similarity and neighbour similarity between the query pedestrian and each gallery pedestrian are computed; finally, the gallery pedestrians are re-ranked according to the bidirectional content similarity and the neighbour similarity. According to patent document 1, introducing the idea of bidirectional matching and re-ranking the gallery by content and neighbour similarity yields more accurate pedestrian retrieval results and is more robust to appearance changes caused by environmental variation.
In addition, patent document 2 addresses the problem that traditional video-based pedestrian retrieval relies too heavily on classifiers to distinguish pedestrian from non-pedestrian moving foregrounds before feature matching, and proposes a method and system for pedestrian retrieval based on the Internet of Things. The method disclosed in patent document 2 comprises: acquiring the video streams captured by cameras in associated scenes; extracting candidate stripe lines from each video stream with an edge detection algorithm and approximating qualified candidate stripe lines with horizontal or vertical line segments; judging the stripe line density to exclude non-pedestrian interference and obtain framed pedestrian positions; extracting the features within the striped-clothing pedestrian frames and building a feature database; and, according to the spatial relationship of the cameras in the associated scenes, selecting the position and features of the striped-clothing pedestrian to be retrieved in the current camera and matching them against the feature databases of the preceding cameras within a period of time, to obtain the pedestrian retrieval result. According to patent document 2, pedestrians in striped clothing can be identified quickly and efficiently compared with the prior art, better meeting the needs of public security and traffic law enforcement.
However, patent document 1 matches the query pedestrian p against each pedestrian g_i in the gallery G = {g_i | i = 1, ..., n} as a whole, and does not take into account that, owing to shooting conditions and other factors, different parts of a pedestrian have different importance for matching. Moreover, patent document 2 is directed only at the retrieval of pedestrians in striped clothing, so its application scenarios are very limited.
Existing technical literature
Patent document 1: CN 103325122 A
Patent document 2: CN 102663359 A
Summary of the invention
Technical problems to be solved by the invention
Specifically, in the technique of patent document 1, besides measuring the similarity between the query pedestrian p and each gallery pedestrian g_i as before and sorting the similarities in descending order to obtain the forward content similarity, a further gallery is constructed that contains the query pedestrian p and all pedestrians in the gallery G except g_i; the similarity between g_i and each pedestrian in this further gallery is then measured and sorted in ascending order to obtain the reverse similarity. However, when patent document 1 measures similarity, a distance function is learned with an existing metric learning method and similarity is measured with the obtained distance metric function, so the similarity is a whole-body matching result between the query pedestrian and the gallery pedestrian, and the fact that each part of the query pedestrian's body has different importance for retrieval is not taken into account.
Patent document 2 performs pedestrian retrieval using the association information (the spatial relationship of the cameras) monitored by a multi-camera Internet of Things, but since it targets the detection of pedestrians in striped clothing, its application prospects are limited.
Because the shooting environments and characteristics of the video and images captured by each camera in a camera network differ, a target person appearing in a video may, through occlusion or chance, be only partially captured, or some parts may not be captured clearly. In such cases the prior art may fail to detect the target person correctly, leaving room for further improvement in accuracy and efficiency.
Accordingly, an object of the present invention is to provide a video retrieval device and a video retrieval method that improve the accuracy of video retrieval by taking into account that, owing to shooting conditions and other factors, different parts of a person's body have different importance for retrieval.
Means for solving the problems
A first aspect of the present invention provides a video retrieval device that extracts at least one of a target person and candidate persons from video data captured by cameras and measures the similarity between the one and the other, the device comprising: a target person acquisition unit that obtains an image of the target person; a candidate person acquisition unit that obtains images of the candidate persons; a person segmentation unit that segments the target person and the candidate persons at identical segmentation positions; a feature extraction unit that extracts features from each segmented part of the target person and of each candidate person; a local similarity calculation unit that computes feature distances between the target person and each candidate person part by part to obtain local similarities; a weight calculation unit that computes a weight for each segmented part of each candidate person; and a similarity result output unit that outputs the result of the similarity judgment between the target person and the candidate persons based on the weights computed by the weight calculation unit and the local similarities computed by the local similarity calculation unit.
With this video retrieval device, since the target person and the candidate persons are divided into multiple segmented parts for feature matching and similarity is measured part by part, the fact that different parts of a person have different importance for similarity measurement is taken into account, and the accuracy of similarity measurement, and hence of person retrieval, can be improved over the prior art.
The video retrieval device of a second aspect of the present invention is characterized in that, in the video retrieval device of the first aspect, the weight calculation unit computes at least one of a first weight, a second weight, and a third weight to determine the weight, wherein the first weight is determined based on where on the human body the segmented part is located, the second weight is determined based on the clarity of the images of the segmented part of the target person and of the candidate person, and the third weight is determined based on the saliency of the images of the segmented part of the target person and of the candidate person. The weight calculation unit trains, for each segmented part, a model for judging clarity on a sample set by a machine learning method, inputs the images of each segmented part of the target person and of the candidate persons into the corresponding model, and computes the second weight from the model's output.
With this video retrieval device, since at least one of the weight based on the location of the segmented part, the weight based on clarity, and the weight based on salient features is considered, the weight of each segmented part can be reflected more accurately, and the accuracy of similarity measurement, and hence of person retrieval, can be improved further over the prior art.
A third aspect of the present invention provides a video retrieval method that extracts at least one of a target person and candidate persons from video data captured by cameras and measures the similarity between the one and the other, the method comprising: a target person acquisition step of obtaining an image of the target person; a candidate person acquisition step of obtaining images of the candidate persons; a person segmentation step of segmenting the target person and the candidate persons at identical segmentation positions; a feature extraction step of extracting features from each segmented part of the target person and of each candidate person; a local similarity calculation step of computing feature distances between the target person and each candidate person part by part to obtain local similarities; a weight calculation step of computing a weight for each segmented part of each candidate person; and a similarity result output step of outputting the result of the similarity judgment between the target person and the candidate persons based on the weights computed in the weight calculation step and the local similarities computed in the local similarity calculation step.
With this video retrieval method, as with the video retrieval device of the first aspect, since the target person and the candidate persons are divided into multiple segmented parts for feature matching and similarity is measured part by part, the fact that different parts of a person have different importance for similarity measurement is taken into account, and the accuracy of similarity measurement, and hence of person retrieval, can be improved over the prior art.
Invention effect
According to the present invention, a video retrieval device and a video retrieval method can be provided that improve the accuracy of video retrieval by taking into account that different parts of a person's body have different importance for retrieval.
Brief description of the drawings
Fig. 1 is a schematic diagram showing a target person and candidate persons in the present invention.
Fig. 2 is a schematic diagram showing person segmentation.
Fig. 3 is a schematic diagram showing feature extraction for the segmented parts.
Fig. 4 is a schematic diagram showing the calculation of local similarities for the segmented parts.
Fig. 5 is a diagram showing an example of a method for obtaining image clarity.
Fig. 6 is a flow chart of the video retrieval method of the first embodiment.
Fig. 7 is a flow chart of the video retrieval method of the second embodiment.
Fig. 8 is a schematic diagram showing a video retrieval system to which the video retrieval method of the present invention is applied.
Fig. 9 is a block diagram showing the video retrieval device of the third embodiment.
Fig. 10 is a block diagram showing a variation of the video retrieval device.
Specific embodiment
Specific embodiments of the present invention are described below with reference to the accompanying drawings.
In the following embodiments, when the numbers of elements and the like (including counts, numerical values, amounts, ranges, etc.) are referred to, they are not limited to the specific numbers mentioned, and may be above or below those numbers, except where explicitly stated otherwise or where the number is obviously limited in principle to the specific value.
In addition, in the following embodiments, the structural elements (including step elements and the like) are not necessarily essential, except where explicitly stated otherwise or where they are obviously essential in principle; elements not explicitly mentioned in the description may also be included without this needing to be stated.
Similarly, in the following embodiments, when the shapes, positional relationships, and the like of the structural elements are referred to, shapes and the like that are substantially approximate or similar are included, except where explicitly stated otherwise or where this is obviously infeasible in principle. The same applies to the numerical values and ranges mentioned above.
The present invention is applicable to any scene in which a match (similarity) between a target person and candidate persons is to be judged. Two examples of applicable scenes follow. The first scene is one in which, for one or more specific persons extracted from video data, it is judged whether each is a person in an existing database and, if so, which person; here the specific person is the target person and the persons in the database are the candidate persons. The database may be, for example, a person database held by a specific organization, or a database built from multiple persons detected and extracted from video data. The second scene is one in which a surveillance target is retrieved from the video data captured by a camera network made up of a large number of cameras; here the surveillance target is the target person, and the candidate persons are the individual persons detected and extracted from the video data. The target person, though specified by the user, may likewise be extracted from the video data, for example from some frame image, or may be obtained by other means, for example extracted from existing image data such as a photograph.
First, the video retrieval method of the present invention is described with reference to the first embodiment shown in Figs. 1 to 6.
Fig. 1 is a schematic diagram showing the target person T and the candidate persons C in the first embodiment; it is assumed that there is one target person T and n candidate persons C = {C_i | i = 1, ..., n}. The application scenario assumed in the first embodiment is the first scene described above: for example, each person entering a certain closed environment may be detected and extracted from the video data captured by the entrance camera of that environment to build a candidate person database, the target person may be a person captured by some camera inside the environment, and the present embodiment is used to judge which person in the database the captured person is.
However, as will be explained in the second embodiment, the present invention only needs to be able to measure the similarity between a target person and candidate persons, so there is essentially no limit on the respective numbers of target persons and candidate persons.
Where the target person and/or the candidate persons are detected and extracted from video data or image data, any existing pedestrian detection technique may be used. For example, a classifier may be trained for the environment in which the camera is installed using HOG features and an SVM classifier to perform pedestrian detection and obtain the image of each person shown in Fig. 1.
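Purely as an illustration of the HOG-plus-linear-SVM idea just mentioned, the sketch below computes a drastically simplified single-histogram descriptor (real HOG uses cells, blocks, and block normalization) and scores it with a linear decision function; the weights `w` and bias `b` are placeholders for parameters that would come from training on the camera's own environment.

```python
import numpy as np

def hog_feature(patch, bins=9):
    """Tiny HOG-style descriptor: one gradient-orientation histogram over
    the whole patch, weighted by gradient magnitude and L1-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation, 0..pi
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

def svm_score(feature, w, b):
    """Linear SVM decision value; positive means 'pedestrian'."""
    return float(feature @ w + b)

rng = np.random.default_rng(0)
patch = rng.random((64, 32))       # stand-in for a cropped camera window
f = hog_feature(patch)
w = np.ones_like(f); b = -0.5      # placeholder trained parameters
is_pedestrian = svm_score(f, w, b) > 0
```

In practice one would use a full HOG implementation and an SVM actually trained on positive/negative windows from the installation environment, as the text suggests.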
As described above, in the prior art, the target person and each candidate person are usually treated as wholes for feature matching, and the similarity between the two is measured. There are various similarity measures, generally comprising two steps: image feature representation and image distance metric. For example, as described in patent document 1, a robust visual feature may be constructed from color features and texture features and similarity then measured with a standard distance function (such as the Euclidean distance); alternatively, a Mahalanobis distance function may be learned by metric learning, and a more accurate distance measurement performed with the distance metric matrix obtained by training.
However, owing to limitations of camera position, angle, illumination, and so on, some parts of the target person may be captured relatively blurrily, and some parts may be occluded and not captured at all; the prior art does not take such problems into account when measuring similarity.
In contrast, as shown in Fig. 2, the present invention performs person segmentation on the target person T and the candidate persons C of Fig. 1, dividing every person into m parts at identical segmentation positions; for example, the target person T is divided into {T_j | j = 1, ..., m} and each candidate person C_i into {C_{i,j} | j = 1, ..., m}. Here, for convenience, the figure only shows the state in which each person is divided into three parts, the head, the torso, and the legs, but the present invention places no limit on how many parts a person is divided into or at which positions the cuts are made; the segmentation may be carried out arbitrarily according to the characteristics of the target person, such as clarity, appearance, or clothing and accessories.
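A minimal sketch of such a fixed-position segmentation follows, assuming m = 3 horizontal bands with hypothetical head/torso/leg cut ratios; the patent leaves both the number of parts and the cut positions open, so these ratios are illustrative only.

```python
import numpy as np

# Assumed vertical cut ratios for head / torso / legs (m = 3).
PART_RATIOS = [(0.00, 0.20), (0.20, 0.55), (0.55, 1.00)]

def split_person(img):
    """Split a person crop (H x W [x C]) into m horizontal bands
    {T_j | j = 1..m} at identical cut positions for every person."""
    h = img.shape[0]
    return [img[int(h * a):int(h * b)] for a, b in PART_RATIOS]

person = np.zeros((100, 40, 3), dtype=np.uint8)  # dummy 100x40 crop
parts = split_person(person)                     # head, torso, legs
```

Because the same `PART_RATIOS` are applied to every person, part j of the target is always compared against part j of a candidate, as the method requires.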
Unlike the prior art, the video retrieval method of the present embodiment does not treat the target person and the candidate persons as wholes for feature matching. Instead, it performs person segmentation as shown in Fig. 2, carries out feature matching part by part to measure local similarities, ranks the candidate persons at each segmented part by local similarity, and then combines the ranks based on the weight of each segmented part to determine the overall similarity between the target person and each candidate person, producing the final ranking.
Here, the similarity measurement for each segmented part may use existing similarity measurement methods. Fig. 3 is a schematic diagram showing feature extraction for the segmented parts obtained in Fig. 2. Image features may be obtained from color features, texture features, and local features. Commonly used color features include the color histogram, UV component color features, and the dominant color spectrum histogram; texture features mainly refer to the repeated stripes, grids, and the like on the target's clothing, with the Gabor filter and the OGD filter in common use; a common local feature is the HOG feature. In this way, as shown in Fig. 3, features can be extracted for each segmented part of the target person T and of each candidate person C: for example, the feature extracted from segmented part T_j of the target person T is FT_j, and the feature extracted from segmented part C_{i,j} of each candidate person C_i is FC_{i,j}. Here, since the candidate persons C_i are the persons in the person database, their features can be organized into a feature database DBC by segmented part; as shown in Fig. 3, the feature database of the candidate persons is then expressed as {DBC_j | j = 1, ..., m}.
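As one illustrative choice among the color features listed above, the per-part feature database {DBC_j} could be organized as follows, using a simple per-channel color histogram; the layout and bin count are assumptions, not the patent's specification.

```python
import numpy as np

def color_histogram(part, bins=8):
    """Per-channel color histogram, L1-normalized; one simple stand-in
    for the color features the text mentions (color histogram, UV, ...)."""
    chans = [np.histogram(part[..., c], bins=bins, range=(0, 256))[0]
             for c in range(part.shape[-1])]
    f = np.concatenate(chans).astype(float)
    return f / f.sum()

def build_dbc(candidate_parts):
    """DBC[j][i] is the feature FC_{i,j} of candidate C_i at part j."""
    m = len(candidate_parts[0])
    return [[color_histogram(parts[j]) for parts in candidate_parts]
            for j in range(m)]

rng = np.random.default_rng(1)
cands = [[rng.integers(0, 256, (30, 40, 3)) for _ in range(3)]  # m=3 parts
         for _ in range(2)]                                     # n=2 people
DBC = build_dbc(cands)  # {DBC_j | j = 1..m}, indexed DBC[j][i]
```

Grouping the database by part index j keeps all features that will be compared against a given target part FT_j contiguous, which matches how the matching proceeds part by part.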
Next, for the feature FT_j of each segmented part T_j of the target person T, the distance between it and the feature FC_{i,j} of the corresponding segmented part C_{i,j} of each candidate person C_i is measured; the feature distance is computed to obtain the local similarity. The present invention places no restriction on the feature distance measure, and any existing means may be used. For example, the feature distance may be computed with a standard distance function (such as the Euclidean distance) and converted into a similarity score; alternatively, relevant machine learning techniques (supervised or unsupervised) may be used to learn an optimal metric model that maximizes matching precision, which is not elaborated here.
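For example, using the standard Euclidean distance mentioned above, one simple (assumed) way to convert a feature distance into a similarity score is:

```python
import numpy as np

def local_similarity(ft, fc):
    """Euclidean feature distance converted to a similarity in (0, 1];
    the patent allows any distance (standard or learned metric), so the
    1/(1+d) conversion here is just one convenient choice."""
    d = float(np.linalg.norm(ft - fc))
    return 1.0 / (1.0 + d)

ft = np.array([0.5, 0.5, 0.0])   # FT_j: target feature at part j
fc = np.array([0.5, 0.0, 0.5])   # FC_{i,j}: candidate feature at part j
s = local_similarity(ft, fc)     # local similarity S_{i,j}
```

Identical features give a similarity of exactly 1, and the score decays monotonically as the feature distance grows, which is all the subsequent ranking step needs.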
As a result, as shown in Fig. 4, for the feature FC_{i,j} of each segmented part C_{i,j} of each candidate person C_i, the local similarity S_{i,j} between it and the feature FT_j of the corresponding segmented part T_j of the target person T is obtained. The similarities are then ranked part by part, giving each local similarity S_{i,j} a partial rank value R_{i,j}, where 1 <= R_{i,j} <= n and the rank value is assigned so that the higher the similarity, the larger the rank value. For example, if the local similarities {S_{i,j} | i = 1, ..., n} of the j-th segmented part of the candidate persons C_i satisfy S_{2,j} > S_{5,j} > ... > S_{1,j}, then among the partial rank values {R_{i,j} | i = 1, ..., n}, R_{2,j} = n, R_{5,j} = n - 1, and R_{1,j} = 1.
Next, for each candidate person C_i, part by part, the partial rank values R_{i,j} are multiplied by the corresponding weights W_{i,j} and summed; the overall rank value R_i of each candidate person C_i is obtained by formula 1, i.e. R_i = sum over j of W_{i,j} * R_{i,j}.
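The part-wise ranking and the weighted sum of formula 1 can be sketched as follows; uniform weights W_{i,j} are used purely for illustration, whereas in the invention they come from the weight calculation described below.

```python
import numpy as np

def partial_ranks(S):
    """S: (n, m) local similarities S[i, j] between candidate C_i and
    target part T_j. Returns R with R[i, j] in 1..n, where the most
    similar candidate at part j gets the largest rank value n."""
    n, m = S.shape
    R = np.empty_like(S, dtype=int)
    for j in range(m):
        order = np.argsort(S[:, j])        # ascending similarity
        R[order, j] = np.arange(1, n + 1)  # least similar -> 1, most -> n
    return R

def overall_ranks(R, W):
    """Formula 1: R_i = sum_j W[i, j] * R[i, j]."""
    return (W * R).sum(axis=1)

S = np.array([[0.2, 0.9],
              [0.8, 0.7],
              [0.5, 0.1]])          # n=3 candidates, m=2 parts
W = np.full_like(S, 0.5)            # uniform weights, illustration only
R = partial_ranks(S)
scores = overall_ranks(R, W)
best = int(np.argmax(scores))       # candidate closest to the target
```

With these numbers, candidate index 1 ranks highest overall even though candidate 0 wins at the second part, showing how the per-part ranks are traded off through the weights.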
The weight W_{i,j} set for each partial rank value R_{i,j} is described in detail below. It should be noted that, although the weight W_{i,j} is described here as being set for each partial rank value R_{i,j}, as will be described later the weight W_{i,j} is also set for each local similarity S_{i,j}; in fact, W_{i,j} should be understood as a value set for each segmented part {C_{i,j} | i = 1, ..., n; j = 1, ..., m} of the candidate persons, or as a value set for each combination of the target person and a candidate person at each segmented part.
As described above, the present invention differs from the prior art in considering that different parts of the target person have different importance for person retrieval, and this consideration is embodied in the weight W_{i,j}.
The weight W_{i,j} can be set in at least the following three ways; each way may determine the weight on its own, or they may be considered together and combined, and other ways may of course also be incorporated.
(1) Weight WD_{i,j} based on the location of the segmented part
It is well known that many characteristics, such as a person's clothing, height, posture, bearing, hairstyle, and face, can distinguish one person from another. However, many of these features are more or less subject to external interference from clothing, light, weather, and the like, while some features, such as a person's face and posture, are relatively robust to such interference. Therefore, the weight WD_{i,j} can be determined according to where on the human body each segmented part T_j is located. The present invention places no limit on which body part is assigned the greater weight, which can be set according to the actual situation. Since WD_{i,j} is determined based on the location of the segmented part, the weight is the same for every candidate person, so it can also be abbreviated as WD_j.
For example, as shown in Fig. 2, when the target person is divided into three parts, the head T_1, the torso T_2, and the legs T_3, the weights may be assigned so that WD_1 > WD_2 > WD_3.
(2) Weight WI_{i,j} based on clarity
As described above, owing to the influence of camera position, angle, light, weather, and so on, some parts of a person's image detected and extracted from video data may not be captured clearly. For a part that is not captured clearly, its contribution to person matching can be considered less reliable; therefore, a different weight WI_{i,j} may be assigned to each segmented part according to the clarity of the image of that part. Since the clear parts of the target person and of each candidate person may differ, the magnitude of the weight WI_{i,j} is related both to the clarity of each segmented part of the target person and to the clarity of the corresponding segmented part of the candidate person. For simplicity, however, only the clarity of each segmented part of the target person may be considered, in which case the weight WI_{i,j} can be abbreviated as WI_j.
The method for judging clarity is described below.
Many methods for judging the clarity of an image exist in the prior art, and the present invention may apply any judging method without particular limitation. Here, judging the degree of image clarity for each segmented part by a machine learning method is taken as an example.
As shown in Fig. 5, first, a sample set SP is prepared; the sample set SP may be detected and extracted from video data in the same way as the target person T or the candidate persons. Next, the person images in the sample set are segmented with the same method as in Fig. 2 to obtain segmentation sample sets {SP_j | j = 1, ..., m}, and the clarity of each segmented part of each image is judged manually, assigning each a clarity value. Then, for each segmented part, taking the images of the corresponding segmentation sample set as input and the clarity values as output, a model {M_j | j = 1, ..., m} is trained for obtaining the clarity of that part.
Thus, by inputting the image of each segmented part of the target person T and of each candidate person C_i into the corresponding model, the clarity value of that segmented part can be obtained. For example, for each weight WI_{i,j}, the images of segmented part T_j of the target person T and of segmented part C_{i,j} of the candidate person C_i are input into the corresponding model to obtain the respective clarity values IT_j and IC_{i,j}, and the clarity values of both sides are considered together to determine the weight WI_{i,j}.
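A toy version of such a per-part clarity model might look like the following; the single sharpness cue and the linear least-squares fit are assumptions standing in for whatever features and learning method an implementation would actually use, and the clarity labels would in practice come from the manual judgment described above.

```python
import numpy as np

def sharpness(patch):
    """Scalar sharpness cue: variance of a Laplacian-like filter response
    (near zero for flat/blurry patches, larger for detailed ones)."""
    lap = (np.roll(patch, 1, 0) + np.roll(patch, -1, 0)
           + np.roll(patch, 1, 1) + np.roll(patch, -1, 1) - 4 * patch)
    return float(lap.var())

def train_clarity_model(samples, labels):
    """Least-squares fit clarity = a * sharpness + b on the hand-labelled
    segmentation samples SP_j; one such model M_j is trained per part."""
    x = np.array([sharpness(s) for s in samples])
    A = np.stack([x, np.ones_like(x)], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.array(labels, float), rcond=None)
    return coef  # (a, b)

def predict_clarity(model, patch):
    a, b = model
    return a * sharpness(patch) + b

rng = np.random.default_rng(2)
blurry = [np.full((16, 16), 0.5) + 0.01 * rng.random((16, 16))
          for _ in range(4)]
sharp = [rng.random((16, 16)) for _ in range(4)]
M_j = train_clarity_model(blurry + sharp, [0, 0, 0, 0, 1, 1, 1, 1])
IT_j = predict_clarity(M_j, rng.random((16, 16)))  # clarity of target part
```

The predicted clarity values IT_j and IC_{i,j} from such models can then be combined, for example by taking their minimum or product, to set the weight WI_{i,j}; the combination rule is itself left open by the text.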
(3) Weight WA_{i,j} based on salient features
In some cases, the target person or a candidate person may have certain salient features, such as a conspicuous feature of the clothing, or a backpack, bag, suitcase, hat, jewellery, or the like that clearly distinguishes the person from others. In that case, different weights WA_{i,j} may be assigned to different segmented parts based on such salient features. The weight WA_{i,j} may be set based only on the features of each candidate person, or the features of each candidate person and those of the target person may be considered together; the present invention places no limit on this. In the first embodiment, however, considering that the candidate persons are limited in number and known, for convenience only the salient features of each candidate person may be considered, and the weight WA_{i,j} set based on the saliency of each segmented part of each candidate person.
As described above, the weight Wi,j may be determined by any one of the above WDi,j, WIi,j and WAi,j, or by comprehensively considering any number of them. Of course, the size of the weight may also be decided by the user; the present invention is not limited in this respect.
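A minimal sketch of how the three weights might be combined, under the assumption (not fixed by the text) that whichever weights are supplied are simply averaged and that a position with no supplied weight is left unweighted:

```python
# Combine the per-position weights WD, WI, WA into W for one segmentation
# position. The patent allows any one of them alone, any combination, or a
# user-chosen value; averaging is one hypothetical combination rule.
def combine_weights(wd=None, wi=None, wa=None):
    parts = [w for w in (wd, wi, wa) if w is not None]
    if not parts:
        return 1.0              # nothing supplied: position stays unweighted
    return sum(parts) / len(parts)
```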
Then, after the overall ranking value Ri of each candidate person Ci has been obtained by the above Formula 1, the overall ranking values Ri are sorted. As shown in Fig. 4, if the overall ranking values Ri satisfy, for example, Rx > Rn ... > R3, the candidate person Cx corresponding to the largest value Rx is taken as the candidate person closest to the target person T, and the target person T is considered most likely to be the candidate person Cx.
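The weighted summation of Formula 1 and the selection of the closest candidate can be sketched as follows; the list-of-lists inputs and function names are illustrative, not from the patent:

```python
# Formula 1: the overall ranking value of candidate C_i is the weighted sum
# of its per-position local ranking values, R_i = sum_j W_{i,j} * R_{i,j}.
def overall_ranking(local_ranks, weights):
    # local_ranks[i][j], weights[i][j]: candidate i, segmentation position j
    return [sum(w * r for w, r in zip(ws, rs))
            for ws, rs in zip(weights, local_ranks)]

# The candidate with the largest R_i is taken as closest to the target T.
def best_candidate(local_ranks, weights):
    scores = overall_ranking(local_ranks, weights)
    return max(range(len(scores)), key=scores.__getitem__)
```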
The flow of the video retrieval method of the first embodiment is shown in Fig. 6.
As shown in Fig. 6, in step S601, the image of the target person T is detected and extracted from the video data, and the images of the candidate persons C are acquired at the same time. As described above, the candidate persons C may come from an existing person database or from a newly constructed candidate person database.
Then, in step S602, person segmentation is performed on the target person T and each candidate person Ci at the same segmentation positions. As described above, the present invention does not limit into how many parts a person is divided or at which positions the division is made; the segmentation may be performed arbitrarily according to characteristics of the target person, such as clarity characteristics, appearance characteristics or clothing characteristics.
Then, in step S603, the feature FTj is extracted for the segmentation position Tj of the target person T, and the feature FCi,j is extracted for the segmentation position Ci,j of each candidate person Ci.
Thereafter, in step S604, for the feature FTj of each segmentation position Tj of the target person T, the distance to the feature FCi,j of the corresponding segmentation position Ci,j of each candidate person Ci is measured; that is, the feature distance is calculated to obtain the local similarity Si,j.
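As a hypothetical illustration of step S604: the patent fixes neither the distance metric nor the mapping from distance to similarity, so the sketch below uses Euclidean distance and the monotone mapping S = 1 / (1 + d):

```python
import math

# Local similarity S_{i,j} from the feature distance between T_j and C_{i,j}.
# Euclidean distance with S = 1 / (1 + d) is one common, assumed choice:
# identical features give S = 1, and S falls toward 0 as the distance grows.
def local_similarity(ft_j, fc_ij):
    d = math.dist(ft_j, fc_ij)   # feature distance between the two parts
    return 1.0 / (1.0 + d)
```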
Then, in step S605, the similarities are ranked for each segmentation position to obtain the local ranking values Ri,j.
Then, in step S606, for each candidate person Ci, the local ranking value Ri,j of each of its segmentation positions is multiplied by the corresponding weight Wi,j and the products are summed, so that the overall ranking value Ri of each candidate person Ci is obtained by Formula 1. Here, the weight Wi,j can be set in at least the three ways described above: it may be determined by any one of the above WDi,j, WIi,j and WAi,j, or by comprehensively considering any number of them. Of course, the size of the weight may also be decided by the user.
Then, in step S607, the overall ranking values Ri are sorted, and the candidate person Cx corresponding to the largest value Rx is taken as the candidate person closest to the target person T. Thereafter, processing such as presenting the result to the user may be carried out (not shown); this processing is the same as in the prior art and is not described again.
According to the video retrieval method of the first embodiment described above, the target person and the candidate persons are divided into a plurality of segmentation positions for feature matching, and similarity is measured per segmentation position. The method therefore takes into account that different parts of a person have different importance for similarity measurement, and can improve the accuracy of similarity measurement, and hence of person retrieval, compared with the prior art. It is thus possible to quickly and accurately judge which of the candidate persons the target person detected and extracted from the video is.
The second embodiment of the video retrieval method of the present invention is described below; the description mainly concerns the differences from the first embodiment.
In the description of the first embodiment above, the candidate persons C were persons in a database. It was therefore possible to first perform the local ranking of the candidate persons per segmentation position, then, for each candidate person, obtain the overall ranking value by a weighted summation of the local ranking values based on the weight of each segmentation position, and finally judge, based on the overall ranking values, which of the candidate persons the target person is most likely to be.
However, such a calculation is suitable for the first scenario described above but not for the second. In the second scenario, the candidate persons are the individual persons detected and extracted from the video data; considering that the video data arrive in real time, performing local ranking and overall ranking on every such person is technically difficult to realize. Therefore, in the second embodiment, the overall similarity is calculated from the local similarities and the weights without local ranking or overall ranking, and a threshold is set for the overall similarity; a candidate person whose overall similarity exceeds the threshold is considered very likely to be the target person being retrieved.
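The calculation of the second embodiment can be sketched as follows, with Formula 2 taken as the weighted sum of local similarities (consistent with step S705 below) and an assumed scalar threshold:

```python
# Formula 2: overall similarity of the current candidate C_r,
#   S_r = sum_j W_{r,j} * S_{r,j}.
def overall_similarity(local_sims, weights):
    return sum(w * s for w, s in zip(weights, local_sims))

# A candidate whose overall similarity exceeds the threshold S_T is kept as
# a possible match; the threshold value itself is set empirically.
def is_possible_match(local_sims, weights, threshold):
    return overall_similarity(local_sims, weights) > threshold
```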
Fig. 7 is a flowchart showing the video retrieval method of the second embodiment.
In the second embodiment, it is assumed that a monitored object is retrieved from the video data captured by a camera network composed of a large number of cameras. The monitored object is thus the target person, and the candidate persons are the persons detected and extracted from the video data. Therefore, unlike in the first embodiment, the target person T is a person who is designated in advance and whose image has already been obtained; the image may be extracted from a certain frame of the video data, or obtained by other means, for example extracted from existing image data.
In step S701, the images of the target person and the candidate person are obtained in the same manner as in step S601; the difference is that the current candidate person Cr is detected and extracted from the video data, while the image of the target person T designated by the user is obtained at the same time. Since the present embodiment involves neither local ranking nor overall ranking, the similarity measurement is described here for only one target person T and one candidate person Cr. It goes without saying, however, that such a similarity measurement is performed for every combination of candidate person and target person.
Then, in step S702, person segmentation is performed on the target person T and the current candidate person Cr at the same segmentation positions, in the same manner as in step S602.
Then, in step S703, in the same manner as in step S603, the feature FTj is extracted for the segmentation position Tj of the target person T, and the feature FCr,j is extracted for the segmentation position Cr,j of the current candidate person Cr.
Thereafter, in step S704, for the feature FTj of each segmentation position Tj of the target person T, the distance to the feature FCr,j of the corresponding segmentation position Cr,j of the current candidate person Cr is measured; that is, the feature distance is calculated to obtain the local similarity Sr,j.
Then, in step S705, for the candidate person Cr, the local similarity Sr,j of each of its segmentation positions is multiplied by the corresponding weight Wr,j and the products are summed, so that the overall similarity Sr of the candidate person Cr is obtained by Formula 2. The weight Wr,j is obtained in the same way as in the first embodiment and is not described again here. One point should be noted regarding the weight WAi,j based on salient features: in the first embodiment the candidate persons are limited in number and known, so the weight WAi,j is set based on the saliency of each segmentation position of each candidate person, whereas in the present embodiment the target person is a designated person, so the weight WAi,j can be set based on the saliency of each segmentation position of the target person. As described above, however, the present invention is not limited in this respect.
Then, in step S706, the obtained overall similarity Sr is compared with a threshold ST; when Sr is greater than the threshold ST, the current candidate person Cr is considered to possibly be the target person T. The threshold ST can be set empirically; the present invention is not limited in this respect. Of course, there may be more than one threshold: a plurality of thresholds may be set, and the degree to which the current candidate person Cr may be the target person T may be judged from the magnitude relation between the overall similarity and the plurality of thresholds. For example, three thresholds ST1, ST2 and ST3 may be set. When Sr ≤ ST1, the current candidate person is judged unlikely to be the target person; when ST1 < Sr ≤ ST2, the current candidate person is judged to be the target person with a likelihood of 70%; when ST2 < Sr ≤ ST3, with a likelihood of 80%; and when ST3 < Sr, with a likelihood of 90%. Thereafter, all the current candidate persons satisfying the latter three conditions are saved and presented to the user, distinguished from one another by any suitable means of display. This ensures that no candidate person who may be the target person is omitted, while leaving the final visual judgment to the user.
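The three-threshold judgment of step S706 can be sketched as follows; the threshold values are illustrative defaults, and the likelihood labels follow the 70%/80%/90% example in the text:

```python
# Map the overall similarity S_r to a coarse likelihood that the current
# candidate is the target person, using three thresholds ST1 < ST2 < ST3.
def likelihood_label(s_r, st1=0.5, st2=0.7, st3=0.9):
    if s_r <= st1:
        return "unlikely"
    if s_r <= st2:
        return "70% likely"
    if s_r <= st3:
        return "80% likely"
    return "90% likely"
```

Candidates labelled anything other than "unlikely" would then all be saved and presented to the user, marked by their label.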
Thus, according to the video retrieval method of the second embodiment, as in the first embodiment, the target person and the candidate person are divided into a plurality of segmentation positions for feature matching, and similarity is measured per segmentation position. The method therefore takes into account that different parts of a person have different importance for similarity measurement, and can improve the accuracy of similarity measurement, and hence of person retrieval, compared with the prior art. It is thus possible to quickly and accurately judge whether a person detected and extracted from the video data is the target person being sought.
In addition, as described above, although the application scenarios of the first embodiment and the second embodiment differ, their technical idea is essentially the same: both embodiments use the local similarities and the weights to obtain the result of the similarity judgment between the target person and the candidate persons.
The video retrieval device of the third embodiment of the present invention is described below.
Fig. 8 is a schematic diagram showing a video retrieval system to which the video retrieval method of the present invention is applied. The video retrieval system 800 includes a camera network 801 and a video retrieval device 804. The camera network 801 is composed of a plurality (k) of cameras 801-1, 801-2, ..., 801-k and an interface device 803. The cameras are installed at different geographical locations and respectively capture different spaces 802-1, 802-2, ..., 802-k; their fields of view may be non-overlapping or partly overlapping.
The video data captured by each camera in the camera network 801 are sent by the interface device 803, in a wired or wireless manner, to the video retrieval device 804 for retrieval and analysis. The video retrieval device 804 executes the video retrieval methods described in the first and second embodiments above.
Fig. 9 is a block diagram showing the specific structure of the video retrieval device 804 of the present invention. It should be understood that each module shown in Fig. 9 may be a hardware module or a program module; that is, the video retrieval method of the present invention may be realized by a program that is stored in a storage medium and executed by a computer.
As shown in Fig. 9, the video retrieval device 804 includes a target person acquisition unit 901, a candidate person acquisition unit 902, a person segmentation unit 903, a feature extraction unit 904, a local similarity calculation unit 905, a weight calculation unit 906, a local similarity ranking unit 907, an overall ranking value calculation unit 908, an overall similarity calculation unit 909, a switching unit 910 and a judgment output unit 911.
The target person acquisition unit 901 and the candidate person acquisition unit 902 execute the processing of step S601 or step S701, obtaining the image of the target person and the images of the candidate persons from the video data and/or a person database. For example, in the case of the first embodiment described above, the target person acquisition unit 901 detects and extracts the image of the target person T from the video data, and the candidate person acquisition unit 902 obtains the images of the candidate persons C from the person database. In the case of the second embodiment described above, the target person acquisition unit 901 obtains the image of the target person designated by the user, and the candidate person acquisition unit 902 extracts the images of the candidate persons from the video data.
The person segmentation unit 903 executes the processing of steps S602 and S702: it performs person segmentation on the target person and the candidate persons obtained by the target person acquisition unit 901 and the candidate person acquisition unit 902, and sends the segmented images to the feature extraction unit 904 and the weight calculation unit 906.
The feature extraction unit 904 executes the processing of steps S603 and S703: it extracts features for the respective segmentation positions of the target person and the candidate persons, and sends the extracted local features to the local similarity calculation unit 905.
The weight calculation unit 906 calculates the weights from the segmented images of the target person and the candidate persons sent from the person segmentation unit 903. The weights may be calculated using any one of the above WDi,j, WIi,j and WAi,j, or determined by comprehensively considering any number of them.
The local similarity calculation unit 905 executes the processing of steps S604 and S704: it calculates, for each segmentation position, the feature distance between the target person and the candidate person to measure the local similarity, and sends the calculated local similarities to the local similarity ranking unit 907 and the overall similarity calculation unit 909.
The local similarity ranking unit 907 executes the local ranking described in step S605 of the first embodiment, obtains the local ranking values, and sends them to the overall ranking value calculation unit 908.
The overall similarity calculation unit 909 uses the local similarities sent by the local similarity calculation unit 905 and the weights calculated by the weight calculation unit 906 to execute the weighted summation of the local similarities described in step S705 of the second embodiment, obtains the overall similarity between the target person and the candidate person, and sends it to the switching unit 910.
The overall ranking value calculation unit 908 uses the local ranking values sent by the local similarity ranking unit 907 and the weights calculated by the weight calculation unit 906 to execute the weighted summation of the local ranking values described in step S606 of the first embodiment, obtains the overall ranking value of each candidate person, and sends the obtained overall ranking values to the switching unit 910.
The switching unit 910 selects either the received overall similarity or the received overall ranking values, depending on whether the method of the first embodiment or that of the second embodiment is currently being executed, and sends the selected values to the judgment output unit 911.
The judgment output unit 911 receives either the overall similarity or the overall ranking values. When it receives the overall similarity, it compares it with one or more thresholds as described in the second embodiment and outputs the comparison result; when it receives the overall ranking values, it considers, as described in the first embodiment, that the target person is most likely the candidate person corresponding to the largest ranking value, and outputs that result.
In the example of the video retrieval device 804 shown in Fig. 9, the switching unit 910 switches between the overall similarity and the overall ranking values depending on whether the first embodiment or the second embodiment is executed; however, the present invention is not limited to this. The switching unit 910 may be omitted: when the video retrieval method of the first embodiment is executed, the overall similarity calculation unit 909 is kept in an inactive state, and when the video retrieval method of the second embodiment is executed, the local similarity ranking unit 907 and the overall ranking value calculation unit 908 are kept in an inactive state.
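The role of the switching unit 910 (or, equivalently, of deactivating the unused path) can be sketched as a mode selector; the mode names and function signature are illustrative assumptions:

```python
# Select which processing path runs: in database mode (first embodiment)
# only the weighted rank summation runs; in real-time mode (second
# embodiment) only the weighted similarity summation and threshold test run.
def judge(mode, weights, local_sims=None, local_ranks=None, threshold=None):
    if mode == "database":      # first embodiment: overall ranking value
        return sum(w * r for w, r in zip(weights, local_ranks))
    if mode == "realtime":      # second embodiment: overall similarity test
        s = sum(w * s_ for w, s_ in zip(weights, local_sims))
        return s > threshold
    raise ValueError("unknown mode: " + mode)
```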
As described above, the video retrieval device 804 of the third embodiment is able to execute the video retrieval methods of the first and second embodiments. Since the target person and the candidate persons are divided into a plurality of segmentation positions for feature matching and similarity is measured per segmentation position, the device takes into account that different parts of a person have different importance for similarity measurement, and can improve the accuracy of similarity measurement, and hence of person retrieval, compared with the prior art. It is thus possible to quickly and accurately judge which of the candidate persons a target person detected and extracted from the video is, or whether a person detected and extracted from the video data is the target person being sought.
In addition, as shown in Fig. 10, the local similarity ranking unit 907, the overall ranking value calculation unit 908, the overall similarity calculation unit 909, the switching unit 910 and the judgment output unit 911 may be integrated into a single module as a similarity result output unit 1001, which outputs the similarity result according to the local similarities calculated by the local similarity calculation unit 905 and the weights calculated by the weight calculation unit 906. That is, in the similarity result output unit 1001, depending on which of the above first and second scenarios applies, the processing of the local similarity ranking unit 907 and the overall ranking value calculation unit 908, or that of the overall similarity calculation unit 909, is selectively carried out, and the result is then output.
The preferred embodiments of the present invention have been described above, but the present invention is not limited to these embodiments, and various changes may be made without departing from its spirit.
In addition, the present invention is not limited to the above embodiments and includes various modified examples. The above embodiments have been described in detail in order to make the present invention easy to understand, and the present invention is not necessarily limited to having all of the structures described.
Industrial Applicability
The present invention relates to the field of video retrieval, and can be applied to any scenario in which it is judged whether a target person and a candidate person match. It is not limited to the first and second scenarios described above, and can be widely applied in fields such as security and criminal investigation.
Claims (11)
1. A video retrieval device which extracts at least one of a target person and a candidate person from video data captured by a camera, and measures the similarity between the one and the other, characterized by comprising:
a target person acquisition unit which obtains an image of the target person;
a candidate person acquisition unit which obtains an image of the candidate person;
a person segmentation unit which performs person segmentation on the target person and the candidate person at the same segmentation positions;
a feature extraction unit which extracts a feature for each segmentation position of each of the target person and the candidate person;
a local similarity calculation unit which calculates a feature distance between the target person and the candidate person for each segmentation position to obtain a local similarity;
a weight calculation unit which calculates a weight for each segmentation position of the candidate person; and
a similarity result output unit which outputs a result of the similarity judgment between the target person and the candidate person based on the weights calculated by the weight calculation unit and the local similarities calculated by the local similarity calculation unit.
2. The video retrieval device according to claim 1, characterized in that:
the weight calculation unit calculates at least one of a first weight, a second weight and a third weight to determine the weight,
the first weight is determined based on the position of the human body at which the segmentation position is located,
the second weight is determined based on the clarity of the images of the segmentation positions of the target person and the candidate person, and
the third weight is determined based on the saliency of the images of the segmentation positions of the target person and the candidate person.
3. The video retrieval device according to claim 2, characterized in that:
the weight calculation unit trains, by a machine learning method using a sample set, one model for judging clarity for each segmentation position, inputs the image of each segmentation position of the target person and the candidate person into the corresponding model, and calculates the second weight using the output results.
4. The video retrieval device according to any one of claims 1 to 3, characterized in that:
the candidate persons are a plurality of persons obtained from a person database,
the target person is a person extracted from the video data,
the similarity result output unit includes a local similarity ranking unit and an overall ranking value calculation unit,
the local similarity ranking unit ranks, for each segmentation position, the local similarities calculated by the local similarity calculation unit so as to assign local ranking values,
the overall ranking value calculation unit calculates the overall ranking value of each candidate person based on the local ranking values of each segmentation position of the candidate person and the weights, and
the similarity result output unit outputs the candidate person having the largest overall ranking value.
5. The video retrieval device according to any one of claims 1 to 3, characterized in that:
the candidate person is a person extracted from the video data,
the target person is a person designated by a user of the video retrieval device,
the similarity result output unit includes an overall similarity calculation unit,
the overall similarity calculation unit calculates the overall similarity between the candidate person and the target person based on the local similarity of each segmentation position calculated by the local similarity calculation unit and the weights calculated by the weight calculation unit, and
the similarity result output unit outputs the degree of similarity between the candidate person and the target person according to the relation between the overall similarity and a prescribed threshold.
6. A video retrieval method which extracts at least one of a target person and a candidate person from video data captured by a camera, and measures the similarity between the one and the other, characterized by comprising:
a target person acquisition step of obtaining an image of the target person;
a candidate person acquisition step of obtaining an image of the candidate person;
a person segmentation step of performing person segmentation on the target person and the candidate person at the same segmentation positions;
a feature extraction step of extracting a feature for each segmentation position of each of the target person and the candidate person;
a local similarity calculation step of calculating a feature distance between the target person and the candidate person for each segmentation position to obtain a local similarity;
a weight calculation step of calculating a weight for each segmentation position of the candidate person; and
a similarity result output step of outputting a result of the similarity judgment between the target person and the candidate person based on the weights calculated in the weight calculation step and the local similarities calculated in the local similarity calculation step.
7. The video retrieval method according to claim 6, characterized in that:
in the weight calculation step, at least one of a first weight, a second weight and a third weight is calculated to determine the weight,
the first weight is determined based on the position of the human body at which the segmentation position is located,
the second weight is determined based on the clarity of the images of the segmentation positions of the target person and the candidate person, and
the third weight is determined based on the saliency of the images of the segmentation positions of the target person and the candidate person.
8. The video retrieval method according to claim 7, characterized in that:
one model for judging clarity is trained for each segmentation position by a machine learning method using a sample set, and
in the weight calculation step, the image of each segmentation position of the target person and the candidate person is input into the corresponding model, and the second weight is calculated using the output results.
9. The video retrieval method according to any one of claims 6 to 8, characterized in that:
the candidate persons are a plurality of persons obtained from a person database,
the target person is a person extracted from the video data,
the similarity result output step includes a local similarity ranking step and an overall ranking value calculation step,
in the local similarity ranking step, the local similarities calculated in the local similarity calculation step are ranked for each segmentation position so as to assign local ranking values,
in the overall ranking value calculation step, the overall ranking value of each candidate person is calculated based on the local ranking values of each segmentation position of the candidate person and the weights, and
in the similarity result output step, the candidate person having the largest overall ranking value is output.
10. The video retrieval method according to any one of claims 6 to 8, characterized in that:
the candidate person is a person extracted from the video data,
the target person is a person designated by a user,
the similarity result output step includes an overall similarity calculation step,
in the overall similarity calculation step, the overall similarity between the candidate person and the target person is calculated based on the local similarity of each segmentation position calculated in the local similarity calculation step and the weights calculated in the weight calculation step, and
in the similarity result output step, the degree of similarity between the candidate person and the target person is output according to the relation between the overall similarity and a prescribed threshold.
11. A storage medium storing a computer-executable program, characterized in that:
the program causes a computer to execute the video retrieval method according to any one of claims 6 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711236903.XA CN109858308B (en) | 2017-11-30 | 2017-11-30 | Video retrieval device, video retrieval method, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109858308A true CN109858308A (en) | 2019-06-07 |
CN109858308B CN109858308B (en) | 2023-03-24 |
Family
ID=66887972
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110866532A (en) * | 2019-11-07 | 2020-03-06 | 浙江大华技术股份有限公司 | Object matching method and device, storage medium and electronic device |
CN116127133A (en) * | 2023-04-17 | 2023-05-16 | 成都苏扶软件开发有限公司 | File searching method, system, equipment and medium based on artificial intelligence |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1227430A2 (en) * | 2001-01-24 | 2002-07-31 | Eastman Kodak Company | System and method for determining image similarity |
CN102663359A (en) * | 2012-03-30 | 2012-09-12 | 博康智能网络科技股份有限公司 | Method and system for pedestrian retrieval based on internet of things |
CN103325122A (en) * | 2013-07-03 | 2013-09-25 | 武汉大学 | Pedestrian retrieval method based on bidirectional sequencing |
CN103714181A (en) * | 2014-01-08 | 2014-04-09 | 天津大学 | Stratification specific figure search method |
CN105023008A (en) * | 2015-08-10 | 2015-11-04 | 河海大学常州校区 | Visual saliency and multiple characteristics-based pedestrian re-recognition method |
CN105141903A (en) * | 2015-08-13 | 2015-12-09 | 中国科学院自动化研究所 | Method for retrieving object in video based on color information |
CN105635665A (en) * | 2014-12-01 | 2016-06-01 | 株式会社日立制作所 | Monitoring system, data transmission method and data transmission device |
CN106559647A (en) * | 2015-09-28 | 2017-04-05 | 株式会社日立制作所 | Enabling is closed the door detection means and method, artificial abortion's detecting system and the vehicles |
AU2016225819A1 (en) * | 2015-11-11 | 2017-05-25 | Adobe Inc. | Structured knowledge modeling and extraction from images |
CN106919889A (en) * | 2015-12-25 | 2017-07-04 | 株式会社日立制作所 | The method and apparatus detected to the number of people in video image |
US20170300569A1 (en) * | 2016-04-19 | 2017-10-19 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and storage medium |
CN107315795A (en) * | 2017-06-15 | 2017-11-03 | 武汉大学 | The instance of video search method and system of joint particular persons and scene |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1227430A2 (en) * | 2001-01-24 | 2002-07-31 | Eastman Kodak Company | System and method for determining image similarity |
CN102663359A (en) * | 2012-03-30 | 2012-09-12 | 博康智能网络科技股份有限公司 | Method and system for pedestrian retrieval based on internet of things |
CN103325122A (en) * | 2013-07-03 | 2013-09-25 | 武汉大学 | Pedestrian retrieval method based on bidirectional sequencing |
CN103714181A (en) * | 2014-01-08 | 2014-04-09 | 天津大学 | Stratification specific figure search method |
CN105635665A (en) * | 2014-12-01 | 2016-06-01 | 株式会社日立制作所 | Monitoring system, data transmission method and data transmission device |
CN105023008A (en) * | 2015-08-10 | 2015-11-04 | 河海大学常州校区 | Visual saliency and multiple characteristics-based pedestrian re-recognition method |
CN105141903A (en) * | 2015-08-13 | 2015-12-09 | 中国科学院自动化研究所 | Method for retrieving object in video based on color information |
CN106559647A (en) * | 2015-09-28 | 2017-04-05 | 株式会社日立制作所 | Enabling is closed the door detection means and method, artificial abortion's detecting system and the vehicles |
AU2016225819A1 (en) * | 2015-11-11 | 2017-05-25 | Adobe Inc. | Structured knowledge modeling and extraction from images |
CN106919889A (en) * | 2015-12-25 | 2017-07-04 | 株式会社日立制作所 | Method and apparatus for detecting the number of people in a video image |
US20170300569A1 (en) * | 2016-04-19 | 2017-10-19 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and storage medium |
CN107315795A (en) * | 2017-06-15 | 2017-11-03 | 武汉大学 | Video instance search method and system combining specific persons and scenes |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110866532A (en) * | 2019-11-07 | 2020-03-06 | 浙江大华技术股份有限公司 | Object matching method and device, storage medium and electronic device |
CN110866532B (en) * | 2019-11-07 | 2022-12-30 | 浙江大华技术股份有限公司 | Object matching method and device, storage medium and electronic device |
CN116127133A (en) * | 2023-04-17 | 2023-05-16 | 成都苏扶软件开发有限公司 | File searching method, system, equipment and medium based on artificial intelligence |
CN116127133B (en) * | 2023-04-17 | 2023-08-08 | 湖南柚子树文化传媒有限公司 | File searching method, system, equipment and medium based on artificial intelligence |
Also Published As
Publication number | Publication date |
---|---|
CN109858308B (en) | 2023-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhao et al. | Learning mid-level filters for person re-identification | |
CN108520226B (en) | Pedestrian re-identification method based on body decomposition and saliency detection | |
Sarfraz et al. | Deep view-sensitive pedestrian attribute inference in an end-to-end model | |
CN104881637B (en) | Multimodal information system and fusion method based on sensing information and target tracking | |
CN106503687B (en) | Surveillance video person identification system and method fusing multi-angle facial features | |
CN105574505B (en) | Method and system for human target re-identification across multiple cameras | |
Zheng et al. | Partial person re-identification | |
CN109558810B (en) | Target person identification method based on part segmentation and fusion | |
CN107067413B (en) | Moving target detection method based on spatio-temporal statistical matching of local features | |
CN107944431B (en) | Intelligent recognition method based on motion change | |
CN105389562B (en) | Re-optimization method for surveillance video pedestrian re-identification results under spatio-temporal constraints | |
CN103325122B (en) | Pedestrian retrieval method based on bidirectional ranking | |
CN110414441B (en) | Pedestrian track analysis method and system | |
CN105631430A (en) | Matching method and apparatus for face image | |
CN107615298A (en) | Face identification method and system | |
CN105279483A (en) | Real-time fall detection method based on depth images | |
CN104992142A (en) | Pedestrian recognition method combining deep learning and attribute learning | |
CN104751136A (en) | Multi-camera video event retrospective tracing method based on face recognition | |
CN107944416A (en) | Method for live-person verification via video | |
CN109271932A (en) | Pedestrian re-identification method based on color matching | |
CN112183438B (en) | Image recognition method for illegal behaviors based on few-shot learning neural network | |
D'Orazio et al. | People re-identification and tracking from multiple cameras: A review | |
CN111753601B (en) | Image processing method, device and storage medium | |
CN110443179A (en) | Absence-from-post detection method, device and storage medium | |
CN109165612A (en) | Pedestrian re-identification method based on deep features and bidirectional KNN ranking optimization | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||