CN107992591A - People search method and device, electronic equipment and computer-readable recording medium - Google Patents
- Publication number
- CN107992591A (application CN201711310435.6A)
- Authority
- CN
- China
- Prior art keywords
- target person
- pictures
- picture
- filter condition
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
Abstract
The present invention provides a person search method and device, an electronic device, and a computer-readable recording medium. The person search method includes: obtaining a picture of a target person; identifying the facial features and clothing features of the target person in the picture; obtaining a first filter condition; obtaining a first picture set from a picture data set according to the facial features and clothing features of the target person and the first filter condition; obtaining a second filter condition; obtaining a second picture set from the first picture set according to the second filter condition; and determining the activity trajectory of the target person according to the data of the pictures in the second picture set. By combining multiple filter conditions with the facial features and clothing features of the target person, the present invention can determine the activity trajectory of the target person quickly and accurately, giving the user a better experience.
Description
Technical field
The present invention relates to the field of intelligent search technology, and more particularly to a person search method and device, an electronic device, and a computer-readable recording medium.
Background art
In the prior art, face recognition technology is relatively mature and is widely used in various person search scenarios.
However, in certain search scenarios, a person search that relies on face recognition alone, with facial features as the only search condition, yields an excessively large search range and often fails to meet users' needs. Moreover, because face recognition must identify many details of a face, the recognition process is comparatively laborious, which not only increases the server's workload but also takes a long time. A person search performed by face recognition technology alone therefore no longer meets users' needs.
Summary of the invention
In view of the foregoing, it is necessary to provide a person search method and device, an electronic device, and a computer-readable recording medium that, by combining multiple filter conditions with the facial features and clothing features of a target person, can determine the activity trajectory of the target person quickly and accurately, giving the user a better experience.
A person search method, the method including:
obtaining a picture of a target person;
identifying the facial features and clothing features of the target person in the picture of the target person;
obtaining a first filter condition;
obtaining a first picture set from a picture data set according to the facial features and clothing features of the target person and the first filter condition;
obtaining a second filter condition;
obtaining a second picture set from the first picture set according to the second filter condition;
determining the activity trajectory of the target person according to the data of the pictures in the second picture set.
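The claimed two-stage filtering can be read as a small pipeline: a coarse first filter over the whole data set, a face match, then a narrowing second filter. The sketch below is a minimal illustration under assumed data, not the patented implementation; the picture fields and predicates are hypothetical stand-ins for the patent's face matcher and filter conditions.

```python
# Minimal sketch of the claimed two-stage search pipeline.
# Picture fields and predicates are illustrative assumptions.

def first_filter(pictures, face_id, condition):
    """Stage 1: coarse filter by the first condition, then keep face matches."""
    candidates = [p for p in pictures if condition(p)]      # first candidate picture set
    return [p for p in candidates if p["face"] == face_id]  # first picture set

def second_filter(first_set, condition):
    """Stage 2: narrow the first picture set with the second condition."""
    return [p for p in first_set if condition(p)]           # second picture set

pictures = [
    {"face": "F1", "hour": 9, "place": "A"},
    {"face": "F1", "hour": 15, "place": "B"},
    {"face": "F2", "hour": 10, "place": "A"},
]
# First condition: morning (8-12); second condition: place "A".
stage1 = first_filter(pictures, "F1", lambda p: 8 <= p["hour"] <= 12)
stage2 = second_filter(stage1, lambda p: p["place"] == "A")
print(stage2)  # [{'face': 'F1', 'hour': 9, 'place': 'A'}]
```

The point of the two stages is that the expensive face match runs only over the first candidate set, not the whole data set.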
According to a preferred embodiment of the present invention, identifying the clothing features of the target person in the picture of the target person includes:
determining a clothing region in the picture of the target person and extracting the features of the clothing in that region, the clothing features including the style, color, and color ratio of the clothing; and matching the extracted features against the corresponding features in a pre-trained clothing feature model to determine the clothing features of the target person, including style, color, and color-ratio information.
According to a preferred embodiment of the present invention, obtaining the first picture set from the picture data set according to the facial features and clothing features of the target person and the first filter condition includes one or a combination of the following:
searching the picture data set according to a first specified time period included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified time period information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features; and/or
searching the picture data set according to the clothing features of the target person and a first specified clothing color included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing color information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features; and/or
searching the picture data set according to the clothing features of the target person and a first specified clothing color ratio included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing color ratio information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features; and/or
searching the picture data set according to the clothing features of the target person and a first specified clothing style included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing style information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features; and/or
searching the picture data set according to a first specified region included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified region information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features.
According to a preferred embodiment of the present invention, obtaining the second picture set from the first picture set according to the second filter condition includes one or a combination of the following:
determining as the second picture set the pictures in the first picture set that fall within a second specified time period in the second filter condition, the pictures in the second picture set carrying the second specified time period information, the first specified time period containing the second specified time period; and/or
determining as the second picture set the pictures in the first picture set that fall within a second specified region in the second filter condition, the pictures in the second picture set carrying the second specified region information, the first specified region containing the second specified region.
According to a preferred embodiment of the present invention, determining the activity trajectory of the target person according to the data of the pictures in the second picture set includes:
obtaining the shooting time and shooting location corresponding to each picture in the second picture set;
sorting the pictures of the second picture set by their corresponding shooting times to determine the activity trajectory of the target person; or
classifying the pictures of the second picture set by their corresponding shooting locations to determine the activity trajectory of the target person.
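The two ways of deriving a trajectory described above (sorting by shooting time, or grouping by shooting location) can be sketched as follows; the picture fields are illustrative assumptions, not the patent's data format.

```python
from collections import defaultdict

pictures = [
    {"time": "09:30", "place": "Gate B"},
    {"time": "08:15", "place": "Gate A"},
    {"time": "11:05", "place": "Gate B"},
]

# Option 1: sort by shooting time -> ordered trajectory of locations.
trajectory = [p["place"] for p in sorted(pictures, key=lambda p: p["time"])]
print(trajectory)  # ['Gate A', 'Gate B', 'Gate B']

# Option 2: group by shooting location -> places visited, with their times.
by_place = defaultdict(list)
for p in pictures:
    by_place[p["place"]].append(p["time"])
print(dict(by_place))  # {'Gate B': ['09:30', '11:05'], 'Gate A': ['08:15']}
```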
According to a preferred embodiment of the present invention, the method further includes:
obtaining the last position at which the target person appeared;
determining the direction of motion of the target person at the last position;
predicting the activity area of the target person according to the direction of motion of the target person.
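One minimal geometric reading of this prediction step: project forward from the last known position along the direction of motion and take an area around the projected point. The patent does not specify the model; the speed, horizon, and uncertainty margin below are assumptions for illustration only.

```python
import math

def predict_area(last_pos, direction_deg, speed_m_s, horizon_s):
    """Predict a circular activity area centred on the projected position.

    last_pos: (x, y) in metres; direction_deg: heading in degrees.
    The radius grows with the prediction horizon to reflect uncertainty
    (the 0.5 factor is an assumed margin, not a value from the patent).
    """
    dist = speed_m_s * horizon_s
    rad = math.radians(direction_deg)
    centre = (last_pos[0] + dist * math.cos(rad),
              last_pos[1] + dist * math.sin(rad))
    radius = 0.5 * dist
    return centre, radius

# Last seen at the origin, heading "north" (90 deg) at walking speed.
centre, radius = predict_area((0.0, 0.0), 90.0, 1.5, 60)
print(centre, radius)
```

Camera devices whose coverage intersects this area would then be the ones polled in the real-time capture step that follows.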
According to a preferred embodiment of the present invention, the method further includes:
obtaining, in real time, the pictures captured by the camera devices in the predicted activity area;
identifying the target person in the captured pictures and tracking the target person;
when the target person is a dangerous person, sending the activity trajectory of the target person and the predicted activity area corresponding to the target person to at least one user device.
A person search device, the device including:
an acquiring unit for obtaining a picture of a target person;
a recognition unit for identifying the facial features and clothing features of the target person in the picture of the target person;
the acquiring unit being further configured to obtain a first filter condition;
the acquiring unit being further configured to obtain a first picture set from a picture data set according to the facial features and clothing features of the target person and the first filter condition;
the acquiring unit being further configured to obtain a second filter condition;
the acquiring unit being further configured to obtain a second picture set from the first picture set according to the second filter condition;
a determination unit for determining the activity trajectory of the target person according to the data of the pictures in the second picture set.
According to a preferred embodiment of the present invention, the recognition unit identifies the clothing features of the target person in the picture of the target person by:
determining a clothing region in the picture of the target person and extracting the features of the clothing in that region, the clothing features including the style, color, and color ratio of the clothing; and matching the extracted features against the corresponding features in a pre-trained clothing feature model to determine the clothing features of the target person, including style, color, and color-ratio information.
According to a preferred embodiment of the present invention, the acquiring unit obtains the first picture set from the picture data set, according to the facial features and clothing features of the target person and the first filter condition, in one or a combination of the following ways:
searching the picture data set according to a first specified time period included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified time period information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features; and/or
searching the picture data set according to the clothing features of the target person and a first specified clothing color included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing color information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features; and/or
searching the picture data set according to the clothing features of the target person and a first specified clothing color ratio included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing color ratio information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features; and/or
searching the picture data set according to the clothing features of the target person and a first specified clothing style included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing style information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features; and/or
searching the picture data set according to a first specified region included in the first filter condition and determining a first candidate picture set, the pictures in the first candidate picture set carrying the first specified region information; according to the facial features of the target person, determining the first picture set from the first candidate picture set, every face in the first picture set having the facial features.
According to a preferred embodiment of the present invention, the acquiring unit obtains the second picture set from the first picture set, according to the second filter condition, in one or a combination of the following ways:
determining as the second picture set the pictures in the first picture set that fall within a second specified time period in the second filter condition, the pictures in the second picture set carrying the second specified time period information, the first specified time period containing the second specified time period; and/or
determining as the second picture set the pictures in the first picture set that fall within a second specified region in the second filter condition, the pictures in the second picture set carrying the second specified region information, the first specified region containing the second specified region.
According to a preferred embodiment of the present invention, the determination unit is specifically configured to:
obtain the shooting time and shooting location corresponding to each picture in the second picture set;
sort the pictures of the second picture set by their corresponding shooting times to determine the activity trajectory of the target person; or
classify the pictures of the second picture set by their corresponding shooting locations to determine the activity trajectory of the target person.
According to a preferred embodiment of the present invention, the acquiring unit is further configured to obtain the last position at which the target person appeared;
the determination unit is further configured to determine the direction of motion of the target person at the last position;
the device further includes:
a predicting unit for predicting the activity area of the target person according to the direction of motion of the target person.
According to a preferred embodiment of the present invention, the acquiring unit is further configured to obtain, in real time, the pictures captured by the camera devices in the predicted activity area;
the device further includes:
a tracking unit for identifying the target person in the captured pictures and tracking the target person; and
a transmitting unit for, when the target person is a dangerous person, sending the activity trajectory of the target person and the predicted activity area corresponding to the target person to at least one user device.
An electronic device, the electronic device including:
a memory storing at least one instruction; and
a processor executing the instructions stored in the memory to implement the person search method.
A computer-readable recording medium storing at least one instruction, the at least one instruction being executed by a processor in an electronic device to implement the person search method.
As can be seen from the above technical solutions, the present invention obtains a picture of a target person; identifies the facial features and clothing features of the target person in the picture; obtains a first filter condition; obtains a first picture set from a picture data set according to the facial features and clothing features of the target person and the first filter condition; obtains a second filter condition; obtains a second picture set from the first picture set according to the second filter condition; and determines the activity trajectory of the target person according to the data of the pictures in the second picture set. By combining multiple filter conditions with the facial features and clothing features of the target person, the present invention can determine the activity trajectory of the target person quickly and accurately, giving the user a better experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the application environment of a preferred embodiment of the person search method of the present invention.
Fig. 2 is a flowchart of a preferred embodiment of the person search method of the present invention.
Fig. 3 is a functional block diagram of a preferred embodiment of the person search device of the present invention.
Fig. 4 is a structural diagram of an electronic device of a preferred embodiment implementing the person search method of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, which is a schematic diagram of the application environment of a preferred embodiment of the person search method of the present invention, the electronic device 1 communicates with a camera device 2. The camera device 2 is used to capture video images.
As shown in Fig. 2, which is a flowchart of a preferred embodiment of the person search method of the present invention, the order of the steps in the flowchart may be changed and some steps may be omitted according to different requirements.
The person search method is applied to one or more electronic devices 1. The electronic device 1 is a device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The electronic device 1 may be any electronic product capable of human-computer interaction with a user, such as a personal computer, a tablet computer, a smartphone, a personal digital assistant (PDA), a game console, an interactive Internet Protocol television (IPTV), a smart wearable device, and the like.
The electronic device 1 may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing.
The network in which the electronic device 1 resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
S10: the electronic device 1 obtains a picture of the target person.
In at least one embodiment of the present invention, the electronic device 1 obtains the picture of the target person in one or a combination of the following ways, including but not limited to:
(1) The electronic device 1 receives an uploaded picture and uses the uploaded picture as the picture of the target person.
In at least one embodiment of the present invention, the electronic device 1 may receive a picture uploaded by a user and use the uploaded picture as the picture of the target person.
In this way, the electronic device 1 can search for the target person according to what the user provides, making the search more targeted. Usually, the features of the target person are shown more clearly in a picture provided by the user, so when the electronic device 1 uses that picture to identify the facial features and clothing features of the person, the recognition result is more accurate and reliable.
(2) The electronic device 1 receives a picture selected from the picture data set and uses the selected picture as the picture of the target person.
In at least one embodiment of the present invention, when the user cannot provide a picture of the target person, the user may select a picture from the picture data set; the electronic device 1 receives the selected picture and determines it as the picture of the target person.
In this way, even when the user cannot provide a picture of the target person as a basis for identification, the electronic device 1 can still identify the facial features and clothing features of the target person from a picture selected from the picture data set, so that the search for the target person can proceed smoothly and with greater flexibility.
In at least one embodiment of the present invention, the picture data set may be a pre-stored picture data set or a picture data set captured in real time by the camera device 2 communicating with the electronic device 1; the present invention places no restriction on this.
In at least one embodiment of the present invention, to enrich the data and facilitate a more accurate search, the picture data set may store pictures of persons, pictures of animals, pictures of scenery, and the like; the present invention is not limited in this respect.
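A picture data set of this kind is easiest to reason about as records that carry both content features and capture metadata, since the later filtering and trajectory steps use both. The field names below are illustrative assumptions, not the patent's schema.

```python
from dataclasses import dataclass

@dataclass
class PictureRecord:
    """One entry in the picture data set (illustrative schema only)."""
    picture_id: str
    face_features: str       # e.g. an identifier or embedding from face recognition
    clothing_features: dict  # style, colors, color ratio
    shot_at: str             # shooting time, ISO 8601
    shot_where: str          # shooting location (camera site)

rec = PictureRecord("p001", "F1",
                    {"style": "casual", "colors": ["red", "black"]},
                    "2017-12-11T08:15:00", "Gate A")
print(rec.shot_where)  # Gate A
```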
S11: the electronic device 1 identifies the facial features and clothing features of the target person in the picture of the target person.
In at least one embodiment of the present invention, the electronic device 1 may identify the facial features of the target person in many ways, and the present invention is not limited in this respect. Since face recognition technology is relatively mature and belongs to the prior art, details are not described here.
In at least one embodiment of the present invention, the electronic device 1 identifies the clothing features of the target person in the picture as follows:
the electronic device 1 determines a clothing region in the picture of the target person and extracts the features of the clothing in that region, the clothing features including the style, color, and color ratio of the clothing; the electronic device 1 then matches the extracted features against the corresponding features in a pre-trained clothing feature model to determine the clothing features of the target person, including style, color, and color-ratio information.
In at least one embodiment of the present invention, the pre-trained clothing feature model may have many categories; specifically, the categories may include, but are not limited to: Korean style, Japanese style, European style, ethnic style, mori-girl style, "fresh" style, and the like. It should be noted that the clothing feature model of each category includes clothing feature data such as the style, color, and color ratio of a person's clothing.
In this way, the electronic device 1 can first determine the category of the pre-trained clothing feature model to which the picture of the target person belongs, and then match the extracted features against the features in the clothing feature model of that category to obtain the matched features. Because this reduces the scope of the search, identifying the clothing features of the target person in the picture becomes more efficient.
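A rough sketch of the color-ratio part of this matching: count color shares over an assumed clothing region, then pick the nearest category profile. Label counting and L1 distance stand in for the pre-trained clothing feature model; all names and the toy profiles are illustrative assumptions.

```python
def color_ratio(pixels):
    """Share of each color in a clothing region given as a list of color labels."""
    total = len(pixels)
    counts = {}
    for c in pixels:
        counts[c] = counts.get(c, 0) + 1
    return {c: n / total for c, n in counts.items()}

def match_category(ratio, model):
    """Pick the clothing-model category whose color profile is closest (L1 distance)."""
    def dist(a, b):
        keys = set(a) | set(b)
        return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys)
    return min(model, key=lambda cat: dist(ratio, model[cat]))

region = ["red", "black", "red", "black"]        # assumed extracted clothing region
model = {"korean": {"red": 0.5, "black": 0.5},   # toy "pre-trained" category profiles
         "mori":   {"green": 0.8, "brown": 0.2}}
ratio = color_ratio(region)
print(match_category(ratio, model))  # korean
```

A real system would work on pixel values and learned style features rather than labels, but the narrowing-by-category step is the same idea.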
S12: the electronic device 1 obtains a first filter condition.
In at least one embodiment of the present invention, the scope of the first filter condition may include, but is not limited to: time, place, clothing style, clothing color, clothing color ratio, and the like.
It should be noted that the first filter condition is a filter condition with a larger search range: it yields search results with wider coverage, which avoids a narrow filter condition limiting the search and producing insufficiently comprehensive results.
S13: the electronic device 1 obtains a first picture set from the picture data set according to the facial features and clothing features of the target person and the first filter condition.
In at least one embodiment of the present invention, the electronic device 1 obtains the first picture set from the picture data set, according to the facial features and clothing features of the target person and the first filter condition, in one or a combination of the following ways, including but not limited to:
(1) The electronic device 1 searches the picture data set according to a first specified time period included in the first filter condition and determines a first candidate picture set, the pictures in the first candidate picture set carrying the first specified time period information; the electronic device 1 then determines the first picture set from the first candidate picture set according to the facial features of the target person, every face in the first picture set having the facial features.
In at least one embodiment of the present invention, the first specified time period may be configured by the user. Specifically, the user may configure it according to the actual search demand, according to the known living habits of the target person, or according to empirical search-time values known in the search field; the present invention is not limited in this respect.
For example, when the user wants to know the activity trajectory of the target person in the morning, the user may set the first specified time period to 8 a.m. to 12 noon. The electronic device 1 then first searches the picture data set with reference to the first specified time period of 8 a.m. to 12 noon and determines the first candidate picture set; the electronic device 1 then determines the first picture set from the first candidate picture set according to the facial features of the target person.
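The morning example above amounts to a time-window test on each picture's shooting timestamp. A sketch with standard-library timestamp handling follows; the ISO 8601 timestamps are assumed, not the patent's storage format.

```python
from datetime import datetime, time

def in_period(shot_at, start, end):
    """True if a picture's shooting time falls inside the specified period."""
    t = datetime.fromisoformat(shot_at).time()
    return start <= t < end

shots = ["2017-12-11T08:15:00", "2017-12-11T13:40:00", "2017-12-11T11:59:00"]
# First specified time period: 8 a.m. to 12 noon.
morning = [s for s in shots if in_period(s, time(8, 0), time(12, 0))]
print(morning)  # ['2017-12-11T08:15:00', '2017-12-11T11:59:00']
```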
(2) The electronic device 1 searches the picture data set according to the clothing features of the target person and a first specified clothing color included in the first filter condition and determines a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing color information; according to the facial features of the target person, the first picture set is determined from the first candidate picture set, every face in the first picture set having the facial features.
In at least one embodiment of the present invention, the first specified clothing color may be configured by the user, or it may be configured by the electronic device 1 according to the clothing color in the identified clothing features of the target person; the present invention is not limited in this respect.
For example, when the clothing colors in the identified clothing features of the target person are red and black, the electronic device 1 may set the first specified clothing colors to red and black. The electronic device 1 then first searches the picture data set with reference to the clothing features of the target person and the first specified clothing colors red and black and determines the first candidate picture set; the electronic device 1 then determines the first picture set from the first candidate picture set according to the facial features of the target person.
(3) The electronic device 1 searches the picture data set according to the clothing features of the target person and a first specified clothing color ratio included in the first filter condition and determines a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing color ratio information; according to the facial features of the target person, the first picture set is determined from the first candidate picture set, every face in the first picture set having the facial features.
In at least one embodiment of the present invention, the first specified clothing color ratio may be configured by the user, or it may be configured by the electronic device 1 according to the clothing color ratio in the identified clothing features of the target person; the present invention is not limited in this respect.
Such as:When the clothing color ratio worn in feature for the target person that the electronic equipment 1 identifies is:
It is red:Black=1:When 1, the electronic equipment 1 can set described first specify clothing color ratio be:It is red:Black=
1:1, in this way, the electronic equipment 1 wears feature and the first specified clothing color ratio with reference to the target person first
Example is red:Black=1:1, concentrate and search in the image data, and determine first candidate's pictures;The electronic equipment 1 is again
According to the face characteristic of the target person, first pictures are determined from the first candidate pictures.
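The color-ratio matching described above can be sketched as a tolerance comparison between color-fraction dictionaries. This is an illustrative assumption, not part of the disclosure: the patent does not specify how ratios are represented or how close a match must be, so the dictionary layout and the tolerance value here are hypothetical.

```python
def ratio_matches(picture_ratio, target_ratio, tol=0.15):
    """Compare two clothing color ratios, e.g. {"red": 0.5, "black": 0.5}.

    Each dict maps a color name to its fraction of the clothing area
    (fractions sum to 1). The ratios match when every color's fraction
    differs by at most `tol` (an assumed tolerance).
    """
    colors = set(picture_ratio) | set(target_ratio)
    return all(
        abs(picture_ratio.get(c, 0.0) - target_ratio.get(c, 0.0)) <= tol
        for c in colors
    )
```

With this sketch, a red:black = 1:1 target accepts pictures near a 50/50 split and rejects, say, a 90/10 split.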
(4) The electronic equipment 1 searches the picture data set according to the wearing feature of the target person and the first specified clothes style included in the first filter condition, and determines a first candidate picture set; the pictures in the first candidate picture set carry the first specified clothes style information. According to the face feature of the target person, the first picture set is determined from the first candidate picture set; every face in the first picture set has the face feature.
In at least one embodiment of the present invention, the first specified clothes style may be set by the user, or may be set by the electronic equipment 1 according to the clothes style in the identified wearing feature of the target person; the present invention is not limited thereto.
For example: when the clothes style in the wearing feature of the target person identified by the electronic equipment 1 is Korean style, the electronic equipment 1 may set the first specified clothes style to Korean style. In this way, the electronic equipment 1 first searches the picture data set using the wearing feature of the target person and the first specified clothes style, Korean style, and determines a first candidate picture set; the electronic equipment 1 then determines the first picture set from the first candidate picture set according to the face feature of the target person.
(5) The electronic equipment 1 searches the picture data set according to the first specified region included in the first filter condition, and determines a first candidate picture set; the pictures in the first candidate picture set carry the first specified region information. According to the face feature of the target person, the first picture set is determined from the first candidate picture set; every face in the first picture set has the face feature.
In at least one embodiment of the present invention, the first specified region may be set by the user. Specifically, the user may set it according to the actual search demand, according to the known living habits of the target person, or according to accumulated experience in the search field; the present invention is not limited thereto.
For example: when the first specified region set by the user and received by the electronic equipment 1 is A street, the electronic equipment 1 first searches the picture data set using the first specified region, A street, and determines a first candidate picture set; the electronic equipment 1 then determines the first picture set from the first candidate picture set according to the face feature of the target person.
In at least one embodiment of the present invention, because face recognition is comparatively fine-grained and computationally involved and therefore takes more time, in this embodiment the electronic equipment 1 first searches the picture data set using the wearing feature of the target person and the first filter condition to obtain the first candidate picture set, and only then determines the first picture set from the first candidate picture set according to the face feature of the target person. This not only saves search time, but also allows the target person to be found quickly in complex environments with heavy foot traffic, improving search efficiency.
In at least one embodiment of the present invention, depending on the search result, the first picture set may include one or more pictures; the present invention is not limited thereto.
S14, the electronic equipment 1 obtains a second filter condition.
In at least one embodiment of the present invention, the second filter condition is a filter condition of narrower scope. The scope of the second filter condition may include, but is not limited to: time, place, and the like. Specifically, the second filter condition includes a second specified time period and a second specified region; the second specified time period is contained in the first specified time period of the first filter condition, and the second specified region is contained in the first specified region of the first filter condition.
In this way, after the electronic equipment 1 has obtained the first picture set, the electronic equipment 1 can further narrow the search range according to the second filter condition and obtain a more accurate search result.
S15, the electronic equipment 1 obtains a second picture set from the first picture set according to the second filter condition.
In at least one embodiment of the present invention, the electronic equipment 1 obtains the second picture set from the first picture set according to the second filter condition in, but not limited to, one or a combination of the following ways:
(1) The electronic equipment 1 determines the pictures in the first picture set that fall within the second specified time period of the second filter condition as the second picture set; the pictures in the second picture set carry the second specified time period information, and the first specified time period includes the second specified time period.
For example: if the electronic equipment 1 narrows the search range and sets the second specified time period to 10 to 11 a.m., the electronic equipment 1 determines the pictures in the first picture set that fall within the second specified time period, 10 to 11 a.m., as the second picture set.
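The time-period narrowing above is essentially a window test on each picture's shooting time. A minimal sketch, assuming each picture record carries a `shot_at` datetime (a hypothetical field name):

```python
from datetime import datetime, time

def filter_by_time_window(pictures, start, end):
    """Keep pictures whose shooting time-of-day lies in [start, end].

    pictures: list of dicts with a 'shot_at' datetime.
    start/end: datetime.time bounds of the second specified time period.
    """
    return [p for p in pictures if start <= p["shot_at"].time() <= end]
```

For a 10-to-11 a.m. second specified time period, the window would be `time(10, 0)` to `time(11, 0)`.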
(2) The electronic equipment 1 determines the pictures in the first picture set that fall within the second specified region of the second filter condition as the second picture set; the pictures in the second picture set carry the second specified region information, and the first specified region includes the second specified region.
For example: if the electronic equipment 1 narrows the search range and sets the second specified region to No. X to No. Y of A street, the electronic equipment 1 determines the pictures in the first picture set that fall within the second specified region, No. X to No. Y of A street, as the second picture set.
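The region narrowing can likewise be sketched as a range test, under the assumption (purely for illustration) that a shooting place is stored as a street name plus a house number:

```python
def filter_by_region(pictures, street, lo, hi):
    """Keep pictures whose shooting place is on `street`, house numbers lo..hi.

    pictures: list of dicts with hypothetical 'street' and 'number' fields.
    """
    return [
        p for p in pictures
        if p["street"] == street and lo <= p["number"] <= hi
    ]
```

A "No. X to No. Y of A street" second specified region maps onto `street="A street"` with `lo`/`hi` as the two house numbers.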
In at least one embodiment of the present invention, the electronic equipment 1 may record the time at which the camera device 2 shoots each picture; alternatively, the camera device 2 may stamp the shooting time on each picture as it is shot. The electronic equipment 1 then obtains the shooting time of each picture included in the first picture set, and determines the pictures whose shooting times fall within the second specified time period of the second filter condition as the second picture set.
Through the above embodiments, when the user wants to further restrict the filter conditions, the electronic equipment 1 can apply the concrete restrictions of the second filter condition, such as time and place, directly on the basis of the first picture set. This not only saves search time, but also makes the search more targeted and better matched to the user's search demand.
Certainly, in other embodiments, the second filter condition may also include restrictions on clothing color and the like; the present invention is not limited thereto. For example: if the clothing colors in the first filter condition are red and black, and the clothing color in the second filter condition is red, the electronic equipment 1 can directly pick out the pictures whose clothing color is red from the first picture set and add them to the second picture set, without searching the picture data set again, which both reduces the workload and saves search time.
S16, the electronic equipment 1 determines the event trace of the target person according to the data of the pictures included in the second picture set.
In at least one embodiment of the present invention, determining the event trace of the target person according to the data of the pictures included in the second picture set includes:
The electronic equipment 1 obtains the shooting time and shooting place corresponding to each picture in the second picture set, sorts the pictures of the second picture set based on the shooting time corresponding to each picture, and thereby determines the event trace of the target person.
In at least one embodiment of the present invention, the electronic equipment 1 may sort the pictures of the second picture set according to the shooting time corresponding to each picture. In this way, the electronic equipment 1 obtains the event trace of the target person ordered by time, so that the place where the target person is likely to appear at a future time can be predicted from the chronological sequence.
For example: the electronic equipment 1 may sort the pictures of the second picture set according to the order in which they were shot during the period from 9 to 11 a.m.; the electronic equipment 1 can then determine, in chronological order, the places where the target person appeared successively from 9 to 11 a.m., and thereby determine the event trace of the target person.
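The time-ordered trace described above amounts to sorting by shooting time and reading off the places visited. A minimal sketch, assuming hypothetical `shot_at` (any orderable timestamp) and `place` fields on each picture:

```python
def event_trace_by_time(pictures):
    """Return the places visited, ordered by shooting time.

    Consecutive duplicates are collapsed, since several pictures shot in
    a row at the same place represent one stop on the trace.
    """
    ordered = sorted(pictures, key=lambda p: p["shot_at"])
    trace = []
    for p in ordered:
        if not trace or trace[-1] != p["place"]:
            trace.append(p["place"])
    return trace
```

The resulting list is the chronological event trace; its last entry is the most recent known position, which later steps use for prediction.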
Alternatively, the electronic equipment 1 obtains the shooting time and shooting place corresponding to each picture in the second picture set, classifies the pictures of the second picture set based on the shooting place corresponding to each picture, and thereby determines the event trace of the target person.
In at least one embodiment of the present invention, the electronic equipment 1 may classify the pictures of the second picture set according to the shooting place corresponding to each picture. In this way, the electronic equipment 1 obtains the event trace of the target person with place as the reference, so that the next place where the target person is likely to appear can be predicted from how frequently the target person appears at each place.
For example: the electronic equipment 1 may divide the pictures in the second picture set according to their corresponding shooting places into No. X of A street, No. Y of A street, No. Z of A street, and so on; the electronic equipment 1 can then determine the event trace of the target person from the counts of, and the lines connecting, places such as No. X, No. Y, and No. Z of A street.
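The place-based classification above is a frequency count per shooting place. A minimal sketch, again assuming a hypothetical `place` field; the idea that the most frequent place is the likeliest next appearance is the heuristic the text describes:

```python
from collections import Counter

def place_frequencies(pictures):
    """Count appearances per shooting place, most frequent first.

    The head of the list is a simple guess for where the target person
    is most likely to appear next, per the frequency heuristic.
    """
    counts = Counter(p["place"] for p in pictures)
    return counts.most_common()
```

`most_common()` returns `(place, count)` pairs in descending count order, which maps directly onto the "height of the frequency of occurrence at each place" described above.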
By determining the event trace of the target person by time or by place as described above, the search result can be presented to the user clearly and is easy to understand.
In at least one embodiment of the present invention, the electronic equipment 1 may obtain the shooting time and shooting place corresponding to each picture in the second picture set through the camera device 2 that communicates with the electronic equipment 1. It can be understood that the time and place at which the target person appears can be determined from the shooting time and shooting place corresponding to each picture in the second picture set.
Specifically, the electronic equipment 1 may record the time at which the camera device 2 shoots each picture in the second picture set; alternatively, the camera device 2 may display the shooting time on each picture in the second picture set as it is shot.
Specifically, the electronic equipment 1 may record the shooting place at which the camera device 2 shoots each picture in the second picture set, and determine that shooting place as a place where the target person appeared.
In at least one embodiment of the present invention, the method further includes:
The electronic equipment 1 obtains the last position at which the target person appeared, determines the direction of motion of the target person at the last position, and predicts the zone of action of the target person according to the direction of motion.
Specifically, the electronic equipment 1 may determine the direction of motion of the target person according to the identified body orientation, leg movement, or face orientation of the target person, or according to the heading of the vehicle used by the target person; the present invention is not limited thereto.
For example: when the last position at which the target person appeared, as obtained by the electronic equipment 1, is No. X of A street, and the electronic equipment 1 determines that the direction of motion of the target person at No. X of A street is towards No. Y of A street, the electronic equipment 1 can predict that the zone of action of the target person will extend from No. X of A street to No. Y of A street.
In this way, by predicting the zone of action of the target person, the electronic equipment 1 can accurately grasp the area where the target person is likely to appear, and corresponding preparations can be made in that area in advance to serve the purpose of the search. For example: when the purpose of searching for the target person is to apprehend the target person, arrest preparations can be made in advance by predicting the zone of action of the target person.
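The prediction step above can be sketched as a simple dead-reckoning extrapolation: take the last two known positions, treat their difference as the direction of motion, and extend it one step. Representing positions as grid coordinates is an assumption for illustration; the patent leaves the geometry unspecified.

```python
def motion_direction(prev, last):
    """Direction of motion as the displacement between the last two
    known positions, each an (x, y) coordinate pair."""
    return (last[0] - prev[0], last[1] - prev[1])

def predict_next(last, direction):
    """Extrapolate the next position one step along the direction of
    motion; a set of such positions would form the predicted zone."""
    return (last[0] + direction[0], last[1] + direction[1])
```

In practice the predicted zone would be an area around the extrapolated point rather than a single coordinate, but the one-step extrapolation captures the "from No. X towards No. Y" reasoning in the example.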
In other embodiments, the electronic equipment 1 may also determine the direction of motion of the vehicle used by the target person at the last position, and predict the zone of action of the target person according to the direction of motion of that vehicle. For example: when the purpose of searching for the target person is to track the target person, the electronic equipment 1 can activate the camera devices 2 in the predicted zone of action of the target person and prepare in advance to capture snapshots, so as to achieve accurate tracking.
In at least one embodiment of the present invention, the method further includes:
The electronic equipment 1 obtains, in real time, the pictures captured by the camera devices 2 in the predicted zone of action, identifies the target person from the captured pictures, and tracks the target person. When the target person is a dangerous person, the event trace of the target person and the corresponding predicted zone of action of the target person are sent to at least one user device.
For example: when the target person is a thief, the electronic equipment 1 can send the event trace of the thief and the corresponding predicted zone of action of the thief to the server of the police office and to the terminal devices of the community security personnel along the event trace, so as to prompt the relevant personnel to take precautions in time; and, since the target person is already being tracked, the relevant personnel can also be assisted in apprehending the thief quickly.
In conclusion the present invention can obtain the picture of target person;Identify target described in the picture of the target person
The face characteristic of personage and wear feature;Obtain the first filter condition;It is special according to the face characteristic of the target person and dress
Sign, and first filter condition are concentrated from image data and obtain the first pictures;Obtain the second filter condition;According to described
Two filter conditions obtain second picture collection from first pictures;The data of picture are included according to the second picture collection
Determine the event trace of the target person.Therefore, the present invention can be by multi-filtering condition, and with reference to the target person
Face characteristic and feature is worn, quickly and accurately determine the event trace of the target person, more preferable use is brought to user
Experience.
As shown in Fig. 3, it is a functional block diagram of a preferred embodiment of the person search device of the present invention. The person search device 11 includes an acquiring unit 110, a recognition unit 111, a determination unit 112, a predicting unit 113, a tracking unit 114, and a transmitting unit 115. A module/unit referred to in the present invention is a series of computer program segments that can be executed by the processor 13 to complete a fixed function, and that are stored in the memory 12. In this embodiment, the function of each module/unit will be described in detail in the subsequent embodiments.
The acquiring unit 110 obtains a picture of a target person.
In at least one embodiment of the present invention, the acquiring unit 110 obtains the picture of the target person in, but not limited to, one or a combination of the following ways:
(1) The acquiring unit 110 receives an uploaded picture and takes the uploaded picture as the picture of the target person.
In at least one embodiment of the present invention, the acquiring unit 110 may receive a picture uploaded by the user and take the uploaded picture as the picture of the target person.
In this way, the electronic equipment 1 can carry out the search for the target person according to the demand provided by the user, making the search more targeted. Moreover, under normal conditions, the character features of the target person are displayed more clearly in a picture provided by the user, so when the electronic equipment 1 identifies the face feature and the wearing feature from the picture provided by the user, the recognition result is more accurate and reliable.
(2) The acquiring unit 110 receives a picture selected from the picture data set and takes the selected picture as the picture of the target person.
In at least one embodiment of the present invention, when the user cannot provide a picture of the target person, the user may select a picture from the picture data set; the acquiring unit 110 receives the selected picture and determines it as the picture of the target person.
In this way, even when the user cannot provide a picture of the target person as the basis for identification, the electronic equipment 1 can still identify the face feature and the wearing feature of the target person from the picture selected from the picture data set, so that the search for the target person proceeds smoothly and is more flexible.
In at least one embodiment of the present invention, the picture data set may be a prestored picture data set, or a picture data set captured in real time by the camera device 2 that communicates with the electronic equipment 1; the present invention is not limited thereto.
In at least one embodiment of the present invention, to enrich the data and facilitate more accurate searching, the picture data set may store pictures of persons, pictures of animals, pictures of scenery, and the like; the present invention is not limited thereto.
The recognition unit 111 identifies the face feature and the wearing feature of the target person in the picture of the target person.
In at least one embodiment of the present invention, the recognition unit 111 may identify the face feature of the target person in many ways; the present invention is not limited thereto. Since face recognition technology is relatively mature and belongs to the prior art, details are not described herein.
In at least one embodiment of the present invention, identifying the wearing feature of the target person in the picture of the target person by the recognition unit 111 includes:
The recognition unit 111 determines a clothes region from the picture of the target person and extracts the features of the clothing from the clothes region; the features of the clothing include the style, color, and color ratio of the clothes. The recognition unit 111 matches the extracted features against the corresponding features in a garment feature model trained in advance, to determine the wearing feature of the target person including the style, color, and color ratio information.
In at least one embodiment of the present invention, the garment feature model trained in advance may cover many categories. Specifically, the categories of the garment feature model trained in advance may include, but are not limited to: Korean style, Japanese style, European style, ethnic style, Mori-girl style, fresh literary style, and the like. It should be noted that the garment feature model of each category includes characteristic data of wearing features such as the style, color, and color ratio of a person's dress.
In this way, the recognition unit 111 can first determine the category of the pretrained garment feature model to which the picture of the target person belongs, and then match the extracted features against the features in the garment feature model of the corresponding category to obtain the matched features. Because the scope of the search is reduced, identifying the wearing feature of the target person in the picture becomes more efficient.
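The category-first matching described above can be sketched as picking the best-scoring category, then reporting the match within that category's model only. The feature dictionaries and the equality-based scoring are illustrative assumptions; a real garment feature model would score with learned similarities rather than exact matches.

```python
def match_attire(features, models):
    """Pick the garment-model category that best fits the extracted
    features, then return that category and its within-category score.

    features: dict of extracted clothing attributes, e.g. {"style": ...}.
    models: category name -> dict of reference attributes.
    """
    def score(model):
        # Count how many extracted attributes agree with the model.
        return sum(1 for k, v in features.items() if model.get(k) == v)

    best_category = max(models, key=lambda c: score(models[c]))
    return best_category, score(models[best_category])
```

The efficiency benefit mirrors the text: once the category is fixed, the detailed comparison runs against one category's model instead of all of them.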
The acquiring unit 110 obtains a first filter condition.
In at least one embodiment of the present invention, the scope of the first filter condition may include, but is not limited to: time, place, clothes style, clothing color, clothing color ratio, and the like.
It should be noted that the first filter condition is a filter condition of relatively wide scope; searching with the first filter condition yields results of wide coverage, avoiding a search that is too restricted, and thus incomplete, because the filter condition is too narrow.
The acquiring unit 110 obtains a first picture set from the picture data set according to the face feature and the wearing feature of the target person, and the first filter condition.
In at least one embodiment of the present invention, the acquiring unit 110 obtains the first picture set from the picture data set according to the face feature and the wearing feature of the target person and the first filter condition in, but not limited to, one or a combination of the following ways:
(1) The acquiring unit 110 searches the picture data set according to the first specified time period included in the first filter condition and determines a first candidate picture set; the pictures in the first candidate picture set carry the first specified time period information. According to the face feature of the target person, the first picture set is determined from the first candidate picture set; every face in the first picture set has the face feature.
In at least one embodiment of the present invention, the first specified time period may be set by the user. Specifically, the user may set it according to the actual search demand, according to the known living habits of the target person, or according to empirical search time values known in the search field; the present invention is not limited thereto.
For example: when the user wants to know the event trace of the target person in the morning, the user may set the first specified time period to 8 a.m. to 12 noon. In this way, the acquiring unit 110 first searches the picture data set using the first specified time period, 8 a.m. to 12 noon, and determines a first candidate picture set; the acquiring unit 110 then determines the first picture set from the first candidate picture set according to the face feature of the target person.
(2) The acquiring unit 110 searches the picture data set according to the wearing feature of the target person and the first specified clothing color included in the first filter condition, and determines a first candidate picture set; the pictures in the first candidate picture set carry the first specified clothing color information. According to the face feature of the target person, the first picture set is determined from the first candidate picture set; every face in the first picture set has the face feature.
In at least one embodiment of the present invention, the first specified clothing color may be set by the user, or may be set by the acquiring unit 110 according to the clothing color in the identified wearing feature of the target person; the present invention is not limited thereto.
For example: when the clothing colors in the wearing feature of the target person identified by the recognition unit 111 are red and black, the acquiring unit 110 may set the first specified clothing colors to red and black. In this way, the acquiring unit 110 first searches the picture data set using the wearing feature of the target person and the first specified clothing colors, red and black, and determines a first candidate picture set; the acquiring unit 110 then determines the first picture set from the first candidate picture set according to the face feature of the target person.
(3) The acquiring unit 110 searches the picture data set according to the wearing feature of the target person and the first specified clothing color ratio included in the first filter condition, and determines a first candidate picture set; the pictures in the first candidate picture set carry the first specified clothing color ratio information. According to the face feature of the target person, the first picture set is determined from the first candidate picture set; every face in the first picture set has the face feature.
In at least one embodiment of the present invention, the first specified clothing color ratio may be set by the user, or may be set by the acquiring unit 110 according to the clothing color ratio in the identified wearing feature of the target person; the present invention is not limited thereto.
For example: when the clothing color ratio in the wearing feature of the target person identified by the recognition unit 111 is red:black = 1:1, the acquiring unit 110 may set the first specified clothing color ratio to red:black = 1:1. In this way, the acquiring unit 110 first searches the picture data set using the wearing feature of the target person and the first specified clothing color ratio, red:black = 1:1, and determines a first candidate picture set; the acquiring unit 110 then determines the first picture set from the first candidate picture set according to the face feature of the target person.
(4) The acquiring unit 110 searches the picture data set according to the wearing feature of the target person and the first specified clothes style included in the first filter condition, and determines a first candidate picture set; the pictures in the first candidate picture set carry the first specified clothes style information. According to the face feature of the target person, the first picture set is determined from the first candidate picture set; every face in the first picture set has the face feature.
In at least one embodiment of the present invention, the first specified clothes style may be set by the user, or may be set by the acquiring unit 110 according to the clothes style in the identified wearing feature of the target person; the present invention is not limited thereto.
For example: when the clothes style in the wearing feature of the target person identified by the recognition unit 111 is Korean style, the acquiring unit 110 may set the first specified clothes style to Korean style. In this way, the acquiring unit 110 first searches the picture data set using the wearing feature of the target person and the first specified clothes style, Korean style, and determines a first candidate picture set; the acquiring unit 110 then determines the first picture set from the first candidate picture set according to the face feature of the target person.
(5) The acquiring unit 110 searches the picture data set according to the first specified region included in the first filter condition and determines a first candidate picture set; the pictures in the first candidate picture set carry the first specified region information. According to the face feature of the target person, the first picture set is determined from the first candidate picture set; every face in the first picture set has the face feature.
In at least one embodiment of the present invention, the first specified region may be set by the user. Specifically, the user may set it according to the actual search demand, according to the known living habits of the target person, or according to accumulated experience in the search field; the present invention is not limited thereto.
For example: when the first specified region set by the user and received by the acquiring unit 110 is A street, the acquiring unit 110 first searches the picture data set using the first specified region, A street, and determines a first candidate picture set; the acquiring unit 110 then determines the first picture set from the first candidate picture set according to the face feature of the target person.
In at least one embodiment of the present invention, because face recognition is comparatively fine-grained and computationally involved and therefore takes more time, in this embodiment the acquiring unit 110 first searches the picture data set using the wearing feature of the target person and the first filter condition to obtain the first candidate picture set, and only then determines the first picture set from the first candidate picture set according to the face feature of the target person. This not only saves search time, but also allows the target person to be found quickly in complex environments with heavy foot traffic, improving search efficiency.
In at least one embodiment of the present invention, depending on the search result, the first picture set may include one or more pictures; the present invention is not limited thereto.
The acquiring unit 110 obtains a second filter condition.
In at least one embodiment of the present invention, the second filter condition is a filter condition of narrower scope, which may include, but is not limited to, a time and a place. Specifically, the second filter condition includes a second specified time period and a second specified region, where the second specified time period is contained in the first specified time period of the first filter condition, and the second specified region is contained in the first specified region of the first filter condition.
In this way, after the acquiring unit 110 has obtained the first picture set, it can further narrow the search range according to the second filter condition and obtain a more accurate search result.
The acquiring unit 110 obtains a second picture set from the first picture set according to the second filter condition.
In at least one embodiment of the present invention, the acquiring unit 110 obtains the second picture set from the first picture set in one or a combination of the following ways:
(1) The acquiring unit 110 determines, as the second picture set, the pictures in the first picture set that fall within the second specified time period of the second filter condition; each picture in the second picture set carries information of the second specified time period, and the first specified time period contains the second specified time period.
For example, if the acquiring unit 110 narrows the search range by setting the second specified time period to 10:00–11:00 a.m., the acquiring unit 110 determines as the second picture set those pictures in the first picture set taken between 10:00 and 11:00 a.m.
(2) The acquiring unit 110 determines, as the second picture set, the pictures in the first picture set that fall within the second specified region of the second filter condition; each picture in the second picture set carries information of the second specified region, and the first specified region contains the second specified region.
For example, if the acquiring unit 110 narrows the search range by setting the second specified region to Nos. X–Y of A street, the acquiring unit 110 determines as the second picture set those pictures in the first picture set taken at Nos. X–Y of A street.
In at least one embodiment of the present invention, the electronic device 1 may record the time at which the camera device 2 shoots each picture, or the camera device 2 may stamp the shooting time on each picture as it is shot. The electronic device 1 obtains the shooting time of each picture in the first picture set, and determines as the second picture set those pictures whose shooting time falls within the second specified time period of the second filter condition.
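The narrowing steps (1) and (2) above can be sketched together as a single refinement over the first picture set. The `shot_at` and `region` fields and the example timestamps are illustrative assumptions:

```python
from datetime import datetime

def second_picture_set(first_set, start=None, end=None, region=None):
    # Narrow the first picture set by the second (stricter) filter condition.
    # Either constraint may be omitted; each picture is assumed to carry its
    # shooting time and shooting region (field names are illustrative).
    result = first_set
    if start is not None and end is not None:
        result = [p for p in result if start <= p["shot_at"] <= end]
    if region is not None:
        result = [p for p in result if p["region"] == region]
    return result

first_set = [
    {"id": 1, "shot_at": datetime(2017, 12, 1, 10, 15), "region": "A street No. 5"},
    {"id": 2, "shot_at": datetime(2017, 12, 1, 12, 30), "region": "A street No. 5"},
    {"id": 3, "shot_at": datetime(2017, 12, 1, 10, 40), "region": "B street No. 2"},
]
# Keep only pictures shot between 10:00 and 11:00 a.m.
narrowed = second_picture_set(first_set,
                              start=datetime(2017, 12, 1, 10, 0),
                              end=datetime(2017, 12, 1, 11, 0))
```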
Through the above embodiments, when the user wants to further restrict the filter condition, the acquiring unit 110 can apply specific restrictions on time, place and the like directly on the basis of the first picture set via the second filter condition, which saves search time, makes the search more targeted, and better matches the user's search demand.
Of course, in other embodiments, the second filter condition may also include restrictions on clothing color and the like; the present invention is not limited in this respect. For example, if the clothing colors in the first filter condition are red and black, and the clothing color in the second filter condition is red, the acquiring unit 110 can directly pick out the pictures whose clothing color is red from the first picture set and add them to the second picture set, without searching the picture data set again, which both reduces the workload and saves search time.
The determination unit 112 determines the activity trajectory of the target person according to the data of the pictures in the second picture set.
In at least one embodiment of the present invention, the determination unit 112 determines the activity trajectory of the target person as follows:
The determination unit 112 obtains the shooting time and shooting location of each picture in the second picture set, sorts the pictures of the second picture set by their shooting times, and thereby determines the activity trajectory of the target person.
In at least one embodiment of the present invention, the determination unit 112 may sort the pictures of the second picture set by the shooting time of each picture; in this way, the determination unit 112 obtains the activity trajectory of the target person in chronological order, so that the place where the target person is likely to appear at a future time can be predicted.
For example, the determination unit 112 may sort the pictures of the second picture set by their shooting order within the period from 9:00 to 11:00 a.m.; the determination unit 112 can then determine, in chronological order, the places where the target person appeared between 9:00 and 11:00 a.m., and thereby determine the activity trajectory of the target person.
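The time-ordered trajectory described above reduces to a sort over the shooting timestamps. A minimal sketch, assuming each picture carries illustrative `shot_at` and `location` fields:

```python
from datetime import datetime

def trajectory_by_time(second_set):
    # Order the second picture set by shooting time; the resulting
    # sequence of shooting locations is the activity trajectory.
    ordered = sorted(second_set, key=lambda p: p["shot_at"])
    return [p["location"] for p in ordered]

second_set = [
    {"location": "A street No. 9", "shot_at": datetime(2017, 12, 1, 10, 30)},
    {"location": "A street No. 3", "shot_at": datetime(2017, 12, 1, 9, 5)},
    {"location": "A street No. 6", "shot_at": datetime(2017, 12, 1, 9, 50)},
]
trajectory = trajectory_by_time(second_set)
```

The last element of the returned sequence is the most recent sighting, which is the natural input for predicting where the target person may appear next.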
Alternatively, the determination unit 112 obtains the shooting time and shooting location of each picture in the second picture set, classifies the pictures of the second picture set by their shooting locations, and thereby determines the activity trajectory of the target person.
In at least one embodiment of the present invention, the determination unit 112 may classify the pictures of the second picture set by the shooting location of each picture; in this way, the determination unit 112 obtains a location-based activity trajectory of the target person, so that the next place where the target person is likely to appear can be predicted from how frequently the target person appears at each place.
For example, the determination unit 112 may divide the pictures of the second picture set by shooting location into A street X, A street Y, A street Z and so on; the determination unit 112 can then determine the activity trajectory of the target person from the line connecting the places A street X, A street Y, A street Z and so on.
By determining the activity trajectory of the target person by time or by place as described above, the search result can be presented to the user clearly and intelligibly.
In at least one embodiment of the present invention, the determination unit 112 may obtain the shooting time and shooting location of each picture in the second picture set from the camera device 2 in communication with the electronic device 1. It will be appreciated that the time and place at which the target person appeared can be determined from the shooting time and shooting location of each picture in the second picture set.
Specifically, the determination unit 112 may record the time at which the camera device 2 shoots each picture in the second picture set, or the camera device 2 may display the shooting time on each picture of the second picture set as it is shot.
Specifically, the determination unit 112 may record the shooting location at which the camera device 2 shoots each picture in the second picture set, and determine the shooting location as a place where the target person appeared.
In at least one embodiment of the present invention, the method further includes:
The acquiring unit 110 obtains the last position at which the target person appeared; the determination unit 112 determines the direction of motion of the target person at the last position; and the predicting unit 113 predicts the activity area of the target person according to the direction of motion of the target person.
Specifically, the electronic device 1 may determine the direction of motion of the target person from the identified body orientation, leg movement or face orientation of the target person, or from the heading of the vehicle the target person is travelling in; the present invention is not limited in this respect.
For example, when the acquiring unit 110 determines that the last position at which the target person appeared is A street X, and the determination unit 112 determines that the direction of motion of the target person at A street X is toward A street Y, the predicting unit 113 can predict that the activity area of the target person runs from A street X toward A street Y.
In this way, by predicting the activity area of the target person, the predicting unit 113 makes it possible to accurately grasp the region where the target person is likely to appear and to make corresponding preparations in that region in advance, in keeping with the purpose of the search. For example, when the purpose of the search is to arrest the target person, arrest preparations can be made in advance in the predicted activity area.
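The prediction step above can be sketched as a lookup from the last observed position and direction of motion into map data. The `adjacency` table is an illustrative stand-in for real map or road-network data:

```python
def predict_activity_area(last_position, direction, adjacency):
    # Given the last observed position and the direction of motion,
    # predict the segment the target person is likely to move through.
    # `adjacency` maps (position, direction) to the next location and
    # stands in for real map data.
    nxt = adjacency.get((last_position, direction))
    if nxt is None:
        # No map knowledge for this heading: fall back to the last position.
        return [last_position]
    return [last_position, nxt]

adjacency = {("A street X", "toward A street Y"): "A street Y"}
area = predict_activity_area("A street X", "toward A street Y", adjacency)
```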
In other embodiments, the predicting unit 113 may also determine the direction of motion of the target person's vehicle at the last position, and predict the activity area of the target person according to the direction of motion of that vehicle. For example, when the purpose of the search is to track the target person, the predicting unit 113 can activate the camera devices 2 in the predicted activity area of the target person and prepare in advance to capture pictures, so as to achieve accurate tracking.
In at least one embodiment of the present invention, the method further includes:
The acquiring unit 110 obtains, in real time, the pictures captured by the camera devices 2 in the predicted activity area; the recognition unit 111 identifies the target person from the captured pictures; and the tracking unit 114 tracks the target person. When the target person is a dangerous person, the transmitting unit 115 sends the activity trajectory of the target person and the corresponding predicted activity area to at least one user device.
For example, when the target person is a thief, the transmitting unit 115 can send the activity trajectory of the thief and the corresponding predicted activity area to the server of the police station and to the terminal devices of the security guards of the communities along the activity trajectory, to prompt the relevant personnel to take precautions in time; and, since the target person is being tracked, the relevant personnel can also be assisted in quickly arresting the thief.
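The alerting step above can be sketched as assembling one payload per recipient device. The payload fields, target name and device names are illustrative assumptions; the transport (push, HTTP, etc.) is out of scope:

```python
def build_alerts(target_name, trajectory, predicted_area, recipients):
    # Pair each user device with the alert payload containing the
    # target person's activity trajectory and predicted activity area.
    payload = {
        "target": target_name,
        "trajectory": trajectory,
        "predicted_area": predicted_area,
    }
    return [(device, payload) for device in recipients]

alerts = build_alerts(
    "suspect-001",
    ["A street X", "A street Y"],
    ["A street Y", "A street Z"],
    ["police-server", "community-guard-terminal"],
)
```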
In conclusion the present invention can obtain the picture of target person;Identify target described in the picture of the target person
The face characteristic of personage and wear feature;Obtain the first filter condition;It is special according to the face characteristic of the target person and dress
Sign, and first filter condition are concentrated from image data and obtain the first pictures;Obtain the second filter condition;According to described
Two filter conditions obtain second picture collection from first pictures;The data of picture are included according to the second picture collection
Determine the event trace of the target person.Therefore, the present invention can be by multi-filtering condition, and with reference to the target person
Face characteristic and feature is worn, quickly and accurately determine the event trace of the target person, more preferable use is brought to user
Experience.
Fig. 4 is a structural diagram of an electronic device of a preferred embodiment implementing the person search method of the present invention.
The electronic device 1 is a device capable of automatically performing numerical computation and/or information processing according to instructions set or stored in advance; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and so on.
The electronic device 1 may also be, but is not limited to, any electronic product capable of human-computer interaction with a user via a keyboard, mouse, remote control, touch panel or voice control device, for example a personal computer, tablet computer, smartphone, personal digital assistant (PDA), game console, Internet Protocol television (IPTV), or smart wearable device.
The electronic device 1 may also be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server.
The network in which the electronic device 1 resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and so on.
In an embodiment of the present invention, the electronic device 1 includes, but is not limited to, a memory 12, a processor 13, and a computer program, such as a person search program, stored in the memory 12 and executable on the processor 13.
Those skilled in the art will understand that the schematic diagram is only an example of the electronic device 1 and does not limit the electronic device 1, which may include more or fewer components than illustrated, combine certain components, or use different components; for example, the electronic device 1 may also include input/output devices, network access devices and buses.
The processor 13 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 13 is the computing core and control center of the electronic device 1; it connects the various parts of the whole electronic device 1 through various interfaces and lines, and executes the operating system of the electronic device 1, the installed applications, the program code, and so on.
The processor 13 executes the operating system of the electronic device 1 and the installed applications. The processor 13 executes the applications to implement the steps in the above-described person search method embodiments, such as steps S10, S11, S12, S13, S14, S15 and S16 shown in Fig. 1.
Alternatively, when executing the computer program, the processor 13 implements the functions of the modules/units in the above-described device embodiments, such as: obtaining a picture of a target person; identifying the face feature and clothing feature of the target person in the picture; obtaining a first filter condition; obtaining a first picture set from a picture data set according to the face feature and clothing feature of the target person and the first filter condition; obtaining a second filter condition; obtaining a second picture set from the first picture set according to the second filter condition; and determining the activity trajectory of the target person according to the data of the pictures in the second picture set.
Exemplarily, the computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of accomplishing particular functions, the instruction segments describing the execution of the computer program in the electronic device 1. For example, the computer program may be divided into the acquiring unit 110, the recognition unit 111, the determination unit 112, the predicting unit 113, the tracking unit 114 and the transmitting unit 115.
The memory 12 may be used to store the computer program and/or modules; the processor 13 implements the various functions of the electronic device 1 by running or executing the computer program and/or modules stored in the memory 12 and by invoking the data stored in the memory 12. The memory 12 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the applications required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created through use of the device (such as audio data or a phone book). In addition, the memory 12 may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, memory, plug-in hard disk, smart media card (SMC), secure digital (SD) card, flash card, at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The memory 12 may be an external memory and/or an internal memory of the electronic device 1. Further, the memory 12 may be a circuit with a storage function that has no physical form within an integrated circuit, such as a RAM (random-access memory) or a FIFO (first in, first out) buffer. Alternatively, the memory 12 may be a memory with a physical form, such as a memory stick or a TF card (Trans-flash Card).
If the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the above-described method embodiments by instructing relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above-described method embodiments can be implemented.
The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, and so on. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
With reference to Fig. 2, the memory 12 in the electronic device 1 stores a plurality of instructions to implement a person search method, and the processor 13 can execute the plurality of instructions so as to: obtain a picture of a target person; identify the face feature and clothing feature of the target person in the picture; obtain a first filter condition; obtain a first picture set from a picture data set according to the face feature and clothing feature of the target person and the first filter condition; obtain a second filter condition; obtain a second picture set from the first picture set according to the second filter condition; and determine the activity trajectory of the target person according to the data of the pictures in the second picture set.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
determining a clothing region from the picture of the target person, and extracting features of the clothes in the clothing region, the features of the clothes including the style, color and color ratio of the clothes; and matching the extracted features against corresponding features in a pre-trained garment feature model to determine the clothing feature of the target person, the clothing feature including style, color and color-ratio information.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
searching the picture data set according to a first specified time period included in the first filter condition and determining a first candidate picture set, each picture in the first candidate picture set carrying information of the first specified time period; and determining the first picture set from the first candidate picture set according to the face feature of the target person, each face in the first picture set having the face feature; and/or
searching the picture data set according to the clothing feature of the target person and a first specified clothing color included in the first filter condition and determining a first candidate picture set, each picture in the first candidate picture set carrying information of the first specified clothing color; and determining the first picture set from the first candidate picture set according to the face feature of the target person, each face in the first picture set having the face feature; and/or
searching the picture data set according to the clothing feature of the target person and a first specified clothing color ratio included in the first filter condition and determining a first candidate picture set, each picture in the first candidate picture set carrying information of the first specified clothing color ratio; and determining the first picture set from the first candidate picture set according to the face feature of the target person, each face in the first picture set having the face feature; and/or
searching the picture data set according to the clothing feature of the target person and a first specified clothing style included in the first filter condition and determining a first candidate picture set, each picture in the first candidate picture set carrying information of the first specified clothing style; and determining the first picture set from the first candidate picture set according to the face feature of the target person, each face in the first picture set having the face feature; and/or
searching the picture data set according to a first specified region included in the first filter condition and determining a first candidate picture set, each picture in the first candidate picture set carrying information of the first specified region; and determining the first picture set from the first candidate picture set according to the face feature of the target person, each face in the first picture set having the face feature.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
determining, as the second picture set, the pictures in the first picture set that fall within a second specified time period of the second filter condition, each picture in the second picture set carrying information of the second specified time period, the first specified time period containing the second specified time period; and/or
determining, as the second picture set, the pictures in the first picture set that fall within a second specified region of the second filter condition, each picture in the second picture set carrying information of the second specified region, the first specified region containing the second specified region.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
obtaining the shooting time and shooting location of each picture in the second picture set; and
sorting the pictures of the second picture set by the shooting time of each picture to determine the activity trajectory of the target person; or
classifying the pictures of the second picture set by the shooting location of each picture to determine the activity trajectory of the target person.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
obtaining the last position at which the target person appeared;
determining the direction of motion of the target person at the last position; and
predicting the activity area of the target person according to the direction of motion of the target person.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
obtaining, in real time, the pictures captured by the camera devices in the predicted activity area;
identifying the target person from the captured pictures and tracking the target person; and
when the target person is a dangerous person, sending the activity trajectory of the target person and the corresponding predicted activity area to at least one user device.
Specifically, for the concrete implementation of the above instructions by the processor 13, reference may be made to the description of the relevant steps in the embodiment corresponding to Fig. 2, which is not repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are only schematic; the division of the modules is only a division by logical function, and there may be other divisions in actual implementation.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated in one processing unit, may exist physically as separate units, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from the spirit or essential attributes of the present invention.
Therefore, the embodiments are to be considered in all respects as illustrative and not restrictive; the scope of the invention is indicated by the appended claims rather than by the foregoing description, and all changes falling within the meaning and range of equivalency of the claims are therefore intended to be embraced in the present invention. No reference sign in the claims shall be construed as limiting the claim concerned.
Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices recited in the system claims may also be implemented by one unit or device through software or hardware. Words such as "second" are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are intended only to illustrate, and not to limit, the technical solution of the present invention; although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art will understand that the technical solution of the present invention may be modified or equivalently substituted without departing from the spirit and scope of the technical solution of the present invention.
Claims (10)
- 1. A person search method, characterized in that the method includes: obtaining a picture of a target person; identifying a face feature and a clothing feature of the target person in the picture of the target person; obtaining a first filter condition; obtaining a first picture set from a picture data set according to the face feature and clothing feature of the target person and the first filter condition; obtaining a second filter condition; obtaining a second picture set from the first picture set according to the second filter condition; and determining an activity trajectory of the target person according to the data of the pictures in the second picture set.
- 2. The person search method of claim 1, characterized in that identifying the clothing feature of the target person in the picture of the target person includes: determining a clothing region from the picture of the target person, and extracting features of the clothes in the clothing region, the features of the clothes including the style, color and color ratio of the clothes; and matching the extracted features against corresponding features in a pre-trained garment feature model to determine the clothing feature of the target person, the clothing feature including style, color and color-ratio information.
- 3. The people search method of claim 1, wherein obtaining the first picture set from the picture data set according to the facial features and wearing features of the target person and the first filter condition comprises one or more of the following combinations:
searching the picture data set according to a first specified time segment included in the first filter condition to determine a first candidate picture set, the pictures in the first candidate picture set carrying the first specified time segment information, and determining the first picture set from the first candidate picture set according to the facial features of the target person, each face in the first picture set having the facial features; and/or
searching the picture data set according to a first specified clothing color included in the wearing features of the target person and the first filter condition to determine a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing color information, and determining the first picture set from the first candidate picture set according to the facial features of the target person, each face in the first picture set having the facial features; and/or
searching the picture data set according to a first specified clothing color ratio included in the wearing features of the target person and the first filter condition to determine a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing color ratio information, and determining the first picture set from the first candidate picture set according to the facial features of the target person, each face in the first picture set having the facial features; and/or
searching the picture data set according to a first specified clothing style included in the wearing features of the target person and the first filter condition to determine a first candidate picture set, the pictures in the first candidate picture set carrying the first specified clothing style information, and determining the first picture set from the first candidate picture set according to the facial features of the target person, each face in the first picture set having the facial features; and/or
searching the picture data set according to a first specified region included in the first filter condition to determine a first candidate picture set, the pictures in the first candidate picture set carrying the first specified region information, and determining the first picture set from the first candidate picture set according to the facial features of the target person, each face in the first picture set having the facial features.
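Each branch of claim 3 follows the same two-stage shape: pre-filter on one metadata condition to form a candidate set, then keep only candidates containing the target's face. A minimal sketch, assuming each picture is a dict with `time` and `faces` metadata and with `face_matches` standing in for a real face-recognition comparison:

```python
def first_filter(dataset, time_range, target_face, face_matches):
    """Two-stage filter of claim 3 (time-segment branch): metadata pre-filter,
    then face-feature check on the surviving candidates."""
    start, end = time_range
    # Stage 1: candidate set from the first specified time segment.
    candidates = [p for p in dataset if start <= p["time"] <= end]
    # Stage 2: keep pictures in which some face matches the target's features.
    return [p for p in candidates
            if any(face_matches(f, target_face) for f in p["faces"])]
```

The clothing-color, color-ratio, clothing-style, and region branches differ only in the stage-1 predicate.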
- 4. The people search method of claim 1, wherein obtaining the second picture set from the first picture set according to the second filter condition comprises one or more of the following combinations:
determining, as the second picture set, the pictures in the first picture set that satisfy a second specified time segment of the second filter condition, the pictures in the second picture set carrying the second specified time segment information, the first specified time segment containing the second specified time segment; and/or
determining, as the second picture set, the pictures in the first picture set that satisfy a second specified region of the second filter condition, the pictures in the second picture set carrying the second specified region information, the first specified region containing the second specified region.
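Claim 4's second pass only narrows the first picture set: the second time segment lies inside the first, and the second region inside the first. A sketch under the same assumed picture-dict metadata as above:

```python
def second_filter(first_set, time_range=None, region=None):
    """Narrow the first picture set by a tighter time segment and/or a
    sub-region, per claim 4. Either condition may be omitted."""
    result = first_set
    if time_range is not None:
        start, end = time_range
        result = [p for p in result if start <= p["time"] <= end]
    if region is not None:
        result = [p for p in result if p["region"] == region]
    return result
```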
- 5. The people search method of claim 1, wherein determining the activity trajectory of the target person according to the data of the pictures in the second picture set comprises:
obtaining the shooting time and shooting location corresponding to each picture in the second picture set; and
sorting the pictures of the second picture set by their corresponding shooting times to determine the activity trajectory of the target person; or
classifying the pictures of the second picture set by their corresponding shooting locations to determine the activity trajectory of the target person.
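Both alternatives of claim 5 can be sketched directly on the assumed `(time, place)` metadata: sort chronologically to get an ordered trajectory, or group by shooting location.

```python
from itertools import groupby

def trajectory_by_time(pictures):
    """Chronological list of places the target appeared at (claim 5, sort branch)."""
    return [p["place"] for p in sorted(pictures, key=lambda p: p["time"])]

def pictures_by_place(pictures):
    """Pictures grouped by shooting location (claim 5, classify branch)."""
    ordered = sorted(pictures, key=lambda p: p["place"])  # groupby needs sorted input
    return {place: list(group)
            for place, group in groupby(ordered, key=lambda p: p["place"])}
```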
- 6. The people search method of claim 5, wherein the method further comprises:
obtaining the last position at which the target person appeared;
determining the direction of motion of the target person at the last position; and
predicting the activity zone of the target person according to the direction of motion of the target person.
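The patent does not specify how the activity zone is predicted from the direction of motion; one simple assumed realisation is to take the heading from the last two trajectory points and project a circular search zone some distance ahead of the last position.

```python
import math

def predict_zone(trajectory, lookahead=100.0, radius=50.0):
    """trajectory: [(x, y), ...] in order of appearance.
    Returns (centre, radius) of a predicted circular activity zone placed
    `lookahead` units ahead of the last position along the motion direction."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy) or 1.0  # avoid division by zero when stationary
    centre = (x1 + lookahead * dx / norm, y1 + lookahead * dy / norm)
    return centre, radius
```

The `lookahead` and `radius` parameters are illustrative tuning knobs, not values from the patent.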
- 7. The people search method of claim 6, wherein the method further comprises:
obtaining, in real time, the pictures captured by the camera devices within the predicted activity zone;
identifying the target person in the captured pictures and tracking the target person; and
when the target person is a dangerous person, sending the activity trajectory of the target person and the corresponding predicted activity zone to at least one user device.
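A sketch of claim 7's tracking-and-alert loop. All names are assumptions: the membership test on `frame["faces"]` stands in for a real face-recognition comparison, and `notify` stands in for delivery to a user device.

```python
def track_and_alert(frames, target_face, is_dangerous, notify, trajectory, zone):
    """Scan captured frames from cameras in the predicted zone; extend the
    trajectory on each sighting and, if the target is flagged dangerous,
    push the trajectory and zone to user devices. Returns alerts sent."""
    alerts = 0
    for frame in frames:
        if target_face in frame["faces"]:      # stand-in for face recognition
            trajectory.append(frame["place"])  # keep tracking the target
            if is_dangerous:
                notify({"trajectory": list(trajectory), "zone": zone})
                alerts += 1
    return alerts
```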
- 8. A people search device, wherein the device comprises:
an acquiring unit, configured to obtain a picture of a target person;
a recognition unit, configured to identify the facial features and wearing features of the target person in the picture of the target person;
the acquiring unit being further configured to obtain a first filter condition;
the acquiring unit being further configured to obtain a first picture set from a picture data set according to the facial features and wearing features of the target person and the first filter condition;
the acquiring unit being further configured to obtain a second filter condition;
the acquiring unit being further configured to obtain a second picture set from the first picture set according to the second filter condition; and
a determination unit, configured to determine the activity trajectory of the target person according to the data of the pictures in the second picture set.
- 9. An electronic device, wherein the electronic device comprises:
a memory storing at least one instruction; and
a processor executing the instruction stored in the memory to implement the people search method according to any one of claims 1 to 7.
- 10. A computer-readable storage medium, wherein at least one instruction is stored on the computer-readable storage medium, the at least one instruction being executed by a processor in an electronic device to implement the people search method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711310435.6A CN107992591A (en) | 2017-12-11 | 2017-12-11 | People search method and device, electronic equipment and computer-readable recording medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711310435.6A CN107992591A (en) | 2017-12-11 | 2017-12-11 | People search method and device, electronic equipment and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107992591A true CN107992591A (en) | 2018-05-04 |
Family
ID=62035781
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711310435.6A Pending CN107992591A (en) | 2017-12-11 | 2017-12-11 | People search method and device, electronic equipment and computer-readable recording medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107992591A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108733819A (en) * | 2018-05-22 | 2018-11-02 | 深圳云天励飞技术有限公司 | A kind of personnel's archives method for building up and device |
CN109829418A (en) * | 2019-01-28 | 2019-05-31 | 北京影谱科技股份有限公司 | A kind of punch card method based on figure viewed from behind feature, device and system |
CN110647642A (en) * | 2019-09-25 | 2020-01-03 | 上海依图网络科技有限公司 | Retrieval system and method for improving performance through structured information pre-filtering control library |
CN110874421A (en) * | 2018-08-28 | 2020-03-10 | 深圳云天励飞技术有限公司 | Target analysis method and device, electronic equipment and storage medium |
CN110990609A (en) * | 2019-12-13 | 2020-04-10 | 云粒智慧科技有限公司 | Searching method, searching device, electronic equipment and storage medium |
CN111798341A (en) * | 2020-06-30 | 2020-10-20 | 深圳市幸福人居建筑科技有限公司 | Green property management method, system computer equipment and storage medium thereof |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102204814A (en) * | 2010-03-30 | 2011-10-05 | 索尼公司 | Information processing device, image output method, and program |
CN102469303A (en) * | 2010-11-12 | 2012-05-23 | 索尼公司 | Video surveillance |
CN102982323A (en) * | 2012-12-19 | 2013-03-20 | 重庆信科设计有限公司 | Quick gait recognition method |
CN103713316A (en) * | 2012-10-09 | 2014-04-09 | 中国石油天然气股份有限公司 | Speed prediction method and apparatus based on rock hole digital characterization |
CN103886506A (en) * | 2012-12-20 | 2014-06-25 | 联想(北京)有限公司 | Information processing method and electronic device |
CN104091148A (en) * | 2014-06-16 | 2014-10-08 | 联想(北京)有限公司 | Facial feature point positioning method and device |
CN106295618A (en) * | 2016-08-26 | 2017-01-04 | 亨特瑞(昆山)新材料科技有限公司 | A kind of personal identification method and device based on video image |
CN107358146A (en) * | 2017-05-22 | 2017-11-17 | 深圳云天励飞技术有限公司 | Method for processing video frequency, device and storage medium |
CN107392162A (en) * | 2017-07-27 | 2017-11-24 | 京东方科技集团股份有限公司 | Dangerous person's recognition methods and device |
CN107392138A (en) * | 2017-07-18 | 2017-11-24 | 上海与德科技有限公司 | A kind of display methods and device |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108733819A (en) * | 2018-05-22 | 2018-11-02 | 深圳云天励飞技术有限公司 | A kind of personnel's archives method for building up and device |
CN108733819B (en) * | 2018-05-22 | 2021-07-06 | 深圳云天励飞技术有限公司 | Personnel archive establishing method and device |
CN110874421A (en) * | 2018-08-28 | 2020-03-10 | 深圳云天励飞技术有限公司 | Target analysis method and device, electronic equipment and storage medium |
CN109829418A (en) * | 2019-01-28 | 2019-05-31 | 北京影谱科技股份有限公司 | A kind of punch card method based on figure viewed from behind feature, device and system |
CN109829418B (en) * | 2019-01-28 | 2021-01-05 | 北京影谱科技股份有限公司 | Card punching method, device and system based on shadow features |
CN110647642A (en) * | 2019-09-25 | 2020-01-03 | 上海依图网络科技有限公司 | Retrieval system and method for improving performance through structured information pre-filtering control library |
WO2021056888A1 (en) * | 2019-09-25 | 2021-04-01 | 上海依图网络科技有限公司 | Retrieval system and method for improving performance by structured information pre-filtering deploy and control library |
CN110990609A (en) * | 2019-12-13 | 2020-04-10 | 云粒智慧科技有限公司 | Searching method, searching device, electronic equipment and storage medium |
CN110990609B (en) * | 2019-12-13 | 2023-06-16 | 云粒智慧科技有限公司 | Searching method, searching device, electronic equipment and storage medium |
CN111798341A (en) * | 2020-06-30 | 2020-10-20 | 深圳市幸福人居建筑科技有限公司 | Green property management method, system computer equipment and storage medium thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107992591A (en) | People search method and device, electronic equipment and computer-readable recording medium | |
Wang et al. | Deep people counting in extremely dense crowds | |
CN101772782B (en) | Device for displaying result of similar image search and method for displaying result of similar image search | |
CN110516705A (en) | Method for tracking target, device and computer readable storage medium based on deep learning | |
CN109978918A (en) | A kind of trajectory track method, apparatus and storage medium | |
CN110175549A (en) | Face image processing process, device, equipment and storage medium | |
CN110751022A (en) | Urban pet activity track monitoring method based on image recognition and related equipment | |
Lin et al. | Visual-attention-based background modeling for detecting infrequently moving objects | |
CN109711890B (en) | User data processing method and system | |
Zhang et al. | Fast collective activity recognition under weak supervision | |
CN107330386A (en) | A kind of people flow rate statistical method and terminal device | |
CN107633206B (en) | Eyeball motion capture method, device and storage medium | |
JP2022518459A (en) | Information processing methods and devices, storage media | |
CN110874583A (en) | Passenger flow statistics method and device, storage medium and electronic equipment | |
CN110245250A (en) | Image processing method and relevant apparatus | |
US6434271B1 (en) | Technique for locating objects within an image | |
WO2017092269A1 (en) | Passenger flow information collection method and apparatus, and passenger flow information processing method and apparatus | |
CN102281385A (en) | Periodic motion detection method based on motion video | |
CN110751675A (en) | Urban pet activity track monitoring method based on image recognition and related equipment | |
WO2023077797A1 (en) | Method and apparatus for analyzing queue | |
CN113255477A (en) | Comprehensive management system and method for pedestrian video images | |
CN110322472A (en) | A kind of multi-object tracking method and terminal device | |
CN110175990A (en) | Quality of human face image determination method, device and computer equipment | |
US20180144074A1 (en) | Retrieving apparatus, display device, and retrieving method | |
CN111767880B (en) | Living body identity recognition method and device based on facial features and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180504 |