CN1819652A - Monitoring device and system - Google Patents

Monitoring device and system

Info

Publication number
CN1819652A
CN1819652A CNA2005100897268A CN200510089726A
Authority
CN
China
Prior art keywords
mentioned
personage
face image
information
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2005100897268A
Other languages
Chinese (zh)
Inventor
森靖英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Publication of CN1819652A publication Critical patent/CN1819652A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

To identify a person with high precision even from images other than a front face image. In the monitoring device, into which a photographed image is input from a camera and which acquires information from a storage unit storing various information, the storage unit stores past information, including information related to images previously input to the monitoring device, and geographic information, including the position information of the camera. The device extracts a face image from the input image, estimates persons corresponding to the extracted face image on the basis of the past information, and determines the person corresponding to the extracted face image from among the estimated persons on the basis of the geographic information.

Description

Monitoring device and monitoring system
Technical field
The present invention relates to a monitoring system that monitors images photographed by a camera, and in particular to a technique for identifying a person appearing in an image by using geographic information.
Background art
In recent years, personal authentication using face image recognition has been spreading. Face image recognition is a technique for identifying a person from a face image. Specifically, a person's face region is first extracted from an image. The extracted face region is then checked against face images registered in advance, and the similarity is determined. The person is then identified on the basis of the determined similarity.
Personal authentication using face images meets little psychological resistance. This is because authentication using face images is performed contactlessly, using images photographed from a distance, so the user does not need to perform an input action at a reading device as is required for fingerprint authentication and the like.
However, personal authentication using face images has problems such as weakness against changes in illumination, changes in the size of the face region in the image, and changes in the orientation of the face.
Conventional person authentication devices using face images have therefore performed authentication by obtaining images suitable for authentication. For example, a person authentication device obtains an image suitable for authentication by photographing the front face of a user standing at a predetermined position in front of the camera. However, such a device must photograph a user standing at the predetermined position every time authentication is performed.
A monitoring device that solves this problem is known (see, for example, Patent Document 1). This monitoring device stores face images from a plurality of directions in advance, so that it can authenticate a person even from images other than a front face image. That is, the monitoring device can perform authentication without requiring the user to stand at a specified position.
In addition, person tracking processing is known as a technique that uses person images (see, for example, Non-Patent Document 1). In this person tracking processing, a face region is extracted from a surveillance camera image, and the size and orientation of the face are determined from the extracted face region. The moving direction and moving speed of the person are then determined from the obtained face size and orientation, and the person is tracked on the basis of the obtained moving direction and moving speed. This person tracking processing is applied to monitoring devices, marketing, and so on.
Such person tracking processing can be carried out with ordinary image processing using frame differences and the like only when all of its conditions are satisfied. These conditions include, for example, that the person being tracked remains within the frame at all times and that the light quantity is above a certain level.
Conventional person tracking processing therefore cannot continue tracking if the person leaves the frame and then re-enters the facility. That is, such person tracking processing has the problem that a person who leaves the frame and a person who re-enters the facility are treated as different persons.
To solve this problem, a technique of tracking with a plurality of cameras is known. However, if there is a blind spot between cameras, the same problem occurs with this technique as well.
[Patent Document 1] Japanese Patent Laid-Open No. 2002-92601.
[Non-Patent Document 1] Mikio Takagi (ed.), "Handbook of Image Analysis, Revised Edition", University of Tokyo Press, September 10, 2004.
Summary of the invention
The monitoring device described in Patent Document 1 above can perform authentication even from images other than a person's front face. However, because this monitoring device identifies the person only by image similarity, the precision of person authentication becomes a problem.
An object of the present invention is therefore to provide a monitoring device that can perform authentication even from images other than a person's front face, and that can do so with high precision.
A technical solution of the present invention provides a monitoring device into which images photographed by a camera are input and which acquires information from a storage unit that stores various information, characterized in that the storage unit stores past information and geographic information, wherein the past information includes information related to images previously input to the monitoring device, and the geographic information includes position information of the camera; the device extracts a face image from the input image and estimates persons corresponding to the extracted face image on the basis of the past information; and, on the basis of the geographic information, the device determines the person corresponding to the extracted face image from among the persons estimated by a face recognition unit.
According to the present invention, authentication is possible even from images other than a person's front face, and furthermore the authentication can be performed with high precision.
Description of drawings
Fig. 1 is a block diagram of the monitoring system according to the first embodiment of the present invention.
Fig. 2 is a configuration diagram of the person appearance data according to an embodiment of the present invention.
Fig. 3 is a configuration diagram of the geographic data according to an embodiment of the present invention.
Fig. 4 is a flowchart of the image processing performed by the server of the first embodiment of the present invention.
Fig. 5 is a block diagram of the configuration of the monitoring system according to the second embodiment of the present invention.
Fig. 6 is an explanatory diagram of the person-flow tracking processing performed by the server of the second embodiment of the present invention.
Fig. 7 is a flowchart of the image processing performed by the server of the second embodiment of the present invention.
Fig. 8 is a block diagram of the configuration of the monitoring system according to the third embodiment of the present invention.
Fig. 9 is a configuration diagram of the lighting condition data according to the third embodiment of the present invention.
Fig. 10A is an explanatory diagram of the influence of illumination in the third embodiment of the present invention.
Fig. 10B is an explanatory diagram of the influence of illumination in the third embodiment of the present invention.
Fig. 11 is a flowchart of the image processing performed by the server of the third embodiment of the present invention.
Embodiments
Embodiments of the present invention are described below with reference to the drawings.
(First embodiment)
Fig. 1 is a block diagram of the monitoring system according to the first embodiment of the present invention.
The monitoring system monitors facilities such as buildings. The monitoring system includes cameras 101, a server 102, an entry/exit control device 103, person appearance data 108, and geographic data 109.
The server 102 is connected to the cameras 101 and the entry/exit control device 103 via a network 112.
A plurality of cameras 101 are installed at different locations. The cameras 101 are installed, for example, at locations where the flow of people branches. Each camera 101 photographs images of the location where it is installed and sends the photographed images to the server 102. An area number that uniquely identifies each location is assigned to the location where each camera 101 is installed.
The server 102 is a computer having a CPU, a memory, and a storage unit. Various programs are stored in the storage unit, and the CPU executes these programs. The server 102 thereby processes the images received from the cameras 101. Based on the result of this processing, the server 102 instructs the entry/exit control device 103 to open or close doors (not shown). The doors are installed in the facility monitored by the monitoring system of this embodiment. The memory temporarily stores various information including the images received from the cameras 101.
The server 102 also includes a geographic/time constraint calculation unit 105 and a face recognition unit 106. The face recognition unit 106 extracts face regions from the images received from the cameras 101 and estimates the persons (corresponding persons) appearing in the extracted face regions. The geographic/time constraint calculation unit 105 determines the corresponding person from among the persons estimated by the face recognition unit 106, on the basis of the location and time at which the image was photographed.
The server 102 also acquires the person appearance data 108 and the geographic data 109. The person appearance data 108 and the geographic data 109 may be stored inside the server 102 or in an external storage device of the server 102.
The person appearance data 108, described later with Fig. 2, holds information related to the images photographed by the cameras 101. The geographic data 109, described later with Fig. 3, holds information related to the locations where the cameras 101 are installed.
The entry/exit control device 103 controls the opening and closing of doors of the facility monitored by the monitoring system.
Fig. 2 is a configuration diagram of the person appearance data 108 according to an embodiment of the present invention.
The person appearance data 108 includes a face image data number 1081, an image feature value 1082, an estimated person name 1083, a reliability 1084, an area number 1085, an appearance date and time 1086, and other attributes 1087.
The face image data number 1081 is an identifier that uniquely identifies face image data. Face image data is an image of a face region extracted from an image photographed by a camera 101.
The image feature value 1082 is a set of a predetermined number of numerical values that characterize the face image data. The feature value 1082 can be calculated by various known methods; the server 102 calculates the feature value 1082 from the face image using a method suited to this embodiment.
The server 102 also calculates, on the basis of the feature values 1082, the similarity between two pieces of face image data being compared. The similarity indicates how similar the two pieces of face image data are; the larger the similarity, the more similar the two compared images.
For example, the server 102 may treat the image feature values 1082 as vectors when determining the similarity. In this case, the server 102 calculates the distance between the image feature values 1082 of the two pieces of image data being compared and determines the similarity from the calculated distance. Specifically, the smaller the calculated distance, the larger the similarity.
Alternatively, the server 102 may calculate a correlation value from the image feature values 1082 of the two pieces of image data being compared and determine the similarity from the calculated correlation value.
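For illustration only, and not as a limitation of the invention, the two similarity calculations described above might be sketched as follows; the vector representation, the helper names, and the mapping from distance to similarity are assumptions of this sketch, not part of the disclosed embodiment.

```python
# Illustrative sketch: two ways of turning image feature values into a
# similarity score, as described above. Feature values are assumed to be
# fixed-length lists of floats.
import math

def similarity_from_distance(feat_a, feat_b):
    """Larger similarity for smaller Euclidean distance between feature vectors."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)))
    return 1.0 / (1.0 + distance)  # distance 0 maps to similarity 1

def similarity_from_correlation(feat_a, feat_b):
    """Similarity taken as the Pearson correlation of the two feature vectors."""
    n = len(feat_a)
    mean_a = sum(feat_a) / n
    mean_b = sum(feat_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum((a - mean_a) ** 2 for a in feat_a))
    norm_b = math.sqrt(sum((b - mean_b) ** 2 for b in feat_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return cov / (norm_a * norm_b)  # in [-1, 1]; larger means more similar
```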
The estimated person name 1083 is the name of a person estimated to be the person (corresponding person) appearing in the face image data. A single face image data number 1081 may have a plurality of estimated person names 1083. In that case, the corresponding person of the image is one of the persons listed in the estimated person names 1083.
The reliability 1084 is the likelihood that the estimated person is the corresponding person.
The area number 1085 is an identifier that uniquely identifies the location where the face image data was photographed. The appearance date and time 1086 holds the date and time at which the face image data was photographed.
The other attributes 1087 hold information related to the face image data. In this explanatory example, the other attributes 1087 hold the orientation of the person, weather information, illumination information, and so on. The orientation of the person is the direction in which the person appearing in the face image data is facing, expressed, for example, as an angle with the frontal direction of the camera 101 taken as 0 degrees. The weather information is the weather when the face image data was photographed, holding, for example, "sunny", "cloudy", or "rainy". The illumination information is the illumination condition when the face image data was photographed, holding, for example, "front light", "backlight", "right oblique light", "left oblique light", or "indoor".
In addition to this information, the person appearance data 108 may also store personal information for each person. The personal information includes the person's face image data numbers, the average of the image feature values of all images of the person, the person's authentication information, and so on.
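Purely as an illustration, one possible in-memory representation of a record of the person appearance data 108 (fields 1081 to 1087) is sketched below; the field names and example values are assumptions chosen for readability, not the patented data layout.

```python
# Illustrative sketch of a person appearance data 108 record.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PersonAppearanceRecord:
    face_image_data_number: int          # 1081: unique identifier of the face image data
    image_feature: List[float]           # 1082: feature values characterizing the face image
    estimated_person_names: List[str]    # 1083: one or more candidate person names
    reliability: List[float]             # 1084: likelihood for each candidate name
    area_number: str                     # 1085: location where the image was photographed
    appearance_datetime: str             # 1086: date and time of photography
    person_orientation_deg: Optional[float] = None  # 1087: face orientation, 0 = camera front
    weather: Optional[str] = None                    # 1087: e.g. "sunny", "cloudy", "rainy"
    illumination: Optional[str] = None               # 1087: e.g. "front light", "backlight"

# Hypothetical example record:
example = PersonAppearanceRecord(
    face_image_data_number=12,
    image_feature=[0.12, 0.87, 0.33],
    estimated_person_names=["Person A"],
    reliability=[0.91],
    area_number="0001",
    appearance_datetime="2005-08-01T09:00:02",
    weather="sunny",
    illumination="front light",
)
```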
Fig. 3 is a configuration diagram of the geographic data 109 according to an embodiment of the present invention.
The geographic data 109 is used by the geographic/time constraint calculation unit 105 in the processing of determining the corresponding person. The geographic data 109 holds the average time required to move from the location of the starting area number 1091 to the location of the destination area number 1092.
The starting area number 1091 is an identifier that uniquely identifies the location before the person moves. The destination area number 1092 is an identifier that uniquely identifies the location after the person moves.
For example, the average travel time from the location with starting area number "0001" to the location with destination area number "0002" is "2 seconds".
A plurality of average travel times may also be stored in each cell. In that case, they indicate that a plurality of moving methods exist, such as elevator, escalator, and stairs.
For example, the average travel time from the location with starting area number "0002" to the location with destination area number "0034" differs depending on the moving method: "10 seconds" by elevator, "125 seconds" by escalator, and "250 seconds" by stairs.
If movement from the location of the starting area number 1091 to the location of the destination area number 1092 is not possible without passing through other locations where cameras 101 are installed, "non-adjacent" is stored in the corresponding cell. In this case, the server 102 searches for a movement route from the location of the starting area number 1091 to the location of the destination area number 1092, and determines the average travel time between the two locations by summing the average travel times along that route.
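As an illustration only, a travel-time lookup of this kind, including the summation along a route when a pair of areas is marked "non-adjacent", might be sketched as follows; the dictionary layout, the breadth-first route search, and the numeric values are assumptions of this sketch.

```python
# Illustrative sketch: average travel time lookup in the geographic data 109.
from collections import deque

NON_ADJACENT = None  # stands for the "non-adjacent" marker in the data

# geo[(start_area, dest_area)] = average travel time in seconds, or NON_ADJACENT
geo = {
    ("0001", "0002"): 2.0,
    ("0002", "0034"): 10.0,       # e.g. the elevator time between these areas
    ("0001", "0034"): NON_ADJACENT,
}

def average_travel_time(start, dest):
    """Return the average travel time from start to dest, summing along a
    route of adjacent areas when the pair itself is marked non-adjacent."""
    direct = geo.get((start, dest))
    if direct is not None:
        return direct
    # Breadth-first search over directly timed (adjacent) pairs.
    queue = deque([(start, 0.0)])
    visited = {start}
    while queue:
        area, elapsed = queue.popleft()
        for (a, b), t in geo.items():
            if a == area and t is not NON_ADJACENT and b not in visited:
                if b == dest:
                    return elapsed + t
                visited.add(b)
                queue.append((b, elapsed + t))
    return None  # no route found

print(average_travel_time("0001", "0034"))  # 2.0 + 10.0 = 12.0
```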
Fig. 4 is a flowchart of the image processing performed by the server 102 of the first embodiment of the present invention.
When an image photographed by a camera 101 is input from the camera 101, the server 102 starts processing (201). At this time, the server 102 may also receive, together with the image, the area number of the location where the image was photographed, the time at which the image was photographed, and so on.
First, an image of a face region (face image) is extracted from the input image (202). If face regions of a plurality of persons are present in the input image, a face image is extracted for each of them, and the following processing is performed on each extracted face image.
Next, an image feature value is calculated from the extracted face image (203). The image feature values 1082 of the person appearance data 108 are then extracted in order from the top record, and a similarity is calculated from the image feature value calculated in step 203 and each extracted image feature value 1082 (204). That is, a similarity is calculated for every record in the person appearance data 108.
Next, the calculated similarities are taken one at a time in descending order, and the record of the person appearance data 108 corresponding to the taken similarity is selected (205).
It is then judged whether the taken similarity is equal to or greater than a threshold (206).
If the similarity is less than the threshold, the two face images being compared are judged to be dissimilar. The person appearing in the face image extracted in step 202 is therefore determined to be a new person (210), and the processing ends. A new person is a person whose face image data is not stored in the person appearance data 108.
On the other hand, if the similarity is equal to or greater than the threshold, the two face images being compared are judged to be similar, and the area number 1085 of the record selected in step 205 is extracted. Then, from the geographic data 109, the average travel time is extracted from the cell whose starting area number 1091 matches the extracted area number 1085 and whose destination area number 1092 matches the area number input in step 201 (207).
Next, the appearance date and time 1086 of the record selected in step 205 is subtracted from the time input in step 201.
It is then judged whether the value obtained by the subtraction falls within a range (for example, ±20%) centered on the extracted average travel time. On this basis, it is judged whether the person could naturally have moved from the location where the image corresponding to the record selected in step 205 was photographed to the location where the image input in step 201 was photographed (208).
If it is judged that the person could not have moved naturally, the persons appearing in the two compared images are judged to be different persons. The processing then returns to step 205 and is repeated for the image with the next-highest similarity.
On the other hand, if it is judged that the person could have moved naturally, the estimated person name 1083 of the record selected in step 205 is determined to be the corresponding person (209), and the processing ends. If a plurality of person names are stored in the estimated person names 1083, the highest reliability is taken from the reliabilities 1084 of the record selected in step 205, and the estimated person name 1083 corresponding to that reliability 1084 is determined to be the corresponding person.
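For illustration only, the determination loop of steps 204 to 209 might be sketched as follows. The helper names, the threshold and tolerance values, and the assumption that records carry a numeric appearance time in seconds are simplifications of this sketch, not part of the disclosed embodiment.

```python
# Illustrative sketch of steps 204-209: take candidate records in descending
# order of similarity, reject those below the threshold, and accept the first
# candidate whose travel time to the current location is plausible.
SIMILARITY_THRESHOLD = 0.8
TOLERANCE = 0.2  # +/-20% around the average travel time

def determine_person(query_feature, input_area, input_time, records,
                     similarity, average_travel_time):
    """records: person appearance records (assumed to expose image_feature,
    area_number, appearance_time in seconds, reliability, estimated_person_names).
    similarity / average_travel_time: functions as sketched earlier.
    Returns a person name, or None when the person is treated as new."""
    scored = [(similarity(query_feature, r.image_feature), r) for r in records]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    for score, record in scored:
        if score < SIMILARITY_THRESHOLD:
            return None  # step 210: no similar record remains, treat as a new person
        travel = average_travel_time(record.area_number, input_area)
        if travel is None:
            continue  # no known route; try the next-highest similarity
        elapsed = input_time - record.appearance_time
        if abs(elapsed - travel) <= TOLERANCE * travel:
            # step 209: take the candidate name with the highest reliability
            best = max(zip(record.reliability, record.estimated_person_names))
            return best[1]
    return None
```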
Before ending the processing, information related to the extracted face image may also be registered in the person appearance data 108.
Specifically, first, a new record is created in the person appearance data 108. A non-duplicate face image data number is assigned to the face image extracted in step 202, and the assigned number is stored in the face image data number 1081 of the new record. The image feature value calculated in step 203 is stored in the image feature value 1082 of the new record.
Next, the name of the determined corresponding person is stored in the estimated person name 1083 of the new record. A reliability is then calculated on the basis of the similarity calculated in step 204, the area number input in step 201, the time of photography, and so on, and the calculated reliability is stored in the reliability 1084 of the new record.
Next, the area number input in step 201 is stored in the area number 1085 of the new record, and the time input in step 201 is stored in the appearance date and time 1086 of the new record. If other information is input from the camera 101 (for example, illumination information including the weather at the time of photography), the input information is stored in the other attributes 1087 of the new record.
As described above, according to this embodiment, persons whose images are similar can be determined by referring to the person appearance data 108. The corresponding person can then be determined from among those similar persons by referring to the geographic data 109.
(Second embodiment)
In the second embodiment, the server 102 tracks persons, thereby reducing the image processing (Fig. 4) of the first embodiment.
Fig. 5 is a block diagram of the configuration of the monitoring system according to the second embodiment of the present invention.
The monitoring system of the second embodiment of the present invention is identical to the monitoring system (Fig. 1) of the first embodiment except for the components included in the server 102. Identical components are given identical reference numerals and their description is omitted.
The server 102 includes, in addition to the components included in the server 102 (Fig. 1) of the first embodiment, a person detection and tracking unit 104 and an entry/exit management unit 107.
The person detection and tracking unit 104 performs person-flow tracking processing (Fig. 6) on the face images extracted by the face recognition unit 106. The entry/exit management unit 107 instructs the entry/exit control device 103 to open and close doors.
Fig. 6 is an explanatory diagram of the person-flow tracking processing performed by the server 102 of the second embodiment of the present invention.
In this explanatory example, the processing of tracking a walking person 601 is described.
The camera 101A photographs the person 601 continuously and inputs the photographed images to the server 102. The server 102 extracts face images from the images input from the camera 101A. On the basis of the extracted face images, the server 102 calculates the moving direction and moving speed of the person 601.
The server 102 judges whether the position of the face image in the image photographed by the camera 101A contradicts the calculated moving direction and moving speed. For example, if the direction in which the position of the face image moves is opposite to the calculated moving direction, a contradiction is judged to exist. Similarly, if the speed at which the position of the face image moves is too fast compared with the calculated moving speed, a contradiction is judged to exist.
If there is no contradiction in the position of the face image, the server 102 judges that the face images extracted from the two sequentially photographed images belong to the same person 601. The server 102 then registers these face images in the person appearance data 108 as information on the person 601.
If an extracted face image is not clear, the server 102 does not register the face image in the person appearance data 108. Because the server 102 registers only clear face images in the person appearance data 108, the features of the person are stored in the person appearance data 108 in association with that person. The server 102 tracks the person 601 in this way.
On the other hand, if there is a contradiction in the position of the face image, the server 102 judges that the face images extracted from the two sequentially photographed images belong to a different person 602.
The server 102 also judges that a face image extracted from an image photographed by another camera 101B belongs to a different person 603.
If the server 102 judges that a face image belongs to a different person 602 or 603, it determines the corresponding person of that face image using the image processing (Fig. 4) of the first embodiment.
Fig. 7 is a flowchart of the image processing performed by the server 102 of the second embodiment of the present invention.
When an image photographed by a camera 101 is input from the camera 101, the server 102 starts processing (301). At this time, the server 102 may also receive, together with the image, the area number of the location where the image was photographed, the time at which the image was photographed, and so on.
First, an image of a face region (face image) is extracted from the input image (302). The size of the extracted face image is then determined.
Furthermore, the orientation of the face appearing in the extracted face image is estimated. For example, the orientation is estimated using the person appearance data 108, which stores the image feature values of face images photographed from a plurality of angles. In this case, similarities are calculated between the image feature values stored in the person appearance data 108 and the image feature value of the face image extracted in step 302. The stored image feature value with the largest calculated similarity is then identified, and the photography angle of the image having that feature value is taken as the estimated orientation of the face appearing in this face image.
Next, a person-flow tracking state is calculated on the basis of the obtained face image size and the estimated face orientation. The person-flow tracking state includes the moving direction and moving speed of the person appearing in the face image. The person-flow tracking state can be calculated, for example, using existing image processing (see, for example, Non-Patent Document 1).
Next, the change direction and the change amount of the position of the face region are determined, from the position of the face region in the image previously input from this camera 101 to the position of the face region in the currently input image. Then, referring to the obtained change direction, change amount, and person-flow tracking state (303), it is judged whether this face image is an image being tracked (304). Specifically, if the obtained change direction is consistent with the moving direction included in the person-flow tracking state, and the obtained change amount is consistent with the moving speed included in the person-flow tracking state, the face image is judged to be an image being tracked.
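For illustration only, the consistency check of steps 303 and 304 might be sketched as follows; the angle and speed tolerances and the coordinate representation are assumptions of this sketch.

```python
# Illustrative sketch: judge whether a face image is "being tracked" by
# comparing the observed change of the face-region position with the
# predicted person-flow tracking state.
import math

def is_being_tracked(prev_pos, curr_pos, dt_seconds,
                     predicted_direction_deg, predicted_speed,
                     angle_tol_deg=45.0, speed_tol_ratio=0.5):
    """prev_pos / curr_pos: (x, y) of the face region in consecutive frames."""
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    observed_direction = math.degrees(math.atan2(dy, dx)) % 360.0
    observed_speed = math.hypot(dx, dy) / dt_seconds

    # Smallest angular difference between observed and predicted directions.
    diff = abs(observed_direction - predicted_direction_deg) % 360.0
    diff = min(diff, 360.0 - diff)

    direction_ok = diff <= angle_tol_deg
    speed_ok = abs(observed_speed - predicted_speed) <= speed_tol_ratio * predicted_speed
    return direction_ok and speed_ok

# Example: a small deviation in direction and speed is still accepted.
print(is_being_tracked((100, 100), (120, 105), 0.5, 10.0, 40.0))  # True
```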
If the face image is not an image being tracked, person determination processing (307) is performed, and the processing proceeds to step 308. The person determination processing 307 is identical to steps 203 to 210 of the image processing (Fig. 4) of the first embodiment, so its description is omitted.
On the other hand, if the face image is an image being tracked, the corresponding person of the image previously input from the camera 101 that photographed this face image is determined to be the corresponding person of this face image (305).
Specifically, the face image data number of the image previously input from the camera 101 that photographed the image input in step 301 is retrieved. The record of the person appearance data 108 whose face image data number 1081 matches the retrieved face image data number is then identified. The highest reliability 1084 is selected from the identified records, and the estimated person name 1083 corresponding to the selected reliability 1084 is determined to be the corresponding person of this face image.
Next, the reliability of the determined person is calculated on the basis of the change direction and change amount of the person's position obtained in step 303, the person-flow tracking state, and so on.
Next, an image feature value is calculated from the face image extracted in step 302 (306). If the person appearance data 108 includes the personal information of the person, the average of the image feature values in that personal information is updated on the basis of the calculated image feature value.
Next, information related to this face image is stored in the person appearance data 108 (308).
Specifically, first, a new record is created in the person appearance data 108. A non-duplicate face image data number is assigned to the extracted face image, and the assigned number is stored in the face image data number 1081 of the new record.
Next, the image feature value calculated in step 306 is stored in the image feature value 1082 of the new record, and the name of the determined person is stored in the estimated person name 1083 of the new record.
Next, the reliability calculated in step 305 is stored in the reliability 1084 of the new record, and the area number input in step 301 is stored in the area number 1085 of the new record.
Next, the time input in step 301 is stored in the appearance date and time 1086 of the new record. If other information is input from the camera 101 (for example, illumination information including the weather at the time of photography), the input information is stored in the other attributes 1087 of the new record.
It is then judged whether the camera 101 that input this image is a camera 101 installed in front of a door (309). Specifically, it is judged whether the area number input in step 301 is the area number of a location in front of a door.
If the camera is not a camera 101 installed in front of a door, no authentication is needed, so the processing ends.
On the other hand, if the camera is a camera 101 installed in front of a door, it is judged whether the corresponding person determined in step 305 or step 307 is permitted to enter (310). Specifically, whether the corresponding person is permitted to enter is judged by referring to the authentication information stored in the person appearance data 108.
If entry is permitted, it is judged whether the reliability calculated in step 305 is equal to or greater than a threshold (311).
If the reliability is equal to or greater than the threshold, the person appearing in the image is judged to be a person permitted to enter, and the entry/exit control device 103 is instructed to unlock the door (312). The processing then ends. The entry/exit control device 103 unlocks the door.
On the other hand, if it is judged in step 310 that the person is not permitted to enter, or if it is judged in step 311 that the reliability is less than the threshold, the image is sent to the administrator (313). The processing then ends. The administrator maintains the security of the facility by referring to the received image.
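As an illustration only, the door-front decision of steps 309 to 313 might be sketched as follows; the function names, the callables for unlocking and notification, and the threshold value are assumptions of this sketch.

```python
# Illustrative sketch of steps 309-313: unlock only when the camera is at a
# door, the determined person is permitted to enter, and the reliability
# clears a threshold; otherwise send the image to the administrator.
RELIABILITY_THRESHOLD = 0.9

def handle_door_front(area_number, door_areas, person, permitted_persons,
                      reliability, unlock_door, notify_administrator, image):
    """door_areas: area numbers of door-front cameras; unlock_door and
    notify_administrator: callables provided by the entry/exit side."""
    if area_number not in door_areas:
        return  # step 309: not a door-front camera, processing ends
    if person in permitted_persons and reliability >= RELIABILITY_THRESHOLD:
        unlock_door(area_number)          # steps 310-312: permitted and reliable
    else:
        notify_administrator(image)       # step 313: send the image to the administrator

# Example with stub callables and hypothetical values:
handle_door_front("0005", {"0005"}, "Person A", {"Person A"},
                  0.95, lambda a: print("unlock", a), lambda img: None, image=b"...")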
Next, a case in which the monitoring system of this embodiment is used for security in a facility such as a building is described.
In this facility, security is ensured by authentication at the entrance door of the facility and at the door of each office within the facility.
The cameras 101 are ordinary surveillance cameras, and are mainly installed at locations where the flow of persons branches, such as entrances, stairways, and elevators. One camera 101 is installed for each field of view and coverage area.
Next, the processing of the monitoring system for a person permitted to enter the facility (a registrant) is described. A registrant is a person who enters and leaves the facility regularly and frequently, for example a staff member of the facility. The description assumes that the registrant goes from the entrance door of the facility to a destination office.
The monitoring system registers the registrant's face images and the facility access permission information in the person appearance data 108 in advance. Not only the registrant's front face image but also face images photographed from a plurality of directions are registered. For example, 15 face patterns are registered, combining five horizontal directions (front, right 45 degrees, right 90 degrees, left 45 degrees, and left 90 degrees) with three vertical directions (front, 10-degree depression angle, and 45-degree depression angle).
First, the registrant approaches the entrance door of the facility, and the camera 101 installed at the entrance door photographs the registrant. The server 102 then determines the person appearing in the image input from the camera 101 installed at the entrance door. In this case, the person appearing in the image is determined to be the registrant, so the server 102 instructs the entrance door of the facility to be unlocked.
When high security is required, the monitoring system may also use other authentication means in combination, for example password input, an ID card, or biometrics.
At this time, the server 102 extracts a face image from this image and registers the extracted face image in the person appearance data 108 as information on the registrant.
When the entrance door of the facility opens, the registrant moves toward the destination office. The cameras 101 on the registrant's movement route photograph the registrant and input the photographed images to the server 102. The server 102 extracts face images from the input images and tracks the extracted face images. The server 102 thereby prevents persons other than the registrant from slipping in, and the processing load can also be reduced.
When tracking is not possible, the server 102 determines the corresponding person of the extracted face image. For example, there are cases where the registrant leaves the field of view of the camera 101 installed at the entrance door and enters the field of view of the camera 101 installed at the first-floor elevator hall. In this case, the server 102 determines persons with similar images on the basis of the image feature values 1082 of the person appearance data 108. The server 102 then calculates the moving direction and moving speed of the person appearing in the face image from the extracted face image. Referring to the calculated moving direction and moving speed, the server 102 determines the corresponding person appearing in the extracted face image from among the persons with similar images. Specifically, the server 102 determines the corresponding person by judging whether the distance between the camera 101 installed at the entrance door and the camera 101 installed at the elevator hall and the registrant's travel time contradict the calculated moving direction and moving speed. The server 102 may also refer to the appearance date and time 1086 and the other attributes 1087 of the person appearance data 108 when determining the corresponding person. This improves the reliability of the determined corresponding person.
If the extracted face image is clear, the server 102 registers this face image in the person appearance data 108 as information on the registrant.
The registrant then moves from the first-floor elevator hall to the sixth-floor elevator hall, and then to the destination office on the sixth floor.
The server 102 determines the name of the person appearing in the image input from the camera 101 installed in front of the destination office. Here, the server 102 determines that the person appearing in the image is the registrant. The server 102 therefore unlocks the door of the destination office at the moment it judges that the registrant has arrived in front of the destination office.
As described above, the monitoring system of this embodiment can unlock doors without requiring the registrant to perform a conscious authentication action. In addition, the more frequently a registrant appears, the more face images the monitoring system accumulates. The system can therefore improve the precision of authentication, increasing both security and convenience.
Next, the processing of the monitoring system for a person visiting for the first time (a visitor) is described. The description assumes that the visitor goes from the entrance door of the facility to a destination office.
First, the visitor contacts the administrator at the entrance door of the facility. If the visitor is permitted to enter the facility, the administrator unlocks the entrance door. In addition, the administrator inputs the area number of the destination office to the server 102. At this time, the camera 101 installed at the entrance door of the facility photographs the visitor, and the image is input to the server 102. The server 102 extracts a face image from the input image. The extracted face image is then stored in the person appearance data 108 together with the area number input by the administrator.
The visitor then moves toward the destination office. At this time, the server 102 performs the same processing as the processing for the registrant described above. Accordingly, if the visitor arrives at the destination office without taking a wrong path, the server 102 unlocks the door of the destination office. In this case as well, the door can be unlocked without the visitor performing a conscious authentication action.
If the visitor takes a wrong path, the server 102 detects the deviation from the path and notifies the administrator. The server 102 may also notify the visitor of the wrong path using a communication means. The server 102 can thereby prevent the visitor from deviating from the path.
Next, the processing of the monitoring system for a registrant who moves from his or her own office to other locations in the facility is described. For example, a registrant may move from his or her own office to a common area used for recreation or the like. In conventional monitoring systems, an authentication action is required every time the person enters or leaves the office, which impairs convenience.
When the registrant leaves the office, the camera 101 installed at the door of the office photographs the registrant and inputs the photographed image to the server 102. The server 102 extracts a face image from the input image and stores the extracted face image in the person appearance data 108. If a face image similar to the face image stored in the person appearance data 108 is input within a specified time from the camera 101 installed in front of the office, the server 102 instructs the door to be unlocked. Accordingly, the registrant does not need to perform an authentication action every time he or she enters or leaves the office.
Next, the processing for a case where a registrant enters an area that the registrant is prohibited from entering is described.
If the corresponding person of an image input from a camera 101 installed in a prohibited area is determined to be this registrant, the server 102 judges that an intrusion into the prohibited area has occurred and notifies the administrator. The server 102 may also notify the registrant, using a communication means, that entry is prohibited. Furthermore, the server 102 may judge the frequency with which this registrant intrudes into prohibited areas by referring to the person appearance data 108. If the frequency of intrusion is high, the server 102 notifies the administrator that this registrant is a person requiring attention. Furthermore, because the number of face images of this registrant stored in the person appearance data 108 increases, the server 102 improves the precision of judgments regarding this registrant.
Through all of the above processing, the monitoring system of this embodiment can monitor a facility without requiring any person to perform a conscious authentication action.
Next, a case in which the monitoring system of this embodiment is applied to a visitor notification service in a residential area is described.
In recent years, from the viewpoint of security, residential areas whose entrances are gated to restrict visitors have been increasing. In such an area, there is a gate at the entrance to the area and also a door at each dwelling. Therefore, apart from the difference between indoors and outdoors, the monitoring system performs the same processing as in the above-described case of ensuring security in a facility.
In addition, the monitoring system may notify the visited party of the corresponding person name determined from an image photographed near the entrance of the area. The monitoring system may also send the image photographed near the entrance of the area to the visited party. Furthermore, if the visitor takes a wrong path, the monitoring system may notify the visited party of the deviation from the path.
As described above, the monitoring system applied to the visitor notification service can ensure the security of the residential area.
(Third embodiment)
In the third embodiment, images are corrected according to the illumination conditions.
Fig. 8 is a block diagram of the configuration of the monitoring system according to the third embodiment of the present invention.
The monitoring system of the third embodiment is identical to the monitoring system (Fig. 1) of the first embodiment except that it includes lighting condition data 110 and except for the components included in the server 102. Identical components are given identical reference numerals and their description is omitted.
The lighting condition data 110, described later with Fig. 9, represents the illumination conditions at the locations where the cameras 101 are installed. Illumination includes outside light from the sun and inside light from indoor lighting and the like.
The server 102 includes, in addition to the components included in the server 102 (Fig. 1) of the first embodiment, an illumination/weather condition estimation unit 111.
The illumination/weather condition estimation unit 111 corrects the face images extracted by the face recognition unit 106 on the basis of the lighting condition data 110.
Fig. 9 is a configuration diagram of the lighting condition data 110 according to the third embodiment of the present invention.
The lighting condition data 110 includes an area number 1101, a photography condition 1102, a suitable face image data number 1103, an illumination state 1104, a feature correction coefficient 1105, and preprocessing content 1106.
The area number 1101 is an identifier that uniquely identifies the location where a camera 101 is installed. The photography condition 1102 is the condition under which images are photographed by the camera 101 installed in this area. In this explanatory example, the photography condition 1102 holds, for the person appearing in the image, the time band in which the image is photographed, the weather when the image is photographed, and so on.
The suitable face image data number 1103 is an identifier that uniquely identifies an image photographed under this photography condition. When there are a plurality of applicable images, a plurality of identifiers are held in the suitable face image data number 1103.
The illumination state 1104 is the state of the outside light affecting images photographed under this photography condition. In this explanatory example, the illumination state 1104 holds the kind of outside light and the intensity of outside light. The kind of outside light is, for example, "front light", "backlight", "right oblique light", "left oblique light", or "indoor". When the kind of outside light is "indoor", the image is not affected by outside light. The intensity of outside light is, for example, "strong", "medium", or "weak".
The feature correction coefficient 1105 is a coefficient for correcting the image feature value of an image photographed under this photography condition. By using the feature correction coefficient 1105, the server 102 corrects the image feature value so as to remove the influence of outside light. The feature correction coefficients 1105 are determined in advance so that the luminance distributions and color distributions of the face images photographed by the plurality of cameras 101 become the same.
For example, if the weather is sunny and the time band is daytime, the feature correction coefficient 1105 is relatively large. Conversely, if the weather is cloudy and the time band is nighttime, the feature correction coefficient 1105 is relatively small. In addition, for a location with a west-facing window in the evening time band, the feature correction coefficient 1105 strengthens the component that corrects for backlight.
The preprocessing content 1106 is the content of the preprocessing applied to an image photographed under this photography condition. For example, the preprocessing content 1106 holds "hot spot", "brightness", "backlight", and so on. When the server 102 extracts a face image, it performs processing corresponding to the preprocessing content 1106. If the preprocessing content 1106 is "hot spot", the server 102 equalizes the brightness of the image. If the preprocessing content 1106 is "brightness", the server 102 darkens the entire image. If the preprocessing content 1106 is "backlight", the server 102 brightens the entire image.
The lighting condition data 110 may also hold information related to indoor lighting and the like. For example, the presence or absence of indoor lighting is stored in the photography condition 1102, and the feature correction coefficient 1105 is set to a value that takes the indoor lighting into account. The server 102 can thereby perform correction that also takes indoor lighting into account.
Fig. 10A is an explanatory diagram of the influence of illumination in the third embodiment of the present invention.
Here, a case with three cameras 101C, 101D, and 101E is described.
First, the illumination/weather condition estimation unit 111 obtains the weather and the time at which the image was photographed. The illumination/weather condition estimation unit 111 selects the record whose area number 1101 in the lighting condition data 110 matches the area number of the camera 101 that photographed the image and whose photography condition 1102 matches the obtained weather, time, and so on. The illumination state 1104 is then extracted from the selected record. The extracted illumination state 1104 is the illumination condition affecting this image.
For example, in this explanatory example, the weather is sunny and the time is 9 o'clock.
In this case, the camera 101C is affected by strong right oblique light, and the camera 101D is affected by strong backlight. The camera 101E is installed indoors and is not affected by outside light.
Fig. 10B is an explanatory diagram of the influence of outside light in the third embodiment of the present invention.
In this explanatory example, the weather is cloudy and the time is 17 o'clock.
In this case, the camera 101C is affected by weak front light, and the camera 101D is affected by weak left oblique light. The camera 101E is installed indoors and is not affected by outside light.
Fig. 11 is a flowchart of the image processing performed by the server 102 of the third embodiment of the present invention.
When an image photographed by a camera 101 is input from the camera 101, the server 102 starts processing (401). At this time, the server 102 may also receive, together with the image, the area number of the location where the image was photographed, the time at which the image was photographed, and so on.
First, an image of a face region (face image) is extracted from the input image (402).
Next, the photography condition of the input image is determined (403). The photography condition includes the time band in which the image was photographed, the weather when the image was photographed, and so on. Specifically, the weather when the image was photographed is determined on the basis of weather information from an external network, luminance information from an external light sensor, or information input by the administrator. The time band in which the image was photographed is determined on the basis of the time input from the camera 101.
Next, the record whose area number 1101 in the lighting condition data 110 matches the area number input from the camera 101 and whose photography condition 1102 matches the determined photography condition is selected from the lighting condition data 110. The feature correction coefficient 1105 and the preprocessing content 1106 are then extracted from the selected record.
Next, the preprocessing corresponding to the extracted preprocessing content 1106 is applied to the face image. An image feature value is then calculated from the preprocessed face image (404).
Next, the calculated image feature value is corrected using the extracted feature correction coefficient 1105 (405). Person determination processing is then performed (406). The person determination processing 406 is identical to steps 204 to 210 of the image processing (Fig. 4) of the first embodiment, so its description is omitted.
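For illustration only, steps 403 to 405 might be sketched as follows. The record layout, the toy pixel-level preprocessing functions, and the element-wise application of the correction coefficient are assumptions of this sketch, not the patented correction method.

```python
# Illustrative sketch of steps 403-405: select the lighting condition record
# for the area and photography condition, apply the named preprocessing, then
# scale the feature values by the feature correction coefficient.
def equalize_brightness(pixels):
    """Toy stand-in for 'hot spot' handling: pull pixels toward their mean."""
    mean = sum(pixels) / len(pixels)
    return [0.5 * p + 0.5 * mean for p in pixels]

def darken(pixels):
    return [0.8 * p for p in pixels]

def brighten(pixels):
    return [min(255.0, 1.2 * p) for p in pixels]

def correct_feature(pixels, area_number, time_band, weather, lighting_records,
                    extract_feature):
    """pixels: flat list of brightness values (a simplification of the face
    image); lighting_records: list of dicts standing in for the data 110;
    extract_feature: callable that turns pixels into a feature value list."""
    record = next(r for r in lighting_records
                  if r["area_number"] == area_number
                  and r["time_band"] == time_band
                  and r["weather"] == weather)

    pre = record["preprocessing"]           # "hot spot", "brightness", or "backlight"
    if pre == "hot spot":
        pixels = equalize_brightness(pixels)
    elif pre == "brightness":
        pixels = darken(pixels)
    elif pre == "backlight":
        pixels = brighten(pixels)

    feature = extract_feature(pixels)                       # step 404
    coeff = record["feature_correction_coefficient"]        # step 405
    return [f * coeff for f in feature]
```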
As described above, by applying to the face image a correction corresponding to the illumination conditions under which the image was photographed, the precision of determining the corresponding person can be improved.
Next, a case in which the monitoring system of this embodiment is used for security in a downtown area or the like is described.
In the downtown area, as in the facility, a plurality of cameras 101 are installed. The monitoring system stores in advance the relative positions and orientations of the installed cameras 101. The cameras 101 are installed, for example, on crime-prevention posts placed at regular intervals along the streets. Surveillance cameras already installed in busy streets may also be used as the cameras 101.
First, the processing of searching for a specific person is described. A specific person is, for example, a wanted criminal or a lost child.
First, the monitoring system extracts face images from the images photographed by the cameras 101. The extracted face images are then corrected on the basis of the lighting condition data 110, and the corrected face images are tracked. All face images that could be tracked are determined to be face images of the same person.
When tracking is not possible, the monitoring system determines the corresponding person of the photographed face image. The monitoring system then accumulates, for each person, the extracted face images and the information related to those face images.
The monitoring system then calculates, for all accumulated face images, the degree of match between each accumulated face image and the face image of the specified person. If many of the calculated degrees of match are equal to or greater than a threshold, the monitoring system notifies the administrator.
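As an illustration only, the notification decision described above might be sketched as follows; the threshold, the minimum number of matches, and the function names are assumptions of this sketch.

```python
# Illustrative sketch: notify the administrator only when many of the face
# images accumulated for one person match the specified person's face image.
MATCH_THRESHOLD = 0.85
MIN_MATCHES = 5

def should_notify(accumulated_features, target_feature, similarity):
    """accumulated_features: feature values accumulated for one person;
    target_feature: feature value of the specified person's face image;
    similarity: a function such as the one sketched earlier."""
    matches = sum(1 for f in accumulated_features
                  if similarity(f, target_feature) >= MATCH_THRESHOLD)
    return matches >= MIN_MATCHES
```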
Because the monitoring system of this embodiment accumulates many face images taken under various environments, the precision of person determination can be improved. In addition, even if the degree of match with some of the accumulated face images is high, no notification is made if the degree of match with the other accumulated face images is low, so false alarms can be reduced.
The monitoring system may also detect abnormal behavior of a person together with the search processing for a specified person. For example, the monitoring system detects abnormal behavior using existing image processing techniques. Abnormal behavior is, for example, a person remaining in the same place for a long time. If abnormal behavior is detected, the monitoring system notifies the administrator. The monitoring system may also accumulate the face images of the person who exhibited the abnormal behavior and determine the person of the accumulated face images.
The monitoring system of this embodiment can also be used for congestion detection.
By adopting face recognition, the monitoring system of this embodiment can grasp the relationship between congestion conditions at a plurality of places. For example, the monitoring system judges that 80% of the persons who appear at place A also appear at place B. In this case, the administrator can understand that there is a high correlation between the persons who visit place A and the persons who visit place B.
In the monitoring system of this embodiment, person identification becomes more difficult as the number of persons increases. However, because it is sufficient to grasp the relationship between persons appearing at a plurality of places, this monitoring system can be used even if the precision of person recognition is low.
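Purely as an illustration, the relationship between two places might be estimated as follows from the determined person names; the input representation and the example values are assumptions of this sketch.

```python
# Illustrative sketch: fraction of persons seen at one place who are also
# seen at another, as in the 80% example above.
def cooccurrence_ratio(appearances, place_a, place_b):
    """appearances: mapping from place to the set of determined person names."""
    at_a = appearances.get(place_a, set())
    at_b = appearances.get(place_b, set())
    if not at_a:
        return 0.0
    return len(at_a & at_b) / len(at_a)

appearances = {
    "A": {"p1", "p2", "p3", "p4", "p5"},
    "B": {"p1", "p2", "p3", "p4", "p9"},
}
print(cooccurrence_ratio(appearances, "A", "B"))  # 0.8, i.e. 80%
```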
Representative aspects of the present invention other than those described in the claims include the following.
A monitoring device which receives captured images from a camera and acquires information from a storage unit that stores various kinds of information, wherein the storage unit stores past information and geographic information, the past information including information related to images input to the monitoring device in the past, and the geographic information including position information of the camera;
a face image is extracted from the input image, and the moving direction and moving speed of the person corresponding to the face image are obtained from the extracted face image;
whether the extracted face image can be tracked is determined based on the obtained moving direction and moving speed;
if tracking is possible, the person of the tracked face image is determined to be the person corresponding to the extracted face image;
if tracking is not possible, the person corresponding to the extracted face image is estimated based on the past information;
and the person corresponding to the extracted face image is determined from among the estimated persons based on the geographic information.
Accordingly, the monitoring device can determine the person corresponding to a face image through the face-image tracking process. Furthermore, since the monitoring device performs the image processing for identifying the person corresponding to a face image only when the face image cannot be tracked, the processing load can be reduced.
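The geographic consistency check in the flow summarized above can be illustrated as follows. This is only a sketch: it assumes each candidate carries a similarity score, the time of the previous sighting, and the travel time from the camera of that sighting to the current camera; the dictionary keys are hypothetical and not part of any data structure defined in the specification.

```python
def geographically_plausible(last_seen_time: float, current_time: float,
                             travel_time: float) -> bool:
    """Keep a candidate only if enough time has passed for the person to
    have moved from the camera of the previous sighting to this camera."""
    return (current_time - last_seen_time) >= travel_time

def pick_person(candidates: list[dict], current_time: float):
    """candidates -- dicts with keys 'name', 'similarity',
    'last_seen_time' and 'travel_time' (both times in seconds)."""
    plausible = [c for c in candidates
                 if geographically_plausible(c["last_seen_time"],
                                             current_time,
                                             c["travel_time"])]
    # Among the geographically plausible candidates, take the one whose
    # past face images are most similar to the extracted face image.
    return max(plausible, key=lambda c: c["similarity"], default=None)
```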
[Industrial Applicability]
Since the present invention tracks persons in images captured by cameras, it can be widely applied to indoor and outdoor monitoring devices. In addition, since the present invention also has the function of an authentication device, it can also be used as a replacement for conventional authentication devices.

Claims (9)

1. A monitoring device that receives captured images from a camera and acquires information from a storage unit storing various kinds of information, characterized in that
the storage unit stores past information and geographic information, the past information including information related to images captured by the camera in the past, and the geographic information including information on the area photographed by the camera, and
the monitoring device comprises: a face recognition unit that extracts a face image from the input image and estimates a person corresponding to the extracted face image based on the past information; and
a determination unit that determines, based on the geographic information, the person corresponding to the extracted face image from among the persons estimated by the face recognition unit.
2. The monitoring device according to claim 1, characterized in that
the past information includes feature values representing features of face images extracted by the monitoring device in the past, and
the face recognition unit obtains, from the extracted face image, a feature value representing the features of that face image, and
estimates the person corresponding to the extracted face image by comparing the feature values included in the past information with the obtained feature value.
3. The monitoring device according to claim 1, characterized in that
the geographic information includes the time required for a person to move through the area photographed by the camera,
the past information includes the times at which the face images extracted by the monitoring device in the past were captured, and
the determination unit retrieves, from the past information, a face image of the estimated person that was captured previously,
obtains from the past information the time at which the retrieved face image was captured,
subtracts the obtained time from the time at which the extracted face image was captured, and
determines the person corresponding to the extracted face image from among the persons estimated by the face recognition unit by comparing the time difference resulting from the subtraction with the travel time included in the geographic information.
4. The monitoring device according to claim 1, characterized by further comprising:
a tracking unit that obtains, from the extracted face image, the moving direction and moving speed of the person corresponding to that face image, and tracks the extracted face image based on the obtained moving direction and moving speed.
5. The monitoring device according to claim 1, characterized in that
the storage unit stores illumination information including the illumination conditions under which the face image was captured,
the monitoring device comprises a correction unit that corrects the extracted face image based on the illumination information, and
the face recognition unit estimates the person corresponding to the extracted face image based on the face image corrected by the correction unit.
6. The monitoring device according to claim 1, characterized in that
the determination unit, when determining the person corresponding to the extracted face image, obtains a reliability representing the likelihood that the determined person and the person corresponding to that face image are the same person, and
stores the obtained reliability in the past information.
7. A surveillance system for monitoring a facility, comprising: one or more cameras; a storage device storing various kinds of information; and a monitoring server that receives captured images from the cameras and acquires information from the storage device, the surveillance system being characterized in that
the storage device stores past information and geographic information, the past information including information related to images captured by the cameras in the past, and the geographic information including information on the areas photographed by the cameras, and
the monitoring server comprises: a face recognition unit that extracts a face image from the input image and estimates a person corresponding to the extracted face image based on the past information; and
a determination unit that determines, based on the geographic information, the person corresponding to the extracted face image from among the estimated persons.
8. The surveillance system according to claim 7, characterized in that
the surveillance system comprises an entry/exit control device that controls the opening and closing of the entrance of the facility,
the storage device stores authentication information including information on whether each person is permitted to enter the facility,
the determination unit judges, based on the authentication information, whether the determined person is permitted to enter the facility, and
the entry/exit control device performs control so as to open the entrance of the facility when the determination unit judges that the person is permitted to enter the facility.
9. The surveillance system according to claim 7, characterized in that
the monitoring server comprises:
a tracking unit that obtains, from the extracted face image, the moving direction and moving speed of the person corresponding to that face image, and tracks the extracted face image based on the obtained moving direction and moving speed, and
the determination unit determines the person of the tracked face image as the person corresponding to the extracted face image.
CNA2005100897268A 2005-02-09 2005-08-05 Watching device and system Pending CN1819652A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005033290 2005-02-09
JP2005033290A JP2006221355A (en) 2005-02-09 2005-02-09 Monitoring device and monitoring system

Publications (1)

Publication Number Publication Date
CN1819652A true CN1819652A (en) 2006-08-16

Family

ID=36919305

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2005100897268A Pending CN1819652A (en) 2005-02-09 2005-08-05 Watching device and system

Country Status (3)

Country Link
JP (1) JP2006221355A (en)
KR (1) KR20060090554A (en)
CN (1) CN1819652A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101754045A (en) * 2008-12-08 2010-06-23 北京中星微电子有限公司 Method for distinguishing stay-dead picture in monitoring system, monitoring system and device
CN101964872A (en) * 2009-07-23 2011-02-02 索尼公司 Composition determination device, imaging system, composition determination method, and program
CN101415077B (en) * 2007-10-17 2011-06-08 索尼株式会社 Composition determining apparatus, composition determining method
CN102388608A (en) * 2009-04-10 2012-03-21 欧姆龙株式会社 Monitoring system, and monitoring terminal
CN101763506B (en) * 2008-12-22 2013-07-31 Nec九州软件株式会社 Facial image tracking apparatus and method
CN103632141A (en) * 2013-11-28 2014-03-12 小米科技有限责任公司 Method, device and terminal equipment for figure identifying
CN107704851A (en) * 2017-10-30 2018-02-16 歌尔股份有限公司 Character recognition method, Public Media exhibiting device, server and system
CN110140153A (en) * 2017-01-23 2019-08-16 韩华泰科株式会社 Monitoring device and system
CN111492371A (en) * 2017-12-14 2020-08-04 三菱电机株式会社 Retrieval system and monitoring system

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101282913B1 (en) * 2006-09-08 2013-07-05 엘지전자 주식회사 Method for controling monitoring camera in digital video recorder
JP2008236676A (en) * 2007-03-23 2008-10-02 Fujifilm Corp Photographing system and photographing method
JP4798042B2 (en) * 2007-03-29 2011-10-19 オムロン株式会社 Face detection device, face detection method, and face detection program
CN101785036B (en) 2007-08-21 2012-12-05 日本电气株式会社 Behavior monitoring system and behavior monitoring method
US9143573B2 (en) 2008-03-20 2015-09-22 Facebook, Inc. Tag suggestions for images on online social networks
US8666198B2 (en) 2008-03-20 2014-03-04 Facebook, Inc. Relationship mapping employing multi-dimensional context including facial recognition
JP2010118019A (en) * 2008-11-14 2010-05-27 Sharp Corp Terminal device, distribution device, control method of terminal device, control method of distribution device, control program, and recording medium
CA2826177C (en) * 2011-02-03 2017-08-08 Facebook, Inc. Systems and methods for image-to-text and text-to-image association
WO2014125882A1 (en) * 2013-02-15 2014-08-21 日本電気株式会社 Information processing system, information processing method, and program
WO2014171258A1 (en) * 2013-04-16 2014-10-23 日本電気株式会社 Information processing system, information processing method, and program
JP6551226B2 (en) * 2013-04-26 2019-07-31 日本電気株式会社 INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM
KR102097781B1 (en) * 2013-09-06 2020-05-29 에스케이 텔레콤주식회사 Apparatus and Method for Providing Object Information
US9690978B2 (en) 2013-09-13 2017-06-27 Nec Hong Kong Limited Information processing apparatus, information processing and program
JP6923011B2 (en) * 2015-12-16 2021-08-18 日本電気株式会社 Travel time storage system, travel time storage method and travel time storage program
JP6645166B2 (en) * 2015-12-16 2020-02-14 日本電気株式会社 Travel time storage system, travel time storage method, and travel time storage program
JP6433928B2 (en) 2016-02-15 2018-12-05 株式会社東芝 SEARCH DEVICE, SEARCH METHOD, AND SEARCH SYSTEM
JP6991771B2 (en) * 2017-07-31 2022-01-13 キヤノン株式会社 Information processing equipment, information processing system, information processing method and program
JP2019185384A (en) * 2018-04-10 2019-10-24 キヤノン株式会社 Image authentication device, image authentication method, computer program and storage medium
JP7409782B2 (en) * 2019-04-26 2024-01-09 清水建設株式会社 Reception guidance system, reception guidance method
US11507768B2 (en) 2019-07-04 2022-11-22 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
JP7374632B2 (en) * 2019-07-09 2023-11-07 キヤノン株式会社 Information processing device, information processing method and program
JP7419790B2 (en) 2019-12-18 2024-01-23 大日本印刷株式会社 Rename processing equipment and print sales system
JP7218778B2 (en) * 2020-01-09 2023-02-07 日本電気株式会社 Information processing system, method and program
CN112926722B (en) * 2021-01-27 2022-11-01 上海兰宝传感科技股份有限公司 Method for counting people in escalator entrance area

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002279466A (en) * 2001-03-15 2002-09-27 Toshiba Corp Device and method for admission control
JP2003204541A (en) * 2001-12-28 2003-07-18 Nippon Signal Co Ltd:The Video processing method and video processing apparatus

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101415077B (en) * 2007-10-17 2011-06-08 索尼株式会社 Composition determining apparatus, composition determining method
CN101754045A (en) * 2008-12-08 2010-06-23 北京中星微电子有限公司 Method for distinguishing stay-dead picture in monitoring system, monitoring system and device
CN101763506B (en) * 2008-12-22 2013-07-31 Nec九州软件株式会社 Facial image tracking apparatus and method
CN102388608A (en) * 2009-04-10 2012-03-21 欧姆龙株式会社 Monitoring system, and monitoring terminal
CN102388608B (en) * 2009-04-10 2014-07-02 欧姆龙株式会社 Monitoring system, and monitoring terminal
CN101964872A (en) * 2009-07-23 2011-02-02 索尼公司 Composition determination device, imaging system, composition determination method, and program
CN103632141A (en) * 2013-11-28 2014-03-12 小米科技有限责任公司 Method, device and terminal equipment for figure identifying
CN110140153A (en) * 2017-01-23 2019-08-16 韩华泰科株式会社 Monitoring device and system
CN107704851A (en) * 2017-10-30 2018-02-16 歌尔股份有限公司 Character recognition method, Public Media exhibiting device, server and system
CN107704851B (en) * 2017-10-30 2021-01-15 歌尔股份有限公司 Character identification method, public media display device, server and system
CN111492371A (en) * 2017-12-14 2020-08-04 三菱电机株式会社 Retrieval system and monitoring system
CN111492371B (en) * 2017-12-14 2023-05-26 三菱电机株式会社 Search system and monitoring system

Also Published As

Publication number Publication date
JP2006221355A (en) 2006-08-24
KR20060090554A (en) 2006-08-14

Similar Documents

Publication Publication Date Title
CN1819652A (en) Watching device and system
US8064656B2 (en) Image processing apparatus, imaging apparatus, image processing method, and computer program
US8848985B2 (en) Face-image registration device, face-image registration method, face-image registration program, and storage medium
US9602783B2 (en) Image recognition method and camera system
JP5992276B2 (en) Person recognition apparatus and method
CN1931697A (en) Intelligent dispatcher for group controlled lifts based on image recognizing technology
KR20130139338A (en) Facial recognition
CN1519757A (en) Image searching device, key word providing method and program of same
US20080279425A1 (en) Human face recognition and user interface system for digital camera and video camera
CN1492280A (en) Omnibearing monitoring and control system, omnibearing monitoring and control method, omnibearing monitoring and control program and computer readable record medium
CN1909670A (en) Image representation and analysis method
CN101080691A (en) Information presentation device and information presentation method
CN1950722A (en) Individual detector and accompanying detection device
CN101032159A (en) Image processing device, method, and image processing program
CN1615478A (en) Cellular terminal image processing system, cellular terminal, and server
CN1905629A (en) Image capturing apparatus and image capturing method
JP2008165700A (en) Image processing device, electronic equipment, image processing system, image processing method, and program
CN1744086A (en) Image file processing method and related technique thereof
US20110007975A1 (en) Image Display Apparatus and Image Display Method
US9639778B2 (en) Information processing apparatus, control method thereof, and storage medium
CN1679323A (en) Video generation device, video generation method, and video storage device
JP2018005555A (en) Image processing device, information processing device and method, as well as program
JP2011155385A (en) Electronic device, content transmission method, and program
CN1643543A (en) Method for linking edges in stereo images into chains
CN105960801A (en) Enhancing video conferences

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20060816