WO2015037713A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2015037713A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
images
time
deletion
Prior art date
Application number
PCT/JP2014/074271
Other languages
English (en)
Japanese (ja)
Inventor
山田 洋志
白石 展久
エリック ラウ
エルザ ウォン
Original Assignee
日本電気株式会社 (NEC Corporation)
エヌイーシー ホンコン リミテッド (NEC Hong Kong Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 (NEC Corporation) and エヌイーシー ホンコン リミテッド (NEC Hong Kong Limited)
Priority to US15/021,246 (US9690978B2)
Priority to EP14843348.5A (EP3046075A4)
Priority to JP2015536647A (JP6568476B2)
Publication of WO2015037713A1
Priority to HK16112602.7A (HK1224418A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/173 Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07D HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D11/00 Devices accepting coins; Devices accepting, dispensing, sorting or counting valuable papers
    • G07D11/20 Controlling or monitoring the operation of devices; Data handling
    • G07D11/22 Means for sensing or detection

Definitions

  • The present invention is based on Japanese Patent Application No. 2013-190081 (filed on September 13, 2013), the entire description of which is incorporated herein by reference.
  • The present invention relates to an information processing apparatus, an information processing method, and a program.
  • The present invention also relates to a time difference measurement system, a time difference measurement method, an image processing device, an image processing method, and a program.
  • In particular, the present invention relates to an image processing technique for performing face detection and authentication (collation), and to time difference measurement to which the image processing technique is applied.
  • ATMs: Automatic Teller Machines
  • Face authentication technology is sometimes used to measure waiting time. Specifically, the waiting time is measured by extracting a face from the camera image at the start and at the end of the waiting period and determining whether the same person appears, using a face authentication technique.
  • As the face authentication technique, various known algorithms can be applied.
  • Patent Document 1 discloses the waiting time measurement system shown in FIG. 14 (corresponding to FIG. 3 of Patent Document 1).
  • In this waiting time measurement system, the first face authentication is performed at the entrance of the automatic transaction apparatus area (point A), and the second face authentication is performed in front of the automatic transaction apparatus (point B). The waiting time is then calculated from the difference between the two times at which the same person is authenticated.
  • However, waiting time measurement by face authentication has the following problems.
  • The first problem is that facial feature extraction must be performed on the faces contained in a large number of frames of the acquired images, which results in an enormous processing time. Facial feature extraction is a relatively computation-heavy process, and the amount of processing grows with the number of faces. Moreover, as the number of faces increases, the collation processing for determining whether two faces belong to the same person also takes longer. As a result, the amount of image data that can be processed decreases, and processing a large number of frame images requires a higher-performance processing device, making the system expensive.
  • The second problem is that a large data storage area is required to store face images.
  • The reason is that the face of the same person appears across a plurality of frames, so the same face is registered many times when facial feature extraction is performed. A large-capacity storage device is therefore required.
  • Patent Document 1 does not address the problem of processing time in facial feature extraction and matching.
  • Accordingly, an object of the present invention is to provide a time difference measurement system that can measure, at high speed and with a low-cost device, the time difference between appearances of the same person at different points.
  • In one aspect, the time difference measurement system includes the following components: a first camera; first face detection means (unit) that detects a face area from a plurality of frames of images taken by the first camera and cuts out the face area as a face image; first face feature extraction means (unit) that extracts a first face feature amount from the face image; and a storage unit that stores the first face feature amount in association with its shooting time.
  • The time difference measurement system also includes a second camera and second face detection means (unit) that detects a face area from a plurality of frames of images taken by the second camera and cuts out the face area as a face image.
  • It further includes second face feature extraction means (unit) that extracts a second face feature amount from the face image, and face collating means (unit) that collates the second face feature amount with the first face feature amount stored in the storage unit, sets the shooting time stored in the storage unit in association with a successfully collated first face feature amount as a first time, and sets the shooting time of the second face feature amount as a second time.
  • The system further includes time difference calculating means (unit) that calculates the time difference between the first time and the second time, together with at least one of the following: first duplicate deletion means (unit) that compares face images cut out from different frames among the face images cut out by the first face detection means (unit), determines whether they are face images of the same person, deletes some of the face images determined to be of the same person, and supplies the remaining face images to the first face feature extraction means (unit); and second duplicate deletion means (unit) that performs the same processing on the face images cut out by the second face detection means (unit), supplying the remainder to the second face feature extraction means (unit).
  • In another aspect, an image processing device processes a plurality of frames taken by first and second cameras and includes the following components: first face detection means (unit) that detects a face area from a plurality of frames of images taken by the first camera and cuts out the face area as a face image; first face feature extraction means (unit) that extracts a first face feature amount from the face image; and a storage unit that stores the first face feature amount. The image processing device also includes second face detection means (unit) that detects a face area from a plurality of frames of images taken by the second camera and cuts out the face area as a face image.
  • It further includes second face feature extraction means (unit) that extracts a second face feature amount from the face image, and face matching means (unit) that matches the second face feature amount against the first face feature amount stored in the storage unit. In addition, the image processing device includes at least one of first and second duplicate deletion means (units), which compare face images cut out from different frames among the face images cut out by the corresponding face detection means (unit), determine whether they are face images of the same person, delete some of the face images determined to be of the same person, and supply the remaining face images to the corresponding face feature extraction means (unit).
  • In another aspect, the time difference measurement method includes the following steps: a first face detection step of detecting a face area from a plurality of frames of images captured by a first camera and cutting out the face area as a face image, and a first face feature amount extraction step of extracting a first face feature amount from the face image.
  • The time difference measurement method also includes a second face detection step of detecting a face area from a plurality of frames of images taken by a second camera and cutting out the face area as a face image, and a second face feature amount extraction step of extracting a second face feature amount from the face image.
  • The time difference measurement method further includes a face collating step of collating the second face feature amount with the first face feature amount, setting the shooting time of a successfully collated first face feature amount as a first time and the shooting time of the second face feature amount as a second time, and a time difference calculating step of calculating the time difference between the first and second times.
  • The method additionally includes at least one of the following: a first duplicate deletion step of comparing face images cut out from different frames among the face images cut out in the first face detection step, determining whether they are face images of the same person, deleting some of the face images determined to be of the same person, and supplying the face images that remain to the first face feature amount extraction step; and a second duplicate deletion step that performs the same processing on the face images cut out in the second face detection step, supplying the remainder to the second face feature amount extraction step.
  • In another aspect, the image processing method includes the following steps: a first face detection step of detecting a face area from a plurality of frames of images taken by a first camera and cutting out the face area as a face image, and a first face feature amount extraction step of extracting a first face feature amount from the face image.
  • The image processing method also includes a second face detection step of detecting a face area from a plurality of frames of images taken by a second camera and cutting out the face area as a face image, and a second face feature amount extraction step of extracting a second face feature amount from the face image. Further, the image processing method includes a face matching step of matching the second face feature amount against the first face feature amount.
  • The image processing method additionally includes at least one of the following: a first duplicate deletion step that compares face images cut out from different frames among the face images cut out in the first face detection step, determines whether they are face images of the same person, deletes some of the face images determined to be of the same person, and supplies the face images that remain to the first face feature amount extraction step; and a second duplicate deletion step that performs the same processing on the face images cut out in the second face detection step, supplying the remainder to the second face feature amount extraction step.
  • In another aspect, a program causes a computer to execute: a first face detection process of detecting a face area from a plurality of frames of images photographed by a first camera and cutting out the face area as a face image; a first face feature amount extraction process of extracting a first face feature amount from the face image; a second face detection process of detecting a face area from a plurality of frames of images taken by a second camera and cutting out the face area as a face image; a second face feature amount extraction process of extracting a second face feature amount from the face image; a face matching process of collating the second face feature amount with the first face feature amount and, when the collation succeeds, setting the shooting time of the first face feature amount as a first time and the shooting time of the second face feature amount as a second time; and a time difference calculation process of calculating the time difference between the first and second times.
  • The program further causes the computer to execute at least one of the following: a first duplicate deletion process that compares face images cut out from different frames among the face images cut out in the first face detection process, determines whether they are face images of the same person, deletes some of the face images determined to be of the same person, and supplies the face images that remain to the first face feature amount extraction process; and a second duplicate deletion process that performs the same processing on the face images cut out in the second face detection process, supplying the remainder to the second face feature amount extraction process.
  • In another aspect, an information processing apparatus processes a plurality of frames taken by first and second cameras and includes: first face detection means that detects a face area from a plurality of frames of images taken by the first camera and cuts out the face area as a face image; first face feature extraction means that extracts a first face feature amount from the face image; a storage unit that stores the first face feature amount; second face detection means that detects a face area from a plurality of frames of images captured by the second camera and cuts out the face area as a face image; second face feature extraction means that extracts a second face feature amount from the face image; and face matching means that matches the second face feature amount against the first face feature amount stored in the storage unit.
  • The information processing apparatus further includes at least one of first and second duplicate deletion means, which compare face images cut out from different frames among the face images cut out by the corresponding face detection means, determine whether they are face images of the same person, delete some of the face images determined to be of the same person, and supply the remaining face images to the corresponding face feature extraction means.
  • In another aspect, an information processing method includes: a first face detection step of detecting a face area from a plurality of frames of images taken by a first camera and cutting out the face area as a face image; a first face feature amount extraction step of extracting a first face feature amount from the face image; a second face detection step of detecting a face area from a plurality of frames of images taken by a second camera and cutting out the face area as a face image; a second face feature amount extraction step of extracting a second face feature amount from the face image; and a face matching step of matching the second face feature amount against the first face feature amount.
  • The method further includes at least one of the following: a first duplicate deletion step of comparing face images cut out from different frames among the face images cut out in the first face detection step, determining whether they are face images of the same person, deleting some of the face images determined to be of the same person, and supplying the face images that remain to the first face feature amount extraction step; and a second duplicate deletion step that performs the same processing on the face images cut out in the second face detection step, supplying the remainder to the second face feature amount extraction step.
  • In another aspect, a program causes a computer to execute: a first face detection process of detecting a face area from a plurality of frames of images captured by a first camera and cutting out the face area as a face image; a first face feature amount extraction process of extracting a first face feature amount from the face image; a second face detection process of detecting a face area from a plurality of frames of images taken by a second camera and cutting out the face area as a face image; a second face feature amount extraction process of extracting a second face feature amount from the face image; and a face matching process of collating the second face feature amount with the first face feature amount.
  • The program further causes the computer to execute at least one of the following: a first duplicate deletion process that compares face images cut out from different frames among the face images cut out in the first face detection process, determines whether they are face images of the same person, deletes some of the face images determined to be of the same person, and supplies the face images that remain to the first face feature amount extraction process; and a second duplicate deletion process that performs the same processing on the face images cut out in the second face detection process, supplying the remainder to the second face feature amount extraction process.
  • The program can also be provided as a program product recorded on a non-transitory computer-readable storage medium.
  • According to the present invention, it is possible to provide a time difference measurement system that can measure, at high speed and with a low-cost device, the time difference between the photographing of the same person at different points.
  • As shown in FIG. 1, the time difference measurement system 100 in one embodiment measures the time difference Δt between the moments when the same person is photographed by the first camera 101 and by the second camera 106.
  • The time difference measurement system 100 includes the first camera 101; first face detection means (unit) 102 that detects a face area from a plurality of frames of images taken by the first camera 101 and cuts out the face area as a face image K11; first face feature extraction means (unit) 104 that extracts a first face feature amount T1 from the face image; and a storage unit 105 that stores the first face feature amount T1 in association with its shooting time.
  • The time difference measurement system 100 also includes the second camera 106; second face detection means (unit) 107 that detects a face area from a plurality of frames of images taken by the second camera 106 and cuts out the face area as a face image K21; second face feature extraction means (unit) 109 that extracts a second face feature amount T2 from the face image; face collating means (unit) 110 that collates the second face feature amount T2 with the first face feature amount T1 stored in the storage unit 105, sets the shooting time stored in the storage unit 105 in association with a successfully collated first face feature amount as a first time t1, and sets the shooting time of the second face feature amount T2 as a second time t2; and time difference calculating means (unit) 112 that calculates the time difference Δt between the two times.
  • Further, the time difference measurement system 100 includes at least one of the following: first duplicate deletion means (unit) 103 that compares face images cut out from different frames among the face images K11 cut out by the first face detection means (unit) 102, determines whether they are face images of the same person, deletes some of the face images determined to be of the same person, and supplies the face images K12 that remain to the first face feature extraction means (unit) 104; and second duplicate deletion means (unit) 108 that performs the same processing on the second camera side, supplying the remaining face images K22 to the second face feature extraction means (unit) 109.
  • With this configuration, the amount of image data subjected to face feature extraction can be greatly reduced. This shortens the time spent on face feature extraction, a relatively computation-intensive process, and makes it possible to provide a time difference measurement system that measures the time difference quickly, using face detection and matching, with a low-cost device.
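  • As an illustration only, the following self-contained Python sketch mirrors this flow with hypothetical stubs (the detector output is hard-coded and the feature extractor is a placeholder); its sole point is that duplicate deletion runs before the expensive feature extraction:

```python
# Toy sketch of the FIG. 1 flow. All names are illustrative; a real system
# would use an actual face detector and feature extractor.

def delete_duplicates(faces, max_px=30, max_dt=1.0):
    """Keep one face image per same-person cluster, judged by closeness of
    cut-out position and shooting time (the criteria described later)."""
    kept = []
    for face in sorted(faces, key=lambda f: f["time"]):
        is_dup = any(abs(face["x"] - k["x"]) <= max_px and
                     abs(face["y"] - k["y"]) <= max_px and
                     abs(face["time"] - k["time"]) <= max_dt
                     for k in kept)
        if not is_dup:
            kept.append(face)
    return kept

def extract_feature(face):
    # Placeholder for the computation-heavy face feature extraction.
    return (face["x"], face["y"])

# Face images K11 cut out by face detection: the first two belong to the
# same person seen in consecutive frames.
faces_k11 = [{"x": 100, "y": 80, "time": 0.0},
             {"x": 104, "y": 81, "time": 0.2},
             {"x": 400, "y": 90, "time": 0.1}]
faces_k12 = delete_duplicates(faces_k11)          # duplicate deletion 103
features_t1 = [(extract_feature(f), f["time"]) for f in faces_k12]
print(f"{len(faces_k11)} detections -> {len(faces_k12)} feature extractions")
```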
  • The first and second duplicate deletion means (units) (103, 108) preferably determine whether face images belong to the same person using the cut-out positions of the face images (K11, K21) (for example, the vertical and horizontal positions in FIG. 7) and the shooting times of the face images (for example, the shooting time in FIG. 7).
  • Further, the first and second duplicate deletion means (units) may select the face images to delete based on the shooting times of the plurality of face images determined to be of the same person. For example, as in the face image detection information 20 of FIG. 7, among the face images determined to be of the same person (sharing the same person ID (Identifier)), the one with the earliest shooting time is kept and those with the second and subsequent shooting times are deleted (their status flag is set to 1).
  • Alternatively, the first and second duplicate deletion means (units) (103, 108) may select the face images to delete based on the image quality of each of the plurality of face images determined to be of the same person.
  • The first and second duplicate deletion means (units) (23 and 28 in FIG. 9) may be provided with a deletion condition table 303, in which one or more deletion conditions for deleting face images are registered, and a deletion condition selection criterion table 304, in which criteria for selecting the deletion condition to use from among the deletion conditions registered in the deletion condition table are described.
  • When a plurality of first face feature amounts T1 are successfully collated against one second face feature amount T2, the face collating means (unit) (110 in FIG. 1; 11 in FIG. 3) preferably selects one of the successfully collated first face feature amounts T1 and sets the shooting time corresponding to the selected first face feature amount as the first time t1.
  • As shown in FIG. 2, the image processing apparatus 200 processes a plurality of frames of images taken by the first and second cameras (101, 106) and includes the following components: first face detection means (unit) 102 that detects a face area from a plurality of frames of images taken by the first camera 101 and cuts out the face area as a face image K11; first face feature extraction means (unit) 104 that extracts a first face feature amount T1 from the face image; and a storage unit 205 that stores the first face feature amount T1.
  • The image processing apparatus 200 also includes second face detection means (unit) 107 that detects a face area from a plurality of frames of images taken by the second camera 106 and cuts out the face area as a face image K21; second face feature extraction means (unit) 109 that extracts a second face feature amount T2 from the face image; and face collating means (unit) 210 that collates the second face feature amount T2 with the first face feature amount T1 stored in the storage unit 205.
  • Further, the image processing apparatus 200 includes at least one of the following: first duplicate deletion means (unit) 103 that compares face images cut out from different frames among the face images K11 cut out by the first face detection means (unit) 102, determines whether they are face images of the same person, deletes some of the face images determined to be of the same person, and supplies the face images K12 that remain to the first face feature extraction means (unit) 104; and second duplicate deletion means (unit) 108 that compares face images cut out from different frames among the face images K21 cut out by the second face detection means (unit) 107, determines whether they are face images of the same person, deletes some of those determined to be of the same person, and supplies the remaining face images K22 to the second face feature extraction means (unit) 109.
  • That is, the image processing apparatus 200 in FIG. 2 corresponds to the time difference measurement system in FIG. 1 without the first camera 101, the second camera 106, and the time difference calculation means (unit) 112.
  • Further, the storage unit 205 stores only the first feature amount T1 (it does not store the shooting time), and the face matching means (unit) 210 determines whether the person is the same by collating the second feature amount T2 with the first feature amount T1.
  • The time difference measurement method in one embodiment measures the time difference Δt between the moments when the same person is photographed by two cameras (for example, the first camera 101 and the second camera 106).
  • As shown in FIG. 1 or FIG. 4, the time difference measurement method includes the following steps: a first face detection step (S302) of detecting a face area from a plurality of frames of images taken by the first camera 101 (S301) and cutting out the face area as a face image K11, and a first face feature amount extraction step (S304) of extracting the first face feature amount T1 from the face image.
  • The time difference measurement method also includes a second face detection step (S307) of detecting a face area from a plurality of frames of images photographed by the second camera 106 (S306) and cutting out the face area as a face image K21, and a second face feature amount extraction step (S309) of extracting the second face feature amount T2 from the face image.
  • The time difference measurement method further includes a face collating step (S310) of collating the second face feature amount T2 with the first face feature amount, setting the shooting time of a successfully collated first face feature amount T1 as a first time t1 and the shooting time of the second face feature amount T2 as a second time t2, and a time difference calculating step (S311) of calculating the time difference Δt between the first and second times (t1, t2).
  • Further, the time difference measurement method includes at least one duplicate deletion step (S303, S308) that compares face images cut out from different frames among the cut-out face images (for example, the face images K11 cut out in the first face detection step (S302)), determines whether they are face images of the same person, deletes some of those determined to be of the same person, and supplies the remainder to the corresponding face feature amount extraction step.
  • The image processing method includes the steps of the time difference measurement method except for the time difference calculating step (S311), and its face matching step performs only the operation of collating the second feature amount T2 with the first feature amount T1. That is, as shown in FIG. 2 or FIG. 4, the image processing method includes a first face detection step (S302) of detecting a face area from a plurality of frames of images photographed by the first camera 101 (S301) and cutting out the face area as a face image K11, and a first face feature amount extraction step (S304) of extracting the first face feature amount T1 from the face image.
  • The image processing method also includes a second face detection step (S307) of detecting a face area from a plurality of frames of images taken by the second camera 106 (S306) and cutting out the face area as a face image, and a second face feature amount extraction step (S309) of extracting the second face feature amount T2 from the face image.
  • Further, the image processing method includes a face matching step (the matching operation only, in S310) of matching the second face feature amount T2 against the first face feature amount T1.
  • The image processing method additionally includes at least one of the following: a first duplicate deletion step (S303) that compares face images cut out from different frames among the face images K11 cut out in the first face detection step (S302), determines whether they are face images of the same person, deletes some of those determined to be of the same person, and supplies the remainder to the first face feature amount extraction step (S304); and a second duplicate deletion step (S308) that performs the same processing on the face images cut out in the second face detection step (S307), supplying the remainder to the second face feature amount extraction step (S309).
  • FIG. 3 is a block diagram illustrating a configuration of the time difference measurement system 10 according to the first embodiment.
  • The time difference measurement system 10 includes a monitoring camera (start side) 1, a monitoring camera (end side) 6, and an image processing device 30.
  • The time difference measurement system 10 extracts image data of the same person from the image data acquired by the monitoring camera (start side) 1 and the monitoring camera (end side) 6, and thereby measures the time difference between the moments when the same person was photographed by the two cameras.
  • The time difference measurement system 10 can be suitably used for measuring waiting time, for example.
  • The monitoring camera (start side) 1 is installed at the start point (point A) of the waiting time.
  • The start point (point A) is, for example, the end of a queue, the entrance to a waiting room, a reception desk, or the vicinity of a numbered-ticket issuing machine.
  • The monitoring camera (end side) 6 is installed at the end point (point B) of the waiting time.
  • The end point (point B) is, for example, a service window, a device such as an ATM, the exit of a waiting room, or the vicinity of the entrance to the place where a service is received.
  • Surveillance cameras 1 and 6 are provided with an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal-Oxide Semiconductor) sensor, and output a plurality of frame images captured at a predetermined frame rate to the image processing device 30.
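  • As a hedged illustration (assuming OpenCV, with a camera at index 0 standing in for the surveillance camera), a capture loop that hands frames to the face detection stage might look like this:

```python
import cv2  # OpenCV is an assumption for illustration, not mandated by the text

cap = cv2.VideoCapture(0)                 # stand-in for surveillance camera 1 or 6
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # the predetermined frame rate
while cap.isOpened():
    ok, frame = cap.read()                # one frame image
    if not ok:
        break
    # ... pass `frame` (and its capture time) to face detection (S302/S307) ...
cap.release()
```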
  • FIG. 3 is a block diagram showing the image processing apparatus 30 in units of functions.
  • The image processing apparatus 30 includes face detection means 2, duplicate deletion means 3, and face feature extraction means 4, which process the image data acquired from the monitoring camera 1.
  • It also includes face detection means 7, duplicate deletion means 8, and face feature extraction means 9, which process the image data acquired from the monitoring camera 6. Details of these are described later.
  • The face feature database (face feature DB (Database)) 5 stores the face feature amount T1 calculated by the face feature extraction means 4 in association with the photographing time of the face feature amount T1.
  • A hard disk, an SSD (Solid State Drive), a memory, or the like can be used as the storage device.
  • When this description refers to the photographing time of a face image or of a facial feature amount, it means the photographing time of the image from which the face image or facial feature amount was derived.
  • The image processing apparatus 30 includes face matching means 11 that receives the face feature amount T2 output from the face feature extraction means 9 and collates it with the face feature amount T1 stored in the face feature DB 5.
  • A known face authentication processing technique can be applied to the collation.
  • The collation result database (collation result DB) 12 registers the photographing times of the face feature amounts (T1, T2) input to the face matching means 11 when the collation succeeds.
  • As with the face feature DB 5, a hard disk, an SSD (Solid State Drive), a memory, or the like can be used as the storage device.
  • The image processing apparatus 30 includes time difference calculation means 13 that calculates the time difference Δt between the photographing times (t1, t2) of the successfully collated face feature amounts (T1, T2) registered in the collation result DB 12.
  • Here, the time difference Δt corresponds to the waiting time.
  • The image processing apparatus 30 also includes result output means 14 that outputs the calculated time difference Δt to a monitor or storage device included in the image processing apparatus 30, or to another system.
  • Each processing function of the image processing device 30 described above is stored in a memory (not shown) or the like as a program, and is called and executed by a CPU (Central Processing Unit) (not shown) provided in the image processing device 30. Note that some or all of the processing functions may be implemented in hardware.
  • FIG. 4 is a flowchart showing the operation of the time difference measurement system 10 according to the first embodiment.
  • FIG. 4A is a flowchart of the video processing at the start-side point (point A).
  • First, a camera image is acquired by the surveillance camera 1 shooting video at a predetermined frame rate (S301).
  • Next, the face detection means 2 detects face images in each frame (S302).
  • Face detection finds the face-area portions of each frame and cuts them out as face images; a known face detection processing technique can be applied. A frame may contain no detected face area, or it may contain one or more detected faces.
  • When a face area is detected, a rectangular frame area is cut out and output as a face image, as shown in FIG. 6 (if a plurality of faces are detected, a plurality of face images are cut out from the single frame and output).
  • The face detection process imposes a small computational load compared with the face matching process described later, and can run in real time at the shooting frame rate.
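  • As one concrete, purely illustrative realization of such a known technique, a Haar-cascade detector that cuts out the rectangular face areas of FIG. 6, together with their upper-left positions, could be written as follows (the cascade file ships with OpenCV; this is not necessarily the detector used by the embodiment):

```python
import cv2

# One known face detection technique: OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def cut_out_faces(frame):
    """Return (face_image, upper_left_position) pairs for one frame;
    zero, one, or several faces may be detected."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(frame[y:y + h, x:x + w], (x, y)) for (x, y, w, h) in boxes]
```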
  • The duplicate deletion means 3 compares face images cut out from different frames among the face images cut out from each frame (K11 in FIG. 3) and determines whether they are face images of the same person. Some of the face images determined to be of the same person are then deleted (S303). The face feature extraction means 4 extracts a face feature amount (T1 in FIG. 3) from each face image left without being deleted (K12 in FIG. 3) (S304). The face feature amount (T1 in FIG. 3) is then registered in the face feature DB 5 together with the face position and the photographing time (S305).
  • Here, the position of the upper-left corner of the face-area rectangular frame in FIG. 6 is used as the face position.
  • However, the face position is not limited to this; any representation that indicates the position of the face in the frame image may be used.
  • For example, the position of the center of gravity of the face-area rectangular frame may be used.
  • FIG. 4B is a flowchart of the video processing at the end point (point B).
  • On the end side, the monitoring camera 6 acquires a camera image by shooting video at a predetermined frame rate (S306).
  • The subsequent processes S307 to S309 are the same as the processes S302 to S304 in FIG. 4A.
  • Through the processing of S307 to S309, a face feature amount (T2 in FIG. 3) is calculated from the face images (K22 in FIG. 3) that remain after duplicate deletion is performed on the face images (K21 in FIG. 3) cut out from the camera images of the surveillance camera 6.
  • The face collating means 11 collates the face feature amount extracted in S309 (T2 in FIG. 3) with the face feature amounts (T1 in FIG. 3) stored in the face feature DB 5.
  • When the collation succeeds, the photographing time t1 registered together with the face feature amount (T1 in FIG. 3) in the face feature DB 5 and the photographing time t2 of the face feature amount (T2 in FIG. 3) are registered in the collation result DB 12 (S310).
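  • A minimal sketch of this collation step, assuming feature amounts are vectors compared by cosine similarity (the text leaves the actual matching algorithm to known face authentication techniques) and an illustrative acceptance threshold:

```python
import numpy as np

def similarity(t1: np.ndarray, t2: np.ndarray) -> float:
    """Cosine similarity between two face feature amounts."""
    return float(np.dot(t1, t2) / (np.linalg.norm(t1) * np.linalg.norm(t2)))

def collate(t2_feature, t2_time, face_feature_db, threshold=0.75):
    """Match T2 against each stored (T1, t1); on success, produce a record
    like a row of the collation result DB, from which the waiting time
    Δt = t2 - t1 follows directly."""
    rows = []
    for t1_feature, t1_time in face_feature_db:
        s = similarity(t1_feature, t2_feature)
        if s >= threshold:
            rows.append({"t1": t1_time, "t2": t2_time,
                         "similarity": round(s, 2),
                         "dt": t2_time - t1_time})
    return rows
```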
  • The time difference calculation means 13 then calculates the time difference Δt between t1 and t2 (S311), and the result output means 14 displays the calculated time difference Δt on a monitor or the like provided in the image processing apparatus 30.
  • First, the duplicate deletion means 3 receives a face image (K11 in FIG. 3), the position of the face image, and the shooting time from the face detection means 2 (S401).
  • Next, the position and shooting time of the received face image (K11 in FIG. 3) are compared with those of the face images in past frames, and it is determined whether the differences in position and time satisfy specified conditions (S402, S403).
  • For example, the following conditions can be used: the difference in the upper-left position of the face images is within a specified number of pixels (Condition 1), and the difference in shooting time is within a specified time (Condition 2).
  • In Condition 1, "the upper-left position of the face image" means the position of the upper-left corner of the rectangular frame used to cut the face image out of the frame image.
  • FIG. 6 shows a change in the position of a face image input to the duplicate deletion means 3.
  • In this example, the face position moves to the left. If the movement amount 50 shown in FIG. 6 (corresponding to the difference in the upper-left positions of the face images) and the time difference are both within the specified values, the faces are determined to belong to the same person. In FIG. 6, the two faces appear in different frames but are shown overlapping for explanation.
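  • In code, the same-person determination of S402 and S403 reduces to a simple predicate; the threshold values below are illustrative stand-ins for the "specified" values, which the text does not give:

```python
def is_same_person(face_a, face_b, max_px=30, max_dt=1.0):
    """Condition 1: upper-left cut-out positions within `max_px` pixels.
    Condition 2: shooting times within `max_dt` seconds."""
    (xa, ya), (xb, yb) = face_a["pos"], face_b["pos"]
    return (abs(xa - xb) <= max_px and
            abs(ya - yb) <= max_px and
            abs(face_a["time"] - face_b["time"]) <= max_dt)
```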
  • When the faces are determined to belong to the same person (S404), it is determined which face images are to be deleted (S405).
  • For example, the following determination conditions can be used: (1) delete all but the first face image taken (or a specified number from the beginning); (2) delete all but the last face image taken (or a specified number from the end); (3) delete all but the image with the best image quality (or a specified number of images with the highest image quality); or (4) keep a specified number of images whose image quality is equal to or higher than a specified value and delete the others.
  • Here, image quality is a measure of whether the image is advantageous for face feature extraction.
  • For example, criteria such as the size of the face image and the degree of face-likeness are used.
  • The image quality value is normalized so that the maximum value (maximum image quality) is 1.0.
  • The image quality may be measured within the duplicate deletion means 3, or a measurement made outside the duplicate deletion means 3 may be supplied to it.
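  • The four determination conditions above might be sketched as follows; the rule names and the quality floor are assumptions for illustration, and `group` holds the face images judged to belong to one person, each with a shooting time and a quality normalized to 1.0:

```python
def faces_to_keep(group, rule="first", n=1, min_quality=0.8):
    """Return the face images to keep; the rest are the deletion
    targets determined in S405."""
    by_time = sorted(group, key=lambda f: f["time"])
    if rule == "first":                       # condition (1)
        return by_time[:n]
    if rule == "last":                        # condition (2)
        return by_time[-n:]
    if rule == "best_quality":                # condition (3)
        return sorted(group, key=lambda f: f["quality"], reverse=True)[:n]
    # condition (4): up to n images at or above the specified quality
    return [f for f in by_time if f["quality"] >= min_quality][:n]
```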
  • The face images determined not to be deleted (those other than the deletion targets determined in S405; K12 in FIG. 3) are output to the face feature extraction means 4 (S408).
  • In the above description, the processing by the face feature extraction means 4 is performed after that of the duplicate deletion means 3.
  • However, the processing of the face feature extraction means 4 may instead be started simultaneously with the start of the processing of the duplicate deletion means 3. In that case, depending on the conditions, a face image whose face feature extraction has already been processed may be deleted retroactively.
  • FIG. 7 shows an example of the face image detection information 20 registered in the temporary storage area.
  • For each face image detected by the face detection means 2, FIG. 7 registers a data number, a person ID, the face image position (the vertical and horizontal positions of the upper-left corner of the rectangular frame) and size (height, width), the shooting time, and a status flag indicating the state of the data.
  • In this example, a deletion condition is set such that the face image that appears first is kept and the second and subsequent face images are deleted. For instance, of the three face images with person ID 00001, the face image taken at 12:00:01.5 is kept as the face feature extraction target, and the face images taken at 12:00:01.7 and 12:00:01.9 are deleted.
  • The duplicate deletion means 3 sets the status flag of data to be deleted to 1, and sets the status flag of data to be subjected to feature extraction processing to 0.
  • Alternatively, rather than deleting the data immediately, the status flag of data determined to be deleted by the duplicate deletion means 3 may simply be set to 1.
  • A status flag indicating an undetermined state may also be provided, or the data may be deleted without using status flags at all.
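  • Expressed as data, the FIG. 7 records and the status-flag handling could look like this (the field names and the position/size values are illustrative):

```python
# Status flag: 0 = subject to feature extraction, 1 = to be deleted.
detection_info = [
    {"no": 1, "person_id": "00001", "pos": (120, 40), "size": (64, 64),
     "time": "12:00:01.5", "status": 0},
    {"no": 2, "person_id": "00001", "pos": (112, 41), "size": (64, 64),
     "time": "12:00:01.7", "status": 1},
    {"no": 3, "person_id": "00001", "pos": (105, 42), "size": (64, 64),
     "time": "12:00:01.9", "status": 1},
]
extraction_targets = [r for r in detection_info if r["status"] == 0]
```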
  • The duplicate deletion means 8 is the same as the duplicate deletion means 3, so its description is omitted.
  • However, the condition settings of the duplicate deletion means 8 (for example, the deletion conditions) may differ from those of the duplicate deletion means 3.
  • FIG. 8 shows an example of the collation results registered in the collation result DB 12.
  • Information is registered in the collation result DB 12 when a collation succeeds.
  • The data in the first row indicates that the face image on the monitoring camera 1 side (person ID 00001, shooting time 12:00:01.5) and the face image on the monitoring camera 6 side (person ID 01001, shooting time 12:50:13.6) were successfully collated with a similarity of 0.80.
  • The data in the second row indicates that the face image on the monitoring camera 1 side (person ID 00003, shooting time 12:00:01.5) and the face image on the monitoring camera 6 side (person ID 01005, shooting time 12:55:40.1) were successfully collated with a similarity of 0.92.
  • When a plurality of face feature amounts are retrieved by the collation, the following processing is performed to select one of them. Which of the following processes to adopt is preferably determined according to the application and the conditions at the shooting locations (points A and B).
  • (A) When face images with different person IDs are retrieved: (A-1) select the face feature amount of the person ID with the highest similarity; (A-2) select the face feature amount with the earliest shooting time among those retrieved; or (A-3) select the face feature amount with the latest shooting time among those retrieved.
  • (B) When a plurality of face images with the same person ID are retrieved: (B-1) select the face feature amount with the highest similarity; (B-2) select the face feature amount with the earliest shooting time among those retrieved; or (B-3) select the face feature amount with the latest shooting time among those retrieved.
  • (C) When a plurality of face feature amounts are retrieved, a plurality of combinations may be registered in the collation result DB 12, and one of them may be selected by the time difference calculation means 13 (applying (A-1) to (A-3) or (B-1) to (B-3)).
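  • A compact sketch of these selection rules, where `candidates` are the successfully collated records (each carrying a similarity and the camera-1 shooting time t1) and the policy names are illustrative:

```python
def choose_one(candidates, policy="highest_similarity"):
    """Implements (A-1)/(B-1), (A-2)/(B-2), and (A-3)/(B-3)."""
    if policy == "highest_similarity":           # (A-1) / (B-1)
        return max(candidates, key=lambda c: c["similarity"])
    if policy == "earliest":                     # (A-2) / (B-2)
        return min(candidates, key=lambda c: c["t1"])
    if policy == "latest":                       # (A-3) / (B-3)
        return max(candidates, key=lambda c: c["t1"])
    raise ValueError(policy)
```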
  • As described above, in the first embodiment, the time difference between the photographing of the same person at different points is measured at high speed with a low-cost device.
  • In particular, the waiting time can be measured at high speed with a low-cost device, because duplicate deletion processing, which requires little computation time, reduces the number of face images subjected to the computation-heavy facial feature extraction processing.
  • The time difference measurement system 10 of the first embodiment can be modified to provide an image processing apparatus that recognizes that the same person has passed through different points A and B.
  • The image processing apparatus 200 in FIG. 2 corresponds to this image processing apparatus.
  • FIG. 9 is a block diagram showing a configuration of the duplicate deletion means 23 in the second embodiment.
  • The duplicate deletion means 23 includes a same-person determination unit 301, a deletion condition selection unit 302, a deletion processing unit 305, a deletion condition table 303, and a deletion condition selection criterion table 304. Since the duplicate deletion means 28 is the same as the duplicate deletion means 23, the following description covers the duplicate deletion means 23 and the description of the duplicate deletion means 28 is omitted.
  • The deletion condition table 303 can describe one or more deletion conditions for face images.
  • FIG. 10 is an example of the deletion condition table 303.
  • Deletion condition number 1 keeps, in order of appearance, up to four face images whose image quality is 0.7 or higher among the face images determined to be of the same person, and deletes the other face images.
  • Deletion condition number 2 keeps, in order of appearance, only one face image whose image quality is 0.8 or higher among the face images determined to be of the same person, and deletes the other face images.
  • Thus, deletion condition number 2 deletes more face images than deletion condition number 1.
  • As elements of a deletion condition, a face appearance position, the time difference from the immediately preceding face image, or the like can also be used.
  • The deletion condition selection criterion table 304 describes criteria for selecting one of the deletion conditions described in the deletion condition table 303.
  • FIG. 11 is an example of the deletion condition selection criterion table 304.
  • Selection criterion number 1 indicates that deletion condition number 1 in the deletion condition table 303 is selected when the number of face images sent from the face detection means 2 to the duplicate deletion means 3 is 10 or fewer per frame.
  • Otherwise, deletion condition number 2 in the deletion condition table 303 is selected. In this way, when the number of face images increases, the processing load of the later stages can be reduced by increasing the number of face images deleted.
  • As elements of the selection criteria, in addition to the number of face images, the shooting time, the usage status of resources (CPU and memory), and the like can be used.
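  • The two tables can be modeled directly; the threshold and count values below mirror the FIG. 10 and FIG. 11 examples, while the field names are illustrative assumptions:

```python
# Deletion condition table (FIG. 10).
deletion_conditions = {
    1: {"min_quality": 0.7, "keep": 4},   # keep up to 4 images of quality >= 0.7
    2: {"min_quality": 0.8, "keep": 1},   # keep 1 image of quality >= 0.8
}

# Deletion condition selection criterion table (FIG. 11).
def select_deletion_condition(faces_per_frame):
    """Selection criterion 1: condition 1 when 10 or fewer faces arrive
    per frame; otherwise condition 2, which deletes more face images and
    lightens the load of the later stages."""
    return deletion_conditions[1 if faces_per_frame <= 10 else 2]
```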
  • The same-person determination unit 301 performs the processing corresponding to S402 and S403 in FIG. 5: it searches past data and determines whether an acquired face image is a face image of the same person.
  • The deletion condition selection unit 302 refers to the deletion condition table 303 and the deletion condition selection criterion table 304 described above and selects the deletion condition to use. The deletion processing unit 305 then deletes, based on the selected deletion condition, some of the face images determined to be of the same person among the face images sent to the duplicate deletion means 23.
  • In the second embodiment, the deletion condition table 303 and the deletion condition selection criterion table 304 allow the deletion conditions of the duplicate deletion means 23 and 28 to be changed. Thereby, for example, the processing load and the amount of face image data can be adjusted according to the monitored locations (points A and B) and the state of the system.
  • The first and second embodiments can be modified as described below.
  • First, only one of the two duplicate deletion means may be provided: in the first embodiment, only one of the duplicate deletion means 3 and 8; in the second embodiment, only one of the duplicate deletion means 23 and 28. Even with a single duplicate deletion means, the load on the computation-heavy face collation means 11 can be reduced to some extent.
  • The face feature amount output by the face feature extraction means 9 may also be temporarily stored in a storage device, with the collation process and the time difference calculation process executed afterwards. If real-time processing is not required, these processes may be executed, for example, as idle-time processing or nighttime batch processing.
  • The face detection means 2 and 7 may also calculate the image quality of each face image (its size, degree of face-likeness, etc.) and output it together with the face image.
  • In that case, the duplicate deletion means (3 and 8, or 23 and 28) use the image quality calculated by the face detection means 2 and 7 to determine which face images to delete.
  • The image processing apparatus 30 in FIG. 3 may also be divided into two apparatuses.
  • The first device carries the functions for processing the images of the monitoring camera 1 (the face detection means 2, duplicate deletion means 3, face feature extraction means 4, and face feature DB 5).
  • The second device carries the functions for processing the images of the monitoring camera 6 (the face detection means 7, duplicate deletion means 8, and face feature extraction means 9).
  • The remaining components, namely the face collating means 11, the collation result DB 12, the time difference calculating means 13, and the result output means 14, are mounted on one of the first and second devices, and that device performs the collation processing, the time difference calculation processing, and so on.
  • The first and second devices are communicably connected to each other (wirelessly or by wire) and are configured to transmit the information necessary for the collation processing, time difference calculation processing, and so on (the extracted face feature amounts, shooting time information, etc.) to the device that executes those processes.
  • Alternatively, the image processing apparatus 30 in FIG. 3 may be divided into three apparatuses.
  • The first device carries the functions for processing the images of the monitoring camera 1 (the face detection means 2, duplicate deletion means 3, face feature extraction means 4, and face feature DB 5).
  • The second device carries the functions for processing the images of the monitoring camera 6 (the face detection means 7, duplicate deletion means 8, and face feature extraction means 9).
  • The remaining components, namely the face matching means 11, the matching result DB 12, the time difference calculating means 13, and the result output means 14, may be mounted on the third device, which performs the matching processing, the time difference calculation processing, and so on.
  • In this case, the first device and the second device are each communicably connected to the third device (wirelessly or by wire) and transmit the extracted facial feature amounts, shooting time information, and so on to the third device.
  • As with the first embodiment, the time difference measurement system of the second embodiment can be modified and provided as an image processing apparatus that recognizes that the same person has passed through different points A and B.
  • FIG. 12 is a diagram schematically showing a waiting time measurement system according to the third embodiment; it shows an example of a system for measuring the waiting time in the queue at the service counter 31.
  • A monitoring camera 1 that photographs the entrance 32 to the waiting area and a monitoring camera 6 that photographs the vicinity of the service counter 31 are installed.
  • As the image processing apparatus that processes the camera images, the image processing apparatus of the first embodiment (30 in FIG. 3) is used.
  • The deletion condition set in the duplicate deletion means 3 keeps, as the face feature extraction target, the first face image among a plurality of face images determined to be of the same person, and deletes the others.
  • Conversely, the deletion condition set in the duplicate deletion means 8 keeps, as the face feature extraction target, the last face image among a plurality of face images determined to be of the same person, and deletes the others.
  • FIG. 13 is a diagram schematically illustrating a waiting time measurement system according to the fourth embodiment.
  • FIG. 13 shows an example of a system for measuring the waiting time from reception at the reception desk 43 until the start of service (for example, the start of an examination in the medical office 41).
  • A monitoring camera 1 that photographs the reception desk 43 and a monitoring camera 6 that photographs the entrance of the medical office 41 are installed.
  • As the image processing apparatus that processes the camera images of the monitoring cameras 1 and 6, the image processing apparatus of the first embodiment (30 in FIG. 3) is used.
  • In order to exclude the time spent waiting at reception, the deletion condition set in the duplicate deletion means 3 keeps, as the face feature extraction target, the last face image among a plurality of face images determined to be of the same person, and deletes the others. Meanwhile, since little stagnation is expected at the entrance of the medical office 41, the deletion condition set in the duplicate deletion means 8 keeps, as the face feature extraction target, the first face image among a plurality of face images determined to be of the same person, and deletes the others.
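  • Using the keep-rule sketch shown earlier, the third and fourth embodiments differ only in per-camera settings; the dictionary form below is purely illustrative:

```python
# Third embodiment (FIG. 12): queue at a service counter.
third_embodiment = {"dedup_3": {"rule": "first"},   # entrance 32: keep first shot
                    "dedup_8": {"rule": "last"}}    # counter 31: keep last shot

# Fourth embodiment (FIG. 13): reception desk to medical office.
fourth_embodiment = {"dedup_3": {"rule": "last"},   # reception 43: exclude
                                                    # reception waiting time
                     "dedup_8": {"rule": "first"}}  # office entrance: little
                                                    # stagnation expected
```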
  • In this way, the waiting time measurement system can measure the desired waiting time with high accuracy by changing the settings of the duplicate deletion means according to the application and the conditions at the shooting locations.
  • In the third and fourth embodiments, the deletion condition table 303 and the deletion condition selection criterion table 304 of the second embodiment may also be used.
  • The time difference measurement system of the present invention is not limited to waiting time measurement; it can be applied to various uses in which the time difference between photographs of the same person taken by cameras installed at two different points is measured.
  • Each process executed by the image processing apparatus 30 (for example, S302 to S305 and S307 to S311 in FIG. 4) is stored as a program in a storage device (not shown) provided in the image processing apparatus 30, and is called and executed by a CPU (not shown) provided in the image processing apparatus 30.
  • The program can be downloaded via a network or updated using a storage medium storing the program.
  • A time difference measurement system wherein the first and second duplicate deletion means select the face images to delete based on the shooting times of the plurality of face images determined to be of the same person.
  • The time difference measurement system according to supplementary note 1 or 2, wherein the first and second duplicate deletion means select the face images to delete based on the image quality of each of the plurality of face images determined to be of the same person.
  • The time difference measurement system according to any one of appendices 1 to 4, wherein the first and second duplicate deletion means each comprise: a deletion condition table in which one or more deletion conditions for deleting face images are registered; and a deletion condition selection criterion table in which criteria for selecting the deletion condition to be used from among those registered in the deletion condition table are registered. (One possible realization of these two tables is sketched below.)
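
One possible realization of the two tables as plain dictionaries; the rule names and the situation keys are illustrative assumptions, not taken from the specification:

    # Deletion condition table (cf. 303 in the second embodiment): named rules
    # that pick which face image to keep from a group of images judged to be
    # the same person; the other images in the group are deleted.
    DELETION_CONDITION_TABLE = {
        "keep_first": lambda group: group[0],   # keep the earliest shot
        "keep_last": lambda group: group[-1],   # keep the latest shot
    }

    # Deletion condition selection criterion table (cf. 304): maps a
    # shooting-location situation to the deletion condition to be used.
    DELETION_CONDITION_SELECTION_CRITERIA = {
        "uncongested_entrance": "keep_first",
        "queue_at_counter": "keep_last",
    }

    def deletion_rule_for(situation: str):
        """Select the deletion condition registered for the given situation."""
        name = DELETION_CONDITION_SELECTION_CRITERIA[situation]
        return DELETION_CONDITION_TABLE[name]
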
  • The described time difference measurement system, wherein, when a plurality of first face feature quantities have been successfully matched, the face matching unit selects one of them and sets the shooting time corresponding to the selected first face feature quantity as the first time.
  • The time difference measurement system according to any one of appendices 1 to 6, wherein the first camera captures the scene at the start of a waiting time, the second camera captures the scene at the end of the waiting time, and the time difference calculated by the time difference calculation means is the waiting time.
  • The image processing apparatus according to appendix 8, wherein the first and second duplicate deletion means select the face image to be deleted based on the respective shooting times of the plurality of face images determined to be the same person.
  • The image processing apparatus according to appendix 8 or 9, wherein the first and second duplicate deletion means select the face image to be deleted based on the respective image quality of the plurality of face images determined to be the same person.
  • The image processing apparatus according to any one of appendices 8 to 11, wherein the first and second duplicate deletion means each comprise: a deletion condition table in which one or more deletion conditions for deleting face images are registered; and a deletion condition selection criterion table in which criteria for selecting the deletion condition to be used from among those registered in the deletion condition table are registered.
  • The image processing apparatus according to any one of appendices 8 to 12, further comprising: a storage unit that stores the first face feature quantity in association with its shooting time; a face matching unit that matches the second face feature quantity against the first face feature quantities stored in the storage unit; and time difference calculation means for calculating the time difference between a first time and a second time, where the first time is the shooting time stored in association with the first face feature quantity that was successfully matched and the second time is the shooting time of the second face feature quantity. (This flow is sketched below.)
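
The storage, matching, and time difference steps of this appendix can be sketched in one class; the cosine-similarity matcher and its threshold are assumptions carried over from the earlier sketches:

    import numpy as np

    class TimeDifferenceCalculator:
        """Illustrative combination of the storage unit, the face matching
        unit, and the time difference calculation means."""

        def __init__(self, threshold: float = 0.8) -> None:
            self.threshold = threshold
            # Storage unit: (first face feature quantity, shooting time) pairs.
            self.store: list[tuple[np.ndarray, float]] = []

        def register_first(self, feature: np.ndarray, shot_time: float) -> None:
            """Store a first face feature quantity in association with its
            shooting time."""
            self.store.append((feature, shot_time))

        def time_difference(self, feature: np.ndarray, shot_time: float):
            """Match a second face feature quantity against the stored first
            feature quantities; on success, return the second time minus the
            first time, otherwise None."""
            best_sim, first_time = -1.0, None
            for stored, t in self.store:
                sim = float(np.dot(stored, feature) /
                            (np.linalg.norm(stored) * np.linalg.norm(feature)))
                if sim > best_sim:
                    best_sim, first_time = sim, t
            if first_time is not None and best_sim >= self.threshold:
                return shot_time - first_time
            return None
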
  • The time difference measurement method according to appendix 14, wherein the first and second duplicate deletion steps select the face image to be deleted based on the respective shooting times of the plurality of face images determined to be the same person.
  • The time difference measurement method according to appendix 14 or 15, wherein the first and second duplicate deletion steps select the face image to be deleted based on the respective image quality of the plurality of face images determined to be the same person.
  • The image processing method according to appendix 19, wherein the first and second duplicate deletion steps select the face image to be deleted based on the respective shooting times of the plurality of face images determined to be the same person.
  • The image processing method according to appendix 19 or 20, wherein the first and second duplicate deletion steps select the face image to be deleted based on the respective image quality of the plurality of face images determined to be the same person.
  • The time difference measurement system of the present invention can be applied to waiting time measurement. Specifically, it can be used to estimate customer satisfaction by measuring the waiting time until a service is provided to a customer, or to improve customer satisfaction by increasing or decreasing the number of service counters.
  • The present invention can also be applied to an image processing apparatus that efficiently collates faces of the same person from image data captured at two different points.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

According to the present invention, a time difference measurement system comprises first face detection means for detecting a face region in an image captured by a first camera and cropping the face region out as a face image, first face feature extraction means for extracting a first face feature quantity from the face image, and a storage unit for storing the first face feature quantity in association with its shooting time. For a second camera, the time difference measurement system further comprises similar second face detection means, second duplicate deletion means, and second face feature extraction means. The time difference measurement system further comprises face matching means for matching a second face feature quantity against the first face feature quantity, and time difference calculation means for calculating the time difference between the first and second shooting times. In addition, the time difference measurement system comprises the first and/or the second duplicate deletion means for comparing a plurality of face images cropped from mutually different frames to determine whether or not they are face images of the same person, and for deleting some of those determined to be face images of the same person. The difference between the times at which images of the same person were captured at different locations is thereby measured at high speed by a low-cost device.
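
As a rough illustration of the detect-and-crop step described in the abstract, the following sketch uses OpenCV's bundled Haar cascade as a stand-in detector; the specification does not prescribe any particular detection method:

    import cv2

    def detect_and_crop_faces(frame):
        """Detect face regions in a camera frame and crop each region out as
        a face image (a list of sub-arrays of the input frame)."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        regions = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return [frame[y:y + h, x:x + w] for (x, y, w, h) in regions]

The cropped face images would then feed the duplicate deletion and face feature extraction stages sketched above.
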
PCT/JP2014/074271 2013-09-13 2014-09-12 Information processing device, information processing method, and program WO2015037713A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US15/021,246 US9690978B2 (en) 2013-09-13 2014-09-12 Information processing apparatus, information processing method, and program
EP14843348.5A EP3046075A4 (en) 2013-09-13 2014-09-12 Information processing device, information processing method, and program
JP2015536647A JP6568476B2 (ja) 2013-09-13 2014-09-12 Information processing device, information processing method, and program
HK16112602.7A HK1224418A1 (zh) 2013-09-13 2016-11-02 Information processing device, information processing method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-190081 2013-09-13
JP2013190081 2013-09-13

Publications (1)

Publication Number Publication Date
WO2015037713A1 true WO2015037713A1 (fr) 2015-03-19

Family

ID=52665810

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/074271 WO2015037713A1 (fr) 2013-09-13 2014-09-12 Information processing device, information processing method, and program

Country Status (5)

Country Link
US (1) US9690978B2 (fr)
EP (1) EP3046075A4 (fr)
JP (1) JP6568476B2 (fr)
HK (1) HK1224418A1 (fr)
WO (1) WO2015037713A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295469A (zh) * 2015-05-21 2017-01-04 北京文安智能技术股份有限公司 Face-based visitor attribute analysis method, device, and system
JP2022051683A (ja) * 2020-09-22 2022-04-01 グラスパー テクノロジーズ エーピーエス Concepts for generating training data and for training a machine learning model for use in re-identification
JP2022180375A (ja) * 2018-10-19 2022-12-06 シャンハイ センスタイム インテリジェント テクノロジー カンパニー リミテッド Intelligent driving environment adjustment, driver registration method and device, vehicle, and equipment

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016134803A (ja) * 2015-01-20 2016-07-25 キヤノン株式会社 Image processing apparatus and image processing method
US10146797B2 (en) * 2015-05-29 2018-12-04 Accenture Global Services Limited Face recognition image data cache
JP6700791B2 (ja) * 2016-01-05 2020-05-27 キヤノン株式会社 Information processing apparatus, information processing method, and program
CN108228872 (zh) 2017-07-21 2018-06-29 北京市商汤科技开发有限公司 Face image deduplication method and device, electronic device, storage medium, and program
US10949652B1 (en) * 2019-11-08 2021-03-16 Capital One Services, Llc ATM transaction security using facial detection
WO2021216356A1 (fr) * 2020-04-23 2021-10-28 ESD Technologies, Inc. Système et procédé d'identification de démographie d'un public et de distribution d'un contenu relatif au public

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006221355A (ja) * 2005-02-09 2006-08-24 Hitachi Ltd Monitoring device and monitoring system
JP2012108681A (ja) 2010-11-17 2012-06-07 Hitachi Omron Terminal Solutions Corp Automatic transaction device and waiting time calculation system
JP2013161109A (ja) * 2012-02-01 2013-08-19 Sony Corp Information processing device, information processing method, and program

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002041770A (ja) * 2000-07-25 2002-02-08 Oki Electric Ind Co Ltd Customer information collection system
JP4177598B2 (ja) 2001-05-25 2008-11-05 株式会社東芝 Face image recording device, information management system, face image recording method, and information management method
JP2005227957A (ja) 2004-02-12 2005-08-25 Mitsubishi Electric Corp Optimal face image recording device and optimal face image recording method
JP4650669B2 (ja) * 2004-11-04 2011-03-16 富士ゼロックス株式会社 Moving object recognition device
JP2007190076A (ja) * 2006-01-17 2007-08-02 Mitsubishi Electric Corp Monitoring support system
US20080080748A1 (en) * 2006-09-28 2008-04-03 Kabushiki Kaisha Toshiba Person recognition apparatus and person recognition method
JP4577410B2 (ja) 2008-06-18 2010-11-10 ソニー株式会社 Image processing apparatus, image processing method, and program
JP2010113692A (ja) * 2008-11-10 2010-05-20 Nec Corp Customer behavior recording device, customer behavior recording method, and program
WO2012148000A1 (fr) 2011-04-28 2012-11-01 九州日本電気ソフトウェア株式会社 Image processing system, person identification method, image processing device, and control method and control program therefor
US8769556B2 (en) * 2011-10-28 2014-07-01 Motorola Solutions, Inc. Targeted advertisement based on face clustering for time-varying video
US8761442B2 (en) * 2012-03-29 2014-06-24 The Nielsen Company (Us), Llc Methods and apparatus to count people in images
US9245276B2 (en) * 2012-12-12 2016-01-26 Verint Systems Ltd. Time-in-store estimation using facial recognition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006221355A (ja) * 2005-02-09 2006-08-24 Hitachi Ltd Monitoring device and monitoring system
JP2012108681A (ja) 2010-11-17 2012-06-07 Hitachi Omron Terminal Solutions Corp Automatic transaction device and waiting time calculation system
JP2013161109A (ja) * 2012-02-01 2013-08-19 Sony Corp Information processing device, information processing method, and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3046075A4

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295469A (zh) * 2015-05-21 2017-01-04 北京文安智能技术股份有限公司 Face-based visitor attribute analysis method, device, and system
JP2022180375A (ja) * 2018-10-19 2022-12-06 シャンハイ センスタイム インテリジェント テクノロジー カンパニー リミテッド Intelligent driving environment adjustment, driver registration method and device, vehicle, and equipment
JP2022051683A (ja) * 2020-09-22 2022-04-01 グラスパー テクノロジーズ エーピーエス Concepts for generating training data and for training a machine learning model for use in re-identification
JP7186269B2 (ja) 2020-09-22 2022-12-08 グラスパー テクノロジーズ エーピーエス Concepts for generating training data and for training a machine learning model for use in determining object identity

Also Published As

Publication number Publication date
EP3046075A1 (fr) 2016-07-20
US20160224824A1 (en) 2016-08-04
EP3046075A4 (fr) 2017-05-03
US9690978B2 (en) 2017-06-27
JPWO2015037713A1 (ja) 2017-03-02
HK1224418A1 (zh) 2017-08-18
JP6568476B2 (ja) 2019-08-28

Similar Documents

Publication Publication Date Title
JP6568476B2 (ja) Information processing device, information processing method, and program
JP5500303B1 (ja) Monitoring system, monitoring method, monitoring program, and recording medium on which the program is recorded
JP5450089B2 (ja) Object detection device and object detection method
US20140254891A1 (en) Method and apparatus for registering face images, and apparatus for inducing pose change, and apparatus for recognizing faces
CN107798288B (zh) Information processing apparatus, information processing method, system, and storage medium
JP6270433B2 (ja) Information processing device, information processing method, and information processing system
JP7250443B2 (ja) Image processing device
US10664523B2 (en) Information processing apparatus, information processing method, and storage medium
JP6035059B2 (ja) Authentication system and authentication method
JP2016031679A (ja) Object identification device, object identification method, and program
JP2017004431A (ja) Face recognition system, face recognition server, and face recognition method
JP5139947B2 (ja) Monitoring image storage system and monitoring image storage method for a monitoring image storage system
JP2019020777A (ja) Information processing device, control method of information processing device, computer program, and storage medium
US20190147251A1 (en) Information processing apparatus, monitoring system, method, and non-transitory computer-readable storage medium
CN111353374A (zh) Information processing device, control method therefor, and computer-readable storage medium
JP6485978B2 (ja) Image processing device and image processing system
JP5680954B2 (ja) Residence time measuring device, residence time measuring system, and residence time measuring method
WO2020115910A1 (fr) Information processing system, information processing device, information processing method, and program
JP7298729B2 (ja) Information processing system, information processing device, information processing method, and program
JPWO2021125268A5 (fr)
JP2007257316A (ja) Object counting device
JP4408355B2 (ja) Image processing device and image processing program
JP6905653B2 (ja) Face recognition system, face recognition server, and face recognition method
JP7371806B2 (ja) Information processing device, information processing method, and program
WO2014097699A1 (fr) Information processing system, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14843348

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015536647

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 15021246

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014843348

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014843348

Country of ref document: EP