WO2019181479A1 - Face matching system - Google Patents

Face matching system

Info

Publication number
WO2019181479A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
robot
angle
person
Prior art date
Application number
PCT/JP2019/008575
Other languages
English (en)
Japanese (ja)
Inventor
一浩 戸田
Original Assignee
株式会社日立国際電気 (Hitachi Kokusai Electric Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立国際電気 (Hitachi Kokusai Electric Inc.)
Priority to JP2020508154A (granted as JP6982168B2)
Publication of WO2019181479A1


Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00 Special procedures for taking photographs; Apparatus therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • The present invention relates to a face matching system that collates human face images.
  • Conventionally, video surveillance systems have been installed, for crime and accident prevention, in facilities and equipment visited by unspecified numbers of people, such as hotels, buildings, convenience stores, financial institutions, dams, and roads.
  • Patent Document 1 discloses an invention relating to a monitoring device and monitoring camera system that records images captured by the camera of a portable monitoring device for use in criminal investigations and the like.
  • Patent Document 2 discloses an invention relating to a monitoring system and a person search method capable of searching for persons with high accuracy.
  • The present invention has been made in view of the circumstances described above, and its object is to provide a face matching system that can accurately identify each person.
  • To achieve this object, the face matching system is configured as follows. A face matching system for matching a person's face image comprises a camera configured to be rotatable in the planar direction, and a face authentication server that checks a face image included in an image captured by the camera against a database and acquires, from the database, attribute information indicating the attributes of the person corresponding to the face image. The camera transmits a captured image, together with the shooting angle (the camera's angle in the planar direction at the time of shooting), to the face authentication server. Based on the captured image and shooting angle received from the camera, the face authentication server calculates the person angle, that is, the angle in the planar direction of the face image relative to the camera, and outputs it in association with the attribute information for the face image.
  • Preferably, when there are a plurality of persons, the face authentication server generates a virtual panoramic image covering a range wider than the camera's angle of view, based on a plurality of captured images taken at different shooting angles, and outputs the attribute information and person angle for each face image included in the virtual panoramic image.
  • Preferably, the face authentication server calculates the person angle for each face image included in the plurality of captured images taken at different shooting angles, and generates the virtual panoramic image by treating face images with matching person angles as face images of the same person.
  • Preferably, the attribute information includes age and gender. The face authentication server acquires, from the database, attribute information for each face image included in the plurality of captured images taken at different shooting angles, and generates the virtual panoramic image by treating face images whose age and gender match as face images of the same person.
  • Preferably, the face authentication server periodically regenerates the virtual panoramic image over time.
  • Preferably, the camera is mounted on a robot, and the robot has a voice output function for outputting voice messages.
  • FIG. 1 is a diagram showing a configuration example of the face matching system according to one embodiment of the present invention. FIG. 2 is a diagram showing how persons seated at a table are photographed.
  • FIG. 1 shows a configuration example of the face matching system according to one embodiment of the present invention.
  • The face matching system of this example includes a robot 10, a face authentication server 20, and a client 30, which are connected to a hub 40 and configured to communicate with each other.
  • The hub 40 is also connected to other systems (for example, a host system).
  • In this example each device is connected by cable, but the devices may instead be connected wirelessly.
  • The robot 10 is installed on, for example, a table in a dining facility.
  • The robot 10 may have a humanoid shape, with a camera mounted on its head.
  • The camera of the robot 10 is preferably positioned at a height close to the line of sight of a person seated at the table.
  • The face authentication server 20 is a device that performs face authentication by checking the face images of persons included in an image against a database.
  • The client 30 is a device that collects marketing information.
  • FIG. 2 shows the robot 10 photographing persons seated at a table.
  • The robot 10 is placed at the center of one end of the table, and five persons (X1 to X5) are seated around the table.
  • The robot 10 has a mechanism for rotating the head, on which the camera is mounted, in the planar and vertical directions. That is, the camera angle (shooting direction) can be adjusted by rotating the head of the robot 10 in the planar or vertical direction.
  • The camera angle in the planar direction is referred to as the "shooting angle", and the camera angle in the vertical direction as the "shooting elevation angle".
  • FIG. 3 is a diagram illustrating the shooting angle of the robot 10.
  • The position (shooting angle) of a seated person to be photographed is defined as a negative angle to the left and a positive angle to the right, with the front of the robot 10 as the reference (0°). Note that this is only an example; the leftmost angle to which the robot 10 can rotate its head in the planar direction may instead be taken as the reference (0°), with angles defined as positive clockwise.
  • FIG. 4 is a diagram illustrating the shooting elevation angle of the robot 10.
  • The elevation angle toward the seated person to be photographed is defined as a negative angle below and a positive angle above, with the horizontal direction as the reference (0°).
  • When photographing a person with a low sitting height (for example, a child), a negative shooting elevation angle is used, and when photographing a person with a high sitting height (for example, an adult), a positive shooting elevation angle is used.
  • Note that this is only an example; the lowermost angle to which the robot 10 can rotate its head in the vertical direction may instead be taken as the reference (0°), with angles defined as positive upward.
  • The image photographed by the robot 10 is transmitted to the face authentication server 20 together with its shooting angle and shooting elevation angle.
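  • The patent does not fix a message format for this transmission; the following is a minimal sketch, assuming a JSON payload and hypothetical field names, of the kind of record the robot might send.

```python
import base64
import json
from dataclasses import dataclass

@dataclass
class CaptureMessage:
    """One captured frame plus the camera pose at capture time.

    All field names are hypothetical; the patent only requires that the
    image travel together with its shooting angle and elevation angle.
    """
    image_jpeg: bytes           # JPEG-encoded frame from the robot's camera
    shooting_angle_deg: float   # planar camera angle at capture (0 = front)
    elevation_angle_deg: float  # vertical camera angle at capture (0 = horizontal)

    def to_json(self) -> str:
        # Base64-encode the binary image so the whole record is JSON-safe.
        return json.dumps({
            "image": base64.b64encode(self.image_jpeg).decode("ascii"),
            "shooting_angle_deg": self.shooting_angle_deg,
            "elevation_angle_deg": self.elevation_angle_deg,
        })
```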
  • The timing of shooting and image transmission by the robot 10 is arbitrary; shooting may be performed continuously.
  • FIG. 5 shows a case in which the same person X2 appears in both of the images P1 and P2. If processing simply continued in this situation, the number of people in the group seated at the table would be miscounted (inflated) and persons within the same group would be duplicated, introducing large errors into the marketing information.
  • To address this, the robot 10 transmits the shooting angle and shooting elevation angle together with each captured image to the face authentication server 20, so that the face authentication server 20 can combine multiple images to generate a virtual panoramic image.
  • FIG. 6 shows how a virtual panoramic image is generated.
  • In this example, the captured images P1 to P6 are combined to generate a virtual panoramic image Q that covers a wider range in the planar direction than the angle of view of the robot 10's camera.
  • The reference position of the shooting angle varies with the position and orientation of the robot 10 on the table, but the relative positional relationship of the persons (their relative seating angles) does not change while everyone remains seated. A virtual panoramic image indicating the relative positional relationship of the persons can therefore be generated based on the shooting angle of each image.
  • This makes it possible to identify the seating position of each person in the group (the angle in the planar direction seen from the robot) and to accurately grasp the group's composition, which suppresses miscounts of the group's size and duplication of persons within the same group. As a result, marketing information can be collected accurately for each person in the group.
  • The virtual panoramic image is preferably generated taking the shooting elevation angle into account as well, to reflect differences in the sitting heights of the persons.
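  • The patent does not specify how the images are composited; as an illustration only, the following sketch pastes equal-sized frames onto a shared angular canvas using their shooting angles, assuming an ideal pinhole camera with a known horizontal field of view (the 60° value is an assumption) and ignoring lens distortion and the elevation angle.

```python
import numpy as np

def virtual_panorama(frames, fov_deg=60.0, img_w=640, img_h=480,
                     span_deg=(-90.0, 90.0)):
    """Paste captured frames onto one angular canvas by shooting angle.

    frames: list of (image, shooting_angle_deg) pairs, where each image
    is an img_h x img_w x 3 uint8 array. Pixels-per-degree is treated as
    constant (img_w / fov_deg); later frames overwrite earlier overlaps.
    """
    ppd = img_w / fov_deg                              # pixels per degree
    canvas_w = int((span_deg[1] - span_deg[0]) * ppd)  # full angular span
    canvas = np.zeros((img_h, canvas_w, 3), dtype=np.uint8)
    for img, angle in frames:
        # Left edge of this frame on the canvas, in canvas pixels.
        left = int((angle - fov_deg / 2.0 - span_deg[0]) * ppd)
        lo, hi = max(left, 0), min(left + img_w, canvas_w)
        if lo < hi:
            canvas[:, lo:hi] = img[:, lo - left:hi - left]
    return canvas
```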
  • In this system, the attributes of the persons shown in each image are quantified to determine each person's seating position. The details are described below along the processing flow shown in FIG.
  • The face authentication server 20 includes a face authentication unit 22 that performs face authentication on the face images of persons included in an image, and a determination unit 21 that determines the composition of a group based on the face authentication results. The face authentication server 20 also has a customer information database that stores information on the persons (customers) who use the dining facility to which this system is applied. The customer information database stores, for each customer, a personal ID number that uniquely identifies the customer, a reference face image to be collated during face authentication, and attribute information (for example, age, gender, and facial feature amounts) indicating the customer's attributes. Alternatively, the face authentication server 20 may be configured to access an external customer information database.
  • The robot 10 transmits the image P1, photographed at a shooting angle of 30°, to the face authentication server 20 (step S11).
  • On receiving it, the face authentication server 20 performs the following processing.
  • The image P1 is passed from the determination unit 21 to the face authentication unit 22 (step S12).
  • The face authentication unit 22 performs face authentication on each face image included in the image P1 and acquires the attribute information of the person corresponding to each face image from the customer information database (step S13).
  • The face authentication is performed, for example, by comparing each face image included in a captured image against the reference face images in the customer information database and searching for a person whose reference face image has a similarity equal to or greater than a predetermined value.
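  • The similarity measure and its threshold are left open by the patent ("equal to or greater than a predetermined value"); as a hedged sketch, the search could look like the following, using cosine similarity over facial feature vectors and an assumed threshold of 0.6.

```python
import numpy as np

def match_face(query_feat, reference_db, threshold=0.6):
    """Return (person_id, similarity) for the best match above threshold.

    query_feat: 1-D facial feature vector from the captured face image.
    reference_db: dict mapping person_id -> reference feature vector.
    Cosine similarity and the 0.6 threshold are assumptions; the patent
    only requires a similarity at or above a predetermined value.
    """
    q = query_feat / np.linalg.norm(query_feat)
    best_id, best_sim = None, -1.0
    for person_id, ref in reference_db.items():
        sim = float(q @ (ref / np.linalg.norm(ref)))
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)
```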
  • The face authentication unit 22 returns the attribute information for each face image obtained by the face authentication processing to the determination unit 21 (step S14).
  • The determination unit 21 calculates the seating relative angle of the person in each face image (step S15) and stores it in memory 1 (a first memory area) together with the attribute information for the face image.
  • The seating relative angle is the angle in the planar direction at which the person in the face image is seen from the robot 10 (that is, the angle in the planar direction with respect to the camera).
  • The seating relative angle can be calculated from the position of the face image within the captured image and the shooting angle of that image.
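  • The patent gives no formula for this calculation; under a simple pinhole-camera assumption with a known horizontal field of view (the 60° default is an assumed value), it could be sketched as follows.

```python
import math

def person_angle_deg(face_center_x, img_w, shooting_angle_deg, fov_deg=60.0):
    """Planar angle of a face as seen from the robot (pinhole-model sketch).

    face_center_x: horizontal pixel position of the face in the image.
    img_w: image width in pixels.
    shooting_angle_deg: the camera's planar angle when the image was taken.
    fov_deg: assumed horizontal field of view of the camera.
    """
    # Offset of the face from the image centre, in pixels.
    dx = face_center_x - img_w / 2.0
    # Focal length in pixels implied by the field of view.
    f = (img_w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    # Angle relative to the optical axis, shifted by the camera pose.
    return shooting_angle_deg + math.degrees(math.atan2(dx, f))
```

  • For example, a face centred at pixel 480 of a 640-pixel-wide frame shot at 30° comes out at roughly 46° under these assumptions.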
  • The robot 10 next transmits the image P2, captured at a shooting angle of 60°, to the face authentication server 20 (step S21).
  • On receiving it, the face authentication server 20 performs the following processing.
  • The image P2 is passed from the determination unit 21 to the face authentication unit 22 (step S22).
  • The face authentication unit 22 performs face authentication on each face image included in the image P2 and acquires the attribute information of the person corresponding to each face image from the customer information database (step S23).
  • The face authentication unit 22 returns the attribute information for each face image obtained by the face authentication processing to the determination unit 21 (step S24).
  • The determination unit 21 calculates the seating relative angle of the person in each face image (step S25) and stores it in memory 2 (a second memory area) together with the attribute information for the face image (step S26).
  • In this example, the second person in the image P1 (the person on the right side of the image) and the first person in the image P2 (the person on the left side of the image) have the same age, gender, and facial feature amounts, so the same personal ID number is obtained for both.
  • Their seating relative angles also take the same value.
  • The determination unit 21 then determines whether the second person in memory 1 and the first person in memory 2 are the same person (step S31). Identity can be determined either by matching seating relative angles or by matching the attribute information (particularly age and gender) obtained by face authentication; using both together reduces erroneous determinations.
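  • As a minimal sketch of step S31, assuming each memory record holds the seating relative angle and the attributes returned by face authentication (the field names and the 5° tolerance are assumptions):

```python
def same_person(rec_a, rec_b, angle_tol_deg=5.0):
    """Decide whether two per-image records describe the same person.

    Each record is assumed to be a dict with 'angle' (seating relative
    angle in degrees), 'age', and 'gender'. Requiring both the angle
    match and the attribute match, as the patent suggests, reduces
    erroneous determinations compared with either test alone.
    """
    angles_match = abs(rec_a["angle"] - rec_b["angle"]) <= angle_tol_deg
    attrs_match = (rec_a["age"] == rec_b["age"]
                   and rec_a["gender"] == rec_b["gender"])
    return angles_match and attrs_match
```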
  • The face authentication server 20 repeats the above processing, integrates the multiple captured images taken at different shooting angles, and generates a virtual panoramic image covering a wider range in the planar direction than the camera's angle of view. It then outputs, for each face image included in the virtual panoramic image, the attribute information and the person angle for that face image. That is, the attribute information of each person around the table is output together with a person angle from which each seating position can be identified.
  • The attribute information and person angles output from the face authentication server 20 are provided to the client 30 for collecting marketing information.
  • The virtual panoramic image is preferably generated not only once when a group is first seated at the table but repeatedly thereafter. Since the camera mounted on the robot can photograph the table at all times, the virtual panoramic image is periodically regenerated over time. This makes it possible to detect increases or decreases in the number of people seated at the table, changes in seating position, and so on, and thus to collect marketing information accurately while flexibly following changes in the group's composition and arrangement.
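  • The regeneration interval is not specified; a minimal scheduling sketch, assuming a fixed 60-second period and caller-supplied capture and stitching functions, might look like this.

```python
import time

def panorama_refresh(capture_sweep, build_panorama, period_s=60.0):
    """Yield a freshly rebuilt virtual panorama once per period.

    capture_sweep: callable returning the latest (image, angle) frames.
    build_panorama: callable that stitches those frames into a panorama.
    The 60-second period is an assumption; the patent only states that
    the panorama is regenerated periodically over time.
    """
    while True:
        panorama = build_panorama(capture_sweep())
        # Downstream code would diff successive panoramas here to detect
        # changes in head count or seating positions.
        yield panorama
        time.sleep(period_s)
```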
  • As described above, this system uses face authentication to determine the identity of persons across multiple captured images taken at different shooting angles, which reduces miscounts of group size and duplication of persons within the same group and thereby improves the accuracy and reliability of the marketing information.
  • Groups can also be characterized in light of the time of the visit (for example, families tend to visit early in the evening, while friends of the same generation tend to visit late at night), making it possible to understand the characteristics of each group.
  • The robot 10 has a voice output function for outputting voice messages from its speaker.
  • When the robot 10 recognizes that a person is seated at the table, it outputs a voice message addressing the seated person.
  • This draws the seated person's face toward the robot 10 so that a frontal face image can be photographed.
  • The captured image is transmitted to the face authentication server 20 (step S101).
  • The face authentication server 20 compares the face image of the person in the image received from the robot against the reference face images registered in the customer information database and performs face authentication (step S102). It then transmits the face authentication result to the robot 10 that sent the image (steps S103 to S104).
  • When the robot 10 receives a response from the face authentication server 20 indicating that face authentication has failed, it preferably addresses the seated person again and re-photographs them. For example, if face authentication determines that the seated person is wearing a mask, the robot may output a voice message asking the person to remove it.
  • The client 30 periodically inquires of the face authentication server 20 whether there is a new visitor. When it receives the customer identification internal ID of a new visitor from the face authentication server 20, the client 30 acquires the marketing information previously collected for that person from the marketing information database and transmits it to the robot 10 at the table where the person is seated (steps S121 to S127).
  • Specifically, the client 30 queries the customer information database for the latest customer information via the marketing function I/F of the face authentication server 20 (steps S121 to S122).
  • The customer information database responds to the client 30 with the customer identification internal ID of the new visitor via the marketing function I/F (steps S123 to S124).
  • Based on the new visitor's customer identification internal ID acquired from the face authentication server 20, the client 30 queries the marketing information database to register or update the customer information, acquires the marketing information previously collected for that person (steps S125 to S126), and transmits the information to the robot 10 at the table where the person is seated (step S127).
  • When the robot 10 receives marketing information from the client 30, it outputs a voice message corresponding to that information. For example, when a group visits the store again, a voice message such as "Thank you for coming again" is output; if a member who was present on the previous visit is absent this time, a message such as "We are sorry Mr. XX could not join you today" may be output. Such responses improve the quality of customer service.
  • The face image sorting process (T1) narrows the face images of customers, photographed by the robot and accumulated in the temporary registration table, down to the best image for performing face authentication.
  • The face image registration process (T2) registers reference face images in the customer information database of the face authentication server 20, based on the face image list from which the best images were selected. Each photographed face image (JPEG file) is assumed to be stored in the temporary registration table as a temporarily registered face image, together with the date and time at which the person entered the store.
  • First, the number of groups that visited the store, obtained by grouping the temporarily registered face images by store-entry date and time, is acquired from the temporary registration table (step S201). The following processing (steps S203 to S205) is then repeated for each group (step S202). The number of visitors in the group is extracted from the temporary registration table (step S203), and the number of temporarily registered face image data items is calculated from the temporary registration table (step S204); this count can be obtained by counting entries by entry date and table ID. The following processing (steps S206 to S217) is then repeated for each temporarily registered face image data item (step S205).
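  • As an illustration of the grouping in steps S201 to S204 (the record field names are hypothetical), the temporarily registered images could be grouped as follows.

```python
from collections import defaultdict

def group_registrations(temp_registrations):
    """Group temporarily registered face images by entry time and table.

    temp_registrations: iterable of dicts with 'entry_datetime',
    'table_id', and 'face_jpeg' keys (hypothetical names). Returns a
    dict mapping (entry_datetime, table_id) -> list of records, so
    len(result) is the number of visiting groups (step S201) and each
    list length is that group's count of temporarily registered face
    images (step S204).
    """
    groups = defaultdict(list)
    for rec in temp_registrations:
        groups[(rec["entry_datetime"], rec["table_id"])].append(rec)
    return dict(groups)
```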
  • Temporarily registered face image data (a JPEG file) is extracted from the temporary registration table (step S206).
  • Face detection processing (step S207) and face attribute extraction processing (step S208) are performed on the extracted face image data, and the face detection conditions (orientation, size, etc.) are evaluated (step S209).
  • If it is determined in step S209 that the face detection conditions are poor, the face image data is discarded and processing proceeds to the next face image data item (from step S206). If the face detection conditions are good, facial organ detection processing (step S210) and facial feature amount extraction processing (step S211) are performed, followed by face matching processing (step S212) that matches the extracted facial feature amounts against the registered face data in internal memory, and the similarity (reliability) of the face is evaluated (step S213). Note that no registered face data exists in internal memory at the first face matching; it exists from the second face matching onward.
  • If it is determined in step S213 that the face similarity is low, a new customer identification ID is issued and the face image is added to the temporary registration candidate table as a registration candidate, together with its face orientation (angle), size, similarity, and so on (step S217); processing then proceeds to the next face image data item (from step S206).
  • If it is determined in step S213 that the face similarity is high, the customer identification ID corresponding to the person in the face image and the search conditions (the detection conditions of the registration candidate with the similar face image) are extracted from the temporary registration candidate table (step S214), and it is determined whether the detection conditions (orientation, size, etc.) of the face in the current image exceed those of the registration candidate (step S215).
  • If it is determined in step S215 that the face detection conditions do not exceed those of the registration candidate, the face image data is discarded and processing proceeds to the next face image data item (from step S206). If they do exceed the registration candidate, the search conditions for the corresponding customer identification ID are updated with the detection conditions of the current face image (step S216), and processing proceeds to the next face image data item (from step S206). When processing is complete for all groups, the face image sorting process (T1) ends and the face image registration process (T2) is performed.
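  • Condensing steps S206 to S217, the per-group selection loop can be sketched as below; the four callables and the 0.8 similarity threshold are placeholders standing in for the face detection, attribute extraction, and matching processing of the actual system.

```python
def select_best_shots(face_records, is_good, similarity, quality,
                      threshold=0.8):
    """Keep the best face image per (locally issued) customer ID.

    face_records: face records of one group, in processing order.
    is_good(rec): are the detection conditions (orientation, size) OK?
    similarity(rec, cand): face similarity between record and candidate.
    quality(rec): scalar ranking of detection conditions.
    """
    candidates = {}   # customer ID -> best record so far
    next_id = 0
    for rec in face_records:
        if not is_good(rec):           # step S209: discard poor detections
            continue
        # Steps S212-S213: match against current registration candidates.
        best = max(candidates.items(),
                   key=lambda kv: similarity(rec, kv[1]), default=None)
        if best is None or similarity(rec, best[1]) < threshold:
            candidates[next_id] = rec  # step S217: new registration candidate
            next_id += 1
        elif quality(rec) > quality(best[1]):   # step S215: better shot?
            candidates[best[0]] = rec  # step S216: update the candidate
    return candidates
```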
  • In the face image registration process, a list of new face registration candidates is first extracted from the temporary registration candidate table (step S221).
  • The following processing is repeated for each entry in the face registration candidate list (step S222).
  • The face image data (JPEG file) of the temporary registration candidate is extracted, and the face authentication server 20 is requested to register it as a new face (step S224).
  • The face authentication server 20 registers the face image as a reference face image in the customer information database (step S225), then transmits the registration result to the client 30 (steps S226 to S227).
  • In this way, the reference face images that the face authentication server 20 consults during face authentication can be optimized: from the many photographed images, an image satisfying conditions such as facing front and having the face photographed large is selected and registered as the reference face image.
  • As described above, the face matching system of this example includes the robot 10, which has a camera installed on a table and configured to be rotatable in the planar direction, and the face authentication server 20, which checks the face images included in images captured by the robot 10 against a database and acquires, from the customer information database, attribute information indicating the attributes of the person corresponding to each face image. The robot 10 transmits each captured image, together with its shooting angle (the camera angle in the planar direction at the time of shooting), to the face authentication server 20. Based on the captured image and shooting angle received from the robot 10, the face authentication server 20 calculates the person angle, that is, the angle in the planar direction of each face image relative to the robot 10, and outputs it in association with the attribute information for the face image.
  • With this configuration, each person can be distinguished by seating position (the angle seen from the robot) and each person's attribute information can be acquired, so each person around the table can be accurately identified.
  • When there are multiple persons around the table, the face authentication server 20 generates, from multiple captured images taken at different shooting angles, a virtual panoramic image covering a range wider in the planar direction than the camera's angle of view, and outputs the attribute information and person angle for each face image included in the virtual panoramic image. The virtual panoramic image is generated by treating face images whose person angles, ages, and genders match as belonging to the same person.
  • This makes it possible to identify each group member's seating position and to accurately grasp the group's composition and arrangement, suppressing miscounts of group size and duplication of persons within the same group.
  • The face authentication server 20 is also configured to periodically regenerate the virtual panoramic image, so even when the number of people seated at the table increases or decreases or seating positions change, the latest composition and arrangement of the group can be grasped accurately.
  • Further, the robot 10 carrying the camera has a voice output function for outputting voice messages, which can be used to draw a seated person's gaze toward the camera or to prompt removal of a mask, further improving the accuracy of face authentication.
  • While the present invention has been described in detail above, it goes without saying that the invention is not limited to the system described here and can be widely applied to other systems.
  • For example, it can be applied not only to settings with dining tables, such as general restaurants, but also to locations being considered for unmanned operation, such as company receptions and facility entrances.
  • It can also be applied to rest areas in public facilities and the like, both to improve communication through conversation with the robot and to analyze actual usage by visitors (marketing).
  • The present invention can also be provided as, for example, a method for executing the processing according to the present invention, a program for realizing such a method, and a storage medium storing the program.
  • The present invention can be used as a face matching system for matching human face images wherever interactive services such as reception, entry, checkout, and customer service are needed, at facilities including restaurants, retail stores, shopping facilities, accommodation facilities, office buildings, and public facilities.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention concerns a face matching system capable of accurately identifying persons. The face matching system comprises: a robot (10) having a camera configured to be rotatable in a planar direction; and a face authentication server (20) that checks a face image included in an image captured by the robot (10) against a database and acquires, from a customer information database, attribute information indicating an attribute of the person corresponding to the face image. The robot (10) transmits to the face authentication server (20) the captured image and a shooting angle, which is the camera angle in the planar direction at the time of capture. Based on the captured image and shooting angle received from the robot (10), the face authentication server (20) calculates a person angle, which is the angle in the planar direction of the face image in the captured image relative to the robot (10), associates the person angle with the attribute information for the face image, and outputs it.
PCT/JP2019/008575 2018-03-20 2019-03-05 Face matching system WO2019181479A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020508154A JP6982168B2 (ja) 2018-03-20 2019-03-05 Face matching system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-052164 2018-03-20
JP2018052164 2018-03-20

Publications (1)

Publication Number Publication Date
WO2019181479A1 true WO2019181479A1 (fr) 2019-09-26

Family

ID=67987746

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/008575 WO2019181479A1 (fr) 2018-03-20 2019-03-05 Face matching system

Country Status (2)

Country Link
JP (1) JP6982168B2 (fr)
WO (1) WO2019181479A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113218145A (zh) * 2020-01-21 2021-08-06 Qingdao Haier Refrigerator Co., Ltd. Refrigerator food material management method, refrigerator, and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001275096A * 2000-03-24 2001-10-05 Sony Corp Imaging and display device, and video conference device
JP2010109898A * 2008-10-31 2010-05-13 Canon Inc Imaging control device, imaging control method, and program


Also Published As

Publication number Publication date
JPWO2019181479A1 (ja) 2021-03-18
JP6982168B2 (ja) 2021-12-17

Similar Documents

Publication Publication Date Title
US11704936B2 (en) Object tracking and best shot detection system
JP6854881B2 (ja) Face image matching system and face image search system
JP5851651B2 (ja) Video surveillance system, video surveillance method, and video surveillance device
JP6139364B2 (ja) Person identification device, person identification method, and program
US20120207356A1 (en) Targeted content acquisition using image analysis
IL258817A (en) Methods and instrument for minimizing errors in face recognition applications
JP7405200B2 (ja) Person detection system
US20160224837A1 (en) Method And System For Facial And Object Recognition Using Metadata Heuristic Search
US10887553B2 (en) Monitoring system and monitoring method
JP2003187352A (ja) Specific person detection system
JP2011039959A (ja) Monitoring system
AU2019232782B2 (en) Information processing device, authentication system, authentication method, and program
JP7103229B2 (ja) Suspiciousness estimation model generation device
WO2019181479A1 (fr) Face matching system
JP2015233204A (ja) Image recording device and image recording method
KR20170007070A (ko) Method and device for analyzing visitor entry statistics
WO2017006749A1 (fr) Image processing device and image processing system
JP7069854B2 (ja) Monitoring system and server device
US11637994B2 (en) Two-way intercept using coordinate tracking and video classification
CN207731302U (zh) Identity verification system
CN111414799A (zh) 同行用户确定方法、装置、电子设备及计算机可读介质
WO2023181155A1 (fr) Processing apparatus, processing method, and recording medium
US20230084625A1 (en) Photographing control device, system, method, and non-transitory computer-readable medium storing program
WO2021192317A1 (fr) Seat management device, system, and method, and non-transitory computer-readable medium storing a program
KR20180051015A (ko) Contact-linked face-recognition-based method for finding missing children by automatically recognizing faces in photographs, and missing-child search system using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19771841

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020508154

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19771841

Country of ref document: EP

Kind code of ref document: A1