WO2019181479A1 - Face collation system - Google Patents

Face collation system Download PDF

Info

Publication number
WO2019181479A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
robot
angle
person
Prior art date
Application number
PCT/JP2019/008575
Other languages
French (fr)
Japanese (ja)
Inventor
Kazuhiro Toda (戸田 一浩)
Original Assignee
Hitachi Kokusai Electric Inc. (株式会社日立国際電気)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Kokusai Electric Inc.
Priority to JP2020508154A (granted as patent JP6982168B2)
Publication of WO2019181479A1

Links

Images

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present invention relates to a face collation system that collates human face images.
  • Conventionally, video surveillance systems have been installed in facilities and equipment visited by an unspecified number of people, such as hotels, buildings, convenience stores, financial institutions, dams, and roads, for purposes such as crime prevention and accident prevention.
  • Patent Document 1 discloses an invention relating to a monitoring device or monitoring camera system that records video captured by the surveillance camera of a portable monitoring device for use in criminal investigations and the like.
  • Patent Document 2 discloses an invention relating to a monitoring system and a person search method capable of performing a person search with high accuracy.
  • the present invention has been made in view of the conventional circumstances as described above, and an object of the present invention is to provide a face matching system that can accurately identify each person.
  • To achieve this object, the face matching system is configured as follows. That is, a face matching system for matching a person's face image includes a camera configured to be rotatable in the planar direction, and a face authentication server that checks a face image included in an image captured by the camera against a database and acquires from the database attribute information indicating an attribute of the person corresponding to the face image. The camera transmits a captured image and a shooting angle, which is the camera angle in the planar direction at the time of shooting, to the face authentication server. Based on the captured image and the shooting angle received from the camera, the face authentication server calculates a person angle, which is the angle in the planar direction of the face image included in the captured image with respect to the camera, and outputs it in association with the attribute information related to the face image.
  • Preferably, when there are a plurality of persons, the face authentication server generates a virtual panoramic image capturing a range wider than the angle of view of the camera, based on a plurality of captured images taken at different shooting angles, and outputs the attribute information or person angle related to each face image included in the virtual panoramic image.
  • Preferably, the face authentication server calculates the person angle for each face image included in the plurality of captured images taken at different shooting angles, and generates the virtual panoramic image by treating face images having matching person angles as face images of the same person.
  • Preferably, the attribute information includes age and gender, and the face authentication server acquires from the database the attribute information related to each face image included in the plurality of captured images taken at different shooting angles, and generates the virtual panoramic image by treating face images whose age and gender match as face images of the same person.
  • Preferably, the face authentication server periodically regenerates the virtual panoramic image over time.
  • the camera is mounted on a robot, and the robot has a voice output function for outputting a voice message.
  • FIG. 1 is a diagram showing a configuration example of a face collation system according to one embodiment of the present invention. FIG. 2 is a diagram showing the robot photographing the persons seated at a table.
  • FIG. 1 shows a configuration example of a face matching system according to an embodiment of the present invention.
  • the face collation system of this example includes a robot 10, a face authentication server 20, and a client 30, which are connected to a hub 40 and configured to be able to communicate with each other.
  • the hub 40 is also connected to other systems (for example, a host system).
  • each device is connected by a cable, but these devices may be connected wirelessly.
  • the robot 10 is installed on, for example, a table (table) in a dining facility.
  • The robot 10 may have a shape imitating the human form, and a camera is mounted on its head.
  • the camera of the robot 10 is preferably arranged at a height close to the line of sight of the person (seat person) seated on the table seat.
  • the face authentication server 20 is a device that performs face authentication by comparing a face image of a person included in an image with a database.
  • the client 30 is a device that collects marketing information.
  • FIG. 2 shows a situation where the robot 10 takes a picture of a table occupant.
  • the robot 10 is placed at the center of the end of the table, and five persons (X1 to X5) are seated so as to surround the table.
  • the robot 10 has a mechanism for rotating the head on which the camera is mounted in the plane direction and the vertical direction. That is, it is configured such that the camera angle (shooting direction) can be adjusted by rotating the head of the robot 10 in the plane direction or the vertical direction.
  • the camera angle in the plane direction is referred to as “shooting angle”
  • the camera angle in the vertical direction is referred to as “shooting elevation angle”.
  • FIG. 3 is a diagram illustrating the shooting angle of the robot 10.
  • The position (shooting angle) of a seated person to be photographed is defined as a negative angle on the left side and a positive angle on the right side, with the front of the robot 10 as the reference (0°). Note that this is only an example; for instance, the leftmost angle to which the robot 10 can rotate its head in the planar direction may instead be taken as the reference (0°), with angles increasing clockwise.
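  • As an illustration (not part of the patent text), the two shooting-angle conventions just described can be converted into one another with a short helper. This is a minimal sketch; the leftmost head position of -90° is an assumed value, not one stated in the document.

```python
def signed_to_clockwise(signed_deg: float, leftmost_deg: float = -90.0) -> float:
    """Convert a shooting angle referenced to the robot's front
    (negative = left, positive = right) into the alternative convention
    where the leftmost reachable head position is 0 deg and angles
    increase clockwise (toward the right)."""
    return signed_deg - leftmost_deg


def clockwise_to_signed(cw_deg: float, leftmost_deg: float = -90.0) -> float:
    """Inverse conversion back to the front-referenced signed angle."""
    return cw_deg + leftmost_deg
```

For example, the robot's front (0° signed) becomes 90° in the clockwise convention when the head can swing 90° to the left.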
  • FIG. 4 is a diagram for explaining the shooting elevation angle of the robot 10.
  • The elevation angle toward the seated person to be photographed is defined as a negative angle on the lower side and a positive angle on the upper side, with the horizontal direction as the reference (0°).
  • When photographing a person with a low sitting height (for example, a child), a negative shooting elevation angle is used; when photographing a person with a high sitting height (for example, an adult), a positive shooting elevation angle is used.
  • Note that this is only an example; for instance, the lowermost angle to which the robot 10 can rotate its head in the vertical direction may be taken as the reference (0°), with angles increasing upward.
  • the image photographed by the robot 10 is transmitted to the face authentication server 20 together with information on the photographing angle and photographing elevation angle of the image.
  • the timing of shooting and image transmission by the robot 10 is arbitrary and may be always performed.
  • FIG. 5 shows a state in which the person X2 appears in both of the captured images P1 and P2. If processing were continued in this state, the number of people in the group seated at the table would be counted incorrectly (inflated beyond the actual number) and persons within the same group would be duplicated, resulting in large errors in the marketing information.
  • Therefore, in this system, the robot 10 transmits the shooting angle and shooting elevation angle together with each captured image to the face authentication server 20, so that the face authentication server 20 can combine a plurality of images to generate a virtual panoramic image.
  • FIG. 6 shows how a virtual panoramic image is generated.
  • In the illustrated example, the captured images P1 to P6 are combined to generate a virtual panoramic image Q that captures a wider range in the planar direction than the angle of view of the camera of the robot 10.
  • Here, the reference position of the shooting angle varies with the position and orientation of the robot 10 on the table, but the relative positional relationship (seating relative angle) of the persons does not change while they remain seated. For this reason, a virtual panoramic image indicating the relative positional relationship of the persons can be generated based on the shooting angle of each image.
  • the seating position of each person in the group (the angle in the plane direction seen from the robot) can be specified, and the group composition of the group can be accurately grasped. Therefore, an erroneous count of the number of people in the group and duplication of people in the same group can be suppressed. As a result, marketing information for each person in the group can be collected accurately.
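  • As an illustrative sketch (not taken from the patent), merging detections from multiple captures into a virtual panorama can be expressed by keying each person on the seating relative angle. The 5° tolerance and the data layout are assumptions introduced for the example.

```python
from typing import Dict, List, Tuple


def build_virtual_panorama(
    captures: List[List[Tuple[float, dict]]],
    angle_tol: float = 5.0,
) -> Dict[float, dict]:
    """Merge face detections from several captures into one 'virtual
    panorama' mapping person angle -> attributes.  Each capture is a list
    of (person_angle, attributes) pairs; detections whose person angles
    agree within `angle_tol` degrees are treated as the same person."""
    panorama: Dict[float, dict] = {}
    for detections in captures:
        for angle, attrs in detections:
            # look for an already-merged person at (almost) the same angle
            match = next((a for a in panorama if abs(a - angle) <= angle_tol), None)
            if match is None:
                panorama[angle] = dict(attrs)      # new person in the panorama
            else:
                panorama[match].update(attrs)      # same person seen again
    return panorama
```

A person detected at 40° in one capture and 41° in the next would thus be merged into a single entry rather than double-counted.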
  • The virtual panoramic image is preferably generated taking the shooting elevation angle into account as well, so that the sitting-height differences between persons are reflected.
  • the attribute of the person shown in each image is digitized to determine the seating position of each person. Details will be described with reference to the processing flow shown in FIG.
  • The face authentication server 20 includes a face authentication unit 22 that performs face authentication on the face images of persons included in an image, and a determination unit 21 that determines the member composition of a group based on the face authentication results. Further, the face authentication server 20 has a customer information database that stores information on persons (customers) who use the dining facility to which this system is applied. The customer information database stores, for each customer, a personal ID number that uniquely identifies the customer, a reference face image to be collated at the time of face authentication, and attribute information (for example, age, gender, and facial feature amount) indicating the customer's attributes. The face authentication server 20 may alternatively be configured to access an external customer information database.
  • The robot 10 transmits the image P1, photographed at a shooting angle of 30°, to the face authentication server 20 (step S11).
  • the face authentication server 20 performs the following processing.
  • the image P1 is transmitted from the determination unit 21 to the face authentication unit 22 (step S12).
  • the face authentication unit 22 performs face authentication for each face image included in the image P1, and acquires attribute information of a person corresponding to the face image from the customer information database (step S13).
  • the face authentication is performed, for example, by comparing a face image included in a photographed image with each reference face image in the customer information database and searching for a person with a reference face image whose similarity is equal to or greater than a predetermined value.
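  • The collation step described above can be sketched as a search over reference feature vectors with a similarity threshold. This is an illustrative sketch only; the cosine measure, the 0.9 threshold, and the dictionary layout are assumptions, not values from the patent.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two facial feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def authenticate(query_features, reference_db, threshold=0.9):
    """Return the personal ID whose reference features are most similar
    to the query, or None if no similarity reaches the threshold.
    `reference_db` maps personal ID -> reference feature vector."""
    best_id, best_sim = None, 0.0
    for person_id, ref in reference_db.items():
        sim = cosine_similarity(query_features, ref)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim >= threshold else None
```

A query close to a registered vector yields that person's ID; an ambiguous query below the threshold yields no match, mirroring the "similarity equal to or greater than a predetermined value" condition.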
  • the face authentication unit 22 transmits attribute information related to each face image obtained by the face authentication process to the determination unit 21 as a result response (step S14).
  • the determination unit 21 calculates the seating relative angle of the person of the face image (step S15) and stores it in the memory 1 (first memory area) together with attribute information related to each face image.
  • the seating relative angle is an angle in a planar direction (an angle in a planar direction with respect to the camera) when the person of the face image included in the captured image is viewed from the robot 10.
  • the seating relative angle can be calculated based on the position of the face image in the captured image and the capturing angle of the captured image.
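  • One way to realize this calculation is to map the horizontal pixel offset of the face from the image centre onto the camera's field of view with a pinhole model, and add the shooting angle of the capture. This is a sketch under assumptions: the 60° horizontal field of view and the pinhole formula are illustrative, since the patent does not specify the calculation.

```python
import math


def seating_relative_angle(face_x_px: float,
                           image_width_px: int,
                           shooting_angle_deg: float,
                           horizontal_fov_deg: float = 60.0) -> float:
    """Estimate the planar angle of a face as seen from the robot:
    the offset of the face centre from the image centre is converted to
    an angle via a pinhole camera model, then added to the shooting
    angle of the capture."""
    half_w = image_width_px / 2.0
    # focal length in pixels implied by the horizontal field of view
    f_px = half_w / math.tan(math.radians(horizontal_fov_deg / 2.0))
    offset_deg = math.degrees(math.atan((face_x_px - half_w) / f_px))
    return shooting_angle_deg + offset_deg
```

A face at the image centre simply inherits the shooting angle; a face at the right edge adds half the field of view.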
  • The robot 10 then transmits the image P2, photographed at a shooting angle of 60°, to the face authentication server 20 (step S21).
  • the face authentication server 20 performs the following processing.
  • the image P2 is transmitted from the determination unit 21 to the face authentication unit 22 (step S22).
  • the face authentication unit 22 performs face authentication for each face image included in the image P2, and acquires attribute information of a person corresponding to the face image from the customer information database (step S23).
  • the face authentication unit 22 transmits attribute information related to each face image obtained by the face authentication process to the determination unit 21 as a result response (step S24).
  • The determination unit 21 calculates the seating relative angle of the person of each face image (step S25) and stores it in the memory 2 (second memory area) together with the attribute information related to the face image (step S26).
  • Here, the second person included in the image P1 (the person on the right side of the image) and the first person included in the image P2 (the person on the left side of the image) have the same age, gender, and facial feature amount, so the same personal ID number is obtained for both. The seating relative angle is also the same value.
  • The determination unit 21 determines whether the second person in memory 1 and the first person in memory 2 are the same person (step S31). Identity can be determined either by matching the seating relative angles or by matching the attribute information (especially age and gender) obtained by face authentication; using both criteria together makes it possible to reduce erroneous determinations.
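  • The determination in step S31 can be sketched as follows. This example requires both criteria (angle match and attribute match) to agree, since combining them reduces erroneous determinations; the record field names and the 3° angle tolerance are assumptions, not values from the patent.

```python
def same_person(rec_a: dict, rec_b: dict, angle_tol: float = 3.0) -> bool:
    """Decide whether two face records (as stored in memory 1 and
    memory 2) belong to the same person: the seating relative angles
    must agree within `angle_tol` degrees, and the face-authentication
    attributes (age, gender) must match."""
    angles_match = abs(rec_a["angle"] - rec_b["angle"]) <= angle_tol
    attrs_match = (rec_a["age"] == rec_b["age"]
                   and rec_a["gender"] == rec_b["gender"])
    return angles_match and attrs_match
```

Two records at nearly the same angle with matching attributes merge into one person; a mismatch in either criterion keeps them separate.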
  • the face authentication server 20 repeats the above processing, integrates a plurality of photographed images photographed at different photographing angles, and generates a virtual panoramic image obtained by photographing a wider range in the plane direction than the camera angle of view. Then, for each face image included in the virtual panoramic image, the attribute information and the person angle related to the face image are output. That is, the attribute information of each person surrounding the same table is output together with the person angle at which each seating position can be specified.
  • the attribute information and the person angle output from the face authentication server 20 are provided to the client 30 for collecting marketing information.
  • The virtual panoramic image is preferably generated not only once when a group of people is seated at the table, but repeatedly thereafter. That is, since the camera mounted on the robot can photograph the table at all times, the virtual panoramic image is periodically regenerated over time. This makes it possible to detect increases or decreases in the number of people seated at the table, changes in seating position, and so on, and thus to collect marketing information accurately while responding flexibly to changes in the composition and arrangement of the group.
  • As described above, this system uses the face authentication function to determine the identity of persons across multiple captured images taken at different shooting angles, thereby reducing erroneous counts of group size and duplication of persons within the same group.
  • As a result, degradation of the accuracy of the marketing information can be suppressed and its reliability can be improved.
  • It also becomes possible to identify groups in consideration of the time of the store visit (for example, families tend to visit early in the evening, while friends of the same generation tend to visit late at night) and to understand the characteristics of each group.
  • the robot 10 has a voice output function for outputting a voice message from the speaker.
  • When the robot 10 recognizes that a person has been seated at the table, it outputs a voice message addressing the seated person.
  • This directs the seated person's face toward the robot 10, so that the person's face can be photographed from the front.
  • the captured image is transmitted to the face authentication server 20 (step S101).
  • the face authentication server 20 compares the face image of the person included in the image received from the robot with the reference face image registered in the customer information database, and performs face authentication (step S102). Thereafter, the face authentication server 20 transmits the result of the face authentication to the robot 10 that is the image transmission source (steps S103 to S104).
  • When the robot 10 receives a response from the face authentication server 20 indicating that face authentication has failed, it is preferable for the robot to call out to the seated person again and re-photograph him or her. For example, if the face authentication result indicates that the seated person is wearing a mask, the robot may output a voice message asking the person to remove the mask.
  • The client 30 periodically inquires of the face authentication server 20 about the presence of new visitors. When it receives the customer identification internal ID of a new visitor from the face authentication server 20, the client 30 acquires the marketing information collected in the past for that person from the marketing information database and transmits it to the robot 10 at the table where the person is seated (steps S121 to S127).
  • Specifically, the client 30 inquires of the customer information database for the latest customer information via the marketing function I/F of the face authentication server 20 (steps S121 and S122).
  • The customer information database responds to the client 30 with the customer identification internal ID of the new visitor via the marketing function I/F (steps S123 and S124).
  • Based on the new visitor's customer identification internal ID acquired from the face authentication server 20, the client 30 makes a customer information registration or update inquiry to the marketing information database, acquires the marketing information collected in the past for that person (steps S125 and S126), and transmits the information to the robot 10 at the table where the person is seated (step S127).
  • When the robot 10 receives the marketing information from the client 30, it outputs a voice message corresponding to that information. For example, when a group that has visited before comes to the store again, a voice message such as "Thank you for coming again" is output; or, if a member who was present at the previous visit is absent this time, a voice message such as "It is a pity Mr. XX could not join you today" can be output, thereby improving service.
  • the face image sorting process (T1) is a process for narrowing down the best image for performing face authentication from the face images of the customers who have been photographed by the robot and accumulated in the temporary registration table.
  • the face image registration process (T2) is a process of registering the reference face image in the customer information database of the face authentication server 20 based on the face image list in which the best image is selected. Assume that a photographed face image (JPEG file) is stored in the temporary registration table as a temporary registration face image together with information on the date and time when the person entered the store.
  • First, the number of groups, which can be calculated by grouping the temporarily registered face images by store-entry date and time (that is, the number of groups that visited the store), is obtained from the temporary registration table (step S201). Thereafter, the following processing (steps S203 to S205) is repeated for each group (step S202). The number of visitors in the group is extracted from the temporary registration table (step S203). Next, the number of temporarily registered face image data items is calculated from the temporary registration table (step S204); this number can be obtained by counting entries with the same entry date and table ID. Then, the following processing (steps S206 to S217) is repeated for each temporarily registered face image data item (step S205).
  • Temporary registration face image data (JPEG file) is extracted from the temporary registration table (step S206).
  • face detection processing (step S207) and face attribute extraction processing (step S208) are performed on the extracted face image data, and face detection conditions (direction, size, etc.) are determined (step S209).
  • If it is determined in step S209 that the face detection conditions are not good, the face image data is discarded and the process proceeds to the next face image data (from step S206). On the other hand, if it is determined in step S209 that the face detection conditions are good, facial organ detection processing (step S210) and facial feature amount extraction processing (step S211) are performed, then a face matching process (step S212) that matches the extracted facial feature amount against the registered face data in the internal memory is performed, and the similarity (reliability) of the face is determined (step S213). Note that no registered face data exists in the internal memory at the first face collation, but registered face data exists from the second collation onward.
  • If it is determined in step S213 that the similarity of the face is low, a customer identification ID is newly issued, the face image is added to the temporary registration candidate table as a new registration candidate together with the face direction (angle), size, similarity, and so on (step S217), and the process proceeds to the next face image data (from step S206).
  • If it is determined in step S213 that the similarity of the face is high, the customer identification ID corresponding to the person of the face image and the search conditions (the detection conditions of the registration candidate having a similar face image) are extracted from the temporary registration candidate table (step S214), and it is determined whether the face detection conditions (direction, size, etc.) in the current face image exceed those of the registration candidate (step S215).
  • If it is determined in step S215 that the face detection conditions do not exceed those of the registration candidate, the face image data is discarded and the process proceeds to the next face image data (from step S206). On the other hand, if it is determined in step S215 that the face detection conditions exceed those of the registration candidate, the search conditions corresponding to that customer identification ID are updated with the face detection conditions of the current face image (step S216), and the process proceeds to the next face image data (from step S206). When the processing has been completed for all groups, the face image selection process (T1) ends, and the face image registration process (T2) is performed.
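  • The selection loop described above (new-candidate registration in step S217, condition update in step S216) can be sketched as keeping, per person, the single image with the best detection score. This is an assumption-laden sketch: the dict fields and the equality-based feature comparison stand in for the patent's real feature matching and similarity determination.

```python
def select_best_shots(temp_face_images):
    """Sketch of selection process T1: keep, for each person, the one
    face image with the best detection score (e.g. how frontal and how
    large the face is).  Each element of `temp_face_images` is a dict
    with hypothetical keys 'features' and 'score'; identical feature
    tuples stand in for a face match with high similarity."""
    candidates = []   # registration candidates, one per person
    next_id = 1
    for img in temp_face_images:
        # face matching against already-registered candidates (cf. step S212)
        hit = next((c for c in candidates
                    if c["features"] == img["features"]), None)
        if hit is None:
            # low similarity: issue a new customer identification ID (cf. step S217)
            candidates.append({"id": next_id, **img})
            next_id += 1
        elif img["score"] > hit["score"]:
            # better detection conditions: update the stored record (cf. step S216)
            hit["score"] = img["score"]
    return candidates
```

Running this over a batch of temporarily registered images leaves one candidate per person, each holding the best detection conditions seen so far.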
  • a list of new face registration candidates is extracted from the temporary registration candidate table (step S221).
  • the following processing is repeated for the number of face registration candidate lists (step S222).
  • The face image data (JPEG file) of the temporary registration candidate is extracted (step S223), and the face authentication server 20 is requested to register it as a new face (step S224).
  • Upon receiving the request, the face authentication server 20 registers the face image as a reference face image in the customer information database (step S225). Thereafter, the face authentication server 20 transmits the registration result of the reference face image to the client 30 (steps S226 and S227).
  • the reference face image in the customer information database that the face authentication server 20 refers to when performing face authentication can be optimized. That is, it is possible to select an image satisfying the conditions such as “facing the front” and “the face portion is photographed large” from a large number of photographed images and register it as a reference face image.
  • As described above, the face collation system of this example includes the robot 10, which is installed on a table and has a camera configured to be rotatable in the planar direction, and the face authentication server 20, which checks the face images included in images captured by the robot 10 against a database and acquires from the customer information database attribute information indicating the attribute of the person corresponding to each face image. The robot 10 transmits the captured image and the shooting angle, which is the camera angle in the planar direction at the time of shooting, to the face authentication server 20. Based on the captured image and shooting angle received from the robot 10, the face authentication server 20 calculates the person angle, which is the angle in the planar direction of the face image included in the captured image with respect to the robot 10, and outputs it in association with the attribute information related to the face image.
  • Thereby, each person can be distinguished by seating position (the angle as seen from the robot), and the attribute information of each person can be acquired. Accordingly, each person around the table can be accurately identified.
  • Further, when there are a plurality of persons around the table, the face authentication server 20 generates a virtual panoramic image capturing a range wider in the planar direction than the angle of view of the camera, based on a plurality of captured images taken at different shooting angles, and outputs the attribute information and person angle related to each face image included in the virtual panoramic image. The virtual panoramic image is generated by regarding face images whose person angle, age, and gender match as those of the same person.
  • the seating position of each person in the group can be specified, and the personnel composition and arrangement of the group can be accurately grasped, so that it is possible to suppress an erroneous count of the number of people in the group and duplication of persons within the same group.
  • In addition, the face authentication server 20 is configured to periodically regenerate the virtual panoramic image. Therefore, even when the number of people seated at the table increases or decreases or seating positions change, the latest composition and arrangement of the group can be grasped accurately.
  • Further, the robot 10 equipped with the camera has a voice output function for outputting voice messages. It is therefore possible to prompt a seated person to look toward the camera or to remove a mask, further improving the accuracy of face authentication.
  • While the present invention has been described in detail above, it is of course not limited to the system described here and can be widely applied to other systems.
  • For example, the invention can be applied not only to facilities with dining tables, such as general restaurants, but also to places aiming at unmanned operation, such as company reception desks and facility entrances.
  • It can also be applied to rest areas in public facilities and the like, both to improve communication through conversation with the robot and to analyze actual usage by visitors (marketing).
  • The present invention can also be provided as, for example, a method for executing the processing according to the present invention, a program for realizing such a method, and a storage medium storing the program.
  • The present invention can be used for a face matching system that matches human face images in settings requiring communication, such as reception, entrances, checkout, and customer service, at various facilities such as restaurants, retail stores, shopping facilities, accommodation facilities, office buildings, and public facilities.

Abstract

Provided is a face collation system capable of accurately identifying individual persons. The face collation system comprises: a robot 10 having a camera that is configured to be rotatable in a planar direction; and a facial recognition server 20 that collates with a database a facial image included in an image captured by the robot 10 and that acquires, from a customer information database, attribute information indicating an attribute of a person corresponding to the facial image. The robot 10 transmits to the facial recognition server 20 the captured image and a capture angle which is the camera angle in the planar direction during the image capture. The facial recognition server 20 calculates, on the basis of the captured image and the capture angle received from the robot 10, a person angle which is the angle in the planar direction of the facial image included in the captured image relative to the robot 10, associates the person angle with the attribute information for the facial image, and outputs the person angle.

Description

Face collation system
The present invention relates to a face collation system that collates human face images.
Conventionally, video surveillance systems have been installed in facilities and equipment visited by an unspecified number of people, such as hotels, buildings, convenience stores, financial institutions, dams, and roads, for purposes such as crime prevention and accident prevention.
Various inventions have been proposed for such video surveillance systems. For example, Patent Document 1 discloses an invention relating to a monitoring device or monitoring camera system that records video captured by the surveillance camera of a portable monitoring device for use in criminal investigations and the like. Patent Document 2 discloses an invention relating to a monitoring system and a person search method capable of performing person searches with high accuracy.
Patent Document 1: JP 2013-153304 A
Patent Document 2: JP 2009-199322 A
In recent years, dining establishments such as restaurants, conveyor-belt sushi shops, and fast food outlets have collected information such as each visitor's favorite menu items and used it for marketing purposes such as developing new menus and compiling visit statistics. The use of video surveillance systems for collecting such marketing information is also under consideration.
To collect marketing information accurately, face authentication must be performed on the face images of visitors captured by the video surveillance system so that personal information such as each visitor's age and gender can be obtained. However, most video surveillance systems in dining establishments mount the camera on the ceiling above the tables and monitor visitors in video shot from above. In this arrangement, mostly only the tops of seated persons' heads can be captured, and their faces cannot be imaged properly. The number of people can therefore be grasped by counting heads, but face authentication cannot be applied, so marketing information that identifies each individual visitor cannot be collected. This problem is not limited to dining establishments; it applies generally to services that require communication, such as reception, entrances and exits, checkout, and customer service, and arises in facilities such as retail stores, shopping facilities, accommodation, office buildings, and public facilities.
The present invention has been made in view of the circumstances described above, and an object thereof is to provide a face collation system capable of accurately identifying each individual person.
To achieve the above object, the present invention configures a face collation system as follows.
That is, a face collation system for matching face images of persons comprises a camera configured to be rotatable in a planar direction, and a face authentication server that collates a face image included in an image captured by the camera against a database and acquires, from the database, attribute information indicating an attribute of the person corresponding to the face image. The camera transmits the captured image and the shooting angle, i.e., the planar camera angle at the time of capture, to the face authentication server. Based on the captured image and shooting angle received from the camera, the face authentication server calculates the person angle, i.e., the planar angle of the face image in the captured image relative to the camera, and outputs it in association with the attribute information for that face image.
Here, it is preferable that, when a plurality of persons are present, the face authentication server generates, from a plurality of captured images with different shooting angles, a virtual panoramic image covering a range wider in the planar direction than the camera's angle of view, and outputs, for each face image included in the virtual panoramic image, the attribute information or person angle for that face image.
It is also preferable that the face authentication server calculates a person angle for each face image included in the plurality of captured images with different shooting angles, and generates the virtual panoramic image by treating face images whose person angles match as face images of the same person.
It is also preferable that the attribute information includes age and gender, and that the face authentication server acquires, from the database, attribute information for each face image included in the plurality of captured images with different shooting angles, and generates the virtual panoramic image by treating face images whose age and gender match as face images of the same person.
It is also preferable that the face authentication server periodically regenerates the virtual panoramic image over time.
It is also preferable that the camera is mounted on a robot, and that the robot has a voice output function for outputting voice messages.
According to the present invention, a face collation system capable of accurately identifying each individual person can be provided.
FIG. 1 shows a configuration example of a face collation system according to an embodiment of the present invention.
FIG. 2 shows the robot photographing persons seated at a table.
FIG. 3 illustrates the shooting angle of the robot.
FIG. 4 illustrates the shooting elevation angle of the robot.
FIG. 5 shows the same person captured redundantly in a plurality of images.
FIG. 6 shows the generation of a virtual panoramic image.
FIG. 7 shows the processing flow for generating a virtual panoramic image.
FIG. 8 shows the processing flow for cooperation among the robot, face authentication server, and client.
FIG. 9 shows the processing flow for registering face images.
An embodiment of the present invention will now be described in detail with reference to the drawings. The description below takes as an example the use of the face collation system according to the present invention for collecting marketing information at a dining establishment.
FIG. 1 shows a configuration example of a face collation system according to an embodiment of the present invention. The face collation system of this example includes a robot 10, a face authentication server 20, and a client 30, which are connected to a hub 40 and configured to communicate with one another. The hub 40 is also connected to other systems (for example, a host system). Although FIG. 1 shows the devices connected by cables, they may instead be connected wirelessly.
The robot 10 is installed, for example, on a table in a dining establishment. The robot 10 may have a humanoid shape, with a camera mounted in its head. The camera of the robot 10 is preferably positioned at a height close to the eye level of a person seated at the table. The face authentication server 20 is a device that performs face authentication by collating face images of persons contained in an image against a database. The client 30 is a device that collects marketing information.
FIG. 2 shows the robot 10 photographing persons seated at a table. In this example, the robot 10 is placed at the center of one end of the table, and five persons (X1 to X5) are seated around the table. The robot 10 has a mechanism for rotating its camera-equipped head in the planar and vertical directions; by rotating the head in either direction, the camera angle (shooting direction) can be adjusted. In this specification, the planar camera angle is called the "shooting angle" and the vertical camera angle is called the "shooting elevation angle."
FIG. 3 illustrates the shooting angle of the robot 10. In the figure, the position (shooting angle) of a seated person to be photographed is defined with the direction directly in front of the robot 10 as the reference (0°), negative angles to the left and positive angles to the right. This is only one example; for instance, the leftmost angle to which the robot 10 can rotate its head in the planar direction could instead be taken as the reference (0°), with positive angles measured clockwise.
FIG. 4 illustrates the shooting elevation angle of the robot 10. In the figure, the elevation angle of a seated person is defined with the horizontal as the reference (0°), negative angles downward and positive angles upward. In general, photographing a person with a low sitting height (for example, a child) requires a negative elevation angle, while photographing a person with a high sitting height (for example, an adult) requires a positive one. Again, this is only one example; the lowest angle to which the robot 10 can rotate its head vertically could instead be taken as the reference (0°), with positive angles measured upward.
An image captured by the robot 10 is transmitted to the face authentication server 20 together with the shooting angle and shooting elevation angle at which it was captured. The timing of capture and transmission by the robot 10 is arbitrary and may be continuous.
When images are captured while the shooting angle is varied, the same person may appear redundantly in multiple images. FIG. 5 shows person X2 captured in both images P1 and P2. If processing proceeded in this state, the number of people in the group seated at the table would be miscounted (inflated) and persons within the group would be duplicated, introducing large errors into the marketing information.
In the face collation system of this example, therefore, the robot 10 transmits the shooting angle and shooting elevation angle together with each captured image to the face authentication server 20, so that the server can combine multiple images into a virtual panoramic image. FIG. 6 shows the generation of a virtual panoramic image: captured images P1 to P6 are combined into a virtual panoramic image Q covering a range wider in the planar direction than the angle of view of the camera of the robot 10. The reference position of the shooting angle varies with the position and orientation of the robot 10 on the table, but as long as everyone remains seated, the relative positional relationship of the persons (their relative seating angles) does not change. A virtual panoramic image representing the relative positions of the persons can therefore be generated from the shooting angle of each image.
Using such a virtual panoramic image, the seating position of each person in the group (the planar angle seen from the robot) can be identified, and the composition of the group can be grasped accurately. Miscounting the group size and duplicating persons within the group can thus be suppressed, and marketing information can be collected accurately for each person in the group. The virtual panoramic image is preferably rendered with the shooting elevation angles equalized to account for differences in sitting height.
When generating a virtual panoramic image, the face collation system of this example quantifies the attributes of the persons appearing in the individual images and uses them to determine each person's seating position. Details are described with reference to the processing flow shown in FIG. 7.
The face authentication server 20 has a face authentication unit 22 that performs face authentication on the face images of persons contained in an image, and a determination unit 21 that determines the composition of a group based on the authentication results. The face authentication server 20 also has a customer information database storing information on persons (customers) who use the dining establishment to which the system is applied. The customer information database stores, for each customer, a personal ID number that uniquely identifies the customer, a reference face image collated against during face authentication, and attribute information indicating the customer's attributes (for example, age, gender, and facial feature values). The face authentication server 20 may instead be configured to access an external customer information database.
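As a concrete sketch, a record in such a customer information database could be modeled as follows; the field names are illustrative assumptions, not identifiers taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    person_id: int          # personal ID number uniquely identifying the customer
    reference_face: bytes   # reference face image collated against during authentication
    age: int                # attribute information: age
    gender: str             # attribute information: gender
    features: tuple         # attribute information: facial feature values
```

A real deployment would persist these records in a database table, but the fields above capture what the description says the database holds.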
The robot 10 transmits image P1, captured at an angle of 30°, to the face authentication server 20 (step S11). On receiving image P1, the face authentication server 20 proceeds as follows. The determination unit 21 first passes image P1 to the face authentication unit 22 (step S12). The face authentication unit 22 performs face authentication for each face image contained in image P1 and acquires the attribute information of the corresponding person from the customer information database (step S13). Face authentication is performed, for example, by collating each face image in the captured image against the reference face images in the customer information database and retrieving a person whose reference face image has a similarity at or above a predetermined threshold; if several persons are retrieved, the one with the highest similarity may be taken as the result. The face authentication unit 22 returns the attribute information for each face image to the determination unit 21 as a result response (step S14).
For each face image in image P1, the determination unit 21 then calculates the relative seating angle of that person (step S15) and stores it, together with the attribute information for the face image, in memory 1 (a first memory area) (step S16). Here, the relative seating angle is the planar angle of the person in the face image as seen from the robot 10 (the planar angle relative to the camera), and can be calculated from the position of the face image within the captured image and the shooting angle of that image.
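The calculation in step S15 can be sketched by mapping the horizontal position of the face within the frame onto the camera's field of view and adding the shooting angle. This is a linear small-angle approximation, and the field-of-view value is an assumption, not a figure from the patent:

```python
def relative_seating_angle(face_x, image_width, shooting_angle_deg, fov_deg=60.0):
    """Approximate the planar angle of a person as seen from the robot.

    face_x: horizontal pixel position of the face centre in the captured image.
    image_width: width of the captured image in pixels.
    shooting_angle_deg: planar camera angle at the time of capture.
    fov_deg: assumed horizontal field of view of the camera.
    """
    # Offset of the face from the image centre, in the range -1 .. +1.
    offset = (face_x - image_width / 2) / (image_width / 2)
    # Map the offset onto half the field of view and add the camera angle.
    return shooting_angle_deg + offset * (fov_deg / 2)
```

A face at the image centre then inherits the shooting angle directly (for example, 30° for image P1), while a face at the right edge is offset by half the field of view.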
Next, the robot 10 transmits image P2, captured at an angle of 60°, to the face authentication server 20 (step S21). On receiving image P2, the face authentication server 20 repeats the same processing: the determination unit 21 passes image P2 to the face authentication unit 22 (step S22); the face authentication unit 22 performs face authentication for each face image in image P2, acquires the corresponding attribute information from the customer information database (step S23), and returns it as a result response (step S24); and the determination unit 21 calculates the relative seating angle for each face image (step S25) and stores it with the attribute information in memory 2 (a second memory area) (step S26).
At this point, the second person in image P1 (on the right side of the image) and the first person in image P2 (on the left side) have matching age, gender, and facial feature values, so the same personal ID number is obtained for both. Their relative seating angles are also identical.
Based on the contents stored in memories 1 and 2 (age, gender, facial feature values, personal ID number, and relative seating angle), the determination unit 21 determines that the second person in memory 1 and the first person in memory 2 are the same person (step S31). This determination can be made from a match in relative seating angle alone or from a match in the attribute information obtained by face authentication (in particular, age and gender) alone, but requiring both to match reduces erroneous determinations.
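The determination in step S31 can be sketched as follows, requiring both an angle match within a tolerance and an attribute match; the record keys and the tolerance value are assumptions for illustration:

```python
def is_same_person(a, b, angle_tol_deg=5.0):
    """Judge two face-image records from different shots as the same person.

    Each record is a dict with the fields stored in memory 1/2:
    'angle' (relative seating angle), 'age', and 'gender'.
    """
    angle_match = abs(a["angle"] - b["angle"]) <= angle_tol_deg
    attribute_match = a["age"] == b["age"] and a["gender"] == b["gender"]
    # Requiring both matches reduces erroneous determinations.
    return angle_match and attribute_match
```

Either condition alone could be used, as the description notes, but the conjunction is the safer default.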
The face authentication server 20 repeats the above processing to integrate captured images taken at different shooting angles into a virtual panoramic image covering a range wider in the planar direction than the camera's angle of view. For each face image in the virtual panoramic image, the server then outputs the attribute information and person angle for that face image; that is, the attribute information of each person around the table is output together with a person angle from which that person's seating position can be identified. The attribute information and person angles output by the face authentication server 20 are provided to the client 30 for marketing information collection.
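The bookkeeping behind the virtual panoramic image (not the pixel-level stitching) amounts to merging the per-image records into one list of unique persons ordered by seating angle. A minimal sketch, with assumed record keys and an assumed angle tolerance:

```python
def merge_into_panorama(records, angle_tol_deg=5.0):
    """Merge face records from shots at different angles into one list of
    unique persons, ordered by relative seating angle.

    records: list of dicts with 'angle', 'age', and 'gender' keys.
    """
    persons = []
    for rec in sorted(records, key=lambda r: r["angle"]):
        duplicate = any(
            abs(p["angle"] - rec["angle"]) <= angle_tol_deg
            and p["age"] == rec["age"] and p["gender"] == rec["gender"]
            for p in persons
        )
        if not duplicate:
            persons.append(rec)
    return persons
```

Records for the same person captured in overlapping shots collapse into a single entry, so the group size is not inflated.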
Through this processing, each person in the group seated at the table can be identified by seating position, and marketing information can be collected accurately for each person. Moreover, by grasping the age, gender, facial feature values, and personal ID number of each person in the group, the nature of the group can be judged comprehensively: for example, a group of the same age range and gender is likely a group of friends, while a group with mixed genders and a wide age spread is likely a family or relatives. Furthermore, even when multiple groups have similar compositions, each group can be distinguished from the attribute information and seating positions of its members.
The virtual panoramic image is preferably generated not just once when a group of people sits down at the table, but repeatedly thereafter. Since the camera mounted on the robot can observe the table continuously, the virtual panoramic image is regenerated periodically over time. This makes it possible to detect people joining or leaving the table and changes in seating positions, so that marketing information can be collected accurately even as the group's composition and arrangement change.
In addition, because the system uses face authentication to judge the identity of persons across captured images taken at different shooting angles, miscounting of group size and duplication of persons within a group are suppressed, reducing errors in the marketing information and improving its reliability. By also recording the date and time of capture, groups can be characterized with the time of their visit taken into account (for example, families tend to visit early in the evening, while groups of friends of the same generation visit late at night).
Two problems stand in the way of accurate face authentication: (1) a frontal face cannot be captured, and (2) the person is wearing a mask. The countermeasures are described with reference to the processing flow of FIG. 8.
The robot 10 has a voice output function that plays voice messages through a speaker. When it recognizes that a person has sat down at the table, it outputs a voice message addressing the seated person. By actively speaking to them, the robot prompts seated persons to turn their faces toward it, so that their frontal faces can be captured. Preparing such a scenario in advance makes it possible to capture frontal faces reliably, improving the accuracy of face authentication by the face authentication server 20.
When the robot 10 has photographed a seated person, it transmits the captured image to the face authentication server 20 (step S101). The face authentication server 20 collates the face images of the persons in the received image against the reference face images registered in the customer information database and performs face authentication (step S102). The face authentication server 20 then transmits the authentication result back to the robot 10 that sent the image (steps S103 to S104).
If the robot 10 receives a response from the face authentication server 20 indicating that face authentication failed, it preferably speaks to the person again and re-photographs them. For example, if face authentication reveals that the seated person is wearing a mask, the robot can output a voice message asking them to remove it.
The client 30 periodically queries the face authentication server 20 for new visitors. On receiving the customer identification internal ID of a new visitor from the face authentication server 20, it retrieves the marketing information previously collected for that person from the marketing information database and transmits it to the robot 10 on the table where the person is seated (steps S121 to S127).
Specifically, the client 30 first queries the customer information database for the latest customer information via the marketing function I/F of the face authentication server 20 (steps S121, S122). In response, the customer information database returns the customer identification internal ID of the new visitor to the client 30 via the marketing function I/F (steps S123, S124). Based on this ID, the client 30 issues a registration or update query to the marketing information database, retrieves the marketing information previously collected for the person (steps S125, S126), and transmits it to the robot 10 on the table where the person is seated (step S127).
On receiving marketing information from the client 30, the robot 10 outputs a voice message tailored to it. For example, when a group that has visited before comes again, the robot might say, "Thank you all for coming back," and when someone who was present last time is absent, it might say, "It's a shame Mr./Ms. XX couldn't make it today." Such messages improve the quality of service.
As the number of images captured by the robot grows, many images of the same person may accumulate. To avoid the person being judged as someone else on their next visit, the face image best suited to face authentication must be selected from the multiple images of the same person, and the latest such image must always be registered in the customer information database as the reference face image. This processing is described with reference to the processing flow of FIG. 9.
In this example, face image registration is performed by the client 30 in two stages: a face image selection process (T1) and a face image registration process (T2). The selection process (T1) narrows the visitors' face images captured by the robot and accumulated in a temporary registration table down to the images best suited for face authentication. To improve authentication accuracy, the face image with the best orientation and size at capture is selected for each person, and a temporary registration candidate table holding a face image list for each member of the visiting group is created. The registration process (T2) registers the reference face images in the customer information database of the face authentication server 20 based on the face image list of selected images. The captured face images (JPEG files) are assumed to be stored in the temporary registration table as temporarily registered face images, together with the date and time each person entered the establishment.
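The per-person choice in T1, preferring a frontal, large face, can be sketched as a simple scoring pass over the candidates. The field names and the weighting are illustrative assumptions, not values from the patent:

```python
def select_reference_face(candidates):
    """Pick the temporarily registered face image best suited as the
    reference face image for one person.

    candidates: list of dicts with 'face_size' (pixel size of the
    detected face) and 'yaw_deg' (face orientation; 0 means frontal).
    """
    def score(c):
        # Larger faces score higher; deviation from frontal is penalized.
        return c["face_size"] - 2.0 * abs(c["yaw_deg"])
    return max(candidates, key=score)
```

In practice the score would come from the face detector's own quality metrics, but any monotone combination of size and frontalness serves the same selection role.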
 In the face image selection process (T1), the number of groups that visited the store is first obtained from the temporary registration table by grouping the temporarily registered face images by store-entry date and time (step S201). The following processing (steps S203 to S205) is then repeated once per group (step S202).
 The number of visitors in the group is read from the temporary registration table (step S203). Next, the number of temporarily registered face image records is calculated from the temporary registration table (step S204); it can be obtained by aggregating the records by store-entry date and time and table ID. The following processing (steps S206 to S217) is then repeated once per temporarily registered face image record (step S205).
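The grouping and counting in steps S201 and S204 can be sketched as follows. This is a minimal illustration only; the record field names (`entry_time`, `table_id`, `image`) are hypothetical and do not come from the specification:

```python
from collections import defaultdict

def count_groups(records):
    """Step S201: group temporary registrations by store-entry datetime."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["entry_time"]].append(rec)
    return groups

def count_images_per_group(records):
    """Step S204: aggregate face-image records by entry datetime and table ID."""
    counts = defaultdict(int)
    for rec in records:
        counts[(rec["entry_time"], rec["table_id"])] += 1
    return counts

records = [
    {"entry_time": "2019-03-05 18:00", "table_id": 1, "image": "a.jpg"},
    {"entry_time": "2019-03-05 18:00", "table_id": 1, "image": "b.jpg"},
    {"entry_time": "2019-03-05 19:30", "table_id": 2, "image": "c.jpg"},
]
num_groups = len(count_groups(records))            # 2 visiting groups
image_counts = count_images_per_group(records)     # per-group image tallies
```

The outer loop of step S202 would then iterate over `count_groups(records)`, and the inner loop of step S205 over the records of each group.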
 A temporarily registered face image (JPEG file) is read from the temporary registration table (step S206). Face detection (step S207) and face attribute extraction (step S208) are then performed on the image, and the face detection conditions (orientation, size, and so on) are evaluated (step S209).
 If step S209 determines that the detection conditions are not good, the face image is discarded and processing moves on to the next face image (step S206 onward).
 If step S209 determines that the detection conditions are good, facial landmark detection (step S210) and facial feature extraction (step S211) are performed, after which the extracted features are compared with the registered face data held in internal memory (face matching, step S212) and the facial similarity (confidence) is evaluated (step S213). Note that no registered face data exists in internal memory at the first matching pass, but such data does exist from the second pass onward.
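The gate in step S209 and the matching in steps S212 to S213 could be outlined as below. The size and yaw thresholds and the cosine-similarity measure are assumptions for illustration; the specification does not name a particular feature comparison method:

```python
import math

def detection_ok(face, min_size_px=80, max_yaw_deg=20):
    """Step S209 (assumed criteria): reject faces that are small or turned away."""
    return face["size_px"] >= min_size_px and abs(face["yaw_deg"]) <= max_yaw_deg

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def best_match(feature, registered):
    """Step S212: compare a feature vector with registered face data in memory.
    Returns (customer_id, similarity); (None, 0.0) when nothing is registered
    yet, as on the first matching pass."""
    best_id, best_sim = None, 0.0
    for cid, reg_feature in registered.items():
        sim = cosine_similarity(feature, reg_feature)
        if sim > best_sim:
            best_id, best_sim = cid, sim
    return best_id, best_sim
```

Step S213 would then compare the returned similarity against a threshold to decide between the new-candidate and update branches described next.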
 If step S213 determines that the similarity is low, a new customer identification ID is issued for the face image as a registration candidate, the image is added to the temporary registration candidate table together with its face orientation (angle), size, similarity, and so on (step S217), and processing moves on to the next face image (step S206 onward).
 If step S213 determines that the similarity is high, the customer identification ID corresponding to the person in the face image and the associated search conditions (the detection conditions of the registration candidate with the similar face) are read from the temporary registration candidate table (step S214), and it is determined whether the detection conditions of the current face image (orientation, size, and so on) are better than those of the registration candidate (step S215).
 If step S215 determines that the detection conditions are not better than those of the registration candidate, the face image is discarded and processing moves on to the next face image (step S206 onward).
 If step S215 determines that the detection conditions are better, the search conditions associated with the customer identification ID are updated with the detection conditions of the current face image (step S216), and processing moves on to the next face image (step S206 onward).
 When all groups have been processed, the face image selection process (T1) ends and the face image registration process (T2) begins.
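Steps S213 to S217, which either create a new candidate or replace an existing one when the current image is better, might look like this in outline. The "better detection conditions" comparison used here (larger face and no more turned away) is one plausible reading of step S215, not a definition from the specification:

```python
import itertools

_next_id = itertools.count(1)   # issues sequential customer IDs (hypothetical format)

def update_candidate_table(candidates, face, matched_id, similarity,
                           sim_threshold=0.8):
    """One pass of steps S213-S217 over the temporary registration candidate
    table. `face` carries the detection conditions (size, yaw) and the image."""
    if matched_id is None or similarity < sim_threshold:
        # Step S217: low similarity -> issue a new customer ID, add the face.
        cid = "CUST{:04d}".format(next(_next_id))
        candidates[cid] = face
        return cid
    current_best = candidates[matched_id]               # step S214
    better = (face["size_px"] > current_best["size_px"]
              and abs(face["yaw_deg"]) <= abs(current_best["yaw_deg"]))
    if better:                                          # steps S215-S216
        candidates[matched_id] = face
    return matched_id
```

Running the loop over every temporarily registered image leaves, for each person, only the best-conditioned face in the candidate table.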
 In the face image registration process (T2), the list of new face registration candidates is read from the temporary registration candidate table (step S221).
 The following processing (steps S223 to S227) is then repeated once per entry in the list (step S222).
 A candidate face image (JPEG file) is read from the temporary registration candidate table (step S223), and the face authentication server 20 is asked to register it as a new face (step S224). On receiving the registration request, the face authentication server 20 registers the face image in the customer information database as a reference face image (step S225) and then returns the registration result to the client 30 (steps S226 to S227).
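The client–server exchange in steps S222 to S227 reduces to a loop that submits each candidate and collects the result. The `register_fn` callback below stands in for the face authentication server's registration API, whose actual interface the specification does not define:

```python
def register_candidates(candidates, register_fn):
    """Steps S222-S227: request registration of each candidate face image and
    collect the per-candidate result returned by the server."""
    results = {}
    for cid, face in candidates.items():
        results[cid] = register_fn(cid, face)   # steps S223-S224, S226-S227
    return results

# Stand-in for the server side (step S225): store the image as the reference face.
customer_db = {}

def fake_register(cid, face):
    customer_db[cid] = face["image"]
    return "registered"

outcome = register_candidates(
    {"CUST0001": {"image": "a.jpg"}, "CUST0002": {"image": "b.jpg"}},
    fake_register,
)
```

In the real system the callback would be a network request to the face authentication server 20, and the stored value would be the reference face image in the customer information database.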
 The above processing optimizes the reference face images in the customer information database that the face authentication server 20 consults during face authentication: from the many captured images, those satisfying conditions such as "facing forward" and "face captured at a large size" are selected and registered as reference face images.
 As described above, the face matching system of this example comprises a robot 10 that is placed on a table and carries a camera rotatable in the horizontal plane, and a face authentication server 20 that matches the face images contained in images captured by the robot 10 against a database and retrieves, from the customer information database, attribute information describing the person corresponding to each face image. The robot 10 sends each captured image to the face authentication server 20 together with the shooting angle, i.e. the horizontal camera angle at the time of capture. Based on the captured image and shooting angle received from the robot 10, the face authentication server 20 calculates a person angle, i.e. the horizontal angle of each face image relative to the robot 10, and outputs it in association with the attribute information for that face image.
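The person-angle computation can be approximated by adding, to the shooting angle, an offset proportional to how far the face center sits from the image center. This linear sketch assumes a horizontal field of view (60° here) that the specification does not state:

```python
def person_angle(shooting_angle_deg, face_center_x, image_width_px, h_fov_deg=60.0):
    """Estimate the horizontal angle of a face relative to the robot.
    A face at the image center lies exactly at the shooting angle; a face at
    the right edge is offset by half the (assumed) horizontal field of view."""
    offset = (face_center_x - image_width_px / 2) / image_width_px * h_fov_deg
    return (shooting_angle_deg + offset) % 360
```

A pinhole-camera model would use an arctangent of the pixel offset instead; the linear form above is adequate for the modest fields of view typical of such cameras.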
 With this configuration, not only can each person's face be authenticated using images captured from a position close to the eye level of those seated at the table, but each person can also be distinguished by seating position (the angle seen from the robot) when their attribute information is retrieved. Each person around the table can therefore be identified accurately.
 Furthermore, in the face matching system of this example, when multiple people are present around the table, the face authentication server 20 generates a virtual panoramic image covering a horizontal range wider than the camera's angle of view from multiple captured images taken at different shooting angles, and outputs the attribute information and person angle for each face image contained in the virtual panoramic image. Face images whose person angles and whose age and gender attributes match are treated as belonging to the same person when the virtual panoramic image is generated. The seating position of each member of a group can therefore be identified and the group's composition and layout grasped accurately, which prevents miscounting the group size and duplicating people within the same group.
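The same-person rule used when stitching the virtual panoramic image — matching person angles plus matching age and gender — can be sketched as a simple deduplication pass. The 10° angle tolerance is an assumed parameter, not a value from the specification:

```python
def merge_panorama_faces(detections, angle_tol_deg=10.0):
    """Treat detections whose person angles agree within a tolerance and whose
    age and gender attributes match as the same person, keeping one entry each."""
    people = []
    for det in detections:
        for p in people:
            # shortest angular distance, robust to the 0/360 wrap-around
            diff = abs((det["angle"] - p["angle"] + 180.0) % 360.0 - 180.0)
            if (diff <= angle_tol_deg and det["age"] == p["age"]
                    and det["gender"] == p["gender"]):
                break                  # duplicate of an already-kept person
        else:
            people.append(det)
    return people
```

The length of the returned list is the deduplicated group size, avoiding the double counting that overlapping shots would otherwise cause.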
 In addition, the face authentication server 20 periodically regenerates the virtual panoramic image. Even if people join or leave the table or swap seats, the latest composition and layout of the group can therefore be grasped accurately.
 Moreover, the camera-equipped robot 10 has a voice output function for playing voice messages. It can thus prompt those seated at the table to look toward the camera or to remove a mask, further improving the accuracy of face authentication.
 Although the present invention has been described in detail above, it is needless to say that the invention is not limited to the system described here and can be applied widely to other systems. For example, it can be applied not only to facilities with shared dining tables such as ordinary restaurants, but also to places being considered for unmanned operation, such as company reception desks and facility entrances. Installed in rest areas of public facilities, it could also be used to improve communication through the robot's voice prompts and to analyze actual user behavior (marketing).
 The present invention can also be provided as, for example, a method or scheme for executing the processing according to the invention, a program for realizing such a method or scheme, or a storage medium storing that program.
 The present invention can be used in face matching systems that match face images of people at service points requiring communication, such as reception, entrances, checkout, and customer service, in facilities of all kinds including restaurants, retail stores, shopping facilities, accommodation, office buildings, and public facilities.
 10: robot, 20: face authentication server, 30: client, 40: hub

Claims (14)

  1.  A face matching system for matching face images of people, comprising:
     a camera configured to be rotatable in a horizontal plane; and
     a face authentication server that matches a face image contained in an image captured by the camera against a database and acquires, from the database, attribute information indicating an attribute of the person corresponding to the face image,
     wherein the camera transmits, to the face authentication server, a captured image and a shooting angle that is the horizontal camera angle at the time of capture, and
     the face authentication server calculates, based on the captured image and the shooting angle received from the camera, a person angle that is the horizontal angle, relative to the camera, of the face image contained in the captured image, and outputs the person angle in association with the attribute information for the face image.
  2.  The face matching system according to claim 1, wherein, when a plurality of people are present, the face authentication server generates, from a plurality of captured images taken at different shooting angles, a virtual panoramic image covering a horizontal range wider than the camera's angle of view, and outputs, for each face image contained in the virtual panoramic image, attribute information or a person angle for that face image.
  3.  The face matching system according to claim 2, wherein the face authentication server calculates, for each face image contained in the plurality of captured images taken at different shooting angles, the person angle for that face image, and generates the virtual panoramic image by treating a plurality of face images with matching person angles as face images of the same person.
  4.  The face matching system according to claim 2, wherein the attribute information includes age and gender, and the face authentication server acquires, from the database, attribute information for each face image contained in the plurality of captured images taken at different shooting angles, and generates the virtual panoramic image by treating a plurality of face images whose age and gender in the attribute information match as face images of the same person.
  5.  The face matching system according to claim 3, wherein the attribute information includes age and gender, and the face authentication server acquires, from the database, attribute information for each face image contained in the plurality of captured images taken at different shooting angles, and generates the virtual panoramic image by treating a plurality of face images whose age and gender in the attribute information match as face images of the same person.
  6.  The face matching system according to claim 2, wherein the face authentication server periodically regenerates the virtual panoramic image according to the time of day.
  7.  The face matching system according to claim 3, wherein the face authentication server periodically regenerates the virtual panoramic image according to the time of day.
  8.  The face matching system according to claim 4, wherein the face authentication server periodically regenerates the virtual panoramic image according to the time of day.
  9.  The face matching system according to claim 5, wherein the face authentication server periodically regenerates the virtual panoramic image according to the time of day.
  10.  The face matching system according to claim 1, wherein the camera is mounted on a robot, and the robot has a voice output function for outputting voice messages.
  11.  The face matching system according to claim 2, wherein the camera is mounted on a robot, and the robot has a voice output function for outputting voice messages.
  12.  The face matching system according to claim 3, wherein the camera is mounted on a robot, and the robot has a voice output function for outputting voice messages.
  13.  The face matching system according to claim 4, wherein the camera is mounted on a robot, and the robot has a voice output function for outputting voice messages.
  14.  The face matching system according to claim 5, wherein the camera is mounted on a robot, and the robot has a voice output function for outputting voice messages.
PCT/JP2019/008575 2018-03-20 2019-03-05 Face collation system WO2019181479A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020508154A JP6982168B2 (en) 2018-03-20 2019-03-05 Face matching system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-052164 2018-03-20
JP2018052164 2018-03-20

Publications (1)

Publication Number Publication Date
WO2019181479A1

Family

ID=67987746

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/008575 WO2019181479A1 (en) 2018-03-20 2019-03-05 Face collation system

Country Status (2)

Country Link
JP (1) JP6982168B2 (en)
WO (1) WO2019181479A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113218145A (en) * 2020-01-21 2021-08-06 青岛海尔电冰箱有限公司 Refrigerator food material management method, refrigerator and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001275096A (en) * 2000-03-24 2001-10-05 Sony Corp Image pickup and display device and videoconference device
JP2010109898A (en) * 2008-10-31 2010-05-13 Canon Inc Photographing control apparatus, photographing control method and program



Also Published As

Publication number Publication date
JPWO2019181479A1 (en) 2021-03-18
JP6982168B2 (en) 2021-12-17

Similar Documents

Publication Publication Date Title
US20220375262A1 (en) Object tracking and best shot detection system
JP6854881B2 (en) Face image matching system and face image search system
JP5851651B2 (en) Video surveillance system, video surveillance method, and video surveillance device
US20120207356A1 (en) Targeted content acquisition using image analysis
JP7405200B2 (en) person detection system
AU2016342028A1 (en) Methods and apparatus for false positive minimization in facial recognition applications
US10887553B2 (en) Monitoring system and monitoring method
WO2010001311A1 (en) Networked face recognition system
CN101095149A (en) Image comparison
JP2011039959A (en) Monitoring system
JP7400862B2 (en) information processing equipment
AU2017239587A1 (en) Information processing device, authentication system, authentication method, and program
JP7103229B2 (en) Suspiciousness estimation model generator
KR20180006016A (en) method for searching missing child basedon face recognition AND missing child search system using the same
WO2019181479A1 (en) Face collation system
JP2015233204A (en) Image recording device and image recording method
KR20170007070A (en) Method for visitor access statistics analysis and apparatus for the same
WO2017006749A1 (en) Image processing device and image processing system
US11637994B2 (en) Two-way intercept using coordinate tracking and video classification
US10628682B2 (en) Augmenting gesture based security technology using mobile devices
CN207731302U (en) A kind of challenge system
CN111414799A (en) Method and device for determining peer users, electronic equipment and computer readable medium
WO2023181155A1 (en) Processing apparatus, processing method, and recording medium
US20230084625A1 (en) Photographing control device, system, method, and non-transitory computer-readable medium storing program
CN111652173B (en) Acquisition method suitable for personnel flow control in comprehensive market

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19771841

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020508154

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19771841

Country of ref document: EP

Kind code of ref document: A1