WO2012020591A1 - System for identifying individuals, feature value specification device, feature specification method, and recording medium - Google Patents

System for identifying individuals, feature value specification device, feature specification method, and recording medium

Info

Publication number
WO2012020591A1
Authority
WO
WIPO (PCT)
Prior art keywords
individual
information
feature
feature amount
unit
Prior art date
Application number
PCT/JP2011/062313
Other languages
French (fr)
Japanese (ja)
Inventor
昭裕 早坂 (Akihiro Hayasaka)
Original Assignee
日本電気株式会社 (NEC Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to JP2012528607A
Publication of WO2012020591A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/179: Human faces; metadata-assisted face recognition

Definitions

  • The present invention relates to an individual identification system, a feature quantity specifying device, a feature quantity specifying method, and a recording medium, and in particular to an individual identification system, feature quantity specifying device, feature quantity specifying method, and recording medium that specify a feature quantity for each of a plurality of individuals to be identified.
  • Patent Document 2 describes a technique for simultaneously authenticating a plurality of individuals.
  • The authentication system described in Patent Document 2 includes a weight measurement floor composed of a plurality of weight measurement units, weight sensor units, a controller, and an information processing device. When a plurality of persons to be authenticated pass over the weight measurement floor of this authentication system, each weight sensor unit measures their weight.
  • Patent Document 3 describes a technique for associating each of a plurality of feature amounts output from a plurality of individuals with the individual that output it.
  • the conference system described in Patent Document 3 includes a plurality of microphones, voice recognition means, position specifying means, association means, and synthesis means.
  • the voice recognition means recognizes the voice input to each microphone.
  • the position specifying means specifies the position of the speaker in the captured image.
  • When a plurality of persons are present, the personal recognition system described in Patent Document 1 may determine that a facial feature quantity and a voice feature quantity obtained from different persons are information obtained from the same person. That is, the personal recognition system described in Patent Document 1 can result in erroneous identification.
  • The authentication system described in Patent Document 2 performs authentication based on a single type of feature amount. Therefore, when a plurality of feature amounts are collected, it is difficult for the authentication system described in Patent Document 2 to specify an appropriate feature amount for use in individual authentication for each of a plurality of authentication targets.
  • the conference system described in Patent Document 3 described above associates each feature quantity based on the position where the feature quantity obtained from the image information is detected and the position where the feature quantity obtained from the audio information is detected.
  • First, the conference system identifies the individual by performing individual recognition based on the feature amount obtained from the image information. The conference system then associates the individual and the feature quantity obtained from the image information with the feature quantity obtained from the audio information. That is, since the conference system described in Patent Document 3 cannot identify an individual based on a plurality of feature amounts, it is difficult for it to specify an appropriate feature amount for use in individual identification for each of a plurality of identification targets.
  • An example of an object of the present invention is to provide an individual identification system, a feature amount specifying device, a feature amount specifying method, and a recording medium that specify an appropriate feature amount for use in individual identification for each of a plurality of identification targets.
  • The first individual identification system includes: an environmental information acquisition unit that acquires environmental information, which is information resulting from the environment of a space where a plurality of individuals subject to individual identification processing exist; an individual detection unit that detects the plurality of individuals from the environmental information, together with position information indicating the position of each individual; a feature amount extraction unit that extracts a plurality of feature amounts from the environmental information, together with attribute information indicating the type of each feature amount; a registration unit that determines, for each feature amount, the individual in which the feature amount occurred, based on the position information of each individual and the attribute information of each feature amount; an effective value calculation unit that obtains, for each feature amount extracted from the environmental information, an effective value indicating the quality of that feature amount; and a feature amount specifying unit that specifies, for each detected individual, the feature amounts used for the individual identification processing of that individual, based on the result of the determination and the effective values.
  • The first feature amount specifying method receives information indicating individuals detected from environmental information, which is information derived from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature amounts extracted from the environmental information, and attribute information indicating the attributes of each feature amount; determines, for each feature amount, the individual in which the feature amount occurred, based on the position information and the attribute information; obtains, for each feature amount extracted from the environmental information, an effective value indicating the quality of that feature amount; and specifies, for each detected individual, the feature amounts used for the individual identification processing of that individual, based on the determination result and the effective values.
  • The first recording medium in one aspect of the present invention records a program for causing a computer to execute: a process of receiving information indicating individuals detected from environmental information, which is information derived from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature quantities extracted from the environmental information, and attribute information indicating the attributes of each feature quantity; a process of determining, for each feature quantity, the individual in which the feature quantity occurred, based on the position information and the attribute information; a process of obtaining, for each feature quantity extracted from the environmental information, an effective value indicating the quality of that feature quantity; and a process of specifying, for each detected individual, the feature amounts used for the individual identification processing of that individual, based on the result of the determination and the effective values.
  • One of the effects of the present invention is that it is possible to specify an appropriate feature amount for use in individual identification for each of a plurality of identification targets.
  • FIG. 1 is a block diagram illustrating a configuration example of an individual identification system 1 according to the first embodiment.
  • FIG. 2 is a block diagram illustrating a configuration example of the sensor unit 1000 according to the first embodiment.
  • FIG. 3 is a block diagram illustrating a configuration example of the feature amount determination unit 1103 according to the first embodiment.
  • FIG. 4 is a flowchart illustrating an operation example of the individual identification system 1 according to the first embodiment.
  • FIG. 5 is a block diagram illustrating a configuration example of the individual identification system 2 according to the second embodiment.
  • FIG. 6 is a block diagram illustrating a configuration example of the composite sensor unit 2200 according to the second embodiment.
  • FIG. 7 is a flowchart illustrating an operation example of the individual identification system 2 according to the second embodiment.
  • FIG. 8 is a block diagram illustrating a configuration example of the individual identification system 3 according to the third embodiment.
  • FIG. 9 is a block diagram illustrating a configuration example of the biological information acquisition unit 3200 according to the third embodiment.
  • FIG. 10 is a flowchart illustrating an operation example of the individual identification system 3 according to the third embodiment.
  • FIG. 11 is a block diagram illustrating a configuration example of the individual identification system 4 according to the fourth embodiment.
  • FIG. 12 is a flowchart illustrating an operation example of the individual identification system 4 according to the fourth embodiment.
  • FIG. 13 is a block diagram illustrating a configuration example of the feature quantity specifying device 50 according to the fifth embodiment.
  • FIG. 14 is a diagram illustrating an example of information stored in the database unit 1107.
  • FIG. 1 is a block diagram showing a configuration of an individual identification system 1 according to the first embodiment of the present invention.
  • the individual identification system 1 according to the first embodiment of the present invention includes a sensor unit 1000 and an identification processing device 1100.
  • the configuration of the sensor unit 1000 will be described.
  • the sensor unit 1000 acquires environmental information that is information resulting from the environment of the space in which the individual that is the target of the individual identification process exists.
  • a space in which an individual subject to individual identification processing exists is simply referred to as a real space.
  • the environmental information may include image information in real space.
  • the sensor unit 1000 may include an image information acquisition unit 1001 that acquires real space image information as illustrated in FIG. 2.
  • the environment information may include audio information in real space.
  • The sensor unit 1000 may include an audio information acquisition unit 1002 that acquires audio information in real space, as illustrated in FIG. 2.
  • The image information acquisition unit 1001 may be a video camera capable of capturing video or still images of the real space.
  • the sound information acquisition unit 1002 may be a microphone that can acquire sound in real space.
  • the microphone constituting the audio information acquisition unit 1002 may have a directivity function that can specify the position where the audio is generated.
  • the means for acquiring environment information of the real space is not limited to the above configuration.
  • For example, the environment information of the real space may be acquired by reading environment information held in an external storage device.
  • In the following description, the environment information is assumed to be information including image information and audio information.
  • the configuration of the identification processing device 1100 will be described.
  • the identification processing device 1100 includes an individual detection unit 1101, a feature amount extraction unit 1102, a feature amount determination unit 1103, a feature amount identification unit 1104, a database addition processing unit 1105, a collation unit 1106, and a database unit 1107.
  • The identification processing device 1100 determines the individual in which each feature amount extracted from the environment information occurred. The identification processing device 1100 then obtains, for each feature amount extracted from the environment information, an effective value indicating the quality of that feature amount. Based on the determination result and the calculated effective values, the identification processing device 1100 specifies, for each individual detected from the environment information, the feature amounts used for its identification processing. Therefore, the identification processing device 1100 according to the first embodiment can use, for individual identification processing, feature amounts having high effective values, that is, higher-quality feature amounts, among the feature amounts associated with each individual.
  • each component provided in the identification processing device 1100 according to the first embodiment will be described in detail.
  • The individual detection unit 1101 receives the environmental information of the real space acquired by the sensor unit 1000 from the sensor unit 1000. The individual detection unit 1101 then performs processing to detect, from the received environment information, the individuals to be identified that exist in the real space. The individual detection unit 1101 also specifies the position of each individual, together with the individual, from the received environment information. Specifically, the individual detection unit 1101 may specify the individuals to be identified and their positions by applying background subtraction or pattern matching to the image information included in the acquired environment information. Alternatively, when the individual detection unit 1101 receives information including multi-viewpoint images as environment information, it may use the multi-viewpoint images to specify the individuals to be identified and their three-dimensional positions in the space.
  • In addition, the individual detection unit 1101 may detect an individual to be identified and specify its position as follows.
  • That is, the individual detection unit 1101 may detect a sounding individual and specify its position based on the environmental information received from the sensor unit 1000 and the estimated sound-source position specified by the directional-microphone function of the sensor unit 1000.
  • the individual detection unit 1101 generates position information indicating the position of the specified individual for each detected individual.
  • The individual detection unit 1101 outputs, for each detected individual, information indicating the individual, the position information of the individual, and at least part of the image information and audio information in the environment information corresponding to that position information, to the feature quantity extraction unit 1102. At least part of the image information may be image information including the region of the individual; it may also be video information including the region of the individual. At least part of the audio information may be audio information representing sound estimated to have been generated within a predetermined distance from the position indicated by the individual's position information. In addition, the individual detection unit 1101 outputs, for each detected individual, the information indicating the individual and the individual's position information to a feature amount specifying unit 1104 described later.
  • the feature amount extraction unit 1102 receives information indicating an individual, position information of the individual, image information corresponding to the individual, and audio information from the individual detection unit 1101.
  • the feature amount extraction unit 1102 extracts feature amounts from the received image information and audio information.
  • the feature amount extraction unit 1102 may extract feature amounts based on the color, shape, size, pattern, etc. of the object that can be acquired from the received image information.
  • the feature amount extraction unit 1102 may extract a feature amount based on an action of an object that can be acquired from the received video information.
  • the feature amount extraction unit 1102 may extract a feature amount based on sound emitted from an object that can be acquired from the received audio information.
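As a concrete illustration of the image-based case, the following is a minimal sketch (not from the patent; all names and the dictionary layout are illustrative) of extracting a color-histogram feature value from an image region and tagging it with attribute information:

```python
import numpy as np

def extract_color_histogram(image, bins=8):
    """Extract a normalized per-channel color histogram from an H x W x 3
    uint8 RGB image. A color histogram is one example of a feature amount
    based on the color of an object."""
    hist = []
    for channel in range(3):
        counts, _ = np.histogram(image[..., channel], bins=bins, range=(0, 256))
        hist.append(counts)
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()  # normalize so the region size does not matter

# Example: a feature amount extracted from a dummy 64x64 image region,
# tagged with illustrative attribute information.
region = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
feature = {
    "value": extract_color_histogram(region),
    "attributes": {"source": "image", "kind": "color_histogram"},
}
```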
  • the feature quantity extraction unit 1102 specifies the attribute of the feature quantity when extracting the feature quantity.
  • the attribute of the feature amount may be, for example, the following attribute.
  • (a) The type of information from which the feature quantity was acquired
  • (b) Whether the feature quantity originated from a human
  • (c) Whether the feature quantity was generated by a man or a woman
  • (d) The race of the person who generated the feature quantity
  • (e) The strength of the feature quantity
  • (f) The level of the feature quantity
  • (g) Whether the feature quantity is language
  • (h) How many words the feature quantity contains
  • (i) The position where the feature quantity is estimated to have occurred
  • the feature amount extraction unit 1102 generates attribute information indicating the attribute of the specified feature amount.
  • For example, if the attribute of a feature quantity is (a), the type of information from which the feature quantity was acquired, the feature value extraction unit 1102 specifies whether the feature value was obtained from the audio information or from the image information in the environment information. The feature amount extraction unit 1102 then generates information indicating the specified acquisition source as attribute information. If the attribute of the feature quantity is (b), whether or not the feature quantity originated from a human, the feature quantity extraction unit 1102 analyzes the feature quantity and judges whether it includes information peculiar to humans. Any known method can be applied for this judgment. When the feature quantity includes information unique to a human, the feature quantity extraction unit 1102 generates, as attribute information, information indicating that the feature quantity was generated from a human.
  • If the attribute of the feature quantity is (c), whether the feature quantity was generated by a man or a woman, the feature value extraction unit 1102 analyzes the feature value and judges which of the two generated it. Any known method can be applied for this judgment. When the feature quantity includes information specific to men or to women, the feature quantity extraction unit 1102 generates, as attribute information, information indicating whether the feature quantity was generated by a man or by a woman. Even when the attribute of the feature quantity is other than (a) to (c) above, the feature quantity extraction unit 1102 generates predetermined attribute information in the same manner.
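A minimal sketch of how attribute information along the lines of (a) to (c) and (i) might be assembled for one extracted feature quantity; the dictionary keys and the separate classifiers supplying the judgments are assumptions, not part of the patent:

```python
def make_attribute_info(source_type, is_human=None, sex=None, position=None):
    """Assemble attribute information for one extracted feature amount.

    source_type: (a) the type of information the feature was acquired from,
                 e.g. "image" or "audio".
    is_human:    (b) whether the feature originated from a human, as judged
                 by some separate, known method.
    sex:         (c) "male" or "female", again judged by a separate method.
    position:    (i) the position where the feature is estimated to have occurred.
    """
    attrs = {"source": source_type}
    if is_human is not None:
        attrs["from_human"] = is_human
    if sex is not None:
        attrs["sex"] = sex
    if position is not None:
        attrs["position"] = position
    return attrs

# e.g. attribute information for a voice feature judged to come from a man
# whose estimated sound-source position is near coordinates (1.0, 2.0):
info = make_attribute_info("audio", is_human=True, sex="male", position=(1.0, 2.0))
```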
  • the feature quantity extraction unit 1102 outputs the extracted feature quantity and attribute information of each feature quantity to a feature quantity determination unit 1103 described later.
  • The feature quantity discriminating unit 1103 discriminates, for each feature quantity extracted by the feature quantity extraction unit 1102, the individual in the real space from which that feature quantity originated, and outputs the discrimination result. The feature amount determination unit 1103 also calculates an effective value for each feature amount.
  • the feature amount determination unit 1103 includes a registration unit 1113 that associates an individual with a feature amount, and an effective value calculation unit 1123 that calculates an effective value of each feature amount.
  • the registration unit 1113 receives information indicating an individual and position information of each individual from the individual detection unit 1101.
  • The registration unit 1113 also receives the feature amounts and the attribute information of each feature amount from the feature amount extraction unit 1102. Then, based on the received position information and attribute information, the registration unit 1113 determines the individual in which each received feature amount occurred. Specifically, the registration unit 1113 may associate individuals with feature amounts by the following method. First, the registration unit 1113 obtains the difference between the position of each individual specified by the individual detection unit 1101 based on the environment information and the position where each feature amount specified by the feature amount extraction unit 1102 is estimated to have occurred. The registration unit 1113 then associates an individual with a feature amount when the obtained difference is at or below a predetermined threshold. Alternatively, the registration unit 1113 may associate individuals with feature amounts by the following method.
  • The registration unit 1113 associates an individual with a feature quantity when the change over time of the individual's position corresponds to the change over time of the position where the feature quantity is estimated to occur. Here, "corresponds" may mean that, treating each change as a vector, the vector given by the difference between the two vectors is shorter than a predetermined length.
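The position-based association rule described above can be sketched as follows; this is an illustrative reading under the assumption that positions are coordinate vectors and that each feature carries an estimated origin position as attribute (i):

```python
import numpy as np

def associate_features(individuals, features, threshold=1.0):
    """Associate each feature amount with the individual it is estimated to
    have occurred from, by comparing detected individual positions with the
    position where the feature is estimated to have occurred.

    individuals: dict mapping individual id -> position (coordinate tuple)
    features:    list of dicts whose attribute information carries an
                 estimated origin position under the "position" key
    Returns a list of (individual_id, feature) pairs.
    """
    pairs = []
    for feat in features:
        feat_pos = np.asarray(feat["attributes"]["position"], dtype=float)
        # Find the individual whose detected position is closest to the
        # estimated origin of the feature.
        best_id, best_diff = None, None
        for ind_id, ind_pos in individuals.items():
            diff = np.linalg.norm(np.asarray(ind_pos, dtype=float) - feat_pos)
            if best_diff is None or diff < best_diff:
                best_id, best_diff = ind_id, diff
        # Associate only when the difference is at or below the threshold.
        if best_id is not None and best_diff <= threshold:
            pairs.append((best_id, feat))
    return pairs
```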
  • The individual detection unit 1101 and the feature amount extraction unit 1102 may associate the time at which the position of an individual or the attribute of a feature amount was specified with the information indicating that individual or with that feature amount. The individual detection unit 1101 and the feature amount extraction unit 1102 may then output, to the feature amount determination unit 1103, the information indicating each individual or each feature amount together with the associated time information.
  • the sensor unit 1000 may associate the time when the environment information is acquired with the environment information, and pass the environment information to the individual detection unit 1101 or the feature amount extraction unit 1102.
  • the individual detection unit 1101 or the feature amount extraction unit 1102 may associate the time associated with the received environment information with the position information indicating the position of the individual or the attribute information indicating the attribute of the feature amount. Then, the individual detection unit 1101 or the feature amount extraction unit 1102 may output the position information of the individual associated with the time or the attribute information of the feature amount to the feature amount determination unit 1103.
  • the registration unit 1113 may associate an individual with a feature amount by the following method.
  • Alternatively, the registration unit 1113 may associate individuals with feature values by the following method, using both image information and speech information. That is, the registration unit 1113 may perform speaker estimation based on a feature amount indicating the movement of the lips among the feature amounts included in the image information. Then, the registration unit 1113 associates the feature amount extracted from the image information estimated to show the speaker with the feature amount extracted from the speech information. Finally, the registration unit 1113 may specify the individual to be associated with these mutually associated feature quantities based on the relationship between each feature quantity and the individual.
  • In this way, the registration unit 1113 can perform association with higher accuracy than association based on a simple one-to-one relationship between a feature amount and an individual.
  • In addition to assigning one feature amount to only one individual, the registration unit 1113 may assign one feature amount to a plurality of individuals with probabilistic weighting. For example, the registration unit 1113 may associate one feature amount with a first individual with a weight of 80% probability and with a second individual with a weight of 20% probability.
  • This probability is a probability indicating the likelihood that the feature amount has occurred from the individual. This probability may be calculated by the following method.
  • For example, the registration unit 1113 may calculate the above probability based on the difference between the position specified from the position information of each individual and the position specified from the attribute information of each feature quantity. The registration unit 1113 may assign a higher probability the smaller the difference, and a lower probability the larger the difference. Specifically, the registration unit 1113 may assign probabilities inversely proportional to the difference. The probability may also be determined based on the attribute information of the feature amount. For example, when the attribute information includes information indicating the type of the feature quantity, the registration unit 1113 may weight each probability according to the type of the feature quantity.
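A minimal sketch of this probabilistic weighting, assuming the "inversely proportional to the difference" rule; with individuals at distances 0.5 and 2.0 from the feature's estimated origin, it reproduces the 80%/20% example above:

```python
import numpy as np

def association_probabilities(individual_positions, feature_position, eps=1e-9):
    """Assign one feature amount to several individuals with probabilistic
    weights that are inversely proportional to the position difference."""
    feature_position = np.asarray(feature_position, dtype=float)
    diffs = np.array([
        np.linalg.norm(np.asarray(p, dtype=float) - feature_position)
        for p in individual_positions
    ])
    weights = 1.0 / (diffs + eps)   # smaller difference -> larger weight
    return weights / weights.sum()  # normalize into probabilities

# Two individuals at distances 0.5 and 2.0 receive probabilities 0.8 and 0.2.
probs = association_probabilities([(0.5, 0.0), (2.0, 0.0)], (0.0, 0.0))
```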
  • the feature quantity specifying unit 1104 uses the feature quantity discrimination result and effective value obtained by the feature quantity discrimination unit 1103 to specify the feature quantity used for collation for individual identification processing.
  • the feature quantity specifying unit 1104 refers to the effective value of the feature quantity determined to be generated from one individual for each individual detected by the individual detection unit 1101.
  • the feature amount specifying unit 1104 specifies a feature amount whose effective value referred to is equal to or greater than a predetermined threshold as a feature amount to be used for individual identification processing of the individual.
  • This predetermined threshold may be a predetermined constant.
  • the predetermined threshold may be a value calculated by the feature amount specifying unit 1104 based on the feature amount attribute information.
  • Alternatively, the feature amount specifying unit 1104 may specify the feature amounts used for individual identification for each individual detected by the individual detection unit 1101 by the following method. That is, for each individual detected by the individual detection unit 1101, the feature amount specifying unit 1104 calculates, for each feature amount determined to have occurred from that one individual, the product of the feature amount's effective value and the probability that the feature amount is estimated to have occurred from that one individual. Then, when the calculated product is at or above a predetermined threshold, the feature amount specifying unit 1104 specifies that feature amount as a feature amount used for the individual identification processing of that one individual.
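This product rule can be sketched as follows; the threshold value and the data layout are assumptions for illustration:

```python
def specify_features(candidates, threshold=0.5):
    """Specify the feature amounts to use for one individual's identification:
    keep a feature when the product of its effective value and its association
    probability is at or above the threshold.

    candidates: list of (feature, effective_value, probability) triples for
                features determined to have occurred from this individual.
    """
    return [feature
            for feature, effective_value, probability in candidates
            if effective_value * probability >= threshold]

# e.g. a high-quality feature only weakly associated with the individual
# (0.9 * 0.4 = 0.36) is rejected, while a moderate-quality feature strongly
# associated with it (0.7 * 0.9 = 0.63) is kept.
kept = specify_features([("face", 0.9, 0.4), ("voice", 0.7, 0.9)])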
  • The collation unit 1106 determines whether the correlation values with respect to all the feature amounts stored in the database unit 1107 are at or below a threshold, or whether the distances between feature amounts are at or above a threshold.
  • When that is the case, the matching unit 1106 performs the following processing: it determines that the target individual is not registered in the database unit 1107.
  • The database unit 1107 stores the feature quantity specified by the feature quantity specifying unit 1104 in association with information indicating the individual, registered by the database addition processing unit 1105 described later.
  • FIG. 4 is a flowchart showing the operation of the individual identification system 1 according to the first embodiment of the present invention.
  • the image information acquisition unit 1001 acquires image information included in environment information in the real space.
  • the voice information acquisition unit 1002 acquires voice information included in the environment information in the real space (step S1).
  • the sensor unit 1000 outputs the acquired environment information, that is, image information and audio information to the individual detection unit 1101.
  • the individual detection unit 1101 analyzes image information and audio information output from the sensor unit 1000, detects an individual to be identified, and specifies position information of the individual.
  • The individual detection unit 1101 outputs information indicating each detected individual and the position information of the individual to the feature amount extraction unit 1102 and the feature amount determination unit 1103. Further, for each detected individual, the individual detection unit 1101 outputs, to the feature quantity extraction unit 1102, the information indicating the individual, the position information of the individual, and at least part of the image information and at least part of the audio information in the environment information corresponding to that position information (step S2).
  • For example, the individual detection unit 1101 uses a combination of object detection processing based on background subtraction and pattern matching applied to the image information, three-dimensional position detection processing using multi-viewpoint images, and sound-based position detection of sounding objects using a directional microphone. Through this combined processing, the individual detection unit 1101 detects the individuals to be identified and specifies their position information.
  • The feature quantity extraction unit 1102 extracts feature quantities for identifying each individual from the image information and audio information received from the individual detection unit 1101 and outputs them to the feature quantity specifying unit 1104 (step S3). Specifically, the feature amount extraction unit 1102 may extract feature amounts based on the color, shape, size, pattern, and the like of an object that can be acquired from the image information.
  • the feature amount extraction unit 1102 may extract a feature amount based on an action of an object that can be acquired from video information.
  • the feature amount extraction unit 1102 may extract a feature amount based on sound emitted by an object that can be acquired from audio information.
  • The feature quantity discriminating unit 1103 uses the image information and audio information corresponding to each individual input from the individual detection unit 1101, together with the position information of the individual, to discriminate the individual in which each feature quantity extracted by the feature quantity extraction unit 1102 occurred (step S4).
  • Specifically, the feature quantity discriminating unit 1103 identifies the individual in which each feature quantity occurred by considering the relationship between each individual's spatial position and the direction of the audio signal, the relationship between the individual's movement obtained from the video information and the movement direction of the audio signal, and the like. The feature amount determination unit 1103 then outputs the discrimination results to the feature amount specifying unit 1104. Further, the feature amount determination unit 1103 calculates effective values representing the validity of each individual's image information and audio information input from the individual detection unit 1101, and outputs them to the feature amount specifying unit 1104 (step S5).
  • Examples of effective value calculation methods include quantitative calculation using information such as the signal-to-noise (SN) ratio and signal strength of the detection signal, and calculation by a neural network or a support vector machine using previously collected learning data.
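As one example of the quantitative approach, the sketch below maps a signal-to-noise ratio to an effective value in [0, 1]; the 0 to 40 dB normalization range is an illustrative assumption, not a value from the patent:

```python
import numpy as np

def effective_value_from_snr(signal, noise):
    """Compute an effective value from the SN ratio of a detection signal:
    higher-quality signals receive effective values closer to 1."""
    snr_db = 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
    return float(np.clip(snr_db / 40.0, 0.0, 1.0))

# Example: a clean 440 Hz tone against low-level noise scores near 1.
t = np.linspace(0.0, 1.0, 8000)
tone = np.sin(2 * np.pi * 440.0 * t)
noise = 0.01 * np.random.default_rng(0).standard_normal(t.size)
effective = effective_value_from_snr(tone, noise)
```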
  • The feature quantity specifying unit 1104 uses the plural types of feature quantities input from the feature quantity extraction unit 1102, the discrimination results input from the feature quantity discrimination unit 1103, and the effective value of each signal to specify the feature amounts effective for identifying each target individual (step S6).
  • the identified feature amount is output to the matching unit 1106.
  • The collation unit 1106 collates the feature quantities input from the feature quantity specifying unit 1104 with the plural types of feature quantities stored in the database unit 1107, and calculates a collation score indicating the correlation between the two sets of feature quantities (step S7).
  • the matching unit 1106 calculates a correlation value or a distance for each set of feature values, and calculates a matching score by integrating them.
  • Methods of integrating correlation values or distances include taking the average of the values, taking the maximum, and adding or multiplying the values. Correlation values or distances may also be integrated by a neural network or a support vector machine using learning data prepared in advance.
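A sketch of the simple integration methods named above (average, maximum, sum); a learning-based integrator such as a neural network or SVM would replace this function with a trained model:

```python
import numpy as np

def integrate_matching_score(correlations, method="mean"):
    """Integrate per-feature correlation values into a single matching score."""
    correlations = np.asarray(correlations, dtype=float)
    if method == "mean":
        return float(correlations.mean())
    if method == "max":
        return float(correlations.max())
    if method == "sum":
        return float(correlations.sum())
    raise ValueError(f"unknown integration method: {method}")

# e.g. face and voice correlations of 0.9 and 0.7 average to a score of 0.8.
score = integrate_matching_score([0.9, 0.7], method="mean")
```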
  • The collation unit 1106 determines whether the calculated collation score is at or below a first threshold (or at or above the first threshold when a distance-based collation score is used). That is, the collation unit 1106 determines whether a corresponding individual exists in the database unit 1107 (step S8).
  • If no corresponding individual exists ("YES" (unregistered) in step S8), the database addition processing unit 1105 executes the following processing: it newly registers the feature amounts specified by the feature amount specifying unit 1104 in the database unit 1107 (step S9).
  • Otherwise, the matching unit 1106 determines that a corresponding individual exists in the database unit 1107 ("NO" (registered) in step S8). In this case, the collation by the collation unit 1106 is complete.
  • The collation unit 1106 may additionally have the following function. First, the collation unit 1106 determines whether the calculated collation score is significantly higher than the first threshold (significantly smaller than the first threshold when the score is a distance-based collation score). For this purpose a second threshold is used, which is larger than the first threshold (smaller than the first threshold for a distance-based collation score). The collation unit 1106 determines whether the calculated collation score is at or above this predetermined second threshold. When it is, the collation unit 1106 judges that the reliability of the individual collation is extremely high, and adds the individual's feature amounts to the database unit 1107.
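Putting steps S8 and S9 and the second-threshold rule together, for a correlation-based matching score, gives roughly the following decision logic; the threshold values are illustrative assumptions:

```python
def matching_decision(score, first_threshold=0.6, second_threshold=0.9):
    """Two-threshold decision over a correlation-based matching score.
    (For a distance-based score the comparisons would be reversed.)"""
    if score <= first_threshold:
        # No corresponding individual in the database: register the newly
        # specified feature amounts as a new individual (step S9).
        return "register_new_individual"
    if score >= second_threshold:
        # Reliability of the collation is judged extremely high: the match
        # stands and the individual's feature amounts are added to the database.
        return "matched_and_update_database"
    # Ordinary match: the collation is complete, nothing is added.
    return "matched"
```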
  • The feature amount specifying unit 1104 may also determine whether a feature amount associated with information indicating a certain individual is stored in the database unit 1107. When the feature amount specifying unit 1104 determines that such a feature amount is stored, it may specify, as a feature amount to use for the individual identification processing described above, a feature amount having attribute information corresponding to the attribute information of the stored feature amount.
  • the individual identification system 1 determines whether there is an individual that has not yet been identified among the individuals detected by the individual detection unit 1101 (step S10).
  • If an unidentified individual remains ("YES" in step S10), the steps from the feature determination processing in step S4 are repeated. By this operation, the individual identification system 1 can identify all the individuals existing in the real space.
  • When the individual identification system 1 determines that identification processing has been executed for all individuals detected by the individual detection unit 1101 ("NO" (no unidentified individual) in step S10), the identification processing ends.
  • the individual identification system 1 in the first embodiment determines an individual in which each of the feature amounts extracted from the environmental information has occurred.
  • The individual identification system 1 also obtains, for each feature amount extracted from the environment information, an effective value indicating the quality of that feature amount. Then, based on the determination result and the calculated effective values, the individual identification system 1 specifies, for each individual detected from the environment information, the feature amounts used for its identification processing. Therefore, the individual identification system 1 according to the first embodiment can use, for individual identification processing, feature amounts having high effective values, that is, higher-quality feature amounts, among the feature amounts associated with each individual. As a result, the individual identification system 1 in the first embodiment can specify an appropriate feature amount to be used for individual identification for each of a plurality of identification targets.
  • Furthermore, the individual identification system 1 is robust against environmental changes and can perform accurate individual identification processing even when a plurality of individuals must be identified at the same time.
  • the feature amount determination unit 1103 may be included in the individual detection unit 1101.
  • In this case, when detecting the individuals to be identified from the image information and audio information received from the sensor unit 1000, the individual detection unit 1101 may associate each individual with the image information or audio information in a predetermined region including the individual's position. Then, the individual detection unit 1101 calculates the effective value of the feature amounts included in the image information or audio information based on the signal intensity of the image information or audio information associated with the individual.
  • The composite sensor unit 2200 includes one or more of a shape measuring unit 2201, a weight measuring unit 2202, a calorie measuring unit 2203, a speed measuring unit 2204, an optical property measuring unit 2205, an odor measuring unit 2206, and a material inspection unit 2207, as shown in FIG. 6.
  • the composite sensor unit 2200 is connected to the individual detection unit 1101. That is, the difference of the second embodiment from the first embodiment is that the composite sensor unit 2200 is added to the configuration of the individual identification system. Other components of the individual identification system 2 are the same as those of the individual identification system 1 in the first embodiment.
  • the shape measuring unit 2201 acquires information about the three-dimensional shape and volume of the individual.
  • the weight measuring unit 2202 measures the weight of the individual.
  • the calorie measurement unit 2203 measures the temperature of the individual.
  • the speed measuring unit 2204 measures the speed of the individual.
  • The optical characteristic measurement unit 2205 measures optical characteristics of the individual's surface, such as reflectance, transmittance, and refractive index.
  • the odor measuring unit 2206 measures the odor of the individual.
  • the material inspection unit 2207 acquires information such as hardness and material of the individual surface by infrared spectroscopy, ultrasonic inspection, or the like.
  • various sensor information acquired by the composite sensor unit 2200 is output to the feature amount extraction unit 1102 in the same manner as individual image information and audio information.
  • a feature amount extraction unit 1102 extracts feature amounts from input image information, audio information, and various sensor information.
  • the feature quantity discriminating unit 1103 performs processing for associating the feature quantity extracted from the sensor information with the individual and calculating the effective value of each feature quantity.
  • FIG. 7 is a flowchart showing an example of the operation of the individual identification system 2 according to the second embodiment of the present invention. Regarding this operation, the second embodiment differs from the first embodiment in the following points.
  • First, image information and audio information included in the environment information, together with the sensing data acquired from the composite sensor unit 2200, are given as inputs to the individual detection unit 1101 (step S21).
  • FIG. 8 is a block diagram showing the configuration of the individual identification system 3 according to the third embodiment of the present invention.
  • As shown in FIG. 8, the individual identification system 3 according to the third embodiment differs from the individual identification system 1 according to the first embodiment in a biological information acquisition unit 3200 and a person detection unit 3101. That is, the individual detection unit 1101 of the identification processing device 1100 in the first embodiment is replaced by the person detection unit 3101 of the identification processing device 3100 in the third embodiment.
  • Other components of the individual identification system 3 are the same as those of the individual identification system 1 in the first embodiment.
  • Components of the individual identification system 3 that are the same as those of the individual identification system 1 in the first embodiment are denoted by the same reference numerals as in FIG. 1, and their detailed description is omitted.
  • The biometric information acquisition unit 3200 of the individual identification system 3 according to the third embodiment includes the components shown in FIG. 9.
  • a fingerprint pattern acquisition unit 3202 acquires a fingerprint pattern of a person.
  • the fingerprint pattern acquisition unit 3202 may be configured to acquire a fingerprint pattern using a contact sensor, or may be configured to acquire a fingerprint pattern in a non-contact manner using a camera or the like.
  • a palm print pattern acquisition unit 3203 acquires a palm print and a palm pattern of a person.
  • The palm print pattern acquisition unit 3203 may be configured to acquire the palm print and palm pattern with a contact sensor, or to acquire them without contact using a camera or the like.
  • the palm print pattern acquisition unit 3203 may be configured to simultaneously acquire a fingerprint pattern.
  • the vein pattern acquisition unit 3204 acquires a vein pattern of a person.
  • The vein pattern acquisition unit 3204 may be configured to acquire a vein pattern from a part such as a finger, palm, back of the hand, face, or neck, or from another part. Further, the fingerprint pattern acquisition unit 3202 or the palm print pattern acquisition unit 3203 may also acquire the vein pattern at the same time.
  • the dentition pattern acquisition unit 3205 acquires the shape and arrangement pattern of human teeth.
  • the dentition pattern acquisition unit 3205 may acquire three-dimensional shape information as a dentition pattern in addition to image information captured by a camera.
  • An auricle pattern acquisition unit 3206 acquires a shape pattern of a human ear.
  • the auricle pattern acquisition unit 3206 may be configured to acquire three-dimensional shape information in addition to image information.
  • the gene sequence information acquisition unit 3207 acquires gene sequence information of a person.
  • the gene sequence information acquisition unit 3207 may be configured to acquire gene sequence information from human skin, body hair, body fluid, or the like.
  • the person detection unit 3101 detects a person from the image information acquired by the sensor unit 1000.
  • the person detection unit 3101 detects a person by using face detection for image information, walking pattern detection for video information, and the like.
  • the person detection unit 3101 also has a function of detecting a person from image information and acquiring a person's face pattern and walking pattern.
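One common way to realize the face detection mentioned above, sketched with OpenCV's bundled Haar cascade; the patent does not name a specific detector, so the library and parameters are assumptions:

```python
import cv2  # pip install opencv-python

def detect_faces(image_bgr):
    """Detect person regions in image information via Haar-cascade face
    detection. Returns (x, y, w, h) boxes that can serve as person
    position information."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # scaleFactor and minNeighbors trade recall against false positives.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```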
  • the identification processing device 4100 is a so-called personal computer (PC).
  • the database unit 4107 includes at least communication means such as a network interface and a magnetic storage device.
  • the magnetic storage device stores facial feature amount information and voice feature amount information of a plurality of persons.
  • the magnetic storage device stores at least one or more of face feature amount information and voice feature amount information per person.
  • the magnetic storage device may store a plurality of both feature quantities per person.
  • the facial feature amount information and the voice feature amount information may be managed by a relational database management system (RDBMS).
  • the fourth embodiment is an embodiment in which feature amounts are a face pattern and a voice pattern.
  • However, the individual identification system 4 of the fourth embodiment is naturally applicable to individual identification systems using other feature values as well.
  • The person detection unit 4101 detects that two persons are present in the space by processing such as face detection, and passes the detected face image data and audio data to the feature amount extraction unit 4102 and the feature amount determination unit 4103 (step S42).
  • the feature amount extraction unit 4102 extracts feature amounts for personal identification from the face image data and audio data acquired from the person detection unit 4101 (step S43).
  • the feature amount discriminating unit 4103 uses the face image data acquired from the person detecting unit 4101 to specify the speaker by speaker estimation or the like, and associates the speaker's face image and voice data with the speaker (step S44). Also, the feature amount discriminating unit 4103 calculates the effectiveness of the feature amount extracted from the face image or audio signal from the state of the face image and audio signal acquired from the person detection unit 4101 (step S45).
  • Each component in each embodiment of the present invention can be realized not only in hardware but also by a computer and a program.
  • the program is provided by being recorded on a computer-readable recording medium such as a magnetic disk or a semiconductor memory, and is read by the computer when the computer is started up.
  • the read program causes the computer to function as a component in each of the embodiments described above by controlling the operation of the computer.
  • the computer includes a central processing unit (CPU) on which the individual identification program is read and executed, a storage device (such as a hard disk) that stores feature quantities as a database, and input means such as a camera and a microphone.
  • the individual identification program read into the CPU causes the computer to function as the identification processing device 1100 described in the first embodiment.
  • An example of the effect of the present invention is that it is possible to specify an appropriate feature amount for use in individual identification for each of a plurality of identification targets.
  • (Appendix 1) A feature quantity specifying device comprising: a registration unit that receives information indicating individuals detected from environmental information, which is information resulting from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature quantities extracted from the environmental information, and attribute information indicating the attributes of each feature quantity, and that determines, for each feature quantity, the individual in which the feature quantity occurred, based on the position information and the attribute information; an effective value calculation unit that obtains, for each feature quantity extracted from the environmental information, an effective value indicating the quality of that feature quantity; and a feature quantity specifying unit that specifies, for each detected individual, the feature quantities used for the individual identification processing of that individual, based on the determination result and the effective values.
  • The feature quantity specifying device described above, wherein the feature quantity specifying unit specifies a feature quantity for which the product of the effective value and the probability is at or above a predetermined threshold as a feature quantity used for the individual identification processing of the one individual.
  • The feature quantity specifying device according to appendix 3, wherein the attribute information of each feature amount includes position information indicating the position where the feature amount occurred, and the registration unit calculates, based on the difference between the position specified from the position information of each individual and the position specified from the attribute information of each feature amount, the probability that each feature amount is a feature amount generated from that individual.
  • The feature amount specifying device described above, wherein, when a stored feature amount has attribute information corresponding to the attribute information of a feature amount, the feature amount specifying unit specifies that feature amount as a feature amount used for the individual identification processing of the one individual.
  • A feature amount specifying method that specifies, for each detected individual, the feature amounts used for the individual identification processing of that individual, based on the result of the determination and the effective values. (Appendix 10) Receiving information indicating individuals detected from environmental information, which is information resulting from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature quantities extracted from the environmental information, and attribute information indicating the attributes of each feature quantity, and determining, for each feature quantity, the individual in which the feature quantity occurred, based on the position information and the attribute information.
  • The feature quantity specifying device according to appendix 1, wherein the feature quantity specifying unit stores association information, which is information associating individual position information with feature amount attribute information, and specifies a feature quantity whose attribute information corresponds to the association information including the position information of one individual as a feature quantity used for the individual identification processing of that individual.
  • The feature amount specifying device according to appendix 1, wherein the position information of each individual includes information indicating the movement direction of the individual, the attribute information of each feature amount includes information indicating whether its volume is increasing or decreasing, and the feature quantity specifying unit specifies, when the position information of a first individual includes information indicating a first direction, a feature amount whose attribute information includes information that the volume is increasing as a feature amount used for the individual identification processing of the first individual, and specifies, when the position information of a second individual includes information indicating a second direction opposite to the first direction, a feature amount whose attribute information includes information that the volume is decreasing as a feature amount used for the individual identification processing of that individual.
  • (Appendix 15) An individual identification system comprising the feature quantity specifying device, wherein the individual identification system includes a database unit that stores matching feature amounts used for individual identification, the attribute information of those feature amounts, and information indicating the individual in association with one another, and a collation unit that, based on the feature quantities specified by the feature quantity specifying unit, reads the matching feature amounts used for identifying the associated individual and identifies the individual.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

Appropriate feature values for use in the identification of individuals are specified for each of a plurality of identification targets. A feature value specification device is provided with a registration unit, an effective value calculation unit, and a feature value specification unit. The registration unit receives: information indicating an individual that is an identification target and is detected from environment information originating in the spatial environment where the individual is present; position information indicating the position of each individual; feature values extracted from the environment information; and attribute information indicating the attributes of each feature value. For each feature value, the registration unit uses the position information and the attribute information to determine the individual that generated the feature value. For every feature value extracted from the environment information, the effective value calculation unit obtains an effective value indicating the quality of that feature value. For each individual detected, the feature value specification unit uses the determination result and the effective value to specify each feature value used to identify that individual.

Description

個体識別システム、特徴量特定装置、特徴量特定方法および記録媒体Individual identification system, feature amount specifying device, feature amount specifying method, and recording medium
 本発明は、個体識別システム、特徴量特定装置、特徴量特定方法および記録媒体に関し、特に識別対象の複数の個体のそれぞれに対して特徴量を特定する個体識別システム、特徴量特定装置、特徴量特定方法および記録媒体に関する。 The present invention relates to an individual identification system, a feature quantity specifying device, a feature quantity specifying method, and a recording medium, and in particular, an individual identification system, a feature quantity specifying device, and a feature quantity for specifying a feature quantity for each of a plurality of individuals to be identified. The present invention relates to a specifying method and a recording medium.
Techniques are known that perform authentication using a plurality of feature amounts when identifying an individual. Such techniques, however, do not consider the quality of the feature detection signals used for the matching performed in individual identification. Therefore, in a situation where the measurement conditions are poor and the signal-to-noise ratio (SN ratio) is low, for example, the error rate of matching that uses all of the features is higher than the error rate of matching that uses only the features with good detection signal quality.
An example of a technique for solving this problem is described in Patent Document 1.
The personal recognition system described in Patent Document 1 includes an acquisition unit, a collection unit, a calculation unit, a specifying unit, and a recognition unit. The acquisition unit acquires environment information of a space where a person exists. The collection unit collects a plurality of types of feature amounts from the environment information acquired by the acquisition unit. The calculation unit obtains an effective value for each feature amount collected by the collection unit. The specifying unit specifies the feature amounts to be used for personal recognition processing based on the effective values calculated by the calculation unit. The recognition unit performs personal recognition processing using the feature amounts specified by the specifying unit.
The effective value is a numerical value representing the quality of a feature detection signal. For example, it is an index that makes it possible to selectively use features with better detection signal quality for matching when the measurement conditions are poor and the SN ratio is low.
The personal recognition system having such a configuration operates as follows.
That is, the acquisition unit acquires environment information, and the collection unit collects a plurality of types of feature amounts from the acquired information. The calculation unit then calculates the effective value of each feature, the specifying unit specifies the feature amounts to be used for personal recognition processing based on the calculated effective values, and the recognition unit performs personal recognition processing using the specified feature amounts.
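To make this flow concrete, the following Python sketch illustrates effective-value-based feature selection of this kind; the data structure, threshold, and names are hypothetical, since the patent defines no code.

```python
# Minimal sketch of effective-value-based feature selection, assuming a
# hypothetical Feature record; the patent itself defines no concrete API.
from dataclasses import dataclass

@dataclass
class Feature:
    kind: str          # e.g. "face", "voice"
    vector: list       # the extracted feature vector
    effective: float   # quality score in [0, 1]

def select_features(features, threshold=0.5):
    """Keep only the features whose effective value clears the threshold."""
    return [f for f in features if f.effective >= threshold]

features = [
    Feature("face", [0.1, 0.9], effective=0.8),
    Feature("voice", [0.4, 0.2], effective=0.3),  # noisy signal, low quality
]
print([f.kind for f in select_features(features)])  # -> ['face']
```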
Patent Document 2 describes a technique for simultaneously authenticating a plurality of individuals.
The authentication system described in Patent Document 2 includes a weight measurement floor composed of a plurality of weight measurement units, weight sensor units, a controller, and an information processing apparatus. When a plurality of persons to be authenticated pass over the weight measurement floor, each weight sensor unit measures their weight, and the measurement results are reported to the information processing apparatus via the controller. The information processing apparatus separates the information about each person and obtains walking data such as the movement of the center of gravity. It then compares these data with stored data and, if they match, grants authentication to that person.
Patent Document 3 describes a technique for associating each of a plurality of feature amounts output from a plurality of individuals with the individual that output that feature amount.
The conference system described in Patent Document 3 includes a plurality of microphones, voice recognition means, position specifying means, associating means, and synthesizing means. The voice recognition means recognizes the voice input to each microphone, and the position specifying means specifies the position of the speaker in the captured image. The associating means associates the utterance text information obtained by the voice recognition means with user position information indicating the position of the speaker specified by the position specifying means. The synthesizing means synthesizes a character image of the utterance text into the portion of the captured image corresponding to the position indicated by the user position information.
JP 2006-293644 A
JP 2005-078228 A
JP 2009-194857 A
The personal recognition system described in Patent Document 1 assumes only the case where the feature amounts included in the environment information acquired by the acquisition unit originate from a single recognition target. Therefore, when a plurality of recognition targets exist in the space from which the environment information is acquired, the system regards all of the feature amounts collected by the collection unit as having been obtained from a single target. It is consequently difficult for the system to specify, for each of a plurality of recognition targets, appropriate feature amounts to be used for individual recognition.
For example, suppose that two persons exist in the space from which the environment information is acquired, that only a facial feature amount is obtained from one person and only a voice feature amount from the other, and that both feature amounts have high effective values. In this case, the system of Patent Document 1 judges that the facial feature amount and the voice feature amount were obtained from the same person, which results in erroneous identification.
The authentication system described in Patent Document 2 performs authentication based on a single type of feature amount. Therefore, when a plurality of feature amounts have been collected, it is difficult for the system to specify, for each of a plurality of authentication targets, appropriate feature amounts to be used for individual authentication.
The conference system described in Patent Document 3 associates feature amounts with one another based on the position at which the feature amount obtained from the image information was detected and the position at which the feature amount obtained from the audio information was detected. The system identifies an individual by performing individual recognition based on the feature amount obtained from the image information, and then associates that individual with the image feature amount and the audio feature amount. Since the system cannot identify an individual based on a plurality of feature amounts, it is difficult for it to specify, for each of a plurality of identification targets, appropriate feature amounts to be used for individual identification.
An object of the present invention is to provide an individual identification system, a feature amount specifying device, a feature amount specifying method, and a recording medium that specify, for each of a plurality of identification targets, appropriate feature amounts to be used for individual identification.
A feature amount specifying device according to one aspect of the present invention includes: a registration unit that receives information indicating individuals detected from environment information, which is information derived from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature amounts extracted from the environment information, and attribute information indicating the attributes of each feature amount, and that determines, for each feature amount, the individual from which the feature amount originated based on the position information and the attribute information; an effective value calculation unit that obtains, for each feature amount extracted from the environment information, an effective value indicating the quality of the feature amount; and a feature amount specifying unit that specifies, for each detected individual, the feature amounts to be used for the individual identification processing of that individual based on the determination results and the effective values.
An individual identification system according to one aspect of the present invention includes: an environment information acquisition unit that acquires environment information, which is information derived from the environment of a space where a plurality of individuals subject to individual identification processing exist; an individual detection unit that detects the plurality of individuals from the environment information together with position information indicating the position of each individual; a feature amount extraction unit that extracts a plurality of feature amounts from the environment information together with attribute information indicating the type of each feature amount; a registration unit that determines, for each feature amount, the individual from which the feature amount originated based on the position information of each individual and the attribute information of each feature amount; an effective value calculation unit that obtains, for each feature amount extracted from the environment information, an effective value indicating the quality of the feature amount; a feature amount specifying unit that specifies, for each detected individual, the feature amounts to be used for the individual identification processing of that individual based on the determination results and the effective values; and a collation unit that performs individual identification processing for each detected individual based on the specified feature amounts.
A feature amount specifying method according to one aspect of the present invention includes: receiving information indicating individuals detected from environment information, which is information derived from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature amounts extracted from the environment information, and attribute information indicating the attributes of each feature amount; determining, for each feature amount, the individual from which the feature amount originated based on the position information and the attribute information; obtaining, for each feature amount extracted from the environment information, an effective value indicating the quality of the feature amount; and specifying, for each detected individual, the feature amounts to be used for the individual identification processing of that individual based on the determination results and the effective values.
A recording medium according to one aspect of the present invention records a program that causes a computer to execute: a process of receiving information indicating individuals detected from environment information, which is information derived from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature amounts extracted from the environment information, and attribute information indicating the attributes of each feature amount, and determining, for each feature amount, the individual from which the feature amount originated based on the position information and the attribute information; a process of obtaining, for each feature amount extracted from the environment information, an effective value indicating the quality of the feature amount; and a process of specifying, for each detected individual, the feature amounts to be used for the individual identification processing of that individual based on the determination results and the effective values.
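As a rough, non-authoritative illustration of the device and method described above, the following Python sketch combines a registration step, an effective-value function, and a threshold; every name, signature, and constant is hypothetical.

```python
# Hedged sketch of the three-unit device: registration attributes each
# feature to an individual, and the specifying step keeps only features
# whose effective value clears a threshold. Nothing here is a concrete
# API from the patent.
def specify_feature_amounts(individuals, features, registration, effective,
                            threshold=0.5):
    """For each detected individual, keep the features that (a) registration
    attributed to it and (b) have a sufficient effective value."""
    selected = {ind: [] for ind in individuals}
    for feat in features:
        owner = registration(feat, individuals)  # which individual emitted it
        if owner is not None and effective(feat) >= threshold:
            selected[owner].append(feat)
    return selected

# Toy usage: features carry (kind, estimated position, quality);
# individuals are represented by their detected positions.
inds = [(0.0, 0.0), (5.0, 0.0)]
feats = [("face", (0.2, 0.1), 0.9), ("voice", (4.8, 0.3), 0.2)]
nearest = lambda f, xs: min(
    xs, key=lambda p: (p[0] - f[1][0]) ** 2 + (p[1] - f[1][1]) ** 2)
out = specify_feature_amounts(inds, feats, nearest, effective=lambda f: f[2])
print(out)  # only the high-quality "face" feature survives the threshold
```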
One of the effects of the present invention is that it is possible to specify appropriate feature amounts for use in individual identification for each of a plurality of identification targets.
FIG. 1 is a block diagram illustrating a configuration example of the individual identification system 1 according to the first embodiment.
FIG. 2 is a block diagram illustrating a configuration example of the sensor unit 1000 according to the first embodiment.
FIG. 3 is a block diagram illustrating a configuration example of the feature amount determination unit 1103 according to the first embodiment.
FIG. 4 is a flowchart illustrating an operation example of the individual identification system 1 according to the first embodiment.
FIG. 5 is a block diagram illustrating a configuration example of the individual identification system 2 according to the second embodiment.
FIG. 6 is a block diagram illustrating a configuration example of the composite sensor unit 2200 according to the second embodiment.
FIG. 7 is a flowchart illustrating an operation example of the individual identification system 2 according to the second embodiment.
FIG. 8 is a block diagram illustrating a configuration example of the individual identification system 3 according to the third embodiment.
FIG. 9 is a block diagram illustrating a configuration example of the biological information acquisition unit 3200 according to the third embodiment.
FIG. 10 is a flowchart illustrating an operation example of the individual identification system 3 according to the third embodiment.
FIG. 11 is a block diagram illustrating a configuration example of the individual identification system 4 according to the fourth embodiment.
FIG. 12 is a flowchart illustrating an operation example of the individual identification system 4 according to the fourth embodiment.
FIG. 13 is a block diagram illustrating a configuration example of the feature amount specifying device 50 according to the fifth embodiment.
FIG. 14 is a diagram illustrating an example of information stored in the database unit 1107.
Next, embodiments for carrying out the invention will be described in detail with reference to the drawings.
[First Embodiment]
FIG. 1 is a block diagram showing a configuration of an individual identification system 1 according to the first embodiment of the present invention.
As shown in FIG. 1, the individual identification system 1 according to the first embodiment of the present invention includes a sensor unit 1000 and an identification processing device 1100.
The configuration of the sensor unit 1000 will be described.
The sensor unit 1000 acquires environmental information that is information resulting from the environment of the space in which the individual that is the target of the individual identification process exists. Hereinafter, a space in which an individual subject to individual identification processing exists is simply referred to as a real space.
The environment information may include image information of the real space. In this case, as a specific configuration, the sensor unit 1000 may include an image information acquisition unit 1001 that acquires image information of the real space, as illustrated in FIG. 2. The environment information may also include audio information of the real space. In this case, as a specific configuration, the sensor unit 1000 may include an audio information acquisition unit 1002 that acquires audio information of the real space, as illustrated in FIG. 2.
Specifically, the image information acquisition unit 1001 may be a video camera capable of capturing video or still images of the real space, and the audio information acquisition unit 1002 may be a microphone capable of acquiring sound in the real space. The microphone constituting the audio information acquisition unit 1002 may have a directivity function that can identify the position where a sound was generated.
Note that a plurality of image information acquisition units 1001 and audio information acquisition units 1002 may be arranged at two or more positions in the real space.
In addition, the means for acquiring the environment information of the real space is not limited to the above configuration. For example, the environment information may be acquired by reading environment information of the real space held in an external storage device.
In the present embodiment, it is assumed that the environment information is information including image information and audio information.
The configuration of the identification processing device 1100 will be described.
The identification processing device 1100 includes an individual detection unit 1101, a feature amount extraction unit 1102, a feature amount determination unit 1103, a feature amount identification unit 1104, a database addition processing unit 1105, a collation unit 1106, and a database unit 1107.
The identification processing device 1100 according to the first embodiment determines, for each feature amount extracted from the environment information, the individual from which the feature amount originated. The identification processing device 1100 then obtains, for each feature amount extracted from the environment information, an effective value indicating the quality of the feature amount, and, based on the determination results and the calculated effective values, specifies the feature amounts to be used for the individual identification processing of each individual detected from the environment information.
Therefore, the identification processing device 1100 according to the first embodiment can preferentially use, among the feature amounts associated with each individual, those with high effective values, that is, those of higher quality, for the individual identification processing.
Hereinafter, each component provided in the identification processing device 1100 according to the first embodiment will be described in detail.
The individual detection unit 1101 receives environmental information of the real space acquired by the sensor unit 1000 from the sensor unit 1000. Then, the individual detection unit 1101 performs a process of detecting an individual to be identified that exists in the real space from the received environment information. The individual detection unit 1101 also identifies the position of the individual together with the individual from the received environment information.
Specifically, the individual detection unit 1101 may identify an individual to be identified and its position by applying a background subtraction method or pattern matching to the image information included in the acquired environment information. Alternatively, when the individual detection unit 1101 receives information including multi-viewpoint images as the environment information, it may use the multi-viewpoint images to identify the individual to be identified and its three-dimensional position in the space. The individual detection unit 1101 may also detect an individual and identify its position as follows: for example, when the sensor unit 1000 includes a directional microphone, the individual detection unit 1101 may detect a sound-emitting individual based on the environment information received from the sensor unit 1000 and the estimated position of the sound source identified by the directional microphone function, and thereby identify the position of that individual.
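The numpy-only sketch below illustrates one of the techniques mentioned above, frame differencing against a background image; the thresholds are illustrative, and a real system would add connected-component labeling, pattern matching, or multi-viewpoint triangulation.

```python
import numpy as np

def detect_individuals(frame, background, diff_thresh=30, min_pixels=50):
    """Return a rough bounding box of the region that differs from the
    background (single-blob simplification; a real system would run
    connected-component labeling to separate multiple individuals)."""
    moving = np.abs(frame.astype(int) - background.astype(int)) > diff_thresh
    ys, xs = np.nonzero(moving)
    if len(xs) < min_pixels:
        return []
    return [(int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))]

bg = np.zeros((120, 160), dtype=np.uint8)
frame = bg.copy()
frame[40:80, 60:100] = 200            # synthetic "individual" in the scene
print(detect_individuals(frame, bg))  # -> [(60, 40, 99, 79)]
```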
In addition, the individual detection unit 1101 generates position information indicating the position of the specified individual for each detected individual.
The individual detection unit 1101 includes, for each detected individual, a feature quantity extraction unit 1102 that includes information indicating the individual, position information of the individual, and image information and audio information of at least a part of the environment information corresponding to the position information. Output to. At least a part of the image information may be image information including a region of the individual. The image information may be video information including the area of the individual. Further, at least a part of the voice information may be voice information indicating a voice estimated to be generated within a predetermined distance from the position indicated by the position information of the individual.
In addition, the individual detection unit 1101 outputs, for each detected individual, information indicating the individual and position information of the individual to a feature amount specifying unit 1104 described later.
The feature amount extraction unit 1102 receives information indicating an individual, position information of the individual, image information corresponding to the individual, and audio information from the individual detection unit 1101. The feature amount extraction unit 1102 extracts feature amounts from the received image information and audio information. Specifically, the feature amount extraction unit 1102 may extract feature amounts based on the color, shape, size, pattern, etc. of the object that can be acquired from the received image information. Alternatively, the feature amount extraction unit 1102 may extract a feature amount based on an action of an object that can be acquired from the received video information. Alternatively, the feature amount extraction unit 1102 may extract a feature amount based on sound emitted from an object that can be acquired from the received audio information.
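As a hedged illustration of such extractors, the toy functions below compute an intensity histogram (an appearance feature) and a zero-crossing rate (a crude audio feature); they are sketches of the general idea, not the patent's method.

```python
import numpy as np

def intensity_histogram(region, bins=8):
    """Toy appearance feature: normalized intensity histogram of a region."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def zero_crossing_rate(samples):
    """Toy audio feature: fraction of sign changes in the waveform."""
    signs = np.sign(np.asarray(samples, dtype=float))
    return float(np.mean(signs[1:] != signs[:-1]))

print(intensity_histogram(np.full((8, 8), 200)))   # mass in one bin
print(zero_crossing_rate([1.0, -1.0, 1.0, -1.0]))  # -> 1.0
```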
The feature amount extraction unit 1102 specifies the attributes of a feature amount when extracting it. The attributes of a feature amount may be, for example, the following.
(A) The type of information from which the feature amount was acquired
(B) Whether the feature amount originated from a human
(C) Whether the feature amount originated from a man or a woman
(D) The race of the person who generated the feature amount
(E) The strength of the feature amount
(F) The level (high or low) of the feature amount
(G) Whether the feature amount is language
(H) Which language the feature amount is in
(I) The position where the feature amount is estimated to have originated
The feature amount extraction unit 1102 generates attribute information indicating the attributes of the specified feature amount. For example, if the attribute is (A) "the type of information from which the feature amount was acquired", the feature amount extraction unit 1102 determines whether the feature amount was obtained from the audio information or from the image information in the environment information, and generates information indicating the identified source as attribute information.
For example, if the attribute is (B) "whether the feature amount originated from a human", the feature amount extraction unit 1102 analyzes the feature amount and judges whether it contains information peculiar to humans. Any known method can be applied for this judgment. When the feature amount contains such information, the feature amount extraction unit 1102 generates, as attribute information, information indicating that the feature amount originated from a human.
For example, if the attribute is (C) "whether the feature amount originated from a man or a woman", the feature amount extraction unit 1102 analyzes the feature amount and judges whether it originated from a man or a woman. Any known method can be applied for this judgment. When the feature amount contains information specific to men or women, the feature amount extraction unit 1102 generates, as attribute information, information indicating that the feature amount originated from a man or a woman.
Even when the attribute of the feature amount is other than (A) to (C) above, the feature amount extraction unit 1102 generates the corresponding attribute information in the same manner.
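A minimal sketch of such an attribute record follows; the classifier is a stub standing in for the "any known method" referred to above, and all names are hypothetical.

```python
# Hypothetical attribute record covering attributes (A), (B), and (I);
# the human check is a placeholder for an actual trained detector.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AttributeInfo:
    source: str                                      # (A) "image" or "audio"
    from_human: Optional[bool] = None                # (B) None if undetermined
    position: Optional[Tuple[float, float]] = None   # (I) estimated origin

def looks_human(feature):
    return True  # stub: a real system would apply a trained classifier

def build_attribute_info(feature, source, position=None):
    return AttributeInfo(source=source,
                         from_human=looks_human(feature),
                         position=position)

print(build_attribute_info([0.1, 0.9], "audio", position=(1.0, 2.0)))
```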
The feature quantity extraction unit 1102 outputs the extracted feature quantity and attribute information of each feature quantity to a feature quantity determination unit 1103 described later.
The feature amount determination unit 1103 determines from which individual in the real space each feature amount extracted by the feature amount extraction unit 1102 was obtained, and outputs the determination results. The feature amount determination unit 1103 also calculates the effective value of each feature amount. Specifically, as illustrated in FIG. 3, the feature amount determination unit 1103 includes a registration unit 1113 that associates individuals with feature amounts, and an effective value calculation unit 1123 that calculates the effective value of each feature amount.
The registration unit 1113 receives information indicating an individual and position information of each individual from the individual detection unit 1101. The registration unit 1113 receives the feature amount and attribute information of each feature amount from the feature amount extraction unit 1102. Then, the registration unit 1113 determines an individual in which each of the received feature amounts has occurred based on the received position information and attribute information.
Specifically, the registration unit 1113 may associate individuals with feature amounts by the following method. First, the registration unit 1113 computes the difference between the position of each individual identified by the individual detection unit 1101 based on the environment information and the position where each feature amount identified by the feature amount extraction unit 1102 is estimated to have originated. The registration unit 1113 then associates an individual with a feature amount when the computed difference is equal to or less than a predetermined threshold.
Alternatively, the registration unit 1113 may associate an individual with a feature amount by the following method. That is, the registration unit 1113 associates an individual with a feature amount when the temporal changes in the position of the individual correspond to the temporal changes in the position where the feature amount is estimated to have originated. Here, "correspond" may mean that, when each change is treated as a vector, the vector given by the difference between those vectors is shorter than a predetermined length.
In this case, the individual detection unit 1101 and the feature amount extraction unit 1102 may associate information indicating the time at which the position of the individual, or the attribute of the feature amount, was identified with the information indicating that individual or with that feature amount, and may output the time-stamped information or feature amounts to the feature amount determination unit 1103.
Alternatively, the sensor unit 1000 may associate the time when the environment information is acquired with the environment information, and pass the environment information to the individual detection unit 1101 or the feature amount extraction unit 1102. The individual detection unit 1101 or the feature amount extraction unit 1102 may associate the time associated with the received environment information with the position information indicating the position of the individual or the attribute information indicating the attribute of the feature amount. Then, the individual detection unit 1101 or the feature amount extraction unit 1102 may output the position information of the individual associated with the time or the attribute information of the feature amount to the feature amount determination unit 1103.
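The two association rules described above (absolute position difference, and per-step displacement vectors) can be sketched as follows, assuming illustrative thresholds and time-aligned position samples:

```python
import math

def associate(individual_pos, feature_pos, max_dist=1.0):
    """First rule: associate when the estimated positions differ by at
    most max_dist (an illustrative threshold)."""
    return math.dist(individual_pos, feature_pos) <= max_dist

def trajectories_match(individual_track, feature_track, max_len=0.5):
    """Second rule: compare per-step displacement vectors of the individual
    and of the feature's estimated origin over time-aligned samples."""
    for (x0, y0), (x1, y1), (u0, v0), (u1, v1) in zip(
            individual_track, individual_track[1:],
            feature_track, feature_track[1:]):
        # Difference between the two displacement vectors for this step.
        dx = (x1 - x0) - (u1 - u0)
        dy = (y1 - y0) - (v1 - v0)
        if math.hypot(dx, dy) >= max_len:
            return False
    return True

print(associate((0, 0), (0.4, 0.3)))                             # True
print(trajectories_match([(0, 0), (1, 0)], [(2, 1), (3, 1.1)]))  # same motion
```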
Alternatively, the registration unit 1113 may associate an individual with a feature amount by the following method. When the attribute of the feature quantity is (b) "whether or not the feature quantity originated from a human being" and the attribute information indicates "human being", the registration unit 1113 associates the individual with the feature quantity as follows. That is, the registration unit 1113 may perform speaker estimation based on the feature amounts indicating the movement of the lips among the feature amounts included in the image information. The registration unit 1113 then associates the feature amount extracted from the image information estimated to show the speaker with the feature amount extracted from the speech information. Finally, the registration unit 1113 may specify the individual to be associated with this pair of feature quantities based on the relationship between each feature quantity and the individual. As a result, the registration unit 1113 can perform the association with higher accuracy than association based on a one-to-one relationship between a feature amount and an individual.
Note that, instead of assigning one feature amount to only one individual, the registration unit 1113 may assign one feature amount to a plurality of individuals with probabilistic weighting. For example, the registration unit 1113 may associate one feature amount with a first individual with a weight of 80% probability and with a second individual with a weight of 20% probability.
This probability indicates the likelihood that the feature amount originated from the individual, and may be calculated by the following method. For example, when the attribute information of a feature quantity includes position information indicating the position where the feature quantity is estimated to have originated, the registration unit 1113 may calculate the probability based on the difference between the position specified by the position information of each individual and the position specified by the attribute information of each feature quantity. For example, the registration unit 1113 may assign a higher probability the smaller the difference, and a lower probability the larger the difference. Specifically, the registration unit 1113 may assign probabilities inversely proportional to the difference.
Further, this probability may be determined based on the attribute information of the feature amount. For example, when the attribute information includes information indicating the type of the feature quantity, the registration unit 1113 may weight each of the above probabilities according to that type.
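A minimal sketch of such probabilistic assignment, assuming 2-D positions and inverse-distance weights normalized over the detected individuals; the type_weight factor, applied after normalization, stands in for the per-type weighting mentioned above and is an illustrative assumption.

```python
import math

def assignment_probabilities(feature_pos, individual_positions,
                             type_weight=1.0, eps=1e-6):
    """Distribute one feature over several individuals with weights inversely
    proportional to the distance between the feature's estimated origin and
    each individual's position, normalized to sum to 1, then scaled by a
    hypothetical per-feature-type weight."""
    inv = [1.0 / (math.hypot(feature_pos[0] - x, feature_pos[1] - y) + eps)
           for (x, y) in individual_positions]  # eps avoids division by zero
    total = sum(inv)
    return [type_weight * w / total for w in inv]

# A feature near the first of two individuals gets most of the weight:
# assignment_probabilities((0, 0), [(0.1, 0), (2.0, 0)]) -> approx. [0.95, 0.05]
```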
The effective value calculation unit 1123 calculates an effective value indicating the quality of a feature amount based on the signal intensity of the image information or sound information from which the feature amount was extracted. Effective value calculation methods include quantitative calculation using the signal-to-noise (SN) ratio and signal intensity, and calculation by a neural network or support vector machine trained on previously collected learning data. The method described in Patent Document 1 may also be applied. These are examples, and the calculation method of the effective value is not limited to the above.
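The following sketch shows one quantitative variant, assuming the effective value is derived from a signal-to-noise ratio and squashed into (0, 1); the logistic mapping and its constants are illustrative choices, not the patent's prescribed formula.

```python
import numpy as np

def effective_value_from_snr(signal, noise_floor):
    """Quantitative effective value from the signal-to-noise ratio:
    higher SNR maps to a quality score in (0, 1) via logistic squashing."""
    power = np.mean(np.square(np.asarray(signal, dtype=float)))
    snr_db = 10.0 * np.log10(power / (noise_floor + 1e-12))
    return 1.0 / (1.0 + np.exp(-(snr_db - 10.0) / 5.0))  # ~0.5 at 10 dB
```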
The feature amount specifying unit 1104 specifies, for each individual detected by the individual detection unit 1101, the feature amounts to be used for identifying that individual from among the feature amounts extracted by the feature amount extraction unit 1102. Specifically, the feature quantity specifying unit 1104 uses the determination results and effective values obtained by the feature quantity determination unit 1103 to specify the feature quantities used for collation in the individual identification processing.
For example, for each individual detected by the individual detection unit 1101, the feature quantity specifying unit 1104 refers to the effective values of the feature quantities determined to have originated from that individual. The feature amount specifying unit 1104 then specifies each feature amount whose effective value is equal to or greater than a predetermined threshold as a feature amount to be used for the identification processing of that individual. This predetermined threshold may be a predetermined constant, or it may be a value calculated by the feature amount specifying unit 1104 based on the attribute information of the feature amount.
When the registration unit 1113 assigns one feature amount to a plurality of individuals with probabilistic weighting, the feature amount specifying unit 1104 may specify the feature amounts used for identification for each individual detected by the individual detection unit 1101 by the following method. That is, for each detected individual, the feature amount specifying unit 1104 calculates the product of the effective value of each feature amount determined to have originated from that individual and the probability that the feature amount originated from that individual. When the calculated product is equal to or greater than a predetermined threshold, the feature amount specifying unit 1104 specifies the feature amount as one used for the identification processing of that individual.
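A compact sketch of this selection step, assuming each candidate feature is represented as a dictionary carrying its attributed individual, its effective value, and, when probabilistic assignment is used, its assignment probability; the field names and the threshold are assumptions.

```python
def select_features(features, threshold=0.6):
    """Keep, per individual, the features whose effective value (or, when a
    probabilistic assignment exists, effective value x probability) clears
    a threshold. `features` is a list of dicts with keys 'individual',
    'effective_value' and an optional 'probability'."""
    selected = {}
    for f in features:
        score = f["effective_value"] * f.get("probability", 1.0)
        if score >= threshold:
            selected.setdefault(f["individual"], []).append(f)
    return selected
```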
The feature quantity specifying unit 1104 may also specify the feature quantities used for the identification processing of each individual by referring to a database unit 1107 described later. Specifically, the feature quantity specifying unit 1104 determines whether a feature quantity associated with information indicating a certain individual is stored in the database unit 1107. When it is, the feature quantity specifying unit 1104 specifies a feature quantity having attribute information corresponding to the attribute information of the stored feature quantity as one used for the identification processing of that individual.
The database unit 1107 stores a plurality of types of feature amounts acquired from various individuals. The types of feature amounts include image features based on texture patterns such as the color, shape, size, and pattern of an object, video features based on the motion of an object, and audio features based on the sounds generated by an object. Each feature amount is registered in the database unit 1107 in association with an individual.
Note that the database unit 1107 may be configured to be installed outside the identification processing device 1100 as long as it is communicably connected to the identification processing device 1100. Each feature amount may be stored in the database unit 1107 in association with information indicating the type of the feature amount. Further, each feature amount may be stored in the database unit 1107 in association with the attribute information of the feature amount. Further, each feature amount may be stored in the database unit 1107 in association with information indicating an individual associated with the feature amount by the feature amount determination unit 1103.
FIG. 14 shows an example of the information stored in the database unit 1107. Referring to FIG. 14, the database unit 1107 stores information indicating an individual, the feature amounts associated with that individual, and the attribute information of each feature amount in association with one another. In FIG. 14, the attribute information of a feature amount is information indicating the position where the feature amount is estimated to have originated.
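One way to hold such records in memory, mirroring the layout of FIG. 14; the identifiers and field names are illustrative, not taken from the figure.

```python
# Each individual maps to a list of (feature vector, attribute information)
# pairs, where the attribute information here is the estimated position of
# origin of the feature.
database = {
    "individual_001": [
        {"feature": [0.12, 0.88, 0.45], "attributes": {"origin": (1.0, 2.0)}},
        {"feature": [0.91, 0.03, 0.22], "attributes": {"origin": (1.1, 2.1)}},
    ],
    "individual_002": [
        {"feature": [0.55, 0.41, 0.67], "attributes": {"origin": (4.0, 0.5)}},
    ],
}
```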
The collation unit 1106 collates the feature amounts specified by the feature amount specifying unit 1104 with the feature amounts stored in the database unit 1107 to identify the individual. When the correlation value between a feature quantity specified by the feature quantity specifying unit 1104 and a feature quantity stored in the database unit 1107 is equal to or greater than a threshold, or when the distance between the feature quantities is equal to or less than a threshold, the collation unit 1106 determines that the similarity between those feature quantities is high. In this case, the collation unit 1106 determines that the individuals associated with the two feature amounts are the same.
The collation unit 1106 calculates the correlation value or the inter-feature distance between each feature amount specified by the feature amount specifying unit 1104 and every feature amount stored in the database unit 1107. The collation unit 1106 then determines whether the correlation values for all stored feature amounts are equal to or less than a certain threshold, or whether the inter-feature distances are all equal to or greater than a certain threshold. When this is the case, the collation unit 1106 determines that the target individual is not registered in the database unit 1107. When the collation unit 1106 determines that the target individual is not registered, it causes the database addition processing unit 1105 described later to register the feature quantity specified by the feature quantity specifying unit 1104 in the database unit 1107 in association with information indicating the individual.
Thus, the next time the individual appears in the real space, the collation unit 1106 can identify the individual by performing collation using the feature amount registered in the database unit 1107.
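The following sketch combines the collation and registration behavior described above, assuming feature quantities are numeric vectors compared by a normalized dot product; the threshold and the new-identifier policy are assumptions.

```python
import numpy as np

def collate(query, database, corr_threshold=0.8):
    """Match a query feature vector against every stored feature; if no
    correlation reaches the threshold, treat the individual as unregistered
    and add the feature under a new identifier."""
    best_id, best_corr = None, -1.0
    q = np.asarray(query, dtype=float)
    for ind_id, records in database.items():
        for rec in records:
            v = np.asarray(rec["feature"], dtype=float)
            corr = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-12))
            if corr > best_corr:
                best_id, best_corr = ind_id, corr
    if best_corr >= corr_threshold:
        return best_id                      # identified as a known individual
    new_id = f"individual_{len(database) + 1:03d}"
    database[new_id] = [{"feature": list(query), "attributes": {}}]
    return new_id                           # registered for future collation
```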
The database addition processing unit 1105 receives from the collation unit 1106 the information indicating an individual that the collation unit 1106 has determined to be unregistered in the database unit 1107, the feature amounts used to identify the individual, and the attribute information of those feature amounts. The database addition processing unit 1105 then stores the received items in the database unit 1107 in association with one another.
FIG. 4 is a flowchart showing the operation of the individual identification system 1 in the first embodiment for carrying out the present invention.
The image information acquisition unit 1001 acquires the image information included in the environment information of the real space, and the voice information acquisition unit 1002 acquires the voice information included in that environment information (step S1). The sensor unit 1000 outputs the acquired environment information, that is, the image information and audio information, to the individual detection unit 1101.
The individual detection unit 1101 analyzes the image information and audio information output from the sensor unit 1000, detects the individuals to be identified, and specifies the position information of each individual. The individual detection unit 1101 outputs information indicating each detected individual and its position information to the feature amount extraction unit 1102 and the feature amount determination unit 1103. Further, for each detected individual, the individual detection unit 1101 outputs to the feature amount extraction unit 1102 the information indicating the individual, the position information of the individual, and at least a part of the image information and audio information of the environment information corresponding to that position information (step S2). Specifically, the individual detection unit 1101 detects the individuals to be identified and specifies their position information by combining object detection processing based on the background difference method and pattern matching applied to the image information, three-dimensional position detection processing using multi-viewpoint images, and position detection of sounding objects using a directional microphone.
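As a sketch of the background-difference portion of this combined processing only, the following uses OpenCV's MOG2 background model (OpenCV 4 API); the pattern matching, multi-viewpoint, and acoustic localization steps are omitted, and min_area is an illustrative filter.

```python
import cv2

def detect_moving_objects(frames, min_area=500):
    """Feed a sequence of frames to a MOG2 background model and return,
    per frame, the bounding boxes of sufficiently large foreground blobs."""
    subtractor = cv2.createBackgroundSubtractorMOG2()
    boxes = []
    for frame in frames:
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes.append([cv2.boundingRect(c) for c in contours
                      if cv2.contourArea(c) >= min_area])
    return boxes
```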
The feature quantity extraction unit 1102 extracts feature quantities for identifying each individual from the image information and audio information received from the individual detection unit 1101, and outputs them to the feature quantity specifying unit 1104 (step S3). Specifically, the feature amount extraction unit 1102 may extract feature amounts based on the color, shape, size, pattern, and the like of an object obtainable from the image information, feature amounts based on the motion of an object obtainable from the video information, or feature amounts based on the sound emitted by an object obtainable from the audio information.
Using the image information and audio information corresponding to each individual and the position information of the individual input from the individual detection unit 1101, the feature quantity determination unit 1103 determines from which individual each of the feature quantities extracted by the feature quantity extraction unit 1102 originated (step S4). Specifically, the feature quantity determination unit 1103 identifies the individual from which each feature quantity originated by considering, for example, the relation between the spatial position of the individual and the direction of the audio signal, and the relation between the motion of the individual obtained from the video information and the movement direction of the audio signal. The feature amount determination unit 1103 then outputs the determination result to the feature amount specifying unit 1104.
Further, the feature amount determination unit 1103 calculates an effective value representing the validity of the image information and audio information of each individual input from the individual detection unit 1101, and outputs it to the feature amount specifying unit 1104 (step S5). Effective value calculation methods include, for example, quantitative calculation using information such as the signal-to-noise ratio and signal strength, and calculation by a neural network or support vector machine using previously collected learning data.
The feature quantity specifying unit 1104 specifies the feature quantities effective for identifying each target individual, using the plural types of feature quantities input from the feature quantity extraction unit 1102, the determination results input from the feature quantity determination unit 1103, and the effective value of each signal (step S6). The specified feature amounts are output to the matching unit 1106. The collation unit 1106 collates each feature quantity input from the feature quantity specifying unit 1104 with the plural types of feature quantities stored in the database unit 1107, and calculates a collation score indicating the correlation between the two feature quantities (step S7). The matching unit 1106 calculates a correlation value or distance for each pair of feature values and integrates them into a matching score. Methods for integrating correlation values or distances include taking the average of the values, taking the maximum value, and adding or multiplying the values. The integration may also be performed by a neural network or support vector machine using learning data prepared in advance. When a system having this function is constructed, at least one of these integration methods may be implemented.
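A minimal sketch of the named integration rules; the learned fusion by a neural network or support vector machine is not shown.

```python
import numpy as np

def integrate_scores(correlations, method="mean"):
    """Fuse per-feature correlation values into one matching score using one
    of the integration rules named in the text (mean, max, sum, product)."""
    c = np.asarray(correlations, dtype=float)
    if method == "mean":
        return float(c.mean())
    if method == "max":
        return float(c.max())
    if method == "sum":
        return float(c.sum())
    if method == "product":
        return float(c.prod())
    raise ValueError(f"unknown integration method: {method}")
```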
The collation unit 1106 determines whether the calculated collation score is equal to or less than a first threshold (or greater than the first threshold when a distance-based collation score is used). That is, the collation unit 1106 determines whether a corresponding individual exists in the database unit 1107 (step S8). When the collation unit 1106 determines that no corresponding individual exists in the database unit 1107 ("YES" (unregistered) in step S8), the database addition processing unit 1105 newly registers the feature amounts specified by the feature amount specifying unit 1104 in the database unit 1107 (step S9). This enables the individual to be identified from the next time onward.
On the other hand, when the calculated matching score is equal to or greater than the first threshold (or less than the first threshold when a distance-based matching score is used), the matching unit 1106 determines that a corresponding individual exists in the database unit 1107 ("NO" (registered) in step S8). In this case, the collation by the collation unit 1106 is complete.
The collation unit 1106 may also have the following function. First, the collation unit 1106 determines whether the calculated collation score is significantly greater than the first threshold (or, for a distance-based collation score, significantly smaller than the first threshold). This determination uses a second threshold, which is a value greater than the first threshold (or smaller than the first threshold for a distance-based collation score).
For example, the collation unit 1106 determines whether the calculated collation score is equal to or greater than a predetermined second threshold. When it is, the collation unit 1106 determines that the reliability of the collation of that individual is extremely high, and additionally registers the individual's feature amount in the database unit 1107.
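The two-threshold policy can be summarized as follows for a correlation-based score (comparisons reverse for a distance-based score); the threshold values are illustrative.

```python
def decide_and_maybe_enroll(score, first_threshold=0.7, second_threshold=0.9):
    """Below the first threshold the individual is treated as unregistered;
    at or above the second, the match is trusted enough to add the fresh
    feature amount to the database; in between, identify only."""
    if score < first_threshold:
        return "unregistered"        # enroll as a new individual
    if score >= second_threshold:
        return "matched_and_update"  # re-enroll the fresh feature amount
    return "matched"                 # identified, database left unchanged
```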
When the collation unit 1106 has the above function, the feature amount specifying unit 1104 may determine whether a feature amount associated with information indicating a certain individual is stored in the database unit 1107. When it is, the feature amount specifying unit 1104 may specify a feature amount having attribute information corresponding to the attribute information of the stored feature amount as one used for the above-described identification processing of that individual.
After the collation by the collation unit 1106 is complete, the individual identification system 1 determines whether any of the individuals detected by the individual detection unit 1101 have not yet been identified (step S10). When there is an unidentified individual ("YES" (there is an unidentified individual) in step S10), the system repeats the processing from the feature amount determination in step S4. By this operation, the individual identification system 1 can identify all the individuals existing in the real space.
On the other hand, when the individual identification system 1 determines that the identification processing has been executed for all the individuals detected by the individual detection unit 1101 ("NO" (no unidentified individual) in step S10), the identification processing ends.
The individual identification system 1 in the first embodiment determines the individual from which each feature amount extracted from the environment information originated, and obtains for each such feature amount an effective value indicating its quality. The individual identification system 1 then specifies, for each individual detected from the environment information, the feature amounts used for its identification processing based on the determination results and the calculated effective values.
Therefore, the individual identification system 1 according to the first embodiment can use, in the identification processing, the feature amounts with high effective values, that is, the higher-quality feature amounts among those associated with each individual. As a result, the individual identification system 1 can specify an appropriate feature amount for individual identification for each of a plurality of identification targets, and can perform accurate identification processing that is robust against environmental changes even when a plurality of individuals must be identified at the same time.
In the first embodiment, the feature amount determination unit 1103 may be included in the individual detection unit 1101. In this case, when detecting an individual to be identified from the image information and audio information received from the sensor unit 1000, the individual detection unit 1101 may associate with the individual the image information or audio information of a predetermined region including the position of the individual. The individual detection unit 1101 then calculates the effective value of the feature amounts contained in that image information or audio information based on its signal intensity. The individual detection unit 1101 may output the information indicating the individual, the position information of the individual, and the associated image information and audio information to the feature amount extraction unit 1102. The feature amount extraction unit 1102 extracts the feature amounts contained in the received image information and audio information, and may output, for each extracted feature amount, the feature amount, its attribute information, and the effective value corresponding to the image information or audio information from which it was extracted to the feature amount specifying unit 1104.
[Second Embodiment]
FIG. 5 is a block diagram showing the configuration of the individual identification system 2 according to the second embodiment of the present invention. As shown in FIG. 5, the individual identification system 2 differs from the individual identification system 1 of the first embodiment in the composite sensor unit 2200; the other components are the same. The sensor unit 1000 is renamed the audio/video acquisition unit 2000 to avoid confusion with the composite sensor unit 2200, but its configuration is the same as that of the sensor unit 1000 in the first embodiment. Components of the individual identification system 2 that are the same as those of the individual identification system 1 are denoted by the same reference numerals as in FIG. 1, and their detailed description is omitted.
In the second embodiment, as shown in FIG. 6, the composite sensor unit 2200 includes one or more of a shape measuring unit 2201, a weight measuring unit 2202, a calorie measuring unit 2203, a speed measuring unit 2204, an optical property measuring unit 2205, an odor measuring unit 2206, and a material inspection unit 2207. The composite sensor unit 2200 is connected to the individual detection unit 1101. That is, the second embodiment differs from the first in that the composite sensor unit 2200 is added to the configuration of the individual identification system; the other components are the same as those of the individual identification system 1 in the first embodiment.
The shape measuring unit 2201 acquires information on the three-dimensional shape and volume of the individual. The weight measuring unit 2202 measures the weight of the individual. The calorie measuring unit 2203 measures the temperature of the individual. The speed measuring unit 2204 measures the speed of the individual. The optical property measuring unit 2205 measures optical properties of the individual's surface such as reflectance, transmittance, and refractive index. The odor measuring unit 2206 measures the odor of the individual. The material inspection unit 2207 acquires information such as the hardness and material of the individual's surface by infrared spectroscopy, ultrasonic inspection, or the like.
The various sensor information acquired by the composite sensor unit 2200 is output to the feature amount extraction unit 1102 in the same manner as the image information and audio information of each individual. The feature amount extraction unit 1102 extracts feature amounts from the input image information, audio information, and sensor information. The feature quantity determination unit 1103 associates the feature quantities extracted from the sensor information with individuals and calculates the effective value of each feature quantity.
FIG. 7 is a flowchart showing an example of the operation of the individual identification system 2 in the second embodiment for carrying out the present invention. Operations similar to those of the individual identification system 1 in the first embodiment are denoted by the same reference symbols as in FIG. 4, and their detailed description is omitted.
The second embodiment differs from the first embodiment in the following points. First, the image information and audio information included in the environment information, together with the sensing data acquired from the composite sensor unit 2200, are given as inputs to the individual detection unit 1101 (step S21). Second, when the individual detection unit 1101 detects individuals from the sensing information, it uses the sensor information obtained from the composite sensor unit in addition to the image information and audio information (step S22).
In the individual identification system 2 of the second embodiment, the composite sensor unit 2200 acquires characteristics of an individual that are difficult to obtain from ordinary image and audio information. Such characteristics can be very useful information for individual identification. By combining feature quantities beyond those obtained from image and audio information, the individual identification system 2 can further improve identification performance.
[Third Embodiment]
FIG. 8 is a block diagram showing the configuration of the individual identification system 3 according to the third embodiment of the present invention. As shown in FIG. 8, the individual identification system 3 differs from the individual identification system 1 of the first embodiment in the biological information acquisition unit 3200 and the person detection unit 3101. That is, the individual detection unit 1101 of the identification processing device 1100 in the first embodiment is replaced by the person detection unit 3101 of the identification processing device 3100 in the third embodiment. The other components are the same as those of the individual identification system 1; components identical to those of the first embodiment are denoted by the same reference numerals as in FIG. 1, and their detailed description is omitted.
The biometric information acquisition unit 3200 of the individual identification system 3 according to the third embodiment includes the components shown in FIG. 9. That is, the biometric information acquisition unit 3200 includes one or more of an iris pattern acquisition unit 3201, a fingerprint pattern acquisition unit 3202, a palmprint pattern acquisition unit 3203, a vein pattern acquisition unit 3204, a dentition pattern acquisition unit 3205, an auricle pattern acquisition unit 3206, and a gene sequence information acquisition unit 3207. The biometric information acquisition unit 3200 is connected to the feature amount extraction unit 1102 and the feature amount determination unit 1103.
An iris pattern acquisition unit 3201 mainly includes an infrared light source and an infrared camera, and acquires an iris pattern of a person. When the iris pattern can be acquired by the video camera constituting the image information acquisition unit 1001, the image information acquisition unit 1001 may acquire the iris pattern.
A fingerprint pattern acquisition unit 3202 acquires a fingerprint pattern of a person. The fingerprint pattern acquisition unit 3202 may be configured to acquire a fingerprint pattern using a contact sensor, or may be configured to acquire a fingerprint pattern in a non-contact manner using a camera or the like.
The palmprint pattern acquisition unit 3203 acquires the palm print and palm pattern of a person. Like the fingerprint pattern acquisition unit 3202, the palmprint pattern acquisition unit 3203 may be configured to acquire the palm print and palm pattern with a contact sensor, or to acquire them without contact using a camera or the like. The palmprint pattern acquisition unit 3203 may also be configured to acquire a fingerprint pattern at the same time.
The vein pattern acquisition unit 3204 acquires the vein pattern of a person. The vein pattern acquisition unit 3204 may be configured to acquire a vein pattern from a part such as a finger, palm, back of the hand, face, or neck, or from another part. The fingerprint pattern acquisition unit 3202 or the palmprint pattern acquisition unit 3203 may also acquire the vein pattern at the same time.
The dentition pattern acquisition unit 3205 acquires the shape and arrangement pattern of human teeth. The dentition pattern acquisition unit 3205 may acquire three-dimensional shape information as a dentition pattern in addition to image information captured by a camera.
An auricle pattern acquisition unit 3206 acquires a shape pattern of a human ear. Similar to the dentition pattern acquisition unit 3205, the auricle pattern acquisition unit 3206 may be configured to acquire three-dimensional shape information in addition to image information.
The gene sequence information acquisition unit 3207 acquires gene sequence information of a person. The gene sequence information acquisition unit 3207 may be configured to acquire gene sequence information from human skin, body hair, body fluid, or the like.
The person detection unit 3101 detects a person from the image information acquired by the sensor unit 1000. The person detection unit 3101 detects a person by using face detection for image information, walking pattern detection for video information, and the like. The person detection unit 3101 also has a function of detecting a person from image information and acquiring a person's face pattern and walking pattern.
The various sensor information acquired by the biological information acquisition unit 3200 is output to the feature amount extraction unit 1102 in the same manner as the image information and audio information acquired by the sensor unit 1000. The feature amount extraction unit 1102 extracts feature amounts from the input sensor information, and the feature quantity determination unit 1103 associates each extracted feature quantity with an individual and calculates its effective value.
FIG. 10 is a flowchart showing an example of the operation of the individual identification system 3 in the third embodiment for carrying out the present invention. Operations similar to those of the first embodiment are denoted by the same reference symbols as in FIG. 4, and their detailed description is omitted.
The third embodiment differs from the first embodiment in the following points. First, because the identification targets are limited to humans, the image information and voice information acquired by the sensor unit 1000 and the biological information acquired by the biological information acquisition unit 3200 are given as the sensing information input to the person detection unit 3101 (step S31). Second, the person detection unit 3101 detects persons from the sensing information and at the same time acquires the face pattern and walking pattern of each detected person (step S32).
The individual identification system 3 of the third embodiment limits the identification target from general objects to humans and performs identification using human biological information, which is effective for personal identification. The sensor unit 1000 acquires image and audio information, and the biological information acquisition unit 3200 acquires the biological information of each person. In addition, the person detection unit 3101 acquires face patterns and walking patterns as biometric information. Personal authentication technology using human biometric information has been widely studied as related technology, and such related techniques may be used for each type of biometric authentication described in this embodiment.
The individual identification system 3 in the third embodiment associates the feature amounts extracted from the image information, audio information, and other biological information with persons, and calculates the effective value of each feature amount. Based on both the association and the effective values, the individual identification system 3 then specifies, for each person detected by the person detection unit 3101, the feature amounts used for that person's identification processing. The individual identification system 3 can thus give priority to feature amounts with high effective values, that is, the higher-quality feature amounts among those associated with each person, and use them in the personal identification processing.
As a result, the individual identification system 3 in the third embodiment can specify an appropriate feature amount for individual identification for each of a plurality of identification targets, and can perform accurate personal identification processing that is robust against environmental changes even when a plurality of persons must be identified at the same time.
[Fourth Embodiment]
FIG. 11 is a block diagram showing a configuration of the individual identification system 4 according to the fourth embodiment of the present invention.
As shown in FIG. 11, the individual identification system 4 in the fourth exemplary embodiment of the present invention includes a sensor unit 4000, an identification processing device 4100, and a database unit 4107. The sensor unit 4000 and the identification processing device 4100, and the identification processing device 4100 and the database unit 4107 are connected to be communicable with each other.
The sensor unit 4000 includes a video camera that acquires video information and a microphone that acquires audio information, each provided with communication means such as a network interface. The sensor unit 4000 may instead be composed of a video camera that can acquire video and audio information simultaneously.
The identification processing device 4100 is a computer, and includes at least a CPU (Central Processing Unit), a memory, and communication means such as a network interface. The identification processing device 4100 may include a reading device for a computer-readable recording medium such as a flexible disk or CD-ROM (Compact Disc Read Only Memory), or a magnetic storage device. The identification processing device 4100 expands program code received through the network interface onto the memory, or reads program code stored on the CD-ROM or the magnetic storage device and expands it onto the memory. By interpreting and executing the expanded program code, the identification processing device 4100 operates as a computer that implements the functions of the person detection unit 4101, the feature amount extraction unit 4102, the feature amount determination unit 4103, the feature amount specifying unit 4104, the database addition processing unit 4105, and the collation unit 4106 shown in FIG. 11. In the fourth embodiment, the identification processing device 4100 is a so-called personal computer (PC).
The database unit 4107 includes at least communication means such as a network interface, and a magnetic storage device. The magnetic storage device stores facial feature amount information and voice feature amount information of a plurality of persons, with at least one of the two types stored per person; it may store plural feature amounts of both types per person. The facial and voice feature amount information may be managed by a relational database management system (RDBMS). The fourth embodiment is an embodiment in which the feature amounts are a face pattern and a voice pattern, but the individual identification system 4 is naturally applicable to individual identification systems using other feature amounts.
The fourth embodiment describes an example in which two persons (hereinafter, person A and person B) are registered in the database unit 4107, and the individual identification system 4 identifies person A and person B in a situation where only person A speaks.
FIG. 12 is a flowchart showing the operation of the individual identification system 4 in the fourth embodiment for carrying out the present invention.
The sensor unit 4000 acquires video information showing person A and person B and voice information of the utterance, and passes them to the person detection unit 4101 (step S41).
The person detection unit 4101 detects that there are two persons in the space by processing such as face detection, and passes the detected face image data and audio data to the feature amount extraction unit 4102 and the feature amount determination unit 4103 (step S42).
The feature amount extraction unit 4102 extracts feature amounts for personal identification from the face image data and audio data acquired from the person detection unit 4101 (step S43).
The feature amount determination unit 4103 uses the face image data acquired from the person detection unit 4101 to specify the speaker by speaker estimation or the like, and associates the speaker's face image and voice data with the speaker (step S44). The feature amount determination unit 4103 also calculates the effectiveness of the feature amounts extracted from the face images and audio signal based on the state of the face image data and audio signal acquired from the person detection unit 4101 (step S45).
Based on the correspondence between feature amounts and speakers obtained by the feature amount determination unit 4103 and on the effectiveness of each feature amount, the feature amount specifying unit 4104 specifies, from the feature amounts acquired by the feature amount extraction unit 4102, those used for collation, and weights them appropriately (step S46).
The collation unit 4106 collates the feature amounts specified by the feature amount specifying unit 4104 with the feature amounts stored in the database unit 4107, and calculates a collation score for one person (step S47).
In the fourth embodiment, since the two persons are assumed to be already registered in the database, the database addition processing by the database addition processing unit 4105 is not executed ("NO" (registered) in step S48). Thereafter, since another person remains unidentified, the collation processing for that person (from step S44) is executed again ("YES" (there is an unidentified person) in step S50). When the collation is completed for both persons, no unidentified person remains, and the processing ends ("NO" (no unidentified person) in step S50).
In the personal recognition system described in Patent Document 1, the image information and audio information acquired by the acquisition unit that acquires the environment information of the space where a person exists are limited to a single object. It is therefore difficult for the personal recognition system of Patent Document 1 to specify appropriate feature amounts for accurate identification in the situation assumed in the fourth embodiment. In contrast, the individual identification system 4 according to the fourth embodiment of the present invention can specify feature amounts suitable for identifying each of a plurality of identification targets even in a situation where two or more persons are present.
[Fifth Embodiment]
FIG. 13 is a block diagram showing a configuration of the feature quantity specifying device 50 according to the fifth embodiment of the present invention.
Referring to FIG. 13, the feature quantity specifying device 50 according to the present embodiment includes a registration unit 5001, an effective value calculation unit 5002, and a feature quantity specifying unit 5003.
The registration unit 5001 receives information indicating the individuals detected from environment information, which is information resulting from the environment of the space where the individuals subject to identification processing exist, and position information indicating the position of each individual. The registration unit 5001 also receives the feature amounts extracted from the environment information and attribute information indicating the attribute of each feature amount. Based on the received position information and attribute information, the registration unit 5001 then determines the individual from which each received feature amount originated.
The effective value calculation unit 5002 obtains an effective value indicating the quality of the feature value for each of the feature values received by the registration unit 5001.
The feature amount specifying unit 5003 performs the following processing based on the determination result by the registration unit 5001 and the effective value calculated by the effective value calculating unit 5002. That is, the feature quantity specifying unit 5003 specifies the feature quantity used for the individual identification process for each individual specified by the information indicating the individual received by the registration unit 5001.
The feature quantity specifying device 50 according to the present embodiment associates individuals with feature quantities based on attribute information indicating the type of each individual subject to identification processing and attribute information indicating the type of each feature quantity extracted from the environment information. Therefore, the feature quantity specifying device 50 of this embodiment can specify an appropriate feature quantity to be used for individual identification for each of a plurality of identification targets.
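A minimal sketch of the device of FIG. 13 under assumed interfaces: nearest-neighbor attribution in the registration step and a fixed effective-value threshold in the selection step, both illustrative simplifications, with effective values assumed to be supplied by an upstream calculation step.

```python
import math

class FeatureSpecifier:
    """Registration (attribute features to individuals) followed by
    selection (keep features whose effective value clears a threshold)."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold

    def register(self, individuals, features):
        # Attribute each feature to the nearest detected individual;
        # individuals carry 'id' and 'pos', features carry 'origin'.
        for f in features:
            f["individual"] = min(
                individuals,
                key=lambda i: math.hypot(i["pos"][0] - f["origin"][0],
                                         i["pos"][1] - f["origin"][1]),
            )["id"]
        return features

    def specify(self, features):
        # Keep, per individual, the features whose precomputed
        # effective value clears the threshold.
        out = {}
        for f in features:
            if f["effective_value"] >= self.threshold:
                out.setdefault(f["individual"], []).append(f)
        return out
```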
Each embodiment described above is a preferred embodiment of the present invention, and the scope of the present invention is not limited to these embodiments; each can be implemented in various modified forms without departing from the gist of the present invention.
For example, an individual identification system may be configured by combining any of the first through fifth embodiments with one another.
Each component in each embodiment of the present invention can be realized not only in hardware but also by a computer and a program. The program is provided recorded on a computer-readable recording medium such as a magnetic disk or semiconductor memory, and is read by the computer when the computer starts up. By controlling the operation of the computer, the read program causes the computer to function as the components of each embodiment described above. In this case, the computer includes a central processing unit (CPU) that reads and executes the individual identification program, a storage device (such as a hard disk) that stores the feature amounts as a database, and input means such as a camera and a microphone.
Taking the first embodiment of the present invention as an example, the individual identification program read into the CPU causes the computer to function as the identification processing device 1100 described in the first embodiment.
An example of the effect of the present invention is that it is possible to specify an appropriate feature amount for use in individual identification for each of a plurality of identification targets.
This application claims priority based on Japanese Patent Application No. 2010-178644, filed on August 9, 2010, the entire disclosure of which is incorporated herein.
A part or all of the above embodiments can also be described as in the following supplementary notes, but are not limited thereto.
(Appendix 1)
A registration unit that receives information indicating an individual detected from environment information, which is information resulting from the environment of a space where an individual subject to individual identification processing exists, position information indicating the position of the individual, feature amounts extracted from the environment information, and attribute information indicating an attribute of each of the feature amounts, and that determines, for each feature amount, the individual from which the feature amount originated, based on the position information and the attribute information;
For each feature quantity extracted from the environment information, an effective value calculation unit for obtaining an effective value indicating the quality of the feature quantity;
Based on the determination result and the effective value, for each detected individual, a feature amount specifying unit that specifies a feature amount used for individual identification processing of the individual;
A feature quantity specifying device.
(Appendix 2)
The feature amount specifying device according to appendix 1, wherein, for each individual, the feature amount specifying unit specifies, as a feature amount used for the individual identification processing of one individual, a feature amount whose effective value is equal to or greater than a predetermined threshold among the feature amounts determined, based on the determination result, to have originated from that one individual.
(Appendix 3)
The feature amount specifying device according to appendix 1 or 2, wherein the registration unit calculates, for each feature amount extracted from the environment information and for each individual, the probability that the feature amount originated from that individual, and
the feature amount specifying unit specifies, for each detected individual, the feature amounts used for the individual identification processing of the individual based on the effective values and the probabilities.
(Appendix 4)
The feature amount specifying device according to appendix 3, wherein the feature amount specifying unit specifies a feature amount whose product of the effective value and the probability is equal to or greater than a predetermined threshold as a feature amount used for the individual identification processing of the one individual.
(Appendix 5)
The feature amount specifying device according to appendix 3, wherein the attribute information of a feature amount includes position information indicating the position where the feature amount originated, and
the registration unit calculates the probability that each feature amount originated from each individual based on the difference between the position specified from the position information of the individual and the position specified from the attribute information of the feature amount.
(Appendix 6)
The feature amount specifying device according to any one of appendixes 1 to 5, wherein the registration unit
receives information indicating the time at which an individual was detected together with the position information of the individual and associates the position information with the information indicating the time,
receives information indicating the time at which a feature amount was specified together with the attribute information of the feature amount and associates the attribute information with the information indicating the time,
generates a vector indicating the change over time in the position of a first individual based on the position information of the first individual and the time information associated with each piece of that position information,
generates a vector indicating the change over time in the position of a first feature amount based on the attribute information of the first feature amount and the time information associated with each piece of that attribute information, and
determines that the first feature amount originated from the first individual when the length of the vector specified by the difference between the vector generated for the first individual and the vector generated for the first feature amount is less than a predetermined value.
(Appendix 7)
The feature quantity specifying device includes a database unit that stores a matching feature quantity used for individual identification, attribute information of the feature quantity, and information indicating the individual in association with each other.
The feature amount specifying device according to appendix 1, wherein, when a feature amount associated with information indicating one individual is stored in the database unit, the feature amount specifying unit specifies a feature amount having attribute information corresponding to the attribute information of the stored feature amount as a feature amount used for the individual identification processing of the one individual.
(Appendix 8)
An environment information acquisition unit for acquiring environment information, which is information resulting from the environment of a space where a plurality of individuals to be subjected to individual identification processing exist;
An individual detection unit for detecting the plurality of individuals from the environment information together with position information indicating the position of each individual;
A feature amount extraction unit that extracts a plurality of feature amounts from the environment information together with attribute information indicating the type of each feature amount;
A registration unit that determines, for each feature quantity, an individual in which the feature quantity has occurred, based on the position information of each of the individuals and the attribute information of each of the feature quantities;
For each feature quantity extracted from the environment information, an effective value calculation unit for obtaining an effective value indicating the quality of the feature quantity;
Based on the determination result and the effective value, for each detected individual, a feature amount specifying unit that specifies a feature amount used for individual identification processing of the individual;
A collation unit that performs individual identification processing for each individual extracted based on the identified feature amount;
An individual identification system comprising:
(Appendix 9)
Information indicating an individual detected from environment information, which is information resulting from the environment of a space where an individual subject to individual identification processing exists, position information indicating the position of the individual, feature amounts extracted from the environment information, and attribute information indicating the attribute of each of the feature amounts are received, and, for each feature amount, the individual from which the feature amount originated is determined based on the position information and the attribute information,
For each feature quantity extracted from the environment information, an effective value indicating the quality of the feature quantity is obtained,
A feature amount specifying method for specifying, for each detected individual, a feature amount used for individual identification processing of the individual based on the result of the determination and the effective value.
(Appendix 10)
A process of receiving information indicating an individual detected from environment information, which is information resulting from the environment of a space where an individual subject to individual identification processing exists, position information indicating the position of the individual, feature amounts extracted from the environment information, and attribute information indicating an attribute of each of the feature amounts, and determining, for each feature amount, the individual from which the feature amount originated based on the position information and the attribute information;
For each feature quantity extracted from the environment information, a process for obtaining an effective value indicating the quality of the feature quantity;
A program for causing a computer to execute a process of specifying, for each detected individual, a feature amount used for the individual identification processing of the individual based on the determination result and the effective value.
(Appendix 11)
The attribute information of the feature amount includes position information indicating a position where the feature amount has occurred,
The registration unit includes:
The feature amount specifying device according to appendix 1, wherein the registration unit determines that one feature amount is a feature amount originating from one individual when the difference between the position specified based on the position information of the one individual and the position where the one feature amount originated is equal to or less than a predetermined threshold.
(Appendix 12)
The feature amount specifying device according to appendix 1, wherein the feature amount specifying unit stores association information, which is information associating the position information of individuals with the attribute information of feature amounts, and specifies a feature amount having attribute information corresponding to the association information that includes the position information of one individual as a feature amount used for the individual identification processing of the one individual.
(Appendix 13)
The feature quantity specifying device according to appendix 1, wherein:
the position information of an individual includes information indicating the direction in which the individual is moving;
the attribute information of a feature quantity includes information indicating whether its sound volume is increasing or decreasing; and
the feature quantity specifying unit specifies, when the position information of a first individual includes information indicating a first direction, a feature quantity whose attribute information includes information indicating an increasing volume as the feature quantity to be used in the individual identification processing of the first individual, and specifies, when the position information of a second individual includes information indicating a second direction opposite to the first direction, a feature quantity whose attribute information includes information indicating a decreasing volume as the feature quantity to be used in the individual identification processing of the second individual.
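A minimal sketch of this rule, under the assumption that the "first direction" is movement toward the microphone and the "second direction" movement away from it; the direction labels and record layout are illustrative:

```python
# Appendix 13 sketch: an individual moving toward the sensor is matched to
# a voice whose volume is increasing, one moving away to a decreasing one.
def match_by_volume_trend(individuals, voice_features):
    matches = {}
    for ind in individuals:
        if ind["direction"] == "approaching":   # first direction (assumed)
            wanted = "increasing"
        elif ind["direction"] == "receding":    # opposite, second direction
            wanted = "decreasing"
        else:
            continue  # no directional cue for this individual
        matches[ind["id"]] = [v for v in voice_features
                              if v["volume_trend"] == wanted]
    return matches
```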
(Appendix 14)
The feature quantity specifying device according to appendix 1, further comprising:
an environment information acquisition unit that acquires the environment information;
an individual detection unit that detects the individuals from the environment information together with attribute information indicating the type of each individual, and outputs them to the registration unit; and
a feature quantity extraction unit that extracts the feature quantities from the environment information together with attribute information indicating the type of each feature quantity, and outputs them to the registration unit.
(Appendix 15)
The individual identification system according to appendix 8, wherein:
the individual identification system comprises a database unit that stores matching feature quantities used for identifying individuals, attribute information of the feature quantities, and information indicating the individuals in association with one another;
the collation unit reads, based on a feature quantity specified by the feature quantity specifying unit, the matching feature quantity used for identifying the individual associated with that feature quantity, calculates a collation score indicating the correlation between the feature quantity specified by the feature quantity specifying unit and the feature quantity read from the database unit, and, when the collation score is equal to or greater than a predetermined threshold, stores the feature quantity specified by the feature quantity specifying unit, the information indicating the individual, and the attribute information of the feature quantity in the database unit in association with one another; and
when a feature quantity associated with information indicating one individual is stored in the database unit, the feature quantity specifying unit specifies a feature quantity having attribute information corresponding to the attribute information of that feature quantity as the feature quantity to be used in the individual identification processing of the one individual.
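The collation-and-registration behavior above might be sketched as follows, with cosine similarity standing in for the collation score and an in-memory list standing in for the database unit; both are assumptions, since the patent fixes neither choice:

```python
# Appendix 15 sketch: collate a specified feature against stored matching
# features and, on a sufficiently high score, register it with the
# individual and its attribute information for later attribute-based lookup.
import math

DATABASE = []  # assume pre-populated with enrolled entries:
               # {"individual": ..., "feature": [...], "attribute": ...}

def collation_score(a, b):
    """Correlation between two feature vectors (here: cosine similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def collate_and_register(individual_id, specified_feature, attribute, threshold=0.8):
    """Match against the individual's stored matching features; when the
    score clears the threshold, store the new feature with the individual
    and its attribute information."""
    for entry in (e for e in DATABASE if e["individual"] == individual_id):
        if collation_score(specified_feature, entry["feature"]) >= threshold:
            DATABASE.append({"individual": individual_id,
                             "feature": specified_feature,
                             "attribute": attribute})
            return True
    return False
```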
DESCRIPTION OF SYMBOLS
1 Individual identification system
2 Individual identification system
3 Individual identification system
4 Individual identification system
50 Feature quantity specifying device
1000 Sensor unit
1001 Image information acquisition unit
1002 Audio information acquisition unit
2000 Image and audio acquisition unit
4000 Sensor unit
1100 Identification processing device
3100 Identification processing device
4100 Identification processing device
1101 Individual detection unit
3101 Person detection unit
4101 Person detection unit
1102 Feature quantity extraction unit
4102 Feature quantity extraction unit
1103 Feature quantity determination unit
4103 Feature quantity determination unit
1113 Registration unit
5001 Registration unit
1123 Effective value calculation unit
5002 Effective value calculation unit
1104 Feature quantity specifying unit
4104 Feature quantity specifying unit
5003 Feature quantity specifying unit
1105 Database addition processing unit
4105 Database addition processing unit
1106 Collation unit
4106 Collation unit
1107 Database unit
4107 Database unit
2200 Composite sensor unit
2201 Shape measurement unit
2202 Weight measurement unit
2203 Heat quantity measurement unit
2204 Speed measurement unit
2205 Optical property measurement unit
2206 Odor measurement unit
2207 Material inspection unit
3200 Biometric information acquisition unit
3201 Iris pattern acquisition unit
3202 Fingerprint pattern acquisition unit
3203 Palmprint pattern acquisition unit
3204 Vein pattern acquisition unit
3205 Dentition pattern acquisition unit
3206 Auricular pattern acquisition unit
3207 Gene sequence information acquisition unit

Claims (10)

  1.  A feature quantity specifying device comprising:
a registration unit that receives information indicating individuals detected from environment information, which is information resulting from the environment of a space in which the individuals subject to individual identification processing exist, position information indicating the position of each of the individuals, feature quantities extracted from the environment information, and attribute information indicating the attributes of each of the feature quantities, and that determines, for each feature quantity, based on the position information and the attribute information, the individual from which that feature quantity originated;
an effective value calculation unit that obtains, for each feature quantity extracted from the environment information, an effective value indicating the quality of that feature quantity; and
a feature quantity specifying unit that specifies, for each detected individual, based on the result of the determination and the effective value, the feature quantity to be used in the individual identification processing of that individual.
  2.  The feature quantity specifying device according to claim 1, wherein the feature quantity specifying unit specifies, for each individual, among the feature quantities determined based on the result of the determination to have originated from one individual, a feature quantity whose effective value is equal to or greater than a predetermined threshold as the feature quantity to be used in the individual identification processing of that one individual.
  3.  The feature quantity specifying device according to claim 1 or 2, wherein:
the registration unit calculates, for each feature quantity extracted from the environment information, the probability that the feature quantity originated from each individual; and
the feature quantity specifying unit specifies, for each detected individual, based on the effective value and the probability, the feature quantity to be used in the individual identification processing of that individual.
  4.  The feature quantity specifying device according to claim 3, wherein the feature quantity specifying unit specifies a feature quantity for which the product of the effective value and the probability is equal to or greater than a predetermined threshold as the feature quantity to be used in the individual identification processing of the one individual.
  5.  The feature quantity specifying device according to claim 3 or 4, wherein:
the attribute information of a feature quantity includes position information indicating the position where the feature quantity occurred; and
the registration unit calculates the probability that each feature quantity originated from each individual based on the difference between the position specified based on the position information of each individual and the position specified based on the attribute information of each feature quantity.
  6.  The feature quantity specifying device according to any one of claims 1 to 5, wherein the registration unit:
receives, together with the position information of an individual, information indicating the time at which the individual was detected, and associates the position information with the information indicating that time;
receives, together with the attribute information of a feature quantity, information indicating the time at which the feature quantity was specified, and associates the attribute information with the information indicating that time;
generates a vector indicating the change over time of the position of a first individual, based on the position information of the first individual and the information indicating the times associated with each piece of position information;
generates a vector indicating the change over time of the position of a first feature quantity, based on the attribute information of the first feature quantity and the information indicating the times associated with each piece of attribute information; and
determines that a second feature quantity originated from a second individual when the length of the vector specified based on the difference between the vector generated for the second individual and the vector generated for the second feature quantity is less than a predetermined value.
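The comparison in claim 6 can be pictured as deriving a velocity-like vector from each track of timestamped positions and attributing the feature quantity to the individual when the difference vector is short; the track format and the first/last-sample differencing below are assumptions, shown only as a sketch:

```python
# Claim 6 sketch: compare the time change of an individual's position with
# the time change of a feature quantity's position.
import math

def motion_vector(track):
    """Velocity-like vector from a list of (time, (x, y)) observations,
    using the first and last samples of the track."""
    (t0, p0), (t1, p1) = track[0], track[-1]
    dt = (t1 - t0) or 1e-9  # guard against identical timestamps
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

def same_source(individual_track, feature_track, max_diff=0.5):
    """True when the difference of the two motion vectors is shorter
    than the predetermined value `max_diff`."""
    vi = motion_vector(individual_track)
    vf = motion_vector(feature_track)
    return math.hypot(vi[0] - vf[0], vi[1] - vf[1]) < max_diff
```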
  7.  The feature quantity specifying device according to any one of claims 1 to 6, wherein:
the feature quantity specifying device comprises a database unit that stores matching feature quantities used for identifying individuals, attribute information of the feature quantities, and information indicating the individuals in association with one another; and
when a feature quantity associated with information indicating one individual is stored in the database unit, the feature quantity specifying unit specifies a feature quantity having attribute information corresponding to the attribute information of that feature quantity as the feature quantity to be used in the individual identification processing of the one individual.
  8.  An individual identification system comprising:
an environment information acquisition unit that acquires environment information, which is information resulting from the environment of a space in which a plurality of individuals subject to individual identification processing exist;
an individual detection unit that detects the plurality of individuals from the environment information, together with position information indicating the position of each individual;
a feature quantity extraction unit that extracts a plurality of feature quantities from the environment information, together with attribute information indicating the type of each feature quantity;
a registration unit that determines, for each feature quantity, based on the position information of each of the individuals and the attribute information of each of the feature quantities, the individual from which that feature quantity originated;
an effective value calculation unit that obtains, for each feature quantity extracted from the environment information, an effective value indicating the quality of that feature quantity;
a feature quantity specifying unit that specifies, for each detected individual, based on the result of the determination and the effective value, the feature quantity to be used in the individual identification processing of that individual; and
a collation unit that performs, based on the specified feature quantities, individual identification processing for each of the detected individuals.
  9.  A feature quantity specifying method comprising:
receiving information indicating individuals detected from environment information, which is information resulting from the environment of a space in which the individuals subject to individual identification processing exist, position information indicating the position of each of the individuals, feature quantities extracted from the environment information, and attribute information indicating the attributes of each of the feature quantities, and determining, for each feature quantity, based on the position information and the attribute information, the individual from which that feature quantity originated;
obtaining, for each feature quantity extracted from the environment information, an effective value indicating the quality of that feature quantity; and
specifying, for each detected individual, based on the result of the determination and the effective value, the feature quantity to be used in the individual identification processing of that individual.
  10.  A recording medium recording a program for causing a computer to execute:
a process of receiving information indicating individuals detected from environment information, which is information resulting from the environment of a space in which the individuals subject to individual identification processing exist, position information indicating the position of each of the individuals, feature quantities extracted from the environment information, and attribute information indicating the attributes of each of the feature quantities, and determining, for each feature quantity, based on the position information and the attribute information, the individual from which that feature quantity originated;
a process of obtaining, for each feature quantity extracted from the environment information, an effective value indicating the quality of that feature quantity; and
a process of specifying, for each detected individual, based on the result of the determination and the effective value, the feature quantity to be used in the individual identification processing of that individual.
PCT/JP2011/062313 2010-08-09 2011-05-24 System for identifying individuals, feature value specification device, feature specification method, and recording medium WO2012020591A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012528607A JPWO2012020591A1 (en) 2010-08-09 2011-05-24 Individual identification system, feature amount specifying device, feature amount specifying method and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-178644 2010-08-09
JP2010178644 2010-08-09

Publications (1)

Publication Number Publication Date
WO2012020591A1 true WO2012020591A1 (en) 2012-02-16

Family

ID=45567561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/062313 WO2012020591A1 (en) 2010-08-09 2011-05-24 System for identifying individuals, feature value specification device, feature specification method, and recording medium

Country Status (2)

Country Link
JP (1) JPWO2012020591A1 (en)
WO (1) WO2012020591A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014092904A (en) * 2012-11-02 2014-05-19 Nissan Motor Co Ltd Method and apparatus for identifying workpiece
JP2015529365A (en) * 2012-09-05 2015-10-05 エレメント,インク. System and method for biometric authentication associated with a camera-equipped device
CN105264542A (en) * 2013-02-06 2016-01-20 索纳维森股份有限公司 Biometric sensing device for three dimensional imaging of subcutaneous structures embedded within finger tissue
US9913135B2 (en) 2014-05-13 2018-03-06 Element, Inc. System and method for electronic key provisioning and access management in connection with mobile devices
US9965728B2 (en) 2014-06-03 2018-05-08 Element, Inc. Attendance authentication and management in connection with mobile devices
JP2019175081A (en) * 2018-03-28 2019-10-10 株式会社日立パワーソリューションズ Movement course identification system and method
US10735959B2 (en) 2017-09-18 2020-08-04 Element Inc. Methods, systems, and media for detecting spoofing in mobile authentication
JP2021092809A (en) * 2021-02-26 2021-06-17 日本電気株式会社 Voice processing device, voice processing method and voice processing program
US11250860B2 (en) 2017-03-07 2022-02-15 Nec Corporation Speaker recognition based on signal segments weighted by quality
US11343277B2 (en) 2019-03-12 2022-05-24 Element Inc. Methods and systems for detecting spoofing of facial recognition in connection with mobile devices
US11507248B2 (en) 2019-12-16 2022-11-22 Element Inc. Methods, systems, and media for anti-spoofing using eye-tracking

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0484277A (en) * 1990-07-26 1992-03-17 Nec Corp Method and device for feature quantity selection and method and device for high-speed discrimination
JP2006293644A (en) * 2005-04-08 2006-10-26 Canon Inc Information processing device and information processing method
JP2009194857A (en) * 2008-02-18 2009-08-27 Sharp Corp Communication conference system, communication apparatus, communication conference method, and computer program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0484277A (en) * 1990-07-26 1992-03-17 Nec Corp Method and device for feature quantity selection and method and device for high-speed discrimination
JP2006293644A (en) * 2005-04-08 2006-10-26 Canon Inc Information processing device and information processing method
JP2009194857A (en) * 2008-02-18 2009-08-27 Sharp Corp Communication conference system, communication apparatus, communication conference method, and computer program

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015529365A (en) * 2012-09-05 2015-10-05 エレメント,インク. System and method for biometric authentication associated with a camera-equipped device
US10135815B2 (en) 2012-09-05 2018-11-20 Element, Inc. System and method for biometric authentication in connection with camera equipped devices
JP2018200716A (en) * 2012-09-05 2018-12-20 エレメント,インク. System and method for biometric authentication in connection with camera-equipped devices
US10728242B2 (en) 2012-09-05 2020-07-28 Element Inc. System and method for biometric authentication in connection with camera-equipped devices
JP2014092904A (en) * 2012-11-02 2014-05-19 Nissan Motor Co Ltd Method and apparatus for identifying workpiece
US10621404B2 (en) 2013-02-06 2020-04-14 Sonavation, Inc. Biometric sensing device for three dimensional imaging of subcutaneous structures embedded within finger tissue
CN105264542A (en) * 2013-02-06 2016-01-20 索纳维森股份有限公司 Biometric sensing device for three dimensional imaging of subcutaneous structures embedded within finger tissue
JP2016513983A (en) * 2013-02-06 2016-05-19 ソナベーション, インコーポレイテッド Biometric sensing device for 3D imaging of subcutaneous structures embedded in finger tissue
US10528785B2 (en) 2013-02-06 2020-01-07 Sonavation, Inc. Method and system for beam control in biometric sensing
US9913135B2 (en) 2014-05-13 2018-03-06 Element, Inc. System and method for electronic key provisioning and access management in connection with mobile devices
US9965728B2 (en) 2014-06-03 2018-05-08 Element, Inc. Attendance authentication and management in connection with mobile devices
US11250860B2 (en) 2017-03-07 2022-02-15 Nec Corporation Speaker recognition based on signal segments weighted by quality
US11837236B2 (en) 2017-03-07 2023-12-05 Nec Corporation Speaker recognition based on signal segments weighted by quality
US10735959B2 (en) 2017-09-18 2020-08-04 Element Inc. Methods, systems, and media for detecting spoofing in mobile authentication
US11425562B2 (en) 2017-09-18 2022-08-23 Element Inc. Methods, systems, and media for detecting spoofing in mobile authentication
JP2019175081A (en) * 2018-03-28 2019-10-10 株式会社日立パワーソリューションズ Movement course identification system and method
US11343277B2 (en) 2019-03-12 2022-05-24 Element Inc. Methods and systems for detecting spoofing of facial recognition in connection with mobile devices
US11507248B2 (en) 2019-12-16 2022-11-22 Element Inc. Methods, systems, and media for anti-spoofing using eye-tracking
JP2021092809A (en) * 2021-02-26 2021-06-17 日本電気株式会社 Voice processing device, voice processing method and voice processing program
JP7216348B2 (en) 2021-02-26 2023-02-01 日本電気株式会社 Speech processing device, speech processing method, and speech processing program

Also Published As

Publication number Publication date
JPWO2012020591A1 (en) 2013-10-28

Similar Documents

Publication Publication Date Title
WO2012020591A1 (en) System for identifying individuals, feature value specification device, feature specification method, and recording medium
Oloyede et al. Unimodal and multimodal biometric sensing systems: a review
Jain et al. Integrating faces, fingerprints, and soft biometric traits for user recognition
KR101189765B1 (en) Method and apparatus for classification sex-gender based on voice and video
US7881524B2 (en) Information processing apparatus and information processing method
WO2017198014A1 (en) Identity authentication method and apparatus
Kataria et al. A survey of automated biometric authentication techniques
Gafurov et al. Gait recognition using acceleration from MEMS
Nandakumar Integration of multiple cues in biometric systems
KR100940902B1 (en) The biometrics using finger geometry information
AlMahafzah et al. A survey of multibiometric systems
TW201201115A (en) Facial expression recognition systems and methods and computer program products thereof
Majekodunmi et al. A review of the fingerprint, speaker recognition, face recognition and iris recognition based biometric identification technologies
WO2009131209A1 (en) Image matching device, image matching method, and image matching program
Soltane et al. Multi-modal biometric authentications: concept issues and applications strategies
Derawi Smartphones and biometrics: Gait and activity recognition
US20070253598A1 (en) Image monitoring apparatus
JP2021015443A (en) Complement program and complement method and complementary device
Poh et al. A methodology for separating sheep from goats for controlled enrollment and multimodal fusion
Senarath et al. BehaveFormer: A Framework with Spatio-Temporal Dual Attention Transformers for IMU-enhanced Keystroke Dynamics
KR20110100008A (en) User recognition apparatus and method using age and gender as semi biometrics
KR101208678B1 (en) Incremental personal autentication system and method using multi bio-data
Bigun et al. Combining biometric evidence for person authentication
Vasavi et al. Novel Multimodal Biometric Feature Extraction for Precise Human Identification.
JP2002208011A (en) Image collation processing system and its method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11816257

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2012528607

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11816257

Country of ref document: EP

Kind code of ref document: A1