WO2018155594A1 - Information processing device, information processing method, and computer-readable recording medium - Google Patents

Information processing device, information processing method, and computer-readable recording medium Download PDF

Info

Publication number
WO2018155594A1
WO2018155594A1 (PCT/JP2018/006585)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
person
information
image
match
Prior art date
Application number
PCT/JP2018/006585
Other languages
French (fr)
Japanese (ja)
Inventor
千聡 大川
良彰 廣谷
Original Assignee
日本電気株式会社 (NEC Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 (NEC Corporation)
Publication of WO2018155594A1 publication Critical patent/WO2018155594A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems

Definitions

  • The present invention relates to an information processing apparatus and an information processing method for automatically identifying a suspicious person or the like based on feature information other than a face image of that person, and further to a computer-readable recording medium on which a program for realizing these is recorded.
  • Conventionally, in a surveillance system using surveillance cameras, a supervisor must identify a suspicious person or the like visually from the video captured by the cameras.
  • Visual identification, however, is labor-intensive, and it is difficult to cope with cases where multiple persons appear in the video.
  • Patent Document 1 discloses a monitoring device having a function of automatically identifying a person. Specifically, the monitoring device disclosed in Patent Document 1 extracts a person's face image from the video sent from the surveillance camera and automatically identifies a suspicious person or the like by matching the extracted face image against the face images of persons registered in a database.
  • Because the face images of persons registered in the database are usually frontal face images, matching accuracy drops when the face image extracted from the surveillance video is not a frontal face image.
  • In that case, the monitoring device disclosed in Patent Document 1 displays both the registered face image and the extracted face image on the screen to support the monitor's judgment.
  • An example object of the present invention is to provide an information processing apparatus, an information processing method, and a program that solve the above problem and can automatically identify a suspicious person or the like based on feature information other than a face image of that person.
  • To achieve the above object, an information processing apparatus according to one aspect of the present invention includes: an input reception unit that receives input of feature information, other than a face image, indicating features of a person to be identified; a first determination unit that determines, for each feature of the person included in the feature information, the degree of coincidence between the feature and the feature of the person shown in an image; and a second determination unit that determines, based on the determination results for the features of the person, whether the feature information and the person shown in the image match.
  • Likewise, an information processing method according to one aspect of the present invention includes: (a) receiving input of feature information, other than a face image, indicating features of a person to be identified; (b) determining, for each feature of the person included in the feature information, the degree of coincidence between the feature and the feature of the person shown in an image; and (c) determining, based on the determination results for the features of the person, whether the feature information and the person shown in the image match.
  • Further, a computer-readable recording medium according to one aspect of the present invention records a program including instructions for causing a computer to execute: (a) receiving input of feature information, other than a face image, indicating features of a person to be identified; (b) determining, for each feature of the person included in the feature information, the degree of coincidence between the feature and the feature of the person shown in an image; and (c) determining, based on the determination results for the features of the person, whether the feature information and the person shown in the image match.
  • According to the present invention, a suspicious person or the like can be automatically identified based on feature information other than a face image of that person.
  • FIG. 1 is a block diagram showing a schematic configuration of an information processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a specific configuration of the information processing apparatus according to the embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an example of feature information used in the embodiment of the present invention and processing of the first determination unit and the second determination unit.
  • FIG. 4 is a flowchart showing the operation of the information processing apparatus according to the embodiment of the present invention.
  • FIG. 5 is a diagram for explaining processing of the first determination unit and the second determination unit in a modification of the embodiment of the present invention.
  • FIG. 6 is a diagram illustrating an example of a person detection system configured by the detection device according to the present embodiment.
  • FIG. 7 is a block diagram illustrating an example of a computer that implements the information processing apparatus according to the embodiment of the present invention.
  • FIG. 1 is a block diagram showing a schematic configuration of an information processing apparatus according to an embodiment of the present invention.
  • The information processing apparatus 10 is an apparatus for performing information processing for detecting a specific person. As illustrated in FIG. 1, the information processing apparatus 10 includes an input reception unit 11, a first determination unit 12, and a second determination unit 13.
  • the input reception unit 11 receives input of feature information other than a face image indicating the characteristics of a person to be specified.
  • the first determination unit 12 determines the degree of coincidence between each feature and the feature of the person shown in the image for each feature of the person included in the feature information.
  • the second determination unit 13 determines whether the feature information matches the person shown in the image based on the determination result for each feature of the person.
  • In this way, the degree of coincidence between each non-face feature of the person to be identified, such as age, physique, and sex, and the corresponding feature of the person shown in the image is determined, and based on the determination results it is finally determined whether the person in the image is the person to be identified.
  • FIG. 2 is a block diagram showing a specific configuration of the information processing apparatus according to the embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an example of feature information used in the embodiment of the present invention and processing of the first determination unit and the second determination unit.
  • the information processing apparatus 10 and the imaging apparatus 20 constitute a detection apparatus 100 that detects a person to be identified.
  • The imaging device 20 outputs image data of images captured at set intervals.
  • a specific example of the imaging device 20 is a surveillance camera.
  • the information processing apparatus 10 includes an interpretation unit 14 and an output unit 15 in addition to the input reception unit 11, the first determination unit 12, and the second determination unit 13.
  • Examples of the feature information include, as shown in FIG. 3, text information expressing features of the person to be identified, such as age, sex, physique, behavior, and objects, as text.
  • the object here includes an object worn by a person and an object carried by the person.
  • In the example of FIG. 3, the feature information is text information reading "20s or 30s, male, medium build, running, not wearing a hat, carrying a knife".
  • The input reception unit 11 receives feature information including the input features.
  • When the administrator specifies only some of the preset features of the person, the input reception unit 11 can also receive input of feature information that includes only the specified features.
  • the input reception unit 11 can display various icons representing features on a screen of a display device or a screen of a terminal device or the like connected to the detection device 100. In this case, when the administrator specifies several icons, the input receiving unit 11 receives an identifier or text associated with the specified icon as feature information.
  • The interpretation unit 14 extracts the features included in the feature information and passes the extracted features to the first determination unit 12.
  • In the example of FIG. 3, the interpretation unit 14 extracts, as features, "20s or 30s", "male", "medium build", "running", "not wearing a hat", and "carrying a knife" from the feature information, and notifies the first determination unit 12 of the extracted features.
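A minimal sketch of this extraction step, assuming the feature information arrives as the comma-separated text shown above (the function name and the splitting rule are illustrative assumptions, not from the patent):

```python
# Hypothetical sketch of the interpretation unit's extraction step:
# split comma-separated feature text into individual features.
def extract_features(feature_text):
    """Split feature-information text into a list of individual features."""
    return [part.strip() for part in feature_text.split(",") if part.strip()]

features = extract_features(
    "20s or 30s, male, medium build, running, not wearing a hat, carrying a knife"
)
```

Each extracted feature would then be routed to the corresponding feature discriminator.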
  • the first determination unit 12 includes a plurality of feature classifiers 16 for each feature.
  • each feature discriminator 16 corresponds to any one of age, gender, physique, action, and object.
  • each feature classifier 16 is represented as “age classifier”, “gender classifier”, “physique classifier”, “behavior classifier”, and “object classifier”.
  • The types of feature discriminators 16 are not limited to those shown in FIG. 3. Further, when features are notified from the interpretation unit 14, the first determination unit 12 can also select the necessary feature discriminators 16, according to the notified features, from among the feature discriminators 16 held in advance.
  • Each feature discriminator 16 determines whether its corresponding feature matches or does not match the feature of the person in the image specified by the image data from the imaging device 20. Alternatively, each feature discriminator 16 may calculate the probability that the corresponding feature matches the feature of the person in the image and determine a match when the calculated probability is equal to or greater than a threshold. Specifically, each feature discriminator 16 performs its determination as follows.
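The probability-with-threshold variant described above can be sketched as follows; the probabilities are invented stand-ins for the outputs of learned discriminators:

```python
# Illustrative only: each value is a discriminator's probability that its
# feature matches the person in the image; "match" means the probability
# reaches the threshold.
def per_feature_matches(probabilities, threshold=0.5):
    """Map each feature's match probability to a match/mismatch decision."""
    return {name: p >= threshold for name, p in probabilities.items()}

probs = {"age": 0.9, "gender": 0.8, "physique": 0.4}  # invented example values
matches = per_feature_matches(probs)
```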
  • The age discriminator determines the age of the person in the image using deep learning, a form of machine learning. Specifically, the age discriminator trains a convolutional neural network having convolutional layers and fully connected layers in advance. The age discriminator then extracts feature amounts from the image with the convolutional layers of the trained network, applies age-related weights to the extracted feature amounts in the fully connected layers, and determines the age from the resulting values.
  • The gender discriminator, as one example, likewise determines the gender of the person in the image using deep learning. Specifically, the gender discriminator also extracts feature amounts from the image with the convolutional layers of a pre-trained convolutional neural network; its fully connected layers, however, apply gender-related weights to the extracted feature amounts, and the gender is determined from the resulting values.
  • The object discriminator, also using deep learning as an example, determines what the person is wearing and what the person is carrying. Specifically, the object discriminator also trains a convolutional neural network having convolutional layers and fully connected layers in advance, extracts feature amounts from the image with the convolutional layers of the trained network, applies object-related weights to the extracted feature amounts in the fully connected layers, and determines the type of object shown in the image from the resulting values.
  • The behavior discriminator, also using deep learning as an example, determines the behavior of the person in the image. Unlike the discriminators described above, however, the behavior discriminator first detects the person shown in the image and the person's surroundings using a pre-trained region-based convolutional neural network (R-CNN). The behavior discriminator then determines, using a pre-trained recurrent neural network, whether the detected person and surroundings correspond to the textual expression of behavior in the feature information.
  • The physique discriminator extracts the widths of the person's head, neck, shoulders, belly, legs, and so on from the image, compares them with preset body-shape patterns (slim build, medium build, and the like), and determines the person's body type from the comparison result.
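A minimal sketch of the physique comparison, assuming the measured widths and the preset body-shape patterns are simple numeric vectors (all names and values here are invented for illustration):

```python
# Hypothetical sketch: pick the preset body-shape pattern closest to the
# widths measured from the image (values are arbitrary illustration units).
def classify_physique(widths, patterns):
    """Return the name of the preset pattern nearest to the measured widths."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(patterns, key=lambda name: distance(widths, patterns[name]))

patterns = {              # head, neck, shoulder, belly widths
    "slim":   [14, 10, 34, 26],
    "medium": [16, 12, 42, 36],
    "heavy":  [18, 14, 48, 48],
}
body_type = classify_physique([15, 12, 41, 35], patterns)
```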
  • The second determination unit 13 determines that the feature information matches the person in the image when, in the per-feature determinations, the number of results determined to match is equal to or greater than a predetermined number, or the ratio of results determined to match is equal to or greater than a predetermined ratio.
  • In the example of FIG. 3, the second determination unit 13 combines the discriminators' results and determines that the feature information matches the person shown in the image only when the age discriminator, gender discriminator, physique discriminator, behavior discriminator, and object discriminator all determine a match.
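The aggregation rules attributed to the second determination unit can be sketched as one function (the thresholds are illustrative; the default case mirrors the FIG. 3 example, where every discriminator must report a match):

```python
# Sketch of the second determination unit: aggregate per-feature match
# booleans into one final decision. min_count / min_ratio are the relaxed
# variants; with neither given, all features must match.
def overall_match(results, min_count=None, min_ratio=None):
    """Return True if the feature information matches the person overall."""
    matched = sum(results)
    if min_count is not None:
        return matched >= min_count
    if min_ratio is not None:
        return matched / len(results) >= min_ratio
    return matched == len(results)  # default: every discriminator matched
```

For example, `overall_match([True, True, False], min_count=2)` is true under the count rule but false under the all-must-match default.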
  • the output unit 15 outputs a determination result. Examples of the output destination include a display device of the detection device 100, a terminal device of an administrator of the detection device 100, and the like.
  • FIG. 4 is a flowchart showing the operation of the information processing apparatus according to the embodiment of the present invention.
  • FIGS. 1 to 3 are referred to as appropriate.
  • In the present embodiment, the information processing method is carried out by operating the information processing apparatus 10. The following description of the operation of the information processing apparatus 10 therefore also serves as the description of the information processing method in the present embodiment.
  • First, the input reception unit 11 receives feature information including the input features (step A1). The input reception unit 11 passes the received feature information to the interpretation unit 14.
  • the interpretation unit 14 extracts each feature included in the passed feature information, and passes the extracted feature to the first determination unit 12 (step A2).
  • Next, the first determination unit 12 acquires image data from the imaging device 20 and, for each feature passed in step A2, determines the degree of coincidence between the feature and the corresponding feature of the person shown in the image of the image data (step A3). Specifically, the first determination unit 12 selects the feature discriminator 16 corresponding to each feature and causes each selected feature discriminator 16 to determine whether its feature matches the feature of the person in the image.
  • Next, the second determination unit 13 determines whether the feature information matches the person shown in the image based on the per-feature determination results (step A4). Specifically, the second determination unit 13 combines the discriminators' results and determines that the feature information matches the person shown in the image when the age discriminator, gender discriminator, physique discriminator, behavior discriminator, and object discriminator all determine a match (see FIG. 3).
  • the output unit 15 outputs the determination result in step A4 to the display device of the detection device 100, the terminal device of the administrator of the detection device 100, and the like (step A5).
  • the output unit 15 may output a warning. Steps A3 to A5 are repeatedly executed every time image data is output from the imaging device 20.
  • As described above, in the present embodiment a person whose features match the feature information is automatically identified from the video captured by the imaging device 20. Therefore, if the features of a suspicious person, a wanted criminal, or the like are input as feature information based on sighting information, such persons can be automatically identified even when no face image of them is available.
  • In the embodiment described above, a person is identified using only feature information, but the present embodiment may also be an embodiment in which a person is identified using both feature information and a face image.
  • the program in the present embodiment may be a program that causes a computer to execute steps A1 to A5 shown in FIG.
  • the information processing apparatus 10 and the information processing method in the present embodiment can be realized by installing and executing this program on a computer.
  • the processor of the computer functions as the input reception unit 11, the first determination unit 12, the second determination unit 13, the interpretation unit 14, and the output unit 15, and performs processing.
  • each computer may function as any one of the input reception unit 11, the first determination unit 12, the second determination unit 13, the interpretation unit 14, and the output unit 15, respectively.
  • FIG. 5 is a diagram for explaining processing of the first determination unit and the second determination unit in a modification of the embodiment of the present invention.
  • each feature discriminator 16 calculates the probability that the corresponding feature matches the feature of the person in the image.
  • the first determination unit 12 outputs the probability calculated by each feature classifier 16 to the second determination unit 13.
  • The second determination unit 13 obtains the average value of the probabilities calculated for the features of the person and, when the obtained average value exceeds a threshold, determines that the feature information matches the person shown in the image.
  • When calculating the average value, the second determination unit 13 can multiply the probability of each feature by a weighting factor.
  • the value of the weighting factor is appropriately set by the administrator of the detection apparatus 100.
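Under the assumption that the probabilities and weighting factors are plain lists, the weighted-average determination of this modification might look like the following (the threshold and weight values are illustrative; the patent leaves them to the administrator):

```python
# Sketch of the modified second determination unit: compare a weighted
# average of per-feature match probabilities against a threshold.
def weighted_match(probabilities, weights, threshold=0.5):
    """Return True if the weighted average probability exceeds the threshold."""
    avg = sum(p * w for p, w in zip(probabilities, weights)) / sum(weights)
    return avg > threshold

# Equal weights: the average of 0.9, 0.8, and 0.2 is about 0.63 -> match.
result = weighted_match([0.9, 0.8, 0.2], [1, 1, 1])
```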
  • FIG. 6 is a diagram illustrating an example of a person detection system configured by the detection device according to the present embodiment.
  • the person detection system 400 includes a plurality of detection devices 100 and a management server 200, which are connected via the Internet 300.
  • an administrator terminal device 210 is connected to the management server 200.
  • the detection devices 100 are installed in different areas.
  • For example, when a witness gives testimony such as "a man in his 20s or 30s, of medium build, not wearing a hat, running with a knife", the administrator inputs feature information based on this testimony on the terminal device 210. The terminal device 210 then transmits the feature information to the management server 200.
  • When the management server 200 receives the feature information, it converts the received feature information into the logically interpretable format shown in FIG. 3 and transmits the converted feature information to each detection device 100. In addition, when the area where the sighting information was obtained is known, the management server 200 can transmit the feature information only to the detection devices 100 in that area and the adjacent areas.
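The area-restricted delivery can be sketched as a small routing helper, assuming each detection device is registered under a named area and area adjacency is known (area names and the adjacency map are invented for illustration):

```python
# Hypothetical sketch of the management server's routing rule: when the
# sighting area is known, send feature information only to detectors in
# that area and its adjacent areas; otherwise broadcast to all areas.
def select_areas(all_areas, adjacency, sighting_area=None):
    """Return the set of areas whose detection devices should be notified."""
    if sighting_area is None:
        return set(all_areas)  # sighting area unknown: notify every area
    return {sighting_area} | set(adjacency.get(sighting_area, []))

adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
targets = select_areas(["A", "B", "C"], adjacency, sighting_area="B")
```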
  • When each detection device 100 receives the feature information transmitted from the management server 200, the input reception unit 11 accepts the input of the feature information (step A1), and steps A2 to A5 are then executed. Steps A3 to A5 are executed repeatedly every time image data is output from the imaging device 20.
  • The person detection system 400 can be applied not only to the detection of suspicious persons and wanted criminals but also to the detection of lost children.
  • FIG. 7 is a block diagram illustrating an example of a computer that implements the information processing apparatus according to the embodiment of the present invention.
  • The computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These units are connected to each other via a bus 121 so that data communication is possible.
  • The computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to or instead of the CPU 111.
  • The CPU 111 performs various operations by loading the program (code) of the present embodiment stored in the storage device 113 into the main memory 112 and executing its instructions in a predetermined order.
  • the main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
  • the program in the present embodiment is provided in a state of being stored in a computer-readable recording medium 120. Note that the program in the present embodiment may be distributed on the Internet connected via the communication interface 117.
  • the storage device 113 includes a hard disk drive and a semiconductor storage device such as a flash memory.
  • the input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and a mouse.
  • the display controller 115 is connected to the display device 119 and controls display on the display device 119.
  • the data reader / writer 116 mediates data transmission between the CPU 111 and the recording medium 120, and reads a program from the recording medium 120 and writes a processing result in the computer 110 to the recording medium 120.
  • the communication interface 117 mediates data transmission between the CPU 111 and another computer.
  • Specific examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (CompactFlash (registered trademark)) and SD (Secure Digital) cards, magnetic recording media such as a flexible disk, and optical recording media such as a CD-ROM (Compact Disk Read Only Memory).
  • the information processing apparatus 10 in the present embodiment can be realized by using hardware corresponding to each unit, not a computer in which a program is installed. Furthermore, a part of the information processing apparatus 10 may be realized by a program, and the remaining part may be realized by hardware.
  • (Appendix 1) An information processing apparatus comprising: an input reception unit that receives input of feature information, other than a face image, indicating features of a person to be identified; a first determination unit that determines, for each feature of the person included in the feature information, the degree of coincidence between the feature and the feature of the person shown in an image; and a second determination unit that determines, based on the determination results for the features of the person, whether the feature information and the person shown in the image match.
  • The input reception unit receives, as the feature information, text information in which the features of the person are expressed as text.
  • The information processing apparatus according to appendix 1 or 2.
  • The input reception unit receives input of feature information including designated features among the features of the person.
  • The information processing apparatus according to appendix 1 or 2.
  • The first determination unit determines, for each feature of the person included in the feature information, whether the feature and the feature of the person shown in the image match or do not match, and the second determination unit determines that the feature information and the person shown in the image match when, in the per-feature determinations, the number of determination results determined to match is equal to or greater than a predetermined number, or the ratio of determination results determined to match is equal to or greater than a predetermined ratio.
  • The information processing apparatus according to any one of appendices 1 to 4.
  • The first determination unit calculates the probability that the feature and the feature of the person in the image match.
  • The second determination unit obtains the average value of the probabilities calculated for the features of the person and, when the obtained average value exceeds a threshold, determines that the feature information and the person shown in the image match. The information processing apparatus according to any one of appendices 1 to 4.
  • (Appendix 7) An information processing method comprising: (a) receiving input of feature information, other than a face image, indicating features of a person to be identified; (b) determining, for each feature of the person included in the feature information, the degree of coincidence between the feature and the feature of the person shown in an image; and (c) determining, based on the determination results for the features of the person, whether the feature information and the person shown in the image match.
  • In step (a), text information in which the features of the person are expressed as text is received as the feature information.
  • In step (a), input of feature information including designated features among the features of the person is accepted.
  • In step (b), it is determined, for each feature of the person included in the feature information, whether the feature and the feature of the person shown in the image match or do not match; and in step (c), it is determined that the feature information matches the person shown in the image when, in the per-feature determinations, the number of determination results determined to match is equal to or greater than a predetermined number, or the ratio of determination results determined to match is equal to or greater than a predetermined ratio.
  • The information processing method according to any one of appendices 7 to 10.
  • In step (b), for each feature of the person included in the feature information, the probability that the feature and the feature of the person shown in the image match is calculated; and in step (c), the average value of the probabilities calculated for the features of the person is obtained, and when the obtained average value exceeds a threshold, it is determined that the feature information and the person shown in the image match. The information processing method according to any one of appendices 7 to 10.
  • (Supplementary note 13) A computer-readable recording medium recording a program including instructions for causing a computer to execute: (a) receiving input of feature information, other than a face image, indicating features of a person to be identified; (b) determining, for each feature of the person included in the feature information, the degree of coincidence between the feature and the feature of the person shown in an image; and (c) determining, based on the determination results for the features of the person, whether the feature information and the person shown in the image match.
  • In step (a), text information in which the features of the person are expressed as text is received as the feature information.
  • In step (a), input of feature information including designated features among the features of the person is accepted.
  • The computer-readable recording medium according to appendix 13 or 14.
  • In step (b), it is determined, for each feature of the person included in the feature information, whether the feature and the feature of the person shown in the image match or do not match; and in step (c), it is determined that the feature information matches the person shown in the image when, in the per-feature determinations, the number of determination results determined to match is equal to or greater than a predetermined number, or the ratio of determination results determined to match is equal to or greater than a predetermined ratio.
  • The computer-readable recording medium according to any one of appendices 13 to 16.
  • In step (b), for each feature of the person included in the feature information, the probability that the feature and the feature of the person shown in the image match is calculated; and in step (c), the average value of the probabilities calculated for the features of the person is obtained, and when the obtained average value exceeds a threshold, it is determined that the feature information and the person shown in the image match. The computer-readable recording medium according to any one of appendices 13 to 16.
  • According to the present invention, a suspicious person or the like can be automatically identified based on feature information other than a face image of that person.
  • INDUSTRIAL APPLICABILITY: The present invention is useful for systems that detect suspicious persons and wanted criminals, and for systems that search for lost children.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Alarm Systems (AREA)

Abstract

An information processing device 10 equipped with: an input reception unit 11 that accepts input of feature information, other than a facial image, indicating features of a person to be identified; a first determination unit 12 that determines, for each feature of the person included in the feature information, a degree of matching between that feature and a feature of a person appearing in an image; and a second determination unit 13 that determines, on the basis of the determination results for each of the features of the person, whether the feature information and the person appearing in the image match.

Description

Information processing apparatus, information processing method, and computer-readable recording medium
The present invention relates to an information processing apparatus and an information processing method for automatically identifying a suspicious person or the like based on feature information other than a face image of the suspicious person, and further relates to a computer-readable recording medium storing a program for realizing these.
Conventionally, in a surveillance system using surveillance cameras, an observer must visually identify suspicious persons from the video captured by the cameras. Visual identification, however, requires considerable labor and is difficult when multiple persons appear in the video.
In response, Patent Document 1 discloses a monitoring device with a function for automatically identifying persons. Specifically, the monitoring device disclosed in Patent Document 1 extracts a person's face image from video sent from a surveillance camera, and automatically identifies suspicious persons by matching the extracted face image against face images of persons registered in a database.
Since the face images of persons registered in the database are usually frontal face images, matching accuracy drops when the face image extracted from the surveillance video is not a frontal view. In that case, the monitoring device disclosed in Patent Document 1 displays both the registered face image and the extracted face image on a screen to support the observer's judgment.
JP 2010-231402 A
However, the monitoring device disclosed in Patent Document 1 identifies suspicious persons based on face images registered in advance in a database, both when identifying a person automatically and when supporting an observer's judgment. Consequently, when a suspicious person must be identified from feature information other than a face image, for example "male, in his thirties, of medium build", the observer still has to perform the identification visually.
An example object of the present invention is to provide an information processing apparatus, an information processing method, and a program that solve the above problem and can automatically identify a suspicious person or the like based on feature information other than a face image of that person.
To achieve the above object, a monitoring device according to one aspect of the present invention includes:
an input reception unit that receives input of feature information, other than a face image, indicating features of a person to be identified;
a first determination unit that determines, for each feature of the person included in the feature information, the degree of match between that feature and a feature of a person shown in an image; and
a second determination unit that determines, based on the determination results for the individual features of the person, whether the feature information and the person shown in the image match.
To achieve the above object, a monitoring method according to one aspect of the present invention includes:
(a) a step of receiving input of feature information, other than a face image, indicating features of a person to be identified;
(b) a step of determining, for each feature of the person included in the feature information, the degree of match between that feature and a feature of a person shown in an image; and
(c) a step of determining, based on the determination results for the individual features of the person, whether the feature information and the person shown in the image match.
Furthermore, to achieve the above object, a computer-readable recording medium according to one aspect of the present invention records a program including instructions that cause a computer to execute:
(a) a step of receiving input of feature information, other than a face image, indicating features of a person to be identified;
(b) a step of determining, for each feature of the person included in the feature information, the degree of match between that feature and a feature of a person shown in an image; and
(c) a step of determining, based on the determination results for the individual features of the person, whether the feature information and the person shown in the image match.
As described above, according to the present invention, a suspicious person or the like can be automatically identified based on feature information other than a face image of that person.
FIG. 1 is a block diagram showing a schematic configuration of an information processing apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram showing a specific configuration of the information processing apparatus according to the embodiment of the present invention.
FIG. 3 is a diagram illustrating an example of the feature information used in the embodiment of the present invention and the processing of the first and second determination units.
FIG. 4 is a flowchart showing the operation of the information processing apparatus according to the embodiment of the present invention.
FIG. 5 is a diagram illustrating the processing of the first and second determination units in a modification of the embodiment of the present invention.
FIG. 6 is a diagram showing an example of a person detection system configured using the detection device according to the present embodiment.
FIG. 7 is a block diagram showing an example of a computer that realizes the information processing apparatus according to the embodiment of the present invention.
(Embodiment)
Hereinafter, an information processing apparatus, an information processing method, and a program according to an embodiment of the present invention will be described with reference to FIGS. 1 to 7.
[Device configuration]
First, the schematic configuration of the information processing apparatus according to the present embodiment will be described. FIG. 1 is a block diagram showing this schematic configuration.
The information processing apparatus 10 according to the present embodiment, shown in FIG. 1, is an apparatus for performing the information processing involved in detecting a specific person. As shown in FIG. 1, the information processing apparatus 10 includes an input reception unit 11, a first determination unit 12, and a second determination unit 13.
The input reception unit 11 receives input of feature information, other than a face image, indicating features of a person to be identified. The first determination unit 12 determines, for each feature of the person included in the feature information, the degree of match between that feature and a feature of a person shown in an image. The second determination unit 13 determines, based on the determination results for the individual features, whether the feature information and the person shown in the image match.
Thus, in the present embodiment, the degree of match between non-facial features of the target person, such as age, build, and gender, and the features of a person shown in an image is determined, and based on these results it is finally determined whether the person in the image is the person to be identified. Therefore, according to the present embodiment, a suspicious person or the like can be automatically identified based on feature information other than a face image.
Next, the configuration of the information processing apparatus according to the present embodiment will be described in detail with reference to FIGS. 2 and 3. FIG. 2 is a block diagram showing the specific configuration of the apparatus, and FIG. 3 illustrates an example of the feature information used in the present embodiment together with the processing of the first and second determination units.
As shown in FIG. 2, in the present embodiment the information processing apparatus 10, together with an imaging device 20, constitutes a detection device 100 that detects a person to be identified. The imaging device 20 outputs image data of captured images at set intervals. A typical example of the imaging device 20 is a surveillance camera.
As shown in FIG. 2, the information processing apparatus 10 includes an interpretation unit 14 and an output unit 15 in addition to the input reception unit 11, the first determination unit 12, and the second determination unit 13.
In the present embodiment, the feature information is, for example, text information expressing features of the person to be identified, such as age, gender, build, behavior, and objects, as shown in FIG. 3. Objects here include items the person is wearing and items the person is carrying. In the example of FIG. 3, the feature information is text including "in his 20s or 30s, male, medium build, running, not wearing a hat, carrying a knife".
When an administrator of the detection device 100 inputs features via an input device such as a keyboard, or via a terminal device connected to the detection device 100, the input reception unit 11 receives feature information containing the input features.
When the administrator designates only some features from among preset person features, the input reception unit 11 can accept input of feature information containing only the designated features. For example, the input reception unit 11 can display various icons representing features on the screen of a display device or of a terminal device connected to the detection device 100. In this case, when the administrator designates several icons, the input reception unit 11 receives the identifiers or text associated with the designated icons as the feature information.
When the input reception unit 11 receives feature information, the interpretation unit 14 extracts the features contained in it and passes the extracted features to the first determination unit 12.
Specifically, in the example of FIG. 3, the interpretation unit 14 extracts the features "20s or 30s", "male", "medium build", "running", and "not wearing a hat, carrying a knife" from the feature information, and notifies the first determination unit 12 of them.
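As a minimal sketch of this interpretation step, the feature text could be split into per-category features by keyword matching. The category names and keyword tables below are illustrative assumptions; the embodiment does not specify how the interpretation unit parses the text.

```python
# Hypothetical keyword tables mapping each discriminator category to the
# textual features it can handle. These lists are illustrative only.
FEATURE_KEYWORDS = {
    "age": ["20s", "30s", "40s", "50s"],
    "gender": ["male", "female"],
    "build": ["slim", "medium build", "heavy"],
    "behavior": ["running", "walking", "loitering"],
    "object": ["knife", "hat", "bag"],
}

def interpret(feature_text):
    """Extract the features contained in the text, grouped by category."""
    features = {}
    for category, keywords in FEATURE_KEYWORDS.items():
        hits = [kw for kw in keywords if kw in feature_text]
        if hits:
            features[category] = hits
    return features
```

For the feature information of FIG. 3, `interpret("20s or 30s, male, medium build, running, carrying a knife")` would yield one entry per category, which the first determination unit could use to select the corresponding discriminators.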
In the present embodiment, the first determination unit 12 includes a plurality of feature discriminators 16, one per feature. Specifically, as shown in FIG. 3, each feature discriminator 16 corresponds to one of age, gender, physique, behavior, or object; in the example of FIG. 3, they are labeled "age discriminator", "gender discriminator", "physique discriminator", "behavior discriminator", and "object discriminator".
The types of feature discriminators 16 in the present embodiment are not limited to those shown in FIG. 3. Furthermore, when notified of features by the interpretation unit 14, the first determination unit 12 can select the necessary feature discriminators 16 from among those held in advance, according to the notified features.
Each feature discriminator 16 determines whether its corresponding feature matches the feature of the person in the image specified by the image data from the imaging device 20. Each feature discriminator 16 may also calculate the probability that its corresponding feature matches the feature of the person in the image, and judge them to match when the calculated probability is equal to or greater than a threshold. Specifically, each feature discriminator 16 performs its determination as follows.
The age discriminator determines the age of the person in the image using, for example, deep learning. Specifically, the age discriminator trains in advance a convolutional neural network having convolutional layers and fully connected layers. The convolutional layers of the trained network extract feature values from the image, the fully connected layers perform an age-specific weight calculation on the extracted feature values, and the age is determined from the resulting values.
The gender discriminator likewise determines the gender of the person in the image using deep learning. It also extracts feature values from the image with the convolutional layers of a pre-trained convolutional neural network, but its fully connected layers perform a gender-specific weight calculation on the extracted feature values and determine the gender from the resulting values.
The object discriminator also uses deep learning to determine what the person is wearing and carrying. It too trains in advance a convolutional network having convolutional layers and fully connected layers, extracts feature values from the image with the convolutional layers, performs an object-specific weight calculation in the fully connected layers, and determines the type of object shown in the image from the resulting values.
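The structure shared by the age, gender, and object discriminators, convolutional layers extracting feature values followed by an attribute-specific fully connected weight calculation, can be illustrated with a toy one-dimensional example. The kernel, weights, and labels below are stand-ins, not a trained network, and real discriminators would operate on 2-D images.

```python
def conv_extract(image, kernel):
    """Stand-in for the convolutional layers: a 1-D 'valid' convolution
    that turns the input into a feature vector."""
    k = len(kernel)
    return [sum(image[i + j] * kernel[j] for j in range(k))
            for i in range(len(image) - k + 1)]

def fully_connected(features, weight_rows):
    """Attribute-specific weight calculation: one weighted sum per class."""
    return [sum(f * w for f, w in zip(features, row)) for row in weight_rows]

def discriminate(image, kernel, weight_rows, labels):
    """Pick the label whose class score is highest."""
    scores = fully_connected(conv_extract(image, kernel), weight_rows)
    return labels[scores.index(max(scores))]
```

Swapping in a different set of `weight_rows` and `labels` while reusing the same extracted features mirrors how the age, gender, and object discriminators differ only in their attribute-specific weight calculation.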
The behavior discriminator also determines the behavior of the person in the image using deep learning. Unlike the discriminators above, however, it first detects the person in the image and their surroundings using a pre-trained Region Convolutional Neural Network, and then uses a pre-trained recurrent neural network to determine whether the detected person and surroundings correspond to the textual expression of behavior in the feature information.
The physique discriminator, for example, extracts the widths of the person's head, neck, shoulders, abdomen, legs, and so on from the image, compares each measurement against preset build patterns (slim, medium build, heavy, and so on), and determines the person's build from the comparison results.
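This width-comparison approach might be sketched as follows. The reference widths and the choice of body parts are illustrative assumptions; the embodiment only names the measured parts and the pattern categories.

```python
# Hypothetical reference widths per build pattern, e.g. normalized to
# the person's height. These numbers are illustrative only.
BUILD_PATTERNS = {
    "slim": {"shoulder": 0.36, "abdomen": 0.24},
    "medium build": {"shoulder": 0.42, "abdomen": 0.32},
    "heavy": {"shoulder": 0.48, "abdomen": 0.44},
}

def classify_build(measured):
    """Return the build pattern whose reference widths are closest to the
    measured widths (sum of absolute differences)."""
    def distance(pattern):
        return sum(abs(measured[part] - ref) for part, ref in pattern.items())
    return min(BUILD_PATTERNS, key=lambda name: distance(BUILD_PATTERNS[name]))
```

A person with measured widths near the "medium build" references would be classified as medium build, matching the "中肉中背" feature in the example of FIG. 3.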
In the present embodiment, the second determination unit 13 judges that the feature information and the person in the image match when, in the per-feature determinations, the number of features judged to match is equal to or greater than a predetermined number, or the proportion of features judged to match is equal to or greater than a predetermined proportion.
Specifically, in the example of FIG. 3, the second determination unit 13 combines the per-feature results logically and judges that the feature information and the person in the image match when the age, gender, physique, behavior, and object discriminators all report a match. When the determination by the second determination unit 13 is complete, the output unit 15 outputs the result, for example to a display device of the detection device 100 or to a terminal device of the administrator of the detection device 100.
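The combination rules described above, requiring all features to match, or at least a predetermined number or proportion of them, can be sketched as follows; the parameter names are illustrative.

```python
def second_determination(results, min_count=None, min_ratio=None):
    """Combine per-feature match results (True/False).

    With no thresholds given, require every discriminator to report a
    match, as in the example of FIG. 3; otherwise apply a predetermined
    count or proportion threshold."""
    matched = sum(results)
    if min_count is not None:
        return matched >= min_count
    if min_ratio is not None:
        return matched / len(results) >= min_ratio
    return matched == len(results)
```

For instance, with five discriminators and `min_count=4`, a person matching four of the five features would still be reported, which trades some precision for fewer missed detections.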
[Device operation]
Next, the operation of the information processing apparatus 10 according to the present embodiment will be described with reference to FIG. 4, a flowchart showing that operation. The following description refers to FIGS. 1 to 3 as appropriate. In the present embodiment, the information processing method is carried out by operating the information processing apparatus 10, so the following description of the operation of the information processing apparatus 10 also serves as the description of the information processing method.
As shown in FIG. 4, first, when the administrator of the detection device 100 inputs the features of the person to be identified, the input reception unit 11 receives the input of feature information containing those features (step A1) and passes the received feature information to the interpretation unit 14.
Next, the interpretation unit 14 extracts each feature contained in the passed feature information and passes the extracted features to the first determination unit 12 (step A2).
Next, the first determination unit 12 acquires image data from the imaging device 20 and, for each feature passed in step A2, determines the degree of match between that feature and the feature of the person shown in the image (step A3). Specifically, the first determination unit 12 selects the feature discriminator 16 corresponding to each feature and has each selected discriminator judge whether its feature matches the corresponding feature of the person in the image.
Next, the second determination unit 13 determines, based on the per-feature determination results, whether the feature information and the person in the image match (step A4). Specifically, in the example of FIG. 3, it judges that they match when the age, gender, physique, behavior, and object discriminators all report a match.
Thereafter, the output unit 15 outputs the determination result of step A4 to the display device of the detection device 100, the terminal device of the administrator, or the like (step A5). If the person to be identified is, for example, a wanted criminal and a match is determined in step A4, the output unit 15 may also output a warning. Steps A3 to A5 are executed repeatedly each time the imaging device 20 outputs image data.
Thus, according to the present embodiment, once feature information has been input as text or the like, a person whose features all match is automatically identified from the video captured by the imaging device 20. Therefore, if the features of a suspicious person, wanted criminal, or the like are input as feature information based on witness reports, such persons can be identified automatically even without their face images. Although the example above identifies a person using feature information alone, the present embodiment may also identify a person using both feature information and a face image.
[Program]
The program in the present embodiment may be any program that causes a computer to execute steps A1 to A5 shown in FIG. 4. By installing this program on a computer and executing it, the information processing apparatus 10 and the information processing method according to the present embodiment can be realized. In this case, the processor of the computer functions as the input reception unit 11, the first determination unit 12, the second determination unit 13, the interpretation unit 14, and the output unit 15, and performs the processing.
The program in the present embodiment may also be executed by a computer system built from a plurality of computers. In this case, for example, each computer may function as any one of the input reception unit 11, the first determination unit 12, the second determination unit 13, the interpretation unit 14, and the output unit 15.
[Modification]
Next, a modification of the present embodiment will be described with reference to FIG. 5, which illustrates the processing of the first and second determination units in this modification.
As shown in FIG. 5, in this modification each feature discriminator 16 calculates the probability that its corresponding feature matches the feature of the person in the image, and the first determination unit 12 outputs the calculated probabilities to the second determination unit 13.
The second determination unit 13 then obtains the average of the probabilities calculated for the individual features and determines whether this average exceeds a threshold. When the average exceeds the threshold, the second determination unit 13 judges that the feature information and the person in the image match.
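This averaged-probability decision can be sketched directly; the threshold value used below is an illustrative assumption.

```python
def second_determination_avg(probabilities, threshold=70.0):
    """Variant of the second determination unit: the feature information
    and the person in the image are judged to match when the mean of the
    per-feature match probabilities exceeds the threshold."""
    return sum(probabilities) / len(probabilities) > threshold
```

Unlike the all-must-match rule, a single low-probability feature no longer vetoes the result as long as the remaining features score high enough.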
With this modification, a warning is output whenever a person who is similar to some degree appears, which further reduces the chance that a suspicious person, wanted criminal, or the like is overlooked.
Also in this modification, when computing the average, the second determination unit 13 can multiply the probability of each feature by a weighting coefficient. The values of the weighting coefficients are set as appropriate by the administrator of the detection device 100.
For example, when the feature information has been entered based on witness reports, the administrator sets a higher weighting coefficient for features in which the witness has high confidence. If the witness is confident about the age of the person to be identified, the administrator sets the age weighting coefficient to 1.2 in the example of FIG. 5. The average then becomes (70 × 1.2 + 80 + 60 + 75 + 90) / 5 = 77.8, which is higher than the average of 75 obtained without the weighting coefficient.
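The weighted average of this example can be reproduced as follows. Note that, as in the text, the weighted sum is still divided by the number of features, not by the sum of the weights, so raising a weight above 1 raises the average.

```python
def weighted_average(probabilities, weights=None):
    """Mean of per-feature probabilities, each multiplied by its
    weighting coefficient before averaging over the feature count."""
    if weights is None:
        weights = [1.0] * len(probabilities)
    return sum(p * w for p, w in zip(probabilities, weights)) / len(probabilities)
```

With the five probabilities of the example and an age weight of 1.2, this yields 77.8, versus 75 without weighting, so a candidate near the threshold is more likely to trigger a warning when the witness is confident about that feature.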
[Application example]
Next, a person detection system using the detection device 100 according to the present embodiment will be described with reference to FIG. 6, which shows an example of such a system.
As shown in FIG. 6, the person detection system 400 includes a plurality of detection devices 100 and a management server 200, which are connected via the Internet 300. A terminal device 210 of the administrator is connected to the management server 200, and the detection devices 100 are installed in different areas.
When a witness gives a report such as "a man in his 20s or 30s, of medium build, not wearing a hat, running with a knife", the administrator inputs feature information based on this report on the terminal device 210, and the terminal device 210 transmits the feature information to the management server 200.
On receiving the feature information, the management server 200 converts it into the logically interpretable format shown in FIG. 3 and transmits the converted feature information to each detection device 100. When the area in which the sighting was reported can be determined, the management server 200 can also transmit the feature information only to the detection devices 100 in that area and in the areas adjacent to it.
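The area-limited distribution might be sketched as follows. The area names, adjacency map, and device registry are illustrative assumptions; the embodiment only states that devices in the sighting area and adjacent areas are targeted.

```python
# Hypothetical registry: which detection devices serve which area, and
# which areas are adjacent to each other. Names are illustrative only.
DETECTORS_BY_AREA = {
    "station": ["detector-1", "detector-2"],
    "mall": ["detector-3"],
    "park": ["detector-4"],
}
ADJACENT_AREAS = {
    "station": ["mall"],
    "mall": ["station", "park"],
    "park": ["mall"],
}

def target_detectors(sighting_area):
    """Detection devices in the sighting area and its adjacent areas,
    i.e. the intended recipients of the converted feature information."""
    areas = [sighting_area] + ADJACENT_AREAS.get(sighting_area, [])
    return [d for area in areas for d in DETECTORS_BY_AREA.get(area, [])]
```

Restricting transmission this way keeps devices far from the sighting from raising spurious warnings for look-alike persons.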
 When each detection device 100 receives the feature information transmitted from the management server 200, the input receiving unit 11 accepts the input of the feature information (step A1), after which steps A2 to A5 are executed. Steps A3 to A5 are repeated each time image data is output from the imaging device 20. Note that the person detection system 400 is applicable not only to detecting suspicious persons and wanted criminals but also to finding lost children.
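The per-device loop described above can be sketched as follows, under the assumption that a `match` callback stands in for the per-feature comparison of steps A3 and A4 and a `report` callback stands in for the notification of step A5 (both names are hypothetical):

```python
def detection_loop(feature_info, frames, match, report):
    """Step A1 happens once when feature information arrives; steps
    A3-A5 then repeat for every frame of image data produced by the
    imaging device. `match` and `report` are stand-ins for the
    device's per-feature matching and notification logic."""
    for frame in frames:
        if match(feature_info, frame):  # steps A3-A4: compare features
            report(frame)               # step A5: report the detection
```

A device would call this once per received feature-information message, feeding it the stream of frames from its camera.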
[Physical configuration]
 Here, a computer that realizes the information processing apparatus 10 by executing the program according to the present embodiment will be described with reference to FIG. 7. FIG. 7 is a block diagram illustrating an example of a computer that implements the information processing apparatus according to the embodiment of the present invention.
 As shown in FIG. 7, the computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These units are connected to one another via a bus 121 so that they can exchange data. The computer 110 may also include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to, or instead of, the CPU 111.
 The CPU 111 performs various operations by loading the program (code) of the present embodiment, stored in the storage device 113, into the main memory 112 and executing its instructions in a predetermined order. The main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory). The program of the present embodiment is provided stored on a computer-readable recording medium 120; it may also be distributed over the Internet via the communication interface 117.
 Specific examples of the storage device 113 include a hard disk drive and a semiconductor storage device such as a flash memory. The input interface 114 mediates data transmission between the CPU 111 and input devices 118 such as a keyboard and a mouse. The display controller 115 is connected to a display device 119 and controls display on the display device 119.
 The data reader/writer 116 mediates data transmission between the CPU 111 and the recording medium 120; it reads the program from the recording medium 120 and writes processing results of the computer 110 to the recording medium 120. The communication interface 117 mediates data transmission between the CPU 111 and other computers.
 Specific examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (Compact Flash (registered trademark)) and SD (Secure Digital) cards, magnetic recording media such as a flexible disk, and optical recording media such as a CD-ROM (Compact Disk Read Only Memory).
 Note that the information processing apparatus 10 of the present embodiment can also be realized with hardware corresponding to each of its units, rather than with a computer on which the program is installed. Furthermore, part of the information processing apparatus 10 may be realized by the program and the remainder by hardware.

 Some or all of the embodiment described above can be expressed as, but is not limited to, (Supplementary Note 1) to (Supplementary Note 18) below.

(Supplementary Note 1) An information processing apparatus comprising:
 an input receiving unit that receives input of feature information, other than a face image, indicating features of a person to be identified;
 a first determination unit that determines, for each feature of the person included in the feature information, the degree to which that feature matches a feature of a person shown in an image; and
 a second determination unit that determines, based on the determination results for the individual features, whether the feature information matches the person shown in the image.

(Supplementary Note 2) The information processing apparatus according to Supplementary Note 1, wherein all or part of the features included in the feature information are external features of the person to be identified.

(Supplementary Note 3) The information processing apparatus according to Supplementary Note 1 or 2, wherein the input receiving unit receives, as the feature information, text information in which the features of the person are expressed as text.

(Supplementary Note 4) The information processing apparatus according to Supplementary Note 1 or 2, wherein the input receiving unit receives input of feature information containing specified features selected from among the features of the person.

(Supplementary Note 5) The information processing apparatus according to any one of Supplementary Notes 1 to 4, wherein
 the first determination unit determines, for each feature of the person included in the feature information, whether that feature and the corresponding feature of the person shown in the image match or do not match, and
 the second determination unit determines that the feature information matches the person shown in the image when, among the per-feature determinations, the number of results judged to match is equal to or greater than a predetermined number, or the proportion of results judged to match is equal to or greater than a predetermined proportion.
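The two decision rules in Supplementary Note 5 — at least a predetermined number of matching features, or at least a predetermined proportion — reduce to a simple count and comparison. The function and parameter names below are illustrative assumptions, not from the specification:

```python
def second_determination(per_feature_matches, min_count=None, min_ratio=None):
    """Decide whether the feature information matches the person in the
    image, given the first determination unit's per-feature match /
    no-match results (True = match). Exactly one of min_count (the
    "predetermined number") or min_ratio (the "predetermined
    proportion") is expected to be supplied."""
    matched = sum(per_feature_matches)  # count of features judged to match
    if min_count is not None:
        return matched >= min_count     # "predetermined number or more"
    if min_ratio is not None:
        # "predetermined proportion or more" of all judged features
        return matched / len(per_feature_matches) >= min_ratio
    raise ValueError("supply either min_count or min_ratio")
```

For the results [True, True, False, True], a predetermined number of 3 yields a match, while a predetermined proportion of 0.8 does not, since only 3/4 = 0.75 of the features matched.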

(Supplementary Note 6) The information processing apparatus according to any one of Supplementary Notes 1 to 4, wherein
 the first determination unit calculates, for each feature of the person included in the feature information, the probability that the feature and the corresponding feature of the person shown in the image match, and
 the second determination unit obtains the average of the probabilities calculated for the individual features and determines that the feature information matches the person shown in the image when the obtained average exceeds a threshold.
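Similarly, the probability-averaging rule of Supplementary Note 6 amounts to comparing a mean against a threshold. Again, the function name is an illustrative assumption:

```python
def second_determination_by_probability(per_feature_probs, threshold):
    """Average the per-feature match probabilities produced by the
    first determination unit and report a match when the average
    exceeds the threshold (strictly greater, as the note specifies)."""
    average = sum(per_feature_probs) / len(per_feature_probs)
    return average > threshold
```

For probabilities [0.9, 0.6, 0.8] the average is about 0.77, so a threshold of 0.7 yields a match and a threshold of 0.8 does not.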

(Supplementary Note 7) An information processing method comprising:
 (a) a step of receiving input of feature information, other than a face image, indicating features of a person to be identified;
 (b) a step of determining, for each feature of the person included in the feature information, the degree to which that feature matches a feature of a person shown in an image; and
 (c) a step of determining, based on the determination results for the individual features, whether the feature information matches the person shown in the image.

(Supplementary Note 8) The information processing method according to Supplementary Note 7, wherein all or part of the features included in the feature information are external features of the person to be identified.

(Supplementary Note 9) The information processing method according to Supplementary Note 7 or 8, wherein in step (a), text information in which the features of the person are expressed as text is received as the feature information.

(Supplementary Note 10) The information processing method according to Supplementary Note 7 or 8, wherein in step (a), input of feature information containing specified features selected from among the features of the person is received.

(Supplementary Note 11) The information processing method according to any one of Supplementary Notes 7 to 10, wherein
 in step (b), for each feature of the person included in the feature information, it is determined whether that feature and the corresponding feature of the person shown in the image match or do not match, and
 in step (c), it is determined that the feature information matches the person shown in the image when, among the per-feature determinations, the number of results judged to match is equal to or greater than a predetermined number, or the proportion of results judged to match is equal to or greater than a predetermined proportion.

(Supplementary Note 12) The information processing method according to any one of Supplementary Notes 7 to 10, wherein
 in step (b), for each feature of the person included in the feature information, the probability that the feature and the corresponding feature of the person shown in the image match is calculated, and
 in step (c), the average of the probabilities calculated for the individual features is obtained, and it is determined that the feature information matches the person shown in the image when the obtained average exceeds a threshold.

(Supplementary Note 13) A computer-readable recording medium recording a program including instructions that cause a computer to execute:
 (a) a step of receiving input of feature information, other than a face image, indicating features of a person to be identified;
 (b) a step of determining, for each feature of the person included in the feature information, the degree to which that feature matches a feature of a person shown in an image; and
 (c) a step of determining, based on the determination results for the individual features, whether the feature information matches the person shown in the image.

(Supplementary Note 14) The computer-readable recording medium according to Supplementary Note 13, wherein all or part of the features included in the feature information are external features of the person to be identified.

(Supplementary Note 15) The computer-readable recording medium according to Supplementary Note 13 or 14, wherein in step (a), text information in which the features of the person are expressed as text is received as the feature information.

(Supplementary Note 16) The computer-readable recording medium according to Supplementary Note 13 or 14, wherein in step (a), input of feature information containing specified features selected from among the features of the person is received.

(Supplementary Note 17) The computer-readable recording medium according to any one of Supplementary Notes 13 to 16, wherein
 in step (b), for each feature of the person included in the feature information, it is determined whether that feature and the corresponding feature of the person shown in the image match or do not match, and
 in step (c), it is determined that the feature information matches the person shown in the image when, among the per-feature determinations, the number of results judged to match is equal to or greater than a predetermined number, or the proportion of results judged to match is equal to or greater than a predetermined proportion.

(Supplementary Note 18) The computer-readable recording medium according to any one of Supplementary Notes 13 to 16, wherein
 in step (b), for each feature of the person included in the feature information, the probability that the feature and the corresponding feature of the person shown in the image match is calculated, and
 in step (c), the average of the probabilities calculated for the individual features is obtained, and it is determined that the feature information matches the person shown in the image when the obtained average exceeds a threshold.
 Although the present invention has been described above with reference to an embodiment, the present invention is not limited to the above embodiment. Various changes understandable to those skilled in the art can be made to the configuration and details of the present invention within its scope.
 This application claims priority based on Japanese Patent Application No. 2017-035338 filed on February 27, 2017, the entire disclosure of which is incorporated herein.
 As described above, according to the present invention, a suspicious person or the like can be automatically identified based on feature information other than a face image of that person. The present invention is useful for systems for detecting suspicious persons and wanted criminals, and for systems for finding lost children.
 10  Information processing apparatus
 11  Input receiving unit
 12  First determination unit
 13  Second determination unit
 14  Interpretation unit
 15  Output unit
 16  Feature discriminator
 20  Imaging device
 100 Detection device
 110 Computer
 111 CPU
 112 Main memory
 113 Storage device
 114 Input interface
 115 Display controller
 116 Data reader/writer
 117 Communication interface
 118 Input device
 119 Display device
 120 Recording medium
 121 Bus
 200 Management server
 210 Terminal device
 300 Internet
 400 Person detection system

Claims (18)

  1.  An information processing apparatus comprising:
      an input receiving unit that receives input of feature information, other than a face image, indicating features of a person to be identified;
      a first determination unit that determines, for each feature of the person included in the feature information, the degree to which that feature matches a feature of a person shown in an image; and
      a second determination unit that determines, based on the determination results for the individual features, whether the feature information matches the person shown in the image.
  2.  The information processing apparatus according to claim 1, wherein all or part of the features included in the feature information are external features of the person to be identified.
  3.  The information processing apparatus according to claim 1 or 2, wherein the input receiving unit receives, as the feature information, text information in which the features of the person are expressed as text.
  4.  The information processing apparatus according to claim 1 or 2, wherein the input receiving unit receives input of feature information containing specified features selected from among the features of the person.
  5.  The information processing apparatus according to any one of claims 1 to 4, wherein
      the first determination unit determines, for each feature of the person included in the feature information, whether that feature and the corresponding feature of the person shown in the image match or do not match, and
      the second determination unit determines that the feature information matches the person shown in the image when, among the per-feature determinations, the number of results judged to match is equal to or greater than a predetermined number, or the proportion of results judged to match is equal to or greater than a predetermined proportion.
  6.  The information processing apparatus according to any one of claims 1 to 4, wherein
      the first determination unit calculates, for each feature of the person included in the feature information, the probability that the feature and the corresponding feature of the person shown in the image match, and
      the second determination unit obtains the average of the probabilities calculated for the individual features and determines that the feature information matches the person shown in the image when the obtained average exceeds a threshold.
  7.  An information processing method comprising:
      (a) a step of receiving input of feature information, other than a face image, indicating features of a person to be identified;
      (b) a step of determining, for each feature of the person included in the feature information, the degree to which that feature matches a feature of a person shown in an image; and
      (c) a step of determining, based on the determination results for the individual features, whether the feature information matches the person shown in the image.
  8.  The information processing method according to claim 7, wherein all or part of the features included in the feature information are external features of the person to be identified.
  9.  The information processing method according to claim 7 or 8, wherein in step (a), text information in which the features of the person are expressed as text is received as the feature information.
  10.  The information processing method according to claim 7 or 8, wherein in step (a), input of feature information containing specified features selected from among the features of the person is received.
  11.  The information processing method according to any one of claims 7 to 10, wherein
      in step (b), for each feature of the person included in the feature information, it is determined whether that feature and the corresponding feature of the person shown in the image match or do not match, and
      in step (c), it is determined that the feature information matches the person shown in the image when, among the per-feature determinations, the number of results judged to match is equal to or greater than a predetermined number, or the proportion of results judged to match is equal to or greater than a predetermined proportion.
  12.  The information processing method according to any one of claims 7 to 10, wherein
      in step (b), for each feature of the person included in the feature information, the probability that the feature and the corresponding feature of the person shown in the image match is calculated, and
      in step (c), the average of the probabilities calculated for the individual features is obtained, and it is determined that the feature information matches the person shown in the image when the obtained average exceeds a threshold.
  13.  A computer-readable recording medium recording a program including instructions that cause a computer to execute:
      (a) a step of receiving input of feature information, other than a face image, indicating features of a person to be identified;
      (b) a step of determining, for each feature of the person included in the feature information, the degree to which that feature matches a feature of a person shown in an image; and
      (c) a step of determining, based on the determination results for the individual features, whether the feature information matches the person shown in the image.
  14.  The computer-readable recording medium according to claim 13, wherein all or part of the features included in the feature information are external features of the person to be identified.
  15.  The computer-readable recording medium according to claim 13 or 14, wherein in step (a), text information in which the features of the person are expressed as text is received as the feature information.
  16.  The computer-readable recording medium according to claim 13 or 14, wherein in step (a), input of feature information containing specified features selected from among the features of the person is received.
  17.  The computer-readable recording medium according to any one of claims 13 to 16, wherein
      in step (b), for each feature of the person included in the feature information, it is determined whether that feature and the corresponding feature of the person shown in the image match or do not match, and
      in step (c), it is determined that the feature information matches the person shown in the image when, among the per-feature determinations, the number of results judged to match is equal to or greater than a predetermined number, or the proportion of results judged to match is equal to or greater than a predetermined proportion.
  18.  The computer-readable recording medium according to any one of claims 13 to 16, wherein
      in step (b), for each feature of the person included in the feature information, the probability that the feature and the corresponding feature of the person shown in the image match is calculated, and
      in step (c), the average of the probabilities calculated for the individual features is obtained, and it is determined that the feature information matches the person shown in the image when the obtained average exceeds a threshold.
PCT/JP2018/006585 2017-02-27 2018-02-22 Information processing device, information processing method, and computer-readable recording medium WO2018155594A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-035338 2017-02-27
JP2017035338A JP7120590B2 (en) 2017-02-27 2017-02-27 Information processing device, information processing method, and program

Publications (1)

Publication Number Publication Date
WO2018155594A1 true WO2018155594A1 (en) 2018-08-30

Family

ID=63253901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/006585 WO2018155594A1 (en) 2017-02-27 2018-02-22 Information processing device, information processing method, and computer-readable recording medium

Country Status (2)

Country Link
JP (2) JP7120590B2 (en)
WO (1) WO2018155594A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6874754B2 * 2018-12-11 2021-05-19 Tokyo Electric Power Company Holdings, Inc. Information processing method, program, information processing device, trained model generation method and trained model
JP6989572B2 * 2019-09-03 2022-01-05 Panasonic i-PRO Sensing Solutions Co., Ltd. Investigation support system, investigation support method and computer program

Citations (2)

Publication number Priority date Publication date Assignee Title
JP2010257449A (en) * 2009-03-31 2010-11-11 Sogo Keibi Hosho Co Ltd Device, method, and program for retrieving person
JP2011035806A (en) * 2009-08-05 2011-02-17 Nec Corp Portable terminal device, image management method, and program

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
JP2006268825A (en) * 2005-02-28 2006-10-05 Toshiba Corp Object detector, learning device, and object detection system, method, and program
JP2007241377A (en) * 2006-03-06 2007-09-20 Sony Corp Retrieval system, imaging apparatus, data storage device, information processor, picked-up image processing method, information processing method, and program
JP5121258B2 * 2007-03-06 2013-01-16 Toshiba Corp Suspicious behavior detection system and method
CN101980242B * 2010-09-30 2014-04-09 Xu Yong Human face discrimination method and system and public safety system
JP5649425B2 * 2010-12-06 2015-01-07 Toshiba Corp Video search device
JP6225460B2 * 2013-04-08 2017-11-08 Omron Corp Image processing apparatus, image processing method, control program, and recording medium
JP2015143951A * 2014-01-31 2015-08-06 Omron Corp Object discrimination device, image sensor and object discrimination method
JP6441068B2 * 2014-12-22 2018-12-19 Secom Co., Ltd. Monitoring system
JP2016131288A * 2015-01-13 2016-07-21 Toshiba Tec Corp Information processing apparatus and program
US10110858B2 * 2015-02-06 2018-10-23 Conduent Business Services, Llc Computer-vision based process recognition of activity workflow of human performer
JP5785667B1 * 2015-02-23 2015-09-30 Mitsubishi Electric Micro-Computer Application Software Co., Ltd. Person identification system


Also Published As

Publication number Publication date
JP7120590B2 (en) 2022-08-17
JP2018142137A (en) 2018-09-13
JP2022003526A (en) 2022-01-11

Similar Documents

Publication Publication Date Title
US10846537B2 (en) Information processing device, determination device, notification system, information transmission method, and program
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
US20200050871A1 (en) Method and apparatus for integration of detected object identifiers and semantic scene graph networks for captured visual scene behavior estimation
US20180174062A1 (en) Root cause analysis for sequences of datacenter states
US20200012887A1 (en) Attribute recognition apparatus and method, and storage medium
US20110222743A1 (en) Matching device, digital image processing system, matching device control program, computer-readable recording medium, and matching device control method
US9824313B2 (en) Filtering content in an online system based on text and image signals extracted from the content
US11126827B2 (en) Method and system for image identification
WO2020195732A1 (en) Image processing device, image processing method, and recording medium in which program is stored
US11023714B2 (en) Suspiciousness degree estimation model generation device
CN109766755A (en) Face identification method and Related product
CN110660078B (en) Object tracking method, device, computer equipment and storage medium
JP2022003526A (en) Information processor, detection system, method for processing information, and program
CN111581436B (en) Target identification method, device, computer equipment and storage medium
KR20230069892A (en) Method and apparatus for identifying object representing abnormal temperatures
CN111783677B (en) Face recognition method, device, server and computer readable medium
JP6542819B2 (en) Image surveillance system
US20200302572A1 (en) Information processing device, information processing system, information processing method, and program
US10783365B2 (en) Image processing device and image processing system
CN110458052B (en) Target object identification method, device, equipment and medium based on augmented reality
JP2019053381A (en) Image processing device, information processing device, method, and program
CN108875467B (en) Living body detection method, living body detection device and computer storage medium
JP7315022B2 (en) Machine learning device, machine learning method, and machine learning program
CN114596638A (en) Face living body detection method, device and storage medium
KR101886856B1 (en) System and method for data combining based on result of non-rigid object tracking on multi-sensor seeker

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18757079

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18757079

Country of ref document: EP

Kind code of ref document: A1