WO2018155594A1 - Information processing device, information processing method, and computer-readable recording medium - Google Patents
- Publication number
- WO2018155594A1 (PCT/JP2018/006585)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature
- person
- information
- image
- match
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
Definitions
- the present invention relates to an information processing apparatus and an information processing method for automatically identifying a suspicious person or the like based on feature information other than a face image of that person, and to a computer-readable recording medium on which a program for realizing them is recorded.
- in conventional surveillance, a supervisor needs to identify a suspicious person or the like by visual observation of the video taken by the surveillance camera.
- such visual identification is labor-intensive, and it is difficult to cope when a plurality of persons appear in the video.
- Patent Document 1 discloses a monitoring device having a function of automatically identifying a person. Specifically, the monitoring device disclosed in Patent Document 1 extracts a person's face image from the video sent from the monitoring camera and automatically identifies the person by matching the extracted face image against face images registered in a database.
- the face image of a person registered in the database is usually a frontal face image,
- whereas the face image of the person extracted from the video sent from the surveillance camera is often not a frontal face image.
- for this reason, the monitoring device disclosed in Patent Document 1 displays both the registered face image and the extracted face image on the screen, supporting the judgment made by the human monitor.
- An example of an object of the present invention is to provide an information processing apparatus, an information processing method, and a program capable of solving the above-described problem and of automatically identifying a suspicious person or the like based on feature information other than a face image of that person.
- an information processing apparatus includes: an input reception unit that receives input of feature information, other than a face image, indicating the features of a person to be identified; a first determination unit that determines, for each feature of the person included in the feature information, the degree of coincidence between that feature and the corresponding feature of the person shown in an image; and a second determination unit that determines, based on the determination result for each feature of the person, whether or not the feature information matches the person shown in the image.
- an information processing method includes: (a) receiving input of feature information, other than a face image, indicating the features of a person to be identified; (b) determining, for each feature of the person included in the feature information, the degree of coincidence between that feature and the corresponding feature of the person shown in an image; and (c) determining, based on the determination result for each feature of the person, whether the feature information matches the person shown in the image.
- a computer-readable recording medium records a program including instructions for causing a computer to execute: (a) receiving input of feature information, other than a face image, indicating the features of a person to be identified; (b) determining, for each feature of the person included in the feature information, the degree of coincidence between that feature and the corresponding feature of the person shown in an image; and (c) determining, based on the determination result for each feature of the person, whether the feature information matches the person shown in the image.
- a suspicious person or the like can be automatically specified based on feature information other than a face image related to the suspicious person or the like.
- FIG. 1 is a block diagram showing a schematic configuration of an information processing apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing a specific configuration of the information processing apparatus according to the embodiment of the present invention.
- FIG. 3 is a diagram illustrating an example of feature information used in the embodiment of the present invention and processing of the first determination unit and the second determination unit.
- FIG. 4 is a flowchart showing the operation of the information processing apparatus according to the embodiment of the present invention.
- FIG. 5 is a diagram for explaining processing of the first determination unit and the second determination unit in a modification of the embodiment of the present invention.
- FIG. 6 is a diagram illustrating an example of a person detection system configured by the detection device according to the present embodiment.
- FIG. 7 is a block diagram illustrating an example of a computer that implements the information processing apparatus according to the embodiment of the present invention.
- FIG. 1 is a block diagram showing a schematic configuration of an information processing apparatus according to an embodiment of the present invention.
- An information processing apparatus 10 is an apparatus for performing information processing when a specific person is detected. As illustrated in FIG. 1, the information processing apparatus 10 includes an input reception unit 11, a first determination unit 12, and a second determination unit 13.
- the input reception unit 11 receives input of feature information other than a face image indicating the characteristics of a person to be specified.
- the first determination unit 12 determines the degree of coincidence between each feature and the feature of the person shown in the image for each feature of the person included in the feature information.
- the second determination unit 13 determines whether the feature information matches the person shown in the image based on the determination result for each feature of the person.
- the degree of coincidence between each non-face feature of the person to be identified, such as age, physique, and sex, and the corresponding feature of the person shown in the image is determined,
- and based on these determination results, it is finally determined whether or not the person in the image is the person to be identified.
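The two-stage scheme above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the feature names, the simple equality predicate, and the all-must-match combination rule are assumptions for the example.

```python
# Hypothetical sketch of the two-stage determination:
# a first unit judges each queried feature against the person in the
# image, and a second unit combines the per-feature results.

def first_determination(query, person):
    """Per-feature match results (here a toy equality check)."""
    return {name: person.get(name) == value for name, value in query.items()}

def second_determination(per_feature):
    """Final decision: match only if every queried feature matched."""
    return all(per_feature.values())

query  = {"age": "20s-30s", "sex": "male", "physique": "medium"}
person = {"age": "20s-30s", "sex": "male", "physique": "medium", "action": "running"}
print(second_determination(first_determination(query, person)))  # True
```

In practice each per-feature judgment would come from a trained discriminator rather than an equality test, as described below in this document.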
- FIG. 2 is a block diagram showing a specific configuration of the information processing apparatus according to the embodiment of the present invention.
- FIG. 3 is a diagram illustrating an example of feature information used in the embodiment of the present invention and processing of the first determination unit and the second determination unit.
- the information processing apparatus 10 and the imaging apparatus 20 constitute a detection apparatus 100 that detects a person to be identified.
- the imaging device 20 outputs image data of an image obtained by shooting at set intervals.
- a specific example of the imaging device 20 is a surveillance camera.
- the information processing apparatus 10 includes an interpretation unit 14 and an output unit 15 in addition to the input reception unit 11, the first determination unit 12, and the second determination unit 13.
- examples of the feature information include, as shown in FIG. 3, text information expressing features of the person to be identified, such as age, sex, physique, behavior, and objects, as text.
- the object here includes an object worn by a person and an object carried by the person.
- the feature information is text information including "20s or 30s, male, medium build, running, not wearing a hat, carrying a knife".
- the input reception unit 11 then receives the feature information including the input features.
- when the administrator specifies only some of the preset features of the person, the input receiving unit 11 can receive input of feature information containing only the specified features.
- the input reception unit 11 can display various icons representing features on the screen of a display device or of a terminal device connected to the detection device 100. In this case, when the administrator selects several icons, the input receiving unit 11 receives the identifiers or text associated with the selected icons as the feature information.
- the interpretation unit 14 takes out the feature included in the feature information and passes the extracted feature to the first determination unit 12.
- in the example of FIG. 3, the interpretation unit 14 extracts from the feature information the features "20s or 30s", "male", "medium build", "running", "not wearing a hat", and "carrying a knife", and notifies the first determination unit 12 of the extracted features.
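The interpretation step can be sketched as below. The comma delimiter and the flat list of feature phrases are assumptions for illustration; the patent does not specify the parsing rule.

```python
# Toy sketch of the interpretation unit: splitting textual feature
# information into individual features to hand to the first
# determination unit.

def interpret(feature_text):
    """Split comma-separated feature text into a list of features."""
    return [f.strip() for f in feature_text.split(",") if f.strip()]

features = interpret("20s or 30s, male, medium build, running, "
                     "not wearing a hat, carrying a knife")
print(features)
```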
- the first determination unit 12 includes a plurality of feature discriminators 16, one for each feature.
- each feature discriminator 16 corresponds to one of age, gender, physique, behavior, and object.
- the feature discriminators 16 are represented as "age discriminator", "gender discriminator", "physique discriminator", "behavior discriminator", and "object discriminator".
- the types of feature discriminators 16 are not limited to those shown in FIG. 3. Further, when features are notified from the interpretation unit 14, the first determination unit 12 can also select the necessary feature discriminators 16 from those held in advance, according to the notified features.
- Each feature discriminator 16 determines whether the feature to which it corresponds and the corresponding feature of the person in the image specified by the image data from the imaging device 20 match or do not match. Each feature discriminator 16 may also calculate the probability that its feature matches the feature of the person in the image, and determine that they match if the calculated probability is equal to or greater than a threshold value. Specifically, each feature discriminator 16 performs determination as follows.
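The probability-with-threshold decision rule can be sketched as follows; the threshold value of 0.5 is an illustrative assumption, since the patent leaves the concrete value unspecified.

```python
# Sketch of a feature discriminator's decision rule: compare the
# computed match probability against a threshold.

def discriminate(probability, threshold=0.5):
    """Return 'match' if the probability meets the threshold."""
    return "match" if probability >= threshold else "mismatch"

print(discriminate(0.82))  # match
print(discriminate(0.31))  # mismatch
```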
- the age discriminator discriminates the age of a person in the image using deep learning, a form of machine learning. Specifically, the age discriminator trains in advance a convolutional neural network having convolutional layers and fully connected layers. The age discriminator then extracts feature quantities from the image with the convolutional layers of the trained network, applies the fully connected layers to the extracted feature quantities, and discriminates the age based on the resulting values.
- the gender discriminator likewise discriminates the gender of the person in the image using deep learning as an example. Specifically, the gender discriminator also extracts feature quantities from the image with the convolutional layers of a previously trained convolutional neural network, but applies fully connected layers trained for gender to the extracted feature quantities and discriminates gender based on the resulting values.
- the object discriminator, also using deep learning as an example, determines what the person is wearing and what the person is carrying. Specifically, the object discriminator likewise trains in advance a convolutional neural network having convolutional layers and fully connected layers, extracts feature quantities from the image with the convolutional layers, applies fully connected layers trained for objects to the extracted feature quantities, and discriminates the type of object shown in the image based on the resulting values.
- the behavior discriminator also determines the behavior of a person in the image using deep learning as an example. However, unlike the discriminators described above, the behavior discriminator first detects the person shown in the image and the person's surroundings using a previously trained Region-based Convolutional Neural Network. Next, the behavior discriminator determines, using a previously trained recurrent neural network, whether the detected person and surroundings correspond to the textual expression indicating the behavior in the feature information.
- the physique discriminator extracts the widths of the person's head, neck, shoulders, belly, legs, and so on from the image, compares them with preset body-shape patterns (for example, a slim type and a stocky type), and determines the person's body type based on the comparison result.
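One way to read the comparison against preset body-shape patterns is a nearest-pattern classification. The sketch below assumes hypothetical width measurements (head, neck, shoulder, belly), illustrative pattern values, and a Euclidean distance metric; none of these specifics appear in the patent.

```python
# Hedged sketch of the physique discriminator: pick the preset
# body-shape pattern closest to the measured widths.

def classify_build(widths, patterns):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(patterns, key=lambda name: distance(widths, patterns[name]))

# Illustrative patterns: (head, neck, shoulder, belly) widths in cm.
patterns = {
    "slim":   (14.0, 10.0, 38.0, 26.0),
    "medium": (15.0, 12.0, 42.0, 32.0),
    "heavy":  (16.0, 14.0, 48.0, 42.0),
}
print(classify_build((15.2, 12.1, 43.0, 33.0), patterns))  # medium
```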
- the second determination unit 13 determines that the feature information matches the person in the image when, in the determinations for the individual features, the number of determination results judged to match is equal to or greater than a predetermined number, or the ratio of determination results judged to match is equal to or greater than a predetermined ratio.
- in the example of FIG. 3, the second determination unit 13 takes the logical conjunction of the results: it determines that the feature information matches the person shown in the image only when the age discriminator, gender discriminator, physique discriminator, behavior discriminator, and object discriminator all determine a match.
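The count-based and ratio-based rules described above can be sketched together, with the all-must-agree conjunction as the strictest special case (ratio 1.0). The verdict values and bounds below are illustrative assumptions.

```python
# Sketch of the second determination unit's aggregation rules:
# match if enough discriminators agree, by count or by ratio.

def matches_by_count(verdicts, min_count):
    """True if at least min_count discriminators reported a match."""
    return sum(verdicts.values()) >= min_count

def matches_by_ratio(verdicts, min_ratio):
    """True if the fraction of matching discriminators meets min_ratio."""
    return sum(verdicts.values()) / len(verdicts) >= min_ratio

verdicts = {"age": True, "gender": True, "physique": True,
            "behavior": False, "object": True}
print(matches_by_count(verdicts, 4))    # True  (4 of 5 matched)
print(matches_by_ratio(verdicts, 1.0))  # False (conjunction: all 5 required)
```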
- the output unit 15 outputs a determination result. Examples of the output destination include a display device of the detection device 100, a terminal device of an administrator of the detection device 100, and the like.
- FIG. 4 is a flowchart showing the operation of the information processing apparatus according to the embodiment of the present invention.
- FIGS. 1 to 3 are referred to as appropriate.
- the information processing method is implemented by operating the information processing apparatus 10. Therefore, the description of the information processing method in the present embodiment is replaced with the following description of the operation of the information processing apparatus 10.
- first, the input receiving unit 11 receives feature information including the input features (step A1). The input receiving unit 11 passes the received feature information to the interpreting unit 14.
- the interpretation unit 14 extracts each feature included in the passed feature information, and passes the extracted feature to the first determination unit 12 (step A2).
- next, the first determination unit 12 acquires image data from the imaging device 20 and, for each feature passed in step A2, determines the degree of coincidence between that feature and the corresponding feature of the person shown in the image (step A3). Specifically, the first determination unit 12 selects the feature discriminator 16 corresponding to each feature and causes each selected feature discriminator 16 to judge whether its feature matches the feature of the person in the image.
- the second determination unit 13 then determines whether the feature information matches the person shown in the image based on the determination result for each feature (step A4). Specifically, the second determination unit 13 takes the logical conjunction of the results, and when the age discriminator, gender discriminator, physique discriminator, behavior discriminator, and object discriminator all determine a match, it determines that the feature information matches the person shown in the image (see FIG. 3).
- the output unit 15 outputs the determination result in step A4 to the display device of the detection device 100, the terminal device of the administrator of the detection device 100, and the like (step A5).
- the output unit 15 may also output a warning. Steps A3 to A5 are repeatedly executed each time image data is output from the imaging device 20.
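The per-frame loop of steps A3 to A5 can be sketched as below. The frame representation, the stand-in discriminators, and the list-based output are assumptions for illustration only.

```python
# Sketch of the repeated steps A3-A5: for each frame emitted by the
# imaging device, run per-feature determination, the final decision,
# and output.

def process_frames(frames, discriminators, output):
    for frame in frames:
        per_feature = {name: disc(frame)
                       for name, disc in discriminators.items()}  # step A3
        matched = all(per_feature.values())                       # step A4
        output(matched)                                           # step A5

results = []
discriminators = {"age": lambda f: f["age"] == "20s-30s",
                  "object": lambda f: f["object"] == "knife"}
frames = [{"age": "20s-30s", "object": "knife"},
          {"age": "50s", "object": "umbrella"}]
process_frames(frames, discriminators, results.append)
print(results)  # [True, False]
```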
- as described above, a person whose features match the feature information is automatically identified from the video captured by the imaging device 20. For this reason, if the features of a suspicious person, a wanted criminal, or the like are input as feature information based on sighting information, such persons can be automatically identified even when no face image is available.
- in the example described above, a person is identified using only feature information.
- however, the present embodiment may also be an embodiment in which a person is identified using both feature information and a face image.
- the program in the present embodiment may be a program that causes a computer to execute steps A1 to A5 shown in FIG.
- the information processing apparatus 10 and the information processing method in the present embodiment can be realized by installing and executing this program on a computer.
- the processor of the computer functions as the input reception unit 11, the first determination unit 12, the second determination unit 13, the interpretation unit 14, and the output unit 15, and performs processing.
- each computer may function as any one of the input reception unit 11, the first determination unit 12, the second determination unit 13, the interpretation unit 14, and the output unit 15, respectively.
- FIG. 5 is a diagram for explaining processing of the first determination unit and the second determination unit in a modification of the embodiment of the present invention.
- each feature discriminator 16 calculates the probability that the corresponding feature matches the feature of the person in the image.
- the first determination unit 12 outputs the probability calculated by each feature classifier 16 to the second determination unit 13.
- the second determination unit 13 calculates the average value of the probabilities output for the individual features and determines that the feature information matches the person in the image when the average value exceeds a threshold.
- when calculating the average value, the second determination unit 13 can also multiply the probability of each feature by a weighting factor.
- the value of the weighting factor is appropriately set by the administrator of the detection apparatus 100.
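The weighted-average variant can be sketched as follows. The probabilities, weights, and threshold below are illustrative values chosen for the example; the patent only states that the weighting factors are set by the administrator.

```python
# Sketch of the modified second determination unit: a weighted
# average of per-feature probabilities compared against a threshold.

def weighted_average_match(probs, weights, threshold):
    """Return (weighted average, whether it exceeds the threshold)."""
    avg = (sum(probs[k] * weights[k] for k in probs)
           / sum(weights[k] for k in probs))
    return avg, avg > threshold

probs   = {"age": 0.9, "gender": 0.8, "physique": 0.6, "object": 0.95}
weights = {"age": 1.0, "gender": 1.0, "physique": 0.5, "object": 2.0}
avg, matched = weighted_average_match(probs, weights, 0.7)
print(round(avg, 3), matched)
```

Raising a feature's weight (here the object feature) lets the administrator make that feature dominate the final decision.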
- FIG. 6 is a diagram illustrating an example of a person detection system configured by the detection device according to the present embodiment.
- the person detection system 400 includes a plurality of detection devices 100 and a management server 200, which are connected via the Internet 300.
- an administrator terminal device 210 is connected to the management server 200.
- the detection devices 100 are installed in different areas.
- when a witness gives a testimony such as "a medium-build man in his 20s or 30s, not wearing a hat, running with a knife", the administrator inputs feature information based on this testimony on the terminal device 210. As a result, the terminal device 210 transmits the feature information to the management server 200.
- when receiving the feature information, the management server 200 converts the format of the received feature information into the logically interpretable format shown in FIG. 3, and transmits the converted feature information to each detection device 100. In addition, when the area where the sighting information was obtained can be identified, the management server 200 can transmit the feature information only to the detection devices 100 corresponding to the identified area and its adjacent areas.
- when each detection device 100 receives the feature information transmitted from the management server 200,
- the input accepting unit 11 accepts the input of the feature information (step A1), and thereafter steps A2 to A5 are executed. Steps A3 to A5 are repeatedly executed every time image data is output from the imaging device 20.
- the person detection system 400 can be applied not only to detection of suspicious persons and wanted criminals but also to detection of lost children.
- FIG. 7 is a block diagram illustrating an example of a computer that implements the information processing apparatus according to the embodiment of the present invention.
- the computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These units are connected to each other via a bus 121 so that data communication is possible.
- the computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to or instead of the CPU 111.
- the CPU 111 performs various operations by developing the program (code) in the present embodiment stored in the storage device 113 in the main memory 112 and executing them in a predetermined order.
- the main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
- the program in the present embodiment is provided in a state of being stored in a computer-readable recording medium 120. Note that the program in the present embodiment may be distributed on the Internet connected via the communication interface 117.
- the storage device 113 includes a hard disk drive and a semiconductor storage device such as a flash memory.
- the input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and a mouse.
- the display controller 115 is connected to the display device 119 and controls display on the display device 119.
- the data reader / writer 116 mediates data transmission between the CPU 111 and the recording medium 120, and reads a program from the recording medium 120 and writes a processing result in the computer 110 to the recording medium 120.
- the communication interface 117 mediates data transmission between the CPU 111 and another computer.
- specific examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (CompactFlash (registered trademark)) and SD (Secure Digital) cards, magnetic recording media such as a flexible disk, and optical recording media such as a CD-ROM (Compact Disk Read Only Memory).
- the information processing apparatus 10 in the present embodiment can be realized by using hardware corresponding to each unit, not a computer in which a program is installed. Furthermore, a part of the information processing apparatus 10 may be realized by a program, and the remaining part may be realized by hardware.
- An information processing apparatus comprising: an input reception unit that receives input of feature information, other than a face image, indicating the features of a person to be identified; a first determination unit that determines, for each feature of the person included in the feature information, the degree of coincidence between that feature and the corresponding feature of the person shown in an image; and a second determination unit that determines, based on the determination result for each feature of the person, whether or not the feature information matches the person shown in the image.
- the input reception unit receives, as the feature information, text information in which the features of the person are expressed as text.
- the information processing apparatus according to appendix 1 or 2.
- the input reception unit receives input of feature information containing the designated features among the features of the person.
- the information processing apparatus according to appendix 1 or 2.
- the first determination unit determines, for each feature of the person included in the feature information, whether that feature and the corresponding feature of the person shown in the image match or do not match; and the second determination unit determines that the feature information matches the person shown in the image when, in the determinations for the individual features, the number of determination results judged to match is equal to or greater than a predetermined number, or the ratio of determination results judged to match is equal to or greater than a predetermined ratio.
- the information processing apparatus according to any one of appendices 1 to 4.
- the first determination unit calculates a probability that the feature and the feature of the person in the image match.
- the second determination unit obtains the average value of the probabilities calculated for each feature of the person and, when the obtained average value exceeds a threshold, determines that the feature information matches the person shown in the image. The information processing apparatus according to any one of appendices 1 to 4.
- (Appendix 7) An information processing method comprising: (a) accepting input of feature information, other than a face image, indicating the features of a person to be identified; (b) determining, for each feature of the person included in the feature information, the degree of coincidence between that feature and the corresponding feature of the person shown in an image; and (c) determining, based on the determination result for each feature of the person, whether the feature information matches the person shown in the image.
- in step (a), text information in which the features of the person are expressed as text is received as the feature information.
- in step (a), input of feature information containing the designated features among the features of the person is accepted.
- in step (b), for each feature of the person included in the feature information, whether that feature and the corresponding feature of the person shown in the image match or do not match is determined; and in step (c), when, in the determinations for the individual features, the number of determination results judged to match is equal to or greater than a predetermined number, or the ratio of determination results judged to match is equal to or greater than a predetermined ratio, it is determined that the feature information matches the person shown in the image.
- the information processing method according to any one of appendices 7 to 10.
- in step (b), for each feature of the person included in the feature information, the probability that the feature and the corresponding feature of the person shown in the image match is calculated; and in step (c), the average value of the probabilities calculated for each feature of the person is obtained, and when the obtained average value exceeds a threshold, it is determined that the feature information matches the person shown in the image. The information processing method according to any one of appendices 7 to 10.
- (Supplementary note 13) A computer-readable recording medium on which is recorded a program including instructions for causing a computer to execute: (a) receiving input of feature information, other than a face image, indicating the features of a person to be identified; (b) determining, for each feature of the person included in the feature information, the degree of coincidence between that feature and the corresponding feature of the person shown in an image; and (c) determining, based on the determination result for each feature of the person, whether the feature information matches the person shown in the image.
- in step (a), text information in which the features of the person are expressed as text is received as the feature information.
- in step (a), input of feature information containing the designated features among the features of the person is accepted.
- the computer-readable recording medium according to appendix 13 or 14.
- in step (b), for each feature of the person included in the feature information, whether that feature and the corresponding feature of the person shown in the image match or do not match is determined; and in step (c), when, in the determinations for the individual features, the number of determination results judged to match is equal to or greater than a predetermined number, or the ratio of determination results judged to match is equal to or greater than a predetermined ratio, it is determined that the feature information matches the person shown in the image.
- the computer-readable recording medium according to any one of appendices 13 to 16.
- in step (b), for each feature of the person included in the feature information, the probability that the feature and the corresponding feature of the person shown in the image match is calculated; and in step (c), the average value of the probabilities calculated for each feature of the person is obtained, and when the obtained average value exceeds a threshold, it is determined that the feature information matches the person shown in the image. The computer-readable recording medium according to any one of appendices 13 to 16.
- a suspicious person or the like can be automatically specified based on feature information other than a face image related to the suspicious person or the like.
- INDUSTRIAL APPLICABILITY: The present invention is useful for systems for detecting suspicious persons and wanted criminals, and for systems for searching for lost children.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Alarm Systems (AREA)
Abstract
An information processing device (10) comprising: an input reception unit (11) that accepts input of feature information, other than a face image, indicating features of a person to be identified; a first determination unit (12) that determines, for each feature of the person included in the feature information, a degree of coincidence between that feature and a feature of a person appearing in an image; and a second determination unit (13) that determines, based on the determination results for each of the person's features, whether the feature information and the person appearing in the image match.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017-035338 | 2017-02-27 | ||
JP2017035338A JP7120590B2 (ja) | 2017-02-27 | 2017-02-27 | 情報処理装置、情報処理方法、及びプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018155594A1 true WO2018155594A1 (fr) | 2018-08-30 |
Family
ID=63253901
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/006585 WO2018155594A1 (fr) | 2017-02-27 | 2018-02-22 | Dispositif de traitement d'informations, procédé de traitement d'informations, et support d'enregistrement lisible par ordinateur |
Country Status (2)
Country | Link |
---|---|
JP (2) | JP7120590B2 (fr) |
WO (1) | WO2018155594A1 (fr) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6874754B2 (ja) * | 2018-12-11 | 2021-05-19 | 東京電力ホールディングス株式会社 | Information processing method, program, information processing device, method for generating a trained model, and trained model |
JP2020144738A (ja) * | 2019-03-08 | 2020-09-10 | 三菱電機株式会社 | Crime prevention device and crime prevention method |
JP6989572B2 (ja) * | 2019-09-03 | 2022-01-05 | パナソニックi−PROセンシングソリューションズ株式会社 | Investigation support system, investigation support method, and computer program |
WO2024202503A1 (fr) * | 2023-03-29 | 2024-10-03 | 日本電気株式会社 | Processing device, processing method, and recording medium |
WO2024202514A1 (fr) * | 2023-03-29 | 2024-10-03 | 日本電気株式会社 | Processing device, processing method, and recording medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010257449A (ja) * | 2009-03-31 | 2010-11-11 | Sogo Keibi Hosho Co Ltd | Person search device, person search method, and person search program |
JP2011035806A (ja) * | 2009-08-05 | 2011-02-17 | Nec Corp | Mobile terminal device, image management method, and program |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006268825A (ja) * | 2005-02-28 | 2006-10-05 | Toshiba Corp | Object detection device, learning device, object detection system, method, and program |
JP2007241377A (ja) * | 2006-03-06 | 2007-09-20 | Sony Corp | Search system, imaging device, data storage device, information processing device, captured-image processing method, information processing method, and program |
JP5121258B2 (ja) | 2007-03-06 | 2013-01-16 | 株式会社東芝 | Suspicious behavior detection system and method |
CN101980242B (zh) | 2010-09-30 | 2014-04-09 | 徐勇 | Face physiognomy discrimination method, system, and public security system |
JP5649425B2 (ja) | 2010-12-06 | 2015-01-07 | 株式会社東芝 | Video search device |
JP6225460B2 (ja) | 2013-04-08 | 2017-11-08 | オムロン株式会社 | Image processing device, image processing method, control program, and recording medium |
JP2015143951A (ja) * | 2014-01-31 | 2015-08-06 | オムロン株式会社 | Object discrimination device, image sensor, and object discrimination method |
JP6441068B2 (ja) * | 2014-12-22 | 2018-12-19 | セコム株式会社 | Monitoring system |
JP2016131288A (ja) * | 2015-01-13 | 2016-07-21 | 東芝テック株式会社 | Information processing device and program |
US10110858B2 (en) | 2015-02-06 | 2018-10-23 | Conduent Business Services, Llc | Computer-vision based process recognition of activity workflow of human performer |
JP5785667B1 (ja) | 2015-02-23 | 2015-09-30 | 三菱電機マイコン機器ソフトウエア株式会社 | Person identification system |
- 2017-02-27 JP JP2017035338A patent/JP7120590B2/ja active Active
- 2018-02-22 WO PCT/JP2018/006585 patent/WO2018155594A1/fr active Application Filing
- 2021-08-23 JP JP2021135720A patent/JP2022003526A/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
JP2018142137A (ja) | 2018-09-13 |
JP2022003526A (ja) | 2022-01-11 |
JP7120590B2 (ja) | 2022-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018155594A1 (fr) | Information processing device, information processing method, and computer-readable recording medium | |
US10846537B2 (en) | Information processing device, determination device, notification system, information transmission method, and program | |
US20200050871A1 (en) | Method and apparatus for integration of detected object identifiers and semantic scene graph networks for captured visual scene behavior estimation | |
CN108229297B (zh) | Face recognition method and apparatus, electronic device, and computer storage medium | |
US20180174062A1 (en) | Root cause analysis for sequences of datacenter states | |
US20200012887A1 (en) | Attribute recognition apparatus and method, and storage medium | |
US8929611B2 (en) | Matching device, digital image processing system, matching device control program, computer-readable recording medium, and matching device control method | |
US9824313B2 (en) | Filtering content in an online system based on text and image signals extracted from the content | |
US11126827B2 (en) | Method and system for image identification | |
WO2020195732A1 (fr) | Image processing device, image processing method, and recording medium storing program | |
US11023714B2 (en) | Suspiciousness degree estimation model generation device | |
CN110660078B (zh) | Object tracking method, apparatus, computer device, and storage medium | |
CN109766755A (zh) | Face recognition method and related products | |
US20200302572A1 (en) | Information processing device, information processing system, information processing method, and program | |
CN111581436B (zh) | Target recognition method, apparatus, computer device, and storage medium | |
US10783365B2 (en) | Image processing device and image processing system | |
JP2019053381A (ja) | Image processing device, information processing device, method, and program | |
CN111783677B (zh) | Face recognition method, apparatus, server, and computer-readable medium | |
JP6542819B2 (ja) | Image monitoring system | |
JP7315022B2 (ja) | Machine learning device, machine learning method, and machine learning program | |
CN110458052B (zh) | Augmented-reality-based target object recognition method, apparatus, device, and medium | |
CN108875467B (zh) | Liveness detection method, apparatus, and computer storage medium | |
CN114596638A (zh) | Face liveness detection method, apparatus, and storage medium | |
KR101886856B1 (ko) | Data fusion system and method for atypical object tracking by heterogeneous-sensor seekers | |
US11604938B1 (en) | Systems for obscuring identifying information in images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18757079; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18757079; Country of ref document: EP; Kind code of ref document: A1 |