WO2022030549A1 - Information retrieval device, information retrieval method, and program - Google Patents

Information retrieval device, information retrieval method, and program

Info

Publication number
WO2022030549A1
WO2022030549A1 PCT/JP2021/028964 JP2021028964W
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
output
target person
unit
Prior art date
Application number
PCT/JP2021/028964
Other languages
English (en)
Japanese (ja)
Inventor
鮎美 松本
哲希 柴田
育弘 宇田
真一 根本
篤 佐藤
知也 児玉
貴司 塩崎
Original Assignee
エヌ・ティ・ティ・コミュニケーションズ株式会社 (NTT Communications Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by エヌ・ティ・ティ・コミュニケーションズ株式会社
Publication of WO2022030549A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/53 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N 5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • An embodiment of the present invention relates to, for example, an information retrieval device, an information retrieval method, and a program for analyzing video data from a surveillance camera and searching for an image of a target person.
  • An information processing device such as a general-purpose personal computer receives video data from a camera, stores the received video data in a storage unit, analyzes the video data, and detects an image of a target person. In addition, the information processing device displays the detected image of the target person on a monitor or the like.
  • the video data contains a large number of frames, and the processing burden of feature extraction for the faces contained in these frames is heavy; a technology for realizing high-speed processing with a low-cost device has been proposed (see, for example, Patent Document 1).
  • since the information processing device detects the target person from a plurality of frames included in the stored video data, many similar images of the target person may be output as the detection result.
  • the present invention has been made by paying attention to the above circumstances, and is intended to provide a technique for improving the visibility of a detected image of the same person.
  • the information retrieval device according to one aspect includes: a search unit that searches, based on a search condition, for target person images from a database in which detected images detected from a plurality of frames included in video data from one or more cameras are registered; a selection unit that, based on a first output designation, selects all the target person images found by the search unit, and, based on a second output designation, selects a first predetermined number of target person images that satisfy the output conditions and are regarded as the same person among all the target person images found by the search unit; and an output unit that outputs the target person images selected by the selection unit.
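  • The behavior of the selection unit described above can be sketched as follows. This is an illustrative Python sketch, not the publication's implementation; the function name `select_images`, the grouping key (camera ID plus in-angle-of-view tracking ID), and the `score` field used as the output condition are assumptions.

```python
def select_images(images, output_designation, first_predetermined_number=1):
    """Select target person images according to the output designation.

    First output designation (filter function OFF): return every searched image.
    Second output designation (filter function ON): for each group of images
    regarded as the same person, keep only the top-N by person-likeness score.
    """
    if output_designation == "first":
        return list(images)
    # Group images regarded as the same person (assumed key: camera + track ID).
    groups = {}
    for img in images:
        key = (img["camera_id"], img["tracking_id"])
        groups.setdefault(key, []).append(img)
    selected = []
    for group in groups.values():
        # The output condition here is "highest person-likeness score".
        group.sort(key=lambda i: i["score"], reverse=True)
        selected.extend(group[:first_predetermined_number])
    return selected
```

With the first designation every image passes through; with the second, a long run of near-duplicate detections collapses to at most N representatives per person.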
  • FIG. 1 is a diagram showing an example of a configuration of a monitoring system including a monitoring information processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing an example of a hardware configuration of a Web server device used as a monitoring information processing device according to an embodiment of the present invention.
  • FIG. 3 is a block diagram showing an example of a software configuration of a Web server device used as a monitoring information processing device according to an embodiment of the present invention.
  • FIG. 4 is a flowchart showing an example of a search process by the system according to the embodiment of the present invention.
  • FIG. 5 is a flowchart showing an example of a target image selection process in the search process by the system according to the embodiment of the present invention.
  • FIG. 6 is a flowchart showing an example of a target image selection process for each tracking ID within the angle of view in the search process by the system according to the embodiment of the present invention.
  • FIG. 1 is a diagram showing an overall configuration of a system including a monitoring information processing apparatus according to an embodiment of the present invention.
  • a plurality of surveillance cameras C1 to Cn are distributed and arranged in the aisles and sales floors of large-scale stores such as shopping malls and department stores.
  • surveillance cameras C1 to Cn are attached to, for example, a ceiling or a wall surface, capture images of each surveillance area, and output video data thereof.
  • the surveillance cameras C1 to Cn are equipped with video analysis engines VE1 to VEn, respectively.
  • the video analysis engines VE1 to VEn correspond to the video analysis unit, and the video analysis unit analyzes each video data from the surveillance cameras C1 to Cn.
  • the video analysis engines VE1 to VEn each perform in-angle-of-view tracking on a plurality of image frames included in the video data output from the corresponding surveillance cameras C1 to Cn, and determine images of the same person across the plurality of image frames based on position information within each frame.
  • the video analysis engines VE1 to VEn need not be arranged one-to-one with the surveillance cameras C1 to Cn; a smaller number of video analysis engines may be arranged for the plurality of cameras, and the video data of a plurality of surveillance cameras may be processed collectively.
  • the system of one embodiment includes a Web server device SV used as a monitoring information processing device.
  • the video analysis engines VE1 to VEn are capable of data communication with the Web server device SV via the network NW, and transmit the generated video analysis result to the Web server device SV via the network NW.
  • as the network NW, for example, a wired LAN (Local Area Network) or a wireless LAN is used, but any other network may be used.
  • alternatively, the Web server device SV may include the video analysis engines VE1 to VEn (or a single video analysis engine), which receive the video data from the surveillance cameras C1 to Cn via the network NW and analyze the received video data.
  • FIGS. 2 and 3 are block diagrams showing an example of the hardware configuration and the software configuration of the Web server device SV, respectively.
  • the Web server device SV includes a control unit 1 having a hardware processor such as a central processing unit (CPU); a storage unit having a program storage unit 2 and a data storage unit 3, an input/output interface (input/output I/F) 4, and a communication interface (communication I/F) 5 are connected to the control unit 1 via a bus 6.
  • a monitor device MT and an administrator terminal OT are connected to the input / output I / F4.
  • the monitor device MT is used by the observer to visually monitor the monitoring areas, and displays images from the surveillance cameras C1 to Cn, information indicating the detection result or tracking result for the query to be monitored, and the like.
  • the administrator terminal OT is used by the system administrator for system management and maintenance; it displays various setting screens and information indicating the operating status of the system, and when the system administrator inputs various data necessary for managing and operating the system, it accepts the data and sets it in the Web server device SV.
  • the communication I/F 5 transmits and receives data to and from the video analysis engines VE1 to VEn under the control of the control unit 1, using a communication protocol defined by the network NW, and is composed of interfaces corresponding to, for example, a wired LAN or a wireless LAN.
  • the program storage unit 2 combines, as storage media, for example a non-volatile memory that can be written to and read from at any time, such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), with a non-volatile memory such as a ROM (Read Only Memory), and stores, in addition to middleware such as an OS (Operating System), the programs necessary for executing the various processes according to the embodiment.
  • the data storage unit 3 combines, as storage media, for example a non-volatile memory such as an HDD or SSD capable of being written to and read from at any time, with a volatile memory such as a RAM (Random Access Memory), and is provided with a camera information table 31 and a setting information table 32 as the main storage units necessary for implementing one embodiment of the present invention.
  • the camera information table 31 stores, for example, information representing the name, performance, and installation position of the surveillance camera in association with the identification information (hereinafter referred to as the camera ID) for each of the surveillance cameras C1 to Cn.
  • Information representing performance includes, for example, resolution and aspect ratio.
  • the information indicating the installation position includes, for example, latitude / longitude, imaging direction, and imaging angle.
  • the camera information table 31 holds a database in which the tracked-person images (detected images) detected from the video data of the surveillance cameras C1 to Cn are registered.
  • the setting information table 32 stores the image feature amount of the query.
  • the setting information table 32 stores the image feature amount of the query input from the administrator terminal OT via the input / output I / F4.
  • the setting information table 32 stores the image feature amount of the query detected from the video data transmitted from the surveillance cameras C1 to Cn via the communication I / F5.
  • the setting information table 32 stores the alert determination condition input via the administrator terminal OT or the like.
  • the setting information table 32 stores the first or second alert determination condition input via the administrator terminal OT or the like.
  • the administrator presses the tracking button on the administrator terminal OT for the person (image) to be tracked.
  • the control unit 1 automatically registers the latest set of detected images (face image and whole body image) as a query image (query image feature amount) in response to pressing the tracking button, and starts tracking.
  • the administrator presses the history button on the administrator terminal OT for the person (image) that the administrator wants to track.
  • the control unit 1 selects an arbitrary image from the history list, registers it as a query image, and starts tracking.
  • the control unit 1 performs a person search from the images of the surveillance cameras according to a history search from the administrator, registers the image selected by the administrator from the person search results as a query image, and starts tracking. Further, the administrator may select a person (image) included in surveillance image data obtained in real time, and the control unit 1 registers the selected person as a query image and starts tracking. In addition, the administrator may take in an image provided by a requester from the administrator terminal OT, register it as a query image, and start tracking.
  • the control unit 1 has, as processing functions according to an embodiment of the present invention, an information acquisition unit 11, an image detection unit 12, a search condition setting unit 13, an image search unit 14, an image selection unit (filter unit) 15, and an output unit 16. Each unit is realized by causing the hardware processor of the control unit 1 to execute the program stored in the program storage unit 2.
  • the information acquisition unit 11 acquires video data and video analysis results from the video analysis engines VE1 to VEn connected to the surveillance cameras C1 to Cn or the video analysis engines VE1 to VEn provided in the Web server device SV.
  • the video analysis engines VE1 to VEn each determine the same person across a plurality of image frames included in the video data output from the corresponding surveillance cameras C1 to Cn based on position information within each image frame, and output a video analysis result including the determination result.
  • the information acquisition unit 11 can acquire not only the video data from the surveillance cameras C1 to Cn but also the moving image file input via the input / output I / F4 or the communication I / F5.
  • the information acquisition unit 11 transfers a moving image file having a file name in the specified format to the specified folder.
  • the information acquisition unit 11 registers the camera name and the shooting time included in the file name of the moving image file. This makes it possible to search for moving image files by specifying the camera name or shooting time. Since searching based on moving image files does not require real-time tracking processing, image search is possible without it, and moving image files can be imported at high speed.
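  • Registration of the camera name and shooting time from a file name could look like the following sketch. The publication only says the file name is in a "specified format"; the concrete pattern `<camera>_<YYYYMMDDhhmmss>.<ext>` used here is an assumed example, not the actual format.

```python
from datetime import datetime

def parse_video_filename(filename):
    """Extract camera name and shooting time from a name such as
    'cam01_20210804123000.mp4' (assumed format: <camera>_<YYYYMMDDhhmmss>.<ext>)."""
    stem = filename.rsplit(".", 1)[0]              # drop the extension
    camera_name, timestamp = stem.rsplit("_", 1)   # split off the timestamp part
    shooting_time = datetime.strptime(timestamp, "%Y%m%d%H%M%S")
    return camera_name, shooting_time
```

Records built this way can then be searched by camera name or by shooting-time range, as the text describes.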
  • the image search process based on the moving image file is substantially the same as the image search process based on the video data from the surveillance cameras C1 to Cn, except that the real-time tracking process is not required.
  • the image detection unit 12 comprehensively analyzes the video analysis result and the video data from the surveillance cameras C1 to Cn to detect the tracked person image.
  • based on the image feature amount of a query given in advance (the feature amount of the tracked-person image), the video analysis engines VE1 to VEn extract, from a plurality of image frames included in the video data from the surveillance cameras C1 to Cn, person images (tracked-person images) having image feature amounts similar to that of the query. For example, a plurality of queries are given in advance, and a plurality of person images having image feature amounts similar to those of the plurality of queries are extracted.
  • the video analysis engines VE1 to VEn output a video analysis result including information indicating the degree of similarity between the extracted person image and the query image, the camera IDs of the surveillance cameras C1 to Cn, the in-angle-of-view tracking ID, and the shooting time (date, hour, minute, and second).
  • the person image includes a face image and a whole-body image
  • the similarity information includes the similarity corresponding to each of the face image and the whole body image.
  • the camera ID is identification information unique to the surveillance camera.
  • the in-angle tracking ID is an ID for tracking images regarded as the same person in the same surveillance camera.
  • the image detection unit 12 registers the detected image information including the detected tracked person image (detected image) in the database of the camera information table 31.
  • the image detection unit 12 detects a huge number of tracked person images from a plurality of frames included in each video data from the surveillance cameras C1 to Cn, and stores a huge number of detected image information in the camera information table.
  • the detected image information includes the detected image and the attribute information.
  • the attribute information includes a camera ID, an in-angle-of-view tracking ID, a person-likeness score, an image type (face image or whole-body image), a shooting date and time, and the like.
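  • The detected image information could be modeled as follows. This is an illustrative sketch based on the attributes listed above; the class and field names are assumptions, not identifiers from the publication.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DetectedImageInfo:
    """One record registered in the database of the camera information table."""
    camera_id: str
    tracking_id: int      # in-angle-of-view tracking ID
    score: float          # person-likeness score
    image_type: str       # "face" or "whole_body"
    shot_at: datetime     # shooting date and time
    image: bytes = b""    # the detected image itself
```

A record like this carries everything the later search and selection steps refer to: the period filter uses `shot_at`, and the filter function groups by `camera_id` and `tracking_id`.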
  • in the database of the camera information table 31, not only tracked-person images but also various other person images are registered as search targets.
  • the search condition setting unit 13 sets the search conditions to be read from the setting information table 32.
  • the search condition includes a period (start date and time and end date and time). Further, the search condition may include a camera ID, or may include a search target image (face image, whole body image, or face and whole body image).
  • the image search unit 14 searches for target person images according to the search condition from the database in which the detected image information, including detected images detected from a plurality of frames of the video data from the surveillance cameras C1 to Cn, is registered.
  • the image search unit 14 searches for the detected image detected in the designated period as a target person image (image of a plurality of people).
  • the search result includes a plurality of images of each person.
  • the image search unit 14 searches for an image similar to the image of a certain person among the detected images detected in the designated period as a target person image (image of a certain person).
  • the search results include multiple images of a person.
  • the image selection unit 15 selects all the target person images found by the image search unit 14 based on the first output designation (for example, filter function OFF) of the setting information table 32. Further, based on the second output designation (for example, filter function ON) of the setting information table 32, the image selection unit 15 selects a first predetermined number of target person images that satisfy the output conditions and are regarded as the same person among all the target person images found by the image search unit 14. The first predetermined number is one, or two or more, set arbitrarily.
  • the output unit 16 outputs the target person images selected by the image selection unit 15.
  • the output unit 16 outputs a target person image to be displayed on the monitor device MT.
  • when the first output designation is set in the setting information table 32, all the target person images found by the image search unit 14 are displayed on the monitor device MT.
  • when the second output designation is set in the setting information table 32, a first predetermined number of target person images that satisfy the output conditions and are regarded as the same person among all the target person images found by the image search unit 14 are displayed on the monitor device MT.
  • the case where the tables 31 and 32 provided in the data storage unit 3 are provided in the Web server device SV is taken as an example.
  • the present invention is not limited to this, and it may be provided in a database server or a file server arranged outside the Web server device SV.
  • the Web server device SV accesses the tables 31 and 32 in the database server or the file server, and performs each process by acquiring necessary information.
  • FIG. 4 is a flowchart showing an example of a search process by the system according to the embodiment of the present invention.
  • Surveillance cameras C1 to Cn start shooting and output video data (ST1).
  • the video analysis engines VE1 to VEn analyze the video data from the corresponding surveillance cameras C1 to Cn, respectively (ST2).
  • the video analysis engines VE1 to VEn each perform in-angle-of-view tracking on a plurality of image frames included in the video data output from the corresponding surveillance cameras C1 to Cn, and determine the same person across the plurality of image frames based on position information within each frame.
  • the video analysis engines VE1 to VEn output video data and the same person determination.
  • the communication I / F5 of the Web server device SV receives the video data and the same person determination from the video analysis engines VE1 to VEn.
  • the information acquisition unit 11 acquires video data from the video analysis engines VE1 to VEn and determination of the same person (ST3).
  • the image detection unit 12 comprehensively analyzes the video data and same-person determinations from the video analysis engines VE1 to VEn, and detects person images from a plurality of frames included in the video data (ST4). The number of frames is enormous, and the number of detected person images is accordingly enormous.
  • the image detection unit 12 registers the detected image information including the detected person image in the database of the camera information table 31 (ST5).
  • the search condition setting unit 13 reads the search condition from the setting information table 32 according to the instruction from the administrator terminal OT, and sets the search condition in the image search unit 14.
  • the search condition includes a start date and time D1 and an end date and time D2.
  • the search condition setting unit 13 reads the first or second output designation from the setting information table 32 according to the instruction from the administrator terminal OT, and sets it in the image selection unit 15.
  • the image search unit 14 searches the database registered in the camera information table 31 for the target person image according to the set search condition (ST6).
  • the image search unit 14 refers to the attribute information included in the detected image information registered in the database, and searches for the target person image included in the period of the start date / time D1 and the end date / time D2.
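  • The period search in ST6 can be sketched as a simple filter over the registered records. This is an assumed illustration; the function name and the `shot_at` field are not from the publication.

```python
from datetime import datetime

def search_by_period(records, start, end):
    """Return the registered records whose shooting date/time falls
    within the period [start, end] (start date/time D1, end date/time D2)."""
    return [r for r in records if start <= r["shot_at"] <= end]
```

In the actual device this filtering would run as a database query over the attribute information rather than an in-memory scan, but the selection criterion is the same.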
  • the image selection unit 15 selects all or part of the target person images searched by the image search unit 14 based on the first or second output designation (ST7).
  • the output unit 16 outputs the target person image selected by the image selection unit 15 to the monitoring device MT via the input / output I / F4 or the like (ST8).
  • FIG. 5 is a flowchart showing an example of a target image selection process in the search process by the system according to the embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating the details of ST7 shown in FIG.
  • based on the first output designation, the image selection unit 15 selects all the target person images found by the image search unit 14 (ST702).
  • based on the second output designation, the image selection unit 15 selects, from all the target person images found by the image search unit 14, a first predetermined number of target person images that satisfy the output conditions and are regarded as the same person (ST704).
  • based on the second output designation, the image selection unit 15 may select one target person image that satisfies the output conditions and is regarded as the same person, and the output unit 16 outputs the selected target person image. In this case, the target person image having the highest person-likeness score satisfies the output condition.
  • based on the second output designation, the image selection unit 15 may select two target person images that satisfy the output conditions and are regarded as the same person, and the output unit 16 outputs the two selected target person images. In this case, among the face images, the top two target person images with the highest person-likeness scores satisfy the output condition.
  • when all the target person images found by the image search unit 14 are selected based on the first output designation, the output unit 16 outputs all the selected target person images to the monitor device MT. Further, when a first predetermined number of target person images that satisfy the output conditions and are regarded as the same person are selected from all the target person images found by the image search unit 14 based on the second output designation, the output unit 16 outputs the selected first predetermined number of target person images to the monitor device MT.
  • based on the second output designation, the image selection unit 15 may select, for the video data from one camera, target person images regarded as the same person up to the first predetermined number.
  • the output unit 16 outputs, for each camera, up to the first predetermined number of target person images regarded as the same person, and the monitor device MT displays the output target person images.
  • the image selection unit 15 may select, for a plurality of frames covering a continuous predetermined time included in the video data from one camera, target person images regarded as the same person with the first predetermined number as the upper limit.
  • the output unit 16 outputs up to the first predetermined number of target person images regarded as the same person, and the monitor device MT displays them. Displaying the same person at most the first predetermined number of times per predetermined time interval is excellent in visibility and enables detailed confirmation.
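  • The per-time-interval selection described above could be sketched like this. The window length, the `t` timestamp field (seconds from the start of the video), and the use of the person-likeness score as the tiebreaker are all assumptions for illustration.

```python
def select_per_window(images, window_seconds, top_n):
    """Group one track's images into consecutive time windows and keep the
    top-N (by person-likeness score) per window, so the same person appears
    at most N times per predetermined interval in the display."""
    buckets = {}
    for img in images:
        bucket = int(img["t"] // window_seconds)   # which window the frame falls in
        buckets.setdefault(bucket, []).append(img)
    selected = []
    for bucket in sorted(buckets):
        group = sorted(buckets[bucket], key=lambda i: i["score"], reverse=True)
        selected.extend(group[:top_n])
    return selected
```

A person tracked continuously for several minutes thus yields a few well-spaced representative images instead of hundreds of near-identical frames.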
  • FIG. 6 is a flowchart showing an example of a target image selection process for each tracking ID within the angle of view in the search process by the system according to the embodiment of the present invention.
  • FIG. 6 is a flowchart illustrating the details of ST7 shown in FIG.
  • the image selection unit 15 receives the search result from the image search unit 14 (ST711) and reads the search result file (ST712).
  • the search result file includes the target person image information
  • the target person image information includes the target person image and the attribute information.
  • the attribute information includes a camera ID, an in-angle-of-view tracking ID, a person-likeness score, an image type (face image or whole-body image), a shooting date and time, and the like.
  • the image selection unit 15 sorts the target person images according to the in-angle-of-view tracking ID (ST713). For example, the image selection unit 15 rearranges the target person images with date and time as the first priority and the person-likeness score as the second priority.
  • the image selection unit 15 sorts the images in descending order of similarity.
  • the image selection unit 15 sorts, in descending order of face-image similarity, those images whose face-image similarity exceeds a threshold, and sorts the remaining images, whose face-image similarity does not exceed the threshold, in descending order of whole-body-image similarity.
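  • The two-stage sort just described (face-image similarity when it exceeds the threshold, otherwise whole-body-image similarity) can be sketched as follows; the field names and the threshold value are assumptions.

```python
def sort_by_similarity(images, face_threshold=0.8):
    """Images with face similarity above the threshold come first, in
    descending order of face similarity; the rest follow in descending
    order of whole-body similarity."""
    above = [i for i in images if i["face_sim"] > face_threshold]
    below = [i for i in images if i["face_sim"] <= face_threshold]
    above.sort(key=lambda i: i["face_sim"], reverse=True)
    below.sort(key=lambda i: i["body_sim"], reverse=True)
    return above + below
```

This ordering prefers confident face matches while still ranking images where the face was occluded or too small by their whole-body match.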
  • if the in-angle-of-view tracking ID filter flag is False (filter function OFF), the image selection unit 15 selects all the searched target person images; if the flag is True (filter function ON) (ST714, YES), the image selection unit 15 selects a part of the target person images among all the searched target person images (ST715 to ST720).
  • the image selection unit 15 examines each in-angle-of-view tracking ID in order, and registers target person images in the display list for each in-angle-of-view tracking ID.
  • the output unit 16 outputs a display list, and the monitoring device MT displays a target person image based on the display list.
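  • Building the per-tracking-ID display list (ST715 to ST720) might look like the following sketch. The function name, the assumption that the input is already sorted as in ST713, and the ascending iteration over tracking IDs are illustrative choices, not details from the publication.

```python
def build_display_list(images, top_n=1):
    """Walk the in-angle-of-view tracking IDs in order and register up to
    top_n target person images per tracking ID in the display list."""
    by_id = {}
    for img in images:                  # images assumed pre-sorted (ST713)
        by_id.setdefault(img["tracking_id"], []).append(img)
    display = []
    for tid in sorted(by_id):           # examine each tracking ID in order
        display.extend(by_id[tid][:top_n])
    return display
```

The output unit then hands this display list to the monitor device MT, which shows one entry per same-person track rather than every detection.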
  • according to the present embodiment, it is possible to provide a system, an apparatus, a method, and a program for improving the visibility of detected images of the same person.
  • the images of the same person are included in a plurality of frames of the video data from this camera.
  • the images of the same person are included in the plurality of frames of the video data from the plurality of cameras.
  • the Web server device SV displays a large number of images of the same person by the setting of the first output designation, and displays a small number of narrowed-down images of the same person by the setting of the second output designation.
  • if the images displayed under the second output designation do not match the purpose, the setting may be changed to the first output designation. With the first output designation, a person can be confirmed in detail from many images; with the second output designation, a person can be confirmed from a small number of images in a short time, and the time required for the display process can be shortened.
  • the program according to the present embodiment may be transferred in a state of being stored in an electronic device, may be transferred in a state of being stored in a storage medium, or may be transferred by downloading via a network or the like.
  • the recording medium is a non-temporary computer-readable storage medium such as a magnetic disk, an optical disk, or a flash memory.
  • the present invention is not limited to the above embodiment, and can be variously modified at the implementation stage without departing from the gist thereof.
  • each embodiment may be carried out in combination as appropriate, in which case the combined effect can be obtained.
  • the above-described embodiment includes various inventions, and various inventions can be extracted by combinations selected from the plurality of disclosed constituent elements. For example, even if some constituent elements are deleted from all the constituent elements shown in the embodiment, as long as the problem can be solved and the effect obtained, the configuration with those constituent elements deleted can be extracted as an invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Disclosed is a technique for efficiently outputting images of the same detected person. An information retrieval device according to one aspect of the present invention comprises: a retrieval unit that retrieves, from a database in which detected images detected from a plurality of frames included in video data from at least one camera are registered, target-person images matching a search condition; a selection unit that selects, on the basis of a first output designation, all of the target-person images retrieved by the retrieval unit, and selects, on the basis of a second output designation, a first predetermined number of target-person images, satisfying an output condition and judged to be the same person, from among all of the retrieved target-person images; and an output unit that outputs the target-person images selected by the selection unit.
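The retrieval unit in the abstract can be pictured as a query over a table of registered detected images. The schema, column names, and sample rows below are invented purely for illustration; the actual database structure is not specified here.

```python
import sqlite3

# Hypothetical table of images detected from frames of video data from several cameras.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE detected_images (
    image_id  INTEGER PRIMARY KEY,
    camera_id TEXT,
    frame_no  INTEGER,
    person_id TEXT)""")
conn.executemany(
    "INSERT INTO detected_images VALUES (?, ?, ?, ?)",
    [(1, "cam1", 10, "P1"),
     (2, "cam1", 11, "P1"),
     (3, "cam2", 52, "P1"),
     (4, "cam2", 53, "P2")])

def search(conn, camera_id=None, person_id=None):
    """Retrieval unit: return ids of target-person images matching the search condition."""
    sql = "SELECT image_id FROM detected_images WHERE 1=1"
    params = []
    if camera_id is not None:
        sql += " AND camera_id = ?"
        params.append(camera_id)
    if person_id is not None:
        sql += " AND person_id = ?"
        params.append(person_id)
    sql += " ORDER BY image_id"
    return [row[0] for row in conn.execute(sql, params)]
```

The selection unit would then apply the first or second output designation to the returned images before the output unit displays them.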
PCT/JP2021/028964 2020-08-07 2021-08-04 Information retrieval device, information retrieval method, and program WO2022030549A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020135168A JP2022030865A (ja) Information retrieval device, information retrieval method, and program
JP2020-135168 2020-08-07

Publications (1)

Publication Number Publication Date
WO2022030549A1 true WO2022030549A1 (fr) 2022-02-10

Family

ID=80117521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/028964 WO2022030549A1 (fr) 2021-08-04 2020-08-07 Information retrieval device, information retrieval method, and program

Country Status (2)

Country Link
JP (1) JP2022030865A (fr)
WO (1) WO2022030549A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018163398A1 (fr) * 2017-03-10 2018-09-13 株式会社日立国際電気 Similar image search system

Also Published As

Publication number Publication date
JP2022030865A (ja) 2022-02-18

Similar Documents

Publication Publication Date Title
US11210504B2 (en) Emotion detection enabled video redaction
US9210385B2 (en) Method and system for metadata extraction from master-slave cameras tracking system
JP7497853B2 (ja) 顔検知システム
KR101417548B1 (ko) 사진 콜렉션에서 이벤트들을 생성하고 라벨링하는 방법 및 시스템
CN111131902B (zh) 确定目标对象信息的方法及视频播放设备
TWI747341B (zh) 檔案應用方法及裝置、儲存媒體
WO2020259100A1 (fr) Procédé et appareil de traitement d'informations, et support d'informations
KR101777238B1 (ko) 영상 트렌드 검출 및 큐레이션을 위한 방법 및 시스템
JP2008165701A (ja) 画像処理装置、電子機器、画像処理方法、及び、プログラム
JP2023505864A (ja) ターゲット移動軌跡の構築方法、機器及びコンピュータ記憶媒体
CN106844654A (zh) 面向警务实战的海量视频分布式检索方法
US20210035312A1 (en) Methods circuits devices systems and functionally associated machine executable instructions for image acquisition identification localization & subject tracking
WO2022030549A1 (fr) Dispositif de récupération d'informations, procédé de récupération d'informations et programme
JP2015073191A (ja) 画像処理システムおよびその制御方法
JP2022043631A (ja) 情報処理装置、情報処理方法及びプログラム
JP2007213183A (ja) デジタル画像データの分類装置、デジタル画像データの分類方法およびデジタル画像データの分類プログラム
US10599928B2 (en) Method and system for enabling information in augmented reality applications
JP2014042170A (ja) 撮影システム、撮影支援方法、撮影装置及びその制御方法、管理装置及びその制御方法、並びにプログラム
US8571357B2 (en) Image data management apparatus, method and program
JP2004171375A (ja) 画像処理方法
JP5523122B2 (ja) 画像管理装置、方法及びプログラム
JP6341843B2 (ja) 画像検索装置、及び画像検索システム
JP7235612B2 (ja) 人物検索システムおよび人物検索方法
WO2022030546A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP2020078030A (ja) システム、情報処理装置、情報処理方法及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21852173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21852173

Country of ref document: EP

Kind code of ref document: A1