WO2014155639A1 - Video surveillance system and image retrieval system - Google Patents

Video surveillance system and image retrieval system Download PDF

Info

Publication number
WO2014155639A1
WO2014155639A1 (PCT/JP2013/059437)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
image
feature amount
monitoring system
detection
Prior art date
Application number
PCT/JP2013/059437
Other languages
English (en)
Japanese (ja)
Inventor
智明 吉永
健一 米司
廣池 敦
裕樹 渡邉
佑人 小松
影山 昌広
Original Assignee
株式会社日立製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所 filed Critical 株式会社日立製作所
Priority to PCT/JP2013/059437 priority Critical patent/WO2014155639A1/fr
Priority to JP2015507829A priority patent/JP5982557B2/ja
Publication of WO2014155639A1 publication Critical patent/WO2014155639A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a video surveillance system, and more particularly to a technique for performing detection processing, such as face and person detection, on video acquired by a camera.
  • A conventional system has the problem that recognition accuracy deteriorates depending on the installation conditions, because the illumination environment and the depression angle differ greatly among the plurality of monitoring cameras on the network connected to the system.
  • For example, face detection accuracy may deteriorate for images from a camera installed at a large depression angle or at a position where the surrounding illumination fluctuates severely.
  • Because the face orientation of a person differs between cameras, even if a face search is performed on camera 2, which has a different depression angle, using the feature amount of a face photographed by camera 1, the face orientation differs and it may be difficult to find the face of the same person.
  • Patent Document 1 proposes a method of stabilizing against illumination fluctuations by selecting an image processing method from a table prepared in advance according to the temperature of the operating environment.
  • Patent Document 2 aims at stabilization by photographing a face with two cameras and switching the face authentication threshold according to the face conditions obtained from each image.
  • In Patent Document 1, however, the environment must be measured by separate means such as a temperature sensor, which complicates the system.
  • In Patent Document 2, two cameras must be installed at the same position, which increases the cost.
  • To address this, a monitoring system includes a camera and a detection unit that detects an object from an input image captured by the camera.
  • It further includes a feature amount extraction unit that extracts a feature amount of the object, a storage unit that accumulates the input image, the object, and the feature amount, a selection control unit that controls the detection unit and the feature amount extraction unit, and an evaluation unit that evaluates the results from the detection unit and the feature amount extraction unit.
  • The system is characterized in that the selection control unit selects the detection method used in the detection unit and the extraction method used in the feature amount extraction unit based on the output from the evaluation unit.
  • With this configuration, the optimum detection and extraction processes can be selected automatically for each camera, even when conditions differ among the plurality of monitoring cameras.
  • FIG. 1 shows an outline of the present invention and is a diagram showing an example of a monitoring system using the present invention.
  • This system has one or more cameras 110a to 110b, which are connected through a communication infrastructure 120.
  • The communication infrastructure 120 is a LAN or a video transmission cable, and the video of each camera is transmitted through it.
  • The image recognition unit 130 is a processing unit placed on a server or on a CPU inside the camera; it performs recognition processing such as face detection on the images obtained from the cameras 110, and the recognition results are stored in the feature amount database 140 through the communication infrastructure 120.
  • The feature amount database 140 stores and manages the feature amounts obtained from the image recognition unit 130, and can search the accumulated feature amounts for data similar to a given feature value.
  • Each image feature amount stored in the feature amount database 140 is associated, using an ID or the like, with an image stored in the image database 150.
  • The image database 150 stores the image data obtained from each camera 110.
  • The image data are stored with time information, camera information, and the like attached, so that a given image can be retrieved immediately based on such information.
  • The management unit 160 manages the image recognition unit 130, for example by managing the control methods set by the selection control unit.
  • The display unit 170 is used to search and browse the image data stored in the image database and to manage the state of the image recognition unit 130 through the management unit 160.
  • The sensor 180 represents various sensors such as an infrared sensor or an RFID reader; its information can be acquired by the image recognition unit 130 through the communication infrastructure 120. The sensor 180 is not an essential component.
  • In the image recognition unit 130, the selection control unit 131 first determines the image recognition methods to be performed by the object detection unit 132 and the feature extraction unit 133 on the image obtained from the camera 110.
  • The object detection unit 132 has a plurality of detection methods 1 to N and performs object detection using the method selected by the selection control unit 131.
  • The feature extraction unit 133 likewise extracts the feature amount of the object obtained by the object detection unit 132, using the extraction method determined by the selection control unit 131.
  • The recognition result output unit 134 sends the obtained object feature information to the feature amount database 140 via the communication infrastructure 120, and also sends it to the evaluation unit 135 depending on the conditions.
  • The evaluation unit 135 evaluates the various methods of the object detection unit 132 and the feature extraction unit 133 and sends the evaluation results to the statistical analysis unit 136.
  • The statistical analysis unit 136 statistically analyzes the accumulated evaluation results to determine the optimum object detection method and feature extraction method for each camera, or for each camera and time zone, and regularly creates and updates a method selection table.
  • The selection control unit 131 selects methods according to the method selection table of the statistical analysis unit 136.
  • FIG. 2 is a diagram showing a system configuration of the present invention.
  • The image recognition unit 130 is software that runs on the server computer 210.
  • The server computer 210 includes an I/F 211 that transmits and receives information to and from the communication infrastructure, a CPU 212 that executes the processing of the image recognition unit 130, a memory 213, and an HDD 214 that stores information.
  • The management unit 160 is executed on the CPU 212, and its management information is stored in the HDD 214.
  • The display unit 170 is realized on a client computer.
  • The present invention is implemented on the above system configuration.
  • One server computer or a plurality of server computers may be provided.
  • The client computer that realizes the display unit 170 may also be the same device as the server computer.
  • FIG. 3 is a diagram illustrating an example of a processing flow of the object detection unit 132.
  • In S32, preprocessing is performed on the image obtained in S31.
  • Candidate preprocessing operations include filter processing such as image scaling, super-resolution, smoothing, and edge enhancement.
  • In step S33, moving object detection is performed to extract moving object regions.
  • In step S34, face detection is performed on the moving object regions obtained in S33. Face detection can find a face in an image using a discriminator constructed in advance by machine learning, for example with the method described in Non-Patent Document 1.
  • A discriminator is constructed in advance for each face direction, such as a frontal face discriminator for detecting frontal faces, a profile discriminator for detecting left profiles, and a downward face discriminator for detecting downward-facing faces.
  • A classifier may also be prepared for each in-plane rotation of the face, or for each object type such as person or vehicle.
  • Detection is performed using the discriminator selected by the selection control unit 131 from among the plurality of discriminators.
  • In S35, false detection determination is performed on the detection results obtained in S34, and in S36 the detection results are output. Execution or non-execution of each step and the choice among the multiple methods are made according to the decision of the selection control unit 131.
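  • The following is an illustrative sketch, not part of the original publication, of how the S31-S36 flow could look in code. It uses OpenCV; the cascade classifier files, the MIN_MOTION_AREA threshold, and the detect_objects function are assumptions made for the example.

```python
import cv2

# One pre-built discriminator per face orientation; the selection control
# unit would choose which of these to run for a given camera / time zone.
DETECTORS = {
    "frontal": cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml"),
    "profile": cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_profileface.xml"),
}
bg_subtractor = cv2.createBackgroundSubtractorMOG2()   # S33: moving-object detection
MIN_MOTION_AREA = 400  # assumed minimum area (px) for a moving-object region

def detect_objects(frame, method="frontal"):
    """S32 preprocessing -> S33 moving-object regions -> S34 face detection."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                       # S32: simple illumination preprocessing
    mask = bg_subtractor.apply(frame)                   # S33: foreground mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < MIN_MOTION_AREA:
            continue
        roi = gray[y:y + h, x:x + w]
        # S34: run only the discriminator selected by the selection control unit
        faces = DETECTORS[method].detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        for (fx, fy, fw, fh) in faces:
            detections.append((x + fx, y + fy, fw, fh))
    return detections                                   # S35/S36: filtering and output would follow
```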
  • FIG. 5 is a diagram illustrating an example of the flow of object feature amount extraction processing in the feature extraction unit.
  • In S51, the object detection results obtained by the object detection unit 132 are received.
  • In S52, an object o is selected from all the detected objects, and the processing of S53 and S54 below is performed on it.
  • In S53, preprocessing is performed on the detected face image. Preprocessing includes illumination normalization, image enlargement by super-resolution, and image position correction based on detecting the positions of facial organs such as the eyes, nose, and mouth.
  • In S54, feature amount extraction is performed on the face image preprocessed in S53.
  • D-dimensional image feature amounts are extracted by, for example, PCA (principal component analysis), Gabor filters, or HOG feature amount calculation.
  • As extraction methods, for example, a method that extracts a D1-dimensional feature value from the entire head, a method that extracts a D2-dimensional feature value only from the face region excluding the hair, and a method that extracts a D3-dimensional local feature amount around the facial organ positions obtained in the preprocessing are prepared, and the selection control unit 131 designates which of them is used.
  • Here D1, D2, and D3 are each at most D.
  • In S55, it is determined whether feature amounts have been extracted for all detected objects. If an unprocessed object remains, the process returns to S52; otherwise, it moves to S56.
  • In S56, the obtained feature quantities of all objects are output. With this arrangement, for example, when faces tend to appear small on a camera, only the D1-dimensional feature value of the entire head is extracted, and when faces appear large, both the overall D1-dimensional feature value and the D2-dimensional face-part feature value are extracted.
  • As a result, a search between a camera where faces appear small and a camera where faces appear large can be performed using only the D1-dimensional whole-head feature quantity, while a search between cameras where faces appear large can also use the additional feature amounts.
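  • As an illustrative sketch only (not the patented implementation), the following code shows switchable extractors in the spirit of S52-S56. The extractor names whole_head_D1 and face_only_D2, the crop sizes, and the HOG parameters are assumptions.

```python
import cv2
import numpy as np

# D1: HOG over the whole 32x32 head crop; D2: HOG over a 24x24 inner-face crop (assumed sizes).
HOG_32 = cv2.HOGDescriptor((32, 32), (16, 16), (8, 8), (8, 8), 9)
HOG_24 = cv2.HOGDescriptor((24, 24), (16, 16), (8, 8), (8, 8), 9)

def whole_head_D1(face_img):
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    return HOG_32.compute(cv2.resize(gray, (32, 32))).ravel()

def face_only_D2(face_img):
    h = face_img.shape[0]
    inner = face_img[int(0.25 * h):, :]            # crude crop that drops the hair region (assumption)
    gray = cv2.cvtColor(inner, cv2.COLOR_BGR2GRAY)
    return HOG_24.compute(cv2.resize(gray, (24, 24))).ravel()

EXTRACTORS = {"whole_head_D1": whole_head_D1, "face_only_D2": face_only_D2}

def extract_features(detected_faces, methods):
    """S52-S56: loop over the detected objects and apply the extraction methods
    designated by the selection control unit, concatenating their outputs."""
    return [np.concatenate([EXTRACTORS[m](img) for m in methods]) for img in detected_faces]
```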
  • FIG. 6 is a diagram illustrating an example of an output result of the recognition result output unit 134.
  • The recognition result output unit 134 adds the camera number and time information to the object information obtained by the object detection unit 132 and the feature extraction unit 133, and registers the resulting recognition result in the feature amount database 140.
  • The information required in a recognition result includes the number of the camera that took the image, a frame ID unique to the shooting time, the detection method and feature extraction method selected by the selection control unit 131, the detection region indicating the object position in the image, and the D-dimensional feature value obtained from the object.
  • The obtained recognition result information is stored as a recognition result table 610; it may be sent for each image frame or at regular intervals.
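  • As a hypothetical illustration (not part of the original publication), one row of the recognition result table 610 could be represented as a plain data record; the field names below are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RecognitionResult:
    camera_id: int                      # number of the camera that captured the image
    frame_id: str                       # frame ID unique to the shooting time
    detection_method: str               # detection method selected by the selection control unit
    extraction_method: str              # feature extraction method selected by the selection control unit
    detection_region: Tuple[int, int, int, int]          # (x, y, width, height) of the object on the image
    feature: List[float] = field(default_factory=list)   # D-dimensional feature value

# Example row, to be registered in the feature amount database 140:
row = RecognitionResult(camera_id=110, frame_id="20130329-093015-0001",
                        detection_method="frontal_face", extraction_method="whole_head_D1",
                        detection_region=(120, 80, 48, 48), feature=[0.12, 0.03, 0.98])
```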
  • FIG. 7 is a diagram showing a detailed configuration of the evaluation unit 135.
  • The image collection unit 701 acquires evaluation images through the communication infrastructure 120 and stores them in the evaluation data storage unit 703. Storage in the evaluation data storage unit 703 is performed only when specific sensor information or specific recognition results are obtained.
  • The sensor information acquisition unit 702 acquires sensor information through the communication infrastructure and notifies the image collection unit when specific sensor information is obtained. The sensor information is, for example, a human presence sensor, entry/exit card information read by RFID, or radio wave information from a mobile phone. If entry/exit card information is used, it is possible to roughly estimate how many people entered the camera's shooting area and from where.
  • The evaluation data storage unit 703 stores the images collected by the image collection unit 701, recording the sensor information and the recognition result information together with them.
  • The evaluation execution control unit 704 sends the selection control unit 131 a command to execute recognition processing on the images in the evaluation data storage unit 703.
  • When executing an evaluation, the selection control unit 131 issues a control command to run all of the object detection methods 1 to N and feature extraction methods 1 to M. The evaluation execution control unit outputs this execution command, for example, at night or in a time zone in which no one appears and the processing load on the CPU 212 is light.
  • The recognition results for the evaluation images instructed by the evaluation execution control unit 704 are input from the recognition result output unit 134 to the recognition result acquisition unit 705 and sent to the result determination unit 706.
  • The recognition result acquisition unit 705 also acquires recognition results during normal operation, outside evaluation runs, and notifies the image collection unit when a specific recognition result is obtained.
  • A specific recognition result is, for example, a case where the number of detected objects is large, or a case where almost no objects are detected even though there are many moving objects. Such images are suitable for evaluation, being images in which many people were probably present or in which a detection failure probably occurred, so the image collection unit 701 decides whether to collect and store them.
  • The result determination unit 706 evaluates the accuracy of the recognition results of the multiple methods and sends the results together to the statistical analysis unit 136.
  • In this way, images suitable for evaluation can be collected periodically, and the correct-answer information for them (such as the position of each person and who each person is) can be collected automatically from the sensor information and the recognition results. Multiple recognition methods can then be evaluated on the evaluation images thus obtained without interfering with the recognition processing needed for normal operation.
  • FIG. 8 is a flowchart for determining image collection in the image collection unit 701.
  • In S83, it is determined whether the sensor value obtained from the sensor information acquisition unit 702 is equal to or greater than a threshold Ss.
  • The threshold depends on the type of sensor; if the sensor provides passage information for a gate or door using RFID or the like, it is set to a number of people passing within a certain time (for example, three or more). If the sensor value is equal to or greater than the threshold, the process proceeds to S86, where the current image is collected and cached in temporary memory; if it is less than the threshold, the process proceeds to S84.
  • In S84, the recognition value is compared with a threshold Sr; if it is equal to or greater than the threshold, the process proceeds to S86, and if it is less, the process proceeds to S85.
  • The recognition value is, for example, the number of moving object regions or the distance between an extracted facial feature quantity and that of a person registered in advance.
  • In S85, it is determined whether a predetermined time of T2 seconds or more has elapsed since an evaluation image was last recorded. If it has not, the process returns to the determination in S83; if it has, the process proceeds to S87 and the temporarily cached images are recorded in the evaluation data storage unit.
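  • A minimal sketch of the S83-S87 collection logic follows, assuming example threshold values; the EvaluationImageCollector class and its fields are not from the publication.

```python
import time

SS_SENSOR = 3      # S83 threshold Ss: e.g. at least 3 people passing the gate in the interval (assumption)
SR_RECOG = 5       # S84 threshold Sr: e.g. at least 5 moving-object regions (assumption)
T2_SECONDS = 600   # S85: minimum interval between recorded evaluation images (assumption)

class EvaluationImageCollector:
    def __init__(self):
        self.cache = []            # S86: temporarily cached frames
        self.last_recorded = 0.0   # time when an evaluation image was last recorded

    def step(self, frame, sensor_value, recognition_value, storage):
        if sensor_value >= SS_SENSOR or recognition_value >= SR_RECOG:
            self.cache.append(frame)                     # S86: cache the current image
        elif time.time() - self.last_recorded >= T2_SECONDS and self.cache:
            storage.extend(self.cache)                   # S87: record cached images in the
            self.cache.clear()                           #      evaluation data storage unit 703
            self.last_recorded = time.time()
```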
  • FIG. 9 is a diagram illustrating an example of the determination result of the result determination unit 706.
  • For example, based on the entry/exit IDs obtained as sensor information, information about how many people were photographed by the camera and who they are is added to the image and recorded in the evaluation data storage unit 703. Based on this information, the detection rate (for example, how many of the three people in the image were detected) and the distance (similarity) between the feature quantity obtained for a specific person and the registered feature quantity of that person are obtained as the evaluation result.
  • In this way, the accuracy of object detection and of the feature amounts can be calculated based on the sensor information.
  • FIG. 10 is a diagram illustrating an example of an analysis result held by the statistical analysis unit 136.
  • The statistical analysis unit 136 accumulates the evaluation results obtained by the evaluation unit 135 in a database and statistically analyzes the accumulated data at regular time intervals, producing an analysis result table 1010.
  • Tables 1010a and 1010b show examples of analysis result tables of the cameras 110a and 110b, respectively.
  • The result table has items such as time zone, weather, detection method, detection area, and feature extraction method. Because the recognition results are divided by time zone, the selection control unit 131 can switch the recognition method for each time zone; similarly, by holding the weather information of each evaluation, the recognition method can be changed for each type of weather.
  • For each detection method, the correct detection accuracy is obtained as a recognition rate; for each feature extraction method, the authentication accuracy is obtained by averaging over all the evaluation data. This makes it possible to grasp statistically which recognition method performs best in a given time zone.
  • The detection area item stores the area in which all detected objects existed, which makes it possible to know, for each time zone, the region in which objects are likely to appear.
  • The total number of data samples used for the statistical analysis may also be stored in this table; this allows the reliability of each recognition rate to be judged, so that a method is not selected when its statistics are based on too little data.
  • A statistical analysis table may also be created and output per lighting condition, per face size, or per CPU load condition.
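  • As an illustration only, the following sketch aggregates evaluation results into a per-camera, per-time-zone recognition rate in the spirit of the analysis result table 1010; the record fields and the two-bucket day/night split are assumptions.

```python
from collections import defaultdict

def build_analysis_table(evaluations):
    """evaluations: iterable of dicts with keys
    'camera_id', 'hour', 'detection_method', 'detected', 'ground_truth' (assumed layout)."""
    stats = defaultdict(lambda: {"hits": 0, "total": 0})
    for e in evaluations:
        time_zone = "day" if 6 <= e["hour"] < 18 else "night"   # assumed 2-bucket split
        key = (e["camera_id"], time_zone, e["detection_method"])
        stats[key]["hits"] += e["detected"]
        stats[key]["total"] += e["ground_truth"]
    # recognition rate per (camera, time zone, method), plus the sample count used for it
    return {k: {"recognition_rate": v["hits"] / v["total"] if v["total"] else 0.0,
                "samples": v["total"]}
            for k, v in stats.items()}
```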
  • FIG. 11 is an example of a method selection table of the selection control unit 131.
  • In the method selection table, an object detection method and a feature extraction method are defined so that, based on the results of the statistical analysis unit 136, the recognition processing with the highest accuracy is performed.
  • The processing of the object detection unit 132 and the feature extraction unit 133 is designated according to this table.
  • This method control table 1110 is updated at a specific period, such as every day or every week, and is reset if the camera installation position changes.
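  • A minimal sketch, under assumed table contents, of how the selection control unit 131 might look up methods in such a table; METHOD_CONTROL_TABLE and the fallback default are hypothetical.

```python
METHOD_CONTROL_TABLE = {
    # (camera_id, time_zone): (detection method, feature extraction method)
    (1, "day"):   ("frontal_face", "whole_head_D1"),
    (1, "night"): ("downward_face", "face_only_D2"),
    (2, "day"):   ("profile_face", "whole_head_D1"),
}

def select_methods(camera_id, time_zone):
    """Return the detection / extraction methods to use for this camera and time zone."""
    return METHOD_CONTROL_TABLE.get((camera_id, time_zone),
                                    ("frontal_face", "whole_head_D1"))  # assumed default
```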
  • FIG. 12 is an example in which a monitoring system management screen is displayed on the display unit 170.
  • Management information of the monitoring system is managed by the management unit 160.
  • It collectively manages the method control table 1110 of each camera held by the selection control unit 131 and the analysis result table 1010 held by the statistical analysis unit.
  • The display unit 170 extracts the necessary information from the management unit 160, formats it, and displays it in an easy-to-understand way. These screens can be built using HTML, Flash, or the like.
  • FIG. 12A shows an example of a processing method confirmation screen 1210. On this screen, the object detection method and the feature extraction method adopted by each camera can be confirmed.
  • FIG. 12B is an example of the evaluation result confirmation screen 1220.
  • On this screen, the evaluation results obtained by the statistical analysis unit 136 can be confirmed through the management unit 160.
  • By selecting a method to check, the corresponding evaluation images, their recognition results, and the correct-answer information can be displayed superimposed in the window at the bottom of the screen, so that the evaluation result can be examined.
  • The buttons at the bottom of the screen control playback, such as stepping forward and backward through the images, and the buttons on the right side of the screen are used to delete evaluation images and to add or correct correct-answer information. This improves the quality of the evaluation images and enables more accurate accuracy evaluation.
  • FIG. 13 is an example of a screen for performing an image search on the display unit 170.
  • On the search result display screen 1310, selecting a specific face image in the query image window and pressing the search button next to the window searches for images showing a similar face.
  • The search command is transmitted to the feature quantity database 140, which is searched for data whose feature quantity is similar to the query face; the matching data are output in order of similarity (in order of distance).
  • The display unit 170 retrieves the corresponding images from the image database 150 based on the frame ID information in the returned data and displays them on the screen. At this time, the search results can also be limited, by detection method and feature extraction method, to images that were processed in the same way as the query image.
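  • As an illustrative sketch (not the actual database implementation), the following shows a brute-force similar-face search over stored recognition results, optionally restricted to entries produced with the same feature extraction method as the query; the record layout and function name are assumptions.

```python
import numpy as np

def search_similar(query_feature, query_method, records, top_k=10, same_method_only=True):
    """records: list of dicts with 'frame_id', 'extraction_method', 'feature' (D-dimensional)."""
    candidates = [r for r in records
                  if not same_method_only or r["extraction_method"] == query_method]
    q = np.asarray(query_feature)
    scored = [(float(np.linalg.norm(q - np.asarray(r["feature"]))), r) for r in candidates]
    scored.sort(key=lambda pair: pair[0])       # ascending distance = descending similarity
    return scored[:top_k]                       # the frame_id of each hit is used to fetch the image
```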
  • In summary, the monitoring system described in this embodiment includes a camera, a detection unit that detects an object from an input image captured by the camera, a feature amount extraction unit that extracts a feature amount of the detected object,
  • a storage unit that accumulates the input image, the object, and the feature amount, a selection control unit that controls the detection unit and the feature amount extraction unit, and an evaluation unit that evaluates the results from the detection unit and the feature amount extraction unit.
  • The selection control unit selects the detection method used in the detection unit and the extraction method used in the feature amount extraction unit based on the output from the evaluation unit.
  • With this configuration, images suitable for evaluation can be collected periodically, and the correct-answer information in the evaluation images (such as each person's position and identity) can be collected automatically based on sensor information and recognition results.
  • FIG. 14A is an image of a passage taken by a certain camera 110a.
  • FIG. 14B shows an example in which the recognition results accumulated in the statistical analysis unit 136 are drawn onto that image.
  • As FIG. 14B shows, objects such as faces appear only in a limited region of the passage, and within each region they appear at a characteristic size. Therefore, by statistical analysis, the object sizes occurring in each divided area of the image can be limited as shown in FIG. 14C; areas marked 0 are areas where no object appears, and the other areas show the minimum and maximum sizes of the objects that appear there.
  • By referring to this table, the selection control unit 131 can limit the area in which objects are detected and their sizes, which reduces processing time and eliminates false detections. Furthermore, a map of the surroundings of the camera position can be estimated from the appearance distribution obtained by the statistical processing; by comparing it with the map data of the facility where the camera is installed, it is possible to estimate where on the map the camera is shooting and thus to estimate the current camera position automatically.
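  • A minimal sketch, assuming a 3x3 grid and example size values, of how a per-region size table like FIG. 14C could be used to discard implausible detections; SIZE_TABLE and plausible are hypothetical names.

```python
SIZE_TABLE = {
    # (row, col) -> (min_size, max_size) in pixels; (0, 0) means no object ever appears there
    (0, 0): (0, 0),    (0, 1): (20, 40),  (0, 2): (0, 0),
    (1, 0): (30, 60),  (1, 1): (30, 80),  (1, 2): (30, 60),
    (2, 0): (60, 120), (2, 1): (60, 150), (2, 2): (60, 120),
}

def plausible(detection, frame_w, frame_h, grid=(3, 3)):
    """detection: (x, y, w, h). Keep it only if its size fits the statistics of its grid cell."""
    x, y, w, h = detection
    col = min(int((x + w / 2) / frame_w * grid[1]), grid[1] - 1)
    row = min(int((y + h / 2) / frame_h * grid[0]), grid[0] - 1)
    lo, hi = SIZE_TABLE[(row, col)]
    return hi > 0 and lo <= max(w, h) <= hi
```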
  • FIG. 15 shows an example of the special condition priority method table 1510 held by the selection control unit 131.
  • The special condition priority method table 1510 indicates the processing priorities of the object detection unit 132 and the feature extraction unit 133 when a special condition occurs.
  • The special condition priority method table 1510 is set manually, because a large amount of data for emergency response cannot be collected statistically.
  • The table has a special-situation item, and a priority for switching the recognition method is assigned to each special situation such as a fire or an earthquake. The occurrence of such a situation can be learned from the sensors through the communication infrastructure 120.
  • For example, a fire can be detected by attaching a smoke sensor as the sensor 180, and an earthquake can be detected from an emergency earthquake warning or from the sensor information of a seismic intensity meter.
  • When a special condition occurs, recognition processing is performed according to a special condition method control table 1520 generated from the special condition priority method table 1510 and the analysis result table 1010.
  • In the special condition method control table 1520, the recognition rate R of each method in the analysis result table 1010 is multiplied by the priority P of that method in the special condition priority method table 1510, and among the methods whose product exceeds a certain threshold, the one with the largest product is selected.
  • The minimum detection size in the special condition priority method table 1510 defines the smallest size at which the target object of the special situation appears in the image. By comparing this value with the minimum and maximum object sizes held by the statistical analysis unit 136, it is possible to determine how much the camera should zoom or pan/tilt in the special situation. By recording this in the special condition method control table 1520 as a camera switching value, optimal camera parameters for the special situation can be set based on the statistical analysis results of each camera.
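  • As an illustration only, the R x P selection rule described above could be implemented as follows; the table contents and SCORE_THRESHOLD are assumptions.

```python
SCORE_THRESHOLD = 0.3  # assumed minimum acceptable R x P product

def select_special_method(analysis_rates, special_priorities):
    """analysis_rates: {method: recognition rate R from table 1010}
    special_priorities: {method: priority P from table 1510 for the current situation}."""
    best_method, best_score = None, SCORE_THRESHOLD
    for method, rate in analysis_rates.items():
        score = rate * special_priorities.get(method, 0.0)
        if score > best_score:
            best_method, best_score = method, score
    return best_method   # None means no method clears the threshold

# Example: during an earthquake, wide-area person detection outranks zoomed-in face detection.
print(select_special_method({"frontal_face": 0.9, "whole_body": 0.7},
                            {"frontal_face": 0.2, "whole_body": 0.9}))
```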
  • For example, a camera that is normally zoomed in so that people's faces can be seen clearly can be switched to a wider angle when an earthquake occurs, so that the whole scene can be monitored.
  • If a table is prepared for each setting value of a PTZ camera as a special situation, the detection method can be switched for each PTZ direction of the camera.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)
  • Image Processing (AREA)

Abstract

The present invention comprises: a recognition processing unit having a plurality of recognition processing methods; an evaluation unit for evaluating the performance of each of the plurality of recognition methods possessed by the recognition processing unit; a statistical analysis unit for statistically analyzing a plurality of past evaluation results obtained from the evaluation unit and determining, for each camera or each time slot, the optimal recognition method; and a selection control unit characterized in that it determines, for an image obtained from the camera and in accordance with the result obtained by the statistical analysis unit, one or more recognition methods from among the plurality of recognition methods possessed by the recognition processing unit, and causes the recognition processing to be performed using the determined method in the recognition processing unit. This allows a recognition method that is optimal for the environment in which the camera is installed to be set automatically from past statistical results, reducing manual adjustment costs and improving recognition accuracy.
PCT/JP2013/059437 2013-03-29 2013-03-29 Video surveillance system and image retrieval system WO2014155639A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2013/059437 WO2014155639A1 (fr) 2013-03-29 2013-03-29 Video surveillance system and image retrieval system
JP2015507829A JP5982557B2 (ja) 2013-03-29 2013-03-29 Video surveillance system and image search system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2013/059437 WO2014155639A1 (fr) 2013-03-29 2013-03-29 Video surveillance system and image retrieval system

Publications (1)

Publication Number Publication Date
WO2014155639A1 true WO2014155639A1 (fr) 2014-10-02

Family

ID=51622706

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/059437 WO2014155639A1 (fr) 2013-03-29 2013-03-29 Video surveillance system and image retrieval system

Country Status (2)

Country Link
JP (1) JP5982557B2 (fr)
WO (1) WO2014155639A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016170472A (ja) * 2015-03-11 2016-09-23 カシオ計算機株式会社 Information processing device, information processing method, and program
JPWO2016151802A1 (ja) * 2015-03-25 2017-12-28 株式会社日立国際電気 Face matching system and face matching method
KR101916832B1 (ko) * 2017-07-11 2018-11-08 강경수 Method and apparatus for detecting objects using an annotation bounding box
JP2019033397A (ja) * 2017-08-08 2019-02-28 富士通株式会社 Data processing device, program, and data processing method
JP2020113964A (ja) * 2019-07-17 2020-07-27 パナソニックi−PROセンシングソリューションズ株式会社 Surveillance camera and detection method
CN116456058A (zh) * 2023-04-28 2023-07-18 南京派拉斯曼工程技术有限公司 Improved video capture detection method
US11893797B2 (en) 2019-01-18 2024-02-06 Nec Corporation Information processing device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230133832A1 (en) * 2021-11-01 2023-05-04 Western Digital Technologies, Inc. Data Collection and User Feedback in Edge Video Devices

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002073321A (ja) * 2000-04-18 2002-03-12 Fuji Photo Film Co Ltd Image display method
JP2008017224A (ja) * 2006-07-06 2008-01-24 Casio Comput Co Ltd Imaging device, output control method for imaging device, and program
JP2010191939A (ja) * 2009-01-21 2010-09-02 Omron Corp Parameter determination support device and parameter determination support program
WO2012132418A1 (fr) * 2011-03-29 2012-10-04 パナソニック株式会社 Feature estimation device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAKESHI MITA ET AL.: "Joint Haar-like Features Based on Feature Co-occurrence for Face Detection", THE TRANSACTIONS OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. J89 -D, no. 8, 1 August 2006 (2006-08-01), pages 1791 - 1801 *


Also Published As

Publication number Publication date
JPWO2014155639A1 (ja) 2017-02-16
JP5982557B2 (ja) 2016-08-31

Similar Documents

Publication Publication Date Title
JP5982557B2 (ja) Video surveillance system and image search system
US11308777B2 (en) Image capturing apparatus with variable event detecting condition
US10346688B2 (en) Congestion-state-monitoring system
JP5976237B2 (ja) 映像検索システム及び映像検索方法
Adam et al. Robust real-time unusual event detection using multiple fixed-location monitors
US9224278B2 (en) Automated method and system for detecting the presence of a lit cigarette
US11295139B2 (en) Human presence detection in edge devices
US7606425B2 (en) Unsupervised learning of events in a video sequence
TWI430186B (zh) 影像處理裝置及影像處理方法
JP5793353B2 (ja) 顔画像検索システム、及び顔画像検索方法
US9477876B2 (en) Person recognition apparatus and method thereof
JP4866754B2 (ja) 行動履歴検索装置及び行動履歴検索方法
WO2014050518A1 (fr) Dispositif, procédé et programme de traitement d'informations
JP5669082B2 (ja) 照合装置
CN109766779B (zh) 徘徊人员识别方法及相关产品
JPWO2015166612A1 (ja) 映像解析システム、映像解析方法および映像解析プログラム
US10037467B2 (en) Information processing system
US10373015B2 (en) System and method of detecting moving objects
US9811739B2 (en) Surveillance system and surveillance method
EP2618288A1 (fr) Système et procédé de surveillance avec exploration et de visualisation d'épisode vidéo
KR101979375B1 (ko) 감시 영상의 객체 행동 예측 방법
JP5202419B2 (ja) 警備システムおよび警備方法
KR20160093253A (ko) 영상 기반 이상 흐름 감지 방법 및 그 시스템
US20180197000A1 (en) Image processing device and image processing system
CN109948411A (zh) 检测与视频中的运动模式的偏差的方法、设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13880174

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015507829

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13880174

Country of ref document: EP

Kind code of ref document: A1