WO2014132841A1 - Person search method and platform-staying person search device - Google Patents
Person search method and platform-staying person search device
- Publication number
- WO2014132841A1 (PCT/JP2014/053766; JP2014053766W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- person
- face
- search
- database
- similarity
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/173—Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19613—Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
Definitions
- The present invention relates to a person search method, and more particularly to a person search method for automatically extracting a person exhibiting a characteristic behavior from among a plurality of persons shown in a moving image.
- Conventionally, person search systems are known in which a computer searches for a desired person in video (moving images) shot or recorded by a surveillance camera or the like, using video recognition technology (see, for example, Patent Documents 1 to 4). Such techniques, which search based on features of the image itself without relying on external information such as tags, are generally called CBIR (Content-Based Image Retrieval) and are beginning to be used for person search.
- JP 2004-228688 A discloses a video search system and person search method that extract the region where a person (face) appears from an image, extract a color histogram or the like as a feature quantity for identifying the person individually, and presume that persons whose feature quantities are similar to those of a desired person are the same person.
- Patent Document 2 discloses an image security system that searches a captured video to identify a movement path of a person and uses the movement path for security management.
- Patent Document 4 discloses a safety management device that determines whether or not a user's behavior on a station platform is unusual.
- JP 2009-027393 A; Japanese Patent No. 4700477; JP 2012-068717 A; JP 2010-205191 A; JP 2011-66867 A; JP 2011-210252 A
- The present invention has been made in view of such problems, and its object is to provide a person search device that can extract information considered to be more universally useful, or that can detect a particularly significant event in a specific application.
- In one aspect, a person search method includes: a first step of performing registration for ordinary similar face image search, in which person image detection, feature quantity extraction, and feature quantity DB registration are performed on input video; a second step of designating a person image, detected automatically or specified manually from the input video, as the face image to be judged; and a third step of performing a time-restricted similar face image search against the feature quantity DB constructed in the first step, counting the search results whose feature quantity distance is below a predetermined threshold (high similarity), and judging that, when this count is large, the number of appearances is high and the person is likely to be wandering.
- According to the present invention, information considered to be more universally useful can be extracted, or a particularly significant event can be detected in a specific application.
- FIG. 1 is a diagram conceptually illustrating the principle of prowler determination (Example 1).
- Process block diagram of the wanderer detection system (Example 2).
- Diagram showing the installation of the staying-person search device (Example 3).
- Block diagram of the staying-person search device (Example 3).
- FIG. 1 illustrates the configuration of a wandering person detection system 1 according to the first embodiment.
- the wanderer detection system of this example is intended to monitor the surroundings of spaces used primarily by certain people, such as housing complexes, business establishments, and schools.
- A plurality of surveillance cameras 12 are installed to photograph the facility (building) 11 and the surrounding grounds.
- a range that can be photographed by the monitoring camera 12 is referred to as a monitoring target range.
- Each of the monitoring cameras 12 has a face detection function; when a captured image contains a face, the camera sends out a partial image (face image) cut out around the face, a feature quantity vector calculated from the face image, and the like.
- the database (DB) 13 registers the feature amount vector received from the monitoring camera 12 in association with the time or the like.
- FIG. 2 conceptually shows the principle of determination of a wanderer in the first embodiment.
- a search is performed on the DB 13 in which a large number of feature vectors are registered using, for example, a person's face image detected by the monitoring camera 12 as a key image.
- The upper part of each of FIGS. 2a and 2b shows the key image.
- FIG. 2a shows a case where the person in the key image appears many times.
- The search performed on the DB 13 ideally extracts, from the registered feature quantity vectors, all vectors similar to that of the key image.
- In FIG. 2, the ten most similar cases are shown as search results.
- Each face in the search results is increasingly likely to belong to a different person as its similarity to the key image decreases. Therefore, a first predetermined threshold is set on the similarity: faces whose similarity exceeds this threshold can be estimated to be the same person, and their number can be counted. When this count exceeds a second predetermined threshold, the person can be judged to appear frequently.
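As an illustration of this two-threshold decision (not part of the original disclosure; the similarity scores and threshold values below are hypothetical), the counting logic can be sketched in Python:

```python
def is_frequent_appearer(similarities, sim_threshold, count_threshold):
    """Count search results whose similarity to the key face meets the
    first threshold (same-person estimate), then flag the person as a
    frequent appearer if that count exceeds the second threshold."""
    same_person_hits = [s for s in similarities if s >= sim_threshold]
    return len(same_person_hits) > count_threshold

# Top-10 similarity scores returned by the DB search (hypothetical values)
results_a = [0.95, 0.93, 0.91, 0.90, 0.88, 0.86, 0.60, 0.55, 0.52, 0.50]
results_b = [0.94, 0.62, 0.58, 0.55, 0.53, 0.51, 0.50, 0.49, 0.48, 0.47]

print(is_frequent_appearer(results_a, 0.85, 3))  # many high-similarity hits: True
print(is_frequent_appearer(results_b, 0.85, 3))  # only one hit: False
```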
- prowlers can be detected.
- a person who is not a prowler is, for example, a resident in an apartment house, an employee in a company, a student in a school, or a teacher.
- In addition to the operation of the first embodiment, the wanderer detection system automatically creates a white list and an appearance-history (behavior-pattern) record, and judges wanderers using the time and place information indicated by the appearance history.
- FIG. 3 shows a processing block of the wandering person detection system of this example.
- the basic configuration and each component of the system are the same as those in the first embodiment unless otherwise specified.
- At least two surveillance cameras 12a and 12b are used. The surveillance camera 12a is installed in a place that only persons known not to be suspicious (residents) can enter, for example a security area requiring authentication by ID card or the like, while the surveillance camera 12b is installed in a place where suspicious persons are to be detected (for example, an entrance).
- the DB 13 is also internally divided into two DBs 13a and 13b according to two types of monitoring environments.
- When each surveillance camera 12a, 12b sends a face image and a feature quantity vector, it also sends a camera ID that identifies the camera, the shooting time, face orientation information, and so on. These are collectively referred to as face detection data.
- the DB 13a registers only the resident face detection data from the monitoring camera 12a
- the DB 13b registers the resident and non-resident faces from the monitoring camera 12b.
- A general-purpose relational (SQL) database is used for the DBs 13a and 13b; an in-memory or column-oriented database is desirable.
- The face image ID is an identifier for accessing a face image stored on another server; since it is normally a unique value, it also serves as the primary key in the DB 13.
- the face image ID may be a combination of an ID that can identify the entire image (frame) before cutting out the face and a face ID that is attached to the face detected in that frame.
- The white list 14 is a set of records with the estimated person ID as the primary key; Table 2 shows an example of the elements included in one record.
- The average (centroid) and variance of the feature quantity vectors are preferably computed over all records to which the estimated person ID has been assigned, with the number of samples being the total number of records used in the calculation; averages calculated separately for each face orientation are also included. Face orientation, average feature quantity vector, and variance form one set, and whenever they can be calculated, the average and variance computed per face orientation are retained; the number of such sets is arbitrary. Unless otherwise specified, the feature quantity searched first is the overall average feature quantity. A representative feature quantity vector may be stored instead of the average. Further, instead of the number of samples, the primary keys of all the records in the DB 13 used as samples may be enumerated, or the DB 13a records may be enumerated by primary key while only a sample count is kept for the DB 13b.
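The per-orientation average and variance that a white-list record holds can be maintained incrementally as records accumulate. The sketch below uses Welford's online algorithm; the record layout and field names are illustrative, not taken from the specification:

```python
class OrientationStats:
    """Running mean and variance of feature vectors, kept separately
    per face orientation, as a white-list record might hold them."""
    def __init__(self, dim):
        self.n = 0                # number of samples
        self.mean = [0.0] * dim   # running mean (centroid)
        self.m2 = [0.0] * dim     # sum of squared deviations

    def add(self, vec):
        """Welford update: fold one feature vector into mean/variance."""
        self.n += 1
        for i, x in enumerate(vec):
            delta = x - self.mean[i]
            self.mean[i] += delta / self.n
            self.m2[i] += delta * (x - self.mean[i])

    def variance(self):
        if self.n < 2:
            return [0.0] * len(self.mean)
        return [m / (self.n - 1) for m in self.m2]

# One white-list record: stats keyed by coarse face orientation
record = {"front": OrientationStats(3)}
record["front"].add([1.0, 2.0, 3.0])
record["front"].add([3.0, 2.0, 1.0])
print(record["front"].mean)  # [2.0, 2.0, 2.0]
```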
- the suspicious person appearance list 15 is a set of records having the suspicious person ID as a primary key, and the elements of the records are as shown in Table 3.
- Like a white list record, this record holds the average (or representative) facial feature quantity vector of the faces assigned the suspicious person ID, together with the variance and the number of samples; in addition, it holds an appearance history, which is a finite number of sets (registration events) of past shooting time, camera ID, and face orientation information.
- Each appearance-history entry may instead be the primary key of the corresponding record in the DB 13b. When the number of registered events reaches a specified value, entries are deleted starting from the oldest.
- the operation of the wanderer detection system of this example has the following four stages.
- Stage 1: an initial stage in which the number of faces registered in the DB 13 is increased to the point where a meaningful search can be performed.
- the estimated person ID is undetermined (empty), and the number of grouping trials is the initial value (0).
- the estimated person ID can be assigned according to the following rule using the property that the same person often appears continuously in the moving image.
- [Rule 1-1] For face detection data transmitted continuously from the surveillance camera 12a with the same camera ID, if there is no other face at the same shooting time, or if the feature quantity similarity to the immediately preceding face detection data is at least a predetermined threshold, the same estimated person ID is assigned to the face detection data, which is registered in the DB 13a. A newly issued estimated person ID is registered in the white list 14. Under this rule, an estimated person ID assigned without confirmation based on feature quantity similarity may be made distinguishable from IDs assigned by other rules.
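A minimal sketch of Rule 1-1's provisional ID assignment follows; the one-dimensional "features" and the similarity function are toy stand-ins for real feature quantity vectors:

```python
def assign_initial_ids(detections, sim, threshold, next_id=0):
    """Rule 1-1 sketch: walk face-detection records from one camera in
    time order; if a face is similar enough to the immediately
    preceding one, reuse its estimated person ID, otherwise issue a
    new provisional ID."""
    ids = []
    for i, feat in enumerate(detections):
        if i > 0 and sim(detections[i - 1], feat) >= threshold:
            ids.append(ids[-1])      # same person continues in the video
        else:
            ids.append(next_id)      # new provisional person
            next_id += 1
    return ids

# Toy 1-D "features": consecutive close values count as one person
sim = lambda a, b: 1.0 - abs(a - b)
print(assign_initial_ids([0.10, 0.12, 0.90, 0.91], sim, 0.9))  # [0, 0, 1, 1]
```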
- Stage 2: in this stage, records of the same person are grouped and the white list 14 is created.
- The grouping unit 17 performs a similar face search in the DBs 13a and 13b using as a key the face of a record that is registered in the DB 13a and whose estimated person ID is undetermined, processes the search result according to the following rules to assign some estimated person ID to the record used as the search key, and updates the white list 14.
- [Rule 2-1] From the search results from the DB 13a, those whose similarity is at least the first threshold are extracted as same-person candidates. When the most frequent estimated person ID (A) among them accounts for at least a first predetermined ratio, or when an estimated person ID (A) with the same camera ID and consecutive shooting times is found, A is also assigned to the record used as the key face and to those same-person candidates whose estimated person ID is undetermined.
- [Rule 2-2] From the search results from the DBs 13a and 13b, those whose similarity is at least the second threshold and whose face orientation is close to that of the key face are taken as same-person candidates. If records assigned the same estimated person ID (A) are present among them at or above a second predetermined ratio, A is also assigned to the key-face record and to those same-person candidates whose estimated person ID is undetermined.
- The second predetermined ratio may be zero; that is, if even one of the same-person candidates has been given an estimated person ID, it is also given to the other same-person candidates.
- [Rule 2-3] When neither the condition of Rule 2-1 nor that of Rule 2-2 is satisfied (there is no same-person candidate), a new estimated person ID is assigned to the key-face record in the DB 13a, and a record for the new estimated person ID is created in the white list 14.
- [Rule 2-4] When an estimated person ID is assigned by Rule 2-1 or 2-2 and a record with a different estimated person ID (B) exists among the same-person candidates, the feature quantity vectors of estimated person IDs A and B in the updated white list are compared to determine whether they should be merged. For example, when averages of the feature quantity vectors have been sufficiently obtained for a plurality of face orientations, the two IDs are merged as the same person if sufficient similarity is recognized for each face orientation compared.
- For records in the DBs 13a and 13b that carry the estimated person ID eliminated by the merge, the estimated person ID is updated.
- These rules can be interpreted as one implementation of the well-known k-nearest-neighbor method, the minimum-mean-variance method, or the LBG (Linde-Buzo-Gray) method performing only merging; strict adherence to any of them is unnecessary.
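The merge test of Rule 2-4, comparing two estimated person IDs orientation by orientation, might be sketched as follows (cosine similarity and the record layout are illustrative choices, not mandated by the specification):

```python
def should_merge(stats_a, stats_b, sim, threshold):
    """Rule 2-4 sketch: compare the per-orientation average feature
    vectors of two estimated person IDs; merge only if every
    orientation present in both records is sufficiently similar."""
    shared = set(stats_a) & set(stats_b)
    if not shared:
        return False   # nothing comparable yet
    return all(sim(stats_a[o], stats_b[o]) >= threshold for o in shared)

# Cosine similarity between two vectors (one plausible measure)
cos = lambda a, b: sum(x * y for x, y in zip(a, b)) / (
    (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

# Per-orientation averages of two provisional IDs (toy 2-D vectors)
a = {"front": [1.0, 0.0], "left": [0.7, 0.7]}
b = {"front": [0.99, 0.05], "right": [0.0, 1.0]}
print(should_merge(a, b, cos, 0.95))  # only "front" is shared, and it matches
```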
- The DBs 13a and 13b differ in the shooting environment of the original face images, so the feature quantity varies even for the same person; the clustering evaluation measure is therefore based on face orientation, the main factor of this variation.
- the weight for calculating the similarity (distance in the feature amount space) may be optimized (different) according to the face orientation.
- Stage 3: the white list 14 is used to create the suspicious person appearance list 15 and detect suspicious persons.
- The suspicious candidate search unit 18 searches the white list 14 or the suspicious person appearance list 15 for an estimated person having a similar feature quantity vector; those determined to be suspicious candidates by the following rules are registered or updated in the suspicious person appearance list 15.
- [Rule 3-1] The suspicious person appearance list 15 is searched for estimated person IDs whose similarity to the feature quantity vector of a record newly added to the DB 13b (or the DB 13a) is at least the third threshold.
- The third threshold is set loosely enough that a plurality of estimated person IDs (C), not necessarily the same person, are extracted. Then, for each of the estimated person IDs (C), the feature quantity vectors in its appearance history in the suspicious person appearance list 15 whose face orientation is close to that of the newly added record are retrieved from the DB 13b using the face image ID as a key. If a retrieved feature quantity vector has a similarity of at least the fourth threshold, the person is an already-registered suspicious candidate, and the appearance history in the suspicious person appearance list is updated.
- [Rule 3-2] If no match is found by Rule 3-1, the white list 14 is searched in the same way; that is, estimated person IDs whose similarity to the feature quantity vector of the record newly added to the DBs 13a and 13b is at least the third threshold are searched from the white list 14.
- The third threshold is set loosely enough that a plurality of estimated person IDs (D), not necessarily the same person, are extracted. Then, for each of the estimated person IDs (D), referring to the per-orientation feature quantity vectors held in the white list 14 records, a vector is sought whose similarity is at least the fourth threshold and whose face orientation is close to that of the newly added record.
- If found, the estimated person ID is stored in the added record, and the white list 14 is also updated as necessary. If not found, the record is newly registered in the white list 14 (as in Rule 2-3) if it was added to the DB 13a, or newly registered in the suspicious person appearance list 15 if it was added to the DB 13b.
- The fourth threshold, used in the second search, serves to narrow down the results of the first search; since it compares faces of the same orientation, where similarity tends to be high, it is normally set above the third threshold (when the same scale is used). When feature quantity vectors have not been collected for every face orientation, a plurality of vectors are interpolated to obtain one with the same face orientation as the newly added record. Also, if Rule 3-2 is applied while the white list 14 still has few registrations, residents may end up in the suspicious person appearance list 15; registration there should therefore be suspended while the number of registrations is small.
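The coarse-then-fine search of Rules 3-1/3-2 (a loose third threshold over overall averages, then a strict fourth threshold over matching face orientations) can be sketched as below; the candidate layout, one-dimensional features, and threshold values are hypothetical:

```python
def two_stage_search(query, query_pose, candidates, sim, t3, t4):
    """Rules 3-1/3-2 sketch: a coarse pass over overall-average
    features with the loose third threshold t3, then a fine pass that
    compares only the feature vector whose face orientation matches
    the query, using the stricter fourth threshold t4.
    `candidates` maps person ID -> {"avg": vec, orientation: vec}."""
    coarse = [pid for pid, rec in candidates.items()
              if sim(query, rec["avg"]) >= t3]
    for pid in coarse:
        vec = candidates[pid].get(query_pose)
        if vec is not None and sim(query, vec) >= t4:
            return pid            # registered person found
    return None                   # -> new white-list / suspicious entry

# Toy 1-D features and similarity
sim = lambda a, b: 1.0 - abs(a - b)
candidates = {1: {"avg": 0.50, "front": 0.52},
              2: {"avg": 0.80, "front": 0.95}}
print(two_stage_search(0.93, "front", candidates, sim, 0.6, 0.95))  # 2
```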
- The suspicious person judgment unit 19 determines whether the appearance history satisfies each of the following rules (whether each proposition is true or false), and judges, by Boolean operations on and weighted scoring of these results, whether the appearance corresponds to a prowler.
- [Rule 4-1] The appearance does not match the order in which the surveillance cameras 12 would capture a person entering or moving about the site or building by normal means (preferably a pattern including time information).
- [Rule 4-2] The appearance is at a time when ordinary residents rarely appear.
- [Rule 4-3] There is no sign of heading for a destination; that is, the person moves more slowly than normal or turns back partway (i.e., is wandering).
- [Rule 4-4] A specific event that should occur before or after entry for a normal resident (gate opening/closing, ID authentication, etc.) is absent.
- [Rule 4-5] The person does not appear on a visitor list created in advance.
- [Rule 4-6] The person is alone (there is no record of another estimated person ID photographed by the same surveillance camera at the same time).
- The moving speed in Rule 4-3 is judged slow when it is significantly longer than a standard travel time obtained in advance for the pair of camera IDs of adjacent appearance-history entries. Turning back is detected when the camera IDs of two appearance-history entries within a predetermined time are the same.
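The Boolean-plus-weighted-scoring judgment over Rules 4-1 to 4-6 might look like the following sketch; the weights and the decision threshold are illustrative, as the specification does not fix them:

```python
def prowler_score(rule_results, weights):
    """Sketch of the Rule-4 decision: each rule evaluates to
    True/False on the appearance history, and the weighted sum of the
    rules that hold gives a prowler score."""
    return sum(w for rule, w in weights.items() if rule_results.get(rule, False))

# Hypothetical weights per rule (not from the specification)
weights = {"4-1": 2.0, "4-2": 1.0, "4-3": 2.0, "4-4": 1.5, "4-5": 1.0, "4-6": 0.5}

observed = {"4-1": True, "4-3": True, "4-6": True}   # rules that held this time
score = prowler_score(observed, weights)
print(score, score >= 4.0)   # flag as prowler if the score clears a threshold
```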
- [Rule 5-1] If the number of samples held in the record of the estimated person ID (B) is 1, no update is performed.
- (Re)grouping may use other known cluster-analysis methods (the k-means method, the deterministic-annealing EM algorithm, etc.); the elements stored in the white list 14 records are chosen according to what the method requires. Many approaches, such as EM, require a measure of the spread of each cluster.
- Stage 4: the operation of stage 3 is continued. If the rate of record additions to the DBs 13a and 13b stays high during stages 2 and 3, processing cannot keep up and records without an estimated person ID accumulate; the addition frequency and the number of unassigned records are therefore monitored, and registrations to the DBs 13a and 13b are thinned out as appropriate so as not to exceed a predetermined upper limit. In addition, records newly added to the DB 13a need not always be searched under Rule 3-1 of stage 3, but when a wanderer is detected, it is advisable to temporarily search for the wanderer also among records newly registered in the white list 14 within a predetermined time.
- In this way, the white list 14 of residents and the like can be created automatically without human intervention, and residents can be prevented from being judged suspicious.
- The feature quantity distribution obtained by continuously shooting the same person is preserved and utilized as much as possible: a representative feature quantity is held per face orientation in the white list, and similarity is judged per face orientation.
- Feature quantities may likewise be held separately by original-image resolution, illumination environment, and other factors that spread the feature quantity distribution.
- The white list 14 and the suspicious person appearance list 15 need not be prepared explicitly; the DBs 13a and 13b may simply be consulted each time. It is more efficient, however, to hold separately at least the parameters for classification learning, metric learning, and kernel learning (the average (representative) feature quantity and variance for each estimated person ID).
- In the description above the white list is created first, but the suspicious person appearance list may be created first instead.
- A white list may also be constructed automatically by judging the likelihood that a person is a resident from the behavior pattern recorded in the suspicious person (non-resident) appearance list.
- If the behavior pattern shows periodic appearances (for example, being captured by a camera at a fixed time every day), the person is strongly presumed to be a resident.
- Not falling under the aforementioned Rules 4-1 to 4-6 can also be used to presume residency. A person who appears together with a person already found highly likely to be a resident may likewise be judged highly likely to be a resident. A person judged to be a resident is deleted from the suspicious person appearance list 15 and registered in the white list.
- FIG. 3 is a diagram illustrating how the staying-person search device of the third embodiment is installed. This device is intended to detect a person who stays on a railway-station platform without boarding arriving trains. If such a person lingers at the end of a platform where trains enter at high speed, the person may stray onto the track when a train enters and cause an accident.
- a sufficient number of dome-shaped cameras 12c and fixed cameras 12d are installed on the platform roof to capture the face of a person staying at the end of the platform.
- The dome-type camera 12c is a small television camera mounted on a gimbal so that its shooting direction and range can be remotely controlled, covered with a dark-colored transparent hemispherical cover.
- the dome-type camera 12c and the fixed camera 12d output a video signal having a dynamic range compressed so that the face of a person can be discriminated even when there is a shade and a sun in the field of view.
- The dome-type camera 12c and the fixed camera 12d are not limited to photographing the platform on which they are installed, and may also photograph an adjacent platform.
- FIG. 4 is a configuration diagram of the staying person search device of the third example.
- This device includes, in addition to the dome-type camera 12c and the fixed camera 12d, at least one video storage server 2, a similar face image search server 3, a display terminal 4, and a management terminal 5, connected by a LAN (local area network) 6.
- the video storage server 2 sends a video transmission request to the dome camera 12c via the LAN 6, and receives and stores image data from the dome camera 12c.
- the stored video is provided in response to a request from the similar face image search server 3 or the like.
- the video storage server 2 includes a camera I / F 21, a recording / distribution control unit 22, a Web server unit 23, a storage 24, and a setting holding unit 25 as functional configurations.
- the camera I / F 21 communicates with the dome type camera 12c and the like via the LAN 6 according to a specific protocol of the dome type camera 12c and the like.
- the contents of communication are authentication for confirming the authority to acquire video, video transmission request including designation of image quality, image data, and the like.
- The recording/distribution control unit 22 has a large-capacity cache memory and manages writing and reading of image data to and from the storage 24. At the time of recording, in addition to the image data, an image ID (image identification information) used for reading the image data back is also recorded. Since the streams of image data from a large number of cameras must be recorded in real time without omission while also satisfying read requests, the recording/distribution control unit 22 optimizes the recording unit of image data, their arrangement on the storage 24, and the write/read scheduling. In addition, to operate the storage 24 as a RAID (Redundant Arrays of Inexpensive Disks), it controls the generation and writing of redundant data (parity). By dividing the plurality of image data streams into a plurality of write streams and generating horizontal parity or the like across them, reads for parity generation become unnecessary.
- The Web server unit 23 receives image requests from the similar face image search server 3 and others via the HTTP protocol, and returns the image data read out via the recording/distribution control unit 22 as the response. It also provides the operating state of the video storage server 2 as a Web page.
- The storage 24 is a disk array composed of a plurality of HDDs (hard disk drives) and SSDs (solid state drives), and uses a special file system suited to simultaneous multiple access to video streams. For example, tabulating every correspondence between an image ID (which uniquely identifies one image) and its recording position would exceed the capacity of main memory, so the file system is designed so that this correspondence can still be referred to efficiently.
- Data to be recorded is encrypted by the RAID controller, or per individual drive, using AES (Advanced Encryption Standard)-256 or the like.
- the setting holding unit 25 stores a schedule for the camera I / F to acquire video from the dome-type camera 12c, a re-distribution schedule to the display device, and the like.
- The similar face image search server 3 comprises, as its functional configuration, an image acquisition I/F 31, a face detection / feature quantity calculation unit 32, a face registration / search unit 33, a face feature quantity DB 34, a Web service unit 35, a search trigger unit 36, a setting holding unit 37, and a failure notification unit 38.
- the image acquisition I / F 31 is connected to the LAN 6 and acquires the video distributed by multicast from the dome camera 12c and the recorded image data of the video storage server 2 by making an image acquisition request or the like.
- the face detection / feature amount calculation unit 32 divides the image acquired by the image acquisition I / F 31 for each frame, and extracts an area estimated to be a human face from each frame.
- The estimation is basically based on skin color and on whether eyes and a nose can be detected; for each extracted face area, the face orientation is further estimated from the centroid of the face area and the relative positions of the eyes, nose, and mouth.
- the face area is handled as a rectangular image having a predetermined aspect ratio, but portions other than the face (background area) such as the four corners are filled with a prescribed color.
- normalization processing such as size (number of vertical and horizontal pixels), resolution, brightness, and contrast (histogram) is performed on each face area.
- the size is normalized to a plurality of sizes.
- For each of the plurality of sizes, the normalized face area is divided into blocks of a fixed size, and for each block a histogram of color or luminance, of their gradients or edges, or of gradient/edge patterns is obtained; the result is output as the facial feature quantity.
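A minimal sketch of this block-histogram feature extraction (luminance histograms only; gradient or edge-pattern histograms would use the same block structure, and the image below is a toy stand-in for a normalized face area):

```python
def block_histograms(image, block, bins, vmax=256):
    """Split a normalized face image (2-D list of luminance values)
    into fixed-size blocks and build a luminance histogram per block;
    the concatenated block histograms form the face feature vector."""
    h, w = len(image), len(image[0])
    feature = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            hist = [0] * bins
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    hist[image[y][x] * bins // vmax] += 1
            feature.extend(hist)
    return feature

# 4x4 toy image, 2x2 blocks, 2 luminance bins -> 4 blocks x 2 bins
img = [[0, 0, 200, 200],
       [0, 0, 200, 200],
       [100, 100, 255, 255],
       [100, 100, 255, 255]]
print(block_histograms(img, 2, 2))  # [4, 0, 0, 4, 4, 0, 0, 4]
```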
- the shooting time, camera ID, image ID, and other metadata and face orientation information possessed by the frame from which the face was extracted are output together.
- the metadata can further include the enlargement ratio at the time of size normalization, the accuracy of the face estimation, the spatial coordinates within the frame from which the face was extracted, an extraction ID indicating which of the plural faces extracted from the same frame this one is, and the type of event.
- the accuracy of face estimation is a value that approaches 0 when another object covers a part of the face.
- the event type is information indicating whether the acquired image comes from normal recording (distribution) or from alarm recording (distribution). Since the face feature amount as computed has several thousand dimensions, redundant components that are highly correlated with other components are removed.
- a matrix composed of eigenvectors (basis vectors) obtained in advance by known principal component analysis, linear (Fisher) discriminant analysis, or independent component analysis is multiplied from the left onto the feature vector; this multiplication method is simple. If the dimension is reduced by principal component analysis alone, it should be reduced to about 1000 dimensions.
- the face registration / search unit 33 writes (registers) the feature amount calculated by the face detection / feature amount calculation unit 32 into the face feature amount DB 34, and performs reading, search, and other operations.
- the feature amount is registered as a record including attribute information such as a shooting time, a camera ID, a face image ID, a face orientation, and an enlargement ratio.
- the face registration / search unit 33 classifies the feature quantity into one of the categories and registers it in the cluster (leaf cluster) corresponding to that category.
- the clusters have a hierarchical structure like a multi-way decision tree, and at registration it suffices to identify which branch to follow. Registration into a cluster is performed in a multi-level manner using a dictionary (also called a map, code book, etc.) or a hash function such as locality-sensitive hashing.
- the dictionary is updated by an algorithm such as LBG (Linde-Buzo-Gray) or LVQ (learning vector quantization), which slightly modifies the representative vectors of the cluster and its neighboring clusters.
- a method using a covariance matrix or the like is based on a mixture distribution model: a matrix operation between the feature vector to be registered and each mean vector and covariance matrix yields a posterior probability (log likelihood, or a sign-inverted Mahalanobis distance), and classification is performed by selecting the cluster with the largest value; that is, the next lower cluster to be referred to is determined.
- An EM algorithm is known for optimizing parameters such as the covariance matrices. Note that the k-d tree method, which has long been known, is not suited to searching high-dimensional vectors as in this example. However, if only the components of the feature vector extracted from the smallest normalized size are used, they may be usable in the upper part of the tree.
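A minimal sketch of the mixture-model classification step described above, assuming Gaussian components with known mean vectors and covariance matrices (the helper name and the two-cluster setup below are illustrative):

```python
import numpy as np

def select_subcluster(feature, means, covs):
    """Hypothetical sketch: pick the child cluster with the largest Gaussian
    log-likelihood (equivalently, the smallest Mahalanobis-based cost)."""
    best, best_ll = -1, -np.inf
    for i, (mu, cov) in enumerate(zip(means, covs)):
        inv = np.linalg.inv(cov)
        d = feature - mu
        maha2 = float(d @ inv @ d)           # squared Mahalanobis distance
        # log-likelihood up to a constant shared by all clusters
        ll = -0.5 * (maha2 + np.log(np.linalg.det(cov)))
        if ll > best_ll:
            best, best_ll = i, ll
    return best
```

Descending the tree repeats this selection at each level until a leaf cluster is reached.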
- each leaf cluster ideally has a size that contains exactly the distribution of one person's face images, so that it corresponds to a prototype.
- the person IDs held in the leaf cluster dictionaries and the like are maintained in correspondence with the generation, division, and merging of leaf clusters by the clustering algorithm described above.
- a leaf cluster does not necessarily correspond one-to-one to an actual individual person, and the leaf cluster ID and the person ID may be handled separately.
- a person ID table 71 storing the address of the leaf cluster corresponding to the person ID may be provided.
- a face image ID table is provided separately, or a representative face image ID column is added to the person ID table 71.
- when the search by face image ID is not exhaustive, the feature amount is extracted from that frame (again).
- the feature amount search method is similar to registration, and the face registration / search unit 33 performs nearest neighbor search (NNS: Nearest neighbor search) sequentially from the highest cluster using the given feature amount as a key.
- when a leaf cluster is reached, a linear (brute-force) search within it computes the Manhattan distance between the key feature quantity and that of each record, and records attaining a desired similarity or a desired number of records are obtained. Note that if, in any one of the classification levels down to the leaf cluster, the second-nearest cluster is also followed once in addition to the nearest one, at most a few leaf clusters are reached in that hierarchy; performing a linear search among these reduces search omissions. If the size of a leaf cluster is smaller than the distribution of one person's feature amounts, the search within the leaf cluster is unnecessary.
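The descent-plus-brute-force search described above can be sketched as follows. This toy version uses a single level of cluster centers, follows both the nearest and second-nearest clusters to reduce search omissions, and ranks leaf records by Manhattan distance; all names and the data layout are assumptions:

```python
import numpy as np

def nns_search(key, centers, leaves, k=3):
    """Hypothetical two-level sketch: descend to the nearest and second
    nearest cluster center, then brute-force Manhattan distance within
    those leaves and return the k closest record IDs."""
    d = np.abs(centers - key).sum(axis=1)    # Manhattan distance to centers
    picked = np.argsort(d)[:2]               # nearest and second nearest
    candidates = []
    for c in picked:
        for rec_id, feat in leaves[c]:
            candidates.append((float(np.abs(feat - key).sum()), rec_id))
    candidates.sort()
    return [rec_id for _, rec_id in candidates[:k]]
```

In the multi-level structure of the text, the same "nearest plus second-nearest" widening would be applied at one of the intermediate levels rather than at the root only.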
- the face feature DB 34 is a recording medium such as an HDD that holds the record data contained in the leaf clusters while maintaining the information of the multi-level clustering structure (tree).
- it also holds parameters such as dictionaries, hash functions, and covariance matrices, together with numerical values (such as intermediate calculation values) used by the algorithms that optimize those parameters.
- a person ID table 71, a last search date/time list 72, a black list 73 (described later), and the like are also held.
- the face feature DB 34 is preferably realized entirely as an in-memory DB, but at minimum the tree information, parameters, numerical values, and lists should be cached in memory during operation.
- the leaf clusters are arranged at recording positions (sectors) that can be read contiguously.
- the operations on the records in the face feature DB 34 are basically only new registration and deletion; existing records are not modified.
- the face feature DB 34 is internally composed of a plurality of divided DBs (a division distinct from the clustering described above). When the capacity of the current divided DB reaches its upper limit, the oldest divided DB is initialized and becomes the new registration destination. A time series of divided DBs is called a generation. The DB may be further divided according to the enlargement ratio, face orientation, and the like. To optimize the discrimination in the clustering described above, the basis functions used for feature dimension reduction can be updated when the generation changes; as a result, the feature space may vary between generations. If the purpose is a staying-person search, the generation interval may be about the same as the staying time.
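A minimal sketch of the divided-DB rotation described above, assuming an in-memory list per division (the division count and capacity are illustrative):

```python
class GenerationDB:
    """Hypothetical sketch of the divided-DB rotation: when the current
    division is full, the oldest one is initialized and reused as the
    new registration destination."""
    def __init__(self, divisions=3, capacity=2):
        self.divs = [[] for _ in range(divisions)]
        self.current = 0
        self.capacity = capacity
        self.generation = 0

    def register(self, record):
        if len(self.divs[self.current]) >= self.capacity:
            # rotate: the oldest division becomes the new registration target
            self.current = (self.current + 1) % len(self.divs)
            self.divs[self.current] = []     # initialize (drop old records)
            self.generation += 1
        self.divs[self.current].append(record)
```

A generation change is also the natural point to swap in updated dimension-reduction bases, since each division then holds features from a single feature space.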
- the Web service unit 35 receives search requests and image requests from the display terminal 4, and setting information for the similar face image search server 3 from the management terminal 5, and responds with the processing results as data such as Web pages; that is, it embodies the search application.
- the search request includes a staying person detection request (short-term search request), a key designation search request, a blacklist 73 collation request, an event search request, and a combination thereof.
- information (such as the face image ID corresponding to the feature value used as a key) is passed to the face registration / search unit 33.
- Web page (display screen) data is generated and returned as the response to the display terminal 4.
- the search result is provided as a tile display or the like in which a plurality of face images are arranged in a grid pattern.
- each face image in the Web page carries an address (URI) in the video storage server 2 derived from its face image ID.
- the display terminal 4 can make a new search request designating the face in the search result as a key.
- the search trigger unit 36 controls the face registration / search unit 33 to automatically detect a staying person at an appropriate time interval.
- the search trigger unit 36 receives train operation data and the like and, by checking it against the current date and time, provides the face registration / search unit 33 with appropriate search conditions. Here, consider searching for faces photographed continuously over a period of T hours, from the current time back to a time several minutes before the stop time of the train two trains earlier that stopped at the station.
- when the face registration / search unit 33 newly registers a face, the person ID associated with the leaf cluster to be registered is acquired. Then, using that person ID as a key, the last search date/time list 72 is consulted to obtain the person's last search date/time.
- when the difference between that last search date/time and the current time exceeds α×T hours (α being a coefficient less than 1), a similar face search limited to records registered during the past T hours is performed, and the number of extracted records is returned from the face registration / search unit 33.
- this time limitation can be realized, for example, by checking the shooting time prior to calculating the similarity during the brute-force search and excluding records that do not meet the condition.
- whenever a search is performed, the last search date/time list 72 is updated.
- the last search date / time list 72 is a table that holds the last search date / time using the person ID as a primary key.
- the Web service unit 35 and the search trigger unit 36 can easily determine whether the person is a stayer by comparing the received extraction count with the registration count expected for a stayer (the search target time T divided by the registration time interval, which depends on the frequency of image acquisition from the video storage server 2), and notify the display terminal or register the person in the black list 73 as appropriate. Even if the extraction count is small, a person registered in the black list 73 is a notification target.
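The stayer decision described above, comparing the extraction count against T divided by the registration interval, might look like the following sketch. The ratio threshold is an assumption for illustration; the patent only describes a comparison:

```python
def is_stayer(extracted, search_window_hours, register_interval_hours,
              on_blacklist=False, ratio=0.8):
    """Hypothetical sketch: compare the extraction count with the
    registration count expected for a stayer (T / registration interval)."""
    if on_blacklist:
        return True                          # blacklisted: notify regardless
    expected = search_window_hours / register_interval_hours
    return extracted >= ratio * expected
```

For example, with a two-hour window and a face registered roughly every two minutes, about 60 registrations are expected, so 50 extractions would flag a stayer while 10 would not.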
- the black list 73 uses a person ID as a main key, and holds a registration history consisting of the date and time of detection as a staying person, the face image ID at that time, and the like for several months.
- the setting holding unit 37 holds various settings necessary for the similar face image search server 3, information on users who can log in, and the like.
- the failure notification unit 38 notifies the management terminal 5 or the like of failures occurring in the similar face image search server 3, using an SNMP trap or the like.
- the display terminal 4 is a PC (personal computer) having a Web browser function; it transmits search requests and image requests to the similar face image search server 3 or the video storage server 2 and displays the Web pages and the like received in response.
- the functions of the search request control unit 42, the search result display processing unit 43, the search condition designation processing unit 44, and the server status display processing unit 45 are realized by JavaScript (trademark), ActiveX (trademark), .NET, and the like included in the Web browser and Web pages, while the video display processing unit 41 is realized by DirectShow (trademark) or the like.
- the display terminal 4 may have the same functions as the display terminal of a general surveillance camera system; that is, an arbitrary camera can be specified, and live or recorded video can be acquired from the video storage server 2 and displayed using a protocol such as MRCP (Media Resource Control Protocol).
- the management terminal 5 is a general PC (personal computer) used to make the staying-person search device of this example import video from external media or back up recorded video.
- the external medium I / F 51 is an interface capable of connecting an arbitrary external medium such as a DVD drive or USB (trademark).
- the format changing unit 52 converts the video or DB captured from the external medium I / F 51 or the like so as to match the format of the staying person search device.
- the file unit 53 is a storage device that holds video captured from the external medium I / F 51 or the like or after format conversion.
- the upload unit 54 transmits a file between the similar face image search server 3 or the video storage server 2 and the file unit 53 by the FTP protocol or the like via the LAN 6.
- for example, black list source data acquired externally is uploaded to the similar face image search server 3 and merged into its black list 73, or a snapshot of the face feature DB 34 is downloaded to the file unit 53.
- the external DB I/F 55 uses ODBC (Open DataBase Connectivity), the JET database engine, or the like, and can access any server on the network to input and output data. For example, it is used when the black lists of similar staying-person search devices installed on the routes of other railway companies are linked to each other.
- the LAN 6 is a private network constructed by, for example, Ethernet (trademark) or the like, and connects each device from the video storage server 2 to the management terminal 5 that can be installed at various bases.
- the LAN 6 is not limited to a single collision domain network.
- a learning machine can be used for identifying persons (similarity, clustering), but it is not directly involved in determining whether someone is a suspicious person. That is, suspicious persons are extracted by easy-to-understand rules based on appearance history, and the final determination is made by a human monitor.
- the rules can be applied widely, since parameters such as the extraction target time can be adjusted intuitively for each application.
- the present invention does not preclude using these monitor decisions as training data for a learning machine.
- the configuration of the system or apparatus according to the present invention is not necessarily limited to the configuration described above, and various configurations may be used.
- the present invention can also be provided as, for example, a method or apparatus for executing the processing according to the present invention, a program for causing a computer to implement such a method, or a non-transitory tangible medium recording the program. For example, it can be provided as a combination of a program that, installed on a smartphone, makes it function as a surveillance camera 12 and a program that, installed on a personal computer or the like, makes it function as a home security system, or as the latter alone.
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Library & Information Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
- Alarm Systems (AREA)
Abstract
Description
Patent Document 1 discloses a video search system and person search method that extracts from an image the portion in which a person (or the person's face) appears, extracts a color histogram or the like as a feature quantity for individually identifying the person, and infers that the person is the same when this feature quantity is similar to that of a desired person.
Even taking the case of detecting a suspicious person with a security camera as an example, it is difficult to translate the abstract notion of "suspicious" uniquely into low-level processing that a machine can perform. Even if a learning machine such as the support vector machine of Patent Document 6 (JP 2011-210252 A) is used, unless the learning result can be expected to be reusable, the effort of preparing training data (the conditions under which a suspicious person should be detected) is not worthwhile.
The loiterer detection system of this example is intended to monitor the surroundings of a space used mainly by a specific group of people, such as an apartment complex, a company office, or a school. As shown in FIG. 1, there is a facility (building) 11, and a plurality of surveillance cameras 12 are installed to photograph the surrounding grounds. The range that these surveillance cameras 12 can photograph is called the monitored range. Each surveillance camera 12 has a face detection function; when a captured image contains a face, it transmits a partial image (face image) cropped to the face portion, a feature vector calculated from that face image, and the like. A database (DB) 13 registers the feature vectors received from the surveillance cameras 12 in association with the time and other information.
FIG. 3 shows the processing blocks of the loiterer detection system of this example. The basic configuration of the system and each component are the same as in Embodiment 1 unless otherwise noted. This example uses at least two surveillance cameras 12a and 12b: camera 12a is installed in an environment where only persons known not to be suspicious (residents) can be photographed (for example, a security area requiring authentication by ID card or the like), and camera 12b is installed in a place where suspicious persons are to be detected (for example, an entrance). The DB 13 is likewise divided internally into two DBs 13a and 13b according to the two monitoring environments.
[Stage 1] This is the initial stage, in which the number of faces registered in the DB 13 is increased until meaningful searches become possible. At this stage, faces received from the surveillance cameras 12a and 12b are simply registered as-is; the estimated person ID is undetermined (empty) and the grouping trial count is at its initial value (0). However, an estimated person ID can be assigned by the following rule, which exploits the fact that the same person tends to appear continuously in a moving image.
{Rule 1-1}: For face detection data transmitted continuously from a surveillance camera 12a with the same camera ID, if there is no other data with the same shooting time, or if the feature similarity with the immediately preceding face detection data is at or above a predetermined threshold, the same estimated person ID is assigned to those face detection data and they are registered in DB 13a. The newly issued estimated person ID is also registered in the whitelist 14. Estimated person IDs assigned under this rule without confirmation based on feature similarity may use identifiers distinguishable from IDs assigned by other rules.
{Rule 2-1}: From the search results against DB 13a, among the same-person candidates whose similarity is at or above a first threshold, those that already have an estimated person ID are extracted. If the most frequent estimated person ID (A) among them accounts for at least a first predetermined proportion, or if an estimated person ID (A) with the same camera ID and temporally continuous shooting times is found, A is also assigned as the estimated person ID of the record used as the key face and of the records among the same-person candidates whose estimated person ID is undetermined.
{Rule 2-2}: From the search results against DBs 13a and 13b, those whose similarity is at or above a second threshold and whose face orientation is close to that of the key face are treated as same-person candidates. If among them there are records assigned the same estimated person ID (A) at or above a second predetermined proportion, A is also assigned to the record used as the key face and to the records among the same-person candidates whose estimated person ID is undetermined. The second predetermined proportion may be 0; that is, if even one of the same-person candidates has an estimated person ID, it is also assigned to the other same-person candidates.
{Rule 2-4}: When an estimated person ID is assigned under Rule 2 or Rule 3, if there is a record with a different estimated person ID (B) among the same-person candidates, the updated whitelist is consulted and the feature vectors of estimated person IDs A and B are compared to determine whether they should be merged. For example, when sufficient averages of feature vectors per face orientation are available, the two are merged as the same person if sufficient similarity is found in a comparison per face orientation. This prevents erroneous merging even when a feature distribution that ignores face orientation overlaps with that of another person. Records in DBs 13a and 13b carrying the estimated person ID eliminated by the merge have their estimated person ID updated.
These rules can also be interpreted as an implementation of the known k-nearest-neighbor method, the minimum mean-variance method, or an LBG (Linde-Buzo-Gray) method that performs merging only, and other known clustering techniques can substitute for them, though merging of groups need not be forced. Because the shooting environments of the source face images differ between DB 13a and DB 13b and the feature quantities of even the same person vary, the clustering evaluation measure takes into account face orientation, the main cause of that variation. The weights used when computing similarity (distance in feature space) may also be optimized (varied) according to face orientation.
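A minimal sketch of the majority-vote part of the grouping rules above (Rule 2-1 style), assuming the same-person candidates above the similarity threshold are given as a list of already-assigned estimated person IDs, with `None` for undetermined records; the function name and ratio are illustrative:

```python
from collections import Counter

def assign_person_id(candidates, min_ratio=0.5):
    """Hypothetical sketch of Rule 2-1: among same-person candidates, adopt
    the most frequent estimated person ID if it accounts for at least
    min_ratio of the IDs already assigned; otherwise leave undetermined."""
    ids = [pid for pid in candidates if pid is not None]
    if not ids:
        return None
    pid, count = Counter(ids).most_common(1)[0]
    return pid if count / len(ids) >= min_ratio else None
```

The returned ID would then be propagated to the key-face record and to the undetermined candidates, as the rule describes.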
{Rule 3-1}: The suspicious-person appearance list 15 is searched for estimated person IDs whose similarity to the feature vector of a newly added record in DB 13b (and DB 13a) is at or above a third threshold. The third threshold is set so that multiple estimated person IDs (C) are extracted, in order not to miss the same person. Then, for these estimated person IDs (C), among the appearance histories held in the records of the suspicious-person appearance list 15, those whose face orientation is close to that of the newly added record have their feature vectors retrieved from DB 13b using the face image ID as a key. If a retrieved feature vector with a similarity at or above a fourth threshold is found, the person is a registered suspicious-person candidate, and the appearance history in the suspicious-person appearance list is updated.
Note that the fourth threshold used in the second search serves to narrow down the first search, and since the comparison is made at the same face orientation, where similarity tends to be higher, it is normally at or above the third threshold (provided the same feature space and distance measure are used). When feature vectors have not been collected exhaustively for each face orientation, multiple feature vectors are interpolated to obtain a feature vector with the same orientation as the newly added record. Also, if Rule 3-2 is applied while the whitelist 14 still has few registrations, residents may end up in the suspicious-person appearance list 15; registration in the list 15 should therefore be deferred while the number of registrations is small.
{Rule 4-1}: The appearance order does not match the order (preferably, a pattern including time information) in which the person would be photographed by the surveillance cameras 12 when entering or moving through the grounds or building in the normal way.
{Rule 4-2}: The person appears during a time period in which ordinary residents rarely appear.
{Rule 4-3}: There is no sign of heading toward any destination, e.g. moving slower than the normal movement speed or turning back partway (in other words, loitering).
{Rule 4-4}: A specific event (gate opening/closing, ID authentication, etc.) that should occur before or after an ordinary resident appears is absent.
{Rule 4-5}: The person does not appear in a pre-prepared list of expected visitors.
{Rule 4-6}: The person is alone. (No record of another (estimated person ID) photographed simultaneously by the same surveillance camera exists.)
For the movement speed of Rule 4-3, a standard travel time is determined in advance for each combination of camera IDs of adjacent appearance histories, and the person is judged slow when the actual time is notably longer. Turning back is judged by two appearance histories within a predetermined time having the same camera ID.
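The travel-time and turn-back checks of Rule 4-3, as explained above, can be sketched as follows; the appearance history is assumed to be a list of (camera ID, timestamp) pairs, and the slowness factor is an assumption:

```python
def is_slow(history, standard_time, factor=2.0):
    """Hypothetical sketch of Rule 4-3: judge 'slow' when the travel time
    between adjacent camera sightings is notably longer than the
    pre-measured standard time for that camera pair."""
    for (cam_a, t_a), (cam_b, t_b) in zip(history, history[1:]):
        std = standard_time.get((cam_a, cam_b))
        if std is not None and (t_b - t_a) > factor * std:
            return True
    return False

def turned_back(history, window):
    # turning back: the same camera appears twice within the given time window
    for i, (cam_a, t_a) in enumerate(history):
        for cam_b, t_b in history[i + 1:]:
            if cam_b == cam_a and (t_b - t_a) <= window:
                return True
    return False
```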
{Rule 5-1}: Using as the key face the average of the feature vectors held by the record of that estimated person ID (B) in the whitelist 14, a similar-face search is performed on DBs 13a and 13b, and regrouping is performed by the same criteria as {Rule 3-1} or {Rule 3-2}.
{Rule 5-2}: Notwithstanding Rule 5-1, no update is performed when the number of samples held by the record of that estimated person ID (B) is one.
In addition to the above, known cluster analysis techniques (the k-means method, deterministic annealing, the EM algorithm, and so on) can also be used for (re)grouping, and the elements stored in the records of the whitelist 14 may be chosen according to the technique in use. For example, many techniques such as EM require a measure of the spread of each cluster. When grouping the same person into a single group is particularly important, one may add new components to the feature vector by applying independent component analysis or the like locally around the group center (the overall average feature), use a locally applicable Mahalanobis distance that includes neighboring groups (so-called metric learning), or adopt nonlinear discrimination by kernel PCA; in that case a one-class SVM, which allows unsupervised (weakly supervised) learning, can be used.
FIG. 3 shows the installation of the staying-person search device of Embodiment 3. This device aims to detect, on a railway station platform, persons who stay without boarding arriving trains. Such a person, if staying toward the rear of the platform where trains enter at high speed, may at times step onto the tracks as a train enters and cause an accident, and depending on the situation may be a person in need of protection.
The video storage server 2 has, as functional components, a camera I/F 21, a recording/distribution control unit 22, a Web server unit 23, a storage 24, and a setting holding unit 25.
Because streams of image data from many cameras must be recorded in real time without loss while read requests are also served, the recording/distribution control unit 22 optimizes the write unit of image data and the recording layout on the storage 24, and schedules writes and reads. It also controls the generation and writing of redundant data (parity) to operate the storage 24 as a RAID (Redundant Arrays of Inexpensive Disks). By distributing the multiple image data streams across multiple write streams and generating horizontal parity and the like from them, reads for parity generation are made unnecessary.
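The horizontal-parity idea described above, generating parity directly from the distributed write streams so that no read-back of recorded data is needed, can be sketched as a byte-wise XOR (stripe sizes and names are illustrative):

```python
def horizontal_parity(stripes):
    """Hypothetical sketch: XOR the corresponding bytes of the write streams,
    producing parity without re-reading already-recorded data."""
    assert stripes and all(len(s) == len(stripes[0]) for s in stripes)
    parity = bytearray(len(stripes[0]))
    for s in stripes:
        for i, b in enumerate(s):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_stripes, parity):
    # a lost stripe is the XOR of the parity with the surviving stripes
    return horizontal_parity(list(surviving_stripes) + [parity])
```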
21: camera I/F, 22: recording/distribution control unit, 23: Web server unit, 24: storage, 25: setting holding unit, 31: image acquisition I/F, 32: face detection / feature amount calculation unit, 33: face registration / search unit, 34: face feature amount DB, 35: Web service unit, 36: search trigger unit, 37: setting holding unit, 38: failure notification unit, 71: person ID table, 72: last search date/time list, 73: black list.
Claims (7)
- a step of building a database by detecting face images in input video, extracting a feature quantity from each face image, and registering the feature quantity in the database together with time information;
a step of designating, as the face image to be judged, a face image automatically detected from the input video or manually specified;
a step of searching the database built in the building step for similar faces, with a restriction on the time axis;
a step of counting how many results in the search results of the searching step have a similarity higher than a predetermined value, and judging that, if the count is large, the number of appearances is large and the likelihood of loitering is high, and that, if the count is small, the number of appearances is small and the likelihood of loitering is low; and
a step of calculating the similarity between the face of a pre-registered person who is not suspicious and the person with the large extraction count, and, if the similarity is high, re-judging the person as not suspicious regardless of the judgment of the judging step; a person search method comprising these steps. - a first step of receiving face detection data including feature quantities and time information extracted from face images detected in video from a plurality of cameras, and registering the data in a first database or a second database according to the attributes of the cameras;
a second step of grouping the same person by performing a similarity search against at least one of the first database and the second database using as a key the face of one record in the first database whose estimated person ID is undetermined, assigning some estimated person ID to the key record based on a predetermined first rule, and updating a whitelist that holds the assigned estimated person IDs in association with their feature quantities;
a third step of creating a suspicious-person appearance list 15 from the whitelist 14 based on a predetermined second rule, and detecting suspicious-person candidates by performing a similarity search against the suspicious-person list using as a key at least the face of face detection data newly registered in the second database; and
a fourth step of, when a suspicious-person candidate is detected, appending at least part of the face detection data used as the key in the third step to the suspicious-person appearance list as an appearance history, and judging, based on a predetermined third rule, from the appearance history held in the suspicious-person appearance list whether the person qualifies as a suspicious person; a person search method comprising these steps. - the person search method according to claim 2, wherein the face detection data includes orientation information of the detected face, and the whitelist holds a plurality of representative feature quantities each corresponding to a face orientation and an overall representative feature quantity corresponding to all face orientations.
- the person search method according to claim 2, wherein the predetermined second rule performs a similarity search against the whitelist using as a key the face of face detection data newly registered in the second database, and newly registers the person in the suspicious-person appearance list when no result has a similarity higher than a predetermined value.
- the predetermined first rule includes at least one of:
a sub-rule that, from the results of the search against the first database in the second step, extracts the same-person candidates whose similarity to the overall representative feature quantity is at or above a first threshold and that already have an estimated person ID, and, when the most frequent estimated person ID among them accounts for at least a first predetermined proportion, or when an estimated person ID with the same camera ID and continuity of shooting times is found, assigns the same ID as those estimated person IDs to the key-face record and to the records among the same-person candidates whose estimated person ID is undetermined; and
a sub-rule that, from the results of the search against at least one of the first database and the second database in the second step, treats as same-person candidates those whose similarity to a representative feature quantity whose face orientation is close to the key face is at or above a second threshold, and, when among them there are records assigned the same estimated person ID at or above a second predetermined proportion, assigns that same ID to the key-face record and to the records among the same-person candidates whose estimated person ID is undetermined; this being the person search method according to claim 3. - the predetermined third rule is based on the truth or falsity of one or more of the following propositions:
not matching the order in which the person should be photographed by the plurality of cameras when entering or moving through the grounds or building in a legitimate way;
appearing during a time period in which the owner, employees, or persons concerned with the grounds or building rarely appear;
moving slower than the normal movement speed, or turning back partway;
a specific event that should occur before or after the appearance of the owner, an employee, or a person concerned not having been detected; and
not matching the scheduled time of a visitor notified in advance, and no other record photographed simultaneously by the same surveillance camera existing; with no similarity learning machine directly involved; this being the person search method according to claim 2. - a step of building a database by detecting face images in input video from a plurality of cameras photographing a station platform, extracting feature quantities from the face images, and registering the feature quantities in the database together with time information;
a step of searching the database built in the building step for similar feature quantities, with a restriction on the time axis that is wider than the interval at which trains arrive at and depart from the platform;
a step of judging whether a person is a stayer by comparing the number extracted by the searching step with the registration count expected for a stayer;
a step of registering the judged stayer in a blacklist; and
a step of issuing a notification when a feature quantity similar to that of the registered stayer is retrieved from the database; a platform staying-person search device comprising these steps.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/771,004 US9589181B2 (en) | 2013-02-28 | 2014-02-18 | Person search method and device for searching person staying on platform |
JP2015502875A JP6080940B2 (ja) | 2013-02-28 | 2014-02-18 | 人物検索方法及びホーム滞留人物検索装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013039087 | 2013-02-28 | ||
JP2013-039087 | 2013-02-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014132841A1 true WO2014132841A1 (ja) | 2014-09-04 |
Family
ID=51428113
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/053766 WO2014132841A1 (ja) | 2013-02-28 | 2014-02-18 | 人物検索方法及びホーム滞留人物検索装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9589181B2 (ja) |
JP (1) | JP6080940B2 (ja) |
WO (1) | WO2014132841A1 (ja) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104376679A (zh) * | 2014-11-24 | 2015-02-25 | 苏州立瓷电子技术有限公司 | 一种智能家居预警方法 |
CN104392578A (zh) * | 2014-11-24 | 2015-03-04 | 苏州立瓷电子技术有限公司 | 一种具有预警功能的家庭防火防盗系统 |
WO2016162963A1 (ja) * | 2015-04-08 | 2016-10-13 | 株式会社日立製作所 | 画像検索装置、システム及び方法 |
JP2016181020A (ja) * | 2015-03-23 | 2016-10-13 | 日本電気株式会社 | 画像処理装置、画像処理システム、画像処理方法及びプログラム |
WO2016199192A1 (ja) * | 2015-06-08 | 2016-12-15 | 株式会社アシストユウ | 人工知能を備えた移動式遠隔監視カメラ |
WO2017046838A1 (ja) * | 2015-09-14 | 2017-03-23 | 株式会社日立国際電気 | 特定人物検知システムおよび特定人物検知方法 |
JP2018018406A (ja) * | 2016-07-29 | 2018-02-01 | 日本電気株式会社 | 検出装置、監視システム、検出方法及びプログラム |
JP2019087932A (ja) * | 2017-11-09 | 2019-06-06 | 日本電気株式会社 | 情報処理システム |
CN109871822A (zh) * | 2019-03-05 | 2019-06-11 | 百度在线网络技术(北京)有限公司 | 用于输出信息的方法和装置 |
JP2019091395A (ja) * | 2017-11-15 | 2019-06-13 | キヤノン株式会社 | 情報処理装置、監視システム、方法及びプログラム |
JP2019159666A (ja) * | 2018-03-12 | 2019-09-19 | 株式会社ツクルバ | 不動産情報提供システム |
JP2019216424A (ja) * | 2015-03-19 | 2019-12-19 | 日本電気株式会社 | 監視システム及び監視方法 |
US10949657B2 (en) | 2016-11-22 | 2021-03-16 | Panasonic Intellectual Property Management Co., Ltd. | Person's behavior monitoring device and person's behavior monitoring system |
JPWO2021176544A1 (ja) * | 2020-03-03 | 2021-09-10 | ||
JP2021163310A (ja) * | 2020-04-01 | 2021-10-11 | 株式会社東芝 | 表示制御装置、表示制御方法及びプログラム |
JP2021528765A (ja) * | 2018-08-31 | 2021-10-21 | 日本電気株式会社 | 同一人物をグループ化するための方法、システム、およびプログラム |
JP2022013340A (ja) * | 2020-07-03 | 2022-01-18 | トヨタ自動車株式会社 | 制御装置、プログラム、及び制御システム |
EP3890312A4 (en) * | 2018-12-18 | 2022-02-16 | Huawei Technologies Co., Ltd. | METHOD AND SYSTEM FOR DISTRIBUTED IMAGE ANALYSIS AND STORAGE MEDIUM |
JP7424939B2 (ja) | 2020-08-07 | 2024-01-30 | エヌ・ティ・ティ・コミュニケーションズ株式会社 | 人物検出装置、人物追跡装置、人物追跡システム、人物検出方法、人物追跡方法、人物検出プログラム及び人物追跡プログラム |
Families Citing this family (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7631151B2 (en) | 2005-11-28 | 2009-12-08 | Commvault Systems, Inc. | Systems and methods for classifying and transferring information in a storage network |
US20200257596A1 (en) | 2005-12-19 | 2020-08-13 | Commvault Systems, Inc. | Systems and methods of unified reconstruction in storage systems |
US8872910B1 (en) * | 2009-06-04 | 2014-10-28 | Masoud Vaziri | Method and apparatus for a compact and high resolution eye-view recorder |
US8892523B2 (en) | 2012-06-08 | 2014-11-18 | Commvault Systems, Inc. | Auto summarization of content |
JP6296813B2 (ja) * | 2014-01-30 | 2018-03-20 | キヤノン株式会社 | 情報処理端末、情報処理端末の制御方法およびプログラム |
JP6324094B2 (ja) * | 2014-02-03 | 2018-05-16 | キヤノン株式会社 | 情報処理端末、情報処理端末の制御方法およびプログラム |
US10656136B2 (en) * | 2014-06-16 | 2020-05-19 | Nikon Corporation | Observation apparatus, observation method, observation system, program, and cell manufacturing method |
KR102024867B1 (ko) * | 2014-09-16 | 2019-09-24 | 삼성전자주식회사 | 예제 피라미드에 기초하여 입력 영상의 특징을 추출하는 방법 및 얼굴 인식 장치 |
US10043089B2 (en) * | 2015-03-11 | 2018-08-07 | Bettina Jensen | Personal identification method and apparatus for biometrical identification |
CN105100193B (zh) * | 2015-05-26 | 2018-12-11 | 小米科技有限责任公司 | 云名片推荐方法及装置 |
JP6285614B2 (ja) * | 2015-07-01 | 2018-02-28 | 株式会社日立国際電気 | 監視システム、撮影側装置、及び照合側装置 |
US20180239838A1 (en) * | 2015-08-10 | 2018-08-23 | Nec Corporation | Display processing apparatus and display processing method |
US10275684B2 (en) * | 2015-11-04 | 2019-04-30 | Samsung Electronics Co., Ltd. | Authentication method and apparatus, and method and apparatus for training a recognizer |
US10353888B1 (en) | 2016-03-03 | 2019-07-16 | Amdocs Development Limited | Event processing system, method, and computer program |
US10140345B1 (en) * | 2016-03-03 | 2018-11-27 | Amdocs Development Limited | System, method, and computer program for identifying significant records |
WO2017169189A1 (ja) * | 2016-03-30 | 2017-10-05 | 日本電気株式会社 | 解析装置、解析方法及びプログラム |
US11113609B2 (en) * | 2016-04-07 | 2021-09-07 | Ancestry.Com Operations Inc. | Machine-learning system and method for identifying same person in genealogical databases |
RU2632473C1 (ru) * | 2016-09-30 | 2017-10-05 | ООО "Ай Ти Ви групп" | Способ обмена данными между ip видеокамерой и сервером (варианты) |
US10540516B2 (en) | 2016-10-13 | 2020-01-21 | Commvault Systems, Inc. | Data protection within an unsecured storage environment |
CN107992497B (zh) * | 2016-10-27 | 2021-01-29 | 杭州海康威视系统技术有限公司 | 一种图片展示方法及装置 |
CN106845355B (zh) * | 2016-12-24 | 2018-05-11 | 深圳云天励飞技术有限公司 | 一种人脸识别的方法、服务器及系统 |
CN106845356B (zh) * | 2016-12-24 | 2018-06-05 | 深圳云天励飞技术有限公司 | 一种人脸识别的方法、客户端、服务器及系统 |
US11023712B2 (en) * | 2017-01-05 | 2021-06-01 | Nec Corporation | Suspiciousness degree estimation model generation device |
US10311288B1 (en) * | 2017-03-24 | 2019-06-04 | Stripe, Inc. | Determining identity of a person in a digital image |
JP2018186397A (ja) * | 2017-04-26 | 2018-11-22 | キヤノン株式会社 | 情報処理装置、映像監視システム、情報処理方法及びプログラム |
CN107146350A (zh) * | 2017-05-15 | 2017-09-08 | 刘铭皓 | 一种利用移动客户端实现实时防盗的远程控制方法 |
US10832035B2 (en) * | 2017-06-22 | 2020-11-10 | Koninklijke Philips N.V. | Subject identification systems and methods |
US10025950B1 (en) * | 2017-09-17 | 2018-07-17 | Everalbum, Inc | Systems and methods for image recognition |
CN107820010B (zh) * | 2017-11-17 | 2020-11-06 | 英业达科技有限公司 | 摄影计数装置 |
CN108038176B (zh) * | 2017-12-07 | 2020-09-29 | 浙江大华技术股份有限公司 | 一种路人库的建立方法、装置、电子设备及介质 |
WO2019128883A1 (zh) * | 2017-12-27 | 2019-07-04 | 苏州欧普照明有限公司 | 一种身份标定系统和方法 |
US10642886B2 (en) * | 2018-02-14 | 2020-05-05 | Commvault Systems, Inc. | Targeted search of backup data using facial recognition |
CN109145842A (zh) * | 2018-08-29 | 2019-01-04 | 深圳市智莱科技股份有限公司 | 基于图像识别控制智能储物柜的箱门的方法及装置 |
JP7119794B2 (ja) * | 2018-09-05 | 2022-08-17 | トヨタ自動車株式会社 | ログデータの生成方法、プログラム、及びデータ構造 |
JP7018001B2 (ja) * | 2018-09-20 | 2022-02-09 | 株式会社日立製作所 | 情報処理システム、情報処理システムを制御する方法及びプログラム |
CN109544595B (zh) * | 2018-10-29 | 2020-06-16 | 苏宁易购集团股份有限公司 | 一种顾客路径追踪方法及系统 |
CN109544716A (zh) * | 2018-10-31 | 2019-03-29 | 深圳市商汤科技有限公司 | 学生签到方法及装置、电子设备和存储介质 |
CN109523325A (zh) * | 2018-11-29 | 2019-03-26 | 成都睿码科技有限责任公司 | 一种基于人脸识别的针对性的自调节广告投放系统 |
CN109492616B (zh) * | 2018-11-29 | 2022-03-29 | 成都睿码科技有限责任公司 | 一种基于自主学习的广告屏用人脸识别方法 |
EP3667557B1 (en) * | 2018-12-13 | 2021-06-16 | Axis AB | Method and device for tracking an object |
JP2021144506A (ja) * | 2020-03-12 | 2021-09-24 | パナソニックi−PROセンシングソリューションズ株式会社 | 顔検知方法、顔検知プログラムおよびサーバ |
DE102020206350A1 (de) * | 2020-05-20 | 2022-01-27 | Robert Bosch Gesellschaft mit beschränkter Haftung | Verfahren zur Detektion von Vergleichspersonen zu einer Suchperson, Überwachungsanordnung, insbesondere zur Umsetzung des Verfahrens, sowie Computerprogramm und computerlesbares Medium |
US11556563B2 (en) | 2020-06-12 | 2023-01-17 | Oracle International Corporation | Data stream processing |
KR102473804B1 (ko) * | 2020-10-16 | 2022-12-05 | 이노뎁 주식회사 | 영상관제 시스템에서 카메라 영상내 관제 지점의 지도 매핑 방법 |
US11663192B2 (en) * | 2020-12-10 | 2023-05-30 | Oracle International Corporation | Identifying and resolving differences between datastores |
TWI830264B (zh) * | 2022-06-24 | 2024-01-21 | 中華電信股份有限公司 | 用於列車門或月台門之安全示警系統、方法及電腦可讀媒介 |
US11800244B1 (en) | 2022-08-13 | 2023-10-24 | Mojtaba Vaziri | Method and apparatus for an imaging device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006011728A (ja) * | 2004-06-24 | 2006-01-12 | Omron Corp | 不審者対策システム及び不審者検出装置 |
JP2010205191A (ja) * | 2009-03-06 | 2010-09-16 | Omron Corp | 安全管理装置 |
JP2011186733A (ja) * | 2010-03-08 | 2011-09-22 | Hitachi Kokusai Electric Inc | 画像検索装置 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7634662B2 (en) * | 2002-11-21 | 2009-12-15 | Monroe David A | Method for incorporating facial recognition technology in a multimedia surveillance system |
GB2382289B (en) * | 2001-09-28 | 2005-07-06 | Canon Kk | Method and apparatus for generating models of individuals |
JP4036051B2 (ja) * | 2002-07-30 | 2008-01-23 | オムロン株式会社 | Face matching device and face matching method |
US7239724B2 (en) * | 2003-07-22 | 2007-07-03 | International Business Machines Corporation | Security identification system and method |
JP4795718B2 (ja) * | 2005-05-16 | 2011-10-19 | 富士フイルム株式会社 | Image processing apparatus, method, and program |
JP4700477B2 (ja) | 2005-11-15 | 2011-06-15 | 株式会社日立製作所 | Moving object monitoring system and moving object feature quantity calculation device |
JP2008108151A (ja) * | 2006-10-27 | 2008-05-08 | Funai Electric Co Ltd | Monitoring system |
JP2009027393A (ja) | 2007-07-19 | 2009-02-05 | Hitachi Ltd | Video search system and person search method |
JP5412133B2 (ja) * | 2009-02-20 | 2014-02-12 | オリンパスイメージング株式会社 | Playback apparatus and playback method |
JP2011066867A (ja) | 2009-09-18 | 2011-03-31 | Yukio Shigeru | Suicide prevention monitoring and notification method and suicide prevention monitoring and notification device |
US8401282B2 (en) | 2010-03-26 | 2013-03-19 | Mitsubishi Electric Research Laboratories, Inc. | Method for training multi-class classifiers with active selection and binary feedback |
JP5777310B2 (ja) | 2010-09-21 | 2015-09-09 | 株式会社日立国際電気 | Image security system and authentication method |
JP5754150B2 (ja) * | 2011-02-01 | 2015-07-29 | 株式会社デンソーウェーブ | Security device |
US8948465B2 (en) * | 2012-04-09 | 2015-02-03 | Accenture Global Services Limited | Biometric matching technology |
2014
- 2014-02-18 US US14/771,004 patent/US9589181B2/en active Active
- 2014-02-18 JP JP2015502875A patent/JP6080940B2/ja active Active
- 2014-02-18 WO PCT/JP2014/053766 patent/WO2014132841A1/ja active Application Filing
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104392578A (zh) * | 2014-11-24 | 2015-03-04 | 苏州立瓷电子技术有限公司 | Home fire and theft prevention system with early-warning function |
CN104376679A (zh) * | 2014-11-24 | 2015-02-25 | 苏州立瓷电子技术有限公司 | Smart home early-warning method |
JP2019216424A (ja) * | 2015-03-19 | 2019-12-19 | 日本電気株式会社 | Monitoring system and monitoring method |
JP7111422B2 (ja) | 2015-03-19 | 2022-08-02 | 日本電気株式会社 | Monitoring system and monitoring method |
JP2016181020A (ja) * | 2015-03-23 | 2016-10-13 | 日本電気株式会社 | Image processing apparatus, image processing system, image processing method, and program |
WO2016162963A1 (ja) * | 2015-04-08 | 2016-10-13 | 株式会社日立製作所 | Image search device, system, and method |
JPWO2016162963A1 (ja) * | 2015-04-08 | 2018-01-11 | 株式会社日立製作所 | Image search device, system, and method |
US10795928B2 (en) | 2015-04-08 | 2020-10-06 | Hitachi, Ltd. | Image search apparatus, system, and method |
WO2016199192A1 (ja) * | 2015-06-08 | 2016-12-15 | 株式会社アシストユウ | Mobile remote monitoring camera equipped with artificial intelligence |
JPWO2017046838A1 (ja) * | 2015-09-14 | 2018-06-28 | 株式会社日立国際電気 | Specific person detection system, specific person detection method, and detection device |
WO2017046838A1 (ja) * | 2015-09-14 | 2017-03-23 | 株式会社日立国際電気 | Specific person detection system and specific person detection method |
EP3355269A4 (en) * | 2015-09-14 | 2019-05-08 | Hitachi Kokusai Electric Inc. | SYSTEM FOR THE DETECTION OF A PARTICULAR PERSON AND METHOD FOR THE DETECTION OF A PARTICULAR PERSON |
US10657365B2 (en) | 2015-09-14 | 2020-05-19 | Hitachi Kokusai Electric Inc. | Specific person detection system and specific person detection method |
JP2018018406A (ja) * | 2016-07-29 | 2018-02-01 | 日本電気株式会社 | Detection device, monitoring system, detection method, and program |
US10949657B2 (en) | 2016-11-22 | 2021-03-16 | Panasonic Intellectual Property Management Co., Ltd. | Person's behavior monitoring device and person's behavior monitoring system |
JP7075034B2 (ja) | 2017-11-09 | 2022-05-25 | 日本電気株式会社 | Information processing system |
JP2022111124A (ja) * | 2017-11-09 | 2022-07-29 | 日本電気株式会社 | Information processing system |
JP2019087932A (ja) * | 2017-11-09 | 2019-06-06 | 日本電気株式会社 | Information processing system |
JP7097721B2 (ja) | 2017-11-15 | 2022-07-08 | キヤノン株式会社 | Information processing apparatus, method, and program |
JP2019091395A (ja) * | 2017-11-15 | 2019-06-13 | キヤノン株式会社 | Information processing apparatus, monitoring system, method, and program |
JP2019159666A (ja) * | 2018-03-12 | 2019-09-19 | 株式会社ツクルバ | Real estate information provision system |
JP7111188B2 (ja) | 2018-08-31 | 2022-08-02 | 日本電気株式会社 | Method, system, and program for grouping the same person |
JP2021528765A (ja) * | 2018-08-31 | 2021-10-21 | 日本電気株式会社 | Method, system, and program for grouping the same person |
EP3890312A4 (en) * | 2018-12-18 | 2022-02-16 | Huawei Technologies Co., Ltd. | METHOD AND SYSTEM FOR DISTRIBUTED IMAGE ANALYSIS AND STORAGE MEDIUM |
CN109871822A (zh) * | 2019-03-05 | 2019-06-11 | 百度在线网络技术(北京)有限公司 | Method and device for outputting information |
JPWO2021176544A1 (ja) * | 2020-03-03 | 2021-09-10 | ||
WO2021176544A1 (ja) * | 2020-03-03 | 2021-09-10 | 富士通株式会社 | Control method, control program, and information processing device |
JP7231879B2 (ja) | 2020-03-03 | 2023-03-02 | 富士通株式会社 | Control method, control program, and information processing device |
JP2021163310A (ja) * | 2020-04-01 | 2021-10-11 | 株式会社東芝 | Display control device, display control method, and program |
JP7419142B2 (ja) | 2020-04-01 | 2024-01-22 | 株式会社東芝 | Display control device, display control method, and program |
JP2022013340A (ja) * | 2020-07-03 | 2022-01-18 | トヨタ自動車株式会社 | Control device, program, and control system |
JP7334686B2 (ja) | 2020-07-03 | 2023-08-29 | トヨタ自動車株式会社 | Control device, program, and control system |
JP7424939B2 (ja) | 2020-08-07 | 2024-01-30 | エヌ・ティ・ティ・コミュニケーションズ株式会社 | Person detection device, person tracking device, person tracking system, person detection method, person tracking method, person detection program, and person tracking program |
Also Published As
Publication number | Publication date |
---|---|
JPWO2014132841A1 (ja) | 2017-02-02 |
US9589181B2 (en) | 2017-03-07 |
JP6080940B2 (ja) | 2017-02-15 |
US20160012280A1 (en) | 2016-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6080940B2 (ja) | Person search method and device for searching for a person staying on a platform | |
Xu et al. | Video structured description technology based intelligence analysis of surveillance videos for public security applications | |
JP6403784B2 (ja) | Surveillance camera system | |
US11232685B1 (en) | Security system with dual-mode event video and still image recording | |
US11710392B2 (en) | Targeted video surveillance processing | |
WO2021063011A1 (zh) | Behavior analysis method and apparatus, electronic device, storage medium, and computer program | |
EP2113846B1 (en) | Behavior history searching device and behavior history searching method | |
US20210343136A1 (en) | Event entity monitoring network and method | |
US20070291118A1 (en) | Intelligent surveillance system and method for integrated event based surveillance | |
US20230386305A1 (en) | Artificial Intelligence (AI)-Based Security Systems for Monitoring and Securing Physical Locations | |
US20210319226A1 (en) | Face clustering in video streams | |
KR101979375B1 (ko) | 감시 영상의 객체 행동 예측 방법 | |
US11348367B2 (en) | System and method of biometric identification and storing and retrieving suspect information | |
KR102110375B1 (ko) | 학습 전이 기반의 비디오 감시 방법 | |
CN110543583A (zh) | Information processing method and apparatus, image device, and storage medium | |
US11586682B2 (en) | Method and system for enhancing a VMS by intelligently employing access control information therein | |
Pogadadanda et al. | Abnormal activity recognition on surveillance: a review | |
Zhang et al. | A Multiple Instance Learning and Relevance Feedback Framework for Retrieving Abnormal Incidents in Surveillance Videos. | |
Farhi et al. | Smart identity management system by face detection using multitasking convolution network | |
CN117351405B (zh) | Crowd behavior analysis system and method | |
Hemaanand et al. | Smart surveillance system using computer vision and Internet of Things | |
KR102644230B1 (ko) | Security room storage management system using a machine learning algorithm | |
US20230185908A1 (en) | Privacy-aware event detection | |
Tan et al. | An artificial intelligence and internet of things platform for healthcare and industrial applications | |
Saxena et al. | Robust Home Alone Security System Using PIR Sensor and Face Recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14757207 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015502875 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14771004 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14757207 Country of ref document: EP Kind code of ref document: A1 |