WO2018116488A1 - Analysis server, monitoring system, monitoring method, and program - Google Patents

Analysis server, monitoring system, monitoring method, and program Download PDF

Info

Publication number
WO2018116488A1
Authority
WO
WIPO (PCT)
Prior art keywords
monitoring target
video data
suspicious person
video
server
Prior art date
Application number
PCT/JP2017/008327
Other languages
French (fr)
Japanese (ja)
Inventor
洋明 網中
一志 村岡
大 金友
太一 大辻
則夫 山垣
孝司 吉永
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Priority to JP2018557514A priority Critical patent/JP7040463B2/en
Publication of WO2018116488A1 publication Critical patent/WO2018116488A1/en
Priority to JP2022034125A priority patent/JP2022082561A/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention claims the benefit of priority of U.S. Provisional Application No. 62/437747 (filed on Dec. 22, 2016), the entire contents of which are incorporated herein by reference.
  • the present invention relates to an analysis server, a monitoring system, a monitoring method, and a program.
  • Security at an event venue is provided by video surveillance using cameras fixedly installed at the venue or wearable cameras attached to security guards.
  • Patent Document 1 discloses a technique in which the position of a person appearing in a surveillance camera image is detected, and an IC tag and an antenna are used to determine whether the detected person is an unidentified person other than predetermined guards and persons under guard.
  • For the detected person, the age, sex, and number of people acting together are estimated, and a crime risk value is calculated based on the estimation result.
  • Patent Document 2 discloses a technique that confirms, based on detection information from a suspicious person detecting device, whether the detected person really is a suspicious person, and that further confirms, by collation with data in a criminal database, whether the person has a criminal record. The document also discloses that, based on the confirmation result, the user and a security company (or the police) are notified, or a street broadcasting device close to the site is selected and warning information is announced by street broadcasting via a regional broadcasting device.
  • A crime may be committed by a single offender, but it may also be committed by a plurality of offenders. When a crime is committed by a plurality of offenders, its scale becomes larger and the damage can be serious. Therefore, in addition to preventing crimes by single offenders, it is even more important to prevent crimes by multiple offenders.
  • Patent Document 1 discloses estimating a criminal level from the number of persons acting together with an unidentified person. However, in that document, anyone other than the guards and the persons under guard is treated as unidentified, and no consideration is given to potential criminals (accomplices). Patent Document 2 discloses collating a suspicious person against a database, but this document likewise does not take potential criminals (accomplices) into account.
  • An object of the present invention is to provide an analysis server, a monitoring system, a monitoring method, and a program that contribute to enabling detection of targets related to the main monitoring target in addition to the main monitoring target.
  • According to a first aspect of the present invention, there is provided an analysis server that, when a first monitoring target is detected, detects a second monitoring target estimated to have a relationship with the first monitoring target, based on video data acquired from a video server that outputs video data from a camera to the outside.
  • According to a second aspect of the present invention, there is provided a monitoring system including a video server that outputs video data from a camera to the outside, and an analysis server that detects a second monitoring target based on video data acquired from the video server.
  • According to a third aspect of the present invention, there is provided a monitoring method including: acquiring video data from a video server that outputs video data from a camera to the outside; and, when a first monitoring target is detected, detecting a second monitoring target estimated to have a relationship with the first monitoring target, based on the video data acquired from the video server.
  • According to a fourth aspect of the present invention, there is provided a program causing a computer to execute: a process of acquiring video data from a video server that outputs video data from a camera to the outside; and a process of, when a first monitoring target is detected, detecting a second monitoring target estimated to have a relationship with the first monitoring target, based on the video data acquired from the video server.
  • This program can be recorded on a computer-readable storage medium.
  • The storage medium may be non-transitory, such as a semiconductor memory, a hard disk, a magnetic recording medium, or an optical recording medium.
  • the present invention can also be embodied as a computer program product.
  • An analysis server, a monitoring system, a monitoring method, and a program that contribute to enabling detection of targets related to the main monitoring target, in addition to the main monitoring target, are provided.
  • When a first monitoring target is detected, the analysis server 101 according to an embodiment detects a second monitoring target estimated to have a relationship with the first monitoring target, based on video data acquired from a video server that outputs video data from a camera to the outside (see FIG. 1).
  • The analysis server 101 includes a first detection unit that detects the first monitoring target based on video data output from the video server in real time, and a second detection unit that detects the second monitoring target based on past video data stored in the video server.
  • The analysis server 101 analyzes the video data stored in the video server using, for example, four determination techniques described later, and attempts to detect the second monitoring target. As a result, targets related to the main monitoring target can be detected in addition to the main monitoring target.
  • Connection lines between the blocks in each drawing include both bidirectional and unidirectional lines. A unidirectional arrow schematically shows the main signal (data) flow and does not exclude bidirectionality.
  • In the monitoring system according to the first embodiment, a person who has committed a crime in the past is set as the “main monitoring target” that the monitoring system chiefly monitors. More specifically, a person who has committed a crime in the past and whose name, address, etc. are stored in a criminal database is detected as a “suspicious person”. Furthermore, a monitoring target estimated to have a relationship with the main monitoring target (a target suspected of being related) is detected as a “potential suspicious person”.
  • FIG. 2 is a diagram illustrating an example of the configuration of the monitoring system according to the first embodiment.
  • the monitoring system includes a plurality of cameras 10-1 to 10-3, a video server 20, an analysis server 30, and a database (DB) server 40.
  • FIG. 2 shows three cameras 10, but this is not intended to limit the number of cameras.
  • the plurality of cameras 10 are comprehensively arranged in an area to be guarded. That is, a required number of cameras are installed according to the area to be guarded.
  • Each camera 10 is connected to the video server 20 via a wired or wireless network.
  • Each camera 10 outputs video data to the video server 20.
  • Video data can be regarded as a set of image data that is continuous in time series; therefore, in the present disclosure, the data captured by each camera is described as video data. The data output from each camera is, of course, not limited to video data and may be image data.
  • The video server 20 includes a storage medium and stores the video data transmitted from each camera 10.
  • the video server 20 outputs video data stored in the storage medium in response to a request from the outside (specifically, the analysis server 30).
  • the analysis server 30 acquires video data from the video server 20.
  • When a suspicious person (the above-described first monitoring target) is detected, the analysis server 30 detects a target estimated to have a relationship with the main monitoring target (the above-described second monitoring target; a potential suspicious person) based on the video data acquired from the video server 20. More specifically, the analysis server 30 detects a suspicious person using the video data of each camera 10 output from the video server 20 in real time and the information stored in the database server 40.
  • the analysis server 30 detects the suspicious person 61 first. Thereafter, the analysis server 30 detects a potential suspicious person 62 estimated to have a predetermined relationship with the suspicious person 61.
  • The analysis server 30 is installed, for example, in the operation center of a security company, and when a suspicious person and/or a potential suspicious person is detected, the result is displayed on a monitor. An operator who sees the result takes action such as rushing to the site or reporting to the police.
  • The database server 40 is a device that manages a criminal database in which information on criminals is accumulated.
  • the information on the criminal is, for example, a face image of a person who has committed a crime in the past, a feature amount (feature amount vector) calculated from the face image, or the like.
  • The database includes at least feature amounts calculated from face images of persons who have committed crimes in the past.
  • The database server 40 transmits information on criminals in response to a request from the analysis server 30. Registering the feature amounts calculated from the face images in the database allows the information to be exchanged efficiently, because the data size of a feature amount is usually smaller than that of a face image.
  • FIG. 3 is a block diagram illustrating an example of a hardware configuration of the analysis server 30 according to the first embodiment.
  • the analysis server 30 can be configured by a so-called computer (information processing apparatus), and has the configuration illustrated in FIG.
  • the analysis server 30 includes a CPU (Central Processing Unit) 31, a memory 32, an input / output interface 33, a NIC (Network Interface Card) 34 that is a communication interface, and the like that are connected to each other via an internal bus.
  • the configuration illustrated in FIG. 3 is not intended to limit the hardware configuration of the analysis server 30.
  • the analysis server 30 may include hardware (not shown) or may not include the input / output interface 33 as necessary.
  • the number of CPUs and the like included in the analysis server 30 is not limited to the example illustrated in FIG. 3. For example, a plurality of CPUs may be included in the analysis server 30.
  • the memory 32 is a RAM (Random Access Memory), a ROM (Read Only Memory), or an auxiliary storage device (hard disk or the like).
  • the input / output interface 33 is an interface of a display device and an input device (not shown).
  • the display device is, for example, a liquid crystal display.
  • the input device is, for example, a device that accepts a user operation such as a keyboard or a mouse, or a device that inputs information from an external storage device such as a USB (Universal Serial Bus) memory.
  • the user inputs necessary information to the analysis server 30 using a keyboard, a mouse, or the like.
  • the function of the analysis server 30 is realized by a processing module described later.
  • the processing module is realized, for example, when the CPU 31 executes a program stored in the memory 32.
  • the program can be downloaded through a network or updated using a storage medium storing the program.
  • the processing module may be realized by a semiconductor chip.
  • the function performed by the processing module may be realized by some hardware and / or software.
  • the computer can function as the analysis server 30 by installing the above-described computer program in the storage unit of the computer.
  • the computer can execute a monitoring method for suspicious persons and potential suspicious persons.
  • The video server 20 and the database server 40 can also be configured as information processing apparatuses in the same manner as the analysis server 30, and their basic hardware configuration can be the same as that of the analysis server 30, so the description thereof is omitted. Since the hardware configuration of the camera 10 is obvious to those skilled in the art, its description is also omitted.
  • FIG. 4 is a diagram illustrating an example of a processing configuration of the video server 20 according to the first embodiment.
  • the video server 20 includes a communication control unit 201, a data storage unit 202, and a data output unit 203.
  • the communication control unit 201 controls communication with other devices (camera 10, analysis server 30). Further, the communication control unit 201 distributes data (packets) acquired from the outside to an appropriate processing module. For example, when video data is acquired from the camera 10, the communication control unit 201 delivers the video data to the data storage unit 202. Further, the communication control unit 201 transmits data acquired from each processing module to another device. For example, when video data (a plurality of image data) is acquired from the data output unit 203, the communication control unit 201 transmits the video data to the analysis server 30.
  • When the data storage unit 202 acquires video data from a camera 10 via the communication control unit 201, it classifies the acquired video data by the camera 10 from which it was acquired and stores it in a storage medium such as an HDD (Hard Disk Drive). At that time, the data storage unit 202 compresses the acquired video data as necessary before storing it in the storage medium.
  • the analysis server 30 may request the video server 20 to submit past video data. Therefore, the data storage unit 202 stores the time stamp together with the video data in a storage medium so that the past video data can be easily extracted (see FIG. 5).
  • the data output unit 203 outputs the video data stored in the storage medium to the analysis server 30.
  • The data output unit 203 outputs video data captured by each camera 10 to the analysis server 30 in real time. That is, the data output unit 203 periodically reads the latest data from the video data (image data) stored in the storage medium and transmits the read data to the analysis server 30 via the communication control unit 201.
  • the data output unit 203 outputs video data (a series of image data) in a specified range to the analysis server 30 in response to a request from the analysis server 30.
  • the analysis server 30 specifies video data to be acquired by “time (period)”, “location”, “camera ID (Identifier)”, and the like, and requests the video server 20 to submit video data. Based on these pieces of information, the data output unit 203 identifies data that meets the request from the stored video data, and outputs the data to the analysis server 30.
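As one illustration, the lookup performed by the data output unit 203 (selecting stored frames by camera ID and time range and returning them oldest first) could be sketched as follows. The `Frame` record and the function name are hypothetical, not taken from the patent; they only model the "time (period)" and "camera ID" query described above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    camera_id: str    # hypothetical camera identifier, e.g. "10-1"
    timestamp: float  # capture time, seconds since epoch
    data: bytes       # encoded image payload

def select_frames(stored: List[Frame], camera_id: str,
                  start: float, end: float) -> List[Frame]:
    """Return one camera's frames within [start, end], oldest first."""
    hits = [f for f in stored
            if f.camera_id == camera_id and start <= f.timestamp <= end]
    return sorted(hits, key=lambda f: f.timestamp)
```

A real video server would of course read from disk rather than a list, but the filtering logic is the same.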
  • the operation of the video server 20 is summarized as shown in the flowchart of FIG.
  • In step S101, the data output unit 203 outputs the latest video data to the analysis server 30.
  • In step S102, the data output unit 203 checks whether a “video data submission request” has been received from the analysis server 30.
  • If the request has been received (step S102, Yes branch), the data output unit 203 reads the necessary range of video data from the storage medium and transmits it to the analysis server 30 (step S103).
  • If the request has not been received (step S102, No branch), the process returns to step S101 and continues.
  • FIG. 7 is a diagram illustrating an example of a processing configuration of the analysis server 30 according to the first embodiment.
  • the analysis server 30 includes a communication control unit 301, a suspicious person detection unit 302, a potential suspicious person detection unit 303, and a detection result output unit 304.
  • the communication control unit 301 controls communication with other devices in the same manner as the communication control unit 201 of the video server 20.
  • the suspicious person detection unit 302 detects a suspicious person based on video data output from the video server 20 in real time.
  • the suspicious person detection unit 302 corresponds to the first detection unit described above.
  • the suspicious person detection unit 302 tries to extract a face image of a person shown in the video data acquired from the video server 20.
  • Various techniques can be used to extract a face image from video data.
  • For example, an input image (image data that may include a face) may be compared with a template face image, and a face image extracted depending on whether the difference between the two is below a threshold value. It is also possible to store in advance a model combining color information, edge direction, and density, determine that a face exists when a region similar to the model is detected in the input frame, and extract the face image. Furthermore, a face image can be detected using a template created by exploiting the fact that the contour of the face (head) is elliptical and the eyes and mouth are rectangular. A face detection method may also be used that exploits the luminance distribution in which the cheeks and forehead are bright while the eyes and mouth are dark, or that uses the symmetry of the face and the position of the skin-color region.
  • Alternatively, a method may be used that statistically learns the feature distributions obtained from a large number of face and non-face training samples and determines whether the features obtained from the input image belong to the face or the non-face distribution. That is, machine-learning techniques such as support vector machines may be used for face image extraction.
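The simplest of the approaches above, template comparison with a difference threshold, can be sketched in a few lines. This is an illustrative toy (plain lists of grayscale values, mean absolute difference, a made-up threshold), not the patent's actual detector; production systems would use one of the learned methods mentioned above.

```python
def mean_abs_diff(patch, template):
    """Mean absolute pixel difference between two equally sized
    grayscale images, given as 2-D lists of 0-255 values."""
    total, count = 0, 0
    for row_p, row_t in zip(patch, template):
        for p, t in zip(row_p, row_t):
            total += abs(p - t)
            count += 1
    return total / count

def matches_template(patch, template, threshold=20.0):
    """Treat the patch as face-like if its difference to the
    face template is at or below the threshold."""
    return mean_abs_diff(patch, template) <= threshold
```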
  • the suspicious person detection unit 302 calculates a plurality of feature amounts (so-called feature amount vectors) that characterize the face image.
  • For example, the technique disclosed in Reference Document 2 (Japanese Patent Laid-Open No. 2015-097000) can be used: feature points (for example, the center points and end points of the eyes, nose, and mouth) are extracted from the face image, and quantities such as the gray values and positional relationships of these points are calculated as feature amounts. A feature amount vector is obtained by arranging these feature amounts into a set. If two face images show different persons, their feature vectors also differ; if they show the same person, their feature vectors are the same or substantially the same.
  • the suspicious person detection unit 302 acquires information on the criminal from the database server 40.
  • the suspicious person detection unit 302 acquires a feature amount (feature amount vector) calculated from the criminal face image from the database server 40.
  • The feature vectors of face images acquired from the database server 40 are of a type that can be compared with the feature vectors calculated by the suspicious person detection unit 302. In other words, the suspicious person detection unit 302 calculates feature vectors of the same type as those stored in the database server 40 (that is, in the criminal database).
  • the suspicious person detection unit 302 performs a collation process on the feature vector of the face image calculated from the video data and the feature vector acquired from the database server 40. Specifically, the suspicious person detection unit 302 calculates the degree of similarity with the feature amount vector calculated from the video data for each feature amount vector acquired from the database server 40.
  • the suspicious person detection unit 302 calculates a chi-square distance, a Euclidean distance, or the like between two feature quantity vectors.
  • the calculated chi-square distance and Euclidean distance serve as an index indicating the degree of similarity between two feature quantity vectors (two face images characterized by the feature quantity vectors).
  • the index indicating the similarity between the two feature quantity vectors is not limited to the Euclidean distance or the chi-square distance.
  • the index may be a correlation value (Correlation) between two feature vectors.
  • the suspicious person detection unit 302 performs threshold processing on the calculated similarity, and determines whether or not a person shown in the video data acquired from the video server 20 is registered in the database server 40.
  • When a person similar to a person shown in the video data acquired from the video server 20 is registered in the database server 40, the suspicious person detection unit 302 determines that the person is a “suspicious person”. Further, when a suspicious person is detected, the suspicious person detection unit 302 notifies the potential suspicious person detection unit 303 to that effect.
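The matching step described above (distance between feature vectors, then a threshold decision) can be sketched as follows. The function names and the choice of threshold are illustrative assumptions; the patent only specifies that Euclidean distance, chi-square distance, or a correlation value may serve as the similarity index.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def chi_square(u, v, eps=1e-12):
    """Chi-square distance, often used for histogram-like features;
    eps guards against division by zero."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(u, v))

def is_registered(query, db_vectors, max_distance, metric=euclidean):
    """True if any vector in the criminal database is within
    max_distance of the query vector, i.e. the person in the
    video matches someone registered in the database."""
    return any(metric(query, ref) <= max_distance for ref in db_vectors)
```

A smaller distance means a higher similarity, so the threshold test keeps only sufficiently close matches.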
  • The potential suspicious person detection unit 303 detects a person estimated to be related to the suspicious person (a person suspected of having a predetermined relationship with the suspicious person) based on past video data accumulated in the video server 20.
  • When a suspicious person is detected by the suspicious person detection unit 302, the potential suspicious person detection unit 303 analyzes past video data acquired from the video server 20 against a plurality of determination criteria and attempts to detect a potential suspicious person.
  • the potential suspicious person detection unit 303 corresponds to the above-described second detection unit.
  • the potential suspicious person detection unit 303 detects the person as a “potential suspicious person” when there is a target (person) that satisfies at least one of the following determination criteria.
  • The first criterion applies when the suspicious person and a person who may be detected as a potential suspicious person (a potential-suspicious-person candidate, hereinafter simply referred to as a candidate) are captured together by a plurality of cameras 10.
  • the potential suspicious person detection unit 303 analyzes the video data acquired from the video server 20 and detects a potential suspicious person depending on whether or not there are candidates that appear in the plurality of cameras 10 together with the suspicious person.
  • The second criterion applies when the distance between the suspicious person and a candidate is short and the time during which that distance remains short is long. The potential suspicious person detection unit 303 analyzes the video data acquired from the video server 20 and detects a potential suspicious person based on whether there is a candidate whose distance from the suspicious person is less than a predetermined value for a time equal to or longer than a predetermined time.
  • The third criterion applies when the trajectory of the suspicious person (the trajectory traced by their movement) and the trajectory of a candidate intersect. The potential suspicious person detection unit 303 analyzes the video data acquired from the video server 20, calculates the trajectory along which the suspicious person moved and the trajectory along which each candidate moved, and detects a potential suspicious person based on whether the suspicious person's trajectory and a candidate's trajectory intersect.
  • the fourth criterion is when the candidate is performing a predetermined action related to the action of the suspicious person.
  • the potential suspicious person detection unit 303 analyzes the video data acquired from the video server 20, and detects a potential suspicious person depending on whether there is a candidate who has performed an operation related to the suspicious person.
  • Suppose that the suspicious person detection unit 302 detects the suspicious person S. In FIG. 8, the areas colored gray are the imaging areas of the cameras 10; the uncolored areas are captured by cameras 10 that are not shown. T denotes the time at which the suspicious person detection unit 302 detects the suspicious person S.
  • When the suspicious person S is detected, the potential suspicious person detection unit 303 extracts the persons present in the security area (monitoring target area) at time T as “potential suspicious person” candidates for the suspicious person S (the above-mentioned candidates). In the example of FIG. 8A, the three persons other than the suspicious person S are extracted as candidates A to C. For example, the potential suspicious person detection unit 303 applies the above-described face image extraction processing to the video data (image data) of each camera 10 at time T and extracts the candidates. If the same person is captured by a plurality of cameras 10, the potential suspicious person detection unit 303 preferably treats persons having similar feature vectors as the same person to eliminate duplicates.
  • The potential suspicious person detection unit 303 requests the video server 20 to submit past video data (a plurality of image data) preceding the time T for each camera 10.
  • the potential suspicious person detection unit 303 requests the video server 20 to submit video data from time P3 to the current time T.
  • Among the video data from time P3 to time T, the image data at times P2 and P1 are shown as representatives.
  • the relationship between the times P1 to P3 is that the image data at the time P1 is the newest and the image data at the time P3 is the oldest.
  • The potential suspicious person detection unit 303 determines whether there are candidates captured by a plurality of cameras 10 at the same time as the suspicious person. For example, referring to FIG. 8B, the suspicious person S and candidate A appear simultaneously on camera 10-1. Referring to FIG. 8D, the suspicious person S and candidates A and C appear simultaneously on camera 10-3. In this case, candidate A is the one who appears together with the suspicious person S in a plurality of cameras 10, so the potential suspicious person detection unit 303 detects candidate A as a potential suspicious person. Candidate C is not detected as a potential suspicious person, because candidate C appears together with the suspicious person S only once, in the video data of camera 10-3.
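The first criterion can be sketched as a small grouping computation. The record shape `(camera_id, time, person_id)` and the `min_cameras` parameter are illustrative assumptions; the code simply counts, per candidate, the distinct cameras in whose frames the candidate co-occurs with the suspect.

```python
from collections import defaultdict

def co_occurring_candidates(sightings, suspect, min_cameras=2):
    """sightings: iterable of (camera_id, time, person_id) records.
    Returns the candidates who appear in the same frame as the suspect
    in at least min_cameras distinct cameras (the first criterion)."""
    frames = defaultdict(set)  # (camera, time) -> persons in that frame
    for cam, t, person in sightings:
        frames[(cam, t)].add(person)
    cameras_with_suspect = defaultdict(set)  # candidate -> cameras
    for (cam, _), people in frames.items():
        if suspect in people:
            for person in people - {suspect}:
                cameras_with_suspect[person].add(cam)
    return {p for p, cams in cameras_with_suspect.items()
            if len(cams) >= min_cameras}
```

With the FIG. 8 situation (S with A on cameras 10-1 and 10-3, C with S only on camera 10-3), this yields candidate A alone, matching the text above.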
  • the second determination criterion will be described with reference to FIG. Also in this case, it is assumed that the suspicious person detection unit 302 detects the suspicious person S at the time T. When the suspicious person S is detected, the potential suspicious person detecting unit 303 extracts candidates A to C as in the case of the first determination criterion. That is, it is assumed that the situation shown in FIG. 8A is also detected for the second determination criterion.
  • the potential suspicious person detection unit 303 requests the video server 20 to submit past video data from the time T in each camera 10. As for the second criterion, it is assumed that the submission of video data from time P3 to time T is requested.
  • the potential suspicious person detecting unit 303 calculates the movement trajectory within the above period (time P3 to T) for each of the suspicious person S and candidates A to C.
  • the potential suspicious person detection unit 303 specifies a person (suspicious person S, candidates A to C) for each image data forming the video data, and calculates the position of the specified person.
  • The potential suspicious person detection unit 303 uses the position information (coordinates) of each camera 10 and information on each person obtained from each image data (the direction from the camera 10, the size of the person in the image data, and the like) to calculate the position of each person. After that, the potential suspicious person detection unit 303 connects the positions (coordinates) calculated from each image data (at each time) to obtain a movement trajectory for each person.
  • Suppose that the trajectory shown in FIG. 9A is calculated for the suspicious person S, the trajectory shown in FIG. 9B for candidate A, and the trajectory shown in FIG. 9C for candidate B. FIG. 9D is a diagram in which these three trajectories are superimposed: the solid line corresponds to the trajectory of the suspicious person S, the dotted line to that of candidate A, and the alternate long and short dash line to that of candidate B. The time is written near each black dot on the trajectories.
  • The potential suspicious person detection unit 303 calculates the time during which the distance between the trajectory of the suspicious person S and the trajectory of each of candidates A to C remains within a predetermined range (hereinafter referred to as the action time).
  • For example, the potential suspicious person detection unit 303 calculates the distance between the suspicious person and a candidate in each image data (at each time) and counts the number of image data in which the calculated distance is equal to or less than a predetermined value; the action time can be calculated from this count.
  • Referring to FIG. 9D, the trajectories of the suspicious person S and candidate A remain close (the distance between the two persons is equal to or less than the predetermined value) throughout the period, so the action time of candidate A is calculated as times P3 to T. By contrast, the distance between the suspicious person S and candidate B is short between times P3 and P2, but from time P2 to time P1 the distance between the two persons is long (greater than the predetermined value).
  • the potential suspicious person detection unit 303 performs threshold processing on the action time, and detects a candidate having an action time equal to or greater than the threshold as a “potential suspicious person”.
  • In the example of FIG. 9, candidate A is detected as a “potential suspicious person”.
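The action-time computation described above (count the frames in which the two people are within the predetermined distance, then convert the count to a time) can be sketched as follows. The frame interval and distance threshold are illustrative parameters, not values from the patent.

```python
import math

def action_time(track_s, track_c, max_distance, frame_interval=1.0):
    """track_s, track_c: (x, y) positions of the suspicious person and
    a candidate, sampled at the same times.  Returns the total time
    during which the two were within max_distance of each other."""
    close_frames = sum(
        1 for (xs, ys), (xc, yc) in zip(track_s, track_c)
        if math.hypot(xs - xc, ys - yc) <= max_distance)
    return close_frames * frame_interval
```

A candidate whose action time is at or above a threshold would then be flagged as a potential suspicious person, as in the text above.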
• Assume that the suspicious person detection unit 302 detects the suspicious person S at time T. Then, the potential suspicious person detection unit 303 extracts the candidates A to C as in the case of the first determination criterion. That is, it is assumed that the situation shown in FIG. 8A is also detected for the third determination criterion.
  • the potential suspicious person detection unit 303 calculates the movement trajectory of each person (suspicious person S, candidates A to C) in the same manner as the second determination criterion.
• For example, assume that the trajectory shown in FIG. 10A is calculated for the suspicious person S, the trajectory shown in FIG. 10B for the candidate A, and the trajectory shown in FIG. 10C for the candidate B.
• FIG. 10D is a diagram in which the above three trajectories are superimposed. In FIG. 10D, the solid line corresponds to the trajectory of the suspicious person S, the dotted line to the trajectory of the candidate A, and the alternate long and short dash line to the trajectory of the candidate B.
• The potential suspicious person detection unit 303 determines whether or not there is a point where the trajectory of the suspicious person S intersects the trajectory of each of the candidates A to C. If there is an intersecting point, the potential suspicious person detection unit 303 further determines whether or not the times at the intersecting point are within a predetermined range of each other. For example, the trajectory of the suspicious person S shown in FIG. 10A and the trajectory of the candidate A shown in FIG. 10B intersect at time P2. On the other hand, the trajectory of the suspicious person S shown in FIG. 10A and the trajectory of the candidate B shown in FIG. 10C do not intersect at the same time. That is, there is a point where the trajectory of the suspicious person S and the trajectory of the candidate B intersect, but the times are shifted, so the time-related determination of the above two determinations is not satisfied.
• The potential suspicious person detection unit 303 treats a candidate whose trajectory satisfies both of the above determinations (there is an intersecting point, and the times are within a predetermined range) as a "potential suspicious person". In the example of FIG. 10, the candidate A is detected as a "potential suspicious person".
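A pointwise version of the intersection-plus-time check of the third determination criterion can be sketched as follows (an illustrative assumption; the tolerance parameters are hypothetical and the embodiment does not specify how the crossing is computed):

```python
import math

def crossing_within_time(traj_a, traj_b, position_tolerance, time_window):
    # traj_a, traj_b: lists of (time, (x, y)) samples.
    # True when the two trajectories pass through (nearly) the same
    # point AND the times at that point are within time_window of each
    # other.  A crossing at shifted times (candidate B in FIG. 10)
    # fails the second check and does not count.
    for ta, pa in traj_a:
        for tb, pb in traj_b:
            if (math.dist(pa, pb) <= position_tolerance
                    and abs(ta - tb) <= time_window):
                return True
    return False
```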
• Assume that the suspicious person detection unit 302 detects the suspicious person S at time T. Thereafter, the potential suspicious person detection unit 303 extracts candidates as in the case of the first determination criterion. In this case, it is assumed that the situation shown in FIG. is detected. When a suspicious person is detected, the potential suspicious person detection unit 303 requests the video server 20 to submit past video data.
  • the potential suspicious person detection unit 303 analyzes the acquired video data and determines whether or not the suspicious person S is performing a predetermined operation.
• The predetermined operation corresponds to, for example, an operation of talking with another person by operating a smartphone or the like, or an operation of exchanging hand signals with another person.
• The potential suspicious person detection unit 303 determines whether or not the suspicious person is performing the predetermined operation by applying a technique such as pattern matching to the person images shown in the acquired video data. For example, when the suspicious person makes an "X" mark or a "○" mark using an arm or a finger, the potential suspicious person detection unit 303 determines that the suspicious person is exchanging hand signals with another person.
• Next, the potential suspicious person detection unit 303 determines whether or not any candidate performs an action corresponding to the predetermined action at the same time (substantially the same time). For example, if the suspicious person is making a call using a smartphone or the like, it is determined whether or not there is a candidate making a call at the same time. Alternatively, if the suspicious person is sending a hand signal, it is determined whether or not there is a candidate sending a hand signal at the same time.
• The potential suspicious person detection unit 303 sets a candidate that matches the above determination as a "potential suspicious person". For example, referring to FIG. 11(b), at time P1 the suspicious person S and the candidate A are both making a call. Therefore, the potential suspicious person detection unit 303 detects the candidate A as a potential suspicious person. On the other hand, although the candidate B is talking with another person at time P2 (see FIG. 11(c)), the candidate B is not treated as a potential suspicious person because that time differs from the time when the suspicious person S is talking (time P1).
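The simultaneous-action matching of the fourth determination criterion can be sketched as follows (the action labels, data layout, and tolerance are illustrative assumptions):

```python
def simultaneous_action_candidates(suspect_actions, candidate_actions,
                                   time_tolerance):
    # suspect_actions: [(time, action)] observed for the suspicious
    # person, e.g. "call" or "hand_signal" (labels are illustrative).
    # candidate_actions: {candidate_id: [(time, action), ...]}.
    # A candidate is flagged when it performs the same kind of action
    # at substantially the same time as the suspicious person.
    detected = set()
    for cid, actions in candidate_actions.items():
        for t_s, act_s in suspect_actions:
            if any(act_c == act_s and abs(t_c - t_s) <= time_tolerance
                   for t_c, act_c in actions):
                detected.add(cid)
    return detected
```

In the FIG. 11 example, only a candidate calling at (substantially) the suspicious person's call time P1 is flagged; a candidate calling at a different time P2 is not.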
  • the potential suspicious person detection unit 303 determines whether or not a person related to the detected suspicious person exists in the security area according to a plurality of determination criteria (determination conditions).
• The potential suspicious person detection unit 303 performs the above four determinations in order; when a candidate that satisfies a determination criterion is found (when a potential suspicious person is detected), the determination process may be stopped at that point, or all four determination processes may be executed regardless.
• Note that one suspicious person does not necessarily have only one potential suspicious person; there may be two or more potential suspicious persons. In consideration of such a possibility, it is desirable that the potential suspicious person detection unit 303 execute the potential suspicious person detection process (the detection process based on the above four determination criteria) for all of the selected candidates.
  • the potential suspicious person detection unit 303 passes the determination result (suspicious person and / or potential suspicious person) to the detection result output unit 304.
  • the detection result output unit 304 outputs the detection results of the suspicious person detecting unit 302 and the potential suspicious person detecting unit 303 to the outside. For example, when a suspicious person and a cooperator (potential suspicious person) are detected, the detection result output unit 304 performs an output such as requesting an operation center operator to strengthen security in the security area. Specifically, the detection result output unit 304 displays information indicating that a suspicious person and a cooperator have been detected while designating an area that requires enhanced security on an operation center monitor or the like.
  • the detection result may be presented to a security company operator or the like by GUI (Graphical User Interface).
  • the detection result output unit 304 may display a screen as shown in FIG. 12 on the monitor of the operation center.
• The detection result output unit 304 clearly indicates that the two persons are related while showing the suspicious person and the collaborator (potential suspicious person) on a map indicating the security area. In FIG. 12, the suspicious person and the collaborator are connected by a line to clearly show the relationship between the two.
  • the illustration of FIG. 12 is not intended to limit the mode of clearly indicating the suspicious person and the potential suspicious person.
  • the suspicious person or the potential suspicious person may be surrounded by a circle, or the suspicious person and the potential suspicious person may blink.
  • the detection result output unit 304 may display the basis when a potential suspicious person is detected on a monitor or the like. For example, when a potential suspicious person is determined based on the first determination criterion, a plurality of cameras 10 that have acquired video data of the potential suspicious person and the suspicious person may be specified. Alternatively, when a potential suspicious person is determined based on the second and third determination criteria, the movement trajectories of the suspicious person and the potential suspicious person may be displayed on a monitor or the like.
• The detection result output unit 304 may acquire past video data from the video server 20, create a video from which the action history of the suspicious person and the potential suspicious person can be grasped, and output the video to a monitor or the like. That is, the detection result output unit 304 may process and edit the video data acquired from the video server 20 to create a video from which it can be grasped what actions the suspicious person and the potential suspicious person performed in the past.
• When information about the suspicious person or the potential suspicious person (information other than the feature amount) can be acquired from the database server 40, the detection result output unit 304 may also display the acquired information on a monitor or the like. For example, when the face image and profile (name, address, etc.) of the suspicious person can be acquired from the database server 40 (or another database server), the detection result output unit 304 may display these pieces of information superimposed on the display of FIG. 12.
  • the operation of the analysis server 30 is summarized as shown in the flowchart shown in FIG.
• In step S201, the suspicious person detection unit 302 determines whether or not video data has been acquired from the video server 20. If the video data has not been acquired (step S201, No branch), the process of step S201 is continued. If the video data has been acquired (step S201, Yes branch), the suspicious person detection unit 302 extracts a person's face image from the video data and calculates the feature amount (step S202).
  • the suspicious person detection unit 302 requests the database server 40 to submit information related to the criminal (for example, a feature amount calculated from the criminal face image) (step S203).
• In step S204, the suspicious person detection unit 302 determines whether or not the information from the database server 40 has been received. If the information cannot be received (step S204, No branch), the process of step S204 is repeated.
• If the information has been received (step S204, Yes branch), the suspicious person detection unit 302 tries to detect a suspicious person by comparing feature amounts or the like (step S205).
• If no suspicious person is detected (step S205, No branch), the process returns to step S201 and continues.
• When a suspicious person is detected (step S205, Yes branch), the potential suspicious person detection unit 303 is notified of that fact, and the potential suspicious person detection unit 303 requests the video server 20 to submit video data for a past predetermined period (step S206).
• In step S207, the potential suspicious person detection unit 303 determines whether the video data from the video server 20 has been acquired. If the video data cannot be received (step S207, No branch), the process of step S207 is repeated.
• When the video data has been received (step S207, Yes branch), the potential suspicious person detection unit 303 performs an analysis to find a potential suspicious person related to the detected suspicious person (step S208). When no potential suspicious person is detected (step S209, No branch), the process returns to step S201 and continues. When a potential suspicious person is detected (step S209, Yes branch), the potential suspicious person detection unit 303 notifies the detection result output unit 304 of the detection result.
  • the detection result output unit 304 outputs the detection result to an operation center monitor or the like (step S210).
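The overall flow of steps S201 to S210 can be summarized as a loop like the following (every callable below is a stand-in stub supplied by the caller; none of these names appear in the embodiment, and the real units would perform image analysis rather than dictionary lookups):

```python
def monitoring_loop(get_latest, get_past, get_criminal_info, report,
                    iterations):
    # get_latest / get_past stand in for the video server,
    # get_criminal_info for the database server, and report for the
    # detection result output unit.
    results = []
    for _ in range(iterations):
        frame = get_latest()                                # S201
        features = frame.get("faces", [])                   # S202
        criminal_info = get_criminal_info()                 # S203-S204
        hits = [f for f in features if f in criminal_info]  # S205
        if not hits:
            continue                                        # back to S201
        suspect = hits[0]
        past_video = get_past()                             # S206-S207
        potentials = past_video.get(suspect, [])            # S208
        if potentials:                                      # S209
            results.append(report(suspect, potentials))     # S210
    return results
```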
  • FIG. 14 is a diagram illustrating an example of a processing configuration of the database server 40 according to the first embodiment.
  • the database server 40 includes a communication control unit 401 and a database access unit 402.
  • the communication control unit 401 controls communication with other devices in the same manner as the communication control unit 201 of the video server 20.
  • the database access unit 402 accesses a database in which information about criminals is stored, and processes an information submission request from the analysis server 30. More specifically, when the database access unit 402 is requested by the analysis server 30 to submit information related to a criminal, the database access unit 402 reads out necessary information from the database and responds.
  • the operation of the database server 40 is summarized as shown in the flowchart of FIG.
• In step S301, the database access unit 402 determines whether an information submission request from the analysis server 30 has been received. If an information submission request has been received (step S301, Yes branch), the database access unit 402 reads out the necessary information from the database and transmits it to the analysis server 30 (step S302). If an information submission request has not been received (step S301, No branch), the process returns to step S301 and the processing is continued.
• In step S01, the video server 20 transmits the latest video data to the analysis server 30 at predetermined intervals. That is, the video server 20 outputs real-time information on the security area to the analysis server 30.
  • the analysis server 30 extracts the face image of the person shown in the video data and calculates the feature amount (step S02).
  • the analysis server 30 requests the database server 40 to submit information on the criminal (step S03).
  • the database server 40 transmits information on the criminal to the analysis server 30 in response to the request (step S04).
  • the analysis server 30 uses the previously calculated feature amount and information about the criminal to perform a matching process of a person appearing in the video data (determination process as to whether or not a person appearing in the video data is registered in the criminal database). Execute (Step S05).
  • the processes in steps S01 to S05 are repeated.
• As a result of the collation processing, if a person shown in the video data is registered in the criminal database, the person is treated as a suspicious person, and detection of a corresponding potential suspicious person is started.
  • the situation of the site (security area) is grasped in real time via the camera 10 and the video server 20, and the presence of a suspicious person is immediately detected. If there is a suspicious person, the processing after step S11 in FIG. 16 is executed.
• In step S11, the analysis server 30 designates the necessary video data by "time (period)", "location", "camera ID", and the like, and requests the video server 20 to output the video data.
  • the video server 20 reads necessary data from the storage medium and outputs it as video data (step S12).
  • the analysis server 30 analyzes the video data to determine whether there is a potential suspicious person (step S13).
  • the analysis server 30 outputs a detection result of a potential suspicious person or the like (step S14).
• In step S13-1, the analysis server 30 requests the database server 40 to submit information regarding criminals.
  • the database server 40 transmits information about the criminal (step S13-2).
  • the analysis server 30 performs collation processing on the potential suspicious person using the feature quantity calculated from the face image of the potential suspicious person and the feature quantity acquired from the database server 40 (step S13-3).
• In step S14, the analysis server 30 may output the detection result including the result of the collation processing.
• In the monitoring system according to the first embodiment, when the presence of a suspicious person is recognized in the security area, an attempt is made to detect a potential suspicious person who is estimated to have a predetermined relationship with the suspicious person. Specifically, the monitoring system according to the first embodiment attempts to detect a potential suspicious person using the above-described four determination criteria.
• In this way, both the main monitoring target (for example, a person with a criminal record) and a target related to the main monitoring target (for example, a person who acts together with that person) can be detected.
• Criminals are not necessarily acting alone, and there may be multiple collaborators (accomplices). Even in such a case, the monitoring system according to the first embodiment can detect the potential collaborators of a suspicious person. As a result, it is possible to take measures such as strengthening the security system in advance, and to estimate the scale of the crime (crime risk) being planned based on the number of potential suspicious persons.
• In the first embodiment, a candidate that satisfies at least one of the determination criteria is set as a "potential suspicious person". However, such a process may increase the number of false positives (candidates misidentified as potential suspicious persons despite being unrelated to the suspicious person) when there are many people in the security area. For example, according to the fourth determination criterion, a candidate who is making a call at the same timing as a suspicious person is detected as a "potential suspicious person". However, when a large event venue is the security area, it is assumed that many people are talking at the same time, and as a result many "potential suspicious persons" would be detected.
• Therefore, the potential suspicious person detection unit 303 calculates, for each of the plurality of candidates, the number of the above-described determination criteria that the candidate matches, and detects a potential suspicious person based on the calculated number. More specifically, the potential suspicious person detection unit 303 performs the determination processes according to the first to fourth determination criteria for each candidate, and calculates the number of determination processes that the candidate matches as a relevance score. Thereafter, the potential suspicious person detection unit 303 performs threshold processing on the relevance score of each candidate to determine the potential suspicious persons (to narrow down the number of potential suspicious persons).
• For example, the potential suspicious person detection unit 303 creates table information as shown in FIG. 18. Referring to FIG. 18, since the candidate A matches the first to fourth determination criteria, the relevance score is calculated as "4". On the other hand, since the candidate B does not match the second and third determination criteria, the relevance score is calculated as "2". Accordingly, if the threshold for determining whether or not a candidate is a "potential suspicious person" is "3", the candidate A is detected as a potential suspicious person and the candidate B is determined to be unrelated to the suspicious person.
• Alternatively, the potential suspicious person detection unit 303 may calculate the relevance score by weighting each determination criterion. For example, as shown in FIG. 19A, a weight is given in advance to each determination criterion. Then, the potential suspicious person detection unit 303 calculates the relevance score based on the predetermined weights and the determination results. In that example, the relevance scores of the candidates A and B are "37" and "7", respectively.
• In this case as well, the potential suspicious person detection unit 303 performs threshold processing on the relevance scores calculated according to the weighted determination criteria, and detects potential suspicious persons.
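The relevance-score computation, with and without weighting, can be sketched as follows (the criterion names and weight values are illustrative only, not the values of FIG. 19):

```python
def relevance_score(criteria_met, weights=None):
    # criteria_met: {criterion: bool} over the determination criteria.
    # Without weights the score is the plain count of matching
    # criteria (as in FIG. 18); with weights it is the weighted sum
    # (as in FIG. 19).
    if weights is None:
        weights = {name: 1 for name in criteria_met}
    return sum(weights[name] for name, met in criteria_met.items() if met)

def detect_by_score(candidates, threshold, weights=None):
    # Threshold processing: keep candidates whose relevance score is
    # at or above the threshold (the "potential suspicious persons").
    return sorted(cid for cid, met in candidates.items()
                  if relevance_score(met, weights) >= threshold)
```

With the FIG. 18 example (candidate A matches all four criteria, candidate B only two) and a threshold of 3, only candidate A survives.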
• In addition, a mechanism for reducing the score may be introduced. For example, if a person determined to be a suspicious person and a person determined to be a potential suspicious person enter the event venue or the like at the same time, the potential suspicious person detection unit 303 may perform processing such as reducing the relevance score. This is based on the idea that a suspicious person and their accomplices are more likely to enter individually than to enter at the same time.
  • the detection accuracy can be improved by detecting a potential suspicious person using the relevance score.
  • the configuration of the monitoring system described in the above embodiment is an example, and is not intended to limit the configuration of the system.
  • the video server 20, the analysis server 30, and the database server 40 may be integrated, and these functions may be realized by a single device. Alternatively, some functions of each device may be realized by other devices.
• For example, the suspicious person detection process performed by the analysis server 30 (the process of the suspicious person detection unit 302) may be implemented by the database server 40. In this case, data transmission and reception between the analysis server 30 and the database server 40 can be reduced.
• The system disclosed in the present application can also be applied to systems other than a security system. For example, if the disclosure of the present application is applied to an investigation system used by the police or the like, criminals including accomplices can be identified and arrested.
  • the present disclosure can be applied to a lost child search system in a commercial facility such as a department store or a theme park.
• In that case, either the parent or the child is set as the main monitoring target, and a person related to the main monitoring target is identified by analyzing past video data. As a result, a parent searching for a lost child, or a child searching for a parent, can be quickly detected.
  • the monitoring system disclosed in the present application can be applied to prevent fraud at a test venue or the like.
• In the above embodiment, the camera 10 is assumed to be a fixed camera, but the camera used in the system may be a wearable camera worn by a guard or the like, or a camera mounted on a drone or the like.
• In the case of such cameras, since the position of the acquired video differs depending on the time, it is desirable that each camera send its own position information and the like when sending video data to the video server 20.
• In this case, the analysis server 30 performs the analysis of the video data (for example, calculation of the movement trajectory of a suspicious person) in consideration of the position information of each camera.
  • a person registered in the criminal database is treated as a “suspicious person”, but detection of a suspicious person is not limited to a method using the criminal database.
• For example, an operator may check the video transmitted from the camera and detect the "suspicious person" manually. In that case, the administrator inputs the suspicious person to the analysis server 30 using a GUI or the like, and the analysis server 30 executes the above-described method for detecting a potential suspicious person.
  • a “suspicious person” may be detected using a technique disclosed in Patent Document 2 (detection of a suspicious person using biological information). That is, a suspicious person may be detected using information other than video data (image data).
  • “person” is set for both the main monitoring target and its related monitoring target, but these monitoring targets may be “things”.
  • a package that has been left unattended for a long time may be the main monitoring target, and a person related to the package may be searched for as a “potential suspicious person”.
  • a robot, a drone, or the like may be a monitoring target. That is, the combination of the main monitoring target and the related monitoring target can be any combination of “person” and “thing”.
• In the above embodiment, detection of a potential suspicious person is performed using four determination criteria. However, detection of a potential suspicious person is not limited to the above four determination criteria (a fifth, sixth, or further determination criterion may be introduced).
• For example, a potential suspicious person may be detected by analyzing the lines of sight of the suspicious person and a candidate: a candidate whose total number of times of exchanging glances with the suspicious person is equal to or greater than a predetermined number, or a candidate whose line of sight matches that of the suspicious person, may be detected as a potential suspicious person.
  • detection of a person (potential suspicious person) suspected of having an association with the main monitoring target (suspicious person) has been described.
• Furthermore, detection of a person suspected of having an association with a potential suspicious person may also be performed. That is, a person who is detected as a potential suspicious person may be replaced with the "suspicious person", and a potential suspicious person may then be detected for that replaced suspicious person. In this way, a person suspected of having a predetermined relationship with a potential suspicious person can be detected.
  • a person who exists when a “suspicious person” is detected is treated as a potential suspicious candidate.
• However, the selection of candidates is not limited to the timing when the "suspicious person" is detected. For example, if there is a situation in which a face image cannot be extracted at the above timing, past video data may be analyzed, and all persons existing within a predetermined period may be treated as potential suspicious candidates.
• Alternatively, the monitoring system may periodically acquire video data from the video server 20 and continue the potential suspicious person detection processing by analyzing the acquired video data (video data in the future relative to the time the suspicious person was found).
  • the feature quantity of the face image is used for specifying the person, but other feature quantities may be used.
  • a feature amount extracted from clothes as well as a face image may be used.
  • the feature amount of the face image and the feature amount extracted from the clothes may be combined into one feature amount (feature amount vector).
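Combining the face feature amount and the clothing feature amount into a single feature amount vector could look like the following sketch (the per-modality normalization is an assumption; the text only states that the two are combined into one vector):

```python
import math

def combine_features(face_vec, clothes_vec):
    # Normalize each modality to unit length before concatenating so
    # that neither part dominates the comparison, then join them into
    # a single feature amount vector.
    def unit(v):
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / norm for x in v]
    return unit(face_vec) + unit(clothes_vec)

def similarity(vec_a, vec_b):
    # Dot product of the combined vectors; with unit-normalized parts
    # this behaves like a summed per-modality cosine similarity.
    return sum(a * b for a, b in zip(vec_a, vec_b))
```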
  • the video server 20 transmits the on-site situation to the analysis server 30 periodically (at a predetermined sampling interval).
  • the image data may be transmitted to the analysis server 30 only when it is determined that a person is captured by the camera, such as when there are few people to be monitored in the security area. That is, when the video server 20 tries to extract a person or a face image and can extract a person or the like, the image data at that time may be transmitted to the analysis server 30.
  • [Appendix 1] The analysis server according to the first aspect described above.
  • [Appendix 2] The analysis server according to appendix 1, comprising a first detection unit that detects the first monitoring target based on video data output in real time by the video server.
• [Appendix 3] The analysis server according to appendix 2, wherein the video server stores video data from the camera, and the analysis server includes a second detection unit that detects the second monitoring target based on past video data stored in the video server.
• [Appendix 4] The analysis server according to appendix 3, wherein the second detection unit detects the second monitoring target by analyzing past video data obtained from the video server based on a plurality of determination criteria.
• [Appendix 5] The analysis server according to appendix 4, wherein the second detection unit detects, as the second monitoring target, a target that satisfies at least one of the plurality of determination criteria.
• [Appendix 6] The analysis server according to appendix 4, wherein the second detection unit calculates, for each candidate for the second monitoring target, the number of the plurality of determination criteria that the candidate matches, and detects the second monitoring target based on the calculated number.
• [Appendix 7] The analysis server according to any one of appendices 3 to 6, wherein the first detection unit acquires information about criminals from the database and detects the first monitoring target based on the acquired information about criminals and information obtained from the video data acquired from the video server.
• [Appendix 8] The analysis server according to appendix 7, wherein the information about the criminal is a feature amount calculated from the criminal's face image, and the first detection unit detects the first monitoring target based on the feature amount calculated from the criminal's face image and a feature amount of a face image calculated from the video data acquired from the video server.
• [Appendix 9] The analysis server according to any one of appendices 3 to 8, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target captured together with the first monitoring target by a plurality of cameras.
• [Appendix 10] The analysis server according to any one of appendices 3 to 9, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target for which the time during which the distance to the first monitoring target is equal to or smaller than a predetermined value exceeds a predetermined time.
• [Appendix 11] The analysis server according to any one of appendices 3 to 10, wherein the second detection unit analyzes the video data acquired from the video server, calculates a trajectory due to movement of the first monitoring target and a trajectory due to movement of a detection candidate for the second monitoring target, and detects the second monitoring target based on whether or not the trajectory of the first monitoring target and the trajectory of the detection candidate intersect.
• [Appendix 12] The analysis server according to any one of appendices 3 to 11, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target that has performed an operation related to the first monitoring target.
  • [Appendix 13] The analysis server according to any one of appendices 1 to 12, further comprising an output unit that outputs a display screen that clearly shows the relationship between the first monitoring target and the second monitoring target.
• [Appendix 14] The monitoring system according to the second aspect described above.
• [Appendix 15] The monitoring system according to appendix 14, wherein the analysis server includes a first detection unit that detects the first monitoring target based on video data output in real time by the video server.
• [Appendix 16] The monitoring system according to appendix 15, wherein the video server stores video data from the camera, and the analysis server further includes a second detection unit that detects the second monitoring target based on past video data stored in the video server.
• [Appendix 17] The monitoring system according to appendix 16, wherein the second detection unit detects the second monitoring target by analyzing past video data obtained from the video server based on a plurality of determination criteria.
• [Appendix 18] The monitoring system according to appendix 17, wherein the second detection unit detects, as the second monitoring target, a target that satisfies at least one of the plurality of determination criteria.
• [Appendix 19] The monitoring system according to appendix 17, wherein the second detection unit calculates, for each candidate for the second monitoring target, the number of the plurality of determination criteria that the candidate matches, and detects the second monitoring target based on the calculated number.
• [Appendix 20] The monitoring system according to any one of appendices 16 to 19, wherein the first detection unit acquires information about criminals from the database and detects the first monitoring target based on the acquired information about criminals and information obtained from the video data acquired from the video server.
• [Appendix 21] The monitoring system according to appendix 20, wherein the information about the criminal is a feature amount calculated from the criminal's face image, and the first detection unit detects the first monitoring target based on the feature amount calculated from the criminal's face image and a feature amount of a face image calculated from the video data acquired from the video server.
• Appendix 22: The monitoring system according to any one of appendices 16 to 21, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether there is a target captured together with the first monitoring target by a plurality of cameras.
• Appendix 23: The monitoring system according to any one of appendices 16 to 22, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether there is a target whose distance from the first monitoring target remains equal to or smaller than a predetermined value for a time equal to or longer than a predetermined time.
• Appendix 24: The monitoring system according to any one of appendices 16 to 23, wherein the second detection unit analyzes the video data acquired from the video server, calculates a trajectory of the movement of the first monitoring target and a trajectory of the movement of a detection candidate for the second monitoring target, and detects the second monitoring target based on whether the trajectory of the first monitoring target and the trajectory of the detection candidate intersect.
• Appendix 25: The monitoring system according to any one of appendices 16 to 24, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether there is a target that has performed an action related to the first monitoring target.
• Effect: When a suspicious person is detected by a surveillance camera, a person who moves in coordination with the suspicious person or suspicious object is treated as a person associated with the suspicious person (a potential suspicious person). Since potential companions of the suspicious person can be detected, measures such as strengthening the security system can be taken in advance.
• FIG. 24 is a block diagram illustrating the configuration of the information processing apparatus. The analysis server may include the information processing apparatus illustrated in FIG. 24. The information processing apparatus includes a central processing unit (CPU: Central Processing Unit) and a memory, and may realize part or all of the functions of each unit included in the analysis server by causing the CPU to execute a program stored in the memory.
• Form 1: A monitoring system that detects a suspicious person or a suspicious object from video data of surveillance cameras or the like, wherein a person who moves in coordination with the detected suspicious person or suspicious object is treated as a person related to the suspicious person or the suspicious object.
• Form 2: The monitoring system according to Form 1, wherein the coordinated movement means that at least one of the following conditions is satisfied: the person is captured simultaneously with the suspicious person or suspicious object by a plurality of cameras; the distance from the suspicious person or suspicious object remains equal to or less than a predetermined value for a predetermined time or more; or the person performs an action highly related to the suspicious person (such as hand signaling or making and receiving mobile phone calls).
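The co-movement criteria listed above (co-appearance across multiple cameras, sustained proximity, and counting how many criteria a candidate matches) can be illustrated with a small sketch. All names, thresholds, and the ground-plane observation format below are hypothetical assumptions for illustration, not part of the claimed system:

```python
from dataclasses import dataclass

# Hypothetical observation record: which camera saw which person where, and when.
@dataclass
class Observation:
    camera_id: str
    person_id: str
    t: float   # timestamp in seconds
    x: float   # position in a common ground-plane coordinate system
    y: float

def cameras_seen_together(obs, suspect, candidate, slack=1.0):
    """Criterion 1: cameras in which both persons appear within `slack` seconds."""
    shared = set()
    for a in obs:
        if a.person_id != suspect:
            continue
        for b in obs:
            if (b.person_id == candidate and b.camera_id == a.camera_id
                    and abs(b.t - a.t) <= slack):
                shared.add(a.camera_id)
    return shared

def close_duration(obs, suspect, candidate, max_dist=3.0):
    """Criterion 2: total time the two persons stay within `max_dist` of each other."""
    sus = sorted((o for o in obs if o.person_id == suspect), key=lambda o: o.t)
    cand = {round(o.t): o for o in obs if o.person_id == candidate}
    total = 0.0
    for prev, cur in zip(sus, sus[1:]):
        partner = cand.get(round(cur.t))
        if partner and ((cur.x - partner.x) ** 2 + (cur.y - partner.y) ** 2) ** 0.5 <= max_dist:
            total += cur.t - prev.t
    return total

def matched_criteria(obs, suspect, candidate,
                     min_cameras=2, max_dist=3.0, min_time=10.0):
    """Count how many determination criteria the candidate satisfies (cf. appendix 19)."""
    count = 0
    if len(cameras_seen_together(obs, suspect, candidate)) >= min_cameras:
        count += 1
    if close_duration(obs, suspect, candidate, max_dist) >= min_time:
        count += 1
    return count
```

A candidate whose matched-criteria count meets a chosen threshold would then be reported as a second monitoring target (potential suspicious person).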

Abstract

Provided is an analysis server capable of detecting a main subject to be monitored and a subject having relevance to the main subject to be monitored. In the case where a first subject to be monitored is detected, the analysis server detects a second subject to be monitored, which is estimated to have relevance to the first subject to be monitored, on the basis of video data acquired from a video server that outputs video data from a camera to the outside.

Description

Analysis server, monitoring system, monitoring method, and program
(Description of related applications)
The present invention is based on, and claims the priority of, US Provisional Application No. 62/437747 (filed on December 22, 2016), the entire contents of which are incorporated herein by reference.
The present invention relates to an analysis server, a monitoring system, a monitoring method, and a program.
In recent years, the threat of terrorism has grown, and security at large-scale events where many people gather tends to be strengthened year by year. For example, security is provided by video surveillance using cameras fixedly installed at an event venue and wearable cameras worn by security guards.
In addition, higher-quality security services utilizing information and communication technology (ICT) are expected. In other words, it is expected that people will not merely watch video obtained from surveillance cameras, but will actively utilize information and communication technology to contribute to strengthened security. For example, the use of a system that detects a suspicious person at an early stage by introducing techniques such as face authentication and collating face images obtained from surveillance cameras against face images registered in a criminal database is under study.
Patent Document 1 discloses a technique that detects the position of a person present in an image from a surveillance camera and identifies, using an IC tag and an antenna, whether the detected person is an unidentified person other than predetermined guards and guarded persons. In the technique disclosed in that document, the age, sex, and number of persons acting together with the unidentified person are estimated, and a crime risk value is calculated based on the estimation results.
Patent Document 2 discloses a technique that confirms, based on detection information from a suspicious person detection device, whether the detected person really is a suspicious person, and further checks against data in a criminal database whether the person has a criminal record. The document also discloses notifying the user and a security company (or the police) based on the confirmation result, and selecting a street broadcasting device close to the site to issue warning information by street broadcasting via a regional broadcasting device.
Patent Document 1: Japanese Patent No. 5301973. Patent Document 2: JP 2008-204219 A.
In view of the recent state of crime such as terrorism, a crime may be committed by a single perpetrator, but may also be committed by multiple perpetrators. A crime committed by multiple perpetrators tends to be larger in scale, and the damage can be enormous. Therefore, in addition to preventing crimes by a single offender, it is even more important to prevent crimes by multiple offenders.
In this respect, the techniques disclosed in Patent Documents 1 and 2 are not sufficient. For example, Patent Document 1 discloses estimating a criminal level from the number of persons acting together with an unidentified person. However, in that document, everyone other than the guards and guarded persons is treated as unidentified, and potential companions (accomplices) of a criminal are not considered. Patent Document 2 discloses collating a suspicious person against a database, but that document likewise does not consider potential companions (accomplices) of a criminal.
An object of the present invention is to provide an analysis server, a monitoring system, a monitoring method, and a program that contribute to enabling detection of a target related to a main monitoring target in addition to the main monitoring target.
According to a first aspect of the present invention, there is provided an analysis server that, when a first monitoring target is detected, detects a second monitoring target estimated to have a relationship with the first monitoring target, based on video data acquired from a video server that outputs video data from a camera to the outside.
According to a second aspect of the present invention, there is provided a monitoring system including: a video server that outputs video data from a camera to the outside; and an analysis server that, when a first monitoring target is detected, detects a second monitoring target estimated to have a relationship with the first monitoring target, based on video data acquired from the video server.
According to a third aspect of the present invention, there is provided a monitoring method including: acquiring video data from a video server that outputs video data from a camera to the outside; and, when a first monitoring target is detected, detecting a second monitoring target estimated to have a relationship with the first monitoring target, based on the video data acquired from the video server.
According to a fourth aspect of the present invention, there is provided a program that causes a computer to execute: a process of acquiring video data from a video server that outputs video data from a camera to the outside; and a process of, when a first monitoring target is detected, detecting a second monitoring target estimated to have a relationship with the first monitoring target, based on the video data acquired from the video server.
This program can be recorded on a computer-readable storage medium. The storage medium may be non-transient, such as a semiconductor memory, a hard disk, a magnetic recording medium, or an optical recording medium. The present invention can also be embodied as a computer program product.
According to each aspect of the present invention, there are provided an analysis server, a monitoring system, a monitoring method, and a program that contribute to enabling detection of a target related to a main monitoring target in addition to the main monitoring target.
FIG. 1 is a diagram for explaining the outline of an embodiment.
FIG. 2 is a diagram showing an example of the configuration of the monitoring system according to the first embodiment.
FIG. 3 is a block diagram showing an example of the hardware configuration of the analysis server according to the first embodiment.
FIG. 4 is a diagram showing an example of the processing configuration of the video server according to the first embodiment.
FIG. 5 is a diagram showing an example of information stored in the storage medium of the video server.
FIG. 6 is a flowchart showing an example of the operation of the video server according to the first embodiment.
FIG. 7 is a diagram showing an example of the processing configuration of the analysis server according to the first embodiment.
FIGS. 8 to 11 are diagrams for explaining the operation of the potential suspicious person detection unit.
FIG. 12 is a diagram showing an example of a screen displayed by the detection result output unit.
FIG. 13 is a flowchart showing an example of the operation of the analysis server according to the first embodiment.
FIG. 14 is a diagram showing an example of the processing configuration of the database server according to the first embodiment.
FIG. 15 is a flowchart showing an example of the operation of the database server according to the first embodiment.
FIG. 16 is a sequence diagram showing an example of the operation of the monitoring system according to the first embodiment.
FIG. 17 is a sequence diagram showing another example of the operation of the monitoring system according to the first embodiment.
FIG. 18 is an example of table information created by the potential suspicious person detection unit according to the second embodiment.
FIG. 19 is a diagram for explaining the operation of the potential suspicious person detection unit according to the second embodiment.
FIG. 20 is a sequence diagram showing an example of the operation of a system according to an embodiment.
FIG. 21 is a flowchart regarding potential suspicious person detection of a system according to an embodiment.
FIG. 22 is a flowchart showing an example of the operation (a) of the video server and the operation (b) of the criminal database according to an embodiment.
FIG. 23 is a block diagram of an analysis server according to an embodiment.
FIG. 24 is a block diagram illustrating the configuration of an information processing apparatus according to an embodiment.
First, an outline of an embodiment will be described. Note that the drawing reference signs appended to this outline are attached to elements for convenience, as an example to aid understanding, and the description of this outline is not intended to impose any limitation.
When a first monitoring target is detected, the analysis server 101 according to an embodiment detects a second monitoring target estimated to have a relationship with the first monitoring target, based on video data acquired from a video server that outputs video data from a camera to the outside (see FIG. 1). The analysis server 101 may also include a first detection unit that detects the first monitoring target based on video data output in real time by the video server, and a second detection unit that detects the second monitoring target based on past video data accumulated in the video server.
The analysis server 101 analyzes the video data accumulated in the video server using, for example, four determination techniques described later, and attempts to detect the second monitoring target. As a result, in addition to the main monitoring target, a target related to the main monitoring target can be detected.
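As a rough illustration of this two-stage flow — detect the first target in live video, then search past footage for related targets — consider the following sketch. The callables `detect_primary`, `fetch_past`, and `detect_related`, the dictionary keys, and the ten-minute lookback are placeholders invented for the example; none of these names come from the specification:

```python
def monitor_step(frame, detect_primary, fetch_past, detect_related, lookback=600):
    """One monitoring cycle (illustrative):
    detect_primary(frame)             -> list of primary detections (dicts with
                                         "camera_id" and "timestamp" keys)
    fetch_past(camera_id, start, end) -> past video data from the video server
    detect_related(past, primary)     -> list of secondary detections
    """
    alerts = []
    for primary in detect_primary(frame):
        # A primary target (suspicious person) was found: pull the recent past
        # footage from the video server and search it for related targets.
        past = fetch_past(primary["camera_id"],
                          primary["timestamp"] - lookback,
                          primary["timestamp"])
        alerts.append({"primary": primary,
                       "related": detect_related(past, primary)})
    return alerts
```

Each alert pairs a detected first monitoring target with the second monitoring targets estimated to be related to it.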
Hereinafter, specific embodiments will be described in more detail with reference to the drawings. In each embodiment, the same components are denoted by the same reference signs, and their descriptions are omitted. The connection lines between blocks in each drawing include both bidirectional and unidirectional lines. A unidirectional arrow schematically shows the main flow of a signal (data) and does not exclude bidirectionality.
[First Embodiment]
The first embodiment will be described in more detail with reference to the drawings.
In the first embodiment, a case where the monitoring system is applied to a security system for an event venue or the like will be described. In the first embodiment, a person who has committed a crime in the past is set as the "main monitoring target" that the monitoring system mainly monitors. More specifically, the monitoring system according to the first embodiment treats a person who has committed a crime in the past and whose name, address, and the like are stored in a criminal database as the main monitoring target, and detects such a person as a "suspicious person". Furthermore, the monitoring system according to the first embodiment detects a monitoring target estimated to have a relationship with the main monitoring target (a target suspected of being related) as a "potential suspicious person".
[System Configuration]
FIG. 2 is a diagram illustrating an example of the configuration of the monitoring system according to the first embodiment. Referring to FIG. 2, the monitoring system includes a plurality of cameras 10-1 to 10-3, a video server 20, an analysis server 30, and a database (DB) server 40.
In the following description, when there is no particular reason to distinguish the cameras 10-1 to 10-3, they are simply referred to as "camera 10". Although FIG. 2 shows three cameras 10, this is not intended to limit the number of cameras. The plurality of cameras 10 are arranged so as to cover the entire area to be guarded; that is, the number of cameras required for the area to be guarded is installed.
Each camera 10 is connected to the video server 20 via a wired or wireless network, and outputs video data to the video server 20. Note that video data can also be regarded as a set of image data continuous in time series. Therefore, in the present disclosure, the data captured by each camera is referred to as video data; however, the data output by each camera is not limited to video data and may of course be image data.
The video server 20 includes a storage medium and accumulates the video data submitted from each camera 10. The video server 20 outputs the video data accumulated in the storage medium in response to a request from the outside (specifically, the analysis server 30).
The analysis server 30 acquires video data from the video server 20. When the main monitoring target (the above-described first monitoring target; a suspicious person) is detected, the analysis server 30 detects a monitoring target estimated to have a relationship with the main monitoring target (the above-described second monitoring target; a potential suspicious person) based on the video data acquired from the video server 20. More specifically, the analysis server 30 detects a suspicious person using the video data of each camera 10 output in real time from the video server 20 and the information accumulated in the database server 40.
In the example of FIG. 2, the analysis server 30 first detects a suspicious person 61. Thereafter, the analysis server 30 detects a potential suspicious person 62 estimated to have a predetermined relationship with the suspicious person 61. The analysis server 30 is installed, for example, in an operation center of a security company, and when a suspicious person and/or a potential suspicious person is detected, displays the result on a monitor. An operator who sees the result takes action such as rushing to the site or reporting to the police.
The database server 40 is a device that manages a criminal database in which information on criminals is accumulated. The information on a criminal is, for example, a face image of a person who has committed a crime in the past, a feature amount (feature vector) calculated from the face image, and the like. In the first embodiment, the database includes at least feature amounts calculated from face images of persons who have committed crimes in the past. As will be described later, the database server 40 transmits information on criminals in response to a request from the analysis server 30. Registering the feature amounts calculated from the face images in the database facilitates this exchange of information, because the data size of a feature amount is usually smaller than that of a face image.
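The matching step described above — comparing a feature amount computed from surveillance video against feature amounts registered in the criminal database — might be sketched as follows. The Euclidean distance metric, the threshold value, and the dictionary-based database layout are illustrative assumptions; the specification does not fix a particular matching algorithm:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_criminal(face_feature, criminal_db, threshold=0.6):
    """Compare a feature vector extracted from video against the feature
    vectors registered in the criminal database; return the ID of the best
    match if it is close enough, otherwise None.
    `criminal_db` maps a criminal ID to a precomputed feature vector."""
    best_id, best_dist = None, float("inf")
    for cid, feat in criminal_db.items():
        d = euclidean(face_feature, feat)
        if d < best_dist:
            best_id, best_dist = cid, d
    return best_id if best_dist <= threshold else None
```

In practice the feature extraction itself would be performed by a face recognition engine; only the lightweight comparison against stored feature amounts is shown here.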
[Hardware Configuration]
Next, the hardware configuration of the various devices constituting the monitoring system according to the first embodiment will be described.
FIG. 3 is a block diagram illustrating an example of the hardware configuration of the analysis server 30 according to the first embodiment. The analysis server 30 can be configured by a so-called computer (information processing apparatus) and has the configuration illustrated in FIG. 3. For example, the analysis server 30 includes a CPU (Central Processing Unit) 31, a memory 32, an input/output interface 33, and a NIC (Network Interface Card) 34 serving as a communication interface, which are connected to one another via an internal bus.
However, the configuration illustrated in FIG. 3 is not intended to limit the hardware configuration of the analysis server 30. The analysis server 30 may include hardware not shown, and need not include the input/output interface 33 if unnecessary. The number of CPUs and the like included in the analysis server 30 is also not limited to the example in FIG. 3; for example, a plurality of CPUs may be included in the analysis server 30.
The memory 32 is a RAM (Random Access Memory), a ROM (Read Only Memory), or an auxiliary storage device (such as a hard disk).
The input/output interface 33 is an interface for a display device and an input device (not shown). The display device is, for example, a liquid crystal display. The input device is, for example, a device that accepts user operations, such as a keyboard or a mouse, or a device that inputs information from an external storage device such as a USB (Universal Serial Bus) memory. The user inputs necessary information to the analysis server 30 using the keyboard, mouse, or the like.
The functions of the analysis server 30 are realized by processing modules described later. A processing module is realized, for example, by the CPU 31 executing a program stored in the memory 32. The program can be downloaded via a network or updated using a storage medium storing the program. Furthermore, a processing module may be realized by a semiconductor chip. In other words, the functions performed by the processing modules may be realized by any hardware and/or software. Also, by installing the above-described computer program in the storage unit of a computer, the computer can be caused to function as the analysis server 30. Furthermore, by causing a computer to execute the above-described computer program, the computer can execute the method for monitoring suspicious persons and potential suspicious persons.
Note that the video server 20 and the database server 40 can also be configured by information processing apparatuses in the same manner as the analysis server 30, and their basic hardware configurations can be the same as that of the analysis server 30; their descriptions are therefore omitted. The hardware configuration of the camera 10 is also obvious to those skilled in the art, and its description is omitted.
[Processing Modules]
Next, the processing modules of the various devices constituting the monitoring system according to the first embodiment will be described.
[Video Server]
FIG. 4 is a diagram illustrating an example of the processing configuration of the video server 20 according to the first embodiment. Referring to FIG. 4, the video server 20 includes a communication control unit 201, a data storage unit 202, and a data output unit 203.
The communication control unit 201 controls communication with other devices (the camera 10 and the analysis server 30). The communication control unit 201 also distributes data (packets) acquired from the outside to the appropriate processing modules. For example, when video data is acquired from the camera 10, the communication control unit 201 delivers the video data to the data storage unit 202. The communication control unit 201 also transmits data acquired from each processing module to other devices. For example, when video data (a plurality of image data items) is acquired from the data output unit 203, the communication control unit 201 transmits the video data to the analysis server 30.
When the data storage unit 202 acquires video data from a camera 10 via the communication control unit 201, the data storage unit 202 stores the acquired video data, sorted by the camera 10 from which it was acquired, in a storage medium such as an HDD (Hard Disk Drive). At that time, the data storage unit 202 compresses the acquired video data as necessary before storing it in the storage medium.
The analysis server 30 may also request the video server 20 to submit past video data. Therefore, the data storage unit 202 stores a time stamp together with the video data in the storage medium so that past video data can be easily retrieved (see FIG. 5).
The data output unit 203 outputs the video data accumulated in the storage medium to the analysis server 30. For example, the data output unit 203 outputs the video data captured by each camera 10 to the analysis server 30 in real time. That is, the data output unit 203 periodically reads the latest data from the video data (image data) stored in the storage medium and transmits the read data to the analysis server 30 via the communication control unit 201.
Furthermore, in response to a request from the analysis server 30, the data output unit 203 outputs video data (a series of image data items) in a specified range to the analysis server 30. For example, the analysis server 30 specifies the video data to be acquired by "time (period)", "location", "camera ID (Identifier)", and the like, and requests the video server 20 to submit the video data. Based on this information, the data output unit 203 identifies the data that meets the request from the accumulated video data and outputs it to the analysis server 30.
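A minimal sketch of such timestamped, per-camera storage with range retrieval is shown below. The class and method names, and the in-memory list layout, are invented for illustration; an actual video server would store compressed footage on disk as described above:

```python
from bisect import bisect_left, bisect_right
from collections import defaultdict

class VideoStore:
    """Illustrative per-camera frame store: frames are kept with their time
    stamps so that a past range can be retrieved quickly (cf. FIG. 5)."""

    def __init__(self):
        self._frames = defaultdict(list)   # camera_id -> [(timestamp, frame), ...]

    def append(self, camera_id, timestamp, frame):
        # Cameras deliver frames in time order, so appending keeps each
        # per-camera list sorted by timestamp.
        self._frames[camera_id].append((timestamp, frame))

    def query(self, camera_id, start, end):
        """Return frames with start <= timestamp <= end for one camera."""
        frames = self._frames[camera_id]
        keys = [t for t, _ in frames]
        lo = bisect_left(keys, start)
        hi = bisect_right(keys, end)
        return [f for _, f in frames[lo:hi]]
```

The `query` method corresponds to the "time (period)" and "camera ID" selection described above; a "location" filter could be layered on top by mapping locations to camera IDs.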
 The operation of the video server 20 is summarized, for example, in the flowchart shown in FIG. 6.
 In step S101, the data output unit 203 outputs the latest video data to the analysis server 30.
 In step S102, the data output unit 203 checks whether a "video data submission request" has been received from the analysis server 30.
 If a video data submission request has been received (step S102, Yes branch), the data output unit 203 reads the required range of video data from the storage medium and transmits it to the analysis server 30 (step S103).
 If no video data submission request has been received (step S102, No branch), the process returns to step S101 and continues.
[Analysis server]
 FIG. 7 is a diagram illustrating an example of the processing configuration of the analysis server 30 according to the first embodiment. Referring to FIG. 7, the analysis server 30 includes a communication control unit 301, a suspicious person detection unit 302, a potential suspicious person detection unit 303, and a detection result output unit 304.
 The communication control unit 301 controls communication with other devices in the same manner as the communication control unit 201 of the video server 20.
 The suspicious person detection unit 302 detects a suspicious person based on the video data output in real time by the video server 20. The suspicious person detection unit 302 corresponds to the first detection unit described above.
 First, the suspicious person detection unit 302 attempts to extract the face image of any person appearing in the video data acquired from the video server 20. Various techniques can be used to extract a face image from video data.
 For example, as disclosed in Reference 1 (Japanese Patent Laid-Open No. 2014-170979), a face image may be extracted by comparing an input image (image data including a face image) with a template image of a face and determining whether the difference between the two is equal to or less than a threshold. Alternatively, a model combining color information with edge direction and density may be stored in advance; when a region similar to the model is detected in an input frame, it is determined that a face exists, and the face image is extracted. It is also possible to detect a face image using a template created by exploiting the fact that the contour of the face (head) is elliptical while the eyes and mouth are rectangular. Furthermore, a face detection method may be used that exploits the luminance-distribution characteristic that the cheeks and forehead are bright while the eyes and mouth are dark, or a method that performs face detection using facial symmetry and the skin-color region and its position. Alternatively, a method may be used that statistically learns the feature distributions obtained from a large number of face and non-face training samples and determines whether the features obtained from the input image belong to the face or the non-face distribution. That is, machine learning techniques such as support vector machines may be used for face image extraction.
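The first technique mentioned, comparing an input region against a face template and thresholding the difference, can be sketched as follows. This is a toy illustration on tiny grayscale patches; the patch values and the threshold are arbitrary example numbers, not values from Reference 1:

```python
def mean_abs_difference(region, template):
    """Mean absolute pixel difference between two same-sized grayscale patches,
    given as nested lists of intensity values."""
    flat_r = [p for row in region for p in row]
    flat_t = [p for row in template for p in row]
    return sum(abs(a - b) for a, b in zip(flat_r, flat_t)) / len(flat_r)

def looks_like_face(region, template, threshold=10.0):
    """A region is accepted as a face when its difference from the
    face template is at or below the threshold."""
    return mean_abs_difference(region, template) <= threshold

template = [[100, 120], [110, 130]]  # hypothetical face template
region   = [[102, 118], [111, 129]]  # candidate region from the input image
print(looks_like_face(region, template))  # small difference -> accepted
```

A real detector would slide such a comparison over the whole frame at multiple scales; the sketch shows only the per-region threshold decision.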
 When a face image is successfully extracted, the suspicious person detection unit 302 calculates a plurality of feature values characterizing the face image (a so-called feature vector).
 For calculating a feature vector from a face image, for example, the technique disclosed in Reference 2 (Japanese Patent Laid-Open No. 2015-097000) can be used. Specifically, feature points (for example, the center points and end points of the eyes, nose, and mouth) are extracted from the face image, and the positional relationships among the extracted feature points, as well as the gray values and characteristics (periodicity, directionality, color distribution, etc.) in the vicinity of the feature points, are calculated as feature values. The feature values are then arranged (forming a set of feature values) to produce the feature vector.
 Here, if the face images from which feature vectors are calculated differ, the feature vectors also differ. Conversely, if the face images from which the feature vectors are calculated are the same, the feature vectors are identical or nearly identical.
 Thereafter, the suspicious person detection unit 302 acquires information on criminals from the database server 40. Specifically, the suspicious person detection unit 302 acquires from the database server 40 the feature values (feature vectors) calculated from the criminals' face images. Note that the feature vectors of the face images acquired from the database server 40 must be comparable with the feature vectors calculated by the suspicious person detection unit 302. In other words, the suspicious person detection unit 302 calculates feature vectors of the same type as the feature vectors stored in the database server 40 (i.e., the criminal database).
 Next, the suspicious person detection unit 302 performs a matching process between the feature vector of the face image calculated from the video data and the feature vectors acquired from the database server 40. Specifically, for each feature vector acquired from the database server 40, the suspicious person detection unit 302 calculates its similarity to the feature vector calculated from the video data.
 For example, the suspicious person detection unit 302 calculates the chi-square distance, the Euclidean distance, or the like between the two feature vectors. The calculated chi-square or Euclidean distance serves as an index of the similarity between the two feature vectors (i.e., between the two face images characterized by those feature vectors). The index of similarity between two feature vectors is not limited to the Euclidean distance or the chi-square distance; it may also be, for example, the correlation between the two feature vectors.
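As one illustration, the two distances mentioned here can be computed as follows; a minimal sketch in which the feature vectors are arbitrary example values rather than real face features:

```python
import math

def euclidean_distance(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def chi_square_distance(u, v):
    """Chi-square distance in a common form for non-negative,
    histogram-like features; terms where both entries are zero are skipped."""
    return 0.5 * sum((a - b) ** 2 / (a + b) for a, b in zip(u, v) if a + b > 0)

probe   = [0.2, 0.5, 0.3]  # feature vector computed from the video data
gallery = [0.2, 0.4, 0.4]  # feature vector from the criminal database

print(euclidean_distance(probe, gallery))
print(chi_square_distance(probe, gallery))
```

For both distances, a smaller value indicates that the two face images are more similar; a correlation-based index would instead grow with similarity, so the direction of the later threshold test depends on which index is chosen.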
 Next, the suspicious person detection unit 302 applies threshold processing to the calculated similarity and determines whether the person appearing in the video data acquired from the video server 20 is registered in the database server 40.
 When a person similar to the person appearing in the video data acquired from the video server 20 is registered in the database server 40, the suspicious person detection unit 302 designates that person as a "suspicious person". When a suspicious person is detected, the suspicious person detection unit 302 notifies the potential suspicious person detection unit 303 to that effect.
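The matching and thresholding steps above can be sketched as follows, assuming the Euclidean distance as the similarity index (smaller distance means higher similarity, so the threshold is an upper bound on distance); the database contents and the threshold value are illustrative:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def match_against_database(probe_vector, database_vectors, threshold=0.5):
    """Return the IDs of registered persons whose distance to the probe
    vector is within the threshold, i.e. similar enough to count as a match.

    database_vectors: mapping of person ID -> registered feature vector."""
    return [person_id for person_id, vec in database_vectors.items()
            if euclidean(probe_vector, vec) <= threshold]

criminal_db = {"person-1": [0.1, 0.9], "person-2": [0.8, 0.2]}
matches = match_against_database([0.15, 0.85], criminal_db)
print(matches)  # the probe is close to person-1 only
```

If `matches` is non-empty, the person in the video is designated a suspicious person and the potential suspicious person detection unit is notified.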
 The potential suspicious person detection unit 303 detects, based on the past video data accumulated in the video server 20, persons estimated to be related to the suspicious person (persons suspected of having a predetermined relationship with the suspicious person). When the suspicious person detection unit 302 detects a suspicious person, the potential suspicious person detection unit 303 attempts to detect potential suspicious persons by analyzing the past video data obtained from the video server 20 against a plurality of criteria. The potential suspicious person detection unit 303 corresponds to the second detection unit described above.
 The potential suspicious person detection unit 303 detects a person as a "potential suspicious person" when that person satisfies at least one of the following criteria.
 The first criterion is that the suspicious person and a person subject to detection as a potential suspicious person (a candidate for potential suspicious person; hereinafter simply referred to as a candidate) appear simultaneously on multiple cameras 10. The potential suspicious person detection unit 303 analyzes the video data acquired from the video server 20 and detects a potential suspicious person according to whether any candidate appears together with the suspicious person on multiple cameras 10.
 The second criterion is that the distance between the suspicious person and a candidate is short, and that this state of proximity persists for a long time. The potential suspicious person detection unit 303 analyzes the video data acquired from the video server 20 and detects a potential suspicious person according to whether there is a candidate whose distance to the suspicious person is equal to or less than a predetermined value and who remains within that distance for at least a predetermined time.
 The third criterion is that the trajectory of the suspicious person (the trajectory of movement) and the trajectory of a candidate intersect. The potential suspicious person detection unit 303 analyzes the video data acquired from the video server 20, calculates the trajectory along which the suspicious person has moved and the trajectory along which each candidate has moved, and detects a potential suspicious person according to whether the suspicious person's trajectory and a candidate's trajectory intersect.
 The fourth criterion is that a candidate performs a predetermined action related to an action of the suspicious person. The potential suspicious person detection unit 303 analyzes the video data acquired from the video server 20 and detects a potential suspicious person according to whether any candidate has performed an action related to that of the suspicious person.
 Each of the above criteria is described in detail below.
 First, the first criterion will be described with reference to FIG. 8. In FIG. 8(a), it is assumed that the suspicious person detection unit 302 has detected a suspicious person S. In FIG. 8, the areas colored gray are the coverage areas of the respective cameras 10. Although FIG. 8 also contains uncolored areas, these areas are assumed to be covered by cameras 10 not shown in the figure. The time at which the suspicious person detection unit 302 detected the suspicious person S is denoted by T.
 In the case of FIG. 8(a), the potential suspicious person detection unit 303 extracts the persons present in the security area (monitored area) at time T as candidates for "potential suspicious persons" with respect to the suspicious person S (the candidates described above). In the example of FIG. 8(a), the three persons other than the suspicious person S are extracted as candidates A to C. For example, the potential suspicious person detection unit 303 applies the face image extraction processing described above to the video data (image data) of each camera 10 at time T and extracts the candidates. If the same person appears on multiple cameras 10, the potential suspicious person detection unit 303 preferably treats persons with similar feature vectors as the same person, thereby eliminating duplicates.
 Next, the potential suspicious person detection unit 303 requests the video server 20 to submit the past video data (a series of image data) captured by each camera 10 before time T. In the example of FIG. 8, it is assumed that the potential suspicious person detection unit 303 has requested the video server 20 to submit the video data from time P3 to the current time T. FIGS. 8(b) to 8(d) show, as representatives of the video data from time P3 to time T, the image data at times P2 and P1. As for the relationship among times P1 to P3, the image data at time P1 is the newest and the image data at time P3 is the oldest.
 Upon acquiring the past video data, the potential suspicious person detection unit 303 determines whether any candidate appears simultaneously with the suspicious person on multiple cameras 10. For example, referring to FIG. 8(b), the suspicious person S and candidate A appear simultaneously on camera 10-1. Referring to FIG. 8(d), the suspicious person S and candidates A and C appear simultaneously on camera 10-3. In this case, since it is candidate A who appears together with the suspicious person S on multiple cameras 10, the potential suspicious person detection unit 303 detects candidate A as a potential suspicious person. Candidate C, who has appeared together with the suspicious person S only once, in the video data of camera 10-3, is not detected as a potential suspicious person.
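The first criterion amounts to counting, for each candidate, the number of distinct cameras on which the candidate co-occurs with the suspicious person. A minimal sketch of that counting; the sighting records mirror the situation of FIG. 8, and the two-camera threshold is illustrative:

```python
def cooccurring_cameras(sightings, suspect, candidate):
    """sightings: list of (camera_id, time, set_of_person_ids) records.
    Return the set of cameras on which the suspect and the candidate
    appear in the same frame at least once."""
    return {cam for cam, _, persons in sightings
            if suspect in persons and candidate in persons}

def first_criterion(sightings, suspect, candidates, min_cameras=2):
    """Candidates seen with the suspect on at least min_cameras
    distinct cameras are flagged as potential suspicious persons."""
    return [c for c in candidates
            if len(cooccurring_cameras(sightings, suspect, c)) >= min_cameras]

# Situation of FIG. 8: A is with S on cameras 10-1 and 10-3, C only on 10-3.
sightings = [
    ("10-1", "P2", {"S", "A"}),
    ("10-3", "P1", {"S", "A", "C"}),
]
print(first_criterion(sightings, "S", ["A", "B", "C"]))  # only A qualifies
```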
 Next, the second criterion will be described with reference to FIG. 9. In this case as well, it is assumed that the suspicious person detection unit 302 has detected the suspicious person S at time T. When the suspicious person S is detected, the potential suspicious person detection unit 303 extracts candidates A to C as in the case of the first criterion. That is, the situation of FIG. 8(a) is assumed to have been detected for the second criterion as well.
 Next, the potential suspicious person detection unit 303 requests the video server 20 to submit the past video data captured by each camera 10 before time T. For the second criterion as well, it is assumed that the submission of the video data from time P3 to time T has been requested.
 Upon acquiring the past video data, the potential suspicious person detection unit 303 calculates the trajectory of movement during the above period (times P3 to T) for each of the suspicious person S and candidates A to C. For example, the potential suspicious person detection unit 303 identifies each person (the suspicious person S and candidates A to C) in each image datum making up the video data and calculates the position of each identified person. For example, the potential suspicious person detection unit 303 calculates each person's position using the position information (coordinates) of each camera 10 together with the information on each person obtained from each image datum (the direction from the camera 10, the size of the person in the image data), and so on. The potential suspicious person detection unit 303 then connects the positions (coordinates) calculated from each image datum (at each time) to form the trajectory of each person's movement.
 For example, assume that the trajectory shown in FIG. 9(a) is calculated for the suspicious person S, the trajectory shown in FIG. 9(b) for candidate A, and the trajectory shown in FIG. 9(c) for candidate B. FIG. 9(d) superimposes these three trajectories. In FIG. 9(d), the solid line corresponds to the trajectory of the suspicious person S, the dotted line to that of candidate A, and the dash-dotted line to that of candidate B. In FIG. 9 and subsequent drawings, the times are written near the black dots on the trajectories.
 Next, the potential suspicious person detection unit 303 calculates the time during which the distance between the trajectory corresponding to the suspicious person S and the trajectory corresponding to each of candidates A to C remains within a predetermined range (hereinafter referred to as the action time). The potential suspicious person detection unit 303 can calculate the action time by computing the distance between the suspicious person and the candidate in each image datum (at each time) and counting the number of image data in which the calculated distance is equal to or less than a predetermined value.
 For example, for the trajectory of the suspicious person S shown in FIG. 9(a) and the trajectory of candidate A shown in FIG. 9(b), the two trajectories remain close (the distance between the two persons is equal to or less than the predetermined value), and the action time is calculated as spanning times P3 to T. In contrast, for the trajectory of the suspicious person S shown in FIG. 9(a) and the trajectory of candidate B shown in FIG. 9(c), the two persons are close between times P3 and P2, but far apart from time P2 through times P1 to T (the distance between the two is greater than the predetermined value).
 The potential suspicious person detection unit 303 applies threshold processing to the action time and detects any candidate whose action time is equal to or greater than the threshold as a "potential suspicious person".
 In the example of FIG. 9, candidate A is detected as a "potential suspicious person".
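The action-time computation of the second criterion can be sketched as follows: at each time step, measure the distance between the two persons and count the steps at which they are within the distance threshold. The trajectories are given here as time-aligned (x, y) samples, and all numeric values are illustrative:

```python
import math

def action_time(traj_a, traj_b, dist_threshold, step=1.0):
    """traj_a, traj_b: lists of (x, y) positions sampled at the same times.
    Return the total time during which the two persons were within
    dist_threshold of each other (step = sampling interval in time units)."""
    close_samples = sum(
        1 for (ax, ay), (bx, by) in zip(traj_a, traj_b)
        if math.hypot(ax - bx, ay - by) <= dist_threshold)
    return close_samples * step

suspect   = [(0, 0), (1, 0), (2, 0), (3, 0)]
candidate = [(0, 1), (1, 1), (2, 5), (3, 5)]  # drifts away after two steps
t = action_time(suspect, candidate, dist_threshold=2.0)
print(t)  # time spent close together
```

A candidate is then flagged as a potential suspicious person when this accumulated time meets or exceeds the time threshold.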
 Next, the third criterion will be described with reference to FIG. 10. In this case as well, it is assumed that the suspicious person detection unit 302 has detected the suspicious person S at time T. When the suspicious person S is detected, the potential suspicious person detection unit 303 extracts candidates A to C as in the case of the first criterion. That is, the situation of FIG. 8(a) is assumed to have been detected for the third criterion as well.
 Next, the potential suspicious person detection unit 303 calculates the trajectory of movement of each person (the suspicious person S and candidates A to C) in the same manner as for the second criterion. For example, assume that the trajectory shown in FIG. 10(a) is calculated for the suspicious person S, the trajectory shown in FIG. 10(b) for candidate A, and the trajectory shown in FIG. 10(c) for candidate B. FIG. 10(d) superimposes these three trajectories. In FIG. 10(d), the solid line corresponds to the trajectory of the suspicious person S, the dotted line to that of candidate A, and the dash-dotted line to that of candidate B.
 Next, the potential suspicious person detection unit 303 determines whether there is a point where the trajectory of the suspicious person S and the trajectory of any of candidates A to C intersect. If there is such an intersection, the potential suspicious person detection unit 303 further determines whether the times at which the two persons passed that point fall within a predetermined range of each other. For example, the trajectory of the suspicious person S shown in FIG. 10(a) and the trajectory of candidate A shown in FIG. 10(b) intersect at time P2. In contrast, the trajectory of the suspicious person S shown in FIG. 10(a) and the trajectory of candidate B shown in FIG. 10(c) never intersect at the same time. That is, although there is a point where the trajectory of the suspicious person S and the trajectory of candidate B cross, the times are offset, so the time-related condition of the two determinations is not satisfied.
 The potential suspicious person detection unit 303 treats as a "potential suspicious person" any candidate whose trajectory satisfies both of the above determinations (there is an intersection point, and its times are within the predetermined range).
 In the example of FIG. 10, candidate A is detected as a "potential suspicious person".
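The third criterion's two checks, that the trajectories cross and that the crossing times are close, can be sketched with a standard segment-intersection test over time-stamped trajectory samples. The trajectories and the time tolerance below are illustrative:

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4 (2-D points)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def trajectories_cross(traj_a, traj_b, max_time_gap):
    """traj_*: lists of (time, x, y) samples. A crossing counts only if the
    two segments involved start within max_time_gap of each other."""
    for (t1, *a1), (t2, *a2) in zip(traj_a, traj_a[1:]):
        for (u1, *b1), (u2, *b2) in zip(traj_b, traj_b[1:]):
            if abs(t1 - u1) <= max_time_gap and segments_intersect(a1, a2, b1, b2):
                return True
    return False

suspect = [(0, 0.0, 0.0), (1, 2.0, 2.0)]
cand_a  = [(0, 0.0, 2.0), (1, 2.0, 0.0)]  # crosses S at about the same time
cand_b  = [(5, 0.0, 2.0), (6, 2.0, 0.0)]  # same path, but much later
print(trajectories_cross(suspect, cand_a, max_time_gap=1))  # crossing counts
print(trajectories_cross(suspect, cand_b, max_time_gap=1))  # times are offset
```

Candidate B's path geometrically crosses the suspect's, but the time gap exceeds the tolerance, mirroring why candidate B is not flagged in FIG. 10.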
 Next, the fourth criterion will be described with reference to FIG. 11.
 For the fourth criterion as well, it is assumed that the suspicious person detection unit 302 has detected the suspicious person S at time T. The potential suspicious person detection unit 303 then extracts candidates as in the case of the first criterion; here, it is assumed that the situation shown in FIG. 11(a) has been detected. When the suspicious person is detected, the potential suspicious person detection unit 303 requests the video server 20 to submit past video data.
 The potential suspicious person detection unit 303 then analyzes the acquired video data and determines whether the suspicious person S is performing a predetermined action. A predetermined action is, for example, operating a smartphone or the like to talk with another person, or exchanging hand signals with another person. For example, the potential suspicious person detection unit 303 determines whether the suspicious person is performing a predetermined action by applying techniques such as pattern matching to the person images appearing in the acquired video data. For example, when the suspicious person forms an "X" or "O" sign with his or her arms or fingers, the potential suspicious person detection unit 303 determines that the suspicious person is exchanging hand signals with another person.
 When the suspicious person is performing a predetermined action, the potential suspicious person detection unit 303 determines whether any candidate is performing an action corresponding to that predetermined action at the same time (substantially the same time). For example, if the suspicious person is making a call on a smartphone or the like, it is determined whether any candidate is making a call at the same time. Alternatively, if the suspicious person is sending hand signals, it is determined whether any candidate is sending hand signals at the same time.
 The potential suspicious person detection unit 303 designates any candidate who matches the above determination as a "potential suspicious person". For example, referring to FIG. 11(b), at time P1 the suspicious person S and candidate A are both making calls. The potential suspicious person detection unit 303 therefore detects candidate A as a potential suspicious person. In contrast, although candidate B is talking with someone at time P2 (see FIG. 11(c)), candidate B is not treated as a potential suspicious person because that time differs from the time at which the suspicious person S is making a call (time P1).
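The fourth criterion reduces to a temporal join over recognized actions: for every predetermined action of the suspicious person, look for a candidate performing a corresponding action at substantially the same time. A minimal sketch, in which the action labels, time values, and tolerance are illustrative (the action recognition itself is assumed to have already been done):

```python
def simultaneous_action_candidates(actions, suspect, candidates, tolerance=0):
    """actions: list of (person_id, action_label, time) records produced by
    the action recognition step. Return the candidates who performed the same
    action as the suspect at substantially the same time (within tolerance)."""
    suspect_actions = [(lbl, t) for pid, lbl, t in actions if pid == suspect]
    hits = set()
    for pid, lbl, t in actions:
        if pid in candidates and any(
                lbl == s_lbl and abs(t - s_t) <= tolerance
                for s_lbl, s_t in suspect_actions):
            hits.add(pid)
    return sorted(hits)

# Situation of FIG. 11: S and A are on a call at P1 (=1); B calls at P2 (=2).
actions = [("S", "phone_call", 1), ("A", "phone_call", 1), ("B", "phone_call", 2)]
print(simultaneous_action_candidates(actions, "S", {"A", "B", "C"}))
```

With zero tolerance only candidate A matches; widening the tolerance would make candidate B match as well, so the tolerance encodes what "substantially the same time" means.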
 As described above, the potential suspicious person detection unit 303 determines, according to a plurality of criteria (determination conditions), whether a person related to the detected suspicious person is present in the security area. The potential suspicious person detection unit 303 may execute the above four determinations in order and stop the determination processes when a candidate satisfying a criterion is found (i.e., when a potential suspicious person is detected), or it may execute all four processes in full. Note, however, that one suspicious person does not necessarily have only one potential suspicious person; there may be two or more. In view of this possibility, the potential suspicious person detection unit 303 preferably executes the potential suspicious person detection processing (detection based on the above four criteria) for all of the selected candidates.
 In the description of the above four determination processes, video data is acquired from the video server 20 in each process. However, when the video data acquired in the course of a determination process already executed can be reused, there is of course no need to acquire the video data from the video server 20 again.
 The potential suspicious person detection unit 303 passes the determination results (the suspicious person and/or potential suspicious persons) to the detection result output unit 304.
 The detection result output unit 304 outputs the detection results of the suspicious person detection unit 302 and the potential suspicious person detection unit 303 to the outside. For example, when a suspicious person and an accomplice (potential suspicious person) are detected, the detection result output unit 304 produces output such as a request to the operator of the operations center to reinforce security in the security area. Specifically, the detection result output unit 304 displays on a monitor in the operations center, or the like, an indication that a suspicious person and an accomplice have been detected, while designating the area that requires reinforced security.
 Alternatively, the detection results may be presented to a security company operator or the like via a GUI (Graphical User Interface). For example, the detection result output unit 304 may display a screen such as that shown in FIG. 12 on a monitor in the operations center.
 For example, referring to FIG. 12, the detection result output unit 304 shows the suspicious person and the accomplice (potential suspicious person) on a map representing the security area while making clear that the two persons are related. In the example of FIG. 12, the suspicious person and the accomplice are connected by a line to make their relationship explicit. However, the example of FIG. 12 is not intended to limit the manner of indicating the suspicious person and the potential suspicious person. For example, the suspicious person and the potential suspicious person may each be circled, or may be made to blink.
 あるいは、検出結果出力部304は、潜在的不審者を検出した際の根拠をモニタ等に表示してもよい。例えば、第1の判定基準により潜在的不審者を割り出した場合には、潜在的不審者が不審者と共に写る映像データを取得した複数のカメラ10を明示してもよい。あるいは、第2、第3の判定基準により潜在的不審者を割り出した場合には、不審者及び潜在的不審者それぞれの移動軌跡をモニタ等に表示してもよい。 Alternatively, the detection result output unit 304 may display the basis when a potential suspicious person is detected on a monitor or the like. For example, when a potential suspicious person is determined based on the first determination criterion, a plurality of cameras 10 that have acquired video data of the potential suspicious person and the suspicious person may be specified. Alternatively, when a potential suspicious person is determined based on the second and third determination criteria, the movement trajectories of the suspicious person and the potential suspicious person may be displayed on a monitor or the like.
 あるいは、検出結果出力部304は、映像サーバ20から過去の映像データを取得し、不審者及び潜在的不審者の行動履歴が把握可能な動画を作成し、モニタ等に出力してもよい。つまり、検出結果出力部304は、映像サーバ20から取得した映像データを加工、編集することにより、不審者や潜在的不審者が過去にどのような行動を行っていたかが把握可能な動画を作成してもよい。 Alternatively, the detection result output unit 304 may acquire past video data from the video server 20, create a video that can grasp the action history of the suspicious person and the potential suspicious person, and output the video to a monitor or the like. That is, the detection result output unit 304 processes and edits the video data acquired from the video server 20 to create a video that can grasp what action the suspicious person or potential suspicious person has performed in the past. May be.
 さらに、不審者や潜在的不審者に関する情報(特徴量以外の情報)をデータベースサーバ40から取得可能な場合には、検出結果出力部304は、当該取得した情報も合わせてモニタ等に表示してもよい。例えば、不審者の顔画像やプロフィール(氏名、住所等)がデータベースサーバ40(あるいは、他のデータベースサーバでもよい)から取得可能な場合には、検出結果出力部304は、これらの情報を図12の表示に重畳して表示してもよい。 Further, when information about the suspicious person or potential suspicious person (information other than the feature amount) can be acquired from the database server 40, the detection result output unit 304 also displays the acquired information on a monitor or the like. Also good. For example, when the face image and profile (name, address, etc.) of the suspicious person can be acquired from the database server 40 (or another database server), the detection result output unit 304 displays these pieces of information in FIG. It may be displayed superimposed on the display.
 The operation of the analysis server 30 is summarized, for example, in the flowchart shown in FIG. 13.
 In step S201, the suspicious person detection unit 302 determines whether video data has been acquired from the video server 20.
 If no video data has been acquired (step S201, No branch), the process of step S201 is continued. If video data has been acquired (step S201, Yes branch), the suspicious person detection unit 302 extracts face images of persons from the video data and calculates their feature amounts (step S202).
 Thereafter, the suspicious person detection unit 302 requests the database server 40 to submit information about criminals (for example, feature amounts calculated from criminals' face images) (step S203).
 In step S204, the suspicious person detection unit 302 determines whether the information from the database server 40 has been received.
 If the information has not been received (step S204, No branch), the process of step S204 is repeated. If the information has been received (step S204, Yes branch), the suspicious person detection unit 302 attempts to detect a suspicious person by comparing feature amounts or the like (step S205).
 If no suspicious person is detected (step S205, No branch), the process returns to step S201 and continues.
 If a suspicious person is detected (step S205, Yes branch), the potential suspicious person detection unit 303 is notified of the detection, and the potential suspicious person detection unit 303 requests the video server 20 to submit video data covering a predetermined past period (step S206).
 In step S207, the potential suspicious person detection unit 303 determines whether the video data from the video server 20 has been acquired. If the video data cannot be received (step S207, No branch), the process of step S207 is repeated.
 If the video data has been received (step S207, Yes branch), the potential suspicious person detection unit 303 performs analysis to find potential suspicious persons related to the detected suspicious person (step S208). If no potential suspicious person is detected (step S209, No branch), the process returns to step S201 and continues. If a potential suspicious person is detected (step S209, Yes branch), the potential suspicious person detection unit 303 notifies the detection result output unit 304 of the detection result.
 The detection result output unit 304 outputs the detection result to an operation center monitor or the like (step S210).
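 The control flow of steps S202 to S209 above can be sketched as a single pass of the following function. This is only an illustrative outline: all helper callables (extract_face_features, match_suspicious, find_potential_suspects) are hypothetical stand-ins for the processing described in the text, not interfaces defined by this disclosure.

```python
def analysis_step(video, criminal_info, past_video,
                  extract_face_features, match_suspicious,
                  find_potential_suspects):
    """One pass of the FIG. 13 flow (steps S202-S209).

    Returns (suspect, potential_suspects) when both a suspicious person and
    at least one potential suspicious person are found, otherwise None.
    """
    features = extract_face_features(video)                    # S202
    suspect = match_suspicious(features, criminal_info)        # S205
    if suspect is None:                                        # S205: No branch
        return None
    potentials = find_potential_suspects(suspect, past_video)  # S206-S208
    return (suspect, potentials) if potentials else None       # S209
```

 In the actual system this step would be executed repeatedly (returning to step S201) as new video data arrives from the video server 20.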
[Database Server]
 FIG. 14 is a diagram illustrating an example of the processing configuration of the database server 40 according to the first embodiment. Referring to FIG. 14, the database server 40 includes a communication control unit 401 and a database access unit 402.
 The communication control unit 401 controls communication with other devices, in the same manner as the communication control unit 201 of the video server 20 and the like.
 The database access unit 402 accesses a database in which information about criminals and the like is stored, and processes information submission requests from the analysis server 30. More specifically, when requested by the analysis server 30 to submit information about criminals, the database access unit 402 reads the necessary information from the database and responds with it.
 The operation of the database server 40 is summarized, for example, in the flowchart shown in FIG. 15.
 In step S301, the database access unit 402 determines whether an information submission request from the analysis server 30 has been received. If an information submission request has been received (step S301, Yes branch), the database access unit 402 reads the necessary information from the database and transmits it to the analysis server 30 (step S302). If no information submission request has been received (step S301, No branch), the process returns to step S301 and continues.
[System Operation]
 Next, the operation of the monitoring system according to the first embodiment will be described with reference to FIG. 16.
 In step S01, the video server 20 transmits the latest video data to the analysis server 30 at predetermined intervals. That is, the video server 20 outputs real-time information on the security area to the analysis server 30.
 The analysis server 30 extracts the face images of the persons shown in the video data and calculates their feature amounts (step S02).
 Thereafter, the analysis server 30 requests the database server 40 to submit information about criminals (step S03).
 In response to the request, the database server 40 transmits the information about criminals to the analysis server 30 (step S04).
 Using the previously calculated feature amounts and the information about criminals, the analysis server 30 executes matching processing for the persons shown in the video data (processing to determine whether a person shown in the video data is registered in the criminal database) (step S05).
 If, as a result of the matching processing, no person shown in the video data is registered in the criminal database, the processes of steps S01 to S05 are repeated. If a person shown in the video data is registered in the criminal database, that person is treated as a suspicious person, and detection of the corresponding potential suspicious persons is started.
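 The matching in step S05 compares feature amounts against the criminal database. As a minimal sketch, the comparison could be a nearest-neighbor search under cosine similarity with a decision threshold; the disclosure does not fix a particular similarity metric or threshold, so both choices below are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_against_criminal_db(face_feature, criminal_features, threshold=0.9):
    """Return the index of the best-matching criminal entry, or None.

    `threshold` is an illustrative value; the disclosure does not specify one.
    """
    best_idx, best_sim = None, threshold
    for idx, ref in enumerate(criminal_features):
        sim = cosine_similarity(face_feature, ref)
        if sim >= best_sim:
            best_idx, best_sim = idx, sim
    return best_idx
```

 A non-None return value corresponds to the "registered in the criminal database" branch that starts the potential suspicious person detection.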
 As described above, the monitoring system according to the first embodiment grasps the situation of the site (security area) in real time via the cameras 10 and the video server 20, and immediately detects the presence of a suspicious person. If a suspicious person is present, the processing from step S11 onward in FIG. 16 is executed.
 In step S11, the analysis server 30 designates the necessary video data by "time (period)", "location", "camera ID", and the like, and requests the video server 20 to output the video data.
 In response to the request, the video server 20 reads the necessary data from the storage medium and outputs it as video data (step S12).
 Thereafter, the analysis server 30 analyzes the video data to determine whether there are potential suspicious persons (step S13).
 Thereafter, the analysis server 30 outputs the detection results for the potential suspicious persons and the like (step S14).
 Note that in the example of FIG. 16, it is not confirmed whether a detected potential suspicious person has a criminal history. Therefore, as shown in FIG. 17, it may also be determined whether the potential suspicious person is registered in the criminal database.
 Specifically, in step S13-1, the analysis server 30 requests the database server 40 to submit information about criminals. In response to the request, the database server 40 transmits the information about criminals (step S13-2). The analysis server 30 performs matching processing for the potential suspicious person, using the feature amounts calculated from the potential suspicious person's face image and the feature amounts acquired from the database server 40 (step S13-3). Thereafter, in step S14, the analysis server 30 may output the detection result including the result of this matching processing.
 As described above, when the monitoring system according to the first embodiment recognizes the presence of a suspicious person in the security area, it attempts to detect potential suspicious persons who are presumed to have a predetermined relationship with that suspicious person. Specifically, the monitoring system according to the first embodiment attempts this detection using the four criteria described above. As a result, in addition to the main monitoring target (for example, a person with a criminal record), targets related to the main monitoring target (for example, a person acting together with an ex-convict) can be detected. Criminals do not necessarily act alone; there may be multiple cooperators (accomplices). Even in such a case, the monitoring system according to the first embodiment can detect the suspicious person's potential cooperators and the like. As a result, it becomes possible to take countermeasures in advance, such as strengthening the security arrangements, and to estimate in advance the scale of the planned crime (crime risk) from, for example, the number of potential suspicious persons.
[Second Embodiment]
 Next, a second embodiment will be described in detail with reference to the drawings.
 The second embodiment describes how the accuracy of the potential suspicious person detection processing described in the first embodiment can be further improved. Since the system configuration and the configuration of each device in the second embodiment can be the same as in the first embodiment, descriptions corresponding to FIG. 2, FIG. 4, and the like are omitted.
 In the first embodiment, if a candidate satisfies any of the first to fourth criteria, that candidate is set as a "potential suspicious person". With such processing, however, false detections (persons erroneously judged to be potential suspicious persons despite being unrelated to the suspicious person) may increase when, for example, many people are present in the security area. For example, under the fourth criterion described with reference to FIG. 11, a candidate who is making a phone call at the same timing as the suspicious person is detected as a "potential suspicious person". However, when the security area is, for example, the venue of a large event, many people can be expected to be talking on the phone at the same time, and as a result many "potential suspicious persons" would be detected.
 Therefore, the potential suspicious person detection unit 303 according to the second embodiment calculates, for each of a plurality of candidates, the number of the above-described criteria that the candidate matches, and detects potential suspicious persons based on the calculated numbers. More specifically, the potential suspicious person detection unit 303 performs the determination processing for the first to fourth criteria for each candidate and calculates the number of satisfied criteria as a relevance score. Thereafter, the potential suspicious person detection unit 303 determines the potential suspicious persons by applying threshold processing to each candidate's relevance score (narrowing down the number of potential suspicious persons).
 For example, the potential suspicious person detection unit 303 creates table information as shown in FIG. 18. Referring to FIG. 18, candidate A satisfies the first to fourth criteria, so the relevance score is calculated as "4". In contrast, candidate B does not satisfy the second and third criteria, so the relevance score is calculated as "2". Accordingly, if the threshold that determines whether a candidate is a "potential suspicious person" is set to "3", candidate A is detected as a potential suspicious person, and candidate B is judged to be unrelated to the suspicious person.
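 The counting and threshold processing described above can be sketched as follows. The criterion checks are passed in as predicates, since their concrete implementations (co-appearance, distance, trajectory, and call-timing analysis) are described elsewhere in the disclosure; the table-driven predicates at the end merely reproduce the FIG. 18 example.

```python
def relevance_score(candidate, suspect, criteria):
    """Number of criteria the candidate satisfies (the count in FIG. 18)."""
    return sum(1 for check in criteria if check(candidate, suspect))

def detect_potential_suspects(candidates, suspect, criteria, threshold=3):
    """Threshold processing on the relevance score (threshold "3" in the text)."""
    return [c for c in candidates
            if relevance_score(c, suspect, criteria) >= threshold]

# FIG. 18 example: A satisfies criteria 1-4, B satisfies only criteria 1 and 4.
fig18 = {"A": {1, 2, 3, 4}, "B": {1, 4}}
criteria = [lambda cand, _sus, k=k: k in fig18[cand] for k in (1, 2, 3, 4)]
```

 With these inputs, candidate A scores 4 and passes the threshold of 3, while candidate B scores 2 and is discarded, matching the outcome described above.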
 Alternatively, the potential suspicious person detection unit 303 may calculate the relevance score by weighting each criterion. For example, as shown in FIG. 19(a), a weight is assigned to each criterion in advance. The potential suspicious person detection unit 303 then calculates the relevance score based on the predetermined weights and the determination results.
 For example, with the weights defined as in FIG. 19(a), the relevance scores of candidates A and B are "37" and "7", respectively. The potential suspicious person detection unit 303 performs threshold processing on the relevance scores calculated according to the weighted criteria, and detects the potential suspicious persons.
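 The weighted variant replaces the simple count with a weighted sum. The actual weights of FIG. 19(a) are not reproduced in this text, so the values below are purely illustrative, chosen only so that the example reproduces the scores "37" and "7" quoted above.

```python
def weighted_relevance_score(satisfied_criteria, weights):
    """Weighted sum over the satisfied criteria (keys into `weights`)."""
    return sum(weights[k] for k in satisfied_criteria)

# Hypothetical weights for criteria 1-4 (FIG. 19(a) itself is not shown here).
weights = {1: 3, 2: 10, 3: 20, 4: 4}

score_a = weighted_relevance_score({1, 2, 3, 4}, weights)  # candidate A: 37
score_b = weighted_relevance_score({1, 4}, weights)        # candidate B: 7
```

 Threshold processing is then applied to these weighted scores exactly as in the unweighted case.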
 Also, when detecting potential suspicious persons using a relevance score as in the second embodiment, a mechanism for reducing the score may be introduced. For example, in a case where the entry and exit of people at an event venue or the like can be tracked by surveillance cameras, if the person judged to be a suspicious person and the person judged to be a potential suspicious person entered at the same time, the potential suspicious person detection unit 303 may apply processing such as reducing the relevance score. This is based on the idea that a suspicious person and an accomplice are more likely to enter separately than at the same time.
 According to the second embodiment, detection accuracy can be improved by detecting potential suspicious persons using the relevance score.
 The configuration and the like of the monitoring system described in the above embodiments are examples and are not intended to limit the configuration of the system. For example, the video server 20, the analysis server 30, and the database server 40 may be integrated, and their functions may be realized by a single device. Alternatively, some functions of each device may be realized by another device. For example, the suspicious person detection processing of the analysis server 30 (the processing of the suspicious person detection unit 302) may be realized by the database server 40. In this case, the data transmission and reception between the analysis server 30 and the database server 40 can be reduced.
 In the above embodiments, the case where the monitoring system of the present disclosure is applied to a security system has been described. However, the system of the present disclosure is applicable beyond security systems. For example, if the present disclosure is applied to an investigation system used by the police or the like, criminals, including accomplices, can be identified and arrested. Alternatively, the present disclosure can be applied to a lost-child search system in a commercial facility such as a department store or a theme park. For example, by setting either the parent or the child as the main monitoring target and analyzing past video data, a person performing an action related to that main target (for example, looking around) can be identified. As a result, a parent searching for a lost child, or a child searching for a parent, can be detected quickly. Alternatively, the monitoring system of the present disclosure can be applied to the prevention of cheating at examination venues and the like.
 In the above embodiments, the cameras 10 are assumed to be fixed cameras, but the cameras used in the system may be wearable cameras worn by security guards or the like, or cameras mounted on drones or the like. In this case, since the position of the acquired video differs with time, it is desirable that each camera send its own position information and the like when sending video data to the video server 20. The analysis server 30 then analyzes the video data (for example, calculates the movement trajectory of a suspicious person or the like) taking the position information of the camera into account.
 In the above embodiments, a person registered in the criminal database is treated as a "suspicious person", but the detection of suspicious persons is not limited to methods using a criminal database. For example, an operator may check the video transmitted from the cameras and detect a "suspicious person". In this case, the administrator inputs the suspicious person to the analysis server 30 using a GUI or the like, and the analysis server 30 executes the potential suspicious person detection method described above. Alternatively, a "suspicious person" may be detected using, for example, a technique such as the one disclosed in Patent Document 2 (detection of a suspicious person using biological information). That is, a suspicious person may be detected using information other than video data (image data).
 In the above embodiments, both the main monitoring target and its related monitoring target are set to "persons", but these monitoring targets may be "objects". For example, a piece of luggage left unattended for a long time may be made the main monitoring target, and a person related to that luggage may be searched for as a "potential suspicious person". Alternatively, a robot, a drone, or the like may be a monitoring target. That is, the combination of the main monitoring target and its related monitoring target can be any combination of "person" and "object".
 In the above embodiments, potential suspicious persons are detected using four criteria, but the detection of potential suspicious persons is of course not limited to those four criteria (fifth and sixth criteria may be introduced). For example, potential suspicious persons may be detected by analyzing the lines of sight of the suspicious person and the candidates. For example, a candidate who has made eye contact with the suspicious person a predetermined number of times or more may be detected as a potential suspicious person, or a potential suspicious person may be detected when the point the suspicious person is gazing at coincides with the point the candidate is gazing at. Alternatively, when the lines of sight of the suspicious person and a candidate are directed at a place different from where the majority of people are looking (for example, the stage of a concert venue), that candidate may be detected as a potential suspicious person.
 In the above embodiments, the detection of persons suspected of being related to the main monitoring target (the suspicious person), that is, potential suspicious persons, has been described, but persons suspected of being related to a potential suspicious person may also be detected. That is, a person detected as a potential suspicious person may in turn be treated as a "suspicious person", and potential suspicious persons for that person may then be detected. In this way, even persons suspected of having a predetermined relationship with a potential suspicious person can be detected.
 In the above embodiments, persons present when the "suspicious person" is detected are treated as candidates for potential suspicious persons, but the selection of candidates is not limited to the timing at which the "suspicious person" is detected. For example, when there are circumstances such as no face image being extractable at that timing, past video data may be analyzed and all persons present within a predetermined period may be treated as candidates for potential suspicious persons.
 Alternatively, even when no "potential suspicious person" is detected at the timing when the suspicious person is detected, a "potential suspicious person" may appear in the security area after that detection. Assuming such a case, when a suspicious person is detected, the monitoring system may periodically acquire video data from the video server 20 and continue the potential suspicious person detection processing by analyzing the acquired video data (video data that lies in the future relative to the time the suspicious person was found).
 In the above embodiments, the feature amount of a face image is used to identify persons, but other feature amounts may be used. For example, when analyzing video data it is necessary to identify persons, and in that case feature amounts extracted from clothing, and not only from face images, may be used. Alternatively, the feature amount of the face image and the feature amount extracted from the clothing may be combined into one feature amount (feature vector).
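 Combining the face feature amount and the clothing feature amount into one feature vector can be as simple as concatenation, optionally with a per-modality weight. This is a minimal sketch of that idea under those assumptions, not a method specified by the disclosure.

```python
def combined_feature(face_vec, clothing_vec,
                     face_weight=1.0, clothing_weight=1.0):
    """Concatenate the face and clothing feature amounts into one feature
    vector; the (illustrative) weights let one modality dominate matching."""
    return ([face_weight * x for x in face_vec]
            + [clothing_weight * x for x in clothing_vec])
```

 The resulting vector can then be compared with reference vectors using the same similarity measure as for face-only feature amounts.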
 In the above embodiments, the case where the video server 20 periodically (at a predetermined sampling interval) transmits the situation of the site to the analysis server 30 has been described. However, when, for example, few persons to be monitored are present in the security area, image data may be transmitted to the analysis server 30 only when it is determined that a person appears in a camera's view. That is, the video server 20 may attempt to extract persons or face images and, when a person or the like can be extracted, transmit the image data at that time to the analysis server 30.
 In the plurality of flowcharts used in the above description, a plurality of steps (processes) are described in order, but the execution order of the steps executed in each embodiment is not limited to the described order. In each embodiment, the order of the illustrated steps can be changed within a range that does not interfere with the content, for example by executing processes in parallel. Further, the matters described in the above embodiments can be combined to the extent that they do not conflict.
 上記の実施形態の一部又は全部は、以下の付記のようにも記載され得るが、以下には限られない。
[付記1]
 上述の第1の視点に係る解析サーバのとおりである。
[付記2]
 前記映像サーバがリアルタイムに出力する映像データに基づき前記第1の監視対象を検出する、第1の検出部を備える、付記1の解析サーバ。
[付記3]
 前記映像サーバは前記カメラからの映像データを蓄積し、前記映像サーバに蓄積された過去の映像データに基づき、前記第2の監視対象を検出する、第2の検出部を備える、付記2の解析サーバ。
[付記4]
 前記第2の検出部は、
 前記映像サーバから得られる過去の映像データを複数の判定基準に基づき解析することで、前記第2の監視対象を検出する、付記3の解析サーバ。
[付記5]
 前記第2の検出部は、
 前記複数の判定基準のうち少なくとも1つを満たす対象を前記第2の監視対象として検出する、付記4の解析サーバ。
[付記6]
 前記第2の検出部は、
 前記第2の監視対象となる候補に関して、前記複数の判定基準に合致する数を算出し、前記算出された数に基づき前記第2の監視対象を検出する、付記4の解析サーバ。
[付記7]
 前記第1の検出部は、
 犯罪者に関する情報をデータベースから取得し、
 前記取得した犯罪者に関する情報と前記映像サーバから取得した映像データから得られる情報とに基づき、前記第1の監視対象を検出する、付記3乃至6のいずれか一に記載の解析サーバ。
[付記8]
 前記犯罪者に関する情報は犯罪者の顔画像から算出された特徴量であり、
 前記第1の検出部は、前記犯罪者の顔画像から算出された特徴量と、前記映像サーバから取得した映像データから算出した顔画像の特徴量と、に基づき前記第1の監視対象を検出する、付記7の解析サーバ。
[付記9]
 前記第2の検出部は、
 前記映像サーバから取得した映像データを解析し、複数のカメラにおいて前記第1の監視対象と共に写る対象が存在するか否かにより、前記第2の監視対象を検出する、付記3乃至8のいずれか一に記載の解析サーバ。
[付記10]
 前記第2の検出部は、
 前記映像サーバから取得した映像データを解析し、前記第1の監視対象との間の距離が所定値以下、且つ、前記第1の監視対象との間の距離が前記所定以下の状態が所定の時間以上となる対象が存在するか否かにより、前記第2の監視対象を検出する、付記3乃至9のいずれか一に記載の解析サーバ。
[付記11]
 前記第2の検出部は、
 前記映像サーバから取得した映像データを解析して、前記第1の監視対象が移動したことによる軌跡と、前記第2の監視対象の検出候補が移動したことによる軌跡と、を算出し、前記第1の監視対象の軌跡と前記検出候補の軌跡が交差するか否かにより、前記第2の監視対象を検出する、付記3乃至10のいずれか一に記載の解析サーバ。
[付記12]
 前記第2の検出部は、
 前記映像サーバから取得した映像データを解析し、前記第1の監視対象と関連する動作を行った対象が存在するか否かにより、前記第2の監視対象を検出する、付記3乃至11のいずれか一に記載の解析サーバ。
[付記13]
 前記第1の監視対象と前記第2の監視対象の関連性を明示するような表示画面を出力する、出力部をさらに備える、付記1乃至12のいずれか一に記載の解析サーバ。
[付記14]
 上述の第2の視点に係る監視システムのとおりである。
[付記15]
 前記解析サーバは、
 前記映像サーバがリアルタイムに出力する映像データに基づき前記第1の監視対象を検出する、第1の検出部を備える、付記14の監視システム。
[付記16]
 前記映像サーバは前記カメラからの映像データを蓄積し、
 前記解析サーバは、
 前記映像サーバに蓄積された過去の映像データに基づき、前記第2の監視対象を検出する、第2の検出部を備える、付記15の監視システム。
[付記17]
 前記第2の検出部は、
 前記映像サーバから得られる過去の映像データを複数の判定基準に基づき解析することで、前記第2の監視対象を検出する、付記16の監視システム。
[付記18]
 前記第2の検出部は、
 前記複数の判定基準のうち少なくとも1つを満たす対象を前記第2の監視対象として検出する、付記17の監視システム。
[付記19]
 前記第2の検出部は、
 前記第2の監視対象となる候補に関して、前記複数の判定基準に合致する数を算出し、前記算出された数に基づき前記第2の監視対象を検出する、付記17の監視システム。
[付記20]
 前記第1の検出部は、
 犯罪者に関する情報をデータベースから取得し、
 前記取得した犯罪者に関する情報と前記映像サーバから取得した映像データから得られる情報とに基づき、前記第1の監視対象を検出する、付記16乃至19のいずれか一に記載の監視システム。
[付記21]
 前記犯罪者に関する情報は犯罪者の顔画像から算出された特徴量であり、
 前記第1の検出部は、前記犯罪者の顔画像から算出された特徴量と、前記映像サーバから取得した映像データから算出した顔画像の特徴量と、に基づき前記第1の監視対象を検出する、付記20の監視システム。
[付記22]
 前記第2の検出部は、
 前記映像サーバから取得した映像データを解析し、複数のカメラにおいて前記第1の監視対象と共に写る対象が存在するか否かにより、前記第2の監視対象を検出する、付記16乃至21のいずれか一に記載の監視システム。
[付記23]
 前記第2の検出部は、
 前記映像サーバから取得した映像データを解析し、前記第1の監視対象との間の距離が所定値以下、且つ、前記第1の監視対象との間の距離が前記所定以下の状態が所定の時間以上となる対象が存在するか否かにより、前記第2の監視対象を検出する、付記16乃至22のいずれか一に記載の監視システム。
[付記24]
 前記第2の検出部は、
 前記映像サーバから取得した映像データを解析して、前記第1の監視対象が移動したことによる軌跡と、前記第2の監視対象の検出候補が移動したことによる軌跡と、を算出し、前記第1の監視対象の軌跡と前記検出候補の軌跡が交差するか否かにより、前記第2の監視対象を検出する、付記16乃至23のいずれか一に記載の監視システム。
[付記25]
 前記第2の検出部は、
 前記映像サーバから取得した映像データを解析し、前記第1の監視対象と関連する動作を行った対象が存在するか否かにより、前記第2の監視対象を検出する、付記16乃至24のいずれか一に記載の監視システム。
[付記26]
 前記解析サーバは、
 前記第1の監視対象と前記第2の監視対象の関連性を明示するような表示画面を出力する、出力部をさらに備える、付記14乃至25のいずれか一に記載の監視システム。
[付記27]
 上述の第3の視点に係る監視方法のとおりである。
[付記28]
 上述の第4の視点に係るプログラムのとおりである。
 なお、付記27及び28の形態は、付記1の形態と同様に、付記2の形態~付記13の形態に展開することが可能である。
A part or all of the above embodiments can also be described as in the following supplementary notes, but are not limited thereto.
[Appendix 1]
The analysis server according to the first aspect described above.
[Appendix 2]
The analysis server according to appendix 1, comprising a first detection unit that detects the first monitoring target based on video data output in real time by the video server.
[Appendix 3]
The analysis server according to appendix 2, wherein the video server stores video data from the camera, and the analysis server includes a second detection unit that detects the second monitoring target based on past video data stored in the video server.
[Appendix 4]
The analysis server according to appendix 3, wherein the second detection unit detects the second monitoring target by analyzing past video data obtained from the video server based on a plurality of determination criteria.
[Appendix 5]
The analysis server according to appendix 4, wherein the second detection unit detects, as the second monitoring target, a target that satisfies at least one of the plurality of determination criteria.
[Appendix 6]
The analysis server according to appendix 4, wherein the second detection unit calculates, for a candidate for the second monitoring target, the number of the plurality of determination criteria that the candidate matches, and detects the second monitoring target based on the calculated number.
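By way of illustration only (not part of the original disclosure), the criterion-counting scheme of Appendix 6 could be sketched as follows; representing candidates as dictionaries and determination criteria as boolean predicates is an assumption made for this sketch, not something the disclosure specifies:

```python
from typing import Callable, Iterable, List

# A determination criterion is modeled here as a predicate over a candidate's observations.
Criterion = Callable[[dict], bool]

def detect_by_criterion_count(candidates: Iterable[dict],
                              criteria: List[Criterion],
                              threshold: int) -> List[dict]:
    """Return the candidates that match at least `threshold` of the criteria."""
    detected = []
    for cand in candidates:
        matched = sum(1 for crit in criteria if crit(cand))  # count satisfied criteria
        if matched >= threshold:
            detected.append(cand)
    return detected
```

With `threshold=1` this reduces to the "at least one criterion" rule of Appendix 5, so both variants can share one implementation.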
[Appendix 7]
The analysis server according to any one of appendices 3 to 6, wherein the first detection unit acquires information about a criminal from a database, and detects the first monitoring target based on the acquired information about the criminal and information obtained from the video data acquired from the video server.
[Appendix 8]
The analysis server according to appendix 7, wherein the information about the criminal is a feature amount calculated from the criminal's face image, and the first detection unit detects the first monitoring target based on the feature amount calculated from the criminal's face image and a feature amount of a face image calculated from the video data acquired from the video server.
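As a minimal sketch (assuming, for illustration only, that the feature amounts are fixed-length numeric vectors compared by Euclidean distance — the disclosure does not specify the feature representation or metric), the matching of Appendix 8 could look like:

```python
import math

def face_match(db_feature, observed_feature, threshold=0.6):
    """Compare two face feature vectors.

    Returns True when their Euclidean distance is within `threshold`.
    The 0.6 default is an arbitrary placeholder, not a value from the disclosure.
    """
    dist = math.sqrt(sum((a - b) ** 2
                         for a, b in zip(db_feature, observed_feature)))
    return dist <= threshold
```

In practice the database-side feature would come from the criminal-information database (DB server 40) and the observed-side feature from faces extracted out of the video server's footage.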
[Appendix 9]
The analysis server according to any one of appendices 3 to 8, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target that appears together with the first monitoring target in a plurality of cameras.
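A minimal sketch of the co-appearance check of Appendix 9, under the assumption (made only for this sketch) that upstream video analysis has already reduced the footage to per-camera sets of person identifiers:

```python
from collections import defaultdict

def co_appearing(sightings, target_id, min_cameras=2):
    """Find persons who appear together with the target in multiple cameras.

    sightings: iterable of (camera_id, set_of_person_ids_in_view).
    Returns the ids seen alongside `target_id` in at least `min_cameras`
    distinct cameras.
    """
    cams_together = defaultdict(set)
    for cam, ids in sightings:
        if target_id in ids:                 # only frames containing the target matter
            for pid in ids:
                if pid != target_id:
                    cams_together[pid].add(cam)
    return {pid for pid, cams in cams_together.items()
            if len(cams) >= min_cameras}
```

Repeated co-appearance across distinct cameras is what distinguishes a likely associate from a coincidental bystander in a single view.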
[Appendix 10]
The analysis server according to any one of appendices 3 to 9, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target whose distance to the first monitoring target is equal to or smaller than a predetermined value, with that state lasting for a predetermined time or longer.
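The distance-and-duration condition of Appendix 10 could be sketched as follows, assuming (for illustration) that per-frame distances between the first monitoring target and a candidate have already been estimated from the video; the threshold values are placeholders, not values from the disclosure:

```python
def sustained_proximity(samples, d_max, t_min):
    """Check whether a candidate stays close to the target long enough.

    samples: list of (timestamp, distance) pairs, sorted by time.
    Returns True if the distance remains <= d_max over a continuous
    span of at least t_min.
    """
    span_start = None
    for t, d in samples:
        if d <= d_max:
            if span_start is None:
                span_start = t              # a close-proximity span begins
            if t - span_start >= t_min:
                return True                 # proximity sustained long enough
        else:
            span_start = None               # proximity broken; reset the span
    return False
```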
[Appendix 11]
The analysis server according to any one of appendices 3 to 10, wherein the second detection unit analyzes the video data acquired from the video server, calculates a trajectory of movement of the first monitoring target and a trajectory of movement of a detection candidate for the second monitoring target, and detects the second monitoring target based on whether or not the trajectory of the first monitoring target and the trajectory of the detection candidate intersect.
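A sketch of the trajectory-crossing test of Appendix 11, assuming trajectories are available as 2D point sequences (e.g., ground-plane coordinates estimated from the video); treating "intersect" as strict segment crossing via orientation tests is one possible reading, chosen here for illustration:

```python
def _orient(p, q, r):
    """Signed area test: >0 if p->q->r turns left, <0 if right, 0 if collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a1, a2, b1, b2):
    """True if segment a1-a2 strictly crosses segment b1-b2."""
    d1 = _orient(b1, b2, a1)
    d2 = _orient(b1, b2, a2)
    d3 = _orient(a1, a2, b1)
    d4 = _orient(a1, a2, b2)
    return d1 * d2 < 0 and d3 * d4 < 0

def trajectories_intersect(traj_a, traj_b):
    """traj_a, traj_b: lists of (x, y) points; check every segment pair."""
    for i in range(len(traj_a) - 1):
        for j in range(len(traj_b) - 1):
            if segments_cross(traj_a[i], traj_a[i + 1],
                              traj_b[j], traj_b[j + 1]):
                return True
    return False
```

Collinear overlaps and shared endpoints are not counted as crossings here; a production system would need a policy for those edge cases.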
[Appendix 12]
The analysis server according to any one of appendices 3 to 11, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target that has performed an action related to the first monitoring target.
[Appendix 13]
The analysis server according to any one of appendices 1 to 12, further comprising an output unit that outputs a display screen that clearly shows the relationship between the first monitoring target and the second monitoring target.
[Appendix 14]
The monitoring system according to the second aspect described above.
[Appendix 15]
The monitoring system according to appendix 14, wherein the analysis server includes a first detection unit that detects the first monitoring target based on video data output in real time by the video server.
[Appendix 16]
The monitoring system according to appendix 15, wherein the video server stores video data from the camera, and the analysis server includes a second detection unit that detects the second monitoring target based on past video data stored in the video server.
[Appendix 17]
The monitoring system according to appendix 16, wherein the second detection unit detects the second monitoring target by analyzing past video data obtained from the video server based on a plurality of determination criteria.
[Appendix 18]
The monitoring system according to appendix 17, wherein the second detection unit detects, as the second monitoring target, a target that satisfies at least one of the plurality of determination criteria.
[Appendix 19]
The monitoring system according to appendix 17, wherein the second detection unit calculates, for a candidate for the second monitoring target, the number of the plurality of determination criteria that the candidate matches, and detects the second monitoring target based on the calculated number.
[Appendix 20]
The monitoring system according to any one of appendices 16 to 19, wherein the first detection unit acquires information about a criminal from a database, and detects the first monitoring target based on the acquired information about the criminal and information obtained from the video data acquired from the video server.
[Appendix 21]
The monitoring system according to appendix 20, wherein the information about the criminal is a feature amount calculated from the criminal's face image, and the first detection unit detects the first monitoring target based on the feature amount calculated from the criminal's face image and a feature amount of a face image calculated from the video data acquired from the video server.
[Appendix 22]
The monitoring system according to any one of appendices 16 to 21, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target that appears together with the first monitoring target in a plurality of cameras.
[Appendix 23]
The monitoring system according to any one of appendices 16 to 22, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target whose distance to the first monitoring target is equal to or smaller than a predetermined value, with that state lasting for a predetermined time or longer.
[Appendix 24]
The monitoring system according to any one of appendices 16 to 23, wherein the second detection unit analyzes the video data acquired from the video server, calculates a trajectory of movement of the first monitoring target and a trajectory of movement of a detection candidate for the second monitoring target, and detects the second monitoring target based on whether or not the trajectory of the first monitoring target and the trajectory of the detection candidate intersect.
[Appendix 25]
The monitoring system according to any one of appendices 16 to 24, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target that has performed an action related to the first monitoring target.
[Appendix 26]
The monitoring system according to any one of appendices 14 to 25, wherein the analysis server further includes an output unit that outputs a display screen that clearly indicates the relationship between the first monitoring target and the second monitoring target.
[Appendix 27]
The monitoring method according to the third aspect described above.
[Appendix 28]
The program according to the fourth aspect described above.
Note that the forms of Supplementary Notes 27 and 28 can be expanded to the forms of Supplementary Note 2 to Supplementary Note 13 similarly to the form of Supplementary Note 1.
 本願開示では、以下の形態も可能である。 In the present disclosure, the following modes are also possible.
解決策
監視カメラで不審者を検出した場合、前記不審者/不審物と連携した動きをする人物を前記不審者と関連する人物(潜在的な不審者)とする。
効果
不審者の潜在的な仲間を検出することができるため、事前に警備体制の強化など対策を打つことが可能になる。
Solution: When a suspicious person is detected by a surveillance camera, a person who moves in coordination with the suspicious person/suspicious object is treated as a person related to the suspicious person (a potential suspicious person).
Effect: Since potential associates of a suspicious person can be detected, countermeasures such as strengthening the security system can be taken in advance.
 図24は、情報処理装置の構成を例示するブロック図である。実施形態に係る解析サーバは、上図に示す情報処理装置を備えていてもよい。情報処理装置は、中央処理装置(CPU:Central Processing Unit)およびメモリを有する。情報処理装置は、メモリに記憶されているプログラムをCPUが実行することにより、解析サーバが有する各部の機能の一部または全部を実現してもよい。 FIG. 24 is a block diagram illustrating the configuration of the information processing apparatus. The analysis server according to the embodiment may include the information processing apparatus illustrated in the upper diagram. The information processing apparatus includes a central processing unit (CPU: Central Processing Unit) and a memory. The information processing apparatus may realize part or all of the functions of each unit included in the analysis server by causing the CPU to execute a program stored in the memory.
形態1
監視カメラ等の映像データから不審者または不審物を検出する監視システムにおいて、
検出した不審者または不審物と連携した動きをする人物を前記不審者または前記不審物と関連のある人物とする
ことを特徴とする監視システム。
形態2
前記連携した動きとは、以下の条件のうち少なくとも一つ以上を満たすことを指す
複数のカメラで同時に映る
不審者/不審物との距離が所定値以下の場合が所定時間以上
不審者と関連性の高い動作(携帯の発信・着信等、手信号)
不審者の動線と軌跡が交わる、
形態1に記載の監視システム。
Form 1
A monitoring system that detects a suspicious person or a suspicious object from video data from surveillance cameras or the like,
wherein a person who moves in coordination with the detected suspicious person or suspicious object is treated as a person related to the suspicious person or the suspicious object.
Form 2
The coordinated movement means satisfying at least one of the following conditions:
appearing in a plurality of cameras at the same time;
the distance to the suspicious person/suspicious object being at or below a predetermined value for at least a predetermined time;
an action highly related to the suspicious person (making or receiving mobile phone calls, hand signals, etc.);
the person's trajectory intersecting the suspicious person's flow line.
The monitoring system according to Form 1.
　なお、引用した上記の特許文献等の各開示は、本書に引用をもって繰り込むものとする。本発明の全開示（請求の範囲を含む）の枠内において、さらにその基本的技術思想に基づいて、実施形態ないし実施例の変更・調整が可能である。また、本発明の全開示の枠内において種々の開示要素（各請求項の各要素、各実施形態ないし実施例の各要素、各図面の各要素等を含む）の多様な組み合わせ、ないし、選択が可能である。すなわち、本発明は、請求の範囲を含む全開示、技術的思想にしたがって当業者であればなし得るであろう各種変形、修正を含むことは勿論である。特に、本書に記載した数値範囲については、当該範囲内に含まれる任意の数値ないし小範囲が、別段の記載のない場合でも具体的に記載されているものと解釈されるべきである。 In addition, each disclosure of the patent documents and the like cited above is incorporated herein by reference. Within the scope of the entire disclosure (including the claims) of the present invention, the embodiments and examples may be changed and adjusted based on the basic technical concept thereof. Various combinations and selections of the various disclosed elements (including each element of each claim, each element of each embodiment and example, each element of each drawing, and so on) are possible within the scope of the entire disclosure of the present invention. That is, the present invention naturally includes various variations and modifications that those skilled in the art could make in accordance with the entire disclosure, including the claims, and the technical concept. In particular, with respect to the numerical ranges described herein, any numerical value or sub-range included within such a range should be construed as being specifically described even in the absence of an explicit statement.
10、10-1~10-3 カメラ
20 映像サーバ
30、101 解析サーバ
31 CPU(Central Processing Unit)
32 メモリ
33 入出力インターフェイス
34 NIC(Network Interface Card)
40 データベースサーバ(DBサーバ)
61 不審者
62 潜在的不審者
201、301、401 通信制御部
202 データ蓄積部
203 データ出力部
302 不審者検出部
303 潜在的不審者検出部
304 検出結果出力部
402 データベースアクセス部
10, 10-1 to 10-3 Camera
20 Video server
30, 101 Analysis server
31 CPU (Central Processing Unit)
32 Memory
33 Input/output interface
34 NIC (Network Interface Card)
40 Database server (DB server)
61 Suspicious person
62 Potential suspicious person
201, 301, 401 Communication control unit
202 Data storage unit
203 Data output unit
302 Suspicious person detection unit
303 Potential suspicious person detection unit
304 Detection result output unit
402 Database access unit

Claims (16)

  1.  第1の監視対象が検出された場合、前記第1の監視対象と関係を有すると推定される第2の監視対象を、カメラからの映像データを外部に出力する映像サーバから取得した映像データに基づき検出する、解析サーバ。 An analysis server that, when a first monitoring target is detected, detects a second monitoring target estimated to have a relationship with the first monitoring target, based on video data acquired from a video server that outputs video data from a camera to the outside.
  2.  前記映像サーバがリアルタイムに出力する映像データに基づき前記第1の監視対象を検出する、第1の検出部を備える、請求項1の解析サーバ。 The analysis server according to claim 1, further comprising a first detection unit that detects the first monitoring target based on video data output in real time by the video server.
  3.  前記映像サーバは前記カメラからの映像データを蓄積し、前記映像サーバに蓄積された過去の映像データに基づき、前記第2の監視対象を検出する、第2の検出部を備える、請求項2の解析サーバ。 The analysis server according to claim 2, wherein the video server stores video data from the camera, and the analysis server includes a second detection unit that detects the second monitoring target based on past video data stored in the video server.
  4.  前記第2の検出部は、
     前記映像サーバから得られる過去の映像データを複数の判定基準に基づき解析することで、前記第2の監視対象を検出する、請求項3の解析サーバ。
    The analysis server according to claim 3, wherein the second detection unit detects the second monitoring target by analyzing past video data obtained from the video server based on a plurality of determination criteria.
  5.  前記第2の検出部は、
     前記複数の判定基準のうち少なくとも1つを満たす対象を前記第2の監視対象として検出する、請求項4の解析サーバ。
    The analysis server according to claim 4, wherein the second detection unit detects, as the second monitoring target, a target that satisfies at least one of the plurality of determination criteria.
  6.  前記第2の検出部は、
     前記第2の監視対象となる候補に関して、前記複数の判定基準に合致する数を算出し、前記算出された数に基づき前記第2の監視対象を検出する、請求項4の解析サーバ。
    The analysis server according to claim 4, wherein the second detection unit calculates, for a candidate for the second monitoring target, the number of the plurality of determination criteria that the candidate matches, and detects the second monitoring target based on the calculated number.
  7.  前記第1の検出部は、
     犯罪者に関する情報をデータベースから取得し、
     前記取得した犯罪者に関する情報と前記映像サーバから取得した映像データから得られる情報とに基づき、前記第1の監視対象を検出する、請求項3乃至6のいずれか一項に記載の解析サーバ。
    The analysis server according to any one of claims 3 to 6, wherein the first detection unit acquires information about a criminal from a database, and detects the first monitoring target based on the acquired information about the criminal and information obtained from the video data acquired from the video server.
  8.  前記犯罪者に関する情報は犯罪者の顔画像から算出された特徴量であり、
     前記第1の検出部は、前記犯罪者の顔画像から算出された特徴量と、前記映像サーバから取得した映像データから算出した顔画像の特徴量と、に基づき前記第1の監視対象を検出する、請求項7の解析サーバ。
    The analysis server according to claim 7, wherein the information about the criminal is a feature amount calculated from the criminal's face image, and the first detection unit detects the first monitoring target based on the feature amount calculated from the criminal's face image and a feature amount of a face image calculated from the video data acquired from the video server.
  9.  前記第2の検出部は、
     前記映像サーバから取得した映像データを解析し、複数のカメラにおいて前記第1の監視対象と共に写る対象が存在するか否かにより、前記第2の監視対象を検出する、請求項3乃至8のいずれか一項に記載の解析サーバ。
    The analysis server according to any one of claims 3 to 8, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target that appears together with the first monitoring target in a plurality of cameras.
  10.  前記第2の検出部は、
     前記映像サーバから取得した映像データを解析し、前記第1の監視対象との間の距離が所定値以下、且つ、前記第1の監視対象との間の距離が前記所定以下の状態が所定の時間以上となる対象が存在するか否かにより、前記第2の監視対象を検出する、請求項3乃至9のいずれか一項に記載の解析サーバ。
    The analysis server according to any one of claims 3 to 9, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target whose distance to the first monitoring target is equal to or smaller than a predetermined value, with that state lasting for a predetermined time or longer.
  11.  前記第2の検出部は、
     前記映像サーバから取得した映像データを解析して、前記第1の監視対象が移動したことによる軌跡と、前記第2の監視対象の検出候補が移動したことによる軌跡と、を算出し、前記第1の監視対象の軌跡と前記検出候補の軌跡が交差するか否かにより、前記第2の監視対象を検出する、請求項3乃至10のいずれか一項に記載の解析サーバ。
    The analysis server according to any one of claims 3 to 10, wherein the second detection unit analyzes the video data acquired from the video server, calculates a trajectory of movement of the first monitoring target and a trajectory of movement of a detection candidate for the second monitoring target, and detects the second monitoring target based on whether or not the trajectory of the first monitoring target and the trajectory of the detection candidate intersect.
  12.  前記第2の検出部は、
     前記映像サーバから取得した映像データを解析し、前記第1の監視対象と関連する動作を行った対象が存在するか否かにより、前記第2の監視対象を検出する、請求項3乃至11のいずれか一項に記載の解析サーバ。
    The analysis server according to any one of claims 3 to 11, wherein the second detection unit analyzes the video data acquired from the video server and detects the second monitoring target based on whether or not there is a target that has performed an action related to the first monitoring target.
  13.  前記第1の監視対象と前記第2の監視対象の関連性を明示するような表示画面を出力する、出力部をさらに備える、請求項1乃至12のいずれか一項に記載の解析サーバ。 The analysis server according to any one of claims 1 to 12, further comprising an output unit that outputs a display screen that clearly shows the relationship between the first monitoring target and the second monitoring target.
  14.  カメラからの映像データを外部に出力する、映像サーバと、
     第1の監視対象が検出された場合、前記第1の監視対象と関係を有すると推定される第2の監視対象を前記映像サーバから取得した映像データに基づき検出する、解析サーバと、
     を含む、監視システム。
    A monitoring system including: a video server that outputs video data from a camera to the outside; and an analysis server that, when a first monitoring target is detected, detects a second monitoring target estimated to have a relationship with the first monitoring target, based on video data acquired from the video server.
  15.  カメラからの映像データを外部に出力する映像サーバから映像データを取得し、
     第1の監視対象が検出された場合、前記第1の監視対象と関係を有すると推定される第2の監視対象を前記映像サーバから取得した映像データに基づき検出すること、
     を含む、監視方法。
    A monitoring method including: acquiring video data from a video server that outputs video data from a camera to the outside; and, when a first monitoring target is detected, detecting a second monitoring target estimated to have a relationship with the first monitoring target, based on the video data acquired from the video server.
  16.  カメラからの映像データを外部に出力する映像サーバから映像データを取得する処理と、
     第1の監視対象が検出された場合、前記第1の監視対象と関係を有すると推定される第2の監視対象を前記映像サーバから取得した映像データに基づき検出する処理と、
     をコンピュータに実行させるプログラム。
    A program that causes a computer to execute: a process of acquiring video data from a video server that outputs video data from a camera to the outside; and a process of, when a first monitoring target is detected, detecting a second monitoring target estimated to have a relationship with the first monitoring target, based on the video data acquired from the video server.
PCT/JP2017/008327 2016-12-22 2017-03-02 Analysis server, monitoring system, monitoring method, and program WO2018116488A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2018557514A JP7040463B2 (en) 2016-12-22 2017-03-02 Analysis server, monitoring system, monitoring method and program
JP2022034125A JP2022082561A (en) 2016-12-22 2022-03-07 Analysis server, monitoring system, monitoring method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662437747P 2016-12-22 2016-12-22
US62/437747 2016-12-22

Publications (1)

Publication Number Publication Date
WO2018116488A1 true WO2018116488A1 (en) 2018-06-28

Family

ID=62626168

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/008327 WO2018116488A1 (en) 2016-12-22 2017-03-02 Analysis server, monitoring system, monitoring method, and program

Country Status (2)

Country Link
JP (2) JP7040463B2 (en)
WO (1) WO2018116488A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684162A (en) * 2018-11-09 2019-04-26 平安科技(深圳)有限公司 Equipment state prediction method, system, terminal and computer readable storage medium
CN110462621A (en) * 2019-03-29 2019-11-15 阿里巴巴集团控股有限公司 Sensitive data element is managed in block chain network
WO2020049980A1 (en) * 2018-09-06 2020-03-12 Nec Corporation A method for identifying potential associates of at least one target person, and an identification device
WO2020050003A1 (en) * 2018-09-06 2020-03-12 Nec Corporation Method, identification device and non-transitory computer readable medium for multi-layer potential associates discovery
JP2020119494A (en) * 2018-02-13 2020-08-06 ゴリラ・テクノロジー・インコーポレイテッドGorilla Technology Inc. Distributed image analysis system
JP2020160581A (en) * 2019-03-25 2020-10-01 グローリー株式会社 Face information management system, face information management device, face information management method, and program
GB2589232A (en) * 2018-06-08 2021-05-26 Geoquest Systems Bv A method for generating predicted ultrasonic measurements from sonic data
US11037443B1 (en) 2020-06-26 2021-06-15 At&T Intellectual Property I, L.P. Facilitation of collaborative vehicle warnings
CN113449560A (en) * 2020-03-26 2021-09-28 广州金越软件技术有限公司 Technology for comparing human faces based on dynamic portrait library
US11184517B1 (en) 2020-06-26 2021-11-23 At&T Intellectual Property I, L.P. Facilitation of collaborative camera field of view mapping
US11233979B2 (en) 2020-06-18 2022-01-25 At&T Intellectual Property I, L.P. Facilitation of collaborative monitoring of an event
JP2022033600A (en) * 2020-08-17 2022-03-02 横河電機株式会社 Device, system, method, and program
JP2022526382A (en) * 2019-09-30 2022-05-24 深▲セン▼市商▲湯▼科技有限公司 Behavioral analytics methods, devices, electronic devices, storage media and computer programs
US11356349B2 (en) 2020-07-17 2022-06-07 At&T Intellectual Property I, L.P. Adaptive resource allocation to facilitate device mobility and management of uncertainty in communications
US11368991B2 (en) 2020-06-16 2022-06-21 At&T Intellectual Property I, L.P. Facilitation of prioritization of accessibility of media
US11411757B2 (en) 2020-06-26 2022-08-09 At&T Intellectual Property I, L.P. Facilitation of predictive assisted access to content
US11768082B2 (en) 2020-07-20 2023-09-26 At&T Intellectual Property I, L.P. Facilitation of predictive simulation of planned environment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005234765A (en) * 2004-02-18 2005-09-02 Omron Corp Image acquisition device and search device
JP2006092396A (en) * 2004-09-27 2006-04-06 Oki Electric Ind Co Ltd Apparatus for detecting lone person and person in group
JP2010231402A (en) * 2009-03-26 2010-10-14 Sogo Keibi Hosho Co Ltd Method and system for image display of monitoring device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5284599B2 (en) * 2007-03-30 2013-09-11 株式会社日立国際電気 Image processing device
JP5139947B2 (en) * 2008-10-03 2013-02-06 三菱電機インフォメーションテクノロジー株式会社 Surveillance image storage system and surveillance image storage method for surveillance image storage system
JP5203319B2 (en) * 2009-08-25 2013-06-05 セコム株式会社 Abandonment monitoring device
JP2011248548A (en) * 2010-05-25 2011-12-08 Fujitsu Ltd Content determination program and content determination device
JP5691764B2 (en) * 2011-04-12 2015-04-01 サクサ株式会社 Abandoned or removed detection system
JP5691807B2 (en) * 2011-04-28 2015-04-01 サクサ株式会社 Abandoned or removed detection system
JPWO2015166612A1 (en) * 2014-04-28 2017-04-20 日本電気株式会社 Video analysis system, video analysis method, and video analysis program
JP5999394B2 (en) * 2015-02-20 2016-09-28 パナソニックIpマネジメント株式会社 Tracking support device, tracking support system, and tracking support method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005234765A (en) * 2004-02-18 2005-09-02 Omron Corp Image acquisition device and search device
JP2006092396A (en) * 2004-09-27 2006-04-06 Oki Electric Ind Co Ltd Apparatus for detecting lone person and person in group
JP2010231402A (en) * 2009-03-26 2010-10-14 Sogo Keibi Hosho Co Ltd Method and system for image display of monitoring device

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020119494A (en) * 2018-02-13 2020-08-06 ゴリラ・テクノロジー・インコーポレイテッドGorilla Technology Inc. Distributed image analysis system
GB2589232B (en) * 2018-06-08 2022-11-09 Geoquest Systems Bv A method for generating predicted ultrasonic measurements from sonic data
GB2589232A (en) * 2018-06-08 2021-05-26 Geoquest Systems Bv A method for generating predicted ultrasonic measurements from sonic data
JP2021536182A (en) * 2018-09-06 2021-12-23 日本電気株式会社 Methods, identification devices and identification programs for discovering potential multi-layer companions
WO2020050003A1 (en) * 2018-09-06 2020-03-12 Nec Corporation Method, identification device and non-transitory computer readable medium for multi-layer potential associates discovery
JP2020530970A (en) * 2018-09-06 2020-10-29 日本電気株式会社 Methods, identification devices and programs for identifying potential partners of at least one target person
WO2020049980A1 (en) * 2018-09-06 2020-03-12 Nec Corporation A method for identifying potential associates of at least one target person, and an identification device
JP7380812B2 (en) 2018-09-06 2023-11-15 日本電気株式会社 Identification method, identification device, identification system and identification program
JP7188565B2 (en) 2018-09-06 2022-12-13 日本電気株式会社 Method, identification device and identification program for discovering multi-layer potential mates
US11250251B2 (en) 2018-09-06 2022-02-15 Nec Corporation Method for identifying potential associates of at least one target person, and an identification device
CN109684162A (en) * 2018-11-09 2019-04-26 平安科技(深圳)有限公司 Equipment state prediction method, system, terminal and computer readable storage medium
CN109684162B (en) * 2018-11-09 2022-05-27 平安科技(深圳)有限公司 Equipment state prediction method, system, terminal and computer readable storage medium
JP2020160581A (en) * 2019-03-25 2020-10-01 グローリー株式会社 Face information management system, face information management device, face information management method, and program
JP7356244B2 (en) 2019-03-25 2023-10-04 グローリー株式会社 Facial information management system, facial information management method, and program
JP2020521342A (en) * 2019-03-29 2020-07-16 アリババ・グループ・ホールディング・リミテッドAlibaba Group Holding Limited Managing sensitive data elements in blockchain networks
CN110462621A (en) * 2019-03-29 2019-11-15 阿里巴巴集团控股有限公司 Sensitive data element is managed in block chain network
JP2022526382A (en) * 2019-09-30 2022-05-24 深▲セン▼市商▲湯▼科技有限公司 Behavioral analytics methods, devices, electronic devices, storage media and computer programs
CN113449560A (en) * 2020-03-26 2021-09-28 广州金越软件技术有限公司 Technology for comparing human faces based on dynamic portrait library
US11956841B2 (en) 2020-06-16 2024-04-09 At&T Intellectual Property I, L.P. Facilitation of prioritization of accessibility of media
US11368991B2 (en) 2020-06-16 2022-06-21 At&T Intellectual Property I, L.P. Facilitation of prioritization of accessibility of media
US11233979B2 (en) 2020-06-18 2022-01-25 At&T Intellectual Property I, L.P. Facilitation of collaborative monitoring of an event
US11184517B1 (en) 2020-06-26 2021-11-23 At&T Intellectual Property I, L.P. Facilitation of collaborative camera field of view mapping
US11509812B2 (en) 2020-06-26 2022-11-22 At&T Intellectual Property I, L.P. Facilitation of collaborative camera field of view mapping
US11611448B2 (en) 2020-06-26 2023-03-21 At&T Intellectual Property I, L.P. Facilitation of predictive assisted access to content
US11411757B2 (en) 2020-06-26 2022-08-09 At&T Intellectual Property I, L.P. Facilitation of predictive assisted access to content
US11037443B1 (en) 2020-06-26 2021-06-15 At&T Intellectual Property I, L.P. Facilitation of collaborative vehicle warnings
US11902134B2 (en) 2020-07-17 2024-02-13 At&T Intellectual Property I, L.P. Adaptive resource allocation to facilitate device mobility and management of uncertainty in communications
US11356349B2 (en) 2020-07-17 2022-06-07 At&T Intellectual Property I, L.P. Adaptive resource allocation to facilitate device mobility and management of uncertainty in communications
US11768082B2 (en) 2020-07-20 2023-09-26 At&T Intellectual Property I, L.P. Facilitation of predictive simulation of planned environment
JP2022033600A (en) * 2020-08-17 2022-03-02 横河電機株式会社 Device, system, method, and program
US11657515B2 (en) 2020-08-17 2023-05-23 Yokogawa Electric Corporation Device, method and storage medium
JP7415848B2 (en) 2020-08-17 2024-01-17 横河電機株式会社 Apparatus, system, method and program

Also Published As

Publication number Publication date
JP7040463B2 (en) 2022-03-23
JPWO2018116488A1 (en) 2019-12-12
JP2022082561A (en) 2022-06-02

Similar Documents

Publication Publication Date Title
WO2018116488A1 (en) Analysis server, monitoring system, monitoring method, and program
JP6525229B1 (en) Digital search security system, method and program
CN101221621B (en) Method and system for warning a monitored user about adverse behaviors
JP4924607B2 (en) Suspicious behavior detection apparatus and method, program, and recording medium
JP2018173914A (en) Image processing system, imaging apparatus, learning model creation method, and information processing device
JP6729793B2 (en) Information processing apparatus, control method, and program
CN110163167B (en) Directional real-time tracking method and system in waiting hall
JP5645646B2 (en) Grasped-object recognition device, grasped-object recognition method, and grasped-object recognition program
JP6233624B2 (en) Information processing system, information processing method, and program
CN109446936A (en) Person identification method and device for a monitored scene
JP2010191620A (en) Method and system for detecting suspicious person
JP2022008672A (en) Information processing apparatus, information processing method, and program
JP5718632B2 (en) Part recognition device, part recognition method, and part recognition program
JP2021132267A (en) Video monitoring system and video monitoring method
KR20200136034A (en) Image processing method, device, terminal device, server and system
US11308792B2 (en) Security systems integration
EA038293B1 (en) Method and system for detecting troubling events during interaction with a self-service device
JP2016122300A (en) Image processing apparatus, image processing method, and program
CN112330742A (en) Method and device for recording activity routes of key personnel in public area
JP2020052822A (en) Information processing apparatus, authentication system, control method thereof, and program
WO2022059223A1 (en) Video analyzing system and video analyzing method
TWI730795B (en) Multi-target human body temperature tracking method and system
JP2018142137A (en) Information processing device, information processing method and program
CN113128414A (en) Personnel tracking method and device, computer readable storage medium and electronic equipment
JP6022625B2 (en) Part recognition device, part recognition method, and part recognition program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17882762

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018557514

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17882762

Country of ref document: EP

Kind code of ref document: A1