WO2022025127A1 - Video analysis device and video analysis method - Google Patents

Video analysis device and video analysis method

Info

Publication number
WO2022025127A1
WO2022025127A1 (PCT/JP2021/027931)
Authority
WO
WIPO (PCT)
Prior art keywords
flow line
person
residence time
unit
total residence
Application number
PCT/JP2021/027931
Other languages
French (fr)
Japanese (ja)
Inventor
幸代 山邊
直人 瀧
高宏 水口
幸直 小幡
弘典 小味
Original Assignee
Hitachi Industry & Control Solutions, Ltd. (株式会社日立産業制御ソリューションズ)
Application filed by Hitachi Industry & Control Solutions, Ltd. (株式会社日立産業制御ソリューションズ)
Publication of WO2022025127A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a technique for a moving image analysis device and a moving image analysis method.
  • In order to search for a person who has appeared on the same camera for a certain period of time or longer (hereinafter referred to as a staying person), stay detection using flow line detection is performed.
  • Patent Document 1 discloses a person behavior monitoring device and a person behavior monitoring system in which a processor 12 "acquires captured images 31 from each of a plurality of cameras 1 and acquires tracking information by analyzing the captured images 31 to generate a surveillance image including an image relating to a prowling person". For each camera 1, the processor 12 generates, based on first and second tracking tables, a monitoring image in which a first person rectangle 32 indicating that a person detected from the captured image 31 is a prowling-person candidate and a second person rectangle 32 indicating that the person is a prowling person are superimposed on the captured image 31 (see summary).
  • Patent Document 2 discloses an image processing device, an image processing system, a control method of the image processing device, and a program in which "in an image processing device, an area for detecting a subject is set in an image to be processed; a plurality of temporally consecutive images to be processed are analyzed to detect the subject; the residence time during which the subject stays in the area is determined based on the detection results of the subject in the plurality of images to be processed; and, in the determination, when the subject moves from the inside of the area to the outside of the area and returns to the inside of the area, the time when the subject is located outside the area is included in the residence time" (see summary).
  • Stay detection using flow line detection as shown in Patent Document 1 and Patent Document 2 has the following two problems and is not suitable for long-term stay tracking.
  • (A) Generally, when a flow line is interrupted, the staying person is determined to have disappeared. Therefore, if tracking fails because the flow line is interrupted, the total residence time of the person cannot be calculated. For example, if the staying person is occluded by an object or the like and most of the person is not visible, the detection of the flow line is interrupted. (B) Similarly, when the tracked person moves out of the frame, the flow line is interrupted. In either case, even if the same person appears again, he or she is treated as a different staying person, and the total residence time cannot be calculated accurately.
  • The present invention has been made in view of such a background, and it is an object of the present invention to make it possible to calculate the residence time over a long period of time.
  • To this end, the present invention includes: a moving image acquisition unit that acquires a moving image from an imaging unit; a person detection unit that detects a person image in each of the acquired images constituting the moving image; a flow line detection processing unit that extracts a flow line from the detected person images and detects flow line information, which is the information about the extracted flow line; a determination unit that determines, for each of a plurality of pieces of flow line information detected by the flow line detection processing unit, including flow line information that was once interrupted, whether or not it is the flow line of the same person based on the person feature amount in the flow line; and a calculation unit that calculates the total residence time from the flow lines determined to belong to the same person.
  • According to the present invention, it is possible to calculate the residence time over a long period of time.
  • FIG. 1 is a diagram showing a configuration example of the moving image analysis system Z according to the first embodiment.
  • The moving image analysis system Z includes a moving image analysis server 1, a total residence time analysis server 2, a moving image shooting device (imaging unit) 3 such as a camera, a database 4, an output server 5, and an output device (output unit) 6.
  • The moving image analysis server 1 has a moving image data processing unit (moving image acquisition unit) 11, a temporary storage unit 12, a frame image analysis unit 13, and a flow line analysis unit 14.
  • The moving image data processing unit 11 extracts the frame images (hereinafter referred to as frame image data) constituting the moving image data acquired from the moving image shooting device 3, and stores the extracted frame image data in the temporary storage unit 12.
  • The temporary storage unit 12 is a storage area formed in the memory 71 (see FIG. 5).
  • The frame image analysis unit 13 extracts a person image from the extracted frame image data, and stores the information about the extracted person image (person information 411 in FIG. 2) in the temporary storage unit 12 as the person data 41 (see FIG. 6). In addition, the frame image analysis unit 13 stores the person information 411 in the person data 41 of the database 4.
  • The flow line analysis unit 14 tracks the movement trajectory of each person image across consecutive frame image data using the results analyzed by the frame image analysis unit 13, and extracts flow lines. Then, the flow line analysis unit 14 stores the information regarding the extracted flow lines in the flow line data 42 of FIG. 7 stored in the database 4.
  • The total residence time analysis server 2 has a total residence time analysis unit 21.
  • The total residence time analysis unit 21 re-searches the flow line information stored in the database 4 to determine whether flow lines that were once interrupted belong to the same person. Then, the total residence time analysis unit 21 calculates the total residence time of each person based on the result of the re-search. Further, the total residence time analysis unit 21 determines whether or not an alarm notification is necessary based on the calculated total residence time.
  • The output server 5 outputs the total residence time calculated by the total residence time analysis unit 21 to the output device 6, and outputs an alarm to the output device 6 when the total residence time analysis unit 21 determines that an alarm notification is necessary.
  • The database 4 stores the person data 41 (see FIG. 6), the flow line data 42 (see FIG. 7), the flow line rediscovery data 44 (see FIG. 9), and the total residence time data 45 (see FIG. 10).
  • The temporary storage unit 12 stores the person data 41 (see FIG. 6) and the frame data 43 (see FIG. 8); that is, the same person data 41 is stored in both the temporary storage unit 12 and the database 4. The respective data 41 to 45 stored in the database 4 and the temporary storage unit 12 will be described later.
  • In FIG. 1, the moving image analysis server 1, the total residence time analysis server 2, the database 4, and the output server 5 are separate devices, but two or more of them may be combined into a single device.
  • FIG. 2 is a diagram showing a detailed configuration of the frame image analysis unit 13 in the first embodiment.
  • The frame image analysis unit 13 has a frame image data acquisition unit 131 and a person extraction processing unit 132.
  • The frame image data acquisition unit 131 acquires the frame image data stored in the temporary storage unit 12 and passes it to the person extraction processing unit 132.
  • The person extraction processing unit 132 extracts a person image from the frame image data acquired by the frame image data acquisition unit 131. Then, the person extraction processing unit 132 stores the information related to the extracted person image (person information 411) in the person data 41 (see FIG. 6) of the temporary storage unit 12. In addition, the person extraction processing unit 132 stores the person information 411 in the person data 41 of the database 4.
  • FIG. 3 is a diagram showing a detailed configuration of the flow line analysis unit 14 in the first embodiment.
  • The flow line analysis unit 14 has a person information acquisition unit 141 and a flow line detection processing unit (determination unit) 142.
  • The person information acquisition unit 141 acquires the person information 411 from the person data 41 (see FIG. 6) stored in the temporary storage unit 12.
  • The flow line detection processing unit 142 detects the flow line, which is the locus of each person across the consecutive frame image data, based on the person information 411 acquired by the person information acquisition unit 141.
  • The flow line information (flow line information 421) generated as a result of the processing by the flow line detection processing unit 142 is stored in the flow line data 42 of the database 4.
  • FIG. 4 is a diagram showing a detailed configuration of the total residence time analysis unit 21 in the first embodiment.
  • The total residence time analysis unit 21 includes a periodic search unit 211, a flow line re-detection unit 212, a total residence time calculation unit 213, and an alarm determination processing unit 214.
  • The periodic search unit 211 determines whether or not the search time for the total residence time has been reached.
  • The periodic search unit 211 can be omitted.
  • The flow line re-detection unit 212 re-searches the flow line information stored in the database 4 to determine whether flow lines that were once interrupted belong to the same person.
  • The flow line rediscovery information 441 and the like, which are the results of the re-search, are stored in the flow line rediscovery data 44 (see FIG. 9) of the database 4.
  • The total residence time calculation unit 213 calculates the total residence time of each person based on the flow line rediscovery data 44 stored in the database 4.
  • The alarm determination processing unit 214 determines whether or not an alarm notification is necessary based on the calculated total residence time.
  • The total residence time calculated by the total residence time calculation unit 213 and the result of the alarm-necessity determination by the alarm determination processing unit 214 are stored in the total residence time data 45 (see FIG. 10) of the database 4.
  • FIG. 5 is a hardware configuration diagram of the arithmetic unit 7.
  • The arithmetic unit 7 constitutes each of the moving image analysis server 1, the total residence time analysis server 2, and the output server 5.
  • The arithmetic unit 7 includes a memory 71, a CPU (Central Processing Unit) 72, a storage device 73 such as an HD (hard disk), a communication device 74, and an input device 75.
  • The communication device 74 transmits and receives information to and from other devices and the database 4.
  • The input device 75 is a keyboard, a mouse, or the like. The program stored in the storage device 73 is loaded into the memory 71, and the loaded program is executed by the CPU 72.
  • In this way, the moving image data processing unit 11, the frame image analysis unit 13, the flow line analysis unit 14, and the total residence time analysis unit 21, as well as the units 131 to 132, 141 to 142, and 211 to 214 that constitute them, are embodied.
  • FIG. 6 is a diagram showing an example of person data 41.
  • The person data 41 has columns of "person ID", "frame ID", "rectangle information", "time stamp", "person feature amount", "flow line ID", and "person image".
  • In the "person ID" column, an ID uniquely assigned to the person image detected in the frame image data is stored. Even for the same person, different person IDs are given if the frame image data of the detection source is different.
  • In the "frame ID" column, the ID assigned to the frame image data in which the person image indicated by the "person ID" was detected is stored.
  • In the "rectangle information" column, information about the rectangle enclosing the person image is stored.
  • In the "time stamp" column, information regarding the date and time when the frame image data indicated by the "frame ID" was captured is stored.
  • In the "person feature amount" column, information about the features of the detected person image is stored.
  • In the "flow line ID" column, an ID assigned by the flow line analysis unit 14 is stored; it links together the person IDs determined to belong to the same person. By tracing the flow line ID of a certain person, the flow line (movement trajectory) of that person can be followed.
  • In the "person image" column, the detected person image data is stored.
  • FIG. 7 is a diagram showing an example of the flow line data 42.
  • The flow line data 42 has columns for "flow line ID", "list of person IDs", "start frame ID", "end frame ID", and "presence/absence of frame-out".
  • In the "flow line ID" column, the same data as the "flow line ID" of the person data 41 is stored.
  • In the "list of person IDs" column, the person IDs to which the same flow line ID is assigned are stored as a list.
  • In the "start frame ID" column, the frame ID of the first frame in which a person image with the target flow line ID was detected is stored.
  • In the "end frame ID" column, the frame ID of the last frame in which a person image with the target flow line ID was detected is stored.
  • In the "presence/absence of frame-out" column, information regarding the presence or absence of a frame-out is stored. Whether a frame-out occurred is determined by whether the person image is at the edge of the frame image in the last frame of the flow line; if so, "Yes" is stored in this column.
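  • For a concrete picture of the two tables above, the following is a minimal sketch in Python of one record of the person data 41 and one record of the flow line data 42. The dataclass names, field names, and types are illustrative assumptions derived from the column descriptions; they are not part of the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PersonRecord:                     # one row of the person data 41 (FIG. 6)
    person_id: str                      # unique per detection, even for the same person
    frame_id: str                       # frame in which this person image was detected
    rect: tuple                         # "rectangle information": (x, y, width, height)
    timestamp: float                    # capture time of the source frame
    feature: List[float]                # person feature amount (appearance vector)
    flow_line_id: Optional[str] = None  # assigned later by the flow line analysis unit 14
    image: bytes = b""                  # cropped person image data

@dataclass
class FlowLineRecord:                   # one row of the flow line data 42 (FIG. 7)
    flow_line_id: str
    person_ids: List[str] = field(default_factory=list)  # "list of person IDs"
    start_frame_id: Optional[str] = None
    end_frame_id: Optional[str] = None
    framed_out: bool = False            # "presence/absence of frame-out"
```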
  • FIG. 8 is a diagram showing an example of frame data 43.
  • The frame data 43 has columns for "frame ID", "frame image", and "thumbnail image".
  • In the "frame ID" column, an ID uniquely assigned to each frame image is stored.
  • In the "frame image" column, the frame image data is stored.
  • In the "thumbnail image" column, a reduced-size thumbnail of the frame image data in the "frame image" column is stored.
  • FIG. 9 is a diagram showing an example of the flow line rediscovery data 44.
  • The flow line rediscovery data 44 has columns for "flow line ID", "retention start time", "retention end time", "retention time (seconds)", "person feature amount", "presence/absence of same person", and "same person ID".
  • In the "flow line ID" column, as in the person data 41 of FIG. 6, an ID linking the person IDs determined to belong to the same person is stored.
  • In the "retention start time" column, the time indicated by the first frame image in the series of frame images containing the person IDs linked by the flow line ID (that is, the frame images connected by the flow line ID) is stored.
  • In the "retention end time" column, the time indicated by the last frame image in that series is stored.
  • In the "retention time (seconds)" column, the time obtained by subtracting the "retention start time" from the "retention end time" is stored.
  • In the "person feature amount" column, the person feature amount of the person image associated with the flow line ID in the flow line data 42 of FIG. 7 is stored.
  • In the "presence/absence of same person" column, information indicating whether another flow line ID belonging to the same person as the target flow line ID exists among the flow lines stored in the flow line rediscovery data 44 is stored.
  • In the "same person ID" column, an identifier shared by the flow lines of one person is stored; each of the flow line IDs given "A" as the same person ID is a flow line ID determined to belong to one and the same person.
  • Likewise, each of the flow line IDs given "B" is a flow line ID determined to belong to the same person, a person different from "A".
  • The information stored in the "same person ID" column is generally not the name of the person but an ID number.
  • FIG. 10 is a diagram showing an example of total residence time data 45.
  • The total residence time data 45 has columns for "same person ID", "total residence time", and "excess determination". The same person ID as in FIG. 9 is stored in the "same person ID" column.
  • In the "total residence time" column, the total obtained by adding all the residence times of the flow line IDs given the same person ID in the flow line rediscovery data 44 of FIG. 9 is stored.
  • In the "excess determination" column, information is stored such as whether the total residence time exceeds a predetermined time (first threshold value V1) ("excess") or exceeds a shorter predetermined time (second threshold value V2, where V1 > V2) ("warning"). When the total residence time does not exceed the second threshold value V2, "normal" is stored in the "excess determination" column.
  • FIG. 11 is a flowchart showing the procedure of the flow line collection process performed in the first embodiment.
  • First, the moving image data processing unit 11 and the frame image analysis unit 13 perform the moving image analysis process (S1). Details of the moving image analysis process will be described later.
  • Next, the flow line analysis unit 14 performs the flow line detection process (S2). Details of the flow line detection process will be described later. Information about the flow lines is collected by steps S1 and S2.
  • FIG. 12 is a flowchart showing the procedure of the total residence time extraction process performed in the first embodiment.
  • First, the total residence time analysis unit 21 performs the flow line re-detection process (S4). Details of the flow line re-detection process will be described later.
  • In step S4, it is searched whether or not flow lines that were recognized in steps S1 and S2 as belonging to different persons are actually flow lines of the same person.
  • Subsequently, the total residence time calculation unit 213 performs the total residence time calculation process, which calculates the total residence time for each person using the result of step S4 (S5). Details of the total residence time calculation process will be described later.
  • Then, the output processing unit 51 performs the screen display process, which displays the total residence time calculated by the total residence time calculation process on the display device (S6). Details of the screen display process will be described later.
  • FIG. 13 is a flowchart showing a detailed procedure of the moving image analysis process (step S1 in FIG. 11) performed in the first embodiment.
  • First, the moving image data processing unit 11 acquires the moving image data captured by the moving image shooting device 3 (S101).
  • Next, the moving image data processing unit 11 extracts each piece of frame image data constituting the acquired moving image data (S102) and stores each piece of frame image data in the temporary storage unit 12 (S103).
  • At this time, the moving image data processing unit 11 assigns a frame ID to each piece of the acquired frame image data and then stores the frame image data in the frame data 43 (see FIG. 8) of the temporary storage unit 12.
  • Subsequently, the frame image data acquisition unit 131 acquires the frame image data from the temporary storage unit 12 (S111).
  • Then, the person extraction processing unit 132 extracts person images from the acquired frame image data based on the person feature amounts (S112). Here, all the person images present in the frame image are extracted. The person extraction processing unit 132 then assigns a person ID to each extracted person image and stores, for each one, the person feature amount, the data related to the source frame image, and the person image data in the person data 41 (see FIG. 6) (S113).
  • Then, the flow line analysis unit 14 determines whether or not the extraction of person images has been completed for all persons in the frame image being processed (S114). When it has not been completed (S114 → No), the flow line analysis unit 14 returns the process to step S111. When it has been completed (S114 → Yes), the moving image analysis server 1 returns the process to step S2 in FIG. 11.
  • FIG. 14 is a flowchart showing a detailed procedure of the flow line detection process (step S2 in FIG. 11) performed in the first embodiment.
  • First, the person information acquisition unit 141 acquires the person information 411 from the person data 41 (S201).
  • Here, the person information 411 denotes one record of the person data 41 shown in FIG. 6.
  • Next, the flow line detection processing unit 142 compares, in the person data 41, the person feature amounts in the frame image immediately preceding the current frame image with the person feature amount calculated when the person image was extracted in step S112, and thereby determines whether or not person information 411 with a similar person feature amount (similar person information 411) exists in the previous frame image (S202).
  • Here, a similar person feature amount means that the difference between the person feature amounts is within a predetermined range (the person feature amount threshold value).
  • As noted above, the person information 411 is a record of the person data 41 shown in FIG. 6.
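  • As a concrete reading of this similarity test, the following sketch expresses the step S202 determination in Python. The Euclidean distance and the function name are assumptions; the patent only requires that the difference between person feature amounts fall within the person feature amount threshold value.

```python
import math
from typing import Sequence

def is_similar(feature_a: Sequence[float], feature_b: Sequence[float],
               threshold: float) -> bool:
    """Return True when two person feature amounts differ by no more than the
    person feature amount threshold value (the step S202 test). The distance
    metric is an assumption; the patent speaks only of the 'difference'."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(feature_a, feature_b)))
    return distance <= threshold
```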
  • When similar person information 411 does not exist in the previous frame image (S202 → No), the flow line detection processing unit 142 newly assigns a flow line ID to the person information 411 in question (S203). Further, the flow line detection processing unit 142 stores the flow line ID newly assigned in step S203 in the "flow line ID" column of the flow line data 42 shown in FIG. 7, and stores the person ID of the person information 411 being processed in the "list of person IDs" column (S204). In step S204, the flow line detection processing unit 142 also stores the frame ID of the person information 411 being processed in the "start frame ID" column of the flow line data 42 of FIG. 7. At this time, the flow line detection processing unit 142 also stores the assigned flow line ID in the flow line ID column of the person data 41.
  • When similar person information 411 exists in the previous frame image (S202 → Yes), the flow line detection processing unit 142 determines whether or not the person image has been tracked for a predetermined time or longer (S211).
  • In step S211, the flow line detection processing unit 142 first acquires, from the flow line data 42 shown in FIG. 7, the flow line information 421 having the same flow line ID as the flow line ID in the similar person information 411.
  • Here, the flow line information 421 is one record of the flow line data 42.
  • Next, the flow line detection processing unit 142 acquires the frame ID stored in the "start frame ID" column of the acquired flow line information 421.
  • The acquired frame ID is used as the start frame ID.
  • Subsequently, the flow line detection processing unit 142 searches the person data 41 using the acquired start frame ID as a key, and acquires the time stamp stored in association with the frame ID corresponding to the start frame ID.
  • The acquired time stamp is referred to as the start time stamp.
  • By comparing the time stamp stored in the person information 411 acquired in step S201 with the start time stamp, it is determined whether or not the person image has been tracked for a predetermined time or longer.
  • When the person image has been tracked for a predetermined time or longer (S211 → Yes), the flow line detection processing unit 142 temporarily cuts off the flow line by generating a new flow line ID (S203). If a flow line continued indefinitely, the flow line re-detection process described later could never be started for that person image; this problem can be solved by temporarily cutting off the flow line at regular intervals.
  • When the person image has not been tracked for the predetermined time (S211 → No), the flow line detection processing unit 142 stores the flow line ID in the similar person information 411 in the flow line ID column of the person information 411 being processed (S212). Further, the flow line detection processing unit 142 stores the person ID of the person information 411 being processed in the "list of person IDs" column of the record with that flow line ID in the flow line data 42 shown in FIG. 7 (S213).
  • Subsequently, the flow line detection processing unit 142 determines whether or not the association with a flow line ID has been completed for all the person information 411 (S214). When it has not been completed (S214 → No), the flow line detection processing unit 142 returns the process to step S201. When it has been completed (S214 → Yes), the flow line detection processing unit 142 detects the end frame image for each flow line ID and stores the frame ID of the detected end frame image in the "end frame ID" column of the flow line data 42 shown in FIG. 7 (S215). After that, the moving image analysis server 1 returns to the process of FIG. 11.
  • At this time, the flow line detection processing unit 142 searches for the end frame image based on the end frame ID of the flow line data 42. When the person image is at the edge of the frame image in the end frame image, the flow line detection processing unit 142 stores "Yes" in the "presence/absence of frame-out" column of the flow line data 42.
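  • The per-frame loop of steps S201 to S215 can be pictured as follows. This is a sketch only, building on the PersonRecord, FlowLineRecord, and is_similar sketches above; the matching policy (the first similar record in the previous frame wins) and the data layout are assumptions, not the patent's prescription.

```python
def detect_flow_lines(frames, feature_threshold, max_track_seconds):
    """Sketch of steps S201-S215: link each detection to the flow line of a
    similar person in the previous frame, or open a new flow line when no
    similar person exists (S203) or the line has run too long (S211).
    `frames` is a list of frames, each a list of PersonRecord objects."""
    flow_lines = {}                # flow_line_id -> FlowLineRecord
    line_start = {}                # flow_line_id -> timestamp of the line's first frame
    next_id, prev_frame = 0, []
    for frame in frames:
        for rec in frame:                                            # S201
            match = next((p for p in prev_frame
                          if is_similar(p.feature, rec.feature,
                                        feature_threshold)), None)   # S202
            if match and rec.timestamp - line_start[match.flow_line_id] >= max_track_seconds:
                match = None       # S211 -> Yes: temporarily cut off an over-long flow line
            if match is None:                                        # S203-S204: new flow line
                next_id += 1
                rec.flow_line_id = f"{next_id:05d}"
                flow_lines[rec.flow_line_id] = FlowLineRecord(
                    flow_line_id=rec.flow_line_id,
                    person_ids=[rec.person_id],
                    start_frame_id=rec.frame_id)
                line_start[rec.flow_line_id] = rec.timestamp
            else:                                                    # S212-S213: extend it
                rec.flow_line_id = match.flow_line_id
                flow_lines[rec.flow_line_id].person_ids.append(rec.person_id)
            flow_lines[rec.flow_line_id].end_frame_id = rec.frame_id  # S215
        prev_frame = frame
    return flow_lines
```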
  • FIG. 15 is a flowchart showing a detailed procedure of the flow line rediscovery process (step S4 in FIG. 12) according to the first embodiment.
  • First, the periodic search unit 211 determines whether or not the time set by the user has elapsed since the time stamp referred to in the previous stay search (S401). In the present embodiment, the processing from step S402 onward is performed when the time set by the user has elapsed; however, it may instead be performed when the user instructs the start of the search for the total residence time. If the set time has not elapsed (S401 → No), the periodic search unit 211 returns the process to step S401.
  • When the set time has elapsed (S401 → Yes), the flow line re-detection unit 212 acquires from the flow line data 42 all the flow line IDs within the designated time range (S403).
  • Here, the designated time range is input by the user via the input device 75.
  • At this time, the flow line re-detection unit 212 stores the flow line IDs acquired in step S403 in the "flow line ID" column of the flow line rediscovery data 44 shown in FIG. 9.
  • Subsequently, the flow line re-detection unit 212 selects one of the acquired flow line IDs as the flow line re-detection target (S404). Then, the flow line re-detection unit 212 searches, among the person images in the flow line IDs acquired in step S403, for person images considered to be the same person as the person image in the flow line ID selected in step S404 (same person search; S405). Specifically, the flow line re-detection unit 212 first acquires the person ID stored in the flow line information 421 having the flow line ID selected in step S404. This person ID is referred to as the flow line re-detection person ID.
  • Next, the flow line re-detection unit 212 searches the person data 41 and acquires the person feature amount of the flow line re-detection person ID.
  • The acquired person feature amount is referred to as the flow line re-detection person feature amount.
  • A person having the flow line re-detection person feature amount is referred to as the flow line re-detection person.
  • Then, the flow line re-detection unit 212 acquires the person IDs in the flow line information 421 having the other flow line IDs acquired in step S403, and compares the person feature amounts acquired from the person data 41 based on these person IDs with the flow line re-detection person feature amount.
  • In this way, the flow line re-detection unit 212 determines whether or not a person image that can be considered the same person as the flow line re-detection person exists among the person images in the flow line IDs acquired in step S403 (same person present?; S411). Specifically, the flow line re-detection unit 212 determines whether or not a person feature amount similar to the flow line re-detection person feature amount (within a predetermined threshold value) exists.
  • When no such person image exists (S411 → No), the flow line re-detection unit 212 stores the flow line rediscovery information 441, which is the information about the flow line re-detection person, in the flow line rediscovery data 44 shown in FIG. 9 as a single person (S412), and the process proceeds to step S414.
  • When such person images exist (S411 → Yes), the flow line re-detection unit 212 stores the flow line IDs considered to belong to the same person in the flow line rediscovery data 44 shown in FIG. 9 (S413).
  • Steps S412 and S413 are described below with reference to the flow line rediscovery data 44 shown in FIG. 9. (A) When the same person does not exist (step S412):
  • In this case, the flow line re-detection unit 212 stores information in the "retention start time", "retention end time", and "person feature amount" columns of the record corresponding to the flow line ID of the flow line re-detection person.
  • Specifically, the "start frame ID" and the "end frame ID" of the flow line data 42 are acquired using the flow line ID as a key.
  • Then, the flow line re-detection unit 212 searches the person data 41, acquires the time stamps of the acquired "start frame ID" and "end frame ID", and stores the acquired time stamps in the "retention start time" and "retention end time" columns of the flow line rediscovery data 44.
  • In addition, the flow line re-detection unit 212 stores the flow line re-detection person feature amount in the "person feature amount" column. Further, the flow line re-detection unit 212 stores "none" in the "presence/absence of same person" column, and stores in the "same person ID" column a same person ID that is not shared with any other person image ("C" in the example of FIG. 9).
  • (B) When the same person exists (step S413): the flow line re-detection unit 212 stores information in the "retention start time", "retention end time", and "person feature amount" columns of each of the records corresponding to the flow line ID of the flow line re-detection person and to the flow line IDs of the person images determined in step S405 to be the same person. Since the information in these columns is filled in in the same way as in case (A), the description is omitted here. Further, "Yes" is stored in the "presence/absence of same person" column, and a same person ID that is not shared with other person images is stored in the "same person ID" column.
  • Subsequently, the flow line re-detection unit 212 calculates the residence time for each flow line ID as the difference between the retention end time and the retention start time (S414), and stores the calculated residence time in the "retention time (seconds)" column of the flow line rediscovery data 44.
  • The residence time calculated in step S414 is the residence time corresponding to each individual flow line.
  • Then, the flow line re-detection unit 212 excludes from the processing targets the flow line ID of the flow line re-detection person and the flow line IDs determined to belong to the same person (S415). The flow line re-detection unit 212 then determines whether or not any of the flow line IDs acquired in step S403 remain (S416). When flow line IDs remain (S416 → Yes), the process returns to step S404, and the flow line re-detection unit 212 selects a new flow line ID from the remaining flow line IDs. When no flow line ID remains (S416 → No), the total residence time analysis unit 21 returns the process to step S5 in FIG. 12.
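  • The grouping performed in steps S403 to S416 can be sketched as below, again with the is_similar helper above. Representing each flow line by a single representative feature amount and labeling the groups "A", "B", and so on are simplifying assumptions.

```python
import string

def redetect_flow_lines(lines, rediscovery_threshold):
    """Sketch of steps S403-S416: group flow lines whose representative person
    feature amounts are similar under one same person ID. `lines` maps a flow
    line ID to a (start_time, end_time, feature) tuple."""
    remaining = list(lines)                    # S403: all flow line IDs in the range
    labels = iter(string.ascii_uppercase)      # same person IDs: "A", "B", ... (assumed)
    rediscovery = {}                           # flow_line_id -> rediscovery record
    while remaining:                           # S416: loop until no flow line ID remains
        target = remaining.pop(0)              # S404: pick one re-detection target
        start, end, feat = lines[target]
        same = [lid for lid in remaining       # S405/S411: same person search
                if is_similar(lines[lid][2], feat, rediscovery_threshold)]
        label = next(labels)
        for lid in [target] + same:            # S412 (alone) / S413 (with same person)
            s, e, _ = lines[lid]
            rediscovery[lid] = {"same_person_id": label,
                                "has_same_person": bool(same),
                                "stay_seconds": e - s}  # S414: per-line residence time
        remaining = [lid for lid in remaining if lid not in same]  # S415
    return rediscovery
```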
  • FIG. 16 is a flowchart showing a detailed procedure of the total residence time calculation process (step S5 in FIG. 12) performed in the first embodiment.
  • First, the total residence time calculation unit 213 refers to the flow line rediscovery data 44 shown in FIG. 9 and selects one flow line ID (S501).
  • Next, the total residence time calculation unit 213 refers to the "same person ID" column of the flow line rediscovery data 44 and acquires all the flow line IDs that have the same same person ID as the one associated with the flow line ID selected in step S501 (S502).
  • Subsequently, the total residence time calculation unit 213 refers to the "retention time (seconds)" column of the flow line rediscovery data 44 shown in FIG. 9 and calculates the total of the residence times associated with the flow line IDs acquired in steps S501 and S502 (the total residence time) (S503).
  • At this time, the total residence time calculation unit 213 stores the same person ID and the total residence time calculated in step S503 in association with each other in the "same person ID" and "total residence time" columns of the total residence time data 45 shown in FIG. 10.
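  • The summation of steps S501 to S503 then reduces to grouping the rediscovery records built above by same person ID, for example:

```python
from collections import defaultdict

def total_residence_times(rediscovery):
    """Sketch of steps S501-S503: add up the residence times of all flow lines
    sharing one same person ID (rediscovery records as sketched above)."""
    totals = defaultdict(float)
    for record in rediscovery.values():
        totals[record["same_person_id"]] += record["stay_seconds"]
    return dict(totals)  # same_person_id -> total residence time in seconds
```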
  • Then, the total residence time calculation unit 213 determines whether or not the calculated total residence time T exceeds a predetermined threshold value V (T > V?; S511). When it does not (S511 → No), the total residence time calculation unit 213 proceeds to step S513. When it does (S511 → Yes), the output processing unit 51 outputs an alarm by displaying the pop-up screen 812 on the monitoring moving image display screen 810 shown in FIG. 18 (S512).
  • Subsequently, the total residence time calculation unit 213 searches the flow line rediscovery data 44 and determines whether or not an unprocessed flow line ID exists (S513). When one exists (S513 → Yes), the total residence time calculation unit 213 returns the process to step S501. When none exists (S513 → No), the total residence time analysis unit 21 returns the process to step S6 in FIG. 12.
  • FIG. 17 is a flowchart showing a detailed procedure of the total residence time detailed screen display process (step S6 in FIG. 12) performed in the first embodiment.
  • The total residence time detail screen display process shown in FIG. 17 is the process for displaying the total residence time detail screen 820 shown in FIG. 19.
  • First, the output processing unit 51 sorts the total residence times stored in the "total residence time" column of the total residence time data 45 shown in FIG. 10 in descending order (S601).
  • The display instruction for the total residence time detail screen 820 will be described later.
  • Next, the output processing unit 51 selects one same person ID from the total residence time data 45 (S602).
  • Then, the output processing unit 51 refers to the "total residence time" column of the total residence time data 45 and determines whether or not the total residence time T corresponding to the selected same person ID exceeds the first threshold value V1 (T > V1?; S603).
  • When it does (S603 → Yes), the output processing unit 51 stores "excess" in the "excess determination" column of the corresponding record of the total residence time data 45 (see FIG. 10) (S611).
  • Then, the bar graph 821 of the person being processed on the total residence time detail screen 820 shown in FIG. 19 is displayed, for example, in red (S612). That is, an output indicating that the total residence time exceeds the predetermined threshold value is produced.
  • When the total residence time T does not exceed the first threshold value V1 (S603 → No), the output processing unit 51 determines whether or not the total residence time T corresponding to the selected same person ID exceeds the second threshold value V2 (T > V2?; S621). Here, the first threshold value V1 > the second threshold value V2.
  • When it does (S621 → Yes), the output processing unit 51 stores "warning" in the "excess determination" column of the corresponding record of the total residence time data 45 (S622). Then, the bar graph 821 of the person being processed on the total residence time detail screen 820 shown in FIG. 19 is displayed, for example, in orange (S623).
  • When it does not (S621 → No), the output processing unit 51 stores "normal" in the "excess determination" column of the corresponding record of the total residence time data 45 (S631). Then, the bar graph 821 of the person being processed on the total residence time detail screen 820 shown in FIG. 19 is displayed, for example, in green (S632).
  • Then, the output processing unit 51 determines whether or not the processing has been completed for all the same person IDs (S633). When it has not (S633 → No), the output processing unit 51 returns the process to step S602 and acquires one unprocessed same person ID. When it has (S633 → Yes), the output processing unit 51 returns to the process of FIG. 12.
  • In the present embodiment, the excess determination has the three stages "excess", "warning", and "normal", but the present invention is not limited to this.
  • For example, the excess determination may have four stages: "excess", "warning", "caution", and "normal". A minimal classification sketch follows.
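  • The two-threshold test of steps S603 to S632 can be summarized as a small classification function. This is a sketch only; the colour names are the examples given in the text, while the threshold values V1 and V2 are left to the operator.

```python
def excess_determination(total_seconds: float, v1: float, v2: float):
    """Sketch of steps S603-S632: classify a total residence time T against the
    first threshold value V1 and the second threshold value V2 (V1 > V2), and
    pick the bar graph colour used on the total residence time detail screen 820."""
    assert v1 > v2, "the text requires first threshold V1 > second threshold V2"
    if total_seconds > v1:
        return "excess", "red"      # S611-S612
    if total_seconds > v2:
        return "warning", "orange"  # S622-S623
    return "normal", "green"        # S631-S632
```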
  • FIG. 18 is a diagram showing a surveillance moving image display screen 810.
  • The monitoring moving image display screen 810 has a monitoring moving image display unit 811, a total residence time detail screen display button 813, and the like.
  • The monitoring moving image display unit 811 displays the moving image captured by the moving image shooting device 3.
  • When the total residence time detail screen display button 813 is selected and input (a display instruction for the total residence time detail screen 820), the total residence time detail screen 820 shown in FIG. 19 is displayed on the display unit.
  • In addition, if the total residence time data 45 shown in FIG. 10 contains a person whose total residence time exceeds the predetermined time ("excess"), a pop-up screen 812 notifying that there is a person whose total residence time is exceeded is displayed, as shown in FIG. 18.
  • When the pop-up screen 812 is selected and input, the total residence time detail screen 820 shown in FIG. 19 is likewise displayed on the display unit (a display instruction for the total residence time detail screen 820).
  • In the example of FIG. 18, the pop-up screen 812 notifies about a person determined to be "excess" in the total residence time data 45 shown in FIG. 10.
  • However, the present invention is not limited to this, and a notification by the pop-up screen 812 may also be given for a person determined to be "warning" or "caution".
  • FIG. 19 is a diagram showing a total residence time detail screen 820.
  • The total residence time detail screen 820 is displayed by selecting and inputting the total residence time detail screen display button 813 or the pop-up screen 812 in FIG. 18.
  • The total residence time detail screen 820 has a total residence time display unit 825 and a person image display unit 823.
  • The total residence time display unit 825 displays the total residence time for each person as a bar graph 821, based on the "total residence time" stored in the total residence time data 45 shown in FIG. 10. In the example shown in FIG. 19, the total residence times of four staying persons, "A", "B", "C", and "D", are displayed.
  • When a bar graph 821 is selected and input, the personal residence information display screen 830 shown in FIG. 20 is displayed, showing the stay information of the person associated with the selected bar graph 821.
  • The broken line 822 indicates the excess determination threshold value (the first threshold value V1 in FIG. 17), and the bar graph 821a of the person "A", whose total residence time exceeds the broken line 822, is displayed in red, for example. Further, the bar graph 821b of the person "B", whose total residence time does not exceed the broken line 822 but is close to it (in FIG. 19, the total residence time T satisfies V1 > T > V2, where V2 is the second threshold value), is displayed in orange, for example.
  • The bar graphs 821c of the other persons, "C" and "D", are displayed in green, indicating that their total residence time is normal.
  • The color of each bar graph 821 is determined by the output processing unit 51 searching the total residence time data 45 (see FIG. 10), as described above.
  • A normal total residence time here means that the total residence time is not long-term.
  • The person image display unit 823 displays an image (person image) of each person whose total residence time is displayed on the total residence time display unit 825.
  • Each person image is acquired by the output processing unit 51 referring to the "person image" column of the person data 41 in FIG. 6.
  • Specifically, the output processing unit 51 acquires, from the flow line rediscovery data 44 shown in FIG. 9, the flow line IDs associated in the "same person ID" column with the person whose total residence time is displayed. Subsequently, the output processing unit 51 refers to the "list of person IDs" column of the flow line data 42 shown in FIG. 7 and acquires the start frame ID or the end frame ID associated with each flow line ID.
  • In the present embodiment, the start frame ID is acquired. At this time, there are a plurality of candidates for the start frame ID to be acquired, but only one of them (for example, the start frame ID stored first) may be acquired.
  • FIG. 20 is a diagram showing a personal residence information display screen 830.
  • The personal residence information display screen 830 is a screen that displays detailed information about the residence time of a person determined by the moving image analysis server 1 to be the same person.
  • The personal residence information display screen 830 includes a residence time information display unit 831, a person image display unit 836, check boxes 837, a delete button 838, a total residence time display unit 839B, and a search start time display unit 839A.
  • In the example of FIG. 20, the search start time is set to 9:00:00; that is, the residence time is searched for the past 4 hours from the current time (13:00:00).
  • On the residence time information display unit 831, the residence time of the person determined by the moving image analysis server 1 to be the same person is displayed for each flow line. That is, each of the flow line lines 832 displayed on the residence time information display unit 831 corresponds to one of the flow line IDs in FIG. 7. Further, the line 834A in the residence time information display unit 831 indicates the search start time, and the line 834B indicates the current time. That is, the moving image analysis server 1 searches for the residence time in the period between the line 834A and the line 834B (arrow 835). Further, the frame-out display 833 indicates the time when the corresponding person went out of the frame.
  • The frame-out display 833 is displayed by the moving image analysis server 1 referring to the "presence/absence of frame-out" column shown in FIG. 7. That is, the output processing unit 51 refers to the "presence/absence of frame-out" column of the flow line data 42; if it is "Yes", the output processing unit 51 displays a frame-out display 833 between the flow line line 832 indicated by that flow line information 421 and the next flow line line 832.
  • The person image display unit 836 displays the person images 836a and 836b associated with the flow line lines 832. That is, the person image display unit 836 displays the person images corresponding to the person IDs in the "list of person IDs" column associated, in FIG. 7, with the flow line IDs shown by the flow line lines 832. Further, a check box 837 is displayed in association with each of the person images 836a and 836b. The fact that the person image 836a differs from the other person images 836b is described below.
  • The search start time is displayed on the search start time display unit 839A.
  • The total residence time display unit 839B displays the total residence time calculated in step S503 of FIG. 16. That is, the total residence time display unit 839B displays the total of the times indicated by each of the flow line lines 832 displayed on the residence time information display unit 831.
  • Here, the person image 836a is different from the other person images 836b.
  • That is, the moving image analysis server 1 has erroneously recognized the person in the person image 836a and the persons in the other person images 836b as the same person.
  • In such a case, the user checks, via the input device 75, the check box 837 displayed in association with the person image 836a (check box 837a in the example of FIG. 20).
  • When the delete button 838 is then selected and input via the input device 75, the person image 836a and the flow line line 832a associated with the person image 836a are deleted.
  • In FIG. 20, one person image 836a and one flow line line 832a are deleted, but it is also possible to delete a plurality of person images and their flow line lines by checking a plurality of check boxes 837. That is, the information of any given flow line can be deleted.
  • After deleting the person image 836a and the flow line line 832a, the moving image analysis server 1 recalculates the total (the total residence time) of the flow line lines 832 displayed on the personal residence information display screen 830.
  • According to the first embodiment, it is determined for a plurality of interrupted flow lines whether or not they belong to the same person based on the person feature amount, and, when they are determined to belong to the same person, the total residence time is calculated by treating them as the flow lines of one person. In this way, the interrupted flow lines of the same staying person can be reconnected via the person feature amount, and the residence time can be calculated over a long period of time. Further, according to the first embodiment, even if a frame-out occurs, it is determined based on the person feature amount whether the interrupted flow lines belong to the same person, so the flow line of the same person can still be detected. Therefore, the total residence time can be calculated even when a frame-out occurs.
  • Further, the total residence time for each person is shown by the bar graph 821 together with the image of the person, and the color of the bar graph 821 of a person whose calculated total residence time exceeds the predetermined threshold value is changed, indicating that the total residence time exceeds that threshold value.
  • FIG. 21 is a diagram illustrating a flow line transfer.
  • In FIG. 21, the person H1 is detected at first, but at time T1 the person H2 is detected overlapping the person H1. If the feature amounts of the person H1 and the person H2 are close to each other, the moving image analysis server 1 confuses the person H2 with the person H1, and after time T1 the flow line is detected with the person H2 regarded as the person H1. That is, a flow line transfer occurs (thick solid arrow 901).
  • FIG. 22 is a flowchart showing the procedure of the total residence time extraction process performed in the second embodiment.
  • The same processes as in FIG. 12 are assigned the same step numbers, and the description thereof is omitted.
  • The difference from FIG. 12 is that the user first sets the person feature amount threshold value to less than a predetermined value via the input device 75 (step S3). At this time, the person feature amount threshold value is set as small as possible. By setting the person feature amount threshold value small in this way, the probability of a "No" determination in step S202 of FIG. 14 increases; that is, the flow line is intentionally made easy to interrupt. The effect of doing so is described with reference to FIG. 23.
  • FIG. 23 is a diagram showing the result of performing the process shown in FIG. 22.
  • In FIG. 23, the person H1 is detected (flow line M0), but at time T1 the person H2 is detected overlapping the person H1.
  • Since the person feature amount threshold value is small, the person H1 before time T1 can be distinguished from the person H2 overlapping the person H1 at time T1. Therefore, the flow line is interrupted at time T1.
  • The flow line from time T1 onward becomes the beginning of a new flow line M1.
  • In the conventional technology, once flow lines are determined to be different, even flow lines that actually belong to the same person are recognized as the flow lines of different people. Therefore, in the conventional technique, if the person feature amount threshold value is set small as in the present embodiment, the flow line is interrupted whenever the person feature amount of the same person changes due to, for example, the amount of light, and there is a risk that the pieces will be recognized as the flow lines of different persons. Consequently, with the conventional technology, the person feature amount threshold value cannot be set small.
  • In contrast, in the second embodiment, even if the flow line is interrupted, the flow lines of the same person are collected by the flow line re-detection process shown in FIG. 15, so the person feature amount threshold value can be set small. Thereby, according to the second embodiment, the flow line transfer can be prevented, as the numeric illustration below shows.
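  • The effect of step S3 can be illustrated numerically with the is_similar sketch above. The feature vectors and threshold values below are invented for demonstration only:

```python
# Person H1 just before time T1, and person H2 overlapping H1 at time T1.
h1_before_t1 = [0.20, 0.80]
h2_at_t1 = [0.26, 0.74]   # distance to h1_before_t1 is about 0.085

# A loose threshold treats H2 as H1: the flow line transfers to H2.
print(is_similar(h1_before_t1, h2_at_t1, threshold=0.20))  # True

# A small threshold (step S3) cuts the line at T1 instead; the re-detection
# process of FIG. 15 later re-joins only the pieces that really match.
print(is_similar(h1_before_t1, h2_at_t1, threshold=0.05))  # False
```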
  • In the third embodiment, when two interrupted flow lines are determined to belong to the same person, the two flow lines are combined so that they can be recognized as one flow line.
  • In the first embodiment and the second embodiment, interrupted flow lines are collected as the flow lines of the same person while remaining interrupted.
  • In contrast, in the third embodiment, if the person feature amounts and the positions of the persons in the image are close to each other, the flow lines are combined and treated as one flow line.
  • FIG. 24 is a diagram showing a detailed configuration of the flow line analysis unit 14b according to the third embodiment.
  • The flow line analysis unit 14b of the moving image analysis server 1b differs from the flow line analysis unit 14 shown in FIG. 3 in that it has a flow line coupling unit 144 that couples the flow lines determined to belong to the same person. Since the configuration of the moving image analysis server 1b other than the flow line coupling unit 144 is the same as the configuration shown in FIGS. 1, 2, and 4, its illustration and description are omitted.
  • FIG. 25 is a diagram showing an example of the flow line data 42b used in the third embodiment.
  • The difference from the flow line data 42 shown in FIG. 7 is that the flow line data 42b has a "next flow line ID" column.
  • In the example of FIG. 25, the flow line indicated by the flow line ID "00002" is determined to be the flow line of the same person as the flow line indicated by the flow line ID "00007".
  • Therefore, the flow line ID "00007" is stored in the "next flow line ID" column of the record for the flow line ID "00002".
  • As a result, the flow line of the flow line ID "00002" and the flow line of the flow line ID "00007" are combined and recognized as one flow line.
  • Similarly, the flow line of the flow line ID "00015" is connected to the flow line of the flow line ID "00019", which in turn is connected to the flow line of the flow line ID "00023".
  • FIG. 26 is a flowchart showing the procedure of the flow line detection process performed in the third embodiment.
  • The same processes as in FIG. 14 are assigned the same step numbers, and the description thereof is omitted.
  • The difference from FIG. 14 is that the flow line coupling process (S221) is performed after step S215. Details of the flow line coupling process are described below.
  • FIG. 27 is a flowchart showing a detailed procedure of the flow line coupling process (S221 in FIG. 26) performed in the third embodiment.
  • First, the flow line coupling unit 144 selects two flow line IDs from the flow line data 42 (S701).
  • The two flow line IDs are selected such that the time of the last frame image of one flow line and the time of the first frame image of the other flow line exist within a predetermined range.
  • Then, the flow line coupling unit 144 acquires rectangle information from the last frame image of the temporally earlier flow line (referred to as the first flow line) (S702).
  • The rectangle information acquired in step S702 is referred to as the first rectangle information.
  • Next, the flow line coupling unit 144 acquires rectangle information from the first frame image of the flow line whose first frame image exists within a predetermined time after the last frame image of the first flow line (referred to as the second flow line) (S703).
  • The rectangle information acquired in step S703 is referred to as the second rectangle information.
  • The first rectangle information and the second rectangle information are acquired from the flow line data 42 and the person data 41.
  • Then, the flow line coupling unit 144 determines whether or not the distance D between the position P1 of the first rectangle information and the position P2 of the second rectangle information is equal to or less than a predetermined value D1 (D ≤ D1?; S711).
  • The position of each piece of rectangle information is, for example, the center of the rectangle, and can be set appropriately by the user.
  • When the distance D exceeds the predetermined value D1 (S711 → No), the flow line coupling unit 144 proceeds to step S721.
  • When the distance D is equal to or less than the predetermined value D1 (S711 → Yes), the flow line coupling unit 144 compares the person feature amount in the first rectangle information (referred to as the first person feature amount C1) with the person feature amount in the second rectangle information (referred to as the second person feature amount C2).
  • That is, the flow line coupling unit 144 determines whether or not the absolute value of the difference between the first person feature amount C1 and the second person feature amount C2 is equal to or less than a predetermined value C (|C1 − C2| ≤ C?; S712).
  • When it is not (S712 → No), the flow line coupling unit 144 proceeds to step S721.
  • When it is (S712 → Yes), the flow line coupling unit 144 determines that the first flow line and the second flow line are the flow lines of the same person, and combines the first flow line and the second flow line (S713).
  • Specifically, the flow line coupling unit 144 stores the flow line ID of the second flow line in the "next flow line ID" column of the flow line information 421 indicating the first flow line.
  • Subsequently, the flow line coupling unit 144 determines whether or not the processing of steps S701 to S713 has been completed for all the flow line IDs (S721).
  • When it has not been completed (S721 → No), the flow line coupling unit 144 returns the process to step S701.
  • When it has been completed (S721 → Yes), the flow line analysis unit 14b returns to the process of FIG. 26.
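  • The pairwise test of steps S701 to S721 can be sketched as follows. Flattening the rectangle information to a centre position and the person feature amount to a scalar is an assumption made for brevity:

```python
def couple_flow_lines(candidate_pairs, d1, c):
    """Sketch of steps S701-S721: for each time-adjacent pair of flow lines,
    join them when the rectangle centres of the last/first frames are within
    D1 (S711) and the person feature amounts differ by at most C (S712).
    Each flow line is a dict with keys 'flow_line_id', 'pos' (x, y), 'feat'."""
    for first, second in candidate_pairs:                 # S701: one time-adjacent pair
        dx = first["pos"][0] - second["pos"][0]
        dy = first["pos"][1] - second["pos"][1]
        if (dx * dx + dy * dy) ** 0.5 > d1:               # S711 -> No: too far apart
            continue                                      # on to the next pair (S721)
        if abs(first["feat"] - second["feat"]) > c:       # S712 -> No: features differ
            continue
        first["next_flow_line_id"] = second["flow_line_id"]  # S713: combine the two lines
```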
FIG. 28 is a diagram showing a specific example of the flow line coupling process.
The frame image F1 is the last frame image among the frame images constituting a certain flow line. The frame image F2 is a frame image within a predetermined time from the time indicated by the frame image F1, and is the first frame image among the frame images constituting a flow line different from the flow line to which the frame image F1 belongs.
The rectangle R1 encloses the person image of the person H11 in the frame image F1. The rectangles R2 and R3 each enclose a person image included in the frame image F2.
The flow line coupling unit 144 compares the distance between the positions of the rectangle R1 and the rectangle R2 in the frame image with the distance between the positions of the rectangle R1 and the rectangle R3 in the frame image. This comparison corresponds to the process of step S711 in FIG. 27.
In the example of FIG. 28, the distance in the frame image between the positions of the rectangle R1 and the rectangle R2 (solid arrow) is shorter than the distance between the positions of the rectangle R1 and the rectangle R3 (broken arrow). Therefore, the flow line coupling unit 144 compares the person feature amount in the rectangle R1 with the person feature amount in the rectangle R2. This comparison corresponds to step S712 in FIG. 27. When these person images have similar person feature amounts, the person in the rectangle R1 and the person in the rectangle R2 are determined to be the same person, and their flow lines are combined.
By combining the flow lines that can be combined before the flow line re-detection process is performed, the number of flow lines used for the flow line re-detection process can be reduced. As a result, the probability of misrecognizing a different person and calculating an incorrect total residence time, as illustrated by the flow line 832a of FIG. 20, can be kept low.
The present invention is not limited to the above-described embodiments, and includes various modifications. For example, the above-described embodiments have been described in detail in order to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to those having all the described configurations. It is possible to replace a part of the configuration of one embodiment with the configuration of another embodiment, and it is also possible to add the configuration of another embodiment to the configuration of one embodiment.
Some or all of the above-described configurations and functions, such as the units 11, 13, 14, 21, 51, 131, 132, 141, 142, 144, 211 to 214 and the database 4, may be realized by hardware, for example, by designing them as an integrated circuit. Further, as shown in FIG. 5, each of the above-described configurations and functions may be realized by software by having a processor such as the CPU 72 interpret and execute a program that realizes each function. Information such as the programs, tables, and files that realize each function can be stored in a recording device such as the memory 71, an HD (Hard Disk), or an SSD (Solid State Drive), or on an IC (Integrated Circuit) card or the like.
The control lines and information lines shown are those considered necessary for explanation, and not all the control lines and information lines in an actual product are necessarily shown. In practice, almost all configurations may be considered to be interconnected.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

In order to enable calculation of long-term staying time, the present invention is characterized by comprising: a video data processing unit (11) that acquires a video from a video recording device (3); a frame image analysis unit (13) that detects a person image in each of the images constituting the acquired video; a movement line analysis unit (14) that extracts a line of movement from the detected person images, detects movement line information which is information about the extracted line of movement, acquires multiple pieces of movement line information detected, and determines whether the multiple pieces of movement line information acquired represent the line of movement of the same person on the basis of person features in the lines of movement; a total staying time analysis unit (21) that calculates, on the basis of the pieces of movement line information determined as being derived from the same person, the total staying time of that person; and an output server (5) that outputs the calculated total staying time to an output unit.

Description

Video analysis device and video analysis method
The present invention relates to a technique of a moving image analysis device and a moving image analysis method.
Currently, in order to search for a person who remains captured by the same camera for a certain period of time or longer (hereinafter referred to as a staying person), stay detection using flow line detection is performed.
Techniques for performing such stay detection using flow line detection are disclosed in Patent Document 1 and Patent Document 2.
Patent Document 1 discloses a person behavior monitoring device and a person behavior monitoring system including a processor 12 that "acquires captured images 31 for each of a plurality of cameras 1, acquires tracking information by analyzing the captured images 31, and generates a surveillance image including an image relating to a prowling person, wherein the processor 12 generates, for each camera 1, a monitoring image in which a first person rectangle 32 indicating that a person detected from the captured image 31 is a prowling person candidate and a second person rectangle 32 indicating that the person is a prowling person are superimposed on the captured image 31 based on first and second tracking tables" (see summary).
Patent Document 2 discloses an image processing device, an image processing system, a control method of the image processing device, and a program in which "in an image processing device, an area for detecting a subject is set in an image to be processed, a plurality of temporally continuous images to be processed are analyzed to detect the subject, the residence time during which the subject stays in the area is determined based on the detection results of the subject in the plurality of temporally continuous images to be processed, and in the determination, when the subject moves from the inside of the area to the outside of the area and returns to the inside of the area, the time during which the subject was located outside the area is included in the residence time" (see summary).
Japanese Unexamined Patent Publication No. 2020-074506; Japanese Unexamined Patent Publication No. 2020-091649
Stay detection using flow line detection as described in Patent Document 1 and Patent Document 2 has the following two problems and is not suitable for long-term stay tracking.
(A) Generally, when a flow line is interrupted, it is determined that the staying person has left. Therefore, if there is a tracking error caused by an interrupted flow line, the total residence time of the person cannot be calculated. That is, if the staying person is blocked by an object or the like so that most of the person is not visible, the detection of the flow line is interrupted. Similarly, when the person being tracked for flow line detection moves out of the frame, the flow line is interrupted. As a result, even if the same person appears again, the stay is treated as that of a different person, and the total residence time cannot be calculated accurately.
(B) When occlusion (overlap with another person) occurs for the staying person, erroneous tracking occurs.
The present invention has been made in view of such a background, and an object of the present invention is to make it possible to calculate the residence time over a long period of time.
In order to solve the above-described problems, the present invention is characterized by comprising: a moving image acquisition unit that acquires a moving image from an imaging unit; a person detection unit that detects a person image in each of the images constituting the acquired moving image; a flow line detection processing unit that extracts a continuous flow line for each detected person image and detects flow line information, which is information about the extracted flow line; a determination unit that acquires a plurality of pieces of the once-interrupted flow line information detected by the flow line detection processing unit, and determines, for each of the acquired pieces of flow line information, whether or not it is the flow line of the same person based on the person feature amount in the flow line; a total residence time calculation unit that calculates the total residence time of the person based on the flow line information determined to belong to the same person; and an output processing unit that outputs the calculated total residence time to an output unit.
Other solutions will be described as appropriate in the embodiments.
According to the present invention, the residence time over a long period of time can be calculated.
FIG. 1 is a diagram showing a configuration example of the moving image analysis system according to the first embodiment.
FIG. 2 is a diagram showing a detailed configuration of the frame image analysis unit in the first embodiment.
FIG. 3 is a diagram showing a detailed configuration of the flow line analysis unit in the first embodiment.
FIG. 4 is a diagram showing a detailed configuration of the total residence time analysis unit in the first embodiment.
FIG. 5 is a hardware configuration diagram of the arithmetic unit.
FIG. 6 is a diagram showing an example of the person data.
FIG. 7 is a diagram showing an example of the flow line data.
FIG. 8 is a diagram showing an example of the frame data.
FIG. 9 is a diagram showing an example of the flow line re-detection data.
FIG. 10 is a diagram showing an example of the total residence time data.
FIG. 11 is a flowchart showing the procedure of the flow line collection process performed in the first embodiment.
FIG. 12 is a flowchart showing the procedure of the total residence time extraction process performed in the first embodiment.
FIG. 13 is a flowchart showing the detailed procedure of the moving image analysis process performed in the first embodiment.
FIG. 14 is a flowchart showing the detailed procedure of the flow line detection process performed in the first embodiment.
FIG. 15 is a flowchart showing the detailed procedure of the flow line re-detection process according to the first embodiment.
FIG. 16 is a flowchart showing the detailed procedure of the total residence time calculation process performed in the first embodiment.
FIG. 17 is a flowchart showing the detailed procedure of the total residence time detail screen display process performed in the first embodiment.
FIG. 18 is a diagram showing the monitoring video display screen.
FIG. 19 is a diagram showing the total residence time detail screen.
FIG. 20 is a diagram showing the personal stay information display screen.
FIG. 21 is a diagram explaining flow line transfer.
FIG. 22 is a flowchart showing the procedure of the total residence time extraction process performed in the second embodiment.
FIG. 23 is a diagram showing the result of the total residence time extraction process performed in the second embodiment.
FIG. 24 is a diagram showing a detailed configuration of the flow line analysis unit in the third embodiment.
FIG. 25 is a diagram showing an example of the flow line data used in the third embodiment.
FIG. 26 is a flowchart showing the procedure of the flow line detection process performed in the third embodiment.
FIG. 27 is a flowchart showing the detailed procedure of the flow line coupling process performed in the third embodiment.
FIG. 28 is a diagram showing a specific example of the flow line coupling process.
Next, embodiments for carrying out the present invention (referred to as "embodiments") will be described in detail with reference to the drawings as appropriate.
[First Embodiment]
First, the first embodiment of the present invention will be described with reference to FIGS. 1 to 20.
<System configuration diagram>
FIG. 1 is a diagram showing a configuration example of the moving image analysis system Z according to the first embodiment.
The moving image analysis system Z includes a moving image analysis server 1, a total residence time analysis server 2, a moving image shooting device (imaging unit) 3 such as a camera, a database 4, an output server 5, and an output device (output unit) 6.
The moving image analysis server 1 has a moving image data processing unit (moving image acquisition unit) 11, a temporary storage unit 12, a frame image analysis unit 13, and a flow line analysis unit 14.
The moving image data processing unit 11 extracts the data of the frame images (hereinafter referred to as frame image data) constituting the moving image data acquired from the moving image shooting device 3, and stores the extracted frame image data in the temporary storage unit 12. The temporary storage unit 12 is a storage area formed in the memory 71 (see FIG. 5).
The frame image analysis unit 13 extracts a person image from the extracted frame image data, and stores information about the extracted person image (the person information 411 in FIG. 2) in the person data 41 (see FIG. 6) held in the temporary storage unit 12. In addition, the frame image analysis unit 13 stores the person information 411 in the person data 41 of the database 4.
The flow line analysis unit 14 uses the results analyzed by the frame image analysis unit 13 to track the movement trajectory of each person image across the continuous frame image data and extract flow lines. Then, the flow line analysis unit 14 stores information about the extracted flow lines in the flow line data 42 of FIG. 7 stored in the database 4.
The total residence time analysis server 2 has a total residence time analysis unit 21.
The total residence time analysis unit 21 searches the flow line information once stored in the database 4 again to determine whether flow lines belong to the same person. Then, the total residence time analysis unit 21 calculates the total residence time of each person based on the result of the re-search. Further, the total residence time analysis unit 21 determines whether or not an alarm notification is necessary based on the calculated total residence time.
The output server 5 outputs the total residence time calculated by the total residence time analysis unit 21 to the output device 6, and outputs an alarm to the output device 6 when the total residence time analysis unit 21 determines that an alarm notification is necessary.
The database 4 stores the person data 41 (see FIG. 6), the flow line data 42 (see FIG. 7), the flow line re-detection data 44 (see FIG. 9), and the total residence time data 45 (see FIG. 10). The temporary storage unit 12 stores the person data 41 (see FIG. 6) and the frame data 43 (see FIG. 8). That is, the same person data 41 is stored in both the temporary storage unit 12 and the database 4. The data 41 to 45 stored in the database 4 and the temporary storage unit 12 will be described later.
In the example shown in FIG. 1, the moving image analysis server 1, the total residence time analysis server 2, the database 4, and the output server 5 are separate devices, but they may be a single integrated device. Alternatively, any two or more of these devices may be installed as one device.
(Frame image analysis unit 13)
FIG. 2 is a diagram showing a detailed configuration of the frame image analysis unit 13 in the first embodiment. In FIG. 2, the same components as those in FIG. 1 are designated by the same reference numerals and description thereof will be omitted.
The frame image analysis unit 13 has a frame image data acquisition unit 131 and a person extraction processing unit 132.
The frame image data acquisition unit 131 acquires the frame image data stored in the temporary storage unit 12 and passes it to the person extraction processing unit 132.
The person extraction processing unit 132 extracts a person image from the frame image data acquired by the frame image data acquisition unit 131. Then, the person extraction processing unit 132 stores the information (person information 411) related to the extracted person image in the person data 41 (see FIG. 6) of the temporary storage unit 12. In addition, the person extraction processing unit 132 stores the person information 411 in the person data 41 of the database 4.
(Flow line analysis unit 14)
FIG. 3 is a diagram showing a detailed configuration of the flow line analysis unit 14 in the first embodiment. In FIG. 3, the same components as those in FIG. 1 are designated by the same reference numerals and description thereof will be omitted.
The flow line analysis unit 14 has a person information acquisition unit 141 and a flow line detection processing unit (determination unit) 142.
The person information acquisition unit 141 acquires the person information 411 from the person data 41 (see FIG. 6) stored in the temporary storage unit 12.
The flow line detection processing unit 142 detects the flow line, which is the locus of each person in the continuous frame image data, based on the person information 411 acquired by the person information acquisition unit 141.
The flow line information (flow line information 421) generated as a result of processing by the flow line detection processing unit 142 is stored in the flow line data 42 stored in the database 4.
(Total residence time analysis unit 21)
FIG. 4 is a diagram showing a detailed configuration of the total residence time analysis unit 21 in the first embodiment. In FIG. 4, the same components as those in FIG. 1 are designated by the same reference numerals, and the description thereof will be omitted.
The total residence time analysis unit 21 includes a periodic search unit 211, a flow line re-detection unit 212, a total residence time calculation unit 213, and an alarm determination processing unit 214.
When the user has set the total residence time search to be performed periodically, the periodic search unit 211 determines whether or not the search time for the total residence time has been reached. The periodic search unit 211 can be omitted.
The flow line re-detection unit 212 searches the flow line information once stored in the database 4 again to determine whether flow lines belong to the same person. The flow line re-detection information 441 and the like, which are the results of the re-search, are stored in the flow line re-detection data 44 (see FIG. 9) of the database 4.
The total residence time calculation unit 213 calculates the total residence time of each person based on the flow line re-detection data 44 stored in the database 4.
The alarm determination processing unit 214 determines whether or not an alarm notification is necessary based on the calculated total residence time. The total residence time calculated by the total residence time calculation unit 213 and the result of the alarm-notification necessity determination by the alarm determination processing unit 214 are stored in the total residence time data 45 (see FIG. 10) in the database 4.
(Hardware configuration)
FIG. 5 is a hardware configuration diagram of the arithmetic unit 7.
Here, the arithmetic unit 7 constitutes a moving image analysis server 1, a total residence time analysis server 2, and an output server 5.
The arithmetic unit 7 includes a memory 71, a CPU (Central Processing Unit) 72, a storage device 73 such as an HD (Hard Disk), a communication device 74, and an input device 75.
The communication device 74 transmits / receives information to / from another device and the database 4.
The input device 75 is a keyboard, a mouse, or the like.
The programs stored in the storage device 73 are loaded into the memory 71 and executed by the CPU 72. As a result, the moving image data processing unit 11, the frame image analysis unit 13, the flow line analysis unit 14, and the total residence time analysis unit 21, as well as the units 131 to 132, 141 to 142, and 211 to 214 constituting them, are embodied.
<Database 4>
Next, each data 41 to 45 stored in the database 4 and the temporary storage unit 12 will be described with reference to FIGS. 6 to 10.
(Person data 41)
FIG. 6 is a diagram showing an example of person data 41.
The person data 41 has columns of "person ID", "frame ID", "rectangular information", "time stamp", "person feature amount", "traffic line ID", and "person image".
In the "person ID" field, an ID uniquely assigned to the person image detected in the frame image data is stored. Even for the same person, if the frame image data of the detection source is different, different person IDs are given.
In the "frame ID" column, an ID assigned to the frame image data in which the person image indicated by the "person ID" is detected is stored.
Information about a rectangle including a person image is stored in the "rectangle information" column.
In the "time stamp" column, information regarding the date and time when the frame image data indicated by the "frame ID" was captured is stored.
In the "person feature amount" column, information about the feature indicated by the detected person image is stored.
In the "traffic line ID" column, the ID given by the flow line analysis unit 14, and the flow line ID in which the ID connecting the person IDs determined to be the same person is stored, is traced to the flow line ID of a certain person. You can follow the flow line (trajectory of movement).
The detected person image data is stored in the "person image" column.
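As a reference, one record (the person information 411) of the person data 41 could be modeled as follows. This is a minimal sketch; the field types are assumptions, since the embodiment does not prescribe a concrete schema, and a real person feature amount would typically be a vector rather than a single number.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PersonInfo:
    """Sketch of one record (person information 411) of the person data 41."""
    person_id: int                      # unique per person image detected in a frame
    frame_id: int                       # frame image the person image was detected in
    rect: Tuple[int, int, int, int]     # rectangle enclosing the person image (x, y, w, h)
    timestamp: float                    # capture date and time of the frame image
    feature: float                      # person feature amount (a vector in practice)
    flow_line_id: Optional[int] = None  # assigned later by the flow line analysis unit 14
    image: bytes = b""                  # detected person image data
```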
(Flow line data 42)
FIG. 7 is a diagram showing an example of the flow line data 42.
The flow line data 42 has columns for "flow line ID", "list of person IDs", "start frame ID", "end frame ID", and "presence / absence of frame out".
In the "traffic line ID" column, the same data as the "traffic line ID" of the person data 41 is stored.
In the "list of person IDs" column, person IDs to which the same flow line ID is assigned are stored as a list.
In the "start frame ID" field, the frame ID of the first frame in which the person image to which the target flow line ID is assigned is detected is stored.
In the "end frame ID" column, the frame ID of the last frame in which the person image to which the target flow line ID is assigned is detected is stored.
The "presence/absence of frame-out" column stores information on whether a frame-out occurred. The presence or absence of a frame-out is determined by whether the person image was at the edge of the frame image in the last frame of the flow line. If the person image was at the edge of the frame image in the last frame of the flow line, "yes" is stored in the "presence/absence of frame-out" column.
 なお、図7において、それぞれの動線IDが示す動線は、例え、同一人物の動線があったとしても別人物の動線として認識されている。 Note that, in FIG. 7, the flow lines indicated by the respective flow line IDs are recognized as the flow lines of different persons even if there is a flow line of the same person.
(Frame data 43)
FIG. 8 is a diagram showing an example of frame data 43.
The frame data 43 has columns for "frame ID", "frame image", and "thumbnail image".
The "frame ID" column is an ID uniquely assigned to each frame image.
Frame image data is stored in the "frame image" field.
In the "thumbnail image" field, a thumbnail image in which the size of the frame image data in the "frame image" is reduced is stored.
(Flow line re-detection data 44)
FIG. 9 is a diagram showing an example of the flow line re-detection data 44.
The flow line re-detection data 44 has columns of "flow line ID", "residence start time", "residence end time", "residence time (seconds)", "person feature amount", "same person presence/absence", and "same person ID".
In the "flow line ID" column, the same as the flow line ID in the person data 41 of FIG. 6, an ID connecting the person IDs determined to be the same person is stored.
The "retention start time" column is the time indicated by the first frame image in a series of frame images including the person ID connected by the flow line ID (that is, the frame image connected by the flow line ID).
The "retention end time" column is the time indicated by the last frame image in the series of frame images including the person ID connected by the flow line ID (that is, the frame image connected by the flow line ID).
In the "retention time (seconds)" column, the time obtained by subtracting the "retention start time" from the "retention end time" is stored.
In the "person feature amount" column, the person feature amount of the person image associated with the flow line ID in the flow line data 42 of FIG. 7 is stored.
In the "presence / absence of same person" column, information indicating whether or not there is another flow line ID of the same person as the target flow line ID in the flow line stored in the flow line rediscovery data 44 is stored. The flow.
In the "same person ID" column, a common ID given to a flow line determined to be the same person is stored. In the example of FIG. 9, each of the flow line IDs to which "A" is given as the "same person ID" is a flow line ID determined to be the same person. Further, each of the flow line IDs to which "B" is assigned is a flow line ID that is determined to be the same person, although they are different from "A". The information stored in the "same person ID" is generally not the name of the person but the ID number.
(Total residence time data 45)
FIG. 10 is a diagram showing an example of total residence time data 45.
The total residence time data 45 has columns for "same person ID", "total residence time", and "excess determination".
The same person ID as in FIG. 9 is stored in the "same person ID" column.
In the "total residence time" column, the total residence time obtained by adding all the residence times of the flow line IDs to which the same person ID is given in the flow line rediscovery data 44 of FIG. 9 is stored.
In the "excess determination" column, whether the total residence time exceeds a predetermined time (first threshold value V1) ("excess") or the total residence time is a predetermined time (second threshold value V2; V1> V2). Information such as whether it exceeds (“warning”) is stored. Further, when the total residence time does not exceed a predetermined time (second threshold value V2), "normal" is stored in the "excess determination" column.
<Flow chart>
(Overall processing)
FIGS. 11 and 12 are flowcharts showing the procedure of the long-term stayer detection process performed in the first embodiment.
First, the flow line collection process will be described with reference to FIG.
FIG. 11 is a flowchart showing the procedure of the flow line collection process performed in the first embodiment.
First, the moving image data processing unit 11 and the frame image analysis unit 13 perform the moving image analysis process (S1). Details of the moving image analysis process will be described later.
Next, the flow line analysis unit 14 performs the flow line detection process (S2). Details of the flow line detection process will be described later. Information about the flow line is collected by steps S1 and S2.
Next, with reference to FIG. 12, the total residence time extraction process performed in response to the process of FIG. 11 will be described.
FIG. 12 is a flowchart showing the procedure of the total residence time extraction process performed in the first embodiment.
First, the total residence time analysis unit 21 performs the flow line re-detection process (S4). Details of the flow line re-detection process will be described later. In step S4, the flow lines that were recognized in steps S1 and S2 as belonging to different persons are searched to determine whether they are the flow lines of the same person.
Subsequently, the total residence time calculation unit 213 performs a total residence time calculation process for calculating the total residence time for each person using the result of step S4 (S5). Details of the total residence time calculation process will be described later.
After that, the output processing unit 51 performs screen display processing for displaying the total residence time calculated by the total residence time calculation processing on the display device (S6). Details of the screen display process will be described later.
(Video analysis processing)
FIG. 13 is a flowchart showing a detailed procedure of the moving image analysis process (step S1 in FIG. 11) performed in the first embodiment.
First, the moving image data processing unit 11 acquires the moving image data captured by the moving image shooting device 3 (S101).
Next, the moving image data processing unit 11 acquires each piece of frame image data constituting the acquired moving image data (S102) and stores each piece of frame image data in the temporary storage unit 12 (S103). The moving image data processing unit 11 assigns a frame ID to each piece of acquired frame image data and then stores the frame image data in the frame data 43 (see FIG. 8) of the temporary storage unit 12.
Subsequently, the frame image data acquisition unit 131 acquires frame image data from the temporary storage unit 12 (S111).
Next, the person extraction processing unit 132 extracts person images from the acquired frame image data based on person feature amounts (S112). Here, all the person images present in the frame image are extracted.
Then, the person extraction processing unit 132 assigns a person ID to each extracted person image, and stores, for each extracted person image, the person feature amount, the data related to the frame image, and the person image data in the person data 41 (see FIG. 6) (S113).
Subsequently, the flow line analysis unit 14 determines whether or not the extraction of the person images of all persons in the frame image being processed has been completed (completed for all persons?; S114).
If the extraction of the person images of all persons has not been completed (S114 → No), the flow line analysis unit 14 returns the process to step S111.
If the extraction of the person images of all persons has been completed (S114 → Yes), the moving image analysis server 1 returns the process to step S2 in FIG. 11.
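The flow of steps S101 to S114 could be sketched as below. This is a minimal sketch assuming OpenCV only for frame extraction; detect_persons() is a hypothetical placeholder for the feature-based person extraction of step S112, not a real detector.

```python
# Minimal sketch of the moving image analysis process (S101-S114), assuming
# OpenCV ("cv2") is available for reading frames from a video file.
import cv2

def detect_persons(frame):
    """Placeholder: return a list of (rect, feature) pairs, one per person
    image found in the frame. A real system would run a person detector and
    a feature extractor here (step S112)."""
    return []

def analyze_video(path: str):
    person_data = []                 # plays the role of the person data 41
    cap = cv2.VideoCapture(path)     # S101: acquire the moving image data
    frame_id = 0
    while True:
        ok, frame = cap.read()       # S102: take frame images one by one
        if not ok:
            break
        # S103: a real system would store the frame in the temporary storage unit 12
        for idx, (rect, feature) in enumerate(detect_persons(frame)):
            # S112-S113: store one person information record per person image
            person_data.append({
                "person_id": (frame_id, idx),  # unique per frame and person image
                "frame_id": frame_id,
                "rect": rect,
                "feature": feature,
            })
        frame_id += 1                # S114: repeat until all frames are processed
    cap.release()
    return person_data
```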
(Flow line detection processing)
FIG. 14 is a flowchart showing a detailed procedure of the flow line detection process (step S2 in FIG. 11) performed in the first embodiment.
First, the person information acquisition unit 141 acquires the person information 411 from the person data 41 (S201). As described above, the person information 411 is a record of the person data 41 shown in FIG. 6.
Subsequently, the flow line detection processing unit 142 compares, in the person data 41, the person feature amounts in the frame image immediately preceding the current frame image with the person feature amount calculated in the person image extraction of step S112, thereby determining whether or not person information 411 having a similar person feature amount (similar person information 411) exists in the preceding frame image (S202). A similar person feature amount means that the difference between the person feature amounts is within a predetermined range (person feature amount threshold).
If no similar person information 411 exists in the previous frame image (S202 → No), the flow line detection processing unit 142 assigns a new flow line ID to the person information 411 being processed (S203).
Further, the flow line detection processing unit 142 stores the flow line ID newly assigned in step S203 in the "flow line ID" column of the flow line data 42 shown in FIG. 7, and stores the person ID of the person information 411 being processed in the "list of person IDs" column (S204). The flow line detection processing unit 142 also stores the frame ID of the person information 411 being processed in the "start frame ID" column of the flow line data 42 of FIG. 7, and stores the assigned flow line ID in the flow line ID column of the person data 41 as well.
If similar person information 411 exists in the previous frame image (S202 → Yes), the flow line detection processing unit 142 determines whether or not the person image has been tracked for a predetermined time or longer (S211). In step S211, the flow line detection processing unit 142 first acquires, from the flow line data 42 shown in FIG. 7, the flow line information 421 having the same flow line ID as that of the similar person information 411. Here, the flow line information 421 is a record in the flow line data 42. Subsequently, the flow line detection processing unit 142 acquires the frame ID stored in the "start frame ID" column of the acquired flow line information 421; this frame ID is referred to as the start frame ID. Then, the flow line detection processing unit 142 searches the person data 41 using the acquired start frame ID as a key and acquires the time stamp stored in association with the frame ID corresponding to the start frame ID; this time stamp is referred to as the start time stamp. By comparing the time stamp stored in the person information 411 acquired in step S201 with the start time stamp, the flow line detection processing unit 142 determines whether or not the person image has been tracked for the predetermined time or longer.
If the person image has been tracked for the predetermined time or longer (S211 → Yes), the flow line detection processing unit 142 generates a new flow line ID (S203), thereby terminating the current flow line. If a flow line continued indefinitely, the flow line re-detection process described later could never be started for that person image; terminating the flow line at regular intervals solves this problem.
If the person image has not been tracked for the predetermined time or longer (S211 → No), the flow line detection processing unit 142 stores the flow line ID of the similar person information 411 in the flow line ID column of the person information 411 being processed (S212).
Further, the flow line detection processing unit 142 stores the person ID of the person information 411 being processed in the "list of person IDs" column of the record of the corresponding flow line ID in the flow line data 42 shown in FIG. 7 (S213).
Then, the flow line detection processing unit 142 determines whether or not the association with the flow line ID is completed for all the person information 411 (completed for all the person information 411; S214).
When the association with the flow line ID is not completed for all the person information 411 (S214 → No), the flow line detection processing unit 142 returns the processing to step S201.
When the association with the flow line ID has been completed for all the person information 411 (S214 → Yes), the flow line detection processing unit 142 detects the end frame image for each flow line ID and stores the frame ID of the detected end frame image in the "end frame ID" column of the flow line data 42 shown in FIG. 7 (S215). After that, the moving image analysis server 1 returns to the process of FIG. 11.
Although not shown in FIG. 14, the flow line detection processing unit 142 searches for the end frame image based on the end frame ID of the flow line data 42. If the person image is at the edge of the frame image in that end frame image, the flow line detection processing unit 142 stores "yes" in the "presence/absence of frame-out" column of the flow line data 42.
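The association logic of steps S201 to S215 could be sketched as below: each person record is matched to the previous frame's records by feature similarity, a matched record inherits the flow line ID, and a flow line tracked longer than a cutoff is deliberately terminated (S211). This is a minimal sketch under assumed record layouts and thresholds; features are plain numbers here, whereas the embodiment compares person feature amounts.

```python
# Minimal sketch of the flow line detection process (S201-S215).
FEATURE_THRESHOLD = 0.1     # "person feature amount threshold" (assumed value)
MAX_TRACK_SECONDS = 600.0   # cutoff of step S211 (assumed value)

def detect_flow_lines(person_data):
    """person_data: records sorted by frame, each with "frame_id",
    "timestamp", "person_id", and "feature". Adds a "flow_line_id" to every
    record and returns {flow_line_id: {"start_ts", "person_ids"}}."""
    flow_lines, next_id = {}, 0
    prev_frame = []                         # person records of the previous frame
    current_frame_id, current_frame = None, []
    for rec in person_data:                 # S201: take person information in order
        if rec["frame_id"] != current_frame_id:
            prev_frame, current_frame = current_frame, []
            current_frame_id = rec["frame_id"]
        match = next((p for p in prev_frame # S202: similar person in previous frame?
                      if abs(p["feature"] - rec["feature"]) <= FEATURE_THRESHOLD),
                     None)
        if match is not None:
            line = flow_lines[match["flow_line_id"]]
            if rec["timestamp"] - line["start_ts"] <= MAX_TRACK_SECONDS:
                rec["flow_line_id"] = match["flow_line_id"]       # S212-S213
                line["person_ids"].append(rec["person_id"])
                current_frame.append(rec)
                continue
            # S211 -> Yes: tracked too long, cut the flow line by starting a new one
        rec["flow_line_id"] = next_id       # S203-S204: assign a new flow line
        flow_lines[next_id] = {"start_ts": rec["timestamp"],
                               "person_ids": [rec["person_id"]]}
        next_id += 1
        current_frame.append(rec)
    return flow_lines
```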
(Flow line re-detection process)
FIG. 15 is a flowchart showing the detailed procedure of the flow line re-detection process (step S4 in FIG. 12) according to the first embodiment.
First, the periodic search unit 211 determines whether or not the time set by the user has elapsed since the time stamp referred to in the previous residence search (S401). In the present embodiment, the processes from step S402 onward are performed when the time set by the user has elapsed; alternatively, they may be performed when the user instructs the start of the total residence time search.
If the set time has not elapsed (S401 → No), the periodic search unit 211 returns the process to step S401.
If the time set by the user has elapsed (S401 → Yes), the flow line re-detection unit 212 acquires from the flow line data 42 all the flow line IDs within the specified time range (S403). Here, the specified time range is input by the user via the input device 75. The flow line re-detection unit 212 stores the flow line IDs acquired in step S403 in the "flow line ID" column of the flow line re-detection data 44 shown in FIG. 9.
Then, the flow line re-detection unit 212 selects one flow line ID from the acquired flow line IDs as the flow line re-detection target (S404).
Then, the flow line re-detection unit 212 searches, among the person images belonging to the flow line IDs acquired in step S403, for person images considered to be the same person as the person image belonging to the flow line ID selected in step S404 (search for the same person; S405). Specifically, the flow line re-detection unit 212 acquires the person ID stored in the flow line information 421 having the flow line ID selected in step S404; this person ID is referred to as the flow line re-detection person ID. Subsequently, the flow line re-detection unit 212 searches the person data 41 and acquires the person feature amount of the flow line re-detection person ID. The acquired person feature amount is referred to as the flow line re-detection person feature amount, and the person having this feature amount is referred to as the flow line re-detection person.
Subsequently, the flow line re-detection unit 212 acquires the person IDs in the flow line information 421 having the flow line IDs acquired in step S403. Then, the flow line re-detection unit 212 compares the person feature amounts acquired from the person data 41 based on these person IDs with the flow line re-detection person feature amount.
As a result of the search, the flow line re-detection unit 212 determines whether or not, among the person images belonging to the flow line IDs acquired in step S403, there is a person image that can be considered the same person as the flow line re-detection person (same person existence; S411). Specifically, the flow line re-detection unit 212 determines whether or not there is a person feature amount similar to (within a predetermined threshold of) the flow line re-detection person feature amount.
If no person image considered to be the same person exists (S411 → No), the flow line re-detection unit 212 stores, as a single person, the flow line re-detection information 441, which is information about the flow line re-detection person feature amount, in the flow line re-detection data 44 shown in FIG. 9 (S412), and proceeds to step S414.
If a person image considered to be the same person exists (S411 → Yes), the flow line re-detection unit 212 stores the flow line IDs considered to belong to the same person in the flow line re-detection data 44 shown in FIG. 9 (S413).
Here, the details of step S412 and step S413 will be described with reference to the flow line re-detection data 44 shown in FIG. 9.
(A) When the same person does not exist.
The flow line re-detection unit 212 stores information in the "residence start time", "residence end time", and "person feature amount" columns of the record corresponding to the flow line ID of the flow line re-detection person. For the "residence start time" and "residence end time", the "start frame ID" and "end frame ID" are acquired from the flow line data 42 using the flow line ID as a key. The flow line re-detection unit 212 then searches the person data 41, acquires the time stamps of the acquired "start frame ID" and "end frame ID", and stores these time stamps in the "residence start time" and "residence end time" columns of the flow line re-detection data 44.
Further, the flow line re-detection unit 212 stores the flow line re-detection person feature amount in the "person feature amount" column.
Further, the flow line re-detection unit 212 stores "none" in the "presence / absence of the same person" column, and stores the same person ID in the "same person ID" column so as not to be common with other person images. ("C" in the example of FIG. 9).
(B) When the same person exists.
The flow line re-detection unit 212 stores information in the "residence start time", "residence end time", and "person feature amount" columns of each of the records corresponding to the flow line ID of the flow line re-detection person and the flow line IDs of the person images determined in step S405 to be the same person. The information in these columns is the same as in (A) the case where the same person does not exist, so the description is omitted here. Further, "yes" is stored in the "same person presence/absence" column, and a same person ID that is not shared with other person images is stored in the "same person ID" column.
At this stage, the flow line re-detection unit 212 calculates the residence time for each flow line ID by taking the difference between the residence end time and the residence start time (S414), and stores the calculated residence time in the "residence time" column of the flow line re-detection data 44. The residence time calculated in step S414 corresponds to one flow line at a time.
After step S414, the flow line re-detection unit 212 excludes from the processing targets the flow line ID of the flow line re-detection person and the flow line IDs determined to belong to the same person as the flow line re-detection person (S415).
Then, the flow line re-detection unit 212 determines whether or not any of the flow line IDs acquired in step S403 remain (S416).
If flow line IDs remain (S416 → Yes), the process returns to step S403, and the flow line re-detection unit 212 selects a new flow line ID from the remaining flow line IDs.
If no flow line IDs remain (S416 → No), the total residence time analysis unit 21 returns the process to step S5 in FIG. 12.
With conventional techniques, a flow line that has been interrupted once is recognized as the flow line of a different person, which makes it difficult to obtain an accurate total residence time. In contrast, in the first embodiment, as shown in FIG. 15, data on all the flow lines is collected, and each flow line is searched again to determine whether a flow line of the same person exists. As a result, even a flow line of the same person that was interrupted once can be recognized as belonging to that person, so that the calculation accuracy of the total residence time described later can be improved.
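The grouping performed in steps S403 to S416 could be sketched as below; a minimal sketch under an assumed record layout, in which scalar features and a fixed similarity threshold stand in for the person feature amount comparison of step S411.

```python
# Minimal sketch of the flow line re-detection process (S403-S416): flow
# lines whose person feature amounts are similar are grouped under one
# "same person ID".
FEATURE_THRESHOLD = 0.1  # similarity threshold of step S411 (assumed value)

def redetect(records):
    """records: list of dicts with "flow_line_id" and "feature", one per
    flow line in the specified time range (step S403). Adds a
    "same_person_id" and "same_person_exists" to every record."""
    remaining = list(records)
    label = 0
    while remaining:                         # S416: repeat while IDs remain
        target = remaining.pop(0)            # S404: pick one re-detection target
        group = [r for r in remaining        # S405/S411: search for the same person
                 if abs(r["feature"] - target["feature"]) <= FEATURE_THRESHOLD]
        same_id = chr(ord("A") + label)      # e.g. "A", "B", ... as in FIG. 9
        label += 1
        target["same_person_exists"] = bool(group)   # S412/S413
        target["same_person_id"] = same_id
        for r in group:
            r["same_person_exists"] = True
            r["same_person_id"] = same_id
            remaining.remove(r)              # S415: exclude from further processing
    return records
```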
 (Total residence time calculation process)
 FIG. 16 is a flowchart showing the detailed procedure of the total residence time calculation process (step S5 in FIG. 12) performed in the first embodiment.
 First, the total residence time calculation unit 213 refers to the flow line re-detection data 44 shown in FIG. 9 and selects one flow line ID (S501).
 The total residence time calculation unit 213 then refers to the "same person ID" column of the flow line re-detection data 44 and acquires all flow line IDs having the same same-person ID as that associated with the flow line ID selected in step S501 (S502).
 Next, the total residence time calculation unit 213 refers to the "residence time" column of the flow line re-detection data 44 shown in FIG. 9 and calculates the sum of the residence times (the total residence time) associated with the flow line IDs acquired in steps S501 and S502 (S503). The total residence time calculation unit 213 stores the same-person ID and the total residence time calculated in step S503, associated with each other, in the "same person ID" and "total residence time" columns of the total residence time data 45 shown in FIG. 10.
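 Steps S501 to S503 amount to summing residence times keyed by the same-person ID. A minimal sketch, assuming each record of the flow line re-detection data 44 is reduced to a (same-person ID, residence time) pair:

  from collections import defaultdict

  def total_residence_times(records):
      # Steps S501-S503: sum the per-flow-line residence times for each same-person ID.
      totals = defaultdict(float)
      for same_person_id, residence_time in records:
          totals[same_person_id] += residence_time
      return dict(totals)

  # Illustrative records: person "A" owns two interrupted flow lines, person "C" one.
  records = [("A", 150.0), ("A", 300.0), ("C", 80.0)]
  print(total_residence_times(records))  # {'A': 450.0, 'C': 80.0}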
 Subsequently, the total residence time calculation unit 213 determines whether the calculated total residence time T exceeds a predetermined threshold value V (T > V?; S511).
 If the total residence time does not exceed the predetermined threshold (S511 → No), the total residence time calculation unit 213 proceeds to step S513.
 If the total residence time exceeds the predetermined threshold (S511 → Yes), the output processing unit 51 outputs an alarm by displaying the pop-up screen 812 on the surveillance video display screen 810 shown in FIG. 18 (S512).
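 The alarm branch of steps S511 and S512 is a single comparison against the threshold V. In sketch form, with a print statement standing in for pop-up screen 812 and an assumed threshold value:

  THRESHOLD_V = 600.0  # seconds; an assumed value for the predetermined threshold V

  def check_alarm(same_person_id: str, total_residence_time: float) -> None:
      if total_residence_time > THRESHOLD_V:  # S511 -> Yes
          # Stands in for displaying pop-up screen 812 on the surveillance video display screen 810 (S512).
          print(f"ALARM: person {same_person_id} stayed {total_residence_time:.0f} s in total")

  check_alarm("A", 450.0)  # below the threshold: no output
  check_alarm("B", 725.0)  # ALARM: person B stayed 725 s in total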
 The total residence time calculation unit 213 then searches the flow line re-detection data 44 and determines whether any unprocessed flow line IDs exist (S513).
 If an unprocessed flow line ID exists (S513 → Yes), the total residence time calculation unit 213 returns the process to step S501.
 If no unprocessed flow line ID exists (S513 → No), the total residence time analysis unit 21 returns the process to step S6 in FIG. 12.
 (Total residence time detail screen display process)
 FIG. 17 is a flowchart showing the detailed procedure of the total residence time detail screen display process (step S6 in FIG. 12) performed in the first embodiment.
 The total residence time detail screen display process shown in FIG. 17 is a process for displaying the total residence time detail screen 820 shown in FIG. 19.
 When an instruction to display the total residence time detail screen 820 shown in FIG. 19 is given, the output processing unit 51 sorts the total residence times stored in the "total residence time" column of the total residence time data 45 shown in FIG. 10 in descending order (S601). The display instruction for the total residence time detail screen 820 is described later.
 Subsequently, the output processing unit 51 selects one same-person ID from the total residence time data 45 (S602).
 Next, the output processing unit 51 refers to the "total residence time" column of the total residence time data 45 and determines whether the total residence time T corresponding to the selected same-person ID exceeds a first threshold value V1 (T > V1?; S603).
 If the first threshold value V1 is exceeded (S603 → Yes), the output processing unit 51 stores "exceeded" in the "excess determination" column of the corresponding record of the total residence time data 45 (see FIG. 10) (S611). The bar graph 821 of the person being processed on the total residence time detail screen 820 shown in FIG. 19 is then displayed in, for example, red (S612). That is, an output indicating that the total residence time exceeds the predetermined threshold is produced.
 If the first threshold value V1 is not exceeded (S603 → No), the output processing unit 51 determines whether the total residence time T corresponding to the selected same-person ID exceeds a second threshold value V2 (T > V2?; S621), where the first threshold value V1 > the second threshold value V2.
 If the second threshold value V2 is exceeded (S621 → Yes), the output processing unit 51 stores "warning" in the "excess determination" column of the corresponding record of the total residence time data 45 (S622). The bar graph 821 of the person being processed on the total residence time detail screen 820 shown in FIG. 19 is then displayed in, for example, orange (S623).
 If the second threshold value V2 is not exceeded (S621 → No), the output processing unit 51 stores "normal" in the "excess determination" column of the corresponding record of the total residence time data 45 (S631). The bar graph 821 of the person being processed on the total residence time detail screen 820 shown in FIG. 19 is then displayed in, for example, green (S632).
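 The branching of steps S603 to S632 is a two-threshold classification (with V1 > V2) that maps each total residence time to an excess determination and a bar graph color. A sketch under that assumption, with illustrative threshold values:

  def classify(total_residence_time: float, v1: float, v2: float):
      # Map a total residence time to (excess determination, bar graph color); requires v1 > v2.
      assert v1 > v2
      if total_residence_time > v1:   # S603 -> Yes
          return "exceeded", "red"    # S611, S612
      if total_residence_time > v2:   # S621 -> Yes
          return "warning", "orange"  # S622, S623
      return "normal", "green"        # S631, S632

  # Illustrative thresholds, in seconds.
  for t in (900.0, 500.0, 60.0):
      print(t, classify(t, v1=600.0, v2=300.0))
  # 900.0 ('exceeded', 'red') / 500.0 ('warning', 'orange') / 60.0 ('normal', 'green')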
 After steps S612, S623, and S632, the output processing unit 51 determines whether processing has been completed for all same-person IDs (S633).
 If processing has not been completed for all same-person IDs (S633 → No), the output processing unit 51 returns the process to step S602 and acquires one unprocessed same-person ID (S602).
 If processing has been completed for all same-person IDs (S633 → Yes), the output processing unit 51 returns to the process of FIG. 12.
 In this embodiment, the excess determination is indicated in three levels, "exceeded", "warning", and "normal", but the present invention is not limited to this. For example, the excess determination may be indicated in four levels: "exceeded", "warning", "caution", and "normal".
 <Screen example>
 (Surveillance video display screen 810)
 FIG. 18 is a diagram showing the surveillance video display screen 810.
 The surveillance video display screen 810 has a surveillance video display section 811, a total residence time detail screen display button 813, and the like.
 The surveillance video display section 811 displays the video captured by the video capture device 3.
 When the user selects the total residence time detail screen display button 813 (a display instruction for the total residence time detail screen 820), the total residence time detail screen 820 shown in FIG. 19 is displayed on the display unit.
 Further, when the total residence time data 45 shown in FIG. 10 contains a person whose total residence time exceeds the predetermined time ("exceeded"), a pop-up screen 812 notifying that such a person exists is displayed, as shown in FIG. 18. When the user selects the pop-up screen 812, the total residence time detail screen 820 shown in FIG. 19 is displayed on the display unit (a display instruction for the total residence time detail screen 820).
 In the example shown in FIG. 18, the pop-up screen 812 notifies about persons determined to be "exceeded" in the total residence time data 45 shown in FIG. 10. However, notification via the pop-up screen 812 may also be given for persons determined to be "warning" or "caution".
 (Total residence time detail screen 820)
 FIG. 19 is a diagram showing the total residence time detail screen 820.
 As described above, the total residence time detail screen 820 is displayed when the total residence time detail screen display button 813 or the pop-up screen 812 in FIG. 18 is selected.
 The total residence time detail screen 820 has a total residence time display section 825 and a person image display section 823.
 The total residence time display section 825 displays the total residence time of each person as a bar graph 821, based on the "total residence time" stored in the total residence time data 45 shown in FIG. 10. In the example shown in FIG. 19, the total residence times of four persons (staying persons) "A", "B", "C", and "D" are displayed. Note that "A", "B", "C", and "D" are identification numbers or the like, not the persons' names. When the user selects one of the bar graphs 821 via the input device 75, the personal residence information display screen 830 shown in FIG. 20 is displayed, showing the residence information of the person associated with the selected bar graph 821. Here, the broken line 822 indicates the excess determination threshold (the first threshold value V1 in FIG. 17), and the bar graph 821a of person "A", whose total residence time exceeds the broken line 822, is displayed in red, for example. The bar graph 821b of person "B", whose total residence time does not exceed the broken line 822 but is close to it (in FIG. 17, the total residence time T satisfies V1 > T > V2, where V2 is the second threshold value), is displayed in orange, for example. The bar graphs 821c of the other persons "C" and "D" are displayed in green, indicating that their total residence times are normal. The color of each bar graph 821 is determined by the output processing unit 51 searching the total residence time data 45 (see FIG. 10), as described above. Here, a normal total residence time means that the total residence time is not prolonged.
 The person image display section 823 displays an image (person image) of each person whose total residence time is displayed in the total residence time display section 825. This person image is acquired by the output processing unit 51 referring to the "person image" column of the person data 41 in FIG. 6. The output processing unit 51 acquires the flow line IDs associated in the "same person ID" column of the flow line re-detection data 44 shown in FIG. 9 with each person whose total residence time is displayed. Subsequently, the output processing unit 51 refers to the "list of person IDs" column of the flow line data 42 shown in FIG. 7 and acquires the start frame ID or the end frame ID associated with each flow line ID. Here, it is assumed that the start frame ID is acquired. Although multiple candidate start frame IDs may exist at this point, it suffices to acquire any one of them (for example, the earliest stored start frame ID).
 (Personal residence information display screen 830)
 FIG. 20 is a diagram showing the personal residence information display screen 830.
 The personal residence information display screen 830 displays detailed information about the residence time of a person whom the video analysis server 1 has determined to be the same person. As shown in FIG. 20, the personal residence information display screen 830 has a residence time information display section 831, a person image display section 836, check boxes 837, a delete button 838, a total residence time display section 839B, and a search start time display section 839A. In the example of FIG. 20, the past four hours from the current time (13:00:00) are covered; that is, the search start time for the residence time is 9:00:00.
 The residence time information display section 831 displays, for each flow line, the residence time of the person whom the video analysis server 1 has determined to be the same. That is, each of the flow line bars 832 displayed in the residence time information display section 831 corresponds to one of the flow line IDs in FIG. 7. Line 834A in the residence time information display section 831 indicates the search start time, and line 834B indicates the current time; that is, the video analysis server 1 searches for residence time over the interval between line 834A and line 834B (arrow 835). The frame-out display 833 indicates the time during which the corresponding person is out of the frame. The frame-out display 833 is shown when the video analysis server 1 refers to the "presence/absence of frame-out" column shown in FIG. 7. That is, the output processing unit 51 refers to the "presence/absence of frame-out" column of the flow line data 42, and if the frame-out is "present", displays a frame-out display 833 between the flow line bar 832 indicated by that flow line information 421 and the next flow line bar 832.
 The person image display section 836 displays the person images 836a and 836b associated with the flow line bars 832. That is, the person image display section 836 displays the person images 836a and 836b corresponding to the person IDs in the "list of person IDs" column associated in FIG. 7 with the flow line IDs indicated by the flow line bars 832. A check box 837 is displayed in association with each of the person images 836a and 836b. The fact that person image 836a differs from the other person images 836b is described later.
 The search start time display section 839A displays the search start time. The total residence time display section 839B displays the total residence time calculated in step S503 of FIG. 16; that is, it displays the sum of the times indicated by the flow line bars 832 displayed in the residence time information display section 831.
 Here, the person image 836a differs from the other person images 836b. This indicates that the video analysis server 1 has erroneously recognized the person in person image 836a and the persons in the other person images 836b as the same person. In such a case, as shown in FIG. 20, the user checks, via the input device 75, the check box 837 displayed in association with that person image (check box 837a in the example of FIG. 20). When the user then selects the delete button via the input device 75, the person image 836a and the flow line bar 832a associated with it are deleted. Although one person image 836a and one flow line bar 832a are deleted in FIG. 20, multiple person images 836a and flow line bars 832a can be deleted by checking multiple check boxes 837. In other words, the information of a given flow line can be deleted. After the person image 836a and the flow line bar 832a are deleted, the video analysis server 1 recalculates the sum (the total residence time) of the flow line bars 832 displayed on the personal residence information display screen 830.
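 The correction flow on this screen, checking the misrecognized flow lines, deleting them, and recalculating, can be sketched as a pure function; the list-of-dicts representation below is assumed purely for illustration:

  def delete_and_recalculate(flow_lines, checked_ids):
      # Remove the checked flow lines (delete button 838) and re-sum the remainder.
      kept = [fl for fl in flow_lines if fl["flow_id"] not in checked_ids]
      total = sum(fl["residence_time"] for fl in kept)  # recalculated total residence time
      return kept, total

  flow_lines = [
      {"flow_id": "00002", "residence_time": 150.0},
      {"flow_id": "00007", "residence_time": 300.0},
      {"flow_id": "00031", "residence_time": 90.0},  # the misrecognized person (cf. 836a, 832a)
  ]
  kept, total = delete_and_recalculate(flow_lines, checked_ids={"00031"})
  print(total)  # 450.0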
 According to the first embodiment, whether a plurality of interrupted flow lines belong to the same person is determined based on the person feature amount, and if they are determined to belong to the same person, the total residence time is calculated treating them as that person's flow lines. In this way, by joining the interrupted flow lines of the same staying person via the person feature amount, a residence time spanning a long period can be calculated. Furthermore, according to the first embodiment, even if a frame-out occurs, the same person's flow lines can be detected by determining, based on the person feature amount, whether the interrupted flow lines belong to the same person. Therefore, the total residence time can be calculated even if a frame-out occurs.
 As shown in FIG. 20, when a different person is erroneously recognized as a person subject to total residence time calculation, the erroneously recognized person's flow line can be excluded by checking that person's check box 837 and selecting the delete button 838. Because tracking errors therefore do not accumulate, long-term residence monitoring becomes possible.
 Furthermore, as shown in FIG. 19, the total residence time of each person is shown as a bar graph 821 together with that person's image, and when the calculated total residence time exceeds the predetermined threshold, this is indicated by changing the color of that person's bar graph 821. In this way, the user can easily recognize persons whose total residence time exceeds the predetermined threshold.
 [Second Embodiment]
 Next, a second embodiment of the present invention will be described with reference to FIGS. 21 to 23. The second embodiment relates to preventing flow line transfer.
 FIG. 21 is a diagram illustrating flow line transfer.
 In FIG. 21, person H1 is detected at first, but at time T1 person H2 is detected overlapping person H1. If the feature amounts of person H1 and person H2 are close, the video analysis server 1 confuses person H2 with person H1, and from time T1 onward performs flow line detection treating person H2 as person H1. That is, flow line transfer occurs (thick solid arrow 901). The second embodiment aims to prevent such flow line transfer. It is assumed that person H1 stops for a while at time T1 and then starts moving; accordingly, from time T1 onward, person H1 is detected as a flow line (thick broken arrow 902) different from the thick solid arrow 901.
 <Flowchart>
 FIG. 22 is a flowchart showing the procedure of the total residence time extraction process performed in the second embodiment.
 In FIG. 22, processes identical to those in FIG. 12 are given the same step numbers and their description is omitted.
 FIG. 22 differs from FIG. 12 in that, first, the user sets the person feature amount threshold below a predetermined value via the input device 75 (step S3). At this point, the person feature amount threshold is set as small as possible.
 Setting the person feature amount threshold small in this way increases the probability of a "No" determination in step S202 of FIG. 14; that is, the flow lines are intentionally made easy to interrupt. The effect of doing so is described with reference to FIG. 23.
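 The effect of step S3 can be illustrated with the per-frame matching decision: the smaller the allowed feature difference, the more matches the tracker refuses, deliberately cutting the flow line instead of risking a transfer. A sketch, with a Euclidean feature distance standing in for whatever comparison the tracker actually performs:

  import numpy as np

  def continues_track(prev_feat, cur_feat, threshold: float) -> bool:
      # Continue the flow line only if the feature difference is within the threshold (cf. S202).
      return float(np.linalg.norm(prev_feat - cur_feat)) <= threshold

  h1 = np.array([0.9, 0.1]); h2 = np.array([0.7, 0.4])  # overlapping persons H1 and H2
  print(continues_track(h1, h2, threshold=0.5))  # True: a loose threshold risks flow line transfer
  print(continues_track(h1, h2, threshold=0.1))  # False: a small threshold cuts the flow line at T1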
 FIG. 23 is a diagram showing the result of performing the process shown in FIG. 22.
 In FIG. 23, as in FIG. 21, person H1 is detected (flow line M0), and at time T1 person H2 is detected overlapping person H1. However, because the person feature amount threshold is small, person H1 before time T1 can be distinguished from person H2, who overlaps person H1 at time T1. The flow line is therefore interrupted at time T1. The flow line at time T1 becomes the start of flow line M1.
 After that, the flow line re-detection process shown in FIG. 15 is performed, and each of flow lines M1 and M2 is compared with the persons of the other flow lines and grouped with the flow lines of the person considered to be the same.
 With conventional techniques, once flow lines are judged to be different, they are recognized as belonging to different people even when they actually belong to the same person. Consequently, with those techniques, if the person feature amount threshold were set small as in this embodiment, a change in the same person's feature amount caused, for example, by lighting conditions would interrupt the flow line, which would then be recognized as another person's. With conventional techniques, therefore, the person feature amount threshold cannot be set small.
 In the second embodiment, by contrast, even if a flow line is interrupted, the flow line re-detection process shown in FIG. 15 groups the flow lines of the same person, so the person feature amount threshold can be set small. The second embodiment can thereby prevent flow line transfer.
 [Third Embodiment]
 Next, a third embodiment of the present invention will be described with reference to FIGS. 24 to 28. In the third embodiment, when two interrupted flow lines are determined to belong to the same person, the two flow lines are joined so that they can be recognized as one flow line. In the first embodiment, interrupted flow lines remain interrupted while the same person's flow lines are collected. In the third embodiment, by contrast, if the person feature amounts and the persons' positions in the images are close, the flow lines are joined and treated as a single flow line.
 <Flow line analysis unit 14b>
 FIG. 24 is a diagram showing the detailed configuration of the flow line analysis unit 14b in the third embodiment. In FIG. 24, components identical to those in FIG. 3 are given the same reference numerals and their description is omitted.
 The flow line analysis unit 14b of the video analysis server 1b differs from the flow line analysis unit 14 shown in FIG. 3 in that it has a flow line joining unit 144 that joins flow lines determined to belong to the same person.
 Because the configuration of the video analysis server 1b other than the flow line joining unit 144 is the same as that shown in FIGS. 1, 2, and 4, its illustration and description are omitted.
 <Flow line data 42b>
 FIG. 25 is a diagram showing an example of the flow line data 42b used in the third embodiment.
 The flow line data 42b in FIG. 25 differs from the flow line data 42 shown in FIG. 7 in that it has a "next flow line ID" column.
 For example, suppose the flow line indicated by flow line ID "00002" is determined to belong to the same person as the flow line indicated by flow line ID "00007". In this case, the flow line ID "00007" is stored in the "next flow line ID" column of the record for flow line ID "00002". As a result, the flow line of flow line ID "00002" and the flow line of flow line ID "00007" are joined and recognized as one flow line. In the example of FIG. 25, the flow line of flow line ID "00015" is joined with the flow line of flow line ID "00019", which in turn is joined with the flow line of flow line ID "00023".
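 Structurally, the "next flow line ID" column turns each record into a node of a singly linked chain. Following one chain to enumerate a joined flow line can be sketched as follows; the dictionary layout is illustrative:

  def joined_chain(flow_data: dict, start_id: str) -> list:
      # Follow the "next flow line ID" links from start_id and return the joined sequence.
      chain, current = [], start_id
      while current is not None:
          chain.append(current)
          current = flow_data[current].get("next_flow_id")
      return chain

  # Mirrors the FIG. 25 example: 00015 -> 00019 -> 00023.
  flow_data = {
      "00015": {"next_flow_id": "00019"},
      "00019": {"next_flow_id": "00023"},
      "00023": {"next_flow_id": None},
  }
  print(joined_chain(flow_data, "00015"))  # ['00015', '00019', '00023']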
 <Flowcharts>
 (Flow line detection process)
 FIG. 26 is a flowchart showing the procedure of the flow line detection process performed in the third embodiment. In FIG. 26, processes identical to those in FIG. 13 are given the same step numbers and their description is omitted.
 FIG. 26 differs from FIG. 13 in that the flow line joining process (S221) is performed after step S215. The flow line joining process is detailed below.
 (Flow line joining process)
 FIG. 27 is a flowchart showing the detailed procedure of the flow line joining process (S221 in FIG. 26) performed in the third embodiment.
 First, the flow line joining unit 144 selects two flow line IDs from the flow line data 42 (S701). The flow line IDs selected here are such that the time of the last frame image of one flow line and the time of the first frame image of the other lie within a predetermined range.
 Then, of the two flow lines indicated by the selected flow line IDs, rectangle information is acquired from the last frame image of the temporally earlier flow line (referred to as the first flow line) (S702). The rectangle information acquired in step S702 is referred to as the first rectangle information.
 Next, the flow line joining unit 144 acquires rectangle information from the first frame image of the flow line whose first frame image exists within a predetermined time of the last frame image of the first flow line (referred to as the second flow line) (S703). The rectangle information acquired in step S703 is referred to as the second rectangle information. The first rectangle information and the second rectangle information are acquired from the flow line data 42 and the person data 41.
 Subsequently, the flow line joining unit 144 determines whether the distance D between the position P1 of the first rectangle information and the position P2 of the second rectangle information is less than or equal to a predetermined value D1 (D ≤ D1?; S711). Here, the position of each rectangle is, for example, the center of the rectangle, and can be set as appropriate by the user.
 If the distance D is greater than the predetermined value D1 (S711 → No), the flow line joining unit 144 proceeds to step S721.
 If the distance D is less than or equal to the predetermined value D1 (S711 → Yes), the flow line joining unit 144 compares the person feature amount in the first rectangle information (referred to as the first person feature amount C1) with the person feature amount in the second rectangle information (referred to as the second person feature amount C2). The flow line joining unit 144 then determines whether the absolute value of the difference between the first person feature amount C1 and the second person feature amount C2 is less than or equal to a predetermined value C (|C1 − C2| ≤ C?; S712).
 If the absolute value of the difference between the first person feature amount C1 and the second person feature amount C2 is greater than the predetermined value C (S712 → No), the flow line joining unit 144 proceeds to step S721.
 If the absolute value of the difference between the first person feature amount C1 and the second person feature amount C2 is less than or equal to the predetermined value C (S712 → Yes), the flow line joining unit 144 determines that the first flow line and the second flow line belong to the same person and joins them (S713). In the flow line data 42b shown in FIG. 25, the flow line joining unit 144 stores the flow line ID of the second flow line in the "next flow line ID" column of the flow line information 421 indicating the first flow line.
 After step S711 "No", step S712 "No", or step S713, the flow line joining unit 144 determines whether the processing of steps S701 to S713 has been completed for all flow line IDs (S721).
 If the processing of steps S701 to S713 has not been completed for all flow line IDs (S721 → No), the flow line joining unit 144 returns the process to step S701.
 If the processing of steps S701 to S713 has been completed for all flow line IDs (S721 → Yes), the flow line analysis unit 14b returns to the process of FIG. 26.
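 Steps S711 to S713 combine a spatial gate and a feature gate. The pairwise decision can be sketched as follows, assuming rectangle positions are center coordinates and feature amounts are vectors; the predetermined values D1 and C are tuning parameters, not values disclosed here:

  import numpy as np

  def should_join(rect1: dict, rect2: dict, d1: float, c: float) -> bool:
      # rect1: last-frame rectangle of the first (earlier) flow line.
      # rect2: first-frame rectangle of the second flow line.
      pos_dist = float(np.linalg.norm(np.asarray(rect1["pos"]) - np.asarray(rect2["pos"])))
      if pos_dist > d1:        # S711 -> No
          return False
      feat_diff = float(np.linalg.norm(np.asarray(rect1["feat"]) - np.asarray(rect2["feat"])))
      return feat_diff <= c    # S712

  r1 = {"pos": (120, 80), "feat": (0.90, 0.10)}  # rectangle R1 (person H11, frame F1)
  r2 = {"pos": (128, 83), "feat": (0.88, 0.12)}  # rectangle R2 (frame F2)
  print(should_join(r1, r2, d1=20.0, c=0.1))     # True: store the second ID in "next flow line ID"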
 <Concrete example of the flow line joining process>
 FIG. 28 is a diagram showing a concrete example of the flow line joining process.
 Frame image F1 is the last of the frame images constituting one flow line. Frame image F2 is a frame image within a predetermined time of the time indicated by frame image F1 and is the first frame image of a flow line different from the one to which frame image F1 belongs. Rectangle R1 contains the person image of person H11 in frame image F1. Rectangles R2 and R3 contain images of persons (person images) in frame image F2.
 Here, the flow line joining unit 144 compares the distance between the positions of rectangle R1 and rectangle R2 in the frame images with the distance between the positions of rectangle R1 and rectangle R3. This comparison corresponds to the processing of step S711 in FIG. 27. In the example of FIG. 28, the distance between rectangle R1 and rectangle R2 (solid arrow) is shorter than the distance between rectangle R1 and rectangle R3 (broken arrow). The flow line joining unit 144 therefore compares the person feature amount of rectangle R1 with that of rectangle R2; this comparison corresponds to step S712 in FIG. 27. If these rectangles' person images have close person feature amounts, the person in rectangle R1 and the person in rectangle R2 are determined to be the same person, and their flow lines are joined.
 According to the third embodiment, by joining the flow lines that can be joined before the flow line re-detection process is performed, the number of flow lines used in the flow line re-detection process can be reduced. This keeps low the probability of erroneously recognizing a different person and including that person in the total residence time calculation, as with the flow line bar 832a in FIG. 20.
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the embodiments described above have been explained in detail for ease of understanding of the present invention, and the invention is not necessarily limited to configurations having all of the described elements. It is also possible to replace part of the configuration of one embodiment with the configuration of another embodiment, and to add the configuration of another embodiment to the configuration of one embodiment. Furthermore, part of the configuration of each embodiment can have other elements added, deleted, or substituted.
 The configurations, functions, units 11, 13, 14, 21, 51, 131, 132, 141, 142, 144, 211 to 214, database 4, and so on described above may be realized in hardware, in part or in whole, for example by designing them as integrated circuits. As shown in FIG. 5, the configurations, functions, and the like described above may also be realized in software by a processor such as the CPU 72 interpreting and executing programs that implement the respective functions. Information such as the programs, tables, and files that implement the functions can be stored not only on an HD (Hard Disk) but also in the memory 71, in a recording device such as an SSD (Solid State Drive), or on a recording medium such as an IC (Integrated Circuit) card, an SD (Secure Digital) card, or a DVD (Digital Versatile Disc).
 In each embodiment, the control lines and information lines shown are those considered necessary for explanation; not all control lines and information lines in the product are necessarily shown. In practice, almost all configurations may be considered interconnected.
 3   Video capture device (imaging unit)
 6   Output device (output unit)
 11  Video data processing unit (video acquisition unit)
 41  Person data (including person images)
 75  Input device (input unit)
 42b Flow line data (including data of joined flow lines)
 51  Output processing unit
 132 Person extraction processing unit (person detection unit)
 142 Flow line detection processing unit (determination unit)
 213 Total residence time calculation unit
 421 Flow line information
 821, 821a, 821b Bar graphs (information on the total residence time; output indicating that the total residence time exceeds the predetermined threshold)
 823 Person image display section (information on the person whose total residence time is calculated)
 837, 837a Check boxes (for deleting the information of a given flow line)
 838 Delete button (deletes the information of a given flow line)
 S1  Video analysis process (person detection step)
 S2  Flow line detection process (flow line detection processing step)
 S3  Flow line re-detection process (determination step)
 S4  Total residence time calculation process (total residence time calculation step)
 S5  Screen display process (output processing step)
 Z   Video analysis system (video analysis device)

Claims (7)

  1.  A video analysis device comprising:
     a video acquisition unit that acquires video from an imaging unit;
     a person detection unit that detects a person image in each of the images constituting the acquired video;
     a flow line detection processing unit that extracts a continuous flow line for each detected person image and detects flow line information, which is information about the extracted flow line;
     a determination unit that acquires a plurality of pieces of the flow line information detected by the flow line detection processing unit, each of which has been interrupted once, and determines, for each of the acquired pieces of flow line information, whether it is the flow line of the same person, based on the person feature amount in the flow line;
     a total residence time calculation unit that calculates the total residence time of that person based on the flow line information determined to belong to the same person; and
     an output processing unit that outputs the calculated total residence time to an output unit.
  2.  The video analysis device according to claim 1, wherein the flow line detection processing unit detects the flow line such that the time indicated by the flow line does not exceed a predetermined time.
  3.  The video analysis device according to claim 1, wherein the output processing unit displays, on the output unit, information on the calculated total residence time and information on the person for whom the total residence time is calculated, and, when the calculated total residence time exceeds a predetermined threshold, produces an output indicating that the total residence time exceeds the predetermined threshold.
  4.  The video analysis device according to claim 1, wherein the output processing unit outputs information on each person's flow lines to the output unit, and the information of a given flow line can be deleted via an input unit.
  5.  The video analysis device according to claim 1, wherein the flow line detection processing unit detects the flow line based on the person feature amount, and the threshold of the person feature amount used when extracting the flow line is set as small as possible.
  6.  The video analysis device according to claim 1, wherein, when different flow lines are determined to be the flow lines of the same person based on the person feature amount and the positions of the person images in the preceding and following images, the flow line detection processing unit joins those flow lines.
  7.  A video analysis method in which a video analysis device that calculates the total residence time of a person executes:
     a person detection step of detecting a person image in each of the images constituting video acquired from an imaging unit;
     a flow line detection processing step of extracting a continuous flow line for each detected person image and detecting flow line information, which is information about the extracted flow line;
     a determination step of acquiring a plurality of pieces of the flow line information detected by the flow line detection processing step, each of which has been interrupted once, and determining, for each of the acquired pieces of flow line information, whether it is the flow line of the same person, based on the person feature amount in the flow line;
     a total residence time calculation step of calculating the total residence time of that person based on the flow line information determined to belong to the same person; and
     an output processing step of outputting the calculated total residence time to an output unit.
PCT/JP2021/027931 2020-07-28 2021-07-28 Video analysis device and video analysis method WO2022025127A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-127762 2020-07-28
JP2020127762A JP6794575B1 (en) 2020-07-28 2020-07-28 Video analysis device and video analysis method

Publications (1)

Publication Number Publication Date
WO2022025127A1 true WO2022025127A1 (en) 2022-02-03

Family

ID=73544681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/027931 WO2022025127A1 (en) 2020-07-28 2021-07-28 Video analysis device and video analysis method

Country Status (2)

Country Link
JP (1) JP6794575B1 (en)
WO (1) WO2022025127A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7162761B1 (en) * 2021-05-26 2022-10-28 三菱電機株式会社 Person tracking device and person tracking method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006215858A (en) * 2005-02-04 2006-08-17 Toshiba Corp Entrance and exit management system and entrance and exit management method
JP2016048834A (en) * 2014-08-27 2016-04-07 パナソニックIpマネジメント株式会社 Monitoring device, monitoring system and monitoring method
JP2017017488A (en) * 2015-06-30 2017-01-19 キヤノン株式会社 Monitoring system, imaging control method, and program
JP2019125238A (en) * 2018-01-18 2019-07-25 グローリー株式会社 Stay time detection system, stay time detection device and stay time detection method
JP2019194857A (en) * 2018-05-04 2019-11-07 キヤノン株式会社 Object tracking method and device
JP2020091649A (en) * 2018-12-05 2020-06-11 キヤノン株式会社 Image processing device, image processing system, control method of image processing device, and program


Also Published As

Publication number Publication date
JP6794575B1 (en) 2020-12-02
JP2022024915A (en) 2022-02-09

Similar Documents

Publication Publication Date Title
JP6854881B2 (en) Face image matching system and face image search system
US9141184B2 (en) Person detection system
KR100808316B1 (en) Person searching device, person searching method and access control system
US9002069B2 (en) Social media event detection and content-based retrieval
JP6885682B2 (en) Monitoring system, management device, and monitoring method
JP2016076073A (en) Data processing device, data processing method, and computer program
US20160350583A1 (en) Image search system and image search method
JP5751321B2 (en) Information processing apparatus and information processing program
JP5982557B2 (en) Video surveillance system and image search system
WO2022025127A1 (en) Video analysis device and video analysis method
JP2019101664A (en) Estimating program, estimating system, and estimating method
JP5202419B2 (en) Security system and security method
US9613271B2 (en) Determining severity of a geomagnetic disturbance on a power grid using similarity measures
US11348367B2 (en) System and method of biometric identification and storing and retrieving suspect information
JP5236607B2 (en) Anomaly detection device
JP2010117952A (en) Apparatus and method for identifying object
US10783365B2 (en) Image processing device and image processing system
JP2006163527A (en) Image retrieval device and method
US20230131717A1 (en) Search processing device, search processing method, and computer program product
Suprem et al. Evaluating generalizability of fine-tuned models for fake news detection
JP7235820B2 (en) People Flow Analysis Method and Apparatus by Similar Image Retrieval
CN112541403B (en) Indoor personnel falling detection method by utilizing infrared camera
JP2010087937A (en) Video detection device, video detection method and video detection program
JP7414660B2 (en) Abnormal behavior detection system and abnormal behavior detection method
JP6923011B2 (en) Travel time storage system, travel time storage method and travel time storage program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21850934

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21850934

Country of ref document: EP

Kind code of ref document: A1