WO2014021004A1 - Image processing system, image processing method, and program - Google Patents

Image processing system, image processing method, and program Download PDF

Info

Publication number
WO2014021004A1
WO2014021004A1 (PCT/JP2013/066565)
Authority
WO
WIPO (PCT)
Prior art keywords
person
input
moving image
image processing
video camera
Prior art date
Application number
PCT/JP2013/066565
Other languages
French (fr)
Japanese (ja)
Inventor
祐介 高橋 (Yusuke Takahashi)
Original Assignee
日本電気株式会社 (NEC Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to SG11201500693QA priority Critical patent/SG11201500693QA/en
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to RU2015106938A priority patent/RU2015106938A/en
Priority to US14/416,716 priority patent/US10841528B2/en
Priority to JP2014528039A priority patent/JP6332833B2/en
Priority to BR112015001949-8A priority patent/BR112015001949B1/en
Priority to CN201380040754.3A priority patent/CN104718749A/en
Priority to MX2015001292A priority patent/MX2015001292A/en
Publication of WO2014021004A1 publication Critical patent/WO2014021004A1/en
Priority to US16/286,449 priority patent/US10999635B2/en
Priority to US16/286,430 priority patent/US10778931B2/en
Priority to US16/378,081 priority patent/US10750113B2/en
Priority to US16/925,183 priority patent/US11343575B2/en
Priority to US17/563,261 priority patent/US20220124410A1/en

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person

Definitions

  • Some aspects according to the present invention relate to an image processing system, an image processing method, and a program.
  • Patent Document 1 discloses an apparatus that can appropriately perform tracking (monitoring) of a person across cameras using connection relationship information between cameras. This apparatus obtains the correspondence between persons according to the similarity of the person feature amount between a point appearing in the camera field of view (In point) and a point disappearing from the camera field of view (Out point).
  • One object of the present invention is to provide an image processing system capable of suppressing confusion related to the identification of a target person when performing person tracking.
  • One image processing system according to the present invention includes: an input unit that receives input of moving images captured by a plurality of video cameras; a registration unit that can register one or more persons appearing in the moving images input from the input unit; and a display control unit that displays the moving images input from the video cameras in a manner switchable for each person registered by the registration unit.
  • An image processing method according to the present invention is performed by an image processing system and includes a step of receiving input of moving images captured by a plurality of video cameras, a step of registering one or more persons appearing in the input moving images, and a step of displaying the moving images input from the video cameras in a manner switchable for each registered person.
  • The program according to the present invention causes a computer to execute a process of receiving input of moving images captured by a plurality of video cameras, a process of registering one or more persons appearing in the input moving images, and a process of displaying the moving images input from the video cameras in a manner switchable for each registered person.
  • In the present invention, "part", "means", "apparatus", and "system" do not simply mean physical means; they also include cases where the functions of the "part", "means", "apparatus", or "system" are realized by software. Further, the functions of one "part", "means", "apparatus", or "system" may be realized by two or more physical means or devices, and the functions of two or more "parts", "means", "apparatuses", or "systems" may be realized by a single physical means or device.
  • According to the present invention, it is possible to provide an image processing system, an image processing method, and a program capable of suppressing confusion related to the identification of a target person when performing person tracking.
  • (1. First embodiment) FIGS. 1 to 5 are diagrams for explaining the first embodiment.
  • The first embodiment will be described along the following flow with reference to these drawings.
  • "1.1" shows the functional configuration of the entire system.
  • "1.2" shows a specific example of the display screen, thereby giving an overview of the entire first embodiment.
  • "1.3" then shows the flow of processing, and "1.4" shows a specific example of a hardware configuration capable of realizing the system.
  • Finally, the effects and the like according to the present embodiment are described in "1.5" and thereafter.
  • FIG. 1 is a block diagram showing a functional configuration of the monitoring system 1.
  • the monitoring system 1 is roughly composed of an information processing server 100 and a plurality of video cameras 200 that capture moving images (video cameras 200A to 200N are collectively referred to as video cameras 200).
  • The video camera 200 captures moving images. It also determines whether or not a person appears in the captured moving images, and transmits, together with the captured moving images, information such as the position and feature amount of that person within the moving images to the information processing server 100.
  • The video camera 200 can also track a person within the captured moving images. Note that processes such as person detection, extraction of person feature amounts, and person tracking within a camera may be performed on the information processing server 100 or on another information processing apparatus (not shown).
  • the information processing server 100 performs various processes such as detection of a person, registration of a person to be tracked, and tracking of a registered person by analyzing a moving image captured by the video camera 200.
  • In the following, a case where person monitoring is performed based on real-time moving images captured by the video camera 200 is described as an example; however, the present invention is not limited to this. For example, it is also conceivable to monitor (analyze) past moving images captured by the video camera 200.
  • The information processing server 100 includes a camera control unit 110, a similarity calculation unit 120, a tracking person registration unit 130, a next camera prediction unit 140, a display screen generation unit 150, an input device 160, a display device 170, and a database (DB) 180.
  • The functions of the information processing server 100 may be realized by a plurality of devices, such as a server and a client.
  • For example, processing such as camera control (the camera control unit 110), registration of persons to be tracked (monitored) (the tracking person registration unit 130), prediction of the video camera 200 in which a tracked person will next appear (the next camera prediction unit 140), and generation of the display screen (the display screen generation unit 150) may be performed on the server side, while processing such as input by the user (monitor) (the input device 160) and output of the display screen (the display device 170) may be performed on the client side.
  • Various ways of dividing the processing between server and client are conceivable.
  • The camera control unit 110 controls the video cameras 200. More specifically, based on user instructions or the like input from the input device 160, it transmits commands to the video cameras 200 for zooming in and out, changing the shooting direction up, down, left, or right, and so on. It also registers the moving images and person detection information received from the video cameras 200 in the DB 180 as the captured moving image 181 and the detected person information 183.
  • The similarity calculation unit 120 detects monitored persons by calculating the similarity between a person shown in the moving image input from the video camera 200 and a person registered in the person tracking information 185. At this time, the similarity calculation unit 120 selects, from a plurality of person images related to each registered person (person images of the same person at a plurality of timings), a person image whose posture is similar to that of the person in the moving image input from the video camera 200, and then calculates the similarity. This makes it possible to improve the accuracy of the similarity calculation.
  • Here, a similar posture specifically means a state that can be determined to be the same or similar in terms of, for example, whether the person is facing front, back, right, or left, whether the person is bending, and whether the person overlaps another person (that is, the parameters for discriminating these states are close).
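  • As one illustration of this posture-aware matching, the following is a minimal sketch in Python. The posture labels, the feature vectors, and the use of cosine similarity are assumptions for illustration only; the embodiment does not prescribe a particular feature amount or distance measure.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two person feature vectors (illustrative choice)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_to_registered_person(query_feature, query_posture, registered_images):
    """Compare a query person only against registered person images whose
    posture label (e.g. 'front', 'back', 'left', 'right', 'bending',
    'overlapping') matches, as described above; fall back to all images
    when no posture-matched image exists."""
    if not registered_images:
        return 0.0
    candidates = [img for img in registered_images
                  if img["posture"] == query_posture]
    if not candidates:
        candidates = registered_images
    return max(cosine_similarity(query_feature, img["feature"])
               for img in candidates)
```

  • Here each entry of registered_images is assumed to be a dict with a "feature" vector and a "posture" label for one timing of the same person.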
  • The tracking person registration unit 130 registers a person shown in the captured moving images input from the video cameras 200 in the person tracking information 185 of the DB 180 as a monitored person (monitoring target / tracking target), based on a user instruction or the like input from the input device 160. When the user determines that a person shown in the captured moving images input from the video cameras 200 is the same person as a person already registered in the person tracking information 185, the tracking person registration unit 130 can also register that person in the person tracking information 185 accordingly.
  • The next camera prediction unit 140 predicts in which video camera 200 a person currently or previously shown in a certain video camera 200 will appear next.
  • Various prediction methods are conceivable. For example, the prediction may be calculated based on the installation distances between the video cameras 200, the structure of the building, the walking speed of the person, and so on, or it may be made probabilistically by statistical processing of information such as how much time elapsed before a tracked person appeared in which video camera 200 in the past. A sketch of the statistical approach is shown below.
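  • The following Python sketch illustrates only the second, statistical idea: counting past camera-to-camera transitions and their elapsed times, then ranking candidate next cameras by empirical probability. The class and method names are hypothetical and not taken from the embodiment.

```python
from collections import defaultdict

class NextCameraPredictor:
    """Probabilistic next-camera prediction from past transitions."""

    def __init__(self):
        # (from_camera, to_camera) -> list of observed elapsed seconds
        self.transitions = defaultdict(list)

    def record(self, from_camera: int, to_camera: int, elapsed_s: float) -> None:
        """Record that a tracked person left from_camera and appeared
        in to_camera after elapsed_s seconds."""
        self.transitions[(from_camera, to_camera)].append(elapsed_s)

    def predict(self, from_camera: int):
        """Return (camera, probability, mean elapsed seconds) tuples,
        the most likely next camera first."""
        counts = {to: len(ts) for (frm, to), ts in self.transitions.items()
                  if frm == from_camera}
        total = sum(counts.values())
        if total == 0:
            return []
        results = [(to, n / total,
                    sum(self.transitions[(from_camera, to)]) / n)
                   for to, n in counts.items()]
        return sorted(results, key=lambda r: r[1], reverse=True)
```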
  • The display screen generation unit 150 generates display screens, such as those illustrated in FIGS. 2 and 3 described later, to be displayed on the display device 170.
  • On the generated display screen, a window 21 is displayed for each person to be tracked, and the windows 21 can be switched by the tabs 25.
  • In each window 21, a moving image of the video camera 200 in which the person to be tracked is shown, or is predicted to appear in the near future, is arranged.
  • The display screen generation unit 150 also causes the display device 170 to display a GUI (Graphical User Interface) with which the user can register whether to newly register a person shown in the moving image as a person to be tracked, whether to associate the person with an already registered person, and so on.
  • The input device 160 is a device with which the user (monitor) inputs various kinds of information.
  • Pointing devices such as a mouse, a touch pad, or a touch panel, and a keyboard correspond to the input device 160.
  • Various processes, such as registration of a target person by the tracking person registration unit 130, association with a registered person, and switching of the tabs 25, are performed based on operations on the input device 160.
  • The display device 170 is a display, such as a liquid crystal display or an organic EL (Electro Luminescence) display, that displays images.
  • the display device 170 displays the display screen created by the display screen generation unit 150.
  • The DB 180 is constructed on various storage devices, such as an HDD (Hard Disk Drive) (not shown).
  • the DB 180 manages the captured moving image 181, the detected person information 183, and the person tracking information 185.
  • the captured moving image 181 is a moving image input from the video camera 200.
  • Note that portions of the captured moving image 181 for which a certain period of time has elapsed after shooting, or portions in which it can be determined that no person appears, may be deleted.
  • The detected person information 183 is information such as the feature amount of a person detected by the video camera 200, the shooting time within the captured moving image 181, and a person image.
  • The person tracking information 185 is information on the persons, among those detected as the detected person information 183, that have been made tracking targets by the tracking person registration unit 130.
  • When persons shown in the videos of a plurality of video cameras 200 are associated as the same person by the tracking person registration unit 130, that information is also registered in the person tracking information 185.
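  • As a concrete illustration of how the detected person information 183 and the person tracking information 185 might be represented, here is a minimal Python sketch. All field names are assumptions for illustration; the embodiment specifies only the kinds of information stored, not a schema.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedPerson:
    """One entry of the detected person information 183."""
    camera_id: int
    detected_at: float       # shooting time within the captured moving image 181
    feature: list            # person feature amount computed for this detection
    thumbnail_path: str      # person image extracted from the frame

@dataclass
class TrackedPerson:
    """One entry of the person tracking information 185: a person made a
    tracking target, with detections from multiple cameras associated as
    the same person by the tracking person registration unit 130."""
    person_id: int
    detections: list = field(default_factory=list)  # list of DetectedPerson

    def associate(self, det: DetectedPerson) -> None:
        """Register a detection as belonging to this same person."""
        self.detections.append(det)
```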
  • FIGS. 2 and 3 are diagrams illustrating specific examples of a display screen (hereinafter also referred to as a monitoring screen 20) that the display device 170 displays for person monitoring. First, FIG. 2 will be described.
  • In the example of the monitoring screen 20 in FIG. 2, the screen includes a window 21 containing moving image display areas 23A to 23D (hereinafter also collectively referred to as the moving image display areas 23) that display captured moving images input from a plurality of video cameras 200, and tabs 25A to 25C (hereinafter also collectively referred to as the tabs 25) for switching between the windows 21.
  • the moving image display area 23 arranged on the window 21 displays multi-camera images input from a plurality of video cameras 200 as described above.
  • The video of the video camera 200 displayed in each moving image display area 23 may be switched at any time. For example, it is conceivable that, after a monitored person moves out of the display area, the display is switched, in accordance with the person's movement, to the video of the video camera 200 in which the person is predicted to appear next or in which the person has actually appeared.
  • The tabs 25 are for switching between the windows 21.
  • The windows 21 that can be switched by the tabs 25 are provided for each person to be monitored. In the example of FIG. 2, windows 21 (each including its corresponding moving image display areas 23) are prepared for the three monitored persons registered by the tracking person registration unit 130, and can be switched by the tabs 25.
  • When a monitored person other than the one currently presented to the user in the window 21 (the monitored person corresponding to the person image 25A1 in the example of FIG. 2) is shown in the video of any of the video cameras 200, or is predicted by the next camera prediction unit 140 to appear in the near future, the user is prompted to switch the window 21. In the example of FIG. 2, the color of the tab 25B changes or the tab blinks to prompt the user to switch to the tab 25B.
  • FIG. 3 shows a specific example in which the window 21 has been switched by the tab 25B.
  • In FIG. 3, a person P2 who is a monitored person (the monitored person corresponding to the person image 25B1 in the example of FIG. 3) is shown in the moving image display area 23C.
  • The method by which the display screen generation unit 150 notifies the monitoring user of a change in the state of a monitored person (detection of a new monitored person, or a prediction that a monitored person will appear in the near future) is not limited to this. For example, the notification may be made by displaying a window message or by voice. Alternatively, it is conceivable that the window 21 is forcibly switched, without any user operation, upon detection of a new monitored person or upon a prediction that a monitored person will appear.
  • FIG. 4 is a flowchart showing a processing flow of the information processing server 100 according to the present embodiment.
  • Each processing step described below can be executed in an arbitrary order or in parallel as long as no contradiction arises in the processing contents, and other steps may be added between the processing steps. Further, a step described as a single step for convenience may be executed as a plurality of divided steps, and steps described as divided into a plurality of steps for convenience may be executed as a single step.
  • First, the similarity calculation unit 120 determines whether or not a monitored person other than the one related to the window 21 currently displayed on the monitoring screen 20 has been detected (S401). For example, in the example of FIG. 2, the similarity calculation unit 120 determines whether or not the monitored persons related to the person images 25B1 and 25C1 have been detected.
  • If such a person has been detected (Yes in S401), the display screen generation unit 150 prompts switching of the window 21 (switching of the tab 25), for example by changing the color of the tab 25 or blinking it (S405).
  • Even when no such person has been detected (No in S401), if the next camera prediction unit 140 predicts that a monitored person related to another window 21 will appear in the near future (for example, within 5 seconds) (Yes in S403), the display screen generation unit 150 proceeds to S405 and prompts switching of the window 21.
  • When prompting the switch, the display screen generation unit 150 highlights the tab 25 by changing its color or blinking it. Thereby, even when there are a plurality of persons to be tracked, each person can be monitored on a screen divided per person by the tabs 25, so that confusion can be prevented. A sketch of this decision flow follows.
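  • The following Python sketch mirrors the decision flow of FIG. 4 as described above (S401: another monitored person detected; S403: predicted to appear within about 5 seconds; S405: prompt the tab switch). The detector, predictor, and screen objects and their method names are hypothetical stand-ins for the units described in FIG. 1.

```python
def update_monitoring_screen(active_tab_id, monitored_persons,
                             detector, predictor, screen):
    """One pass over the monitored persons whose tab is not displayed."""
    for person in monitored_persons:
        if person.tab_id == active_tab_id:
            continue  # the currently displayed window needs no prompt
        # S401: has this other monitored person been detected?
        if detector.is_detected(person):
            screen.highlight_tab(person.tab_id)  # S405: prompt switching
            continue
        # S403: is the person predicted to appear in the near future
        # (for example, within 5 seconds)?
        if predictor.appears_within(person, seconds=5):
            screen.highlight_tab(person.tab_id)  # S405: prompt switching
```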
  • the function of the information processing server 100 can be realized by a plurality of information processing apparatuses (for example, a server and a client).
  • the information processing server 100 includes a processor 501, a memory 503, a storage device 505, an input interface (I / F) 507, a data I / F 509, a communication I / F 511, and a display device 513.
  • The processor 501 controls various processes in the information processing server 100 by executing programs stored in the memory 503. For example, the processes related to the camera control unit 110, the similarity calculation unit 120, the tracking person registration unit 130, the next camera prediction unit 140, and the display screen generation unit 150 described with reference to FIG. 1 can be realized as programs that are temporarily stored in the memory 503 and operate mainly on the processor 501.
  • the memory 503 is a storage medium such as a RAM (Random Access Memory).
  • the memory 503 temporarily stores a program code of a program executed by the processor 501 and data necessary for executing the program. For example, a stack area necessary for program execution is secured in the storage area of the memory 503.
  • the storage device 505 is a nonvolatile storage medium such as an HDD (Hard Disk Drive) or a flash memory.
  • The storage device 505 stores an operating system, various programs for realizing the camera control unit 110, the similarity calculation unit 120, the tracking person registration unit 130, the next camera prediction unit 140, and the display screen generation unit 150, and various data including the captured moving image 181, the detected person information 183, and the person tracking information 185 stored as the DB 180.
  • Programs and data stored in the storage device 505 are loaded into the memory 503 as necessary and referred to by the processor 501.
  • the input I / F 507 is a device for receiving input from the user.
  • the input device 160 described in FIG. 1 is realized by the input I / F 507.
  • Specific examples of the input I / F 507 include a keyboard, a mouse, a touch panel, and various sensors.
  • the input I / F 507 may be connected to the information processing server 100 via an interface such as USB (Universal Serial Bus).
  • the data I / F 509 is a device for inputting data from outside the information processing server 100.
  • Specific examples of the data I / F 509 include a drive device for reading data stored in various storage media.
  • the data I / F 509 may be provided outside the information processing server 100. In this case, the data I / F 509 is connected to the information processing server 100 via an interface such as a USB.
  • the communication I / F 511 is a device for performing data communication with a device external to the information processing server 100, such as a video camera 200, by wire or wireless. It is conceivable that the communication I / F 511 is provided outside the information processing server 100. In this case, the communication I / F 511 is connected to the information processing server 100 via an interface such as a USB.
  • the display device 513 is a device for displaying various kinds of information such as the monitoring screen 20, for example, and is a liquid crystal display, an organic EL (Electro-Luminescence) display, or the like.
  • the display device 513 may be provided outside the information processing server 100. In that case, the display device 513 is connected to the information processing server 100 via, for example, a display cable.
  • As described above, the information processing server 100 according to the present embodiment displays the moving images input from the plurality of video cameras 200 on windows 21 generated for each person to be monitored, and the windows 21 can each be switched by the tabs 25.
  • When a tracked person (monitored person) whose window is not currently displayed appears in the video of a video camera 200, or can be predicted to appear there in the near future, the display screen generation unit 150 highlights the corresponding tab 25 by changing its color or blinking it. Thereby, even when there are a plurality of persons to be tracked, each person can be monitored on a screen divided per person by the tabs 25, so that confusion can be prevented.
  • FIG. 6 is a block diagram illustrating a functional configuration of the monitoring apparatus 600 that is an image processing system.
  • the monitoring device 600 includes an input unit 610, a registration unit 620, and a display control unit 630.
  • the input unit 610 receives input of moving images captured by a plurality of video cameras.
  • the registration unit 620 can register one or more persons shown in the moving image input from the input unit 610.
  • the display control unit 630 displays the moving image input from the video camera in a switchable manner for each person registered by the registration unit 620.
  • With the monitoring apparatus 600 according to the present embodiment, it is possible to suppress confusion related to the identification of the target person when performing person tracking.
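  • The three-unit structure of the monitoring apparatus 600 (the input unit 610, the registration unit 620, and the display control unit 630) could be sketched in Python as follows. This is a minimal structural sketch only; the method names and internal data layout are illustrative assumptions, not part of the embodiment.

```python
class MonitoringDevice:
    """Minimal structure mirroring FIG. 6."""

    def __init__(self):
        self.registered = {}   # person_id -> registered person data
        self.frames = {}       # camera_id -> latest frame

    def receive_frame(self, camera_id, frame):
        """Input unit 610: receive moving images captured by multiple cameras."""
        self.frames[camera_id] = frame

    def register_person(self, person_id, person_data):
        """Registration unit 620: register a person appearing in the input video."""
        self.registered[person_id] = person_data

    def display_for(self, person_id):
        """Display control unit 630: display the input video switchably per
        registered person (here, simply return the frames to show for one
        registered person; a real implementation would compose a screen)."""
        if person_id not in self.registered:
            raise KeyError(f"person {person_id} is not registered")
        return self.frames
```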
  • the monitoring system 1 according to the third embodiment will be described.
  • The same components as those in the first embodiment are denoted by the same reference numerals, and descriptions thereof are omitted.
  • Descriptions of points that are the same as in the first embodiment are also omitted as appropriate.
  • In the following, the differences between the monitoring system 1 according to the third embodiment and the monitoring system 1 according to the first embodiment will be mainly described.
  • The functional configuration of the monitoring system 1 according to the present embodiment is basically the same as that of the first embodiment illustrated in FIG. 1, but differs in the display screen that the display screen generation unit 150 generates and the display device 170 displays.
  • FIG. 7 shows a specific example of a display screen generated by the display screen generation unit 150 according to the present embodiment.
  • In each window 31 of the display screen generated by the display screen generation unit 150 according to the present embodiment, a moving image of the video camera 200 in which the tracking target person is shown, or is predicted to appear in the near future, is arranged together with a time chart image.
  • A specific example of the display screen displayed by the display device 170 in the present embodiment will be described with reference to FIGS. 7 and 8.
  • On this display screen, a time chart image for person monitoring is arranged (hereinafter, this display screen is also referred to as the monitoring screen 30).
  • The monitoring screen 30 differs from the monitoring screen 20 according to FIG. 2 in that it has moving image display areas 33A to 33D (hereinafter collectively referred to as the moving image display areas 33) in place of the moving image display areas 23, and in that the time chart image, rather than a moving image, is arranged in the moving image display area 33D.
  • The time chart image shows in which time zone of which camera the person P2 corresponding to the tab 25B was detected.
  • FIG. 8 is an enlarged view of the time chart image displayed in the moving image display area 33D.
  • T1 is a time chart showing in which time zone of each camera the person corresponding to the tab of the currently displayed window is detected.
  • The numbers in the left column of the figure represent camera numbers.
  • T2 represents the time axis. For example, assume that the interval between scale marks on T2 is 5 seconds and that the leftmost scale mark currently represents 10:00:00.
  • FIG. 8 displays the detection status of the person P2 corresponding to the tab 25B.
  • T1 shows that the person P2 was shown in the camera 1 from 10:00:05 to 10:00:10.
  • T1 also shows that the person P2 was shown in the camera 2 from 10:00:00 to 10:00:10.
  • T3 is a knob for sliding the whole of T1 and T2 to the right or to the left (toward the future or the past).
  • T4 is a set of buttons for selecting which monitoring images are displayed in the moving image display areas 33A to 33C. In the examples of FIGS. 7 and 8, the monitoring images corresponding to the cameras 2, 4, and 6 are displayed in the moving image display areas 33A, 33B, and 33C.
  • the button switching process at T4 is performed based on an operation on the input device 160. For example, when the input device 160 is a mouse, the display image may be switched by placing the cursor on the corresponding button and clicking. Alternatively, when the input device 160 is a touch panel, the display image may be switched by the user directly touching the button.
  • The display screen generation unit 150 may generate the time chart image using the similarity calculation unit 120, the tracking person registration unit 130, the next camera prediction unit 140, the captured moving image 181, the detected person information 183, the person tracking information 185, and the like.
  • The display screen generation unit 150 may fill in the time slots of T1 in which the corresponding person was detected.
  • The display screen generation unit 150 may display the detection status of the current time zone in the second column from the right, and may display the status predicted by the next camera prediction unit 140 in the rightmost column.
  • the display screen generation unit 150 may display T1 in the moving image display area 33D so as to flow from right to left in real time.
  • the display screen generation unit 150 may generate T1 not for real-time captured moving images but for accumulated past captured moving images (offline moving image data).
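  • A minimal Python sketch of building the rows of such a time chart from detection records is shown below. The data shape (a list of (camera_id, begin, end) detections) and the 5-second slot width are assumptions matching the example above, not a prescribed format.

```python
def build_time_chart(detections, start_time, slot_s=5, n_slots=12):
    """For each camera, mark which fixed-width time slots (slot_s seconds
    wide, matching the example where one scale mark is 5 seconds) contain
    a detection of the monitored person.

    `detections` is a list of (camera_id, t_begin, t_end) in seconds."""
    chart = {}
    for camera_id, t0, t1 in detections:
        row = chart.setdefault(camera_id, [False] * n_slots)
        for slot in range(n_slots):
            s0 = start_time + slot * slot_s
            s1 = s0 + slot_s
            if t0 < s1 and t1 > s0:  # the detection overlaps this slot
                row[slot] = True
    return chart

# With start_time at 10:00:00 (36000 s), a detection of P2 on camera 1
# from 10:00:05 to 10:00:10 fills exactly the second slot of that row.
chart = build_time_chart([(1, 36005, 36010), (2, 36000, 36010)], 36000)
```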
  • As described above, in the present embodiment, the display screen generation unit 150 arranges the time chart image on the display screen. This makes it possible to confirm at a glance in which time zone and by which camera the person to be monitored was detected.
  • The display screen generation unit 150 according to the present embodiment also displays buttons that can switch which camera's monitoring image is displayed in each moving image display area 33. Thereby, the user can arbitrarily switch the displayed images while checking the time chart.
  • The functional configuration of the monitoring system 1 according to the present embodiment is basically the same as that of the first embodiment illustrated in FIG. 1, but differs in the display screen that the display screen generation unit 150 generates and the display device 170 displays.
  • FIG. 9 shows a specific example of a display screen generated by the display screen generation unit 150 according to the present embodiment.
  • In each window 41 of the display screen (hereinafter also referred to as the monitoring screen 40) generated by the display screen generation unit 150 according to the present embodiment, a map image is arranged together with a moving image of the video camera 200 in which the tracking target person is shown or is predicted to appear in the near future.
  • The monitoring screen 40 differs from the monitoring screen 20 in that it has moving image display areas 43A to 43D (hereinafter also collectively referred to as the moving image display areas 43) in place of the moving image display areas 23, and in that the map image, rather than a moving image, is arranged in the moving image display area 43D.
  • The map image shows the trajectory of the person corresponding to the tab 25A and the trajectory of the person corresponding to the tab 25B.
  • the display screen generation unit 150 changes the map image in real time according to the detection and tracking results.
  • The display screen generation unit 150 may display the color of each tab and the color of the corresponding trajectory on the map image in similar colors for each person to be monitored.
  • For example, the color of the tab 25A and the color of the trajectory of the person corresponding to the tab 25A in the map image may be red, and the color of the tab 25B and the color of the trajectory of the person corresponding to the tab 25B may be blue.
  • In the example of FIG. 9, the person corresponding to the tab 25C is not detected on the map image; the color of the tab 25C and the color of the trajectory of the person corresponding to the tab 25C in the map image may be, for example, yellow.
  • Further, the display screen generation unit 150 may notify the user of a person detection by surrounding the person on the moving image with a rectangle of the corresponding similar color.
  • As described above, in the present embodiment, the display screen generation unit 150 arranges the map image on the display screen and displays the movement trajectory of each person to be monitored. Thereby, the user can confirm the movement trajectory of a monitored person at a glance.
  • Further, the display screen generation unit 150 displays the tab color and the trajectory color on the map image in similar colors for each person to be monitored. Thereby, the user can more easily confirm the movement trajectories of a plurality of monitored persons at a glance. A sketch of such consistent color assignment follows.
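  • The following Python sketch illustrates keeping one color per monitored person and reusing it for the tab, the map trajectory, and the rectangle around the person. The palette and the screen drawing API are hypothetical; the embodiment only requires that the colors be similar per person.

```python
PALETTE = ["red", "blue", "yellow", "green", "purple"]

class PersonColors:
    """Assign each monitored person one color, reused everywhere."""

    def __init__(self):
        self.assigned = {}

    def color_of(self, person_id):
        if person_id not in self.assigned:
            self.assigned[person_id] = PALETTE[len(self.assigned) % len(PALETTE)]
        return self.assigned[person_id]

def draw_person(screen, person_id, colors, bbox=None, trajectory=None):
    """Use the same color for every visual element of one person
    (`screen` is a hypothetical drawing surface)."""
    color = colors.color_of(person_id)
    screen.set_tab_color(person_id, color)
    if bbox is not None:
        screen.draw_rectangle(bbox, color=color)       # frame around the person
    if trajectory is not None:
        screen.draw_polyline(trajectory, color=color)  # locus on the map image
```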
  • FIG. 10 is a diagram for explaining the fifth embodiment.
  • the monitoring system 1 displays a monitoring screen (in the example of FIG. 10, a display screen similar to the monitoring screen 20 according to the first embodiment) on the mobile terminal 1000.
  • the mobile terminal 1000 includes a notebook computer, a tablet terminal, a PDA, a mobile phone, a smartphone, a portable game machine, and the like.
  • the display of the mobile terminal 1000 is configured with a touch panel.
  • the monitoring system 1 realizes at least the functions of the input device 160 and the display device 170 on the mobile terminal 1000 among the functions of the information processing server 100 illustrated in FIG.
  • By linking the information processing server 100 and the mobile terminal 1000, it is possible to realize the same functions as in the first embodiment.
  • For example, when the user traces the display with a finger from the bottom of the screen toward the top (in the direction of the arrow in the figure), the monitoring system 1 may realize a user interface that switches the display to the window corresponding to the next monitored person in the order of the tabs.
  • Alternatively, the tab may be switched by directly touching the image area of the tab. A sketch of the gesture handling is shown below.
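  • A minimal Python sketch of such bottom-to-top swipe handling, under the assumption that an upward swipe advances to the next tab in order; the coordinate convention and the 100-pixel threshold are illustrative values only.

```python
def on_touch_gesture(start_xy, end_xy, n_tabs, current_index):
    """Return the tab index to display after a touch gesture.

    Screen coordinates are assumed to grow downward, so a swipe from the
    bottom of the screen toward the top has a negative vertical delta."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    if dy < -100 and abs(dy) > abs(dx):  # mostly-vertical upward swipe
        return (current_index + 1) % n_tabs  # next monitored person's window
    return current_index  # not a recognized swipe; keep the current tab
```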
  • By having a guard or the like carry the mobile terminal 1000, the tracking status of monitored persons can be confirmed even outdoors, for example.
  • (Appendix 1) An image processing system comprising: input means for receiving input of moving images captured by a plurality of video cameras; registration means capable of registering one or more persons appearing in the moving images input from the input means; and display control means for displaying the moving images input from the video cameras in a manner switchable for each person registered by the registration means.
  • (Appendix 2) The image processing system according to appendix 1, wherein the display control means displays the moving images input from the video cameras in such a manner that a window associated with each registered person can be switched.
  • (Appendix 3) The image processing system according to appendix 2, wherein the display control means prompts switching of the window when a person registered by the registration means is shown in a moving image input from the video camera.
  • (Appendix 4) The image processing system according to appendix 2 or appendix 3, wherein the display control means prompts switching of the window when it can be predicted that a person registered by the registration means will appear in the moving image input from the video camera.
  • An image processing system comprising: input means for receiving input of a moving image captured by a video camera; and display control means for displaying the moving image input from the video camera in a manner switchable for each registered person, wherein the display control means displays, for each registered person, information on the video camera that captured the person and the time at which the person was captured.
  • An image processing system comprising: input means for receiving input of a moving image captured by a video camera; and display control means for displaying the moving image input from the video camera in a manner switchable for each registered person, wherein the display control means displays map information showing the trajectory of movement of each registered person.
  • (Appendix 10) The image processing method according to appendix 9, wherein the moving image input from the video camera is displayed in such a manner that a window associated with each registered person can be switched.
  • (Appendix 11) The image processing method according to appendix 10, wherein switching of the window is prompted when a registered person is shown in a moving image input from the video camera.
  • (Appendix 12) The image processing method according to appendix 10 or appendix 11, wherein switching of the window is prompted when it can be predicted that a registered person will appear in the moving image input from the video camera.
  • (Appendix 13) The image processing method according to appendix 10 or appendix 11, wherein a plurality of moving images input from the input means are arranged on the window.
  • An image processing method in which an image processing system performs a step of receiving input of a moving image captured by a video camera and a step of displaying the moving image input from the video camera in a manner switchable for each registered person, and in which, for each registered person, information on the video camera that captured the person and the time at which the person was captured is displayed.
  • An image processing method in which an image processing system performs a step of receiving input of a moving image captured by a video camera and a step of displaying the moving image input from the video camera in a manner switchable for each registered person, and in which map information showing the trajectory of movement of each registered person is displayed.
  • (Appendix 17) A program that causes a computer to execute: a process of receiving input of moving images captured by a plurality of video cameras; a process of registering one or more persons appearing in the input moving images; and a process of displaying the moving images input from the video cameras in a manner switchable for each registered person.
  • (Appendix 18) The program according to appendix 17, wherein the moving image input from the video camera is displayed in such a manner that a window associated with each registered person can be switched.
  • (Appendix 19) The program according to appendix 18, which prompts switching of the window when a registered person is shown in a moving image input from the video camera.
  • (Appendix 20) The program according to appendix 18 or appendix 19, which prompts switching of the window when it can be predicted that a registered person will appear in the moving image input from the video camera.
  • (Appendix 21) The program according to appendix 18 or appendix 19, wherein a plurality of moving images input by the input process are arranged on the window.
  • (Appendix 22) A program that causes a computer to execute a process of receiving input of a moving image captured by a video camera and a process of displaying the moving image input from the video camera in a manner switchable for each registered person, and that displays, for each registered person, information on the video camera that captured the person and the time at which the person was captured.
  • (Appendix 23) A program that causes a computer to execute a process of receiving input of a moving image captured by a video camera and a process of displaying the moving image input from the video camera in a manner switchable for each registered person, and that displays map information showing the trajectory of movement of each registered person.
  • (Appendix 24) The program according to any one of appendix 17 to appendix 22, which displays information on a mobile terminal having a touch panel as an interface.
  • DESCRIPTION OF SYMBOLS: 1 ... monitoring system, 20 ... monitoring screen, 21 ... window, 23A, 23B, 23C, 23D ... moving image display area, 25A, 25B, 25C ... tab, 25A1, 25B1, 25C1 ... person image, 100 ... information processing server, 110 ... camera control unit, 120 ... similarity calculation unit, 130 ... tracking person registration unit, 140 ... next camera prediction unit, 150 ... display screen generation unit, 160 ... input device, 170 ... display device, 180 ... database (DB), 181 ... captured moving image, 183 ... detected person information, 185 ... person tracking information, 200 ... video camera, P1, P2 ... person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are an image processing system, an image processing method and a program, whereby it is possible to suppress confusion associated with the identification of a person of interest when tracking people. The image processing system is provided with: a camera control unit (110) that receives the input of moving images captured by a plurality of video cameras (200); a tracked person registration unit (130) that is capable of registering one or more person(s) appearing in the moving images input via the camera control unit (110); and a display screen generation unit (150) that displays in a manner such that the moving images input via the video cameras (200) can be switched for each of the persons registered in the tracked person registration unit (130).

Description

Image processing system, image processing method, and program
Some aspects according to the present invention relate to an image processing system, an image processing method, and a program.

In recent years, systems that perform wide-area monitoring using video from a plurality of cameras have been considered. For example, Patent Document 1 discloses an apparatus that can appropriately track (monitor) a person across cameras using information on the connection relationships between the cameras. This apparatus obtains the correspondence between persons according to the similarity of person feature amounts between the point at which a person appears in a camera's field of view (In point) and the point at which a person disappears from the camera's field of view (Out point).

[Patent Document 1] JP 2008-219570 A
When the correspondence between persons is determined automatically according to similarity, as in the apparatus described in Patent Document 1, errors occur with a certain probability. For this reason, persons are sometimes associated with one another in a manner that involves a human operator.

However, if the degree of human involvement in monitoring is increased, the work is likely to become complicated. In particular, when there are a plurality of persons to be monitored, it becomes difficult for the user to grasp which person an operation is being performed on, and the person-specifying operation tends to become complicated.

Some aspects of the present invention have been made in view of the above problems, and one object thereof is to provide an image processing system, an image processing method, and a program capable of suppressing confusion related to the identification of a target person when performing person tracking.
An image processing system according to the present invention includes: input means for receiving input of moving images captured by a plurality of video cameras; registration means capable of registering one or more persons appearing in the moving images input from the input means; and display control means for displaying the moving images input from the video cameras in a manner switchable for each person registered by the registration means.

An image processing method according to the present invention is performed by an image processing system and includes a step of receiving input of moving images captured by a plurality of video cameras, a step of registering one or more persons appearing in the input moving images, and a step of displaying the moving images input from the video cameras in a manner switchable for each registered person.

A program according to the present invention causes a computer to execute a process of receiving input of moving images captured by a plurality of video cameras, a process of registering one or more persons appearing in the input moving images, and a process of displaying the moving images input from the video cameras in a manner switchable for each registered person.
In the present invention, "part", "means", "apparatus", and "system" do not simply mean physical means; they also include cases where the functions of the "part", "means", "apparatus", or "system" are realized by software. Further, the functions of one "part", "means", "apparatus", or "system" may be realized by two or more physical means or devices, and the functions of two or more "parts", "means", "apparatuses", or "systems" may be realized by a single physical means or device.

According to the present invention, it is possible to provide an image processing system, an image processing method, and a program capable of suppressing confusion related to the identification of a target person when performing person tracking.
Brief description of the drawings:
A functional block diagram showing a schematic configuration of the monitoring system according to the first embodiment.
A diagram showing a specific example of a display screen.
A diagram showing a specific example of a display screen.
A flowchart showing the flow of processing of the information processing server shown in FIG. 1.
A block diagram showing a functional configuration of hardware capable of implementing the information processing server shown in FIG. 1.
A functional block diagram showing a schematic configuration of the monitoring apparatus according to the second embodiment.
A diagram showing a specific example of a display screen according to the third embodiment.
A diagram showing a specific example of a display screen according to the fourth embodiment.
A diagram showing a specific example of a display screen according to the fourth embodiment.
A diagram showing a specific example of the appearance of the mobile terminal according to the fifth embodiment.
Embodiments of the present invention will be described below. In the following description and in the drawings referred to, the same or similar components are denoted by the same or similar reference numerals.
(1. First Embodiment)
FIGS. 1 to 5 are diagrams for explaining the first embodiment. Hereinafter, the first embodiment will be described along the following flow with reference to these drawings. First, "1.1" shows the functional configuration of the entire system, and "1.2" shows a specific example of the display screen, thereby giving an overview of the entire first embodiment. Then, "1.3" shows the flow of processing, and "1.4" shows a specific example of a hardware configuration capable of realizing the system. Finally, the effects and the like according to the present embodiment are described in "1.5" and thereafter.
(1.1 Functional Configuration)
The functional configuration of the monitoring system 1, which is an information processing system according to the present embodiment, will be described with reference to FIG. 1. FIG. 1 is a block diagram showing the functional configuration of the monitoring system 1.
The monitoring system 1 is roughly composed of an information processing server 100 and a plurality of video cameras 200 that capture moving images (the video cameras 200A to 200N are collectively referred to as the video cameras 200).
The video camera 200 captures moving images. It also determines whether or not a person appears in the captured moving images, and transmits, together with the captured moving images, information such as the position and feature amount of that person within the moving images to the information processing server 100. The video camera 200 can also track a person within the captured moving images.

Note that processes such as person detection, extraction of person feature amounts, and person tracking within a camera may be performed, for example, on the information processing server 100 or on another information processing apparatus (not shown).
The information processing server 100 performs various processes such as detecting persons, registering persons to be tracked, and tracking registered persons by analyzing the moving images captured by the video cameras 200.

In the following, a case where person monitoring is performed based on real-time moving images captured by the video cameras 200 is described as an example; however, the present invention is not limited to this, and it is also conceivable, for example, to monitor (analyze) past moving images captured by the video cameras 200.
The information processing server 100 includes a camera control unit 110, a similarity calculation unit 120, a tracking person registration unit 130, a next camera prediction unit 140, a display screen generation unit 150, an input device 160, a display device 170, and a database (DB) 180.

The functions of the information processing server 100 may be realized by a plurality of devices, such as a server and a client. For example, processing such as camera control (the camera control unit 110), registration of persons to be tracked (monitored) (the tracking person registration unit 130), prediction of the video camera 200 in which a tracked person will next appear (the next camera prediction unit 140), and generation of the display screen (the display screen generation unit 150) may be performed on the server side, while processing such as input by the user (monitor) (the input device 160) and output of the display screen (the display device 170) may be performed on the client side. Various ways of dividing the processing between server and client are conceivable.
The camera control unit 110 controls the video cameras 200. More specifically, based on user instructions or the like input from the input device 160, it transmits commands to the video cameras 200 for zooming in and out, changing the shooting direction up, down, left, or right, and so on. It also registers the moving images and person detection information received from the video cameras 200 in the DB 180 as the captured moving image 181 and the detected person information 183.
The similarity calculation unit 120 detects monitored persons by calculating the similarity between a person shown in the moving image input from the video camera 200 and a person registered in the person tracking information 185. At this time, the similarity calculation unit 120 selects, from a plurality of person images related to each registered person (person images of the same person at a plurality of timings), a person image whose posture is similar to that of the person in the moving image input from the video camera 200, and then calculates the similarity. This makes it possible to improve the accuracy of the similarity calculation.

Here, a similar posture specifically means a state that can be determined to be the same or similar in terms of, for example, whether the person is facing front, back, right, or left, whether the person is bending, and whether the person overlaps another person (that is, the parameters for discriminating these states are close).
The tracking person registration unit 130 registers a person shown in the captured moving images input from the video cameras 200 in the person tracking information 185 of the DB 180 as a monitored person (monitoring target / tracking target), based on a user instruction or the like input from the input device 160. When the user determines that a person shown in the captured moving images input from the video cameras 200 is the same person as a person already registered in the person tracking information 185, the tracking person registration unit 130 can also register that person in the person tracking information 185 accordingly.
The next camera prediction unit 140 predicts in which video camera 200 a person currently or previously shown in a certain video camera 200 will appear next. Various prediction methods are conceivable: for example, the prediction may be calculated based on the installation distances between the video cameras 200, the structure of the building, the walking speed of the person, and so on, or it may be made probabilistically by statistical processing of information such as how much time elapsed before a tracked person appeared in which video camera 200 in the past.
The display screen generation unit 150 generates display screens, such as those illustrated in FIGS. 2 and 3 described later, to be displayed on the display device 170. On the display screens generated by the display screen generation unit 150, a window 21 is displayed for each person to be tracked, and the windows 21 can be switched by the tabs 25. In each window 21, a moving image of the video camera 200 in which the tracked person is shown, or is predicted to appear in the near future, is arranged. The display screen generation unit 150 also causes the display device 170 to display a GUI (Graphical User Interface) with which the user can register whether to newly register a person shown in the moving image as a person to be tracked, whether to associate the person with an already registered person, and so on.
 The input device 160 is a device through which the user (the observer) inputs various kinds of information. For example, pointing devices such as a mouse, touch pad, or touch panel, and keyboards correspond to the input device 160. The various processes described above, such as registration of a target person by the tracking person registration unit 130, association with a registered person, and switching of the tabs 25, are performed based on operations on the input device 160.
 The display device 170 is a display, such as a liquid crystal display or an organic EL (Electro Luminescence) display, on which images are shown. The display device 170 displays the display screens created by the display screen generation unit 150.
 The DB 180 is constructed on a storage device (not shown) such as an HDD (Hard Disk Drive). The DB 180 manages the captured moving images 181, the detected person information 183, and the person tracking information 185.
 The captured moving images 181 are the moving images input from the video cameras 200. Portions of the captured moving images 181 may be deleted, for example once a certain time has elapsed after shooting, or where it can be determined that no person appears.
 The detected person information 183 is information such as the feature values of persons detected by the video cameras 200, the shooting times within the captured moving images 181, and person images.
 The person tracking information 185 is information on those persons, among the persons detected as the detected person information 183, that the tracking person registration unit 130 has designated as tracking targets. When persons appearing in the videos of multiple video cameras 200 are associated as the same person by the tracking person registration unit 130, that information is also registered in the person tracking information 185.
 (1.2 Specific Examples of the Display Screen)
 Hereinafter, specific examples of the display screens shown on the display device 170 will be described with reference to FIGS. 2 and 3. FIGS. 2 and 3 illustrate specific examples of a display screen (hereinafter also referred to as the monitoring screen 20) that the display device 170 displays for person monitoring. First, FIG. 2 will be described.
 The example of the monitoring screen 20 in FIG. 2 includes a window 21 containing moving image display areas 23A to 23D (hereinafter also collectively referred to as the moving image display areas 23), which display the captured moving images input from the plurality of video cameras 200, and tabs 25A to 25C (hereinafter also collectively referred to as the tabs 25) for switching the window 21.
 The moving image display areas 23 arranged on the window 21 display the multi-camera videos input from the plurality of video cameras 200, as described above. The video camera 200 whose video is shown in each moving image display area 23 may be switched at any time. For example, after the monitored person leaves a display area, the display may switch, following the person's movement, to the video of the video camera 200 on which the person is predicted to appear next, or on which the person has actually appeared.
 The tabs 25 are for switching the window 21. A window 21 switchable by the tabs 25 is provided for each person to be monitored. In the example of FIG. 2, a window 21 (including the corresponding moving image display areas 23) is set for each of the three monitored persons registered by the tracking person registration unit 130, and these windows can be switched by the tabs 25.
 Person images (thumbnails) 25A1 to 25C1 corresponding to the monitored persons are arranged on the tabs 25. This allows the user acting as the observer to recognize which monitored person the window 21 switchable by each tab 25 corresponds to. In the example of FIG. 2, the person P1 corresponding to the person image 25A1 appears in the moving image display area 23D.
 If a monitored person other than the one currently presented to the user in the window 21 (in the example of FIG. 2, the monitored person corresponding to the person image 25A1) is detected in the moving images (videos) from any of the video cameras 200, or if the next camera prediction unit 140 predicts that such a person will appear in the near future, switching of the window 21 is prompted. In the example of FIG. 2, the color of the tab 25B changes or the tab blinks, prompting the user to switch to the tab 25B.
 FIG. 3 shows a specific example in which the window 21 has been switched by the tab 25B. As shown in FIG. 3, the person P2, who is a monitored person (in the example of FIG. 3, the monitored person corresponding to the person image 25B1), appears in the moving image display area 23C.
 In the example of FIG. 2, the display screen generation unit 150 notifies the observing user of a state change concerning a monitored person (detection of a new monitored person, or a prediction that a monitored person will be detected in the near future) by changing the color of the tab 25 or making it blink, but the notification is not limited to this. For example, the notification may be given by displaying a window message, by sound, or the like.
 Alternatively, upon detection of a new monitored person, or upon a prediction of such detection, the window 21 may be switched forcibly, without any user operation.
 (1.3 Flow of Processing)
 Next, the flow of processing of the information processing server 100 will be described with reference to FIG. 4. FIG. 4 is a flowchart showing the flow of processing of the information processing server 100 according to the present embodiment.
 The processing steps described below can be executed in an arbitrarily changed order or in parallel to the extent that no contradiction arises in the processing contents, and other steps may be added between the processing steps. Further, a step described as a single step for convenience may be executed divided into multiple steps, and steps described as multiple steps for convenience may be executed as a single step.
 The similarity calculation unit 120 determines whether a monitored person (person to be monitored) other than the one associated with the window 21 currently displayed on the monitoring screen 20 has been detected (S401). In the example of FIG. 2, for instance, the similarity calculation unit 120 determines whether the monitored persons corresponding to the person images 25B1 and 25C1 have been detected.
 As a result, if a monitored person associated with a window 21 that is not being displayed is detected (Yes in S401), the display screen generation unit 150 prompts switching of the window 21 (switching of the tab 25), for example by changing the color of the tab 25, lighting it up, or the like (S405).
 Even if no such monitored person has been detected (No in S401), when the next camera prediction unit 140 predicts that a monitored person associated with another window 21 will appear in the near future (for example, within five seconds) (Yes in S403), the display screen generation unit 150 proceeds to S405 and prompts switching of the window 21.
 In this way, when a tracked person (monitored person) who is not on the main display appears in the video of a video camera 200, or is predicted to be likely to appear in the near future, the display screen generation unit 150 highlights the corresponding tab 25 by changing its color or making it blink. Thus, even when there are multiple persons to be tracked, each person can be monitored on a screen separated per person by the tabs 25, which prevents confusion.
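 The branching of S401, S403, and S405 can be condensed into a short sketch like the one below. The `detector` and `predictor` objects stand in for the similarity calculation unit 120 and the next camera prediction unit 140; their method names and the stub classes are assumptions for illustration.

```python
PREDICTION_WINDOW_SECONDS = 5  # the "near future" threshold used in S403

def update_monitoring_screen(active_tab, tabs, detector, predictor):
    """One pass over the flow of FIG. 4 (S401 -> S403 -> S405).
    `tabs` maps a tab id to the monitored person of its window."""
    for tab_id, person in tabs.items():
        if tab_id == active_tab:
            continue  # only windows other than the displayed one matter
        detected = detector.is_detected(person)            # S401
        predicted = (not detected and                      # S403
                     predictor.appears_within(person, PREDICTION_WINDOW_SECONDS))
        if detected or predicted:
            highlight_tab(tab_id)                          # S405

def highlight_tab(tab_id):
    """Prompt switching, e.g. by changing the tab's color or blinking it."""
    print(f"highlight tab {tab_id}")

# Minimal stand-ins so the sketch runs end to end.
class StubDetector:
    def is_detected(self, person):
        return person == "P2"

class StubPredictor:
    def appears_within(self, person, seconds):
        return False

update_monitoring_screen("25A", {"25A": "P1", "25B": "P2", "25C": "P3"},
                         StubDetector(), StubPredictor())
# -> highlight tab 25B
```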
 (1.4 Specific Example of the Hardware Configuration)
 Hereinafter, an example of the hardware configuration of the information processing server 100 described above will be explained with reference to FIG. 5. As noted earlier, the functions of the information processing server 100 can also be realized by a plurality of information processing apparatuses (for example, a server and a client).
 As shown in FIG. 5, the information processing server 100 includes a processor 501, a memory 503, a storage device 505, an input interface (I/F) 507, a data I/F 509, a communication I/F 511, and a display device 513.
 The processor 501 controls various kinds of processing in the information processing server 100 by executing programs stored in the memory 503. For example, the processes of the camera control unit 110, the similarity calculation unit 120, the tracking person registration unit 130, the next camera prediction unit 140, and the display screen generation unit 150 described with reference to FIG. 1 can be realized as programs that are temporarily stored in the memory 503 and then run mainly on the processor 501.
 The memory 503 is a storage medium such as a RAM (Random Access Memory). The memory 503 temporarily stores the program code of the programs executed by the processor 501 and the data needed while those programs run. For example, a stack area required during program execution is reserved in the storage area of the memory 503.
 The storage device 505 is a non-volatile storage medium such as an HDD (Hard Disk Drive) or a flash memory. The storage device 505 stores the operating system, the various programs for realizing the camera control unit 110, the similarity calculation unit 120, the tracking person registration unit 130, the next camera prediction unit 140, and the display screen generation unit 150, and various data, including the captured moving images 181, the detected person information 183, and the person tracking information 185 stored as the DB 180. The programs and data stored in the storage device 505 are referenced by the processor 501 after being loaded into the memory 503 as necessary.
 The input I/F 507 is a device for receiving input from the user. The input device 160 described with reference to FIG. 1 is realized by the input I/F 507. Specific examples of the input I/F 507 include a keyboard, a mouse, a touch panel, and various sensors. The input I/F 507 may be connected to the information processing server 100 via an interface such as USB (Universal Serial Bus).
 The data I/F 509 is a device for inputting data from outside the information processing server 100. Specific examples of the data I/F 509 include drive devices for reading data stored on various storage media. The data I/F 509 may also be provided outside the information processing server 100, in which case it is connected to the information processing server 100 via an interface such as USB.
 The communication I/F 511 is a device for wired or wireless data communication with devices external to the information processing server 100, such as the video cameras 200. The communication I/F 511 may also be provided outside the information processing server 100, in which case it is connected to the information processing server 100 via an interface such as USB.
 The display device 513 is a device for displaying various kinds of information, such as the monitoring screen 20, and is, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display. The display device 513 may be provided outside the information processing server 100, in which case it is connected to the information processing server 100 via, for example, a display cable.
 (1.5 Effects of the Present Embodiment)
 As described above, the information processing server 100 according to the present embodiment displays the moving images input from the plurality of video cameras 200 in windows 21 generated for each person to be monitored, and the windows 21 can each be switched by the tabs 25. When a tracked person (monitored person) who is not on the main display appears in the video of a video camera 200, or is predicted to be likely to appear in the near future, the display screen generation unit 150 highlights the corresponding tab 25 by changing its color or making it blink. Thus, even when there are multiple persons to be tracked, each person can be monitored on a screen separated per person by the tabs 25, which prevents confusion.
 (2 Second Embodiment)
 The second embodiment will be described below with reference to FIG. 6. FIG. 6 is a block diagram illustrating the functional configuration of a monitoring device 600, which is an image processing system. As shown in FIG. 6, the monitoring device 600 includes an input unit 610, a registration unit 620, and a display control unit 630.
 The input unit 610 receives input of moving images captured by a plurality of video cameras. The registration unit 620 can register one or more persons appearing in the moving images input via the input unit 610. The display control unit 630 displays the moving images input from the video cameras in a switchable manner for each person registered by the registration unit 620.
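 In outline, and with method names that are purely illustrative, this division of roles might be skeletonized as follows; the sketch fixes only the composition of the three units, not any particular implementation.

```python
class MonitoringDevice:
    """Skeleton of the monitoring device 600: an input unit 610, a
    registration unit 620, and a display control unit 630."""

    def __init__(self):
        self.persons = []          # state of the registration unit 620
        self.active_person = None  # person whose window is displayed

    def register(self, person):
        """Registration unit 620: register a person appearing in the video."""
        self.persons.append(person)
        if self.active_person is None:
            self.active_person = person

    def switch_to(self, person):
        """Display control unit 630: switch to that person's window."""
        if person in self.persons:
            self.active_person = person

    def on_frame(self, camera_id, frame):
        """Input unit 610: accept a frame from one of the cameras and hand
        it to the display control, which renders it in the window of the
        currently selected person."""
        if self.active_person is not None:
            print(f"window[{self.active_person}] <- camera {camera_id} frame")

device = MonitoringDevice()
device.register("P1")
device.register("P2")
device.on_frame(1, frame=None)   # rendered in P1's window
device.switch_to("P2")
device.on_frame(1, frame=None)   # now rendered in P2's window
```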
 With this implementation, the monitoring device 600 according to the present embodiment can suppress confusion in identifying the target person when tracking persons.
 (3 Third Embodiment)
 Next, the monitoring system 1 according to the third embodiment will be described. In the following description, components identical to those of the first embodiment are given the same reference signs and their description is omitted. Descriptions of effects identical to those of the first embodiment are likewise omitted where appropriate; the same applies to the fourth and fifth embodiments.
 The description below focuses on the differences between the monitoring system 1 according to the third embodiment and the monitoring system 1 according to the first embodiment.
 The functional configuration of the monitoring system 1 according to the present embodiment is basically the same as that of the first embodiment shown in FIG. 1, but differs in the display screen that the display screen generation unit 150 generates and causes the display device 170 to display. FIG. 7 shows a specific example of a display screen generated by the display screen generation unit 150 according to the present embodiment.
 In the example of the display screen shown in FIG. 7, each window 31 of the display screen generated by the display screen generation unit 150 according to the present embodiment contains both the moving images of the video cameras 200 in which the tracked person appears, or is predicted to appear in the near future, and a time chart image.
 Hereinafter, specific examples of the display screen shown on the display device 170 in the present embodiment will be described with reference to FIGS. 7 and 8. In the specific example of the display screen in FIG. 7, a time chart image for person monitoring is arranged (this display screen is hereinafter also referred to as the monitoring screen 30).
 As shown in FIG. 7, the monitoring screen 30 differs from the monitoring screen 20 shown in FIG. 2 in that the moving image display areas 23 are replaced by moving image display areas 33A to 33D (hereinafter also collectively referred to as the moving image display areas 33). In particular, it differs in that a time chart image, rather than a moving image, is arranged in the moving image display area 33D. The time chart image shows in which time slots of each camera the person P2 corresponding to the tab 25B was detected.
 FIG. 8 is an enlarged view of the time chart image displayed in the moving image display area 33D. T1 is a time chart showing in which time slots of each camera the person corresponding to the tab of the currently displayed window was detected. In this time chart, the numbers in the left column represent camera numbers, and T2 represents the time axis. Suppose, for example, that the interval between the tick marks of T2 is 5 seconds and that the current leftmost tick represents 10:00:00. If FIG. 8 shows the detection status of the person P2 corresponding to the tab 25B, then T1 indicates that the person P2 appeared on camera 1 from 10:00:05 to 10:00:10, and on camera 2 from 10:00:00 to 10:00:10.
 T3 is a slider knob for sliding the whole of T1 and T2 rightward or leftward (toward the future or the past). T4 is a set of buttons for selecting the monitoring images displayed in the moving image display areas 33A to 33C of FIG. 7. In the examples of FIGS. 7 and 8, the monitoring images corresponding to cameras 2, 4, and 6 are displayed in the moving image display areas 33A, 33B, and 33C. The button switching in T4 is performed based on operations on the input device 160. For example, when the input device 160 is a mouse, the displayed images may be switched by placing the cursor on the relevant button and clicking; when the input device 160 is a touch panel, they may be switched by the user touching the button directly.
 As described in the first embodiment, the display screen generation unit 150 according to the present embodiment may generate the time chart image using the similarity calculation unit 120, the tracking person registration unit 130, the next camera prediction unit 140, the captured moving images 181, the detected person information 183, the person tracking information 185, and the like.
 For example, when the similarity calculation unit 120 detects, in the input captured moving images 181, a person registered by the tracking person registration unit 130 or a person registered in advance, the display screen generation unit 150 may fill in the corresponding time slot of T1 for that person.
 Further, in T1 of FIG. 8, for example, the display screen generation unit 150 may display the detection status of the current time slot in the second column from the right, and display in the rightmost column the status predicted by the next camera prediction unit 140. In that case, the display screen generation unit 150 may render T1 in the moving image display area 33D so that it flows from right to left moment by moment in real time. Alternatively, the display screen generation unit 150 may generate T1 from accumulated past captured moving images (offline moving image data) rather than from real-time captured moving images.
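 As a sketch of how T1 might be assembled from detection results, the following quantizes per-camera detection intervals into the 5-second columns described above. The interval representation is an assumption made for the example.

```python
def build_time_chart(detections, start_time, columns, tick_seconds=5):
    """Quantize detection intervals into the grid shown as T1.

    `detections` maps a camera number to a list of (begin, end) times in
    seconds; the result maps each camera to one boolean per column,
    True where the person was detected on that camera in that slot."""
    chart = {}
    for camera, intervals in detections.items():
        row = []
        for col in range(columns):
            cell_begin = start_time + col * tick_seconds
            cell_end = cell_begin + tick_seconds
            row.append(any(b < cell_end and e > cell_begin
                           for b, e in intervals))
        chart[camera] = row
    return chart

# Person P2 from the example above: on camera 1 from 10:00:05 to 10:00:10,
# on camera 2 from 10:00:00 to 10:00:10 (times in seconds after 10:00:00).
chart = build_time_chart({1: [(5, 10)], 2: [(0, 10)]}, start_time=0, columns=4)
print(chart)  # {1: [False, True, False, False], 2: [True, True, False, False]}
```

 For the real-time variant, the same grid would simply be rebuilt (or shifted left by one column) every `tick_seconds`, with the rightmost column filled from the next camera prediction unit 140 instead of from detections.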
 As described above, the display screen generation unit 150 according to the present embodiment arranges a time chart image on the display screen. This allows the user to confirm at a glance on which camera, and in which time slot, the person to be monitored was detected.
 The display screen generation unit 150 according to the present embodiment also displays buttons for switching which cameras' monitoring images are shown in the moving image display areas 33. This allows the user to freely switch the displayed images while checking the time chart.
 (4 Fourth Embodiment)
 Next, the monitoring system 1 according to the fourth embodiment will be described. The functional configuration of the monitoring system 1 according to the present embodiment is basically the same as that of the first embodiment shown in FIG. 1, but differs in the display screen that the display screen generation unit 150 generates and causes the display device 170 to display. FIG. 9 shows a specific example of a display screen generated by the display screen generation unit 150 according to the present embodiment.
 As shown in FIG. 9, each window 41 of the display screen generated by the display screen generation unit 150 according to the present embodiment (hereinafter also referred to as the monitoring screen 40) contains a map image together with the moving images of the video cameras 200 in which the tracked person appears, or is predicted to appear in the near future.
 Comparing the monitoring screen 40 with the monitoring screen 20 shown in FIG. 2, the difference is that the moving image display areas 23 are replaced by moving image display areas 43A to 43D (hereinafter also collectively referred to as the moving image display areas 43). The largest difference is that a map image, rather than a moving image, is arranged in the moving image display area 43D. The map image shows the trajectories of the person corresponding to the tab 25A and of the person corresponding to the tab 25B.
 The display screen generation unit 150 according to the present embodiment updates the map image in real time according to the detection and tracking results. For each person to be monitored, the display screen generation unit 150 may display the color of the tab and the color of that person's trajectory on the map image in colors of the same family. For example, the tab 25A and the trajectory of the corresponding person on the map image may be shown in reddish colors, and the tab 25B and the trajectory of the corresponding person in bluish colors. In the example of FIG. 9, the person corresponding to the tab 25C has not been detected on the map image, but the tab 25C and the trajectory of the corresponding person could likewise be shown in yellowish colors.
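 The color pairing described here reduces to a consistent mapping from each monitored person to one color family, reused for the tab, for the trajectory on the map, and, as noted in the next paragraph, for the rectangle drawn around a detected person. A minimal sketch with an arbitrary example palette:

```python
from itertools import cycle

# Arbitrary example palette: one color family per monitored person.
COLOR_FAMILIES = cycle([
    {"tab": "#cc0000", "trail": "#ff6666"},  # red family
    {"tab": "#0000cc", "trail": "#6666ff"},  # blue family
    {"tab": "#cccc00", "trail": "#ffff66"},  # yellow family
])

person_colors = {}

def colors_for(person_id):
    """Assign each person a color family once, then reuse it everywhere
    that person is drawn (tab, map trajectory, detection rectangle)."""
    if person_id not in person_colors:
        person_colors[person_id] = next(COLOR_FAMILIES)
    return person_colors[person_id]

print(colors_for("P1")["tab"])    # "#cc0000" for the first registered person
print(colors_for("P2")["trail"])  # "#6666ff" for the second
```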
 Further, when the similarity calculation unit 120 detects a person, the display screen generation unit 150 may notify the user of the detection by enclosing the person in the moving image with a rectangle of the corresponding color family.
 As described above, the display screen generation unit 150 according to the present embodiment arranges a map image on the display screen and displays the movement trajectories of the persons to be monitored. This allows the user to confirm the movement trajectory of a monitored person at a glance.
 Further, the display screen generation unit 150 according to the present embodiment displays, for each person to be monitored, the tab color and the trajectory color on the map image in colors of the same family. This allows the user to grasp the movement trajectories of multiple monitored persons at a glance, more easily.
 (5 Fifth Embodiment)
 FIG. 10 is a diagram for explaining the fifth embodiment. As shown in FIG. 10, the monitoring system 1 according to the present embodiment displays a monitoring screen (in the example of FIG. 10, a display screen similar to the monitoring screen 20 according to the first embodiment) on a mobile terminal 1000. Here, the mobile terminal 1000 includes notebook computers, tablet terminals, PDAs, mobile phones, smartphones, portable game machines, and the like. In the present embodiment, the display of the mobile terminal 1000 is configured as a touch panel.
 The monitoring system 1 according to the present embodiment realizes, on the mobile terminal 1000, at least the functions of the input device 160 and the display device 170 among the functions of the information processing server 100 shown in FIG. 1. By connecting the information processing server 100 and the mobile terminal 1000 by wireless communication such as a wireless LAN (Local Area Network) as appropriate, the two can cooperate, so that the same functions as in the first embodiment can be realized.
 By using the mobile terminal as the input device 160 and the display device 170 as shown in FIG. 10, the monitoring system 1 can, for example, realize a user interface in which the user strokes the display from the bottom of the screen toward the top (in the direction of the arrow in the figure) to switch the display, in tab order, from the window of a lower monitored person to that of an upper monitored person. Alternatively, the tabs may be switched by touching the image area of a tab directly.
 According to the present embodiment, since the monitoring screen is displayed on a mobile terminal, a security guard carrying the terminal can, for example, check the tracking status of monitored persons outdoors.
 (6 Appendix)
 The configurations of the embodiments described above may be combined, and some of their components may be replaced. The configuration of the present invention is not limited to the embodiments described above, and various modifications may be made without departing from the gist of the present invention.
 Part or all of the embodiments described above can also be described as in the following appendixes, but are not limited to the following.
 (Appendix 1)
 An image processing system comprising: input means for receiving input of moving images captured by a plurality of video cameras; registration means capable of registering one or more persons appearing in the moving images input from the input means; and display control means for displaying the moving images input from the video cameras in a switchable manner for each person registered by the registration means.
 (Appendix 2)
 The image processing system according to appendix 1, wherein the display control means displays the moving images input from the video cameras in windows each associated with a person registered by the registration means, the windows being switchable.
 (Appendix 3)
 The image processing system according to appendix 2, wherein the display control means prompts switching of the window when a person registered by the registration means is captured in a moving image input from the video cameras.
 (Appendix 4)
 The image processing system according to appendix 2 or 3, wherein the display control means prompts switching of the window when it can be predicted that a person registered by the registration means will appear in a moving image input from the video cameras.
 (Appendix 5)
 The image processing system according to appendix 2 or 3, wherein a plurality of the moving images input from the input means are arranged on each window.
 (Appendix 6)
 An image processing system comprising: input means for receiving input of moving images captured by video cameras; and display control means for displaying the moving images input from the video cameras in a switchable manner for each registered person, wherein the display control means displays, for each registered person, information on the video cameras that captured that person and the times of capture.
 (Appendix 7)
 An image processing system comprising: input means for receiving input of moving images captured by video cameras; and display control means for displaying the moving images input from the video cameras in a switchable manner for each registered person, wherein the display control means displays map information representing the movement trajectory of each registered person.
 (Appendix 8)
 The image processing system according to any one of appendixes 1 to 7, wherein the display control means displays information on a mobile terminal having a touch panel as its interface.
 (Appendix 9)
 An image processing method performed by an image processing system, comprising the steps of: receiving input of moving images captured by a plurality of video cameras; registering one or more persons appearing in the input moving images; and displaying the moving images input from the video cameras in a switchable manner for each registered person.
 (Appendix 10)
 The image processing method according to appendix 9, wherein the moving images input from the video cameras are displayed in windows each associated with a registered person, the windows being switchable.
 (Appendix 11)
 The image processing method according to appendix 10, wherein switching of the window is prompted when a registered person is captured in a moving image input from the video cameras.
 (Appendix 12)
 The image processing method according to appendix 10 or 11, wherein switching of the window is prompted when it can be predicted that a registered person will appear in a moving image input from the video cameras.
 (Appendix 13)
 The image processing method according to appendix 10 or 11, wherein a plurality of the input moving images are arranged on each window.
 (Appendix 14)
 An image processing method in which an image processing system performs the steps of: receiving input of moving images captured by video cameras; and displaying the moving images input from the video cameras in a switchable manner for each registered person, wherein the image processing system displays, for each registered person, information on the video cameras that captured that person and the times of capture.
 (Appendix 15)
 An image processing method in which an image processing system performs the steps of: receiving input of moving images captured by video cameras; and displaying the moving images input from the video cameras in a switchable manner for each registered person, wherein the image processing system displays map information representing the movement trajectory of each registered person.
 (Appendix 16)
 The image processing method according to any one of appendixes 9 to 15, wherein information is displayed on a mobile terminal having a touch panel as its interface.
 (Appendix 17)
 A program causing a computer to execute: a process of receiving input of moving images captured by a plurality of video cameras; a process of registering one or more persons appearing in the input moving images; and a process of displaying the moving images input from the video cameras in a switchable manner for each registered person.
 (Appendix 18)
 The program according to appendix 17, wherein the moving images input from the video cameras are displayed in windows each associated with a registered person, the windows being switchable.
 (Appendix 19)
 The program according to appendix 18, wherein switching of the window is prompted when a registered person is captured in a moving image input from the video cameras.
 (Appendix 20)
 The program according to appendix 18 or 19, wherein switching of the window is prompted when it can be predicted that a registered person will appear in a moving image input from the video cameras.
 (Appendix 21)
 The program according to appendix 18 or 19, wherein a plurality of the input moving images are arranged on each window.
 (Appendix 22)
 A program causing a computer to execute: a process of receiving input of moving images captured by video cameras; and a process of displaying the moving images input from the video cameras in a switchable manner for each registered person, the program displaying, for each registered person, information on the video cameras that captured that person and the times of capture.
 (Appendix 23)
 A program causing a computer to execute: a process of receiving input of moving images captured by video cameras; and a process of displaying the moving images input from the video cameras in a switchable manner for each registered person, the program displaying map information representing the movement trajectory of each registered person.
 (Appendix 24)
 The program according to any one of appendixes 17 to 22, wherein information is displayed on a mobile terminal having a touch panel as its interface.
 This application claims priority based on Japanese Patent Application No. 2012-170406 filed on July 31, 2012, the entire disclosure of which is incorporated herein.
 Reference signs: 1: monitoring system; 20: monitoring screen; 21: window; 23A, 23B, 23C, 23D: moving image display area; 25A, 25B, 25C: tab; 25A1, 25B1, 25C1: person image; 100: information processing server; 110: camera control unit; 120: similarity calculation unit; 130: tracking person registration unit; 140: next camera prediction unit; 150: display screen generation unit; 160: input device; 170: display device; 180: database (DB); 181: captured moving image; 183: detected person information; 185: person tracking information; 200: video camera; P1, P2: person

Claims (7)

  1.  An image processing system comprising:
     input means for receiving input of moving images captured by a plurality of video cameras;
     registration means capable of registering one or more persons appearing in the moving images input from the input means; and
     display control means for displaying the moving images input from the video cameras in a switchable manner for each person registered by the registration means.
  2.  The image processing system according to claim 1, wherein the display control means displays the moving images input from the video cameras in windows each associated with a person registered by the registration means, the windows being switchable.
  3.  The image processing system according to claim 2, wherein the display control means prompts switching of the window when a person registered by the registration means is captured in a moving image input from the video cameras.
  4.  The image processing system according to claim 2 or 3, wherein the display control means prompts switching of the window when it can be predicted that a person registered by the registration means will appear in a moving image input from the video cameras.
  5.  The image processing system according to claim 2 or 3, wherein a plurality of the moving images input from the input means are arranged on each window.
  6.  An image processing method performed by an image processing system, comprising the steps of:
     receiving input of moving images captured by a plurality of video cameras;
     registering one or more persons appearing in the input moving images; and
     displaying the moving images input from the video cameras in a switchable manner for each registered person.
  7.  A program causing a computer to execute:
     a process of receiving input of moving images captured by a plurality of video cameras;
     a process of registering one or more persons appearing in the input moving images; and
     a process of displaying the moving images input from the video cameras in a switchable manner for each registered person.
PCT/JP2013/066565 2012-07-31 2013-06-17 Image processing system, image processing method, and program WO2014021004A1 (en)

Priority Applications (12)

Application Number Priority Date Filing Date Title
CN201380040754.3A CN104718749A (en) 2012-07-31 2013-06-17 Image processing system, image processing method, and program
RU2015106938A RU2015106938A (en) 2012-07-31 2013-06-17 IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD AND PROGRAM
US14/416,716 US10841528B2 (en) 2012-07-31 2013-06-17 Systems, methods and apparatuses for tracking persons by processing images
JP2014528039A JP6332833B2 (en) 2012-07-31 2013-06-17 Image processing system, image processing method, and program
BR112015001949-8A BR112015001949B1 (en) 2012-07-31 2013-06-17 IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND STORAGE MEDIA
SG11201500693QA SG11201500693QA (en) 2012-07-31 2013-06-17 Image processing system, image processing method, and program
MX2015001292A MX2015001292A (en) 2012-07-31 2013-06-17 Image processing system, image processing method, and program.
US16/286,430 US10778931B2 (en) 2012-07-31 2019-02-26 Image processing system, image processing method, and program
US16/286,449 US10999635B2 (en) 2012-07-31 2019-02-26 Image processing system, image processing method, and program
US16/378,081 US10750113B2 (en) 2012-07-31 2019-04-08 Image processing system, image processing method, and program
US16/925,183 US11343575B2 (en) 2012-07-31 2020-07-09 Image processing system, image processing method, and program
US17/563,261 US20220124410A1 (en) 2012-07-31 2021-12-28 Image processing system, image processing method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012170406 2012-07-31
JP2012-170406 2012-07-31

Related Child Applications (4)

Application Number Title Priority Date Filing Date
US14/416,716 A-371-Of-International US10841528B2 (en) 2012-07-31 2013-06-17 Systems, methods and apparatuses for tracking persons by processing images
US16/286,430 Continuation US10778931B2 (en) 2012-07-31 2019-02-26 Image processing system, image processing method, and program
US16/286,449 Continuation US10999635B2 (en) 2012-07-31 2019-02-26 Image processing system, image processing method, and program
US16/378,081 Continuation US10750113B2 (en) 2012-07-31 2019-04-08 Image processing system, image processing method, and program

Publications (1)

Publication Number Publication Date
WO2014021004A1 true WO2014021004A1 (en) 2014-02-06

Family

ID=50027694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/066565 WO2014021004A1 (en) 2012-07-31 2013-06-17 Image processing system, image processing method, and program

Country Status (10)

Country Link
US (6) US10841528B2 (en)
JP (1) JP6332833B2 (en)
CN (1) CN104718749A (en)
AR (1) AR091912A1 (en)
BR (1) BR112015001949B1 (en)
MX (1) MX2015001292A (en)
MY (1) MY171395A (en)
RU (1) RU2015106938A (en)
SG (1) SG11201500693QA (en)
WO (1) WO2014021004A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017046192A (en) * 2015-08-26 2017-03-02 株式会社リコー Information processing device, program, and information processing method
US10699422B2 (en) 2016-03-18 2020-06-30 Nec Corporation Information processing apparatus, control method, and program
US20180176512A1 (en) * 2016-10-26 2018-06-21 Ring Inc. Customizable intrusion zones associated with security systems
JP6873740B2 (en) * 2017-02-24 2021-05-19 ソニー・オリンパスメディカルソリューションズ株式会社 Endoscope device and edge detection method
EP3379471A1 (en) * 2017-03-21 2018-09-26 Canon Kabushiki Kaisha Image processing apparatus, method of controlling image processing apparatus, and storage medium
CN110798653B (en) * 2018-08-01 2021-09-14 华为技术有限公司 Image processing method and video monitoring system based on multi-machine cooperation
JP6573346B1 (en) * 2018-09-20 2019-09-11 パナソニック株式会社 Person search system and person search method
CN110944109B (en) * 2018-09-21 2022-01-14 华为技术有限公司 Photographing method, device and equipment
WO2021033703A1 (en) * 2019-08-22 2021-02-25 日本電気株式会社 Display control device, display control method, program, and display control system
CN112579593A (en) * 2019-09-30 2021-03-30 华为技术有限公司 Population database sorting method and device
US11954941B2 (en) * 2020-01-14 2024-04-09 EMC IP Holding Company LLC Facial recognition IOT application for multi-stream video using forecasting tools
JP7064642B1 (en) * 2021-07-14 2022-05-10 日本コンピュータビジョン株式会社 Information processing equipment, information processing methods and information processing programs

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08221592A (en) * 1995-02-16 1996-08-30 Matsushita Electric Ind Co Ltd Interactive information providing device
JP2001236514A (en) * 2000-02-24 2001-08-31 Nippon Hoso Kyokai <Nhk> Automatic figure index generator

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7304662B1 (en) * 1996-07-10 2007-12-04 Visilinx Inc. Video surveillance system and method
US6061055A (en) 1997-03-21 2000-05-09 Autodesk, Inc. Method of tracking objects with an imaging device
US20030025599A1 (en) * 2001-05-11 2003-02-06 Monroe David A. Method and apparatus for collecting, sending, archiving and retrieving motion video and still images and notification of detected events
US6720990B1 (en) * 1998-12-28 2004-04-13 Walker Digital, Llc Internet surveillance system and method
JP4672104B2 (en) 2000-04-26 2011-04-20 パナソニック株式会社 Digital image recording / playback device for monitoring
JP4195991B2 (en) * 2003-06-18 2008-12-17 パナソニック株式会社 Surveillance video monitoring system, surveillance video generation method, and surveillance video monitoring server
US8174572B2 (en) * 2005-03-25 2012-05-08 Sensormatic Electronics, LLC Intelligent camera selection and object tracking
US7843491B2 (en) 2005-04-05 2010-11-30 3Vr Security, Inc. Monitoring and presenting video surveillance data
US9036028B2 (en) * 2005-09-02 2015-05-19 Sensormatic Electronics, LLC Object tracking and alerts
JP2007241377A (en) 2006-03-06 2007-09-20 Sony Corp Retrieval system, imaging apparatus, data storage device, information processor, picked-up image processing method, information processing method, and program
JP4881766B2 (en) 2007-03-06 2012-02-22 パナソニック株式会社 Inter-camera link relation information generation device
JP4933354B2 (en) 2007-06-08 2012-05-16 キヤノン株式会社 Information processing apparatus and information processing method
US20090153654A1 (en) 2007-12-18 2009-06-18 Enge Amy D Video customized to include person-of-interest
US8601494B2 (en) 2008-01-14 2013-12-03 International Business Machines Corporation Multi-event type monitoring and searching
DK2260646T3 (en) * 2008-03-28 2019-04-23 On Net Surveillance Systems Inc METHOD AND SYSTEMS FOR VIDEO COLLECTION AND ANALYSIS THEREOF
JP5027759B2 (en) 2008-08-19 2012-09-19 本田技研工業株式会社 Visual support device for vehicle
MX2012009579A (en) * 2010-02-19 2012-10-01 Toshiba Kk Moving object tracking system and moving object tracking method.
US9252897B2 (en) * 2010-11-10 2016-02-02 Verizon Patent And Licensing Inc. Multi-feed event viewing
GB2485969A (en) * 2010-11-12 2012-06-06 Sony Corp Video surveillance with anticipated arrival time of object in another camera view
US8908034B2 (en) 2011-01-23 2014-12-09 James Bordonaro Surveillance systems and methods to monitor, recognize, track objects and unusual activities in real time within user defined boundaries in an area
CN102170560A (en) * 2011-02-25 2011-08-31 李兆全 Radio-frequency-identification (RFID)-based closed circuit television system and monitoring method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015154465A (en) * 2014-02-19 2015-08-24 キヤノン株式会社 Display control device, display control method, and program
CN107431786A (en) * 2015-03-16 2017-12-01 佳能株式会社 Image processing equipment, image processing system, image processing method and computer program
US10572736B2 (en) 2015-03-16 2020-02-25 Canon Kabushiki Kaisha Image processing apparatus, image processing system, method for image processing, and computer program
JP2016184229A (en) * 2015-03-25 2016-10-20 東芝テック株式会社 Demand prediction device and program
US11532160B2 (en) 2016-11-07 2022-12-20 Nec Corporation Information processing apparatus, control method, and program
JP2020191645A (en) * 2020-07-16 2020-11-26 日本電気株式会社 Information processing device, control method, and program
JP7052833B2 (en) 2020-07-16 2022-04-12 日本電気株式会社 Information processing equipment, control methods, and programs
JP2022033600A (en) * 2020-08-17 2022-03-02 横河電機株式会社 Device, system, method, and program
US11657515B2 (en) 2020-08-17 2023-05-23 Yokogawa Electric Corporation Device, method and storage medium
JP7415848B2 (en) 2020-08-17 2024-01-17 横河電機株式会社 Apparatus, system, method and program
WO2022154387A1 (en) * 2021-01-13 2022-07-21 삼성전자 주식회사 Electronic device and operation method therefor

Also Published As

Publication number Publication date
US20150208015A1 (en) 2015-07-23
US20200344436A1 (en) 2020-10-29
US10778931B2 (en) 2020-09-15
BR112015001949B1 (en) 2023-04-25
MY171395A (en) 2019-10-10
US20190199957A1 (en) 2019-06-27
JPWO2014021004A1 (en) 2016-07-21
US10841528B2 (en) 2020-11-17
US10750113B2 (en) 2020-08-18
JP6332833B2 (en) 2018-05-30
US10999635B2 (en) 2021-05-04
AR091912A1 (en) 2015-03-11
US20190238786A1 (en) 2019-08-01
US11343575B2 (en) 2022-05-24
US20220124410A1 (en) 2022-04-21
CN104718749A (en) 2015-06-17
SG11201500693QA (en) 2015-04-29
MX2015001292A (en) 2015-04-08
BR112015001949A2 (en) 2017-07-04
RU2015106938A (en) 2016-09-20
US20190199956A1 (en) 2019-06-27

Similar Documents

Publication Publication Date Title
JP6332833B2 (en) Image processing system, image processing method, and program
US9870684B2 (en) Information processing apparatus, information processing method, program, and information processing system for achieving a surveillance camera system
JP6210234B2 (en) Image processing system, image processing method, and program
WO2014050432A1 (en) Information processing system, information processing method and program
US9298987B2 (en) Information processing apparatus, information processing method, program, and information processing system
JP6347211B2 (en) Information processing system, information processing method, and program
US10623659B2 (en) Image processing system, image processing method, and program
JPWO2014045670A1 (en) Image processing system, image processing method, and program
JP2014170367A (en) Object detection device, object detection method, object detection system and program
KR20210012634A (en) Computer device to communicate with network system including plurality of cameras and method of operating thereof

Legal Events

Code  Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number 13825914; country of ref document: EP; kind code: A1)
WWE   WIPO information: entry into national phase (ref document number 14416716; country of ref document: US)
ENP   Entry into the national phase (ref document number 2014528039; country of ref document: JP; kind code: A)
WWE   WIPO information: entry into national phase (ref document number MX/A/2015/001292; country of ref document: MX)
NENP  Non-entry into the national phase (ref country code: DE)
ENP   Entry into the national phase (ref document number 2015106938; country of ref document: RU; kind code: A)
REG   Reference to national code (ref country code: BR; ref legal event code: B01A; ref document number 112015001949)
122   EP: PCT application non-entry in European phase (ref document number 13825914; country of ref document: EP; kind code: A1)
ENP   Entry into the national phase (ref document number 112015001949; country of ref document: BR; kind code: A2; effective date: 20150128)