WO2014041912A1 - Image processing system, image processing method, and program - Google Patents
- Publication number
- WO2014041912A1 (PCT/JP2013/070697)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- person
- video
- image
- camera
- predicted
- Prior art date
Classifications
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06F18/22—Matching criteria, e.g. proximity measures
- G06T7/292—Multi-camera tracking
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
- Some aspects of the present invention relate to an image processing system, an image processing method, and a program.
- Patent Document 1 discloses an apparatus that can track (monitor) a person across cameras by using information on the connection relationships between the cameras. This apparatus associates persons according to the similarity of person feature amounts between the point at which a person appears in a camera's field of view (In point) and the point at which a person disappears from a camera's field of view (Out point).
- Some aspects of the present invention have been made in view of the above problems, and one of their objects is to provide an image processing system, an image processing method, and a program capable of suppressing errors in associating persons appearing in moving images.
- An image processing system according to the present invention includes input means for receiving input of moving images captured by a plurality of video cameras, prediction means for predicting the video camera in which an object detected in the input moving images will appear next, and display control means that notifies how easily the object may be confused, according to its similarity to other objects that may appear in the moving image of the video camera predicted by the prediction means, and displays the moving image from the predicted video camera on a display device.
- An image processing method according to the present invention includes the steps of receiving input of moving images captured by a plurality of video cameras, predicting the video camera in which an object detected in the input moving images will appear next, notifying how easily the object may be confused, according to its similarity to other objects that may appear in the moving image of the predicted video camera, and displaying the moving image from the predicted video camera on a display device; these steps are performed by the image processing system.
- A program according to the present invention causes a computer to execute a process of receiving input of moving images captured by a plurality of video cameras, a process of predicting the video camera in which an object detected in the input moving images will appear next, a process of notifying how easily the object may be confused, according to its similarity to other objects that may appear in the moving image of the predicted video camera, and a process of displaying the moving image from the predicted video camera on a display device.
- In the present invention, “unit”, “means”, “apparatus”, and “system” do not simply mean physical means; they also include cases where the functions of the “unit”, “means”, “apparatus”, or “system” are realized by software. Furthermore, the functions of one “unit”, “means”, “apparatus”, or “system” may be realized by two or more physical means or devices, and the functions of two or more “units”, “means”, “apparatuses”, or “systems” may be realized by a single physical means or device.
- According to the present invention, it is possible to provide an image processing system, an image processing method, and a program that can suppress errors in associating persons shown in moving images.
- FIG. 1 is a block diagram showing a system configuration of the monitoring system 1.
- The monitoring system 1 is broadly composed of an information processing server 100, a plurality of video cameras 200 that capture moving images (video cameras 200A to 200N are collectively referred to as video cameras 200), a display device 300, and an input device 400.
- Although the monitoring system 1 is described below as a system for monitoring persons photographed by the video cameras 200, the monitoring target is not limited to persons; it may be any moving object (object/moving body) such as a car or a motorcycle.
- Each video camera 200 captures a moving image, determines whether or not a person is present in the captured moving image, and transmits information such as the position and feature amount of that person, together with the captured moving image, to the information processing server 100.
- The video camera 200 can also track persons within the captured moving image. Note that processes such as person detection, feature amount extraction, and in-camera person tracking may instead be performed on, for example, the information processing server 100 or another information processing apparatus (not shown).
- The information processing server 100 analyzes the moving images captured by the video cameras 200 and performs various processes such as detecting persons, registering persons to be tracked, and tracking registered persons.
- Note that monitoring is not limited to real-time video. For example, it is also possible to monitor (analyze) moving images that were captured by the video cameras 200 and then stored in a storage device (for example, an HDD (Hard Disk Drive) or a VCR (Video Cassette Recorder)). Furthermore, the moving images stored in the storage device may be monitored by playing them back in reverse (reverse playback). When a person behaves suspiciously, it is usually necessary to check what actions that person took leading up to the behavior, so such a means of monitoring by reverse playback is extremely useful.
- In person monitoring by the information processing server 100, the information processing server 100 outputs a monitoring screen to the display device 300 and receives, from the input device 400, operation signals for various operation inputs related to person monitoring. More specifically, by displaying a plurality of moving images input from the video cameras 200 on the monitoring screen shown on the display device 300, the user acting as the supervisor can grasp where the person to be monitored is now.
- The user acting as the supervisor watches the display device 300, and when the person to be monitored who is shown in the video (moving image) of one video camera 200 appears in the video of another video camera 200, the user operates the input device 400 to associate the two persons as the same person. The monitoring system 1 thereby realizes highly accurate person association.
- The display device 300 is, for example, a liquid crystal or organic EL (Electro-Luminescence) display that displays images. The display device 300 displays the monitoring screen output from the information processing server 100.
- The input device 400 is a device with which the user (supervisor) inputs various kinds of information. A pointing device such as a mouse, a touch pad, or a touch panel, a keyboard, and the like correspond to the input device 400.
- Various processes such as registration of a person to be monitored and association between the registered person and a person newly appearing in the video camera 200 (association as the same person) are performed based on a user's operation on the input device 400.
- The display device 300 and the input device 400 may be realized as a single client, or the information processing server 100, the display device 300, and the input device 400 may together be realized as a single information processing apparatus. Conversely, the functions of the information processing server 100 may be realized by a plurality of information processing apparatuses.
- As described above, in person monitoring, the user acting as the supervisor watches the display device 300, and when a person to be monitored (a person registered as a monitoring target) who is shown in the video of one video camera 200 appears in the video of another video camera 200, the user operates the input device 400 to associate the two persons as the same person. However, if there are a plurality of persons with similar appearances at the location being monitored, there is a high possibility that the association will be mistaken. Therefore, in the monitoring system 1 according to the present embodiment, when there is a person who looks similar to the person to be monitored, errors in the association are suppressed by notifying the user of that fact and calling the user's attention to it.
- In the example of FIG. 2, the shooting position of “Camera 008” is assumed to be a place where a person who exits to the right of “Camera 001” or to the right of “Camera 003” can be predicted to appear next, and the appearance time is assumed to be around time t + 1.
- Here, the appearance of person X is similar to that of persons A and B (that is, their feature amounts are close; for example, the colors of their clothes are similar), while the appearance of person Y is similar only to that of person C. Since only person C has features similar to person Y, person Y and person C are likely to be the same person, and the user acting as the supervisor is unlikely to mistake that association.
- Therefore, in the monitoring system 1, when there is a person whose features are similar to those of the person to be monitored, the user is alerted to that fact in order to suppress association errors.
- A specific example of a display screen for alerting the user is described below with reference to FIGS. 3 to 5.
- In the present embodiment, the information processing server 100 predicts in which video camera 200 the person to be monitored will appear next, and displays the video from that video camera 200 on the display device 300.
- Here, videos from a plurality of video cameras 200 can be displayed on the monitoring screen of the display device 300. For example, the videos from several video cameras 200 in which the person to be monitored is likely to appear next (for example, about four may be selected in descending order of the likelihood of appearance) may be arranged on the same monitoring screen.
- FIGS. 3 to 5 are diagrams illustrating specific examples of the moving image display area 30 for the video of one video camera 200 within the monitoring screen displayed by the display device 300.
- FIG. 4 is a specific example of the moving image display area 30 displayed on the display device 300 when not only the person to be monitored but also a person who is easily confused with the person to be monitored may appear from the same door.
- In the example of FIG. 4, an image 32 that prompts the user to pay attention is arranged in the vicinity of the image 31 indicating that the person to be monitored is likely to appear. That is, the image 31 is displayed on the moving image display area 30, and when there is a high possibility that not only the person to be monitored but also a person easily confused with the person to be monitored (a person with a similar appearance, for example a similar feature amount) will appear at a time close to that of the person to be monitored, the image 32 for calling attention is displayed to inform the user and to urge the user to take sufficient care when associating a person appearing in the moving image display area 30 with the person to be monitored.
- FIG. 5 is a diagram showing a specific example when a person appears in the video of the moving image display area 30.
- In the example of FIG. 5, an image 31 indicating that the appearing person is highly likely to be the person to be monitored is arranged around that person, together with an image 32 indicating that the person is highly likely to be confused with the person to be monitored.
- As described above, the monitoring system 1 uses the image 31 to indicate that the person to be monitored is highly likely to appear (or that an appearing person is highly likely to be the person to be monitored), and uses the image 32 to notify the user that there is a high possibility of confusion with the person to be monitored.
- However, the method of alerting the user and the shapes of the image 31 and the image 32 are not limited to these. For example, a display method such as changing the color of the image 31 or making it blink (instead of displaying the image 32) may be used. The ease of confusion when a person appears may also be presented as message information such as “There are multiple persons similar to the next appearing person” instead of being presented as the image 32.
- The text to be presented may be displayed as stationary text or as scrolling text; various text presentation methods that call attention can be used. Alternatively, the degree of ease of confusion (the association confusion rate described later) may be displayed as a number, or presented with an indicator such as a bar whose length changes according to that degree; various display (notification) methods may be used, such as blinking or changing the color over time to alert the user. Furthermore, a sound for calling attention may be played together with the image 32 to draw (notify) the user's attention; various notification methods for prompting the user's attention can be used.
- The likelihood of appearance may also be expressed through display attributes: for example, the moving image display area 30 of the video in which the person to be monitored is most likely to appear may be given the darkest-colored image 31, the area of the next most likely video a lighter one, and the area of the least likely video the lightest one. Instead of color-coding, the image 31 may be made to blink, with the blinking speed changed according to the degree of likelihood. The moving image display areas 30 may also be arranged in descending order of the likelihood that the person to be monitored appears.
- In the present embodiment, if, among the persons who may appear at around the same time (a nearby time falling within a certain range), there are both a person who may be confused (a person with a high possibility of confusion) and a person who is not (a person whose possibility of confusion is sufficiently low, that is, whose confusion rate is below a sufficiently low threshold described later), it is suggested before the person appears that there is a possibility of confusion, as shown in FIG. 4.
- When a person actually appears in the video, that person may be one who is unlikely to be confused by the user (the possibility of confusion is sufficiently low). In this case, whether or not there is a possibility of confusion is determined according to the feature amount of the person who has appeared (in the method described later, the confusion rate is calculated and then compared with a threshold), and if there is no possibility of confusion, the attention-calling image 32 of FIG. 5 is not displayed. Conversely, when there are a plurality of persons who are highly likely to be confused and the appearing person is highly likely to be confused, the image 32 shown in FIG. 5 may be displayed to strongly call attention. Details of the calculation of the confusion rate indicating the possibility of confusion are described later.
- As shown in FIG. 6, the monitoring system 1 includes image acquisition units 601 (image acquisition units 601A to 601N are collectively referred to as image acquisition units 601), object detection/tracking units 610 (object detection/tracking units 610A to 610N are collectively referred to as object detection/tracking units 610), an object tracking information DB 620, a next camera prediction unit 630, camera arrangement information 640, an inter-camera association unit 650, an association confusion rate calculation unit 660, inter-camera association information 670, a display control unit 680, and the display device 300.
- Each image acquisition unit 601 acquires a captured moving image when the corresponding video camera 200 captures an actual scene, or acquires a moving image (video) that was captured by the video camera 200, recorded in a storage device such as an HDD, and then played back (in the case of a VCR, the played-back analog signal is captured).
- Here, playback means decoding encoded moving image data to generate picture data; displaying the generated result on a display screen is not included in playback.
- The playback speed need not be the actual (recorded) speed; if possible, playback (decoding) may be performed faster than real time. It is also conceivable not to decode all frames of the video but to skip frames. For example, when the video is encoded with an encoding method such as MPEG-2, the video contains I, P, and B pictures; among these, only the I pictures, or only the I and P pictures, may be decoded.
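- As one illustration of this kind of faster-than-real-time processing, the sketch below reads a recorded file and only converts every Nth grabbed frame for analysis. The file name, the skip interval, and the use of OpenCV are assumptions made for this sketch; the embodiment does not prescribe a particular decoder, container format, or skipping rule.

```python
import cv2  # assumed here; the embodiment does not mandate a specific library

def read_skipping(path="recorded.avi", keep_every=5):
    """Yield only every `keep_every`-th frame of a recorded video.

    grab() advances the stream frame by frame; retrieve() converts and
    returns only the frames that are kept, which loosely mirrors the idea
    of decoding only a subset of pictures (e.g. I or I/P pictures) so that
    analysis can run faster than real time.
    """
    cap = cv2.VideoCapture(path)
    index = 0
    while cap.grab():
        if index % keep_every == 0:
            ok, frame = cap.retrieve()
            if ok:
                yield index, frame  # hand the frame to detection/tracking
        index += 1
    cap.release()
```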
- Each object detection/tracking unit 610 includes an object detection unit 611 (object detection units 611A to 611N are collectively referred to as object detection units 611), an object tracking unit 613 (object tracking units 613A to 613N are collectively referred to as object tracking units 613), and an object feature amount extraction unit 615 (object feature amount extraction units 615A to 615N are collectively referred to as object feature amount extraction units 615).
- The object detection/tracking unit 610 detects persons as objects from the moving image acquired by the image acquisition unit 601 using the object detection unit 611, and the object feature amount extraction unit 615 calculates the feature amount of each person from the person region detected by the object detection unit 611.
- For person detection, for example, a person can be extracted by applying a detector that has learned features such as the shape of a person or a part of a person to an extracted moving-body region. As the feature amount of a person, the color of the clothes or the pattern worn by the person can be extracted in the form of a color histogram or an edge histogram.
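- As an illustration of such a feature amount, the sketch below computes a hue/saturation color histogram of a detected person region and compares two such histograms. OpenCV, the HSV color space, and the bin counts are assumptions made here for concreteness, not details fixed by the embodiment.

```python
import cv2
import numpy as np

def color_histogram(frame_bgr, box, bins=(16, 16)):
    """Clothing-color feature: a normalized hue/saturation histogram of the
    person region box = (x, y, w, h)."""
    x, y, w, h = box
    roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0, 1], None, list(bins), [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def feature_similarity(hist_a, hist_b):
    """Similarity between two clothing-color features (higher is more alike)."""
    return cv2.compareHist(hist_a.astype(np.float32),
                           hist_b.astype(np.float32),
                           cv2.HISTCMP_CORREL)
```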
- The object tracking unit 613 tracks each person extracted as an object within the same angle of view (within the same video) by comparing time-series images (frames), and generates, for each detected and tracked person, object tracking information (time-series data of the position and feature amount of the person as an object). For tracking a person between frames, for example, tracking based on the mean-shift method or tracking using a particle filter may be used.
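- The sketch below illustrates one of the options just mentioned, mean-shift tracking within a single camera view, using the hue histogram of the initially detected person region as the model. The histogram bins, the termination criteria, and the use of OpenCV are illustrative assumptions rather than requirements of the embodiment.

```python
import cv2

def track_mean_shift(first_frame, init_box, later_frames):
    """Mean-shift tracking of one person within a single camera view.

    init_box = (x, y, w, h) comes from the object detection unit on
    first_frame; the yielded windows are the per-frame positions that make
    up the object tracking information for this person.
    """
    x, y, w, h = init_box
    roi = cv2.cvtColor(first_frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = tuple(init_box)
    for frame in later_frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, window = cv2.meanShift(back_proj, window, term)
        yield window
```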
- The object tracking unit 613 stores the generated object tracking information in the object tracking information DB (database) 620 and outputs it to the next camera prediction unit 630.
- The next camera prediction unit 630 determines, from the object tracking information generated by the object tracking unit 613 and the camera arrangement information 640, in which video a person will appear next when the person goes outside the angle of view of the current video (goes out of frame), and generates next camera prediction information indicating the result.
- The camera arrangement information 640 is information describing the spatial positional relationships among the plurality of arranged cameras. Specifically, it includes information such as the adjacency relationships between cameras and the distances between cameras (or the average time required to move between cameras).
- The adjacency relationship is information indicating whether cameras are adjacent to each other and, if they are adjacent, how far apart and in which direction they are located. The adjacency information is described in association with the angle of view of each camera, so that the next camera prediction unit 630 can select an adjacent camera according to the direction in which the person goes out of frame.
- The next camera prediction information generated by the next camera prediction unit 630 includes, for each image acquisition unit 601 (for each video camera 200), the calculated appearance probability of the person, the predicted appearance position within the angle of view, and the predicted appearance time, and is generated for each person to be tracked. For example, when person A is shown on camera 01 and goes out of frame in the direction of camera 02, and the prediction is made using the average movement time between the cameras, the appearance probability can be calculated using a probability distribution that peaks at the time obtained by adding the average movement time to the frame-out time. At this time, instead of using the average movement time, the time needed to reach camera 02 may be predicted by calculating the person's movement speed before going out of frame from the tracking result of camera 01, and the probability distribution may be calculated based on that time.
- Various shapes such as a Gaussian distribution can be used for this probability distribution; in that case, information about the variation in arrival time from camera 01 to camera 02 is important. This can be obtained by measuring it in advance and storing it as data, or by learning it from the association results entered by the user. Also, if there are adjacent cameras other than camera 02, the probability of the person moving in the direction of each adjacent camera may be estimated, and the appearance probability described above may be multiplied by this value; a result measured in advance can be used for this estimation.
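- As a concrete reading of the prediction just described, the sketch below scores each adjacent camera with the product of a direction prior (the measured probability of moving toward that camera) and a Gaussian density over the predicted arrival time. The Gaussian form is one of the shapes the text allows; the example numbers and function names are assumptions made for illustration.

```python
import math

def appearance_score(elapsed, mean_travel, travel_std, direction_prob):
    """Score for "the tracked person appears in this adjacent camera around now".

    elapsed:        seconds since the person went out of frame
    mean_travel:    average travel time to this camera (camera arrangement
                    information 640, or a speed-based prediction)
    travel_std:     measured variation (spread) of the travel time
    direction_prob: prior probability of heading toward this camera
    """
    gauss = math.exp(-0.5 * ((elapsed - mean_travel) / travel_std) ** 2) \
        / (travel_std * math.sqrt(2.0 * math.pi))
    return direction_prob * gauss

# Example with made-up numbers: rank two adjacent cameras 10 s after frame-out.
candidates = {"camera_02": (12.0, 3.0, 0.7), "camera_05": (25.0, 6.0, 0.3)}
scores = {cam: appearance_score(10.0, mean, std, prob)
          for cam, (mean, std, prob) in candidates.items()}
```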
- The inter-camera association unit 650 compares the feature amounts included in the next camera prediction information with the feature amounts of persons detected in the videos of the video cameras 200 in which the person may appear next. When the distance between the feature amounts is small (or when the similarity between the feature amounts is high), the persons are associated with each other, and the association information is stored as inter-camera association information 670 in the inter-camera association information DB 670; the association is then determined based on this information.
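- A minimal sketch of this association step follows: the predicted person's feature is compared against the features of the persons detected in the next camera, and the closest one is associated only if it lies within a distance threshold. The Euclidean distance and the threshold value are illustrative assumptions; the embodiment only requires some measure of feature distance or similarity.

```python
import numpy as np

def associate(predicted_feature, detected_features, max_distance=0.35):
    """Return the index of the detected person to associate with, or None.

    predicted_feature: feature amount carried in the next camera prediction info
    detected_features: feature amounts of persons detected in the next camera
    """
    if not detected_features:
        return None
    dists = [float(np.linalg.norm(np.asarray(predicted_feature) - np.asarray(f)))
             for f in detected_features]
    best = int(np.argmin(dists))
    return best if dists[best] <= max_distance else None
```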
- The association confusion rate calculation unit 660 calculates, from the per-person next camera prediction information, the similarity between the feature amounts of objects whose predicted appearance times are close (for example, whose predicted appearance times differ by no more than a certain time). More specifically, depending on whether persons highly similar to the person to be monitored (for example, persons whose feature-amount similarity exceeds a threshold) are detected in the video of the same camera or of other video cameras 200, the association confusion rate calculation unit 660 calculates, as the association confusion rate, a measure of the possibility of an erroneous association when the person to be monitored appears in the next camera.
- When referring to the inter-camera association information DB 670, if a person who may correspond has already appeared in the next camera, the similarity of the plurality of persons associated with that person may be evaluated using the inter-camera association information 670, and the association confusion rate may be calculated according to the result. For example, when the similarity between the person to be monitored and a plurality of other persons is high, there is a high possibility that the user acting as the supervisor will be confused, so the confusion rate is set high; conversely, when the similarity to other persons is low, the user is unlikely to be confused, so the confusion rate may be set low.
- The association confusion rate can be calculated, for example, as follows. First, for the N persons other than the person to be monitored whose predicted appearance times are close, the feature amounts of these persons are compared with the feature amount of the person to be monitored and their similarities are calculated. A function F(x) is then introduced that represents the ease of confusion when the similarity is x (F(x) is a monotonically non-decreasing function of x and takes values from 0 to 1), and the association confusion rate is calculated from the values of F for these similarities. When every F value equals 1, the confusion rate reduces to the probability of selecting one of the N persons other than the person to be monitored out of the N + 1 candidates, that is, N/(N + 1); the equation used is an expansion of this special case. Alternatively, the association confusion rate may be calculated after a person has actually appeared, by comparing the feature amount of the appearing person with the feature amounts of the N persons other than the person to be monitored and then calculating the probability of association: if the similarity of the i-th person is S_i′ and P(x) is the probability of correspondence when the similarity is x, the association confusion rate is calculated from the values P(S_i′) in the same way.
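- The equations themselves are not reproduced in this text. One combination rule that is consistent with the description above (a monotonically non-decreasing F with values in [0, 1], reducing to N/(N + 1) when every F(S_i) = 1) is the following; it should be read as a reconstruction under those assumptions, not as the exact formula of the embodiment:

```latex
\text{confusion rate} \;=\; \frac{\sum_{i=1}^{N} F(S_i)}{1 + \sum_{i=1}^{N} F(S_i)},
\qquad\text{and analogously}\qquad
\frac{\sum_{i=1}^{N} P(S_i')}{1 + \sum_{i=1}^{N} P(S_i')}
```

A direct transcription of this reconstructed rule:

```python
def association_confusion_rate(similarities, f):
    """Reconstructed combination rule (an assumption; see the note above).

    similarities: S_i between the monitored person and the N other persons
                  predicted to appear at a close predicted appearance time.
    f:            monotonically non-decreasing map from similarity to
                  "ease of confusion" in [0, 1] (the F(x) of the text).
    """
    total = sum(f(s) for s in similarities)
    return total / (1.0 + total)  # equals N / (N + 1) when every f(s) == 1
```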
- The display control unit 680 generates, from the per-person next camera prediction information, the association confusion rate, and the inter-camera association information 670, the information to be presented to the user acting as the supervisor. Before the person appears, it generates an image showing in which camera video and at which position the tracked person (person to be monitored) will appear next, together with information such as how easily the person may be confused when appearing (specifically, images such as the images 31 and 32 shown in FIGS. 3 and 4). After the person appears, it generates information indicating that the person is a candidate for association (specifically, an image such as the image 31 in FIG. 5) and information presenting the ease of confusion based on the confusion rate (specifically, an image such as the image 32 in FIG. 5).
- FIG. 7 is a flowchart showing a processing flow of the information processing server 100 according to the present embodiment.
- The processing steps described below can be executed in any order or in parallel as long as no contradiction arises in the processing contents, and other steps may be added between the processing steps. Further, a step described as a single step for convenience may be executed by dividing it into a plurality of steps, and steps described as divided for convenience may be executed as a single step.
- The object detection unit 611 detects whether or not a person, as a detection target object, appears in the image acquired by the image acquisition unit 601 (S701). When a person is detected (Yes in S701), the object feature amount extraction unit 615 calculates the feature amount of that person (S703).
- The object tracking unit 613 tracks the object between frames within the same angle of view and registers the result, together with the calculated feature amount, in the object tracking information DB 620 as object tracking information (S705).
- When the person to be monitored goes outside the angle of view of the video acquired by one image acquisition unit 601, the next camera prediction unit 630 predicts in which image acquisition unit 601's video the person is likely to appear next (S707).
- The association confusion rate calculation unit 660 compares the feature amount of the person to be monitored, who is predicted to appear in the next camera, with the feature amounts of other persons predicted to appear in that camera at a predicted appearance time close to that of the person to be monitored, and calculates their similarity (S709). If there is a person whose feature-amount distance is small (whose feature-amount similarity is high; these determinations can be made, for example, by comparison with a threshold) (Yes in S711), the association confusion rate calculation unit 660 determines whether a person has already appeared in the predicted next camera (S713).
- The association confusion rate calculation unit 660 then calculates a confusion rate indicating how likely an association error is when the person to be monitored appears in the next camera predicted by the next camera prediction unit 630 (S715).
- The confusion rate is set high when a plurality of persons with similar feature amounts are likely to appear at the same or nearby predicted appearance times, and low when there is no person with a similar feature amount or when no person is predicted to appear at a nearby predicted appearance time.
- The display control unit 680 generates a display screen that indicates the predicted appearance location of the person to be monitored on the video of the next camera predicted by the next camera prediction unit 630 (for example, the image 31 shown in FIGS. 3 and 4) and, if the confusion rate for the person to be monitored is high, alerts the user not to confuse the persons (for example, the image 32 shown in FIG. 4), and displays it on the display device 300 (S719).
- As described above, the functions of the information processing server 100 can also be realized by a plurality of information processing apparatuses (for example, a server and a client).
- As shown in FIG. 8, the information processing server 100 includes a processor 801, a memory 803, a storage device 805, an input interface (I/F) 807, a data I/F 809, a communication I/F 811, and a display device 813.
- The processor 801 controls various processes in the information processing server 100 by executing programs stored in the memory 803. For example, the processes of the next camera prediction unit 630, the inter-camera association unit 650, the association confusion rate calculation unit 660, and the display control unit 680 described with reference to FIG. 6 can be realized as programs running on the processor 801.
- The memory 803 is a storage medium such as a RAM (Random Access Memory). The memory 803 temporarily stores the program code of programs executed by the processor 801 and the data necessary for executing them. For example, a stack area necessary for program execution is secured in the storage area of the memory 803.
- The storage device 805 is a non-volatile storage medium such as an HDD, a flash memory, or a VCR. The storage device 805 stores an operating system, various programs for realizing the next camera prediction unit 630, the inter-camera association unit 650, the association confusion rate calculation unit 660, and the display control unit 680, and various data including the object tracking information DB 620, the camera arrangement information 640, and the inter-camera association information DB 670. Programs and data stored in the storage device 805 are loaded into the memory 803 as necessary and referenced by the processor 801.
- The input I/F 807 is a device for receiving input from the user. The input device 400 described with reference to FIG. 1 can also be realized as the input I/F 807. Specific examples of the input I/F 807 include a keyboard, a mouse, a touch panel, and various sensors. The input I/F 807 may be connected to the information processing server 100 via an interface such as USB (Universal Serial Bus), for example.
- The data I/F 809 is a device for inputting data from outside the information processing server 100. Specific examples of the data I/F 809 include drive devices for reading data stored in various storage media. The data I/F 809 may be provided outside the information processing server 100; in that case, the data I/F 809 is connected to the information processing server 100 via an interface such as USB.
- The communication I/F 811 is a device for wired or wireless data communication with devices external to the information processing server 100, such as the video cameras 200. The communication I/F 811 may be provided outside the information processing server 100; in that case, it is connected to the information processing server 100 via an interface such as USB.
- The display device 813 is a device for displaying various kinds of information such as the monitoring screen; the display device 300 described with reference to FIG. 1 can also be realized as the display device 813. Specific examples of the display device 813 include a liquid crystal display and an organic EL (Electro-Luminescence) display. The display device 813 may be provided outside the information processing server 100; in that case, it is connected to the information processing server 100 via, for example, a display cable.
- The description above has mainly dealt with the case where the video acquired by the image acquisition unit 601 is real-time video captured by the video cameras 200. However, the video is not limited to this; it may be, for example, video stored in a storage medium and played back in the forward direction, or video stored in a storage medium and played back in the reverse direction. These cases are briefly described below.
- When processing video stored in a storage medium, the object (person) detection and tracking processing does not need to be performed in real time; it may be performed faster than the playback speed, or before the video is played back.
- When an object to be tracked is designated, it is determined whether or not the object goes outside the angle of view of the camera. When it does, based on the next camera prediction information calculated by the next camera prediction unit 630, the inter-camera association unit 650 reads (searches) candidate objects from the object tracking information DB 620, calculates the similarity between the objects, and obtains association candidates. If tracking information for the corresponding time has not yet been generated for the next camera before the search, the search is performed after waiting for it to be generated.
- The association confusion rate calculation unit 660 then calculates the association confusion rate, and the display control unit 680 generates the screen for the candidate objects, including information indicating that an object is a candidate (for example, the image 31 illustrated in FIGS. 3 to 5) and information indicating the ease of confusion (for example, the image 32 illustrated in FIGS. 4 and 5), and displays it on the display device 300. At this time, the candidates may be presented in descending order of the likelihood of being the correct candidate, according to their consistency with the predicted time and their degree of similarity; one possible scoring is sketched below.
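- The ordering just mentioned can be realized, for example, by scoring each candidate with the product of its feature similarity and a time-consistency weight. The exponential decay and the time scale below are assumptions made for this sketch, not values given in the embodiment.

```python
import math

def rank_candidates(candidates, predicted_time, time_scale=5.0):
    """Sort association candidates, best first.

    candidates:     list of (object_id, similarity, observed_time) tuples
                    read from the object tracking information DB 620
    predicted_time: appearance time from the next camera prediction info
    time_scale:     how quickly the score decays as the observed time drifts
                    from the predicted time (seconds, assumed)
    """
    def score(candidate):
        _, similarity, observed_time = candidate
        time_weight = math.exp(-abs(observed_time - predicted_time) / time_scale)
        return similarity * time_weight

    return sorted(candidates, key=score, reverse=True)
```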
- The processing for recorded video described in “1.7.1” can also be applied to reverse playback. Reverse playback is effective, for example, when an object that behaved suspiciously at a certain point in time is taken as the tracking target and its movements up to that point are traced back. The processing for reverse playback is basically the same as in “1.7.1”, except that the search proceeds in the reverse direction along the time axis. That is, the time at which the tracking target object enters the angle of view of a camera is obtained from the tracking information, and when the object leaves the angle of view, the next camera prediction is performed in the reverse direction of time to generate the next camera prediction information.
- FIG. 9 is a block diagram illustrating a functional configuration of a monitoring apparatus 900 that is an image processing system.
- The monitoring apparatus 900 includes an input unit 910, a prediction unit 920, and a display control unit 930.
- The input unit 910 receives input of moving images captured by a plurality of video cameras.
- The prediction unit 920 predicts the video camera in which an object detected in the moving images input by the input unit 910 will appear next.
- The display control unit 930 notifies the user acting as the supervisor of how easily objects may be confused, according to the similarity between the object detected in the moving images and other objects that may appear in the moving image of the video camera predicted by the prediction unit 920. Further, the display control unit 930 displays the moving image from the video camera predicted by the prediction unit 920 on a display device (not shown).
- (Appendix 1) An image processing system comprising: input means for receiving input of moving images captured by a plurality of video cameras; prediction means for predicting the video camera in which an object detected in the moving images input by the input means will appear next; and display control means for notifying how easily objects may be confused, according to the similarity between the detected object and other objects that may appear in the moving image of the video camera predicted by the prediction means, and for displaying the moving image from the video camera predicted by the prediction means on a display device.
- (Appendix 2) The image processing system according to Appendix 1, wherein the display control means notifies how easily objects may be confused, according to the similarity to other objects that may appear in the video camera within a certain time from the time at which the detected object is predicted to appear in the video camera predicted by the prediction means.
- (Appendix 3) The image processing system according to Appendix 1 or Appendix 2, wherein the input means receives the moving images stored in a storage device after being captured by the plurality of video cameras.
- (Appendix 4) The image processing system according to Appendix 3, wherein the input means receives the moving images in the reverse of the order in which they were shot.
- (Appendix 5) The image processing system according to any one of Appendices 1 to 4, wherein the display control means notifies how easily objects may be confused by displaying an image in the vicinity of the position where the object is predicted to appear on the moving image of the video camera predicted by the prediction means.
- (Appendix 8) The image processing method according to Appendix 6 or Appendix 7, wherein the moving images stored in a storage device after being captured by the plurality of video cameras are received.
- (Appendix 11) A program causing a computer to execute: a process of receiving input of moving images captured by a plurality of video cameras; a process of predicting the video camera in which an object detected in the input moving images will appear next; and a process of notifying how easily objects may be confused, according to the similarity between the detected object and other objects that may appear in the moving image of the predicted video camera, and displaying the moving image from the predicted video camera on a display device.
- (Appendix 12) The program according to Appendix 11, wherein how easily objects may be confused is notified according to the similarity to other objects that may appear in the predicted video camera within a certain time from the time at which the detected object is predicted to appear in that video camera.
- DESCRIPTION OF SYMBOLS: 1 ... monitoring system; 30 ... moving image display area; 31, 32 ... image; 100 ... information processing server; 200 ... video camera; 300 ... display device; 400 ... input device; 601 ... image acquisition unit; 610 ... object detection/tracking unit; 611 ... object detection unit; 613 ... object tracking unit; 615 ... object feature amount extraction unit; 620 ... object tracking information DB; 630 ... next camera prediction unit; 640 ... camera arrangement information; 650 ... inter-camera association unit; 660 ... association confusion rate calculation unit; 670 ... inter-camera association information DB; 680 ... display control unit; 801 ... processor; 803 ... memory; 805 ... storage device; 807 ... input interface; 809 ... data interface; 811 ... communication interface; 813 ... display device; 900 ... monitoring apparatus; 910 ... input unit; 920 ... prediction unit; 930 ... display control unit
Description
FIGS. 1 to 8 are diagrams for explaining the first embodiment. The present embodiment is described below with reference to these figures, in the following order. First, an overview of the system configuration is given in “1.1”, and specific examples of the display screen are shown in “1.2”, to provide an overview of the entire first embodiment. Then, the functional configuration of the system is described in “1.3”, the flow of processing in “1.4”, and a specific example of a hardware configuration capable of realizing the system in “1.5”. Finally, effects and modifications of the present embodiment are described from “1.6” onward.
The system configuration of the monitoring system 1, which is the image processing system according to the present embodiment, is described with reference to FIG. 1. FIG. 1 is a block diagram showing the system configuration of the monitoring system 1.
Note that processes such as person detection, feature amount extraction, and person tracking within a camera may also be performed on, for example, the information processing server 100 or another information processing apparatus (not shown).
(1.2.1 Overview of person monitoring)
An overview of person monitoring is described below with reference to FIG. 2.
Specific examples of display screens for calling the user's attention are described below with reference to FIGS. 3 to 5.
In the present embodiment, the information processing server 100 predicts in which video camera 200's video the person to be monitored will appear next, and displays the video of that video camera 200 on the display device 300.
Here, the videos of a plurality of video cameras 200 can be displayed on the monitoring screen of the display device 300. For example, the videos from a plurality of video cameras 200 in which the person to be monitored is likely to appear next (for example, about four may be selected in descending order of the likelihood of appearance) may be arranged on the same monitoring screen.
FIGS. 3 to 5 are diagrams showing specific examples of the moving image display area 30 for the video of one video camera 200 within the monitoring screen displayed by the display device 300.
In the example of FIG. 3, an image 31 indicating that the person to be monitored is highly likely to appear in the near future from the imaged door is displayed superimposed on the moving image in the moving image display area 30.
The functional configuration of the monitoring system 1 is described below with reference to FIG. 6. Note that FIG. 6 omits the functional components, including the input device 400, with which the user registers persons to be monitored or associates persons.
Next, the flow of processing of the monitoring system 1 is described with reference to FIG. 7. FIG. 7 is a flowchart showing the flow of processing of the information processing server 100 according to the present embodiment.
An example of a hardware configuration for realizing the above-described information processing server 100 with a computer is described below with reference to FIG. 8. As described above, the functions of the information processing server 100 can also be realized by a plurality of information processing apparatuses (for example, a server and a client).
As described above, in the monitoring system 1 according to the present embodiment, when a person (object) to be tracked (monitored) is tracked, the video/position where that person will appear is presented to the user in an easily understandable way (for example, by the image 31, specific examples of which are shown in FIGS. 3 to 5). This makes it easier for the user acting as the supervisor to associate persons who are the same as the person to be monitored.
The above description has focused on the case where the video acquired by the image acquisition unit 601 is mainly real-time video captured by the video cameras 200, but the video is not limited to this; it may be, for example, video stored in a storage medium and played back in the forward direction, or video stored in a storage medium and played back in the reverse direction. These cases are briefly described below.
When processing video stored in a storage medium, the detection and tracking of objects (persons) need not be performed in real time; it may be performed faster than the playback speed, or before the video is played back. When an object to be tracked is designated, it is determined whether or not the object goes outside the angle of view of the camera, and when it does, based on the next camera prediction information calculated by the next camera prediction unit 630, the inter-camera association unit 650 reads (searches) candidate objects from the object tracking information DB 620, calculates the similarity between the objects, and obtains association candidates. Here, if tracking information for the corresponding time has not yet been generated for the next camera before the search, the search is performed after waiting for it to be generated.
At this time, the candidates may be presented in descending order of the likelihood of being the correct candidate, according to their consistency with the predicted time and their degree of similarity.
The processing for recorded video described in “1.7.1” can also be applied to reverse playback. Reverse playback is effective, for example, when an object that behaved suspiciously at a certain point in time is taken as the tracking target and its movements up to that point are traced back. The processing for reverse playback is basically the same as in “1.7.1”, except that the search proceeds in the reverse direction along the time axis. That is, the time at which the tracking target object enters the angle of view of a camera is obtained from the tracking information, and when the object leaves the angle of view, the next camera prediction is performed in the reverse direction of time to generate the next camera prediction information.
A second embodiment is described below with reference to FIG. 9. FIG. 9 is a block diagram showing the functional configuration of a monitoring apparatus 900, which is an image processing system. As shown in FIG. 9, the monitoring apparatus 900 includes an input unit 910, a prediction unit 920, and a display control unit 930.
With this implementation, the monitoring apparatus 900 according to the present embodiment can suppress errors in associating persons shown in moving images.
The configurations of the above-described embodiments may be combined, or some of their components may be replaced. Furthermore, the configuration of the present invention is not limited to the above-described embodiments, and various modifications may be made without departing from the gist of the present invention.
An image processing system comprising: input means for receiving input of moving images captured by a plurality of video cameras; prediction means for predicting the video camera in which an object detected in the moving images input by the input means will appear next; and display control means for notifying how easily objects may be confused, according to the similarity between the detected object and other objects that may appear in the moving image of the video camera predicted by the prediction means, and for displaying the moving image from the video camera predicted by the prediction means on a display device.
The image processing system according to Appendix 1, wherein the display control means notifies how easily objects may be confused, according to the similarity to other objects that may appear in the video camera within a certain time from the time at which the detected object is predicted to appear in the video camera predicted by the prediction means.
The image processing system according to Appendix 1 or Appendix 2, wherein the input means receives the moving images stored in a storage device after being captured by the plurality of video cameras.
The image processing system according to Appendix 3, wherein the input means receives the moving images in the reverse of the order in which they were shot.
The image processing system according to any one of Appendices 1 to 4, wherein the display control means notifies how easily objects may be confused by displaying an image in the vicinity of the position where the object is predicted to appear on the moving image of the video camera predicted by the prediction means.
An image processing method performed by an image processing system, comprising the steps of: receiving input of moving images captured by a plurality of video cameras; predicting the video camera in which an object detected in the input moving images will appear next; and notifying how easily objects may be confused, according to the similarity between the detected object and other objects that may appear in the moving image of the predicted video camera, and displaying the moving image from the predicted video camera on a display device.
The image processing method according to Appendix 6, wherein how easily objects may be confused is notified according to the similarity to other objects that may appear in the predicted video camera within a certain time from the time at which the detected object is predicted to appear in that video camera.
The image processing method according to Appendix 6 or Appendix 7, wherein the moving images stored in a storage device after being captured by the plurality of video cameras are received.
The image processing method according to Appendix 8, wherein the moving images are received in the reverse of the order in which they were shot.
The image processing method according to any one of Appendices 6 to 9, wherein how easily objects may be confused is notified by displaying an image in the vicinity of the position where the object is predicted to appear on the moving image of the predicted video camera.
A program causing a computer to execute: a process of receiving input of moving images captured by a plurality of video cameras; a process of predicting the video camera in which an object detected in the input moving images will appear next; and a process of notifying how easily objects may be confused, according to the similarity between the detected object and other objects that may appear in the moving image of the predicted video camera, and displaying the moving image from the predicted video camera on a display device.
The program according to Appendix 11, wherein how easily objects may be confused is notified according to the similarity to other objects that may appear in the predicted video camera within a certain time from the time at which the detected object is predicted to appear in that video camera.
The program according to Appendix 11 or Appendix 12, wherein the moving images stored in a storage device after being captured by the plurality of video cameras are received.
The program according to Appendix 13, wherein the moving images are received in the reverse of the order in which they were shot.
The program according to any one of Appendices 11 to 14, wherein how easily objects may be confused is notified by displaying an image in the vicinity of the position where the object is predicted to appear on the moving image of the predicted video camera.
Claims (7)
1. An image processing system comprising: input means for receiving input of moving images captured by a plurality of video cameras; prediction means for predicting the video camera in which an object detected in the moving images input by the input means will appear next; and display control means for notifying how easily objects may be confused, according to the similarity between the detected object and other objects that may appear in the moving image of the video camera predicted by the prediction means, and for displaying the moving image from the video camera predicted by the prediction means on a display device.
2. The image processing system according to claim 1, wherein the display control means notifies how easily objects may be confused, according to the similarity to other objects that may appear in the video camera within a certain time from the time at which the detected object is predicted to appear in the video camera predicted by the prediction means.
3. The image processing system according to claim 1 or claim 2, wherein the input means receives the moving images stored in a storage device after being captured by the plurality of video cameras.
4. The image processing system according to claim 3, wherein the input means receives the moving images in the reverse of the order in which they were shot.
5. The image processing system according to any one of claims 1 to 4, wherein the display control means notifies how easily objects may be confused by displaying an image in the vicinity of the position where the object is predicted to appear on the moving image of the video camera predicted by the prediction means.
6. An image processing method performed by an image processing system, comprising the steps of: receiving input of moving images captured by a plurality of video cameras; predicting the video camera in which an object detected in the input moving images will appear next; and notifying how easily objects may be confused, according to the similarity between the detected object and other objects that may appear in the moving image of the predicted video camera, and displaying the moving image from the predicted video camera on a display device.
7. A program causing a computer to execute: a process of receiving input of moving images captured by a plurality of video cameras; a process of predicting the video camera in which an object detected in the input moving images will appear next; and a process of notifying how easily objects may be confused, according to the similarity between the detected object and other objects that may appear in the moving image of the predicted video camera, and displaying the moving image from the predicted video camera on a display device.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014535422A JP6213843B2 (ja) | 2012-09-13 | 2013-07-31 | 画像処理システム、画像処理方法及びプログラム |
US14/427,730 US9684835B2 (en) | 2012-09-13 | 2013-07-31 | Image processing system, image processing method, and program |
BR112015005258A BR112015005258A2 (pt) | 2012-09-13 | 2013-07-31 | sistema de processamento de imagem, método de processamento de imagem e programa |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012201942 | 2012-09-13 | ||
JP2012-201942 | 2012-09-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014041912A1 true WO2014041912A1 (ja) | 2014-03-20 |
Family
ID=50278040
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/070697 WO2014041912A1 (ja) | 2012-09-13 | 2013-07-31 | 画像処理システム、画像処理方法及びプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US9684835B2 (ja) |
JP (1) | JP6213843B2 (ja) |
BR (1) | BR112015005258A2 (ja) |
WO (1) | WO2014041912A1 (ja) |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9443298B2 (en) | 2012-03-02 | 2016-09-13 | Authentect, Inc. | Digital fingerprinting object authentication and anti-counterfeiting system |
US8774455B2 (en) | 2011-03-02 | 2014-07-08 | Raf Technology, Inc. | Document fingerprinting |
US20150288928A1 (en) * | 2014-04-08 | 2015-10-08 | Sony Corporation | Security camera system use of object location tracking data |
US9934453B2 (en) * | 2014-06-19 | 2018-04-03 | Bae Systems Information And Electronic Systems Integration Inc. | Multi-source multi-modal activity recognition in aerial video surveillance |
KR102174839B1 (ko) * | 2014-12-26 | 2020-11-05 | 삼성전자주식회사 | 보안 시스템 및 그 운영 방법 및 장치 |
US10572883B2 (en) | 2016-02-19 | 2020-02-25 | Alitheon, Inc. | Preserving a level of confidence of authenticity of an object |
EP3236401A1 (en) | 2016-04-18 | 2017-10-25 | Alitheon, Inc. | Authentication-triggered processes |
EP3244344A1 (en) * | 2016-05-13 | 2017-11-15 | DOS Group S.A. | Ground object tracking system |
US10740767B2 (en) | 2016-06-28 | 2020-08-11 | Alitheon, Inc. | Centralized databases storing digital fingerprints of objects for collaborative authentication |
US10915612B2 (en) | 2016-07-05 | 2021-02-09 | Alitheon, Inc. | Authenticated production |
US9998907B2 (en) * | 2016-07-25 | 2018-06-12 | Kiana Analytics Inc. | Method and apparatus for uniquely identifying wireless devices |
US10902540B2 (en) | 2016-08-12 | 2021-01-26 | Alitheon, Inc. | Event-driven authentication of physical objects |
US10839528B2 (en) | 2016-08-19 | 2020-11-17 | Alitheon, Inc. | Authentication-based tracking |
US20200134520A1 (en) * | 2017-03-14 | 2020-04-30 | Rutgers, The State University Of New Jersey | Method and system for dynamically improving the performance of security screening |
US11055538B2 (en) * | 2017-03-31 | 2021-07-06 | Disney Enterprises, Inc. | Object re-identification with temporal context |
US11062118B2 (en) | 2017-07-25 | 2021-07-13 | Alitheon, Inc. | Model-based digital fingerprinting |
JP7246005B2 (ja) * | 2017-10-05 | 2023-03-27 | パナソニックIpマネジメント株式会社 | 移動体追跡装置及び移動体追跡方法 |
EP3514715A1 (en) | 2018-01-22 | 2019-07-24 | Alitheon, Inc. | Secure digital fingerprint key object database |
US11501568B2 (en) * | 2018-03-23 | 2022-11-15 | Nec Corporation | Information processing apparatus, person search system, place estimation method, and non-transitory computer readable medium storing program |
US11140308B2 (en) * | 2018-07-25 | 2021-10-05 | International Business Machines Corporation | Life-logging system with third-person perspective |
JP7229698B2 (ja) * | 2018-08-20 | 2023-02-28 | キヤノン株式会社 | 情報処理装置、情報処理方法及びプログラム |
SG10201807678WA (en) * | 2018-09-06 | 2020-04-29 | Nec Asia Pacific Pte Ltd | A method for identifying potential associates of at least one target person, and an identification device |
US10963670B2 (en) | 2019-02-06 | 2021-03-30 | Alitheon, Inc. | Object change detection and measurement using digital fingerprints |
EP3734506A1 (en) | 2019-05-02 | 2020-11-04 | Alitheon, Inc. | Automated authentication region localization and capture |
EP3736717A1 (en) | 2019-05-10 | 2020-11-11 | Alitheon, Inc. | Loop chain digital fingerprint method and system |
US11250271B1 (en) * | 2019-08-16 | 2022-02-15 | Objectvideo Labs, Llc | Cross-video object tracking |
US11238146B2 (en) | 2019-10-17 | 2022-02-01 | Alitheon, Inc. | Securing composite objects using digital fingerprints |
EP3859603A1 (en) | 2020-01-28 | 2021-08-04 | Alitheon, Inc. | Depth-based digital fingerprinting |
EP3885984A1 (en) | 2020-03-23 | 2021-09-29 | Alitheon, Inc. | Facial biometrics system and method of using digital fingerprints |
US11341348B2 (en) | 2020-03-23 | 2022-05-24 | Alitheon, Inc. | Hand biometrics system and method using digital fingerprints |
EP3929806A3 (en) | 2020-04-06 | 2022-03-09 | Alitheon, Inc. | Local encoding of intrinsic authentication data |
US11663849B1 (en) | 2020-04-23 | 2023-05-30 | Alitheon, Inc. | Transform pyramiding for fingerprint matching system and method |
US11983957B2 (en) | 2020-05-28 | 2024-05-14 | Alitheon, Inc. | Irreversible digital fingerprints for preserving object security |
EP3926496A1 (en) | 2020-06-17 | 2021-12-22 | Alitheon, Inc. | Asset-backed digital security tokens |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ITRM20050192A1 (it) * | 2005-04-20 | 2006-10-21 | Consiglio Nazionale Ricerche | Sistema per la rilevazione e la classificazione di eventi durante azioni in movimento. |
TWI489394B (zh) * | 2008-03-03 | 2015-06-21 | Videoiq Inc | 用於追蹤、索引及搜尋之物件匹配 |
2013
- 2013-07-31 WO PCT/JP2013/070697 patent/WO2014041912A1/ja active Application Filing
- 2013-07-31 JP JP2014535422A patent/JP6213843B2/ja active Active
- 2013-07-31 US US14/427,730 patent/US9684835B2/en active Active
- 2013-07-31 BR BR112015005258A patent/BR112015005258A2/pt not_active IP Right Cessation
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005012415A (ja) * | 2003-06-18 | 2005-01-13 | Matsushita Electric Ind Co Ltd | 監視映像モニタリングシステム、監視映像生成方法、および監視映像モニタリングサーバ |
JP2008219570A (ja) * | 2007-03-06 | 2008-09-18 | Matsushita Electric Ind Co Ltd | カメラ間連結関係情報生成装置 |
JP2011227654A (ja) * | 2010-04-19 | 2011-11-10 | Panasonic Corp | 照合装置 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018165849A (ja) * | 2017-03-28 | 2018-10-25 | 達広 佐野 | カメラによる属性収集システム |
US10891740B2 (en) | 2017-05-29 | 2021-01-12 | Kabushiki Kaisha Toshiba | Moving object tracking apparatus, moving object tracking method, and computer program product |
Also Published As
Publication number | Publication date |
---|---|
US9684835B2 (en) | 2017-06-20 |
US20150248587A1 (en) | 2015-09-03 |
BR112015005258A2 (pt) | 2017-07-04 |
JP6213843B2 (ja) | 2017-10-18 |
JPWO2014041912A1 (ja) | 2016-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6213843B2 (ja) | 画像処理システム、画像処理方法及びプログラム | |
JP6741130B2 (ja) | 情報処理システム、情報処理方法及びプログラム | |
JP7131599B2 (ja) | 情報処理システム、情報処理方法及びプログラム | |
JP6210234B2 (ja) | 画像処理システム、画像処理方法及びプログラム | |
JP6347211B2 (ja) | 情報処理システム、情報処理方法及びプログラム | |
WO2014050432A1 (ja) | 情報処理システム、情報処理方法及びプログラム | |
JP6292540B2 (ja) | 情報処理システム、情報処理方法及びプログラム | |
JP6233721B2 (ja) | 情報処理システム、情報処理方法及びプログラム | |
JP6954416B2 (ja) | 情報処理装置、情報処理方法、及びプログラム |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13837122; Country of ref document: EP; Kind code of ref document: A1
 | ENP | Entry into the national phase | Ref document number: 2014535422; Country of ref document: JP; Kind code of ref document: A
 | WWE | Wipo information: entry into national phase | Ref document number: 14427730; Country of ref document: US
 | NENP | Non-entry into the national phase | Ref country code: DE
 | REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112015005258; Country of ref document: BR
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 13837122; Country of ref document: EP; Kind code of ref document: A1
 | ENP | Entry into the national phase | Ref document number: 112015005258; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20150310