WO2014155979A1 - Tracking processing device, tracking processing system provided with same, and tracking processing method - Google Patents
Tracking processing device, tracking processing system provided with same, and tracking processing method
- Publication number
- WO2014155979A1 PCT/JP2014/001174 JP2014001174W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tracking
- camera
- cameras
- information
- moving object
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19608—Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
Definitions
- The present invention relates to a tracking processing device for tracking a moving object using images captured by a plurality of cameras, a tracking processing system including the tracking processing device, and a tracking processing method.
- In such tracking, captured images from cameras installed at a plurality of places where the moving object passes are acquired, and the moving object is tracked by each camera or across the plurality of cameras using those captured images.
- In this case, occlusion, that is, a state in which the tracking target is hidden, as seen from the camera, by other objects in front of it depending on the positional relationship between the camera and the moving object (tracking target), may occur.
- Technologies have therefore been developed that reduce the influence of such occlusion and enable high-precision tracking. For example, a tracking processing device is known in which a plurality of cameras capture a monitoring space from different viewpoints, a plurality of hypotheses representing combinations of the predicted position of a moving object at the current time and one monitoring camera are set, and the position of the moving object at the current time is obtained based on a hypothesis with a high likelihood (that is, a hypothesis suited to tracking the moving object) (see Patent Document 1).
- In the surveillance space, a blind spot may occur due to obstacles (structures such as walls and pillars, fixtures, etc.), and a range excluding such blind spots is defined as the effective shooting range.
- The present invention has been devised in view of such problems of the prior art, and its main object is to provide a tracking processing device that improves the accuracy of tracking processing between cameras when a moving object is tracked using images captured by a plurality of cameras, a tracking processing system including the tracking processing device, and a tracking processing method.
- The tracking processing device is a tracking processing device that tracks a moving object between cameras using images captured by a plurality of cameras, and includes: storage means for storing, for each camera, a plurality of pieces of in-camera tracking information including image information of the moving object acquired from the captured images; a narrowing unit for extracting, by narrowing down the plurality of pieces of in-camera tracking information, inter-camera tracking information used for the tracking processing of the moving object between the plurality of cameras; and an association processing unit for associating the moving object in the captured images between the plurality of cameras based on the inter-camera tracking information.
- According to the present invention, when a moving object is tracked using images captured by a plurality of cameras, the accuracy of the tracking processing between the cameras can be improved.
- A first invention made to solve the above-described problems is a tracking processing device that tracks a moving object between cameras using images captured by a plurality of cameras, and that includes storage means for storing, for each camera, a plurality of pieces of in-camera tracking information including image information of the moving object acquired from the captured images.
- According to this tracking processing device, when a moving object (tracking target) is tracked using images captured by a plurality of cameras, the in-camera tracking information of each camera is narrowed down to extract the inter-camera tracking information used for the tracking processing between the plurality of cameras, so the accuracy of the tracking processing between the plurality of cameras (the processing of associating moving objects) can be improved.
- A second invention further includes similarity calculation means for calculating mutual similarities of the plurality of pieces of in-camera tracking information with respect to the image information of the moving object, and the narrowing is performed by excluding in-camera tracking information whose similarity is relatively low among the plurality of pieces of in-camera tracking information.
- According to this configuration, since the plurality of pieces of in-camera tracking information are narrowed down based on their mutual similarities, in-camera tracking information that is not suitable for the tracking processing between the plurality of cameras (because of occlusion, etc.) can be easily and reliably excluded from the processing target.
- A third invention further includes invalid area setting means for setting, with respect to each camera, an invalid area in the captured image, and the narrowing unit performs the narrowing by excluding, from the plurality of pieces of in-camera tracking information, in-camera tracking information related to a moving object located in the invalid area.
- According to this configuration, in-camera tracking information related to a moving object located within a preset invalid area (for example, an area where an obstacle exists), which is not suitable for the tracking processing, can be easily and reliably excluded from the processing target.
- In a fourth invention, the invalid area according to the third invention can be reset by a user.
- According to this tracking processing device, since the user can reset a preset invalid area, the accuracy of the invalid area improves, and in-camera tracking information that is not suitable for the tracking processing between the plurality of cameras can be excluded from the processing target even more reliably.
- In a fifth invention, the invalid area setting means sets the invalid area based on position information of the moving object related to the in-camera tracking information previously excluded by the narrowing means.
- According to this configuration, the invalid area is set using the position information of moving objects related to in-camera tracking information excluded in the past, which makes appropriate setting of the invalid area possible.
- A sixth invention further includes operation mode selection means for selecting, with respect to the processing of the association processing means, between a first operation mode that prioritizes processing accuracy and a second operation mode that prioritizes processing speed; the association processing means associates the moving object based on the inter-camera tracking information when the first operation mode is selected, and based on the in-camera tracking information when the second operation mode is selected.
- According to the tracking processing device of the sixth invention, the processing of the association processing means can be executed appropriately according to the priority placed on processing accuracy or processing speed.
- A seventh invention further includes a movement direction calculation unit that calculates a movement direction vector of the moving object based on position information of the moving object in the plurality of pieces of in-camera tracking information, and the narrowing unit extracts the inter-camera tracking information based on the degree of coincidence of the angles of the movement direction vectors between the plurality of cameras.
- According to the tracking processing device of the seventh invention, since the angle of the movement direction vector of a moving object tends to coincide with the direction the moving object faces, extracting in-camera tracking information with a high degree of coincidence in the angles of the movement direction vectors as inter-camera tracking information improves the accuracy of the tracking processing between the cameras.
- An eighth invention further includes result presentation means for presenting the association result of the moving object by the association processing means to the user and having the user determine whether or not the association result is appropriate.
- According to this configuration, since the user determines whether or not the association result of the moving objects is appropriate, the accuracy of the tracking processing among the plurality of cameras can be further improved.
- A ninth invention is a tracking processing system comprising the tracking processing device according to any one of the first to eighth inventions, the plurality of cameras, and an in-camera tracking device that generates the in-camera tracking information.
- A tenth invention is a tracking processing method for tracking a moving object between cameras using images captured by a plurality of cameras, comprising: a tracking information acquisition step of acquiring, for each camera, a plurality of pieces of in-camera tracking information including image information of the moving object acquired from the captured images; a tracking information narrowing step of extracting, by narrowing down the plurality of pieces of in-camera tracking information, inter-camera tracking information used for the tracking processing of the moving object between the plurality of cameras; and an association processing step of associating the moving object in the captured images between the plurality of cameras based on the inter-camera tracking information.
- FIG. 1 is a configuration diagram of a tracking processing system according to an embodiment of the present invention
- FIG. 2 is an explanatory diagram showing an example of camera arrangement in the tracking processing system
- FIG. 3 is an explanatory diagram of (A) operation mode selection and (B) invalid area setting by a user of the tracking processing system
- The tracking processing system 1 includes a plurality of (here, three) cameras 2 that capture moving objects such as a person to be tracked or a vehicle, in-camera tracking processing devices 3 that are connected to the cameras 2 and perform tracking processing of the moving objects in the captured images of each camera 2 (hereinafter, "in-camera tracking processing"), and an inter-camera tracking processing device 4 that acquires the results of that tracking processing (hereinafter, "in-camera tracking information") and performs tracking processing of the moving objects between the plurality of cameras 2.
- Hereinafter, the inter-camera tracking processing and the inter-camera tracking processing device 4 are referred to simply as "tracking processing" and "tracking processing device 4", respectively, to distinguish them from the in-camera tracking processing and the in-camera tracking processing device 3.
- The cameras 2 are each a video camera for monitoring, and sequentially transmit their captured images (color moving images) to the in-camera tracking processing device 3.
- Each camera 2 is installed on a wall or ceiling of a building or the like so that it can shoot a place that needs to be monitored.
- Here, a person (moving object) H passing through the passage 11 in the direction of the arrow is the tracking target, and the arrangement of each camera 2 (that is, the positional relationship between each camera and the passage or other place where persons move) is determined so that the size and angle (body direction) of the photographed person H are approximately the same across the cameras.
- Each camera 2 has a pan / tilt function and a zoom function in order to adjust the size and angle of the person H to be photographed.
- Here, the shooting direction of each camera 2 is set obliquely downward with respect to the vertical direction (the up-down direction in the drawing), but the shooting direction of each camera 2 is arbitrary.
- Here, the fields of view (shooting ranges) 12 of the plurality of cameras 2 do not overlap one another, but a configuration in which the fields of view overlap as necessary is also possible.
- The number of cameras used in the tracking processing system 1 can be changed as necessary.
- The in-camera tracking processing device 3 consists of a PC (Personal Computer) connected to the camera 2 by a dedicated cable or a network such as a LAN, but it is not limited to this and may instead be configured from a server, an electronic device incorporating a microcomputer that executes the tracking processing, or the like.
- Here, one in-camera tracking processing device 3 is provided for each camera 2.
- The configuration is not limited to this, however; images captured by a plurality of cameras 2 may be processed individually by one in-camera tracking processing device 3, and a configuration in which the function of the in-camera tracking processing device 3 is added to each camera 2, or to the tracking processing device 4 described in detail later, is also possible.
- The in-camera tracking processing device 3 performs in-camera tracking processing of photographed persons based on the video signal (captured images) input from the camera 2, using a known person tracking technique. For example, the in-camera tracking processing device 3 compares a captured image of the camera 2 with a background image acquired in advance (calculating the difference in luminance value for each pixel: background difference) to extract change regions where moving objects exist.
- The background image is a captured image in which no moving object appears, and is captured in advance by each camera 2.
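As a rough illustration of this background-difference step, the following Python sketch (OpenCV and NumPy are assumed, as are the threshold and minimum-area values, which are not specified in the patent) extracts change regions by comparing a frame against a pre-captured background image:

```python
import cv2

def extract_change_regions(frame, background, thresh=30, min_area=500):
    """Background difference: compare the luminance of each pixel against a
    pre-captured background image and return bounding boxes of change regions.
    The threshold and minimum area are illustrative values only."""
    gray_f = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_f, gray_b)  # per-pixel luminance difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only regions large enough to plausibly contain a moving object
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```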
- Within a change region, the in-camera tracking processing device 3 detects a rectangular person region containing the person and part of the background, using a known feature amount (for example, the HOG (Histogram of Oriented Gradients) feature).
- The in-camera tracking processing device 3 detects a person region in each captured image and tracks the person region using known template matching or a known tracking filter (for example, a particle filter).
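A minimal sketch of the HOG-based person-region detection mentioned above, using OpenCV's bundled pedestrian detector (the patent names only the HOG feature amount; the default people detector and the scan parameters here are assumptions):

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_person_regions(frame):
    """Detect rectangular person regions (each containing the person and part
    of the background), analogous to the person-region detection step above."""
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8))
    return list(rects)  # each rect is (x, y, w, h)
```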
- The in-camera tracking processing device 3 stores the information of each detected person image (pixel information of the person region) and its related information (position information of the person, etc.) in a predetermined memory as in-camera tracking information.
- The pieces of in-camera tracking information obtained by the in-camera tracking processing are sequentially sent to the tracking processing device 4.
- The in-camera tracking processing is not limited to the method described above.
- For example, a set (region) of pixels in which motion occurs may be detected as a person by taking the difference between captured images with different shooting times (inter-frame difference). In either case, in-camera tracking information is obtained for each person.
- The tracking processing device 4 consists of a PC connected to the in-camera tracking processing devices 3 through a dedicated cable or a network such as a LAN, but it is not limited to this and may instead be configured from a server, an electronic device incorporating a microcomputer that executes the tracking processing, or the like.
- The tracking processing device 4 includes a storage unit (storage means) 21 that stores the in-camera tracking information sequentially input from the in-camera tracking processing devices 3, and a tracking condition setting unit 23 that sets the processing conditions of the tracking processing device 4 based on user input information; the user input information is input by the user via the input unit 22.
- The invalid area setting unit 35 sets an invalid area in the captured image (shooting range) of each camera 2 based on the user input information, and the operation mode selection unit (operation mode selection means) 36 selects the operation mode (first operation mode or second operation mode) of the association processing unit 27 based on the user input information.
- The tracking processing device 4 also includes a similarity calculation unit (similarity calculation means) 24 that calculates the mutual similarities of the plurality of pieces of in-camera tracking information, a movement direction calculation unit (movement direction calculation means) 25 that calculates a movement direction vector of a person based on the position of the person in the plurality of pieces of in-camera tracking information for each camera 2, a narrowing unit (narrowing means) 26 that narrows down the plurality of pieces of in-camera tracking information for each camera 2 (that is, excludes inappropriate in-camera tracking information) to extract the tracking information to be used for the tracking processing of a person between the cameras 2 (hereinafter, "inter-camera tracking information"), an association processing unit (association processing means) 27 that associates persons in the captured images between the cameras 2 based on the extracted inter-camera tracking information, and a tracking result presentation unit 28 that presents the association result by the association processing unit 27 to the user and has the user determine whether the association result is appropriate.
- The output unit 29 includes a liquid crystal display and the input unit 22 includes a keyboard and a mouse, but they are not limited to these; the output unit 29 and the input unit 22 may be configured using other devices such as a touch-panel display.
- The user input information includes at least information on the position and size of the invalid area in the captured image, described later, and selection information for the operation mode of the association processing of the association processing unit 27 (here, the high-accuracy mode or the high-speed mode).
- The inter-camera tracking information includes, in addition to the information of the person image photographed by the camera 2 (pixel information of the person region), related information such as the position of the person, the shooting time, and the movement direction vector of the person.
- The information on the position of the person consists of a position history of the person region (the person's movement trajectory).
- The position of the person can be specified by various known methods; here, the coordinates of the person's head are used as the reference position of the person.
- The tracking condition setting unit 23 has a GUI (Graphical User Interface) function: as shown in FIG. 3, the user can input tracking conditions and other user input information via the input unit 22 on the user input screens displayed on the output unit 29. On the operation mode selection screen for the association processing shown in FIG. 3A, the user can select, by mouse click, either of the following:
- First operation mode: high-accuracy mode (prioritizes processing accuracy)
- Second operation mode: high-speed mode (prioritizes processing speed)
- On the invalid area setting screen shown in FIG. 3B, the user can set an invalid area 31 in the captured image (shooting range) 30. More specifically, while observing the captured image, the user can drag the mouse pointer over an area where a valid person image cannot be obtained, thereby setting the invalid area 31 (the shaded rectangular area in the captured image 30).
- The invalid area 31 is held as a candidate area until it is confirmed by the user (for example, when the user presses a confirmation button (not shown)), and the user can reset it (correct its position and size) as appropriate.
- The user can move the invalid area 31 by dragging the mouse pointer over the invalid area 31 (candidate area), and can change the size of the invalid area 31 by dragging the mouse pointer over any of the black square marks shown on the sides defining the invalid area 31.
- The setting method is not limited to this; the user may instead set (or reset) the invalid area 31 by inputting coordinates that define it (for example, the coordinates of the four corners of the rectangular area).
- In the example in FIG. 3B, the vicinity of the fence 32, the utility pole 33, and the roadside tree 34, where occlusion is likely to occur, can be set as invalid areas; the periphery of other obstacles (such as buildings and fixtures) can likewise be set as invalid areas.
- The invalid area is not limited to areas where occlusion is likely to occur; it is also possible to set an area where the direction of a person differs from other places (for example, a side path whose direction differs from the main passage), a dark area, or an area where sufficient image reliability cannot be obtained because it lies in the peripheral portion of the camera's shooting range.
- In FIG. 3B, only one rectangular invalid area 31 is shown, but invalid areas 31 can be set in an arbitrary shape and in an arbitrary number (including zero).
- The information on the position and size of each set invalid area 31 constitutes part of the setting data relating to the tracking processing conditions described in detail later, and is stored in the storage unit 21 as an invalid area map associated with each camera 2.
- The similarity calculation unit 24 evaluates the similarity between pieces of in-camera tracking information by comparing their person images. More specifically, it creates an RGB color histogram for the entire person region, or for each partial region obtained by dividing the person region, and calculates the similarity using the Bhattacharyya coefficient. The color histograms are normalized so as to be less susceptible to differences in person-region size, and when the person region is divided, the similarity between the color histograms is evaluated as the average of the similarities of the individual regions.
- The similarity calculation (similarity evaluation) by the similarity calculation unit 24 is not limited to the method shown here, and other known methods can be used; however, using color histograms has the advantage of avoiding problems caused by differences in size between person images.
- Alternatively, the similarity calculation unit 24 may set an evaluation area of the person image (for example, the head and torso) by a publicly known method (for example, using feature quantities relating to the outer shapes of the head, torso, and legs that are common to persons) and calculate the similarity for that evaluation area.
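The similarity computation described above can be pictured as follows: normalized RGB color histograms compared with the Bhattacharyya coefficient, averaged over sub-regions when the person region is divided. This is a sketch only; the bin count and the three-way split are assumptions, not values from the patent:

```python
import numpy as np

def color_histogram(region, bins=8):
    """Normalized RGB color histogram of a person region (H x W x 3 array).
    Normalization reduces sensitivity to person-region size, as noted above."""
    hist, _ = np.histogramdd(region.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    return hist / hist.sum()

def bhattacharyya_coefficient(p, q):
    """BC(p, q) = sum over bins of sqrt(p * q); 1.0 means identical."""
    return float(np.sum(np.sqrt(p * q)))

def similarity(region_a, region_b, n_splits=3):
    """Average Bhattacharyya coefficient over horizontal sub-regions
    (e.g. head / torso / legs), per the division described above."""
    parts_a = np.array_split(region_a, n_splits, axis=0)
    parts_b = np.array_split(region_b, n_splits, axis=0)
    scores = [bhattacharyya_coefficient(color_histogram(a), color_histogram(b))
              for a, b in zip(parts_a, parts_b)]
    return sum(scores) / len(scores)
```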
- The narrowing unit 26 can acquire the invalid area map set by the invalid area setting unit 35 and narrow down the plurality of pieces of in-camera tracking information based on it.
- In other words, by excluding the in-camera tracking information of persons located in the invalid areas set by the user, the narrowing unit 26 easily and reliably removes in-camera tracking information that is unsuitable for the tracking processing between the cameras 2 from the target of that processing.
- The narrowing unit 26 can also acquire the similarities calculated by the similarity calculation unit 24 and narrow down the plurality of pieces of in-camera tracking information based on them.
- In other words, by narrowing down the plurality of pieces of in-camera tracking information based on their mutual similarities, the narrowing unit 26 easily and reliably removes in-camera tracking information that is unsuitable for the tracking processing between the cameras 2 (images in which the orientation of the person differs from the other images, images with occlusion, etc.) from the target of that processing.
- Furthermore, the narrowing unit 26 can acquire the movement direction vectors calculated by the movement direction calculation unit 25 and narrow down the plurality of pieces of in-camera tracking information based on them. Since the angle of the movement direction vector indicating a person's moving direction tends to coincide with the direction the person faces, the narrowing unit 26 improves the accuracy of the tracking processing between the cameras 2 by extracting, as inter-camera tracking information, the in-camera tracking information whose movement direction vector angles show a high degree of coincidence between the cameras.
- Each of the units 23 to 28 in the tracking processing device 4 is realized by a CPU (Central Processing Unit) that performs calculation and control according to predetermined programs (a tracking processing program, etc.), together with a ROM (Read Only Memory) and a RAM (Random Access Memory) functioning as work memory.
- The storage unit 21 may be any storage device that can store, at least temporarily, the information necessary for the tracking processing in the tracking processing device 4; it consists of a hard disk here, but it is not limited to this, and other storage devices can be used.
- FIG. 4 is a flowchart showing the flow of the tracking process by the tracking processing device
- FIG. 5 is a flowchart showing the details of step ST102 in FIG. 4
- FIGS. 6, 7, and 8 are explanatory diagrams showing overviews of the processing in steps ST201, ST202, and ST203 in FIG. 5, respectively
- FIG. 9 is a diagram showing an example of a screen display related to the presentation of the tracking result in step ST106 in FIG. 4
- First, the tracking condition setting unit 23 creates setting data relating to the tracking processing conditions based on the user input information entered by the user (ST101).
- The setting data created here includes at least the invalid area map described above and the operation mode selection information for the association processing unit 27.
- The tracking condition setting unit 23 does not necessarily need to create the setting data anew; it can partially correct or reuse setting data created in past processing.
- Next, the narrowing unit 26 performs the processing of narrowing down the in-camera tracking information for each camera 2 (ST102).
- First, the narrowing unit 26 narrows down the plurality of pieces of in-camera tracking information based on the invalid area map (ST201).
- In ST201, for example, as shown in FIG. 6, the positions of the persons a1 to a6 in the captured image 30 are compared with the invalid area 31 set by the user (see also FIG. 3B); the in-camera tracking information for the persons a1 to a3, who are determined to be within the invalid area 31, is excluded, and the in-camera tracking information is narrowed down to that for the persons a4 to a6.
- Here, whether or not each of the persons a1 to a6 is within the invalid area 31 is determined by whether or not the entire person region of that person is located within the invalid area 31; however, various other methods can be employed.
- For example, the narrowing unit 26 may make the determination based on whether or not a reference coordinate of the person region (for example, the center coordinate of the head) is within the invalid area 31.
- The same applies to FIGS. 7A, 8A, and 8B described later.
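The invalid-area filtering of ST201 reduces to a containment test between each person region and the invalid area map. The following sketch shows both variants mentioned above (entire person region inside, or only a reference coordinate inside); the track data structure is a hypothetical one for illustration:

```python
def rect_inside(inner, outer):
    """True if rectangle inner = (x, y, w, h) lies entirely within outer."""
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def point_inside(point, rect):
    x, y = point
    rx, ry, rw, rh = rect
    return rx <= x <= rx + rw and ry <= y <= ry + rh

def narrow_by_invalid_areas(tracks, invalid_areas, use_reference_point=False):
    """Exclude in-camera tracking information for persons located in any
    invalid area. Each track is assumed to carry a person rectangle 'rect'
    and a reference coordinate 'head' (e.g. the center of the head)."""
    kept = []
    for t in tracks:
        inside = any(point_inside(t["head"], a) if use_reference_point
                     else rect_inside(t["rect"], a)
                     for a in invalid_areas)
        if not inside:
            kept.append(t)
    return kept
```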
- Next, the narrowing unit 26 narrows down the plurality of pieces of in-camera tracking information based on the similarities acquired from the similarity calculation unit 24 (ST202).
- In ST202, the similarity calculation unit 24 calculates the mutual similarities of the persons b1 to b4 in the captured image 30 shown in FIG. 7A.
- In FIG. 7A, the persons in the captured image 30 are drawn in simplified form as circles (roughly corresponding to heads) and ellipses (roughly corresponding to torsos and legs); the same applies to FIG. 8 described later.
- FIG. 7B is a table summarizing mutual similarities.
- For example, the similarities of person b1 (vertical cell) to persons b2, b3, and b4 (horizontal cells) are 76, 72, and 71, respectively, and their total is 219.
- A larger value indicates a higher degree of similarity.
- Similarly, the similarity totals for persons b2, b3, and b4 (vertical cells) are 254, 247, and 248, respectively.
- The differences in similarity among the persons b1 to b4 shown in FIG. 7B arise from occlusion, false detection of persons, and the like.
- The narrowing unit 26 uses the similarity totals shown in FIG. 7B as the evaluation values in the processing of ST202: it excludes the in-camera tracking information related to person b1, whose similarity total is the smallest, and narrows the in-camera tracking information down to that related to persons b2 to b4.
- Here, the similarity is evaluated for the persons b1 to b4 and the person b1 with the smallest evaluation value (similarity total) is excluded; however, a predetermined threshold may instead be set for the evaluation value, and all in-camera tracking information related to persons that do not satisfy the threshold may be excluded. Alternatively, the in-camera tracking information of the one person with the highest evaluation value (here, person b2) may be extracted as the inter-camera tracking information (with all the others excluded).
- The narrowing processing of ST202 is performed independently of that of ST201, but ST202 can also be executed only for the persons already narrowed down in ST201 (for example, persons a4 to a6 in FIG. 6); in that case, the processing order of ST201 and ST202 can be changed.
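Using the figures from FIG. 7B, the evaluation value of ST202 is the row total of the pairwise similarity table, and the person with the smallest total (b1, at 219) is excluded. A sketch of this narrowing, where the pairwise similarity function is assumed to be the color-histogram similarity described earlier:

```python
import itertools

def narrow_by_mutual_similarity(tracks, similarity, drop_lowest=1):
    """Compute each track's total similarity to all other tracks and exclude
    the track(s) with the lowest totals (one here, matching the FIG. 7B
    example in which b1, whose total is 219, is excluded)."""
    n = len(tracks)
    totals = [0.0] * n
    for i, j in itertools.combinations(range(n), 2):
        s = similarity(tracks[i], tracks[j])
        totals[i] += s
        totals[j] += s
    ranked = sorted(range(n), key=lambda i: totals[i])  # ascending by total
    excluded = set(ranked[:drop_lowest])
    return [t for i, t in enumerate(tracks) if i not in excluded]
```

The threshold variant and the keep-only-the-best variant described above follow the same pattern, differing only in how the ranked totals are cut.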
- Next, the narrowing unit 26 narrows down the plurality of pieces of in-camera tracking information based on the movement direction vectors acquired from the movement direction calculation unit 25 (ST203).
- In the narrowing processing of ST203, as shown in FIG. 8A, the movement direction vectors vc1 to vc7 of the persons c1 to c7 are acquired for the captured image 30a of the first camera 2a, and as shown in FIG. 8B, the movement direction vectors vd1 to vd5 of the persons d1 to d5 are acquired for the captured image 30b of the second camera 2b.
- The movement direction vectors vc1 to vc7 and vd1 to vd5 are calculated based on the coordinates of a specific part of each person (here, the top of the head) at nearby shooting times (that is, at nearby times in the movement trajectory).
- The narrowing unit 26 compares the angles of the movement direction vectors vc1 to vc7 in the captured image of the first camera 2a with the angles of the movement direction vectors vd1 to vd5 in the captured image of the second camera 2b, and selects, as a pair of inter-camera tracking information (that is, as the target of the tracking processing between the cameras), the in-camera tracking information of the persons whose movement direction vectors have the highest degree of coincidence (the smallest angle between them), here person c4 and person d3.
- In the peripheral portions 41a and 41b of the captured images 30a and 30b (shown hatched), distortion of the subject occurs and the reliability of the image decreases, so it is preferable to exclude in advance the persons c1 and c8 and the persons d1 and d6 located in the peripheral portions 41a and 41b from the selection targets. It is also preferable that the narrowing unit 26 determine the magnitude of each movement direction vector and exclude vectors whose magnitude is equal to or smaller than a predetermined threshold from the processing targets of ST203; this avoids a decrease in the reliability of the angle of the movement vector when a person is moving at low speed or is stationary.
- The narrowing processing of ST203 is performed independently of that of ST201 and ST202, but ST203 can also be executed for the person images narrowed down by at least one of ST201 and ST202; in that case, the processing order of ST201 to ST203 can be changed.
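The ST203 pairing can be sketched as follows: movement direction vectors are computed from successive head-top coordinates, vectors that are too short are discarded (slow or stationary persons), and the cross-camera pair with the smallest angular difference is selected. The trajectory layout and the magnitude threshold are assumptions:

```python
import math

def direction_vector(trajectory):
    """Movement direction vector from two head-top coordinates at nearby
    shooting times; trajectory is [(x, y), ...] with at least two entries."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    return (x1 - x0, y1 - y0)

def best_direction_pair(tracks_a, tracks_b, min_norm=2.0):
    """Select one track per camera such that their movement direction vectors
    have the smallest angle between them; vectors shorter than min_norm are
    skipped to avoid unreliable angles, as suggested above."""
    best, best_angle = None, math.inf
    for ta in tracks_a:
        va = direction_vector(ta["trajectory"])
        if math.hypot(*va) <= min_norm:
            continue
        for tb in tracks_b:
            vb = direction_vector(tb["trajectory"])
            if math.hypot(*vb) <= min_norm:
                continue
            angle = abs(math.atan2(va[1], va[0]) - math.atan2(vb[1], vb[0]))
            angle = min(angle, 2 * math.pi - angle)  # wrap into [0, pi]
            if angle < best_angle:
                best, best_angle = (ta, tb), angle
    return best  # the pair of inter-camera tracking information, e.g. (c4, d3)
```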
- Next, the association processing unit 27 checks the operation mode selected in the setting data (ST103); when the high-accuracy mode is selected (ST103: Yes), it associates persons between the cameras 2 based on the inter-camera tracking information extracted in ST102 (ST104).
- Here, one piece of inter-camera tracking information is extracted for each camera 2 (that is, all of the plurality of pieces of in-camera tracking information except one are excluded), but a configuration in which two or more pieces of inter-camera tracking information are extracted from each camera 2 is also possible.
- On the other hand, when the high-speed mode is selected, the association processing unit 27 associates persons between the cameras 2 based on the plurality of pieces of in-camera tracking information as they were before the processing of ST102 (ST105).
- In this case, persons are associated between the cameras 2 based on one piece of in-camera tracking information appropriately selected from the plurality of pieces of in-camera tracking information.
- For example, the association processing unit 27 can select one piece of in-camera tracking information based on the shooting time of the predetermined person detected from the captured images of each camera 2 (for example, the one with the earliest or latest shooting time).
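The branch from ST103 to ST104/ST105 amounts to a simple dispatch on the operation mode: the high-accuracy mode runs the narrowing pipeline before matching, while the high-speed mode skips it and picks one piece of in-camera tracking information per camera by shooting time. A sketch with placeholder function names:

```python
def associate(tracks_per_camera, mode, narrow, match):
    """ST103-ST105 dispatch. In high-accuracy mode, narrow each camera's
    in-camera tracking information to inter-camera tracking information
    before matching; in high-speed mode, select one track per camera by
    shooting time (earliest here; the text allows earliest or latest)."""
    if mode == "high_accuracy":
        candidates = {cam: narrow(tracks)
                      for cam, tracks in tracks_per_camera.items()}
    else:  # "high_speed"
        candidates = {cam: [min(tracks, key=lambda t: t["time"])]
                      for cam, tracks in tracks_per_camera.items()}
    return match(candidates)  # cross-camera association of persons
```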
- Subsequently, the tracking result presentation unit 28 presents the result of the association processing of ST104 or ST105 to the user (ST106).
- In ST106, as shown in FIG. 9, the person images e1 and e2 of the pair of in-camera tracking information associated between two cameras are displayed on the screen.
- The user visually checks whether the person images e1 and e2 belong to the same person: if so, the user approves the association by pressing the "Yes" button 45; if the user determines that they are not the same person, the association can be rejected by pressing the "No" button 46.
- When the association is rejected, the processing may simply be terminated as association being impossible; preferably, however, a plurality of candidate person images are displayed so that the user can select the most suitable one among them and thereby confirm the result of the association processing.
- The processing of ST106 may be executed not only for associating one piece of in-camera tracking information of each camera, but also for associating plural pieces of in-camera tracking information of each camera with one another. The processing of ST104 and ST105 is likewise executed between the other pairs of cameras.
- FIG. 10 is an explanatory view showing a modified example of the invalid area setting in the tracking processing device.
- In the example described above, the invalid area 31 is set by the user, but the tracking processing device 4 can also set an invalid area (or a candidate for one) from past tracking processing results.
- In this case, the tracking processing device 4 sequentially stores the results of past narrowing processing in the storage unit 21 in order to set invalid areas. That is, the storage unit 21 stores data such as the coordinate information of the person regions relating to the in-camera tracking information excluded in the past by ST201 to ST203 in FIG. 5 (or by at least one of them). As shown in FIG. 10, based on these past narrowing results, the invalid area setting unit 35 can set rectangular areas surrounding areas where the persons in the excluded in-camera tracking information (persons f1 to f6 in FIG. 10) appear with high frequency in the captured image 30 as the invalid areas 51 and 52.
- The estimation accuracy of the invalid areas 51 and 52 can be further improved by taking the shooting times of the excluded in-camera tracking information into account (that is, by referring only to the results of narrowing processing for persons photographed in the same time period).
- The invalid areas 51 and 52 can be reset by the user in the same manner as the invalid area 31 shown in FIG. 3B.
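The modified invalid-area setting can be sketched as a frequency map over historically excluded person positions: grid cells in which excluded persons appear often become invalid-area candidates. Cell size, the hit threshold, and the per-cell rectangles are assumptions for illustration:

```python
import numpy as np

def estimate_invalid_areas(excluded_positions, img_w, img_h, cell=40, min_hits=5):
    """Propose invalid-area candidates from coordinates of person regions
    excluded by past narrowing (ST201-ST203 results kept in the storage
    unit). Returns rectangles (x, y, w, h) around high-frequency cells;
    as with FIG. 3B, the user may then reset these candidates."""
    grid_w, grid_h = img_w // cell + 1, img_h // cell + 1
    counts = np.zeros((grid_h, grid_w), dtype=int)
    for x, y in excluded_positions:
        counts[int(y) // cell, int(x) // cell] += 1
    return [(int(cx) * cell, int(cy) * cell, cell, cell)
            for cy, cx in zip(*np.nonzero(counts >= min_hits))]
```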
- As described above, the tracking processing device, the tracking processing system including it, and the tracking processing method according to the present invention improve the accuracy of tracking processing between cameras when a moving object is tracked using captured images from a plurality of cameras, and are useful as a tracking processing device for tracking a moving object using images captured by a plurality of cameras, a tracking processing system including the tracking processing device, a tracking processing method, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Processing (AREA)
- Alarm Systems (AREA)
Description
2 Camera
3 In-camera tracking processing device
4 Inter-camera tracking processing device (tracking processing device)
21 Storage unit (storage means)
22 Input unit
23 Tracking condition setting unit
24 Similarity calculation unit (similarity calculation means)
25 Movement direction calculation unit (movement direction calculation means)
26 Narrowing unit (narrowing means)
27 Association processing unit (association processing means)
28 Tracking result presentation unit (result presentation means)
29 Output unit
30 Captured image
31 Invalid area
32 Invalid area setting unit (invalid area setting means)
Claims (10)
- 1. A tracking processing device for tracking a moving object between a plurality of cameras using images captured by the cameras, the tracking processing device comprising: storage means for storing, with respect to each camera, a plurality of pieces of in-camera tracking information including image information of the moving object acquired from the captured images; narrowing means for extracting inter-camera tracking information to be used for tracking processing of the moving object between the plurality of cameras by narrowing down the plurality of pieces of in-camera tracking information; and association processing means for associating the moving object in the captured images between the plurality of cameras based on the inter-camera tracking information.
- 2. The tracking processing device according to claim 1, further comprising similarity calculation means for calculating mutual similarities of the plurality of pieces of in-camera tracking information with respect to the image information of the moving object, wherein the narrowing means performs the narrowing by excluding in-camera tracking information whose similarity is relatively low among the plurality of pieces of in-camera tracking information.
- 3. The tracking processing device according to claim 1 or 2, further comprising invalid area setting means for setting, with respect to each camera, an invalid area in the captured image, wherein the narrowing means performs the narrowing by excluding, from the plurality of pieces of in-camera tracking information, in-camera tracking information relating to the moving object located in the invalid area.
- 4. The tracking processing device according to claim 3, wherein the invalid area can be reset by a user.
- 5. The tracking processing device according to claim 3 or 4, wherein the invalid area setting means sets the invalid area based on position information of the moving object relating to the in-camera tracking information previously excluded by the narrowing means.
- 6. The tracking processing device according to any one of claims 1 to 5, further comprising operation mode selection means for selecting, with respect to the processing of the association processing means, between a first operation mode that prioritizes processing accuracy and a second operation mode that prioritizes processing speed, wherein the association processing means associates the moving object based on the inter-camera tracking information when the first operation mode is selected, and associates the moving object based on the in-camera tracking information when the second operation mode is selected.
- 7. The tracking processing device according to any one of claims 1 to 6, further comprising movement direction calculation means for calculating a movement direction vector of the moving object based on position information of the moving object in the plurality of pieces of in-camera tracking information, wherein the narrowing means extracts the inter-camera tracking information based on a degree of coincidence of angles of the movement direction vectors between the plurality of cameras.
- 8. The tracking processing device according to any one of claims 1 to 7, further comprising result presentation means for presenting a result of association of the moving object by the association processing means to a user and having the user determine whether the association result is appropriate.
- 9. A tracking processing system comprising: the tracking processing device according to any one of claims 1 to 8; the plurality of cameras; and an in-camera tracking device that generates the in-camera tracking information.
- 10. A tracking processing method for tracking a moving object between a plurality of cameras using images captured by the cameras, the method comprising: a tracking information acquisition step of acquiring, with respect to each camera, a plurality of pieces of in-camera tracking information including image information of the moving object acquired from the captured images; a tracking information narrowing step of extracting inter-camera tracking information to be used for tracking processing of the moving object between the plurality of cameras by narrowing down the plurality of pieces of in-camera tracking information; and an association processing step of associating the moving object in the captured images between the plurality of cameras based on the inter-camera tracking information.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/780,086 US10445887B2 (en) | 2013-03-27 | 2014-03-04 | Tracking processing device and tracking processing system provided with same, and tracking processing method |
GB1516981.6A GB2529943B (en) | 2013-03-27 | 2014-03-04 | Tracking processing device and tracking processing system provided with same, and tracking processing method |
DE112014001658.6T DE112014001658T5 (de) | 2013-03-27 | 2014-03-04 | Nachverfolgungsverarbeitungsvorrichtung und Nachverfolgungsverarbeitungssystem, das damit ausgestattet ist, und Nachverfolgungsverarbeitungsverfahren |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013066294A JP6273685B2 (ja) | 2013-03-27 | 2013-03-27 | Tracking processing device, tracking processing system provided with same, and tracking processing method |
JP2013-066294 | 2013-03-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014155979A1 (ja) | 2014-10-02 |
Family
ID=51623000
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/001174 WO2014155979A1 (ja) | 2013-03-27 | 2014-03-04 | 追尾処理装置及びこれを備えた追尾処理システム並びに追尾処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10445887B2 (ja) |
JP (1) | JP6273685B2 (ja) |
DE (1) | DE112014001658T5 (ja) |
GB (1) | GB2529943B (ja) |
WO (1) | WO2014155979A1 (ja) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5834249B2 (ja) | 2013-11-20 | 2015-12-16 | Panasonic Intellectual Property Management Co., Ltd. | Person movement analysis device, person movement analysis system, and person movement analysis method |
US20160294960A1 (en) * | 2014-03-30 | 2016-10-06 | Gary Stephen Shuster | Systems, Devices And Methods For Person And Object Tracking And Data Exchange |
JP5999394B2 (ja) | 2015-02-20 | 2016-09-28 | Panasonic Intellectual Property Management Co., Ltd. | Tracking support device, tracking support system, and tracking support method |
JP6492746B2 (ja) | 2015-02-23 | 2019-04-03 | Fujitsu Limited | Image processing program, image processing device, and image processing method |
EP3267395B1 (en) * | 2015-03-04 | 2019-08-28 | Panasonic Intellectual Property Management Co., Ltd. | Person tracking method and person tracking device |
JP6495705B2 (ja) * | 2015-03-23 | 2019-04-03 | Toshiba Corporation | Image processing device, image processing method, image processing program, and image processing system |
US9916496B2 (en) | 2016-03-25 | 2018-03-13 | Zero Latency PTY LTD | Systems and methods for operating a virtual reality environment using colored marker lights attached to game objects |
US10486061B2 (en) | 2016-03-25 | 2019-11-26 | Zero Latency Pty Ltd. | Interference damping for continuous game play |
US10071306B2 (en) | 2016-03-25 | 2018-09-11 | Zero Latency PTY LTD | System and method for determining orientation using tracking cameras and inertial measurements |
US10421012B2 (en) | 2016-03-25 | 2019-09-24 | Zero Latency PTY LTD | System and method for tracking using multiple slave servers and a master server |
US10717001B2 (en) | 2016-03-25 | 2020-07-21 | Zero Latency PTY LTD | System and method for saving tracked data in the game server for replay, review and training |
US10751609B2 (en) | 2016-08-12 | 2020-08-25 | Zero Latency PTY LTD | Mapping arena movements into a 3-D virtual world |
JP6659524B2 (ja) * | 2016-11-18 | 2020-03-04 | Toshiba Corporation | Moving object tracking device, display device, and moving object tracking method |
JP6833617B2 (ja) * | 2017-05-29 | 2021-02-24 | Toshiba Corporation | Moving object tracking device, moving object tracking method, and program |
JP6938270B2 (ja) * | 2017-08-09 | 2021-09-22 | Canon Inc. | Information processing device and information processing method |
JP6412998B1 (ja) * | 2017-09-29 | 2018-10-24 | Qoncept, Inc. | Moving object tracking device, moving object tracking method, and moving object tracking program |
WO2020019356A1 (zh) * | 2018-07-27 | 2020-01-30 | 华为技术有限公司 | 一种终端切换摄像头的方法及终端 |
US10740637B2 (en) * | 2018-09-18 | 2020-08-11 | Yoti Holding Limited | Anti-spoofing |
WO2020174566A1 (ja) * | 2019-02-26 | 2020-09-03 | NEC Corporation | Monitoring device, tracking method, and non-transitory computer-readable medium |
CN110232706B (zh) * | 2019-06-12 | 2022-07-29 | 睿魔智能科技(深圳)有限公司 | Multi-person follow-shooting method, device, equipment, and storage medium |
JP7491321B2 (ja) | 2020-02-03 | 2024-05-28 | Konica Minolta, Inc. | Re-identification device, re-identification program, and re-identification method |
CN112200841B (zh) * | 2020-09-30 | 2021-08-27 | 杭州海宴科技有限公司 | Cross-domain multi-camera tracking method and device based on pedestrian posture |
US12033390B2 (en) * | 2021-10-26 | 2024-07-09 | Hitachi, Ltd. | Method and apparatus for people flow analysis with inflow estimation |
WO2024071587A1 (ko) * | 2022-09-29 | 2024-04-04 | Samsung Electronics Co., Ltd. | Method and electronic device for tracking an object |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0583712A (ja) * | 1991-09-20 | 1993-04-02 | Hitachi Ltd | Image processing system and monitoring system |
JPH06325180A (ja) * | 1993-05-14 | 1994-11-25 | Matsushita Electric Ind Co Ltd | Automatic tracking device for moving objects |
JP2003087771A (ja) * | 2001-09-07 | 2003-03-20 | Oki Electric Ind Co Ltd | Monitoring system and method |
JP2006093955A (ja) * | 2004-09-22 | 2006-04-06 | Matsushita Electric Ind Co Ltd | Video processing device |
JP2007135093A (ja) * | 2005-11-11 | 2007-05-31 | Sony Corp | Video surveillance system and method |
JP2008219570A (ja) * | 2007-03-06 | 2008-09-18 | Matsushita Electric Ind Co Ltd | Inter-camera connection relation information generating device |
JP2011193187A (ja) * | 2010-03-15 | 2011-09-29 | Omron Corp | Monitoring camera terminal |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150297949A1 (en) * | 2007-06-12 | 2015-10-22 | Intheplay, Inc. | Automatic sports broadcasting system |
JP4007899B2 (ja) * | 2002-11-07 | 2007-11-14 | Olympus Corporation | Motion detection device |
JP4700477B2 (ja) * | 2005-11-15 | 2011-06-15 | Hitachi, Ltd. | Moving object monitoring system and moving object feature quantity calculation device |
JP4241763B2 (ja) * | 2006-05-29 | 2009-03-18 | Toshiba Corporation | Person recognition device and method |
JP2008035301A (ja) * | 2006-07-31 | 2008-02-14 | Hitachi Ltd | Moving object tracking device |
WO2009076182A1 (en) * | 2007-12-13 | 2009-06-18 | Clemson University | Vision based real time traffic monitoring |
DE112009000480T5 (de) * | 2008-03-03 | 2011-04-07 | VideoIQ, Inc., Bedford | Dynamische Objektklassifikation |
JP5180733B2 (ja) | 2008-08-19 | 2013-04-10 | Secom Co., Ltd. | Moving object tracking device |
JP5634266B2 (ja) | 2008-10-17 | 2014-12-03 | Panasonic Corporation | Flow line creation system, flow line creation device, and flow line creation method |
JP5499853B2 (ja) * | 2010-04-08 | 2014-05-21 | Nikon Corporation | Electronic camera |
GB2515926B (en) * | 2010-07-19 | 2015-02-11 | Ipsotek Ltd | Apparatus, system and method |
TWI452540B (zh) * | 2010-12-09 | 2014-09-11 | Ind Tech Res Inst | Image-based traffic parameter detection system and method, and computer program product |
JP2012163940A (ja) | 2011-01-17 | 2012-08-30 | Ricoh Co Ltd | Imaging device, imaging method, and imaging program |
US9781336B2 (en) | 2012-01-30 | 2017-10-03 | Panasonic Intellectual Property Management Co., Ltd. | Optimum camera setting device and optimum camera setting method |
JPWO2014050432A1 (ja) | 2012-09-27 | 2016-08-22 | 日本電気株式会社 | 情報処理システム、情報処理方法及びプログラム |
JP6406241B2 (ja) | 2013-02-15 | 2018-10-17 | 日本電気株式会社 | 情報処理システム、情報処理方法及びプログラム |
JP5506990B1 (ja) | 2013-07-11 | 2014-05-28 | パナソニック株式会社 | 追跡支援装置、追跡支援システムおよび追跡支援方法 |
JP5506989B1 (ja) | 2013-07-11 | 2014-05-28 | パナソニック株式会社 | 追跡支援装置、追跡支援システムおよび追跡支援方法 |
- 2013
- 2013-03-27 JP JP2013066294A patent/JP6273685B2/ja active Active
- 2014
- 2014-03-04 US US14/780,086 patent/US10445887B2/en active Active
- 2014-03-04 DE DE112014001658.6T patent/DE112014001658T5/de active Pending
- 2014-03-04 GB GB1516981.6A patent/GB2529943B/en active Active
- 2014-03-04 WO PCT/JP2014/001174 patent/WO2014155979A1/ja active Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108370412A (zh) * | 2015-12-02 | 2018-08-03 | Sony Corporation | Control device, control method, and program |
US11025808B2 (en) | 2015-12-02 | 2021-06-01 | Sony Corporartion | Control apparatus and control method |
CN109345748A (zh) * | 2018-10-31 | 2019-02-15 | 北京锐安科技有限公司 | User equipment association method, apparatus, server, detection device, and medium |
TWI688924B (zh) * | 2019-04-15 | 2020-03-21 | 勝品電通股份有限公司 | Tracking, identification, and monitoring system |
CN115984318A (zh) * | 2023-03-20 | 2023-04-18 | 宝略科技(浙江)有限公司 | Cross-camera pedestrian tracking method based on maximum feature association probability |
CN115984318B (zh) * | 2023-03-20 | 2023-06-13 | 宝略科技(浙江)有限公司 | Cross-camera pedestrian tracking method based on maximum feature association probability |
Also Published As
Publication number | Publication date |
---|---|
US10445887B2 (en) | 2019-10-15 |
GB2529943A (en) | 2016-03-09 |
JP2014192700A (ja) | 2014-10-06 |
JP6273685B2 (ja) | 2018-02-07 |
DE112014001658T5 (de) | 2016-01-21 |
US20160063731A1 (en) | 2016-03-03 |
GB201516981D0 (en) | 2015-11-11 |
GB2529943B (en) | 2019-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6273685B2 (ja) | Tracking processing device, tracking processing system provided with same, and tracking processing method | |
JP6806188B2 (ja) | Information processing system, information processing method, and program | |
JP6555906B2 (ja) | Information processing apparatus, information processing method, and program | |
US10691947B2 (en) | Monitoring device | |
JP5484184B2 (ja) | Image processing apparatus, image processing method, and program | |
CN108198199B (zh) | Moving object tracking method, moving object tracking apparatus, and electronic device | |
JP6579950B2 (ja) | Image analysis device, program, and method for detecting a person appearing in a captured image of a camera | |
TWI438702B (zh) | Dynamic setting method for image environment boundaries and real-time determination method for person activity content | |
KR102144394B1 (ko) | Image registration device and image registration method using same | |
JP6803525B2 (ja) | Face detection device, face detection system provided with same, and face detection method | |
US10762372B2 (en) | Image processing apparatus and control method therefor | |
JP7188240B2 (ja) | Human detection device and human detection method | |
JP2010057105A (ja) | Method and system for three-dimensional tracking of objects | |
JP2015194901A (ja) | Tracking device and tracking system | |
JP2020106970A (ja) | Human detection device and human detection method | |
US20230419500A1 (en) | Information processing device and information processing method | |
KR20080079506A (ko) | Photographing apparatus and object tracking method thereof | |
Hadi et al. | Fusion of thermal and depth images for occlusion handling for human detection from mobile robot | |
Liu et al. | Visualization of cross-view multi-object tracking for surveillance videos in crossroad | |
JP6698058B2 (ja) | Image processing device | |
JP6548306B2 (ja) | Image analysis device, program, and method for tracking a person appearing in a captured image of a camera | |
Chang et al. | Automatic cooperative camera system for real-time bag detection in visual surveillance | |
Wen et al. | Feature-level image fusion for SAR and optical images | |
JP7359306B2 (ja) | Tracking device, tracking system, tracking method, and program | |
JP2021033343A (ja) | Person detection device, method, and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14773970; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 1516981; Country of ref document: GB; Kind code of ref document: A; Free format text: PCT FILING DATE = 20140304 |
| WWE | Wipo information: entry into national phase | Ref document number: 1516981.6; Country of ref document: GB; Ref document number: 14780086; Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 1120140016586; Country of ref document: DE; Ref document number: 112014001658; Country of ref document: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 14773970; Country of ref document: EP; Kind code of ref document: A1 |