WO2013183738A1 - Information processing device, information processing method, program, and surveillance camera system - Google Patents
Information processing device, information processing method, program, and surveillance camera system
- Publication number
- WO2013183738A1 PCT/JP2013/065758 JP2013065758W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- map
- conversion
- person
- camera
- area
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7847—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
- G06F16/786—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using motion, e.g. object motion or camera motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/188—Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- The present technology relates to an information processing apparatus, an information processing method, a program, and a surveillance camera system, and more particularly to an information processing apparatus that performs processing for displaying, on a map, the position of an object captured by a camera.
- Patent Document 1 proposes that the user manually input pairs of a point on the camera image and the corresponding point on the map, and that the parameters of the conversion formula be calculated from this information.
- The purpose of this technology is to reduce the user's effort and improve usability.
- An information processing apparatus includes: a conversion unit that converts the position of an object on a camera image into a position on a map using a conversion formula; a display unit that displays the position of the object on the map based on the conversion result; an area setting unit that sets the possible area of the object on the map; and a parameter determination unit that determines the parameters of the conversion formula, based on the conversion results at each time within a predetermined period and the set possible area of the object, so that the trajectory of the object on the map falls within the possible area.
- The position of an object on the camera image, for example a person, is converted into a position on the map by the conversion unit using the conversion formula.
- the display unit displays the position of the object on the map based on the conversion result. In this case, for example, an icon indicating the object is displayed at the position of the object on the map.
- The area where the object can exist on the map is set by the area setting unit. This setting is performed based on, for example, the user's designation of the area on the map where the object can exist.
- The parameter determination unit determines the parameters of the conversion formula. That is, the parameters of the conversion formula are determined, based on the conversion results at each time within the predetermined period and the set possible area of the object, so that the trajectory of the object on the map falls within the possible area.
- When there are a plurality of objects, the parameter determination unit obtains and uses a trajectory on the map for each object.
- For example, the object is a person, and a person extraction unit that extracts the person from the camera image and obtains the position of the person is further provided.
- an initial value setting unit that sets initial values of parameters of the conversion formula based on the user input value and the fixed value is further provided.
- In this technology, the parameters of the conversion formula are determined so that the trajectory of the object on the map falls within the possible area designated by the user. The parameters can therefore be determined with little burden on the user, reducing the user's effort and improving usability.
- The parameter determination unit may determine the parameters of the conversion formula at fixed time intervals, based on the conversion results of the immediately preceding fixed period and the set possible area of the object.
- In this case, the parameters of the conversion formula can be updated to more optimal values at regular intervals, making it possible to cope with changes over time caused by various factors.
- The parameter determination unit may also determine the parameters of the conversion formula so that the trajectory of the object on the map lies farther from the boundary of the set possible area of the object. This allows the parameters of the conversion formula to be determined more optimally.
- The parameter determination unit may further determine the parameters of the conversion formula so that the moving speed of the object on the map is constant. This also allows the parameters of the conversion formula to be determined more optimally.
- FIG. 1 shows an example of a surveillance camera system 10 as an embodiment.
- the monitoring camera system 10 includes a monitoring camera 11, a personal computer (PC) 12 as an information processing apparatus that processes a captured image of the monitoring camera 11, and a monitor 13.
- The surveillance camera system 10 is a system that shows, on the map displayed on the monitor 13, the position of the person 21 captured by the surveillance camera 11.
- The surveillance camera system 10 is intended to assist the security staff 22 who monitor the captured images of the surveillance camera 11, allowing them to grasp where on the map the person 21 captured by the surveillance camera 11 is located.
- the map means, for example, a view of the space where the surveillance camera 11 is installed as viewed from above.
- this map is a sketch of the floor of the building where the surveillance camera 11 is installed, as shown.
- A camera icon is displayed at the installation position of the surveillance camera 11, and a person icon is displayed at the position of the person 21.
- The personal computer (hereinafter simply referred to as "computer") 12 calculates the position of the person 21 on the map from the position, on the camera image, of the person 21 captured by the surveillance camera 11.
- Expressions (1) and (2) below are expressions for calculating the position (u, v) on the map from the position (x, y) of the person 21 on the camera image.
- Equation (3) shows a specific example of the functional form; however, this is only an example, and the present technology is not limited to it.
- In this function, (W, H) are known quantities, while (X, Y, Z, θ, φ, η, f, s) are variables determined by the installation state of the camera.
- These eight variables are the parameters of the conversion formula and are hereinafter referred to as "camera parameters". To calculate the position (u, v) of the person 21 on the map from the person's position (x, y) on the camera image, the computer 12 executes a process of determining these camera parameters at regular time intervals.
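The specific form of equation (3) is not reproduced in this text, so the following is only an illustrative sketch of what a conversion with these eight camera parameters can look like: a pinhole camera at map position (X, Y) and height Z, with yaw θ, depression angle φ, roll η, focal length f in pixels, and map scale s, back-projecting a pixel onto the ground plane. Treating (W, H) as the image width and height, and all of the geometry below, are assumptions for illustration, not the patent's actual formula.

```python
import math

def camera_to_map(x, y, params, W=640, H=480):
    """Illustrative camera-image-to-map conversion (u, v) = F(x, y; params).

    params = (X, Y, Z, theta, phi, eta, f, s): camera position on the map,
    installation height, yaw, depression angle, roll, focal length in
    pixels, and map scale. Returns None when the pixel ray never
    reaches the floor plane.
    """
    X, Y, Z, theta, phi, eta, f, s = params
    # Pixel offset from the image centre, de-rotated by the roll angle eta.
    dx0, dy0 = x - W / 2.0, y - H / 2.0
    dx = dx0 * math.cos(eta) - dy0 * math.sin(eta)
    dy = dx0 * math.sin(eta) + dy0 * math.cos(eta)
    # Camera axes in world coordinates (x east, y north, z up).
    st, ct = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    fwd = (st * cp, ct * cp, -sp)       # optical axis, tilted down by phi
    right = (ct, -st, 0.0)              # image x axis
    down = (-sp * st, -sp * ct, -cp)    # image y axis
    # World-space ray through the pixel.
    ray = tuple(dx * r + dy * d + f * w for r, d, w in zip(right, down, fwd))
    if ray[2] >= 0:
        return None                     # ray points at or above the horizon
    t = Z / -ray[2]                     # intersection with the plane z = 0
    return (X + s * t * ray[0], Y + s * t * ray[1])
```

With a camera looking straight down (φ = π/2) from height Z, the image centre maps to the camera's own map position, which is a quick sanity check for any conversion of this kind.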
- The point of this technology is that the user designates the possible area of the person 21 on the map in advance. For example, when an office floor plan as shown in FIG. 3A is used as the map, the user designates in advance, as shown in FIG. 3B, the area that is visible from the camera and in which a person can exist; in this example, the corridor. In FIG. 3B, the hatched area is this area. The interiors of the rooms are not visible from the camera and are therefore not included. The user designates the area using a drawing tool or the like.
- The person 21 is detected from the camera image at each time and the detected positions are recorded sequentially, thereby acquiring the trajectories, on the camera image, of the persons captured by the camera in the past.
- The camera parameters are obtained from the information on the person's possible area and the information on past person trajectories.
- The camera parameters are obtained by quantifying how well the person's trajectory fits inside the possible area and how far it protrudes from it, and using the result as an evaluation function.
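The possible area designated with the drawing tool can be represented as one or more polygons, and a converted map position can then be tested for containment. The polygon representation and the even-odd ray-casting test below are assumptions for illustration; the patent does not specify how the area is stored.

```python
def point_in_polygon(u, v, polygon):
    """Even-odd ray casting: is map point (u, v) inside the polygon?

    polygon: list of (x, y) vertices in order; the last vertex is
    implicitly connected back to the first.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray from (u, v) cross this edge?
        if (y1 > v) != (y2 > v):
            x_cross = x1 + (v - y1) * (x2 - x1) / (y2 - y1)
            if u < x_cross:
                inside = not inside
    return inside

def in_possible_area(u, v, corridors):
    """True if (u, v) lies in any of the designated corridor polygons."""
    return any(point_in_polygon(u, v, poly) for poly in corridors)
```

A trajectory point that fails this test is one that "protrudes" from the possible area in the sense described above.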
- FIG. 7 shows a functional block diagram of the monitoring camera 11 and the computer 12 constituting the monitoring camera system 10 shown in FIG.
- the surveillance camera 11 includes an imaging unit 111 and an image data transmission unit 112.
- the imaging unit 111 includes a CMOS image sensor, a CCD image sensor, and the like, and obtains a captured image.
- the image data transmission unit 112 transmits the captured image (camera image) to the computer 12 as an information processing apparatus by wireless or wired.
- The computer 12 includes an image data receiving unit 121, a person position extraction unit 122, an ID assigning unit 123, a person position recording unit 124, a trajectory information holding unit 125, an on-map position calculation unit 126, a camera parameter holding unit 127, and a camera parameter update unit 128. These units are provided for each camera.
- the computer 12 includes a map position display unit 131, a possible area information input unit 132, and a possible area information holding unit 133. These units are common to the cameras.
- the image data receiving unit 121 receives a captured image (camera image) sent from the monitoring camera 11.
- The person position extraction unit 122 extracts the position (x, y) of a person shown in the camera image.
- the ID assigning unit 123 assigns an ID (serial number) for each person to the extracted person position.
- the person position recording unit 124 records information on the person position to which the ID is assigned in the trajectory information holding unit 125.
- the information of person positions having the same ID is collectively referred to as “trajectory”.
- the camera parameter update unit 128 obtains and updates camera parameters based on the information about the trajectories held in the trajectory information holding unit 125 and the possible area of the person 21 at regular time intervals.
- the camera parameter holding unit 127 holds camera parameters updated at regular intervals.
- the on-map position calculation unit 126 calculates a position (u, v) on the map from the extracted person position (x, y) using the stored parameters of the conversion formula.
- the map position display unit 131 indicates the presence of the person 21 at the calculated map position (u, v) on the map displayed on the screen of the monitor 13. For example, an icon indicating the person 21 is displayed at the position (u, v) on the map.
- The possible area information input unit 132 is an input unit through which the user designates the area on the map where the person 21 can exist. The possible area of the person 21 is set based on this user designation, and its information is held in the possible area information holding unit 133.
- Steps ST5 to ST12 are performed in parallel for each camera.
- In step ST2, the computer 12 prepares image data of a map (a plan view seen from above) of the area where the surveillance camera 11 is installed and displays it on the screen of the monitor 13.
- In step ST3, the computer 12 sets the area on the map that is visible from the surveillance camera and in which a person can exist (the possible area). The computer 12 performs this setting based on the user's area designation input.
- step ST4 the computer 12 sets initial values of camera parameters.
- As initial values, the approximate position of the camera on the map, its installation height, direction, depression angle, rotation angle, and scale are set for each camera. Since the position of the camera on the map and the direction of the camera differ from camera to camera, the user is asked, for example, to specify approximate values.
- For the remaining parameters, general values may be assigned in advance to each camera as fixed values.
- In step ST5, the computer 12 acquires a camera image (captured image).
- In step ST6, when the person 21 appears in the camera image, the computer 12 extracts the position (x, y) of the person 21 on the image. When there are a plurality of persons 21, the positions of all of them are obtained.
- the position refers to the position of the foot of the person 21.
- With the position of the upper-left corner of the person's bounding rectangle on the camera image denoted (xul, yul) and the position of the lower-right corner denoted (xlr, ylr), x and y are given by the following equations (4) and (5):
- x = (xul + xlr) / 2 … (4)
- y = ylr … (5)
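Taking the foot position as the midpoint of the bounding box's bottom edge, the extraction can be sketched minimally as:

```python
def foot_position(xul, yul, xlr, ylr):
    """Foot position (x, y) of a person whose bounding rectangle on the
    camera image has upper-left corner (xul, yul) and lower-right corner
    (xlr, ylr): the centre of the bottom edge, assumed to be the point
    where the person touches the floor."""
    x = (xul + xlr) / 2.0   # horizontal midpoint of the box
    y = float(ylr)          # bottom edge of the box
    return x, y
```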
- A method using background subtraction is described, for example, in the non-patent document "Adaptive Background Mixture Models for Real-Time Tracking", C. Stauffer and W. E. L. Grimson, Computer Vision and Pattern Recognition, 1999.
- A method that detects and extracts person-specific features learned in advance is described, for example, in the non-patent document "Histograms of Oriented Gradients for Human Detection", N. Dalal and B. Triggs, Computer Vision and Pattern Recognition, 2005.
- In step ST7, the computer 12 assigns person IDs. That is, an ID (serial number) is assigned to each person position extracted in step ST6. For example, immediately after processing starts, if one person position has been extracted, the number "1" is assigned; if three have been extracted, the numbers "1", "2", and "3" are assigned.
- This ID assignment amounts to tracking. It is performed so that the positions of the same person at each time can be handled collectively in a later step as that person's trajectory. That is, for a person position extracted from the camera image at the current time, if among the person positions extracted at the previous time there is one considered to belong to the same person, the same ID as that previous position is assigned.
- the following procedure is performed for each person position extracted from the camera image at the current time.
- Once a match is found, the assignment process for that person position ends here, and the subsequent steps of the procedure are not performed.
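A minimal sketch of this matching is greedy nearest-neighbour assignment between frames. The distance threshold and the greedy matching order are assumptions for illustration; the patent does not spell out the matching rule.

```python
import math

def assign_ids(current_positions, previous, next_id, max_dist=50.0):
    """Greedy nearest-neighbour person-ID assignment (simple tracking).

    previous: {id: (x, y)} person positions from the previous frame.
    next_id:  first unused serial number.
    A detection within max_dist of an unmatched previous position keeps
    that ID; otherwise it receives a fresh serial number.
    Returns (assigned, next_id).
    """
    assigned = {}
    unused = dict(previous)
    for (x, y) in current_positions:
        best_id, best_d = None, max_dist
        for pid, (px, py) in unused.items():
            d = math.hypot(x - px, y - py)
            if d < best_d:
                best_id, best_d = pid, d
        if best_id is None:
            best_id, next_id = next_id, next_id + 1   # new person
        else:
            del unused[best_id]   # each previous ID is matched at most once
        assigned[best_id] = (x, y)
    return assigned, next_id
```

Positions sharing an ID across frames are then collected into the "trajectory" used in step ST8 and later.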
- step ST8 the computer 12 records the extracted person position (x, y) information on the hard disk of the computer 12 together with the time and ID.
- the information on the person position having the same ID is collectively referred to as a locus.
- In step ST9, the computer 12 calculates the position (u, v) of the person 21 on the map from the position (x, y) of the person 21 on the camera image, using the conversion formulas described above (see equations (1) and (2)).
- When there are a plurality of person positions, the calculation is performed independently for each position.
- In step ST10, based on the calculated position of the person 21 on the map, the computer 12 displays a person icon indicating the person at the corresponding position on the map image, as shown in the figure.
- The computer 12 then determines whether a certain time has passed since the previous camera parameter update.
- This fixed time is a preset amount, for example half a day, a day, or a week.
- If the fixed time has not elapsed, the computer 12 returns to step ST5 and repeats the above processing at fixed intervals, for example every frame or every few frames. If the fixed time has elapsed, the computer 12 performs the camera parameter update process in step ST12 and then returns to step ST5.
- the camera parameter update process is a process for obtaining a camera parameter that maximizes the value of the following formula (6) and replacing the current camera parameter with the obtained camera parameter.
- The computer 12 searches for the camera parameters that maximize the value of equation (6) using a general optimization method, for example the steepest descent method (hill climbing), the quasi-Newton method, or the Levenberg-Marquardt method.
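As a concrete illustration of the first of these methods, a coordinate-wise hill-climbing maximiser can be sketched as follows. This is a generic optimiser under assumed step-size and convergence settings, not the patent's specific implementation; `evaluate` stands in for the value of equation (6).

```python
def hill_climb(evaluate, params, step=0.1, iters=200):
    """Maximize evaluate(params) by coordinate-wise hill climbing.

    Tries nudging each parameter up and down by `step`, keeps any
    improvement, and halves the step when no move helps.
    Returns (best_params, best_score).
    """
    params = list(params)
    best = evaluate(params)
    for _ in range(iters):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = list(params)
                trial[i] += delta
                score = evaluate(trial)
                if score > best:
                    params, best, improved = trial, score, True
        if not improved:
            step *= 0.5          # refine the step size when stuck
            if step < 1e-6:
                break
    return params, best
```

In the update process, `params` would be seeded with the current camera parameters H so that the search starts from the previously working values.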
- the flowchart of FIG. 11 shows the procedure of the camera parameter update process.
- The computer 12 sets the current camera parameters H as H0.
- In step ST22, the computer 12 obtains the camera parameters Ĥ that maximize the value of equation (6).
- The computer 12 then replaces the current camera parameters H with the Ĥ obtained in step ST22.
- The computer 12 calculates p(H) in equation (6) as shown in the following equation (7).
- p(H) = N(X0, σX²) · N(Y0, σY²) · N(Z0, σZ²) · N(θ0, σθ²) · N(φ0, σφ²) · N(η0, ση²) · N(f0, σf²) · N(s0, σs²) … (7)
- N(μ, σ²) represents a normal distribution with mean μ and variance σ².
- N(X0, σX²) is as shown in the following equation (8).
- The variance of each parameter (σX², σY², σZ², σθ², σφ², ση², σf², σs²) is set in advance according to its characteristics (for example, for (X, Y, Z), in view of the typical variation of camera positions input by users, and for the depression angle and rotation angle, in view of the ranges typical of surveillance cameras).
- The computer 12 calculates p(Pi|H) as shown in the following equation (9).
- p(Pi|H) = E1(Pi, H) · E2(Pi, H) … (9)
- E 1 (Pi, H) is a function for evaluating the degree of fit of the locus Pi with respect to the person's possible area, and is calculated as shown in the following equation (10).
- L is the total number of points (person positions) constituting the i-th trajectory.
- (xij, yij) represents the coordinates of the person position at the j-th time in the i-th trajectory.
- d min represents the shortest distance from the point (u, v) to the boundary of the person's possible area as shown in FIG.
- E1(Pi, H) gives a higher evaluation the farther inside the person's possible area the trajectory lies. This is based on the premise that, in general, more people walk in the center of a passage than at its edges.
- E2(Pi, H) is a function that evaluates the constancy of walking speed along the trajectory Pi, giving a higher evaluation the more constant the distances between adjacent trajectory points are.
- E2(Pi, H) is calculated as shown in the following equation (11). In this case, the variance of the distances between adjacent points is used. This is based on the premise that a person normally walks at a roughly constant speed, that is, with little variation.
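Equations (10) and (11) are not reproduced in this extracted text, so the following is only an illustrative sketch of evaluation functions with the stated properties: `e1` rewards trajectories that stay deep inside the possible area, via a caller-supplied shortest-distance function `dmin` (negative outside the area), and `e2` rewards constant step lengths via the variance of distances between adjacent points. The exact functional forms here are assumptions.

```python
import math

def e1(trajectory, dmin):
    """Fit of a trajectory to the possible area: the mean shortest
    distance dmin(u, v) from each point to the area boundary. Larger
    means the trajectory runs deeper inside the area."""
    return sum(dmin(u, v) for (u, v) in trajectory) / len(trajectory)

def e2(trajectory):
    """Constancy of walking speed: maps the variance of distances
    between adjacent trajectory points into (0, 1]; 1 means perfectly
    constant step length."""
    steps = [math.hypot(u2 - u1, v2 - v1)
             for (u1, v1), (u2, v2) in zip(trajectory, trajectory[1:])]
    mean = sum(steps) / len(steps)
    var = sum((d - mean) ** 2 for d in steps) / len(steps)
    return math.exp(-var)
```

A trajectory running along the corridor center at an even pace then scores higher on both terms than one hugging the boundary or moving in uneven jumps.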
- p(Pi|H) = E1(Pi, H) · E2(Pi, H) · E3(Pi, H) … (12)
- N is the total number of person positions accumulated for a certain time.
- (xj, yj) are the coordinates of the j-th person position.
- Equation (13) allows a slight error in the extraction of the person position.
- Unlike equation (6), equation (13) does not include an evaluation of walking speed.
- Using equation (13) therefore simplifies the evaluation compared with equation (6); the emphasis is on ease of implementation and lightness of processing. Only the momentary positions of the persons extracted during a certain past period are needed, and no "trajectory" is required. That is, the person-ID assignment of step ST7 in the flowchart of FIG. 8 becomes unnecessary, which makes implementation easier and processing correspondingly lighter.
- As described above, the camera parameters are determined so that the positions (trajectories) of persons on the map fall within the possible area designated by the user. That is, the user only has to designate the possible area, so the camera parameters can be determined with little burden on the user. This reduces the user's effort and improves usability.
- The camera parameters are also determined and updated at regular time intervals, based on the conversion results of the immediately preceding period and the set possible area. The camera parameters can therefore be updated to more optimal values at regular intervals, coping with changes over time caused by various factors.
- Furthermore, the camera parameters are determined so that the positions (trajectories) of persons on the map lie farther from the boundary of the set possible area, which allows them to be determined more optimally.
- Likewise, the parameters of the conversion formula are determined so that the moving speed of persons on the map is constant, which also allows the camera parameters to be determined more optimally.
- In the embodiment described above, the object is a person, but the monitoring target is not limited to a person. Animals other than persons, or moving objects such as automobiles, motorcycles, and bicycles, can also be targets.
- The present technology can be applied similarly to a surveillance camera system in which the surveillance cameras 11 are arranged in an urban area.
- The map in that case is a view, seen from above, of the urban area in which the surveillance cameras 11 are arranged.
- The present technology can also be configured as follows.
- (1) An information processing apparatus including: a conversion unit that converts the position of an object on a camera image into a position on a map using a conversion formula; a display unit that displays the position of the object on the map based on the conversion result; an area setting unit that sets the possible area of the object on the map; and a parameter determination unit that determines the parameters of the conversion formula, based on the conversion results at each time within a predetermined period and the set possible area of the object, so that the trajectory of the object on the map falls within the possible area.
- (2) The information processing apparatus according to (1), in which the parameter determination unit determines the parameters of the conversion formula at fixed time intervals, based on the conversion results of the immediately preceding fixed period and the set possible area of the object.
- (3) The information processing apparatus according to (1) or (2), in which the parameter determination unit determines the parameters of the conversion formula so that the trajectory of the object on the map lies farther from the boundary of the set possible area of the object.
- (4) The information processing apparatus according to any one of (1) to (3), in which the parameter determination unit determines the parameters of the conversion formula so that the moving speed of the object on the map is constant.
- (5) The information processing apparatus according to any one of (1) to (4), in which, when there are a plurality of objects, the parameter determination unit obtains and uses a trajectory on the map for each object.
- (6) The information processing apparatus according to any one of (1) to (5), in which the object is a person, further including a person extraction unit that extracts the person from the camera image and obtains the position of the person.
- (7) The information processing apparatus according to any one of (1) to (6), further including an initial value setting unit that sets the initial values of the parameters of the conversion formula based on a user input value and a fixed value.
- (8) An information processing method including: a conversion step of converting the position of an object on a camera image into a position on a map using a conversion formula; a display step of displaying the position of the object on the map based on the conversion result; an area setting step of setting the possible area of the object on the map; and a parameter determination step of determining the parameters of the conversion formula, based on the conversion results at each time within a predetermined period and the set possible area of the object, so that the trajectory of the object on the map falls within the possible area.
- (9) A program that causes a computer to function as: conversion means for converting the position of an object on a camera image into a position on a map using a conversion formula; display means for displaying the position of the object on the map based on the conversion result; area setting means for setting the possible area of the object on the map; and parameter determination means for determining the parameters of the conversion formula, based on the conversion results at each time within a predetermined period and the set possible area of the object, so that the trajectory of the object on the map falls within the possible area.
- (10) A surveillance camera system including: a surveillance camera; and an information processing apparatus that processes captured images of the surveillance camera, in which the information processing apparatus includes a conversion unit that converts the position of an object on a camera image into a position on a map using a conversion formula, a display unit that displays the position of the object on the map based on the conversion result, an area setting unit that sets the possible area of the object on the map, and a parameter determination unit that determines the parameters of the conversion formula, based on the conversion results at each time within a predetermined period and the set possible area of the object, so that the trajectory of the object on the map falls within the possible area.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Library & Information Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Analysis (AREA)
- Alarm Systems (AREA)
Abstract
Description
The concept of the present technology resides in an information processing apparatus including: a conversion unit that converts the position of an object on a camera image into a position on a map using a conversion formula; a display unit that displays the position of the object on the map based on the conversion result; an area setting unit that sets the possible area of the object on the map; and a parameter determination unit that determines the parameters of the conversion formula, based on the conversion results at each time within a predetermined period and the set possible area of the object, so that the trajectory of the object on the map falls within the possible area.
1. Embodiment
2. Modifications
[Configuration example of the surveillance camera system]
FIG. 1 shows an example of a surveillance camera system 10 as an embodiment. The surveillance camera system 10 includes a surveillance camera 11, a personal computer (PC) 12 as an information processing apparatus that processes captured images of the surveillance camera 11, and a monitor 13.
x = (xul + xlr)/2 … (4)
y = ylr … (5)
p(H) = N(X0, σX²)·N(Y0, σY²)·N(Z0, σZ²)·N(θ0, σθ²)·N(φ0, σφ²)·N(η0, ση²)·N(f0, σf²)·N(s0, σs²) … (7)
p(Pi|H) = E1(Pi,H)・E2(Pi,H) ・・・(9)
p(Pi|H) = E1(Pi,H)・E2(Pi,H)・E3(Pi,H) ・・・(12)
In the embodiment described above, the object is a person, but the monitoring target is not limited to a person. Animals other than persons, or moving objects such as automobiles, motorcycles, and bicycles, can also be targets.
(1) An information processing apparatus including:
a conversion unit that converts a position of an object on a camera image into a position on a map using a conversion formula;
a display unit that displays the existing position of the object on the map on the basis of the conversion result;
an area setting unit that sets an allowable existence area of the object on the map; and
a parameter determination unit that determines parameters of the conversion formula, on the basis of the conversion results at respective times within a predetermined period and the set allowable existence area of the object, such that a trajectory of the object on the map falls within the allowable existence area.
(2) The information processing apparatus according to (1), wherein the parameter determination unit determines the parameters of the conversion formula at fixed intervals, on the basis of the conversion results of the immediately preceding fixed period and the set allowable existence area of the object.
(3) The information processing apparatus according to (1) or (2), wherein the parameter determination unit determines the parameters of the conversion formula such that the trajectory of the object on the map is located farther from the boundary of the set allowable existence area of the object.
(4) The information processing apparatus according to any one of (1) to (3), wherein the parameter determination unit determines the parameters of the conversion formula such that the moving speed of the object on the map becomes constant.
(5) The information processing apparatus according to any one of (1) to (4), wherein, when a plurality of objects exist, the parameter determination unit obtains and uses the trajectory on the map for each object.
(6) The information processing apparatus according to any one of (1) to (5), wherein the object is a person, and the apparatus further includes a person extraction unit that extracts the person from the camera image and obtains a position of the person.
(7) The information processing apparatus according to any one of (1) to (6), further including an initial value setting unit that sets initial values of the parameters of the conversion formula on the basis of a user input value and a fixed value.
(8) An information processing method including:
a conversion step of converting a position of an object on a camera image into a position on a map using a conversion formula;
a display step of displaying the existing position of the object on the map on the basis of the conversion result;
an area setting step of setting an allowable existence area of the object on the map; and
a parameter determination step of determining parameters of the conversion formula, on the basis of the conversion results at respective times within a predetermined period and the set allowable existence area of the object, such that a trajectory of the object on the map falls within the allowable existence area.
(9) A program causing a computer to function as:
conversion means for converting a position of an object on a camera image into a position on a map using a conversion formula;
display means for displaying the existing position of the object on the map on the basis of the conversion result;
area setting means for setting an allowable existence area of the object on the map; and
parameter determination means for determining parameters of the conversion formula, on the basis of the conversion results at respective times within a predetermined period and the set allowable existence area of the object, such that a trajectory of the object on the map falls within the allowable existence area.
(10) A surveillance camera system including:
a surveillance camera; and
an information processing apparatus that processes images captured by the surveillance camera,
wherein the information processing apparatus includes:
a conversion unit that converts a position of an object on a camera image into a position on a map using a conversion formula;
a display unit that displays the existing position of the object on the map on the basis of the conversion result;
an area setting unit that sets an allowable existence area of the object on the map; and
a parameter determination unit that determines parameters of the conversion formula, on the basis of the conversion results at respective times within a predetermined period and the set allowable existence area of the object, such that a trajectory of the object on the map falls within the allowable existence area.
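The parameter determination of items (1) and (3) above can be sketched as a search over candidate conversion parameters, keeping only candidates whose converted trajectory stays inside the allowable existence area and preferring the one that keeps the trajectory farthest from the area boundary. The one-dimensional scale-factor "conversion formula", the axis-aligned rectangular area, and all names below are simplifying assumptions for illustration; the patent's actual conversion uses the full camera model of equation (7):

```python
def to_map(image_pos, scale):
    """Toy conversion formula: image position -> map position.
    Stands in for the real camera-model conversion with parameters
    (X, Y, Z, theta, phi, eta, f, s)."""
    return (image_pos[0] * scale, image_pos[1] * scale)

def margin(point, area):
    """Distance from `point` to the nearest edge of an axis-aligned
    allowable area (x_min, y_min, x_max, y_max); negative if outside."""
    x, y = point
    x_min, y_min, x_max, y_max = area
    return min(x - x_min, y - y_min, x_max - x, y_max - y)

def determine_parameter(track, area, candidates):
    """Pick the candidate whose converted trajectory lies entirely inside
    the allowable area with the largest worst-case boundary margin
    (items (1) and (3)); return None if no candidate qualifies."""
    best, best_margin = None, float("-inf")
    for scale in candidates:
        worst = min(margin(to_map(p, scale), area) for p in track)
        if worst >= 0 and worst > best_margin:
            best, best_margin = scale, worst
    return best

# Image-plane track of one person; the allowable area is a region of the map.
track = [(10, 10), (12, 10), (14, 10)]
area = (0.0, 0.0, 20.0, 20.0)
print(determine_parameter(track, area, [0.5, 1.0, 2.0]))  # prints 1.0
```

Here scale 2.0 pushes part of the track outside the area and is rejected, while 1.0 beats 0.5 because its worst-case distance from the boundary is larger, mirroring the "farther from the boundary" preference of item (3).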
11 ... surveillance camera
12 ... personal computer
13 ... monitor
21 ... person
22 ... security guard
111 ... imaging unit
112 ... image data transmission unit
121 ... image data reception unit
122 ... person position extraction unit
123 ... ID assignment unit
124 ... person position recording unit
125 ... trajectory information holding unit
126 ... on-map position calculation unit
127 ... camera parameter holding unit
128 ... camera parameter update unit
131 ... on-map position display unit
132 ... allowable existence area information input unit
133 ... allowable existence area information holding unit
Claims (10)
- An information processing apparatus comprising:
a conversion unit that converts a position of an object on a camera image into a position on a map using a conversion formula;
a display unit that displays the existing position of the object on the map on the basis of the conversion result;
an area setting unit that sets an allowable existence area of the object on the map; and
a parameter determination unit that determines parameters of the conversion formula, on the basis of the conversion results at respective times within a predetermined period and the set allowable existence area of the object, such that a trajectory of the object on the map falls within the allowable existence area.
- The information processing apparatus according to claim 1, wherein the parameter determination unit determines the parameters of the conversion formula at fixed intervals, on the basis of the conversion results of the immediately preceding fixed period and the set allowable existence area of the object.
- The information processing apparatus according to claim 1, wherein the parameter determination unit determines the parameters of the conversion formula such that the trajectory of the object on the map is located farther from the boundary of the set allowable existence area of the object.
- The information processing apparatus according to claim 1, wherein the parameter determination unit determines the parameters of the conversion formula such that the moving speed of the object on the map becomes constant.
- The information processing apparatus according to claim 1, wherein, when a plurality of objects exist, the parameter determination unit obtains and uses the trajectory on the map for each object.
- The information processing apparatus according to claim 1, wherein the object is a person, and the apparatus further comprises a person extraction unit that extracts the person from the camera image and obtains a position of the person.
- The information processing apparatus according to claim 1, further comprising an initial value setting unit that sets initial values of the parameters of the conversion formula on the basis of a user input value and a fixed value.
- An information processing method comprising:
a conversion step of converting a position of an object on a camera image into a position on a map using a conversion formula;
a display step of displaying the existing position of the object on the map on the basis of the conversion result;
an area setting step of setting an allowable existence area of the object on the map; and
a parameter determination step of determining parameters of the conversion formula, on the basis of the conversion results at respective times within a predetermined period and the set allowable existence area of the object, such that a trajectory of the object on the map falls within the allowable existence area.
- A program causing a computer to function as:
conversion means for converting a position of an object on a camera image into a position on a map using a conversion formula;
display means for displaying the existing position of the object on the map on the basis of the conversion result;
area setting means for setting an allowable existence area of the object on the map; and
parameter determination means for determining parameters of the conversion formula, on the basis of the conversion results at respective times within a predetermined period and the set allowable existence area of the object, such that a trajectory of the object on the map falls within the allowable existence area.
- A surveillance camera system comprising:
a surveillance camera; and
an information processing apparatus that processes images captured by the surveillance camera,
wherein the information processing apparatus includes:
a conversion unit that converts a position of an object on a camera image into a position on a map using a conversion formula;
a display unit that displays the existing position of the object on the map on the basis of the conversion result;
an area setting unit that sets an allowable existence area of the object on the map; and
a parameter determination unit that determines parameters of the conversion formula, on the basis of the conversion results at respective times within a predetermined period and the set allowable existence area of the object, such that a trajectory of the object on the map falls within the allowable existence area.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13801420.4A EP2860970A4 (en) | 2012-06-08 | 2013-06-06 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM AND MONITORING CAMERA SYSTEM |
CN201380029066.7A CN104335577B (zh) | 2012-06-08 | 2013-06-06 | 信息处理设备、信息处理方法和监视摄像机系统 |
US14/404,113 US9886761B2 (en) | 2012-06-08 | 2013-06-06 | Information processing to display existing position of object on map |
JP2014520061A JP6206405B2 (ja) | 2012-06-08 | 2013-06-06 | 情報処理装置、情報処理方法、プログラムおよび監視カメラシステム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-131202 | 2012-06-08 | ||
JP2012131202 | 2012-06-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013183738A1 (ja) | 2013-12-12 |
Family
ID=49712125
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/065758 WO2013183738A1 (ja) | 2012-06-08 | 2013-06-06 | 情報処理装置、情報処理方法、プログラムおよび監視カメラシステム |
Country Status (5)
Country | Link |
---|---|
US (1) | US9886761B2 (ja) |
EP (1) | EP2860970A4 (ja) |
JP (1) | JP6206405B2 (ja) |
CN (1) | CN104335577B (ja) |
WO (1) | WO2013183738A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2015093330A1 (ja) * | 2013-12-17 | 2017-03-16 | シャープ株式会社 | 認識データ伝送装置、認識データ記録装置及び認識データ記録方法 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2854397B1 (en) | 2012-05-23 | 2020-12-30 | Sony Corporation | Surveillance camera administration device, surveillance camera administration method, and program |
GB201613138D0 (en) * | 2016-07-29 | 2016-09-14 | Unifai Holdings Ltd | Computer vision systems |
US10582095B2 (en) * | 2016-10-14 | 2020-03-03 | MP High Tech Solutions Pty Ltd | Imaging apparatuses and enclosures |
US11232687B2 (en) * | 2017-08-07 | 2022-01-25 | Standard Cognition, Corp | Deep learning-based shopper statuses in a cashier-less store |
US11417013B2 (en) * | 2020-10-13 | 2022-08-16 | Sensormatic Electornics, LLC | Iterative layout mapping via a stationary camera |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1149100A (ja) * | 1997-08-05 | 1999-02-23 | Mitsubishi Electric Corp | エプロン監視装置 |
WO2009110417A1 (ja) * | 2008-03-03 | 2009-09-11 | ティーオーエー株式会社 | 旋回型カメラの設置条件特定装置および方法ならびに当該設置条件特定装置を備えるカメラ制御システム |
JP2010193170A (ja) | 2009-02-18 | 2010-09-02 | Mitsubishi Electric Corp | カメラキャリブレーション装置及び監視エリア設定装置 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6816184B1 (en) * | 1998-04-30 | 2004-11-09 | Texas Instruments Incorporated | Method and apparatus for mapping a location from a video image to a map |
US7522186B2 (en) * | 2000-03-07 | 2009-04-21 | L-3 Communications Corporation | Method and apparatus for providing immersive surveillance |
KR100392727B1 (ko) * | 2001-01-09 | 2003-07-28 | 주식회사 한국씨씨에스 | 컴퓨터 원격감시 제어방식의 폐쇄회로 텔레비전 시스템,이에 이용되는 컴퓨터 비디오 매트릭스 스위처 및제어프로그램 |
JP2004062980A (ja) * | 2002-07-29 | 2004-02-26 | Toyota Gakuen | 磁性合金、磁気記録媒体、および磁気記録再生装置 |
US7263472B2 (en) * | 2004-06-28 | 2007-08-28 | Mitsubishi Electric Research Laboratories, Inc. | Hidden markov model based object tracking and similarity metrics |
WO2006012645A2 (en) * | 2004-07-28 | 2006-02-02 | Sarnoff Corporation | Method and apparatus for total situational awareness and monitoring |
US20060233461A1 (en) * | 2005-04-19 | 2006-10-19 | Honeywell International Inc. | Systems and methods for transforming 2d image domain data into a 3d dense range map |
WO2007139658A2 (en) * | 2006-05-24 | 2007-12-06 | Objectvideo, Inc. | Intelligent imagery-based sensor |
US8274564B2 (en) * | 2006-10-13 | 2012-09-25 | Fuji Xerox Co., Ltd. | Interface for browsing and viewing video from multiple cameras simultaneously that conveys spatial and temporal proximity |
DE102007001649A1 (de) * | 2007-01-11 | 2008-07-17 | Robert Bosch Gmbh | Verfahren, Vorrichtung und Computerprogramm zur Selbstkalibrierung einer Überwachungskamera |
US20080263592A1 (en) * | 2007-04-18 | 2008-10-23 | Fuji Xerox Co., Ltd. | System for video control by direct manipulation of object trails |
US8310542B2 (en) * | 2007-11-28 | 2012-11-13 | Fuji Xerox Co., Ltd. | Segmenting time based on the geographic distribution of activity in sensor data |
US9749594B2 (en) * | 2011-12-22 | 2017-08-29 | Pelco, Inc. | Transformation between image and map coordinates |
RU2531876C2 (ru) * | 2012-05-15 | 2014-10-27 | Общество с ограниченной ответственностью "Синезис" | Способ индексирования видеоданных при помощи карты |
2013
- 2013-06-06 CN CN201380029066.7A patent/CN104335577B/zh not_active Expired - Fee Related
- 2013-06-06 WO PCT/JP2013/065758 patent/WO2013183738A1/ja active Application Filing
- 2013-06-06 JP JP2014520061A patent/JP6206405B2/ja not_active Expired - Fee Related
- 2013-06-06 US US14/404,113 patent/US9886761B2/en active Active
- 2013-06-06 EP EP13801420.4A patent/EP2860970A4/en not_active Withdrawn
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1149100A (ja) * | 1997-08-05 | 1999-02-23 | Mitsubishi Electric Corp | エプロン監視装置 |
WO2009110417A1 (ja) * | 2008-03-03 | 2009-09-11 | ティーオーエー株式会社 | 旋回型カメラの設置条件特定装置および方法ならびに当該設置条件特定装置を備えるカメラ制御システム |
JP2010193170A (ja) | 2009-02-18 | 2010-09-02 | Mitsubishi Electric Corp | カメラキャリブレーション装置及び監視エリア設定装置 |
Non-Patent Citations (3)
Title |
---|
C. Stauffer; W. E. L. Grimson: "Adaptive background mixture models for real-time tracking", Computer Vision and Pattern Recognition, 1999 |
N. Dalal; B. Triggs: "Histograms of Oriented Gradients for Human Detection", Computer Vision and Pattern Recognition, 2005 |
See also references of EP2860970A4 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2015093330A1 (ja) * | 2013-12-17 | 2017-03-16 | シャープ株式会社 | 認識データ伝送装置、認識データ記録装置及び認識データ記録方法 |
US10699541B2 (en) | 2013-12-17 | 2020-06-30 | Sharp Kabushiki Kaisha | Recognition data transmission device |
Also Published As
Publication number | Publication date |
---|---|
EP2860970A4 (en) | 2016-03-30 |
JP6206405B2 (ja) | 2017-10-04 |
US9886761B2 (en) | 2018-02-06 |
JPWO2013183738A1 (ja) | 2016-02-01 |
EP2860970A1 (en) | 2015-04-15 |
CN104335577B (zh) | 2018-06-12 |
CN104335577A (zh) | 2015-02-04 |
US20150170354A1 (en) | 2015-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6206405B2 (ja) | 情報処理装置、情報処理方法、プログラムおよび監視カメラシステム | |
JP6428266B2 (ja) | 色補正装置、色補正方法および色補正用プログラム | |
JP5603403B2 (ja) | 対象物計数方法、対象物計数装置および対象物計数プログラム | |
JP6159179B2 (ja) | 画像処理装置、画像処理方法 | |
JP6141079B2 (ja) | 画像処理システム、画像処理装置、それらの制御方法、及びプログラム | |
WO2018051944A1 (ja) | 人流推定装置、人流推定方法および記録媒体 | |
KR20150021526A (ko) | 데이터베이스 생성 및 업데이트를 위한 심도 기반 추적을 이용하는 자기 학습 얼굴 인식 기법 | |
JP2008219570A (ja) | カメラ間連結関係情報生成装置 | |
JP6503079B2 (ja) | 特定人物検知システム、特定人物検知方法および検知装置 | |
TW201025193A (en) | Method for automatic detection and tracking of multiple targets with multiple cameras and system therefor | |
JP6779410B2 (ja) | 映像解析装置、映像解析方法、及びプログラム | |
JP2019049786A (ja) | 人識別システム及び人識別方法 | |
JP2017076288A (ja) | 情報処理装置、情報処理方法及びプログラム | |
WO2016031313A1 (ja) | 体調検出装置、体調検出方法及び体調検出プログラム | |
JP2010140425A (ja) | 画像処理システム | |
JP2008225704A (ja) | 作業評価装置、作業評価方法、および、制御プログラム | |
KR101469099B1 (ko) | 사람 객체 추적을 통한 자동 카메라 보정 방법 | |
KR20200134502A (ko) | 이미지 인식을 통한 3차원 인체 관절 각도 예측 방법 및 시스템 | |
JP6336935B2 (ja) | 移動物体追跡装置 | |
JP2007134845A (ja) | カメラ制御装置およびカメラ制御プログラム | |
JP5930808B2 (ja) | 画像処理装置、画像処理装置の制御方法、およびプログラム | |
JP2020095651A (ja) | 生産性評価システム、生産性評価装置、生産性評価方法、及びプログラム | |
US11216969B2 (en) | System, method, and computer-readable medium for managing position of target | |
JP2021125183A (ja) | 作業負荷分析装置、作業負荷分析方法、プログラム | |
JP2020201674A (ja) | 映像解析装置及びその制御方法及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13801420 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2014520061 Country of ref document: JP Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 2013801420 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 14404113 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |