CN104954747B - Video monitoring method and device - Google Patents

Publication number: CN104954747B (application CN201510336397.6A; earlier publication CN104954747A)
Authority: CN (China); legal status: Active; original language: Chinese (zh)
Prior art keywords: target, coordinate information, virtual door, plane, video image
Inventors: 程淼 (Cheng Miao), 潘华东 (Pan Huadong), 潘石柱 (Pan Shizhu), 张兴明 (Zhang Xingming)
Assignee (original and current): Zhejiang Dahua Technology Co Ltd
Related applications: EP16810884.3A (EP3311562A4), PCT/CN2016/082963 (WO2016202143A1), US15/737,283 (US10671857B2), US16/888,861 (US11367287B2)
Abstract

The invention provides a video monitoring method and device, relating to the field of surveillance. The video monitoring method comprises the following steps: acquiring a plane video image; determining plane coordinate information of a target in the plane video image; performing 3D reconstruction through a 3D reconstruction algorithm according to the plane coordinate information to obtain three-dimensional coordinate information of the target; and extracting an event occurrence based on the positional relationship between the target and a virtual door, wherein the virtual door includes three-dimensional coordinate information. In this way, the three-dimensional coordinate information of the target is acquired from the video image, and the positional relationship between the virtual door and the target is judged on the basis of that three-dimensional information, so that the occurrence of the event is extracted, event misjudgment caused by the perspective effect in two-dimensional images is effectively avoided, and the accuracy of event judgment is improved.

Description

Video monitoring method and device
Technical Field
The invention relates to the field of monitoring, in particular to a video monitoring method and a video monitoring device.
Background
Intelligent video behavior analysis systems have high application value in many monitored sites. The general approach is to perform background modeling on the input video, detect moving targets by comparing the background image with the image of the current frame, and then track, classify and analyze the behavior of those targets; alternatively, targets of a specified type are detected directly from the video through training and recognition, and the detected targets are tracked and analyzed. Early-warning judgments are then made on behavior events, achieving the purpose of intelligent monitoring.
In behavior analysis, tripwire detection and area intrusion detection are basic detection functions. The basic implementation is as follows: at least one line segment or area is set in the video image, a check is made for whether a moving object in the video crosses the line segment or enters/leaves the area, and an alarm is generated if such an event occurs. In tripwire detection, at least one directed line segment is set in the video image and a check is made for whether a moving target moves from one side of the line to the other; if the tripwire action occurs, an alarm event is generated. In area intrusion detection, at least one detection area is set in the video image and a check is made for whether a moving target enters the area from outside; if area intrusion occurs, an alarm event is generated.
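For reference, the conventional image-plane tripwire test can be sketched as follows (a minimal Python illustration; the function names and the simplified side-change test are our own, not taken from any cited system):

```python
import numpy as np

def side_of_line(p, a, b):
    """Sign of the 2D cross product: which side of the directed line a->b point p is on."""
    return np.sign((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]))

def crossed_tripwire(prev_pos, cur_pos, a, b):
    """Image-plane tripwire test: the target is deemed to have crossed the
    line a-b when its side changes between consecutive frames. Because this
    works purely in pixel coordinates, perspective can make a distant target
    appear to cross a nearby wire, the misjudgment discussed below."""
    return side_of_line(prev_pos, a, b) != side_of_line(cur_pos, a, b)
```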
Existing tripwire and area intrusion detection techniques judge whether to trigger the corresponding rule directly from the intersection of the target with the configured tripwire or area on the image plane. Because camera imaging has a perspective effect, a target that intersects a tripwire or area in the image has not necessarily crossed the line or entered the area in the real world, so misjudgment occurs easily and false alarms are generated.
Disclosure of Invention
The invention aims to solve the problem of event misjudgment caused by the perspective effect of a camera.
According to an aspect of the present invention, there is provided a video surveillance method, including: acquiring a plane video image; determining plane coordinate information of a target in a plane video image according to the plane video image; 3D reconstruction is carried out through a 3D reconstruction algorithm according to the plane coordinate information to obtain three-dimensional coordinate information of the target; an event occurrence is extracted based on a positional relationship of the target and a virtual door, wherein the virtual door includes three-dimensional coordinate information.
Optionally, the virtual door is a door area perpendicular to the ground, and an intersection line of the virtual door and the ground is a straight line, a line segment or a broken line.
Optionally, determining the plane coordinate information of the object in the plane video image according to the plane video image includes: comparing continuous frame plane video images or comparing the plane video images with background images to obtain change points or point groups in the plane video images; and extracting points or point groups from the change points or point groups as targets, and determining plane coordinate information of the targets according to the plane video images.
Optionally, the apparatus for acquiring the planar video image comprises one or more 2D cameras.
Optionally, performing 3D reconstruction through a 3D reconstruction algorithm according to the plane coordinate information to obtain the three-dimensional coordinate information of the target comprises: performing 3D reconstruction through the 3D reconstruction algorithm according to the plane coordinate information to obtain horizontal coordinate information of the target in three-dimensional coordinates; and extracting the event occurrence based on the positional relationship between the target and the virtual door, wherein the virtual door includes three-dimensional coordinate information, comprises: extracting the event occurrence based on the positional relationship between the target and the virtual door, wherein the virtual door includes horizontal coordinate information in three-dimensional coordinates.
Optionally, performing 3D reconstruction through a 3D reconstruction algorithm according to the plane coordinate information to obtain the three-dimensional coordinate information of the target comprises: converting the plane coordinate information of the target into horizontal coordinate information in three-dimensional coordinates according to the formula

$$\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \lambda P^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

where u and v are the plane coordinate information of the target, X and Y are the horizontal coordinate information of the target in three-dimensional coordinates, P is a conversion matrix, and λ is a distortion coefficient.
Optionally, the method further comprises: determining a motion track of a target in a plane video image according to a plurality of frame plane video images; determining three-dimensional coordinate information of a motion track of a target; and extracting event occurrence based on the motion trail of the target and the position relation of the virtual door.
Optionally, the event comprises being located inside the virtual door, being located outside the virtual door, being located in the area of the virtual door, passing through the virtual door from the outside inwards, passing through the virtual door from the inside outwards, moving from the outside inwards and not passing through the virtual door and/or moving from the inside outwards and not passing through the virtual door.
Optionally, determining a type of the object, wherein the type of the object comprises a human, an animal and/or a car.
Optionally, the method further includes sending alarm information if a predetermined event is extracted, where the alarm information includes intrusion position information and/or intrusion direction information.
Optionally, extracting the event occurrence based on the positional relationship between the target and the virtual door includes counting the number of consecutive frames in which the event is extracted, and judging that the event occurs when that number is greater than a predetermined alarm frame count.
By the method, the three-dimensional coordinate information of the target is acquired according to the plane video image, and the position relation between the virtual door and the target is judged based on the three-dimensional coordinate information of the target, so that the occurrence of the event is extracted, the event misjudgment caused by the perspective effect in the two-dimensional image is effectively avoided, and the accuracy of the event judgment is improved.
According to another aspect of the present invention, there is provided a video monitoring apparatus comprising: the video acquisition module is used for acquiring a plane video image; the target acquisition module is used for determining plane coordinate information of a target in the plane video image according to the plane video image; the three-dimensional coordinate determination module is used for carrying out 3D reconstruction through a 3D reconstruction algorithm according to the plane coordinate information to obtain three-dimensional coordinate information of the target; and the event extraction module is used for extracting event occurrence based on the position relation between the target and the virtual door, wherein the virtual door comprises three-dimensional coordinate information.
Optionally, the virtual door is a door area perpendicular to the ground, and an intersection line of the virtual door and the ground is a straight line, a line segment or a broken line.
Optionally, the target acquisition module includes: the frame comparison unit is used for comparing continuous frame plane video images or comparing the plane video images with background images to obtain change points or point groups in the plane video images; a target determination unit that extracts a point or a point group from the change point or the point group as a target; and the plane coordinate acquisition unit is used for acquiring plane coordinate information of the target through the plane video image.
Optionally, the video capture module comprises one or more 2D cameras.
Optionally, the three-dimensional coordinate determination module is further configured to: 3D reconstruction is carried out through a 3D reconstruction algorithm according to the plane coordinate information to obtain horizontal coordinate information of the target under the three-dimensional coordinate; the event extraction module is further configured to: an event occurrence is extracted based on a positional relationship of the target and a virtual door, wherein the virtual door includes horizontal coordinate information in three-dimensional coordinates.
Optionally, the three-dimensional coordinate determination module is configured to convert the plane coordinate information of the target into horizontal coordinate information in three-dimensional coordinates according to the formula

$$\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \lambda P^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

where u and v are the plane coordinate information of the target, X and Y are the horizontal coordinate information of the target in three-dimensional coordinates, P is a conversion matrix, and λ is a distortion coefficient.
Optionally, the target obtaining module further includes a motion trajectory determining unit, configured to determine a motion trajectory of a target in the video image according to the multiple frames of planar video images; the three-dimensional coordinate determination module is also used for determining the three-dimensional coordinate information of the motion trail of the target; and the event extraction module is also used for extracting event occurrence based on the motion trail of the target and the position relation of the virtual door.
Optionally, the event comprises being located inside the virtual door, being located outside the virtual door, being located in the area of the virtual door, passing through the virtual door from the outside inwards, passing through the virtual door from the inside outwards, moving from the outside inwards and not passing through the virtual door and/or moving from the inside outwards and not passing through the virtual door.
Optionally, a target type analysis module is further included for analyzing the target type, the target type including a human, an animal and/or a car.
Optionally, the system further comprises an alarm module, configured to send alarm information according to the extracted predetermined event, where the alarm information includes intrusion position information and/or intrusion direction information.
Optionally, the event extraction module is further configured to count a number of consecutive frames of the event, and determine that the event occurs when the number of frames is greater than a predetermined number of alarm frames.
By the device, the three-dimensional coordinate information of the target is acquired according to the plane video image, and the position relation between the virtual door and the target is judged based on the three-dimensional coordinate information of the target, so that an event is extracted, the event misjudgment caused by the perspective effect in the two-dimensional image is effectively avoided, and the accuracy of event judgment is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of one embodiment of a video surveillance method of the present invention.
FIG. 2 is a flow chart of another embodiment of a video surveillance method of the present invention.
FIG. 3 is a flow chart of one embodiment of determining a three-dimensional coordinate portion of a target of the present invention.
FIG. 4 is a schematic diagram of one embodiment of a video surveillance apparatus of the present invention.
Fig. 5 is a schematic diagram of another embodiment of the video surveillance apparatus of the present invention.
FIG. 6 is a schematic diagram of a video surveillance apparatus according to another embodiment of the invention.
Detailed Description
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
A flow diagram of one embodiment of a video surveillance method of the present invention is shown in fig. 1.
In step 101, the image pickup apparatus acquires a video image of a monitored area.
In step 102, an object to be monitored in the video image is determined, and plane coordinate information of the object is acquired. The target can be obtained by comparing the current image with the background image, or can be obtained by comparing the previous and next frames of images. The target may be a moving object, a stationary object located in a monitoring area, or a pixel point or a point group that changes in a planar video image.
In step 103, 3D reconstruction is performed through a 3D reconstruction algorithm according to the plane coordinate information, so as to obtain three-dimensional coordinate information of the target.
In step 104, the position relationship between the target and the virtual door is determined according to the three-dimensional coordinate information of the target and the virtual door, thereby extracting the occurrence of the event.
By the method, the three-dimensional coordinate information of the target is acquired according to the single-plane video image, and the position relation between the virtual door and the target is judged based on the three-dimensional coordinate information of the target, so that the occurrence of an event is extracted, the event misjudgment caused by the perspective effect in the two-dimensional image is effectively avoided, and the accuracy of the event judgment is improved.
In one embodiment, the three-dimensional coordinate information of the target acquired in step 103 is horizontal coordinate information of the target in three-dimensional coordinates, i.e. XY-axis coordinate information; the three-dimensional coordinate information of the virtual door is horizontal coordinate information under three-dimensional coordinates, namely XY axis coordinate information. Based on the horizontal coordinate information under the three-dimensional coordinates, the position relation between the XY-axis coordinates of the target on the same horizontal plane and the XY-axis coordinates of the virtual door can be judged, and therefore the occurrence of the event can be extracted more accurately.
In one embodiment, a flow diagram of another embodiment of a video surveillance method of the present invention is shown in FIG. 2.
In step 201, the image pickup apparatus acquires a video image of a monitored area.
In step 202, consecutive frames of the plane video image are compared, or the plane video image is compared with the background image, to obtain the change points or point groups in the plane video image.
In step 203, a point or a point group is extracted as a target from the change points or the point group. A predetermined extraction strategy may be set, such as extracting a change point or a point group of continuous multi-frame changes as a target, extracting a change point or a point group of which the change amount exceeds a threshold as a target, or extracting a change point group of which the area exceeds a certain size as a target, or the like.
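As an illustration of steps 202-203, the following Python sketch (using OpenCV; the thresholds, function name and area-based strategy are assumptions for illustration) differences the current frame against a background image and keeps sufficiently large change-point groups as targets:

```python
import cv2
import numpy as np

def extract_targets(cur_frame, background, diff_thresh=25, min_area=200):
    """Difference the current frame against the background, threshold the
    change points, and keep only connected point groups large enough to be
    treated as targets (an area-based extraction strategy)."""
    gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    bg = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, bg)                       # change points
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    targets = []
    for i in range(1, n):                              # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:     # area exceeds the threshold
            targets.append(tuple(centroids[i]))        # plane coordinates (u, v)
    return targets
```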
In step 204, plane coordinate information of the object is acquired through the plane video image.
In step 205, 3D reconstruction is performed through a 3D reconstruction algorithm according to the plane coordinate information, so as to obtain three-dimensional coordinate information of the target.
In step 206, the position relationship between the target and the virtual door is determined based on the three-dimensional coordinate information of the target and the virtual door, thereby extracting the occurrence of the event.
By the method, the continuously changing points or point groups in the continuous frames can be obtained according to the change of the plane video image pixel points, and the continuously changing points or point groups are used as targets for detection, so that the possibility of missing judgment is reduced, and tighter monitoring is realized.
In one embodiment, the flat video image may be acquired by a 2D camera, and a single 2D camera suffices: 3D reconstruction is performed from the plane video image acquired by that one camera, and the three-dimensional coordinate information of the target is obtained. This approach is low-cost, since commonly used 2D camera monitoring equipment does not need to be modified, reducing the cost of upgrading existing systems.
In one embodiment, a flat video image of the same video image area may be acquired by multiple 2D cameras. The multiple 2D cameras can respectively extract events, judgment is carried out according to the event extraction results of the multiple 2D cameras, and whether the target passes through the virtual door or not can be determined by comparing and judging the weight of the target passing through the virtual door and the weight of the target not passing through the virtual door in a mode of setting weights for different cameras. By the method, misjudgment caused by angles, distances and the like can be avoided, and event judgment is more accurate.
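A minimal sketch of such weighted fusion, assuming per-camera boolean event results and configured weights (all names are illustrative):

```python
def fuse_camera_votes(votes, weights):
    """votes[i] is True if camera i extracted a 'passed through the virtual
    door' event; weights[i] is that camera's configured weight. The event is
    confirmed when the total weight of cameras reporting it exceeds the
    total weight of those that do not."""
    passed = sum(w for v, w in zip(votes, weights) if v)
    not_passed = sum(w for v, w in zip(votes, weights) if not v)
    return passed > not_passed

# e.g. three cameras, the best-placed one trusted most:
# fuse_camera_votes([True, False, True], [0.5, 0.2, 0.3]) -> True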
FIG. 3 is a flow diagram of one embodiment of determining three-dimensional coordinates of an object.
In step 301, a vanishing line equation is obtained using the height information of an object at three different positions on the ground plane in the image. (The vanishing line is a basic concept of projective geometry: the straight line formed on the image, after projection, by the intersection points of families of parallel lines in the real world.) The height of the object in space must be known, the three positions must not be collinear, and the height information can be expressed in pixels.
In step 302, the length information of a straight line on the ground plane is used to solve for the camera's rotation angle about the X-axis and its rotation angle about the Y-axis. The length of the line in space must be known, and its length in the image can be expressed in pixels.
In step 303, the projection matrix between 2D and 3D coordinates is obtained using the vanishing line equation and the camera's rotation angles.
In step 304, three-dimensional coordinates are obtained based on the projection matrix in step 303 and the planar coordinates.
By the method, the three-dimensional coordinates can be obtained according to the plane coordinates of the target and the virtual door, so that the event extraction can be carried out according to the three-dimensional coordinates.
A specific implementation method for obtaining the correspondence between the 2D flat video image and the three-dimensional coordinates is given below.
First, the plane of interest for video monitoring is calibrated; after calibration, the Euclidean distance in the real-world coordinate system corresponding to any two points on the plane can be obtained.
The correspondence between the two-dimensional image and the three-dimensional object can be expressed as the following formula:

$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = P \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{1}$$
where λ denotes the distortion coefficient of the camera; since the distortion of a typical camera is small, λ is taken as 1. The key point of plane calibration is therefore to obtain the projection matrix P, which can be derived from α (the rotation angle of the camera about the X-axis, i.e. the tilt angle), β (the rotation angle about the Y-axis, i.e. the pan angle) and the vanishing line equation. Details of the derivation can be found in Fengjun Lv, Tao Zhao, Ram Nevatia, "Self-calibration of a camera from video of a walking human", ICPR 2002.
Then the vanishing line equation is obtained from the height information (in pixels) of an object of known height in space at three different, non-collinear positions on the ground plane in the image, and the length information (in pixels) of a straight line of known length in space on the ground plane is used to obtain α and β, from which the projection matrix P is derived to calibrate the plane.
A. The user selects a ground plane in the input video image and arbitrarily specifies two points on it, whose pixel positions are denoted $(u_1, v_1)$ and $(u_2, v_2)$, and gives the Euclidean distance d between the coordinates of the two points in the real-world coordinate system.
B. First, α and β are discretized between 0 and 360 degrees (e.g. α = 1°, 2°, …, 360° and β = 1°, 2°, …, 360°), giving a set of candidate combinations $(\alpha_i, \beta_i)$. For each candidate combination, a mapping matrix $P_i$ is constructed, the pixel positions $(u_1, v_1)$ and $(u_2, v_2)$ obtained in step A are converted through $P_i$ to their corresponding three-dimensional real-world coordinates, and the Euclidean distance $d_i$ between them is calculated. The combination $(\alpha_i, \beta_i)$ whose $d_i$ has the smallest error with respect to d is taken as the camera parameters.
Slightly rearranging equation (1) gives equation (2):

$$\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \lambda P^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \tag{2}$$
where $P^{-1}$ denotes the inverse of the matrix P. P is a 3×4 matrix and strictly has no inverse, but since the calibrated points lie on the ground plane in the real world, i.e. their Z coordinate is 0, P degenerates to a 3×3 matrix, which can be inverted.
Substituting $(u_1, v_1)$ and $(u_2, v_2)$ into the above formula yields the world coordinates $(X_1, Y_1, Z_1)$ and $(X_2, Y_2, Z_2)$ of the two points, whose Euclidean distance is then calculated:

$$d_i = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2 + (Z_1 - Z_2)^2}$$
The error $\Delta(\alpha_i, \beta_i)$ with respect to d can be defined in several ways; two of the more common forms are

$$\Delta(\alpha_i, \beta_i) = \left| d_i - d \right| \qquad \text{or} \qquad \Delta(\alpha_i, \beta_i) = \left| (X_1 - X_2)^2 + (Y_1 - Y_2)^2 + (Z_1 - Z_2)^2 - d^2 \right|$$
Over all possible values of α and β, the parameter pair $(\alpha^*, \beta^*)$ with the smallest error is selected as the optimum:

$$(\alpha^*, \beta^*) = \arg\min_{(\alpha_i, \beta_i)} \Delta(\alpha_i, \beta_i)$$
C. Calculate the vanishing line equation. Any prior-art method for obtaining the vanishing line is applicable to the present invention; one calculation method is given in Antonio Criminisi, "Single-View Metrology: Algorithms and Applications", Proceedings of the 24th DAGM Symposium on Pattern Recognition.
D. Calculate the projection matrix P from two-dimensional coordinates to three-dimensional coordinates.
After acquiring the camera parameters α and β, the projection matrix may be obtained by:
$$P = K[R \mid t] \tag{4}$$

where P is a 3×4 mapping matrix and K is the 3×3 intrinsic parameter matrix:

$$K = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$
$(u_0, v_0)$ denotes the principal point of the video image, typically taken as the center point of the video image, and f represents the focal length of the camera. R is a 3×3 rotation matrix, given by equation (5), where α denotes the rotation angle of the camera about the X-axis (tilt angle), β the rotation angle about the Y-axis (pan angle), and γ the rotation angle about the Z-axis (yaw angle).
$$R = R_x(\alpha)\, R_y(\beta)\, R_z(\gamma) \tag{5}$$

γ approximates the tilt of the vanishing line with respect to the horizontal. t is a 3×1 matrix, which can be expressed as $t = R\,[0, H_c, 0]^T$, where $H_c$ denotes the height of the camera above the ground and T denotes the transpose of $[0, H_c, 0]$.
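A sketch of assembling P = K[R|t] from these quantities (Python/NumPy; the elementary-rotation factorization of R is an assumption consistent with the text's description of equation (5), since the patent gives R only as an equation image):

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])

def rot_y(b):
    return np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])

def rot_z(g):
    return np.array([[np.cos(g), -np.sin(g), 0], [np.sin(g), np.cos(g), 0], [0, 0, 1]])

def projection_matrix(f, u0, v0, alpha, beta, gamma, Hc):
    """Assemble P = K [R | t] per equation (4); Hc is the camera height above ground."""
    K = np.array([[f, 0, u0],
                  [0, f, v0],
                  [0, 0, 1.0]])                      # 3x3 intrinsic matrix
    R = rot_x(alpha) @ rot_y(beta) @ rot_z(gamma)    # 3x3 rotation (assumed order)
    t = R @ np.array([0.0, Hc, 0.0])                 # t = R [0, Hc, 0]^T
    return K @ np.hstack([R, t.reshape(3, 1)])       # 3x4 mapping matrix
```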
E. Since the virtual door and the target are on the same horizontal plane, i.e. have the same Z coordinate, the Z-axis calculation is omitted and only the XY coordinates of the target in three dimensions need to be obtained. Any coordinate point on the ground plane in the image is substituted into formula (6), where $P^{-1}$ denotes the inverse of the 3×3 mapping matrix after the degeneration process, to obtain the corresponding XY coordinates in three dimensions:

$$\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \lambda P^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \tag{6}$$
With the above method, the 2D-to-3D conversion formula (6) is obtained; substituting the plane coordinates into u and v of formula (6) yields the XY coordinates in three-dimensional coordinates corresponding to those plane coordinates.
In summary, after the camera is calibrated, the distortion coefficient λ and the projection matrix P are fixed, and the plane coordinates (u, v) of the target are substituted into the above formula (6), so that the horizontal coordinates (X, Y) of the target in the three-dimensional coordinates can be obtained, and the three-dimensional coordinate information of the target can be obtained.
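Putting formula (6) into code, a minimal conversion from plane coordinates to horizontal coordinates might look like this (a sketch: the ground-plane degeneration drops P's Z column, and λ is taken as 1 as assumed in the text):

```python
import numpy as np

def plane_to_horizontal(u, v, P, lam=1.0):
    """Map pixel (u, v) to horizontal coordinates (X, Y) via formula (6).
    With the ground-plane constraint Z = 0, the 3x4 matrix P degenerates to
    an invertible 3x3 matrix; lam is the distortion coefficient."""
    P3 = P[:, [0, 1, 3]]                              # degenerate 3x3 matrix (Z column dropped)
    w = np.linalg.inv(P3) @ (lam * np.array([u, v, 1.0]))
    return w[0] / w[2], w[1] / w[2]                   # homogeneous division -> (X, Y)
```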
Before image calibration, preprocessing such as noise reduction filtering, image enhancement and/or electronic image stabilization can be performed on the image, so that the detection accuracy is improved.
By the method, the plane coordinate of the target can be converted into the three-dimensional coordinate under the condition that the image acquired by only one 2D camera is a plane video image, so that the purpose of extracting the event based on the three-dimensional coordinate of the object is realized with less cost and equipment.
In one embodiment, video surveillance may be performed on multiple targets simultaneously, thereby reducing missed extraction of events.
The virtual door is a door area vertical to the ground, and the intersection line of the virtual door and the ground can be a straight line, a line segment or a broken line. By the method, the boundary of the area to be monitored and protected can be defined as much as possible, monitoring is carried out from the ground to the space, and the comprehensiveness and the accuracy of event extraction are improved.
The virtual door extends upward from this straight line, line segment or broken line, and its height may be infinite or a predetermined height. The virtual door can be set by specifying its intersection line with the ground; it can also be set directly by defining a convex polygon that is perpendicular to the ground and whose lower boundary is the intersection line of the virtual door and the ground; the distance between the virtual door and the camera can also be set; or the intersection line of the virtual door's extension surface with the ground is set first and the door area afterwards, with the upper and lower boundaries of the virtual door specified by the user on the image, or given as a height. In these ways the virtual door can be set freely according to the monitoring requirements, making the method more flexible and the video monitoring area more targeted.
By the method, the three-dimensional coordinate information of the target can be flexibly acquired, the occurrence of the event can be extracted according to the positions of the three-dimensional coordinate information of the virtual door and the three-dimensional coordinate information of the virtual door, and the false extraction of the event caused by the perspective phenomenon of the camera equipment can be prevented.
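For illustration, a virtual door configured in one of these ways might be represented by a record like the following (a sketch; the field names are assumptions, as the patent prescribes no data structure):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class VirtualDoor:
    """The door stands perpendicular to the ground: its footprint is the
    polyline where it meets the ground, and it extends upward either to a
    preset height or without bound."""
    footprint: List[Tuple[float, float]]   # (X, Y) vertices of the ground intersection line
    height: Optional[float] = None         # None means infinitely tall
```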
In one embodiment, since the target may be in motion, the occurrence of an event can be extracted from the target's motion trajectory. A moving target is extracted by comparing successive frames of the video image, the target's position in each frame is recorded, and its motion trajectory is thus acquired.
The occurrence of an event is then extracted from the motion trajectory of the target and the three-dimensional coordinate information of the virtual door. The extracted events may include: passing through the virtual door from outside to inside, passing through it from inside to outside, moving from outside to inside without passing through the virtual door, and moving from inside to outside without passing through the virtual door.
A specific implementation of extracting the occurrence of an event is given below.
A. Acquire the three-dimensional coordinate information of the target and the virtual door. Determine a reference line: here, the line through the lowest center point of the image and perpendicular to the image's lower boundary is selected, with the lowest center point serving as the reference point.
B. Calculate the included angles between the reference line and the lines connecting each endpoint of the segments set for the virtual door in the current frame to the reference point, denoted $\theta_1, \theta_2, \ldots, \theta_m$, where m is the number of endpoints. Calculate the included angle α between the line from the target's coordinate point in the current frame to the reference point and the reference line. Sort $\theta_1, \theta_2, \ldots, \theta_m$ together with α by value; take the smallest θ greater than α as $T_1$ and the largest θ less than α as $T_2$. Record the converted three-dimensional coordinates $(x_1, y_1)$ and $(x_2, y_2)$ of the segment endpoints corresponding to $T_1$ and $T_2$, the converted three-dimensional coordinates (x, y) of the moving target at this moment, and the converted three-dimensional coordinates (X, Y) of the reference point.
C. Likewise calculate the included angles between the reference line and the lines connecting each endpoint of the segments set for the virtual door in the previous frame to the reference point, denoted $\theta_1', \theta_2', \ldots, \theta_m'$, where m is the number of endpoints, and the included angle α' between the line from the target's coordinate point in the previous frame to the reference point and the reference line. Sort $\theta_1', \ldots, \theta_m'$ together with α' by value; take the smallest θ' greater than α' as $T_1'$ and the largest θ' less than α' as $T_2'$. Record the converted three-dimensional coordinates $(x_1', y_1')$ and $(x_2', y_2')$ of the segment endpoints corresponding to $T_1'$ and $T_2'$, and the converted three-dimensional coordinates (x', y') of the moving target at that moment.
D. Calculate the distances $d_1$ and $d_2$ between the converted endpoint coordinates $(x_1, y_1)$ and $(x_2, y_2)$ corresponding to $T_1$ and $T_2$ and the converted reference point coordinates (X, Y), and the distance d between the converted coordinates (x, y) of the moving target and the converted reference point coordinates (X, Y):

$$d = \sqrt{(X - x)^2 + (Y - y)^2} \tag{10}$$

Determine the magnitude relationship of d to $d_1$ and $d_2$; three results are possible: d is larger than both $d_1$ and $d_2$, d is smaller than both, or d lies between $d_1$ and $d_2$, denoted 1.1, 1.2 and 1.3 respectively.
E. Likewise calculate the distances $d_1'$ and $d_2'$ between the converted endpoint coordinates $(x_1', y_1')$ and $(x_2', y_2')$ corresponding to $T_1'$ and $T_2'$ and the converted reference point coordinates (X, Y), and the distance d' between the converted coordinates (x', y') of the moving target and the reference point. Determine the magnitude relationship of d' to $d_1'$ and $d_2'$; three results are possible: d' is larger than both, smaller than both, or between $d_1'$ and $d_2'$, denoted 2.1, 2.2 and 2.3 respectively.
F. Judge the movement direction from the combination of the results.
Results 1.1 and 2.1: the target's distance to the reference point is larger than the distances of the virtual door's segment endpoints to the reference point in both frames; the target has not passed through the virtual door.
Results 1.1 and 2.2: the target's distance to the reference point goes from smaller than both endpoint distances to larger than both; the target has passed through the virtual door, moving away from the reference point.
Results 1.1 and 2.3: the target's distance goes from between the endpoint distances to larger than both; the target has left the virtual door region, moving away from the reference point.
Results 1.2 and 2.1: the target's distance goes from larger than both endpoint distances to smaller than both; the target has passed through the virtual door, moving toward the reference point.
Results 1.2 and 2.2: the target's distance to the reference point is smaller than the endpoint distances in both frames; the target has not passed through the virtual door.
Results 1.2 and 2.3: the target's distance goes from between the endpoint distances to smaller than both; the target has left the virtual door region, moving toward the reference point.
Results 1.3 and 2.1: the target's distance goes from larger than both endpoint distances to between them; the target has entered the virtual door region from the far side.
Results 1.3 and 2.2: the target's distance goes from smaller than both endpoint distances to between them; the target has entered the virtual door region from the near side.
Results 1.3 and 2.3: the target's distance remains between the endpoint distances in both frames; the target has not passed through the virtual door and no alarm is given.
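Steps D-F can be condensed into a small decision routine (a sketch with illustrative names; the direction labels follow the combinations above):

```python
def classify_transition(d, d1, d2, dp, dp1, dp2):
    """d, d1, d2 are the current-frame distances of the target and of the two
    bracketing door endpoints to the reference point; dp, dp1, dp2 are the
    previous-frame values. Returns a coarse reading of the nine combinations."""
    def case(dist, e1, e2):
        lo, hi = min(e1, e2), max(e1, e2)
        if dist > hi:
            return 1   # farther from the reference point than both endpoints
        if dist < lo:
            return 2   # nearer than both endpoints
        return 3       # between the two endpoint distances
    cur, prev = case(d, d1, d2), case(dp, dp1, dp2)
    if cur == prev:
        return "no crossing"                      # combinations 1.1/2.1, 1.2/2.2, 1.3/2.3
    if cur == 1:
        return "crossed, moving away from the reference point"
    if cur == 2:
        return "crossed, moving toward the reference point"
    return "reached the virtual door region"      # cur == 3, prev != 3
```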
By the method, the occurrence of the event can be extracted according to the motion state of the target, the motion direction of the target is judged, whether the target passes through the virtual door is judged, and the accurate and detailed event extraction effect is achieved.
In one embodiment, an alarm function is also included. Events requiring an alarm are predetermined, such as being located within the virtual door, being in the area of the virtual door, passing through the virtual door from the outside inward, and/or passing through it from the inside outward. When such events occur, an alarm is triggered. In this way, clear alarm information can be provided to the user, helping the user handle events in time.
In one embodiment, target type analysis is also included. The target types include human, animal and/or car. And acquiring the type of the target through image matching, thereby enriching the event extraction information. By the method, the target type needing alarming can be selected, and the workload of a user is reduced.
In one embodiment, the number of frames of event extraction is counted, and the event is determined to occur when the extraction of the event reaches a predetermined number of frames. By the method, misjudgment can be prevented, and part of false alarms can be filtered.
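A minimal sketch of this frame-count filter (the threshold value is an assumed example):

```python
class EventDebouncer:
    """Report an event only after it has been extracted in more than
    alarm_frames consecutive frames, filtering out transient false alarms."""
    def __init__(self, alarm_frames=5):
        self.alarm_frames = alarm_frames
        self.count = 0

    def update(self, event_extracted):
        """Feed one frame's extraction result; returns True once the event
        has persisted long enough to be judged as having occurred."""
        self.count = self.count + 1 if event_extracted else 0
        return self.count > self.alarm_frames
```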
A schematic diagram of one embodiment of a video surveillance apparatus of the invention is shown in fig. 4. 41 is a video acquisition module, which is used for acquiring a video image of a monitored area, and the video acquisition module can be a camera; the video acquisition module 41 sends the acquired video image to the target acquisition module 42, and the target acquisition module 42 acquires the plane coordinate information of a target according to the video image, where the target may be obtained by comparing with a set background image or by comparing between previous and next frames, and the target may be a moving object, a static object located in a monitoring area, or a pixel point or a point group that changes in the plane video image. The object acquisition module 42 acquires planar coordinate information of the object, i.e., coordinate information in a planar image. The target obtaining module 42 sends the obtained plane coordinate information of the target to the three-dimensional coordinate determining module 43 to determine the three-dimensional coordinate information of the target, and the three-dimensional coordinate determining module 43 obtains the three-dimensional coordinate information of the target through a 3D reconstruction algorithm and sends the three-dimensional coordinate information to the event extracting module 44; the event extraction module 44 determines the position relationship between the target and the virtual door according to the three-dimensional coordinate information of the target and the three-dimensional coordinate information of the virtual door, and extracts the occurrence of the event.
By the device, the three-dimensional coordinate information of the target is acquired according to the video image, and the position relation between the virtual door and the target is judged based on the three-dimensional coordinate information of the target, so that an event is extracted, the event misjudgment caused by the perspective effect in the two-dimensional image is effectively avoided, and the accuracy of event judgment is improved.
In one embodiment, the three-dimensional coordinate information of the target is horizontal coordinate information of the target in three-dimensional coordinates, namely XY-axis coordinate information; the three-dimensional coordinate information of the virtual door is horizontal coordinate information under three-dimensional coordinates, namely XY axis coordinate information. Based on the horizontal coordinate information under the three-dimensional coordinates, the position relation between the XY-axis coordinate information of the target on the same horizontal plane and the XY-axis coordinate information of the virtual door can be judged, so that the occurrence of an event can be more accurately extracted.
Fig. 5 is a schematic diagram of another embodiment of the video surveillance apparatus of the present invention. 51, 52, 53, and 54 are a video capture module, a target acquisition module, a three-dimensional coordinate determination module, and an event extraction module, respectively, whose working process is the same as in the embodiment of fig. 4. The target acquisition module 52 includes a frame comparison unit 521, a target determination unit 522, and a plane coordinate acquisition unit 523. The frame comparison unit 521 compares consecutive frames of the plane video image, or compares the plane video image of the current frame with the background image, to obtain the change points or point groups in the plane video image. The target determination unit 522 extracts a point or point group from the change points or point groups as a target; a predetermined extraction strategy may be set, such as extracting change points or point groups that change over consecutive frames, or extracting change point groups whose area exceeds a certain size. The plane coordinate acquisition unit 523 acquires the plane coordinate information of the target from the plane image.
The device can acquire continuously changing points or point groups in continuous frame plane video images according to the change of the plane video image pixel points, and detect the continuously changing points or point groups as targets, thereby reducing the possibility of missing judgment and realizing tighter monitoring.
In one embodiment, since the target may be in motion, events may be extracted from the target's motion trajectory. As shown in fig. 5, the video capture module 51 sends the captured video image to the target acquisition module 52. The target acquisition module 52 further includes a motion trajectory determination unit, which determines the motion trajectory of a target in the video image from the frame comparison unit 521's comparison of multiple frames; the trajectory may be recorded from the target's position in each frame. The target acquisition module 52 sends the acquired trajectory information to the three-dimensional coordinate determination module 53, which determines the three-dimensional coordinate information of the target's motion trajectory and sends it to the event extraction module 54. The event extraction module 54 determines the positional relationship between the target and the virtual door from the three-dimensional coordinates of the trajectory and of the virtual door, and extracts the occurrence of the event.
Events extracted by such means may include: the event extraction method comprises the steps of passing through the virtual door from outside to inside, passing through the virtual door from inside to outside, moving from outside to inside without passing through the virtual door, and moving from inside to outside without passing through the virtual door, so that the types of events can be enriched, the occurrence process of the events is extracted, and a more detailed event extraction effect is achieved.
In one embodiment, the video capture module comprises a 2D camera. And 3D reconstruction is carried out according to the plane video image acquired by the single 2D camera, and the three-dimensional coordinate information of the target is acquired. The method has low cost, does not need to modify hardware, and reduces the cost of upgrading the existing system.
In one embodiment, the video capture module may include a plurality of 2D cameras. The multiple 2D cameras can respectively extract events, judgment is carried out according to the event extraction results of the multiple 2D cameras, and whether the target passes through the virtual door or not can be determined by comparing and judging the weight of the target passing through the virtual door and the weight of the target not passing through the virtual door in a mode of setting weights for different cameras. By the method, misjudgment caused by angles, distances and the like can be avoided, and event judgment is more accurate.
In one embodiment, the virtual door is a door area perpendicular to the ground, and the intersection line of the virtual door and the ground can be a straight line, a line segment or a broken line. By the method, the boundary of the area to be monitored and protected can be defined as much as possible, monitoring is carried out from the ground to the space, and the comprehensiveness and the accuracy of event extraction are improved.
The virtual door extends upward from the straight line, line segment or broken line, and its height may be infinite or preset. The virtual door can be set by specifying its intersection line with the ground; or directly by defining a convex polygon perpendicular to the ground whose lower boundary is the intersection line of the virtual door and the ground; the distance between the virtual door and the camera can also be set; or the intersection line of the virtual door's extension surface with the ground is set first and the door area afterwards, with the upper and lower boundaries specified by the user on the image, or given as a height. The setting of the virtual door may also be performed through the interface of an external program. In these ways the virtual door can be set freely according to the monitoring requirements, making the method more flexible and the video monitoring area more targeted.
FIG. 6 is a schematic diagram of a video surveillance apparatus according to another embodiment of the invention. Wherein 61, 62, 63, and 64 are a video capture module, a target acquisition module, a three-dimensional coordinate determination module, and an event extraction module, respectively, and 621, 622, and 623 are a frame comparison unit, a target determination unit, and a planar coordinate acquisition unit in the target acquisition module 62, respectively, and their working processes are similar to those in the embodiment of fig. 5. The video surveillance apparatus also includes a target type analysis module 65. The target type analyzing module 65 matches the obtained target type, which may include people, animals and/or vehicles, according to the target information obtained by the target determining unit 622. And acquiring the type of the target through image matching, thereby enriching the event extraction information. By the aid of the device, the target type needing alarming can be selected, and workload of a user is reduced.
In one embodiment, an alarm module 66 may also be included. When event extraction module 64 extracts a predetermined event requiring an alarm, an alarm signal and/or event information is sent to alarm module 66 to trigger the alarm. By the aid of the device, alarm information can be clearly provided for a user, and the user is helped to timely process an occurred event. The predetermined events requiring an alarm may include being located within a virtual door, being in the area of the virtual door, traversing the virtual door from the outside inward and/or traversing the virtual door from the inside outward, etc. In one embodiment, the alert module 66 may also obtain the object type information determined by the object type analysis module 65 to provide the object type information to the user while alerting.
In one embodiment, the event extraction module counts the number of extracted events occurring, and determines that an event occurs when the extraction of the event reaches a predetermined number of frames. By the method, misjudgment can be prevented, and part of false alarms can be filtered.
Finally, it should be noted that the above examples are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that modifications may be made to specific embodiments of the invention, or equivalent substitutions made for some of its technical features, without departing from the spirit of the present invention; all such changes are intended to fall within the scope of the appended claims.

Claims (20)

1. A video surveillance method characterized by:
acquiring a plane video image;
determining plane coordinate information of a target in the plane video image according to the plane video image;
3D reconstruction is carried out through a 3D reconstruction algorithm according to the plane coordinate information to obtain three-dimensional coordinate information of the target;
extracting event occurrence based on the change in the positional relationship between the target and the virtual door, comprising: acquiring the straight line that passes through the lowest center point of the video image and is perpendicular to the lower boundary of the video image, and taking this straight line as the reference straight line;
acquiring the included angle α between the connecting line of the target and the reference point and the reference straight line, and the distance d between the target and the reference point;
obtaining the included angle between the reference straight line and the connecting line from each endpoint of the intersection line of the virtual door and the ground to the reference point; determining the distance $d_1$ between the reference point and the endpoint whose connecting line makes the smallest such angle greater than α, and the distance $d_2$ between the reference point and the endpoint whose connecting line makes the largest such angle smaller than α;
determining the magnitude relationship of d to $d_1$ and $d_2$;
determining the change in the positional relationship between the target and the virtual door according to the change in the relationship of d to $d_1$ and $d_2$ from the previous frame to the current frame;
the virtual door comprises three-dimensional coordinate information, the virtual door is a door area vertical to the ground, and the intersection line of the virtual door and the ground is a straight line, a line segment or a broken line.
2. The method of claim 1, wherein determining planar coordinate information of an object in the planar video image from the planar video image comprises:
comparing the continuous frames of the plane video image or comparing the plane video image with a background image to obtain a change point or a point group in the plane video image;
extracting a point or a point group from the change point or the point group as a target;
and determining the plane coordinate information of the target according to the plane video image.
3. The method of claim 1, wherein the device that acquires the plane video image comprises one or more 2D cameras.
4. The method of claim 1,
and 3D reconstruction is carried out through a 3D reconstruction algorithm according to the plane coordinate information, and the three-dimensional coordinate information of the target is obtained by: 3D reconstruction is carried out through a 3D reconstruction algorithm according to the plane coordinate information to obtain horizontal coordinate information of the target under the three-dimensional coordinate;
the extracting of the event occurrence based on the positional relationship between the target and the virtual door, wherein the virtual door includes three-dimensional coordinate information: extracting an event occurrence based on a positional relationship of the target and a virtual door, wherein the virtual door includes horizontal coordinate information in three-dimensional coordinates.
5. The method according to claim 1, wherein the 3D reconstruction is performed by a 3D reconstruction algorithm according to the plane coordinate information, and the three-dimensional coordinate information of the target is obtained by converting the plane coordinate information of the target into horizontal coordinate information in three-dimensional coordinates according to the formula

$$\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \lambda P^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

wherein u and v are the plane coordinate information of the target, X and Y are the horizontal coordinate information of the target in three-dimensional coordinates, P is the projection matrix, and λ is the distortion coefficient.
6. The method of claim 1, further comprising:
determining a motion track of a target in the plane video image according to the plurality of frames of the plane video image;
determining three-dimensional coordinate information of the motion trajectory of the target;
extracting an event occurrence based on the motion trajectory of the target and a positional relationship of the virtual door.
7. The method of claim 1, wherein the event comprises being inside the virtual door, being outside the virtual door, being in the area of the virtual door, passing through the virtual door from the outside inward, passing through the virtual door from the inside outward, moving from the outside inward without passing through the virtual door, and/or moving from the inside outward without passing through the virtual door.
8. The method of claim 1, further comprising determining a type of the object, the type of object comprising a human, an animal, and/or a car.
9. The method according to claim 1, further comprising sending alarm information if a predetermined event is extracted, wherein the alarm information comprises intrusion position information and/or intrusion direction information.
10. The method of claim 1, wherein extracting the event occurrence based on the positional relationship between the target and the virtual door comprises counting a number of consecutive frames of the event, and determining the event occurrence when the number of frames is greater than a predetermined alarm number of frames.
11. A video monitoring apparatus, comprising:
the video acquisition module is used for acquiring a plane video image;
the target acquisition module is used for determining plane coordinate information of a target in the plane video image according to the plane video image;
the three-dimensional coordinate determination module is used for performing 3D reconstruction through a 3D reconstruction algorithm according to the plane coordinate information to obtain the three-dimensional coordinate information of the target;
the event extraction module is used for extracting an event occurrence based on the change in the positional relationship between the target and the virtual door, by:
acquiring the straight line that passes through the lowest point of the center of the video image and is perpendicular to the lower boundary of the video image, taking this straight line as a reference line and the lowest point as a reference point;
acquiring an included angle α between the line connecting the target to the reference point and the reference line, and a distance d between the target and the reference point;
obtaining, for each end point of the intersection line between the virtual door and the ground, the included angle between the line connecting that end point to the reference point and the reference line; determining a distance d1 between the reference point and the end point whose included angle is the smallest of those larger than α, and a distance d2 between the reference point and the end point whose included angle is the largest of those smaller than α;
determining the magnitude relationship among d, d1 and d2;
determining the change in the positional relationship between the target and the virtual door according to the change in this magnitude relationship from the previous frame to the current frame;
wherein the virtual door includes three-dimensional coordinate information, the virtual door is a door area perpendicular to the ground, and the intersection line of the virtual door and the ground is a straight line, a line segment, or a polyline.
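As an illustrative sketch only (not the patented procedure), the geometry above can be written as follows, under these assumptions: the reference point is the lowest point on the image's vertical center line, angles are signed angles from the vertical measured with `atan2`, and the door's ground intersection is given by its end points:

```python
import math

def angle_and_distance(point, ref_point):
    """Signed angle between the line ref_point->point and the vertical
    reference line, plus the Euclidean distance between the two points."""
    dx = point[0] - ref_point[0]
    dy = ref_point[1] - point[1]  # image y grows downward; measure upward
    return math.atan2(dx, dy), math.hypot(dx, dy)

def door_bracket_distances(target, door_endpoints, ref_point):
    """Return (d, d1, d2): the target's distance d to the reference point,
    d1 for the door end point with the smallest angle larger than alpha,
    and d2 for the end point with the largest angle smaller than alpha.
    d1 or d2 is None when no end point falls on that side of alpha."""
    alpha, d = angle_and_distance(target, ref_point)
    larger, smaller = [], []
    for ep in door_endpoints:
        ang, dist = angle_and_distance(ep, ref_point)
        (larger if ang > alpha else smaller).append((ang, dist))
    d1 = min(larger)[1] if larger else None    # smallest angle above alpha
    d2 = max(smaller)[1] if smaller else None  # largest angle below alpha
    return d, d1, d2
```

Comparing d against d1 and d2 across consecutive frames then indicates on which side of the door's ground line the target lies and whether it has moved through it.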
12. The apparatus of claim 11, wherein the target acquisition module comprises:
the frame comparison unit is used for comparing consecutive frames of the plane video image, or comparing the plane video image with a background image, to obtain change points or point groups in the plane video image;
the target determination unit is used for extracting a point or point group from the change points or point groups as a target;
and the plane coordinate acquisition unit is used for determining the plane coordinate information of the target according to the plane video image.
13. The apparatus of claim 11, wherein the video capture module comprises more than one 2D camera.
14. The apparatus of claim 11, wherein
the three-dimensional coordinate determination module is further configured to perform 3D reconstruction through a 3D reconstruction algorithm according to the plane coordinate information to obtain horizontal coordinate information of the target in three-dimensional coordinates;
and the event extraction module is further configured to extract an event occurrence based on the positional relationship between the target and the virtual door, wherein the virtual door includes horizontal coordinate information in three-dimensional coordinates.
15. The apparatus of claim 11, wherein the three-dimensional coordinate determination module is configured to convert the plane coordinate information of the target into horizontal coordinate information in three-dimensional coordinates according to the formula

λ · (u, v, 1)^T = P · (X, Y, 1)^T

wherein u and v are the plane coordinate information of the target, X and Y are the horizontal coordinate information of the target in three-dimensional coordinates, P is a projection matrix, and λ is a distortion coefficient.
16. The apparatus of claim 12, wherein:
the target acquisition module further comprises a motion trajectory determining unit, which is used for determining the motion trajectory of the target in the video image according to a plurality of frames of the plane video image;
the three-dimensional coordinate determination module is further used for determining three-dimensional coordinate information of the motion trajectory of the target;
and the event extraction module is further used for extracting an event occurrence based on the positional relationship between the motion trajectory of the target and the virtual door.
17. The apparatus of claim 11, wherein the event comprises: being inside the virtual door, being outside the virtual door, being in the virtual door area, passing through the virtual door from outside to inside, passing through the virtual door from inside to outside, moving from outside to inside without passing through the virtual door, and/or moving from inside to outside without passing through the virtual door.
18. The apparatus of claim 11, further comprising a target type analysis module for analyzing the type of the target, the type comprising a human, an animal, and/or a car.
19. The apparatus of claim 11, further comprising an alarm module for sending alarm information when a predetermined event is extracted, wherein the alarm information includes intrusion position information and/or intrusion direction information.
20. The apparatus of claim 11, wherein the event extraction module is further configured to count the number of consecutive frames in which the event is detected, and determine that the event has occurred when the number of consecutive frames is greater than a predetermined alarm frame number.
CN201510336397.6A 2015-06-17 2015-06-17 Video monitoring method and device Active CN104954747B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201510336397.6A CN104954747B (en) 2015-06-17 2015-06-17 Video monitoring method and device
EP16810884.3A EP3311562A4 (en) 2015-06-17 2016-05-23 Methods and systems for video surveillance
US15/737,283 US10671857B2 (en) 2015-06-17 2016-05-23 Methods and systems for video surveillance
PCT/CN2016/082963 WO2016202143A1 (en) 2015-06-17 2016-05-23 Methods and systems for video surveillance
US16/888,861 US11367287B2 (en) 2015-06-17 2020-06-01 Methods and systems for video surveillance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510336397.6A CN104954747B (en) 2015-06-17 2015-06-17 Video monitoring method and device

Publications (2)

Publication Number Publication Date
CN104954747A CN104954747A (en) 2015-09-30
CN104954747B (en) 2020-07-07

Family

ID=54169046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510336397.6A Active CN104954747B (en) 2015-06-17 2015-06-17 Video monitoring method and device

Country Status (1)

Country Link
CN (1) CN104954747B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016202143A1 (en) * 2015-06-17 2016-12-22 Zhejiang Dahua Technology Co., Ltd Methods and systems for video surveillance
CN107396037B (en) * 2016-05-16 2020-04-03 杭州海康威视数字技术股份有限公司 Video monitoring method and device
CN106500714B (en) * 2016-09-22 2019-11-29 福建网龙计算机网络信息技术有限公司 A kind of robot navigation method and system based on video
CN108111802B (en) * 2016-11-23 2020-06-26 杭州海康威视数字技术股份有限公司 Video monitoring method and device
CN107424208B (en) * 2017-08-11 2018-07-20 江苏柚尊家居制造有限公司 A kind of baby bed and monitoring method of smart home
CN111951598B (en) * 2019-05-17 2022-04-26 杭州海康威视数字技术股份有限公司 Vehicle tracking monitoring method, device and system
CN110798618B (en) * 2019-10-30 2022-01-11 广州海格星航信息科技有限公司 Camera resource scheduling method and device in dynamic tracking
WO2022061631A1 (en) * 2020-09-24 2022-03-31 Intel Corporation Optical tracking for small objects in immersive video
CN112818748A (en) * 2020-12-31 2021-05-18 北京字节跳动网络技术有限公司 Method and device for determining plane in video, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101068344A (en) * 2006-03-17 2007-11-07 株式会社日立制作所 Object detection apparatus
CN101872524A (en) * 2009-08-14 2010-10-27 杭州海康威视数字技术股份有限公司 Video monitoring method, system and device based on virtual wall
CN103578133A (en) * 2012-08-03 2014-02-12 浙江大华技术股份有限公司 Method and device for reconstructing two-dimensional image information in three-dimensional mode
CN103716579A (en) * 2012-09-28 2014-04-09 中国科学院深圳先进技术研究院 Video monitoring method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8432448B2 (en) * 2006-08-10 2013-04-30 Northrop Grumman Systems Corporation Stereo camera intrusion detection system

Also Published As

Publication number Publication date
CN104954747A (en) 2015-09-30

Similar Documents

Publication Publication Date Title
CN104954747B (en) Video monitoring method and device
CN104902246B (en) Video monitoring method and device
US10452931B2 (en) Processing method for distinguishing a three dimensional object from a two dimensional object using a vehicular system
CN110660186B (en) Method and device for identifying target object in video image based on radar signal
CN104966062B (en) Video monitoring method and device
CN105141885B (en) Carry out the method and device of video monitoring
JP6055823B2 (en) Surveillance camera control device and video surveillance system
CN107657244B (en) Human body falling behavior detection system based on multiple cameras and detection method thereof
CN105631418B (en) People counting method and device
WO2014092552A2 (en) Method for non-static foreground feature extraction and classification
US10789495B2 (en) System and method for 1D root association providing sparsity guarantee in image data
WO2011102872A1 (en) Data mining method and system for estimating relative 3d velocity and acceleration projection functions based on 2d motions
CN109145696B (en) Old people falling detection method and system based on deep learning
JP5047382B2 (en) System and method for classifying moving objects during video surveillance
KR20190046351A (en) Method and Apparatus for Detecting Intruder
CN106600628A (en) Target object identification method and device based on infrared thermal imaging system
WO2022127181A1 (en) Passenger flow monitoring method and apparatus, and electronic device and storage medium
JP2012212236A (en) Left person detection device
JP4610005B2 (en) Intruding object detection apparatus, method and program by image processing
JP6799325B2 (en) Image correction device, image correction method, attention point recognition device, attention point recognition method and abnormality detection system
CN108111802B (en) Video monitoring method and device
KR101640527B1 (en) Method and Apparatus for Monitoring Video for Estimating Size of Single Object
JP4707019B2 (en) Video surveillance apparatus and method
CN110930432A (en) Video analysis method, device and system
KR102407202B1 (en) Apparatus and method for intelligently analyzing video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant